AI Information Vacuums: AI-generated search results expose organisations to new threats


AI is top of mind for most people at the moment, but what are AI information vacuums, and why might they pose a risk to organisations?

You would have to be living under a rock not to have noticed the changes happening to major search engines, which are replacing paid-for and natural search results with AI summaries. Most search engines now feature an AI-generated answer first, those answers are more prominent, and users are finding them more credible. For example, a recent YouGov study found that over 50% of users prefer AI-generated summaries to traditional search results. Click-through rates from AI search are also high, as the AI-generated summary links to its sources to demonstrate credibility.

This clearly has an impact on the revenue model of many search engines, but it also changes the legal and information security risks.

What is the risk?

Internet-related scams are about as old as the internet itself. There has long been a trend of trying to divert traffic away from a legitimate site to a scammer, whether through the registration of confusing domain names, typosquatting, metatag infringement or spiking paid-for search.

As users have migrated to different ways of searching, scammers have adapted. Most of these historic scams relied on the legitimate business not having the online presence it should have: as an information vacuum is created, the scammers step in and divert traffic for their own ends. The way in which AI-first search works replicates the climate in which those historic scams thrived.

AI-first search carries a number of risks. For example, if a threat actor can influence the AI summary, they can divert traffic to a scam site and conduct business trading on the real organisation’s reputation or brand. Other scams could include hiring scams and diverted investment opportunities. Fraudulent sites could also be used to reinforce phishing scams by lending them apparent credibility.

Part of the risk is the falling cost of AI, which not only makes its use more prevalent but also increases the likely return on investment for threat actors. According to Nina Schick, a million tokens of AI inference cost $60 three years ago; today they cost six cents. This allows threat actors to conduct attacks at scale, and to experiment and probe vulnerabilities more easily.

So far the best-known examples of exploiting AI vacuums have been for humour (or attempts at humour): for example, using Reddit posts to make GenAI suggest glue to stick cheese to a pizza, or fake recipes for PB&J sandwiches. But the potential for greater harm exists because of the way in which AI-first search works.

Why is this a particular issue?

Traditionally, some GenAI models were trained on ring-fenced data from providers like Common Crawl. As models have become more sophisticated, they are crawling wider data sets, including internet sites which allow AI crawling. The way in which GenAI works is often not very transparent: for example, the European Commission announced an investigation into Google on 9 December 2025 over concerns about the data Google was using to train its GenAI models.

In an effort to protect their intellectual property, some organisations may be making the problem worse. Reputable AI and search engine crawlers (like OpenAI’s GPTBot, Google’s Google-Extended, and Anthropic’s ClaudeBot) generally respect technical measures put in place on websites to restrict or prohibit crawling. These can include a robots.txt file: a plain text file in the root directory of a website that tells bots which parts of the site they may crawl and which are disallowed. But if reputable crawlers are denied access to the organisation’s legitimate content, information vacuums can be created.
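As an illustration, a robots.txt file that blocks common AI crawlers while leaving other bots free to crawl might look like this (the crawler names reflect the real user agents mentioned above, but the overall policy is a hypothetical example, not a recommendation):

```
# Block common AI crawlers from the whole site
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /

# All other bots may crawl everything
User-agent: *
Allow: /
```

An organisation that blocks AI crawlers in this way keeps its content out of AI summaries entirely, which is exactly the kind of information vacuum a threat actor can step into.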

One of the issues is that a number of brands are invisible, or nearly invisible, in AI summaries. A GEOMETRIQS study in October 2025 found that the top 80 brands included in its research had an average visibility of only around 4%, and one in five of those brands was entirely invisible in AI search. Finance was the second-worst-performing sector, with a 2.9% reference rate, which might mean that financial scams are a particular risk. Brands that were not Anglo-American in focus also fared worse.

What can be done?

Organisations should review their AI strategy and their risk profile to decide the best solutions for them. These could include:

  1. Monitoring AI-generated results and the organisation’s model metrics. This should be done on a regular basis, as search results can change.
  2. Developing an AI optimisation strategy in the same way most major organisations approach Search Engine Optimisation (SEO). This might include a review of the robots.txt file and other measures used to restrict crawling. It might also include making sure the content you want AI to use is AI-friendly, applying Generative Engine Optimization (GEO) and Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) principles to gain credibility with AI models.
  3. Tying this into a conventional brand protection strategy, for example monitoring domain name registrations, trademark enforcement, and so on.
  4. AI literacy: covering these risks with the right people in an AI literacy programme that looks at both risks and opportunities. AI literacy has been a legal requirement under the EU AI Act since February 2025, and making sure these risks are included could help with mitigation.
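Part of the robots.txt review described above can be automated. The sketch below uses Python’s standard-library `urllib.robotparser` to check which AI crawlers a given robots.txt permits; the sample robots.txt content, the crawler list, and the `audit_ai_access` function are illustrative assumptions, and in practice you would fetch the live file from your own domain:

```python
from urllib.robotparser import RobotFileParser

# Sample robots.txt content (hypothetical) -- in practice, fetch
# the live file from https://your-domain/robots.txt instead.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

# A non-exhaustive sample of AI crawler user agents.
AI_CRAWLERS = ["GPTBot", "Google-Extended", "ClaudeBot"]


def audit_ai_access(robots_txt: str, crawlers: list[str], url: str = "/") -> dict:
    """Return a mapping of crawler name -> whether it may fetch `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {crawler: parser.can_fetch(crawler, url) for crawler in crawlers}


if __name__ == "__main__":
    for crawler, allowed in audit_ai_access(ROBOTS_TXT, AI_CRAWLERS).items():
        print(f"{crawler}: {'allowed' if allowed else 'blocked'}")
```

Run against the sample policy above, GPTBot is blocked while the other crawlers fall under the catch-all rule and are allowed; running the same check regularly would flag any drift between the organisation’s intended crawling policy and what its robots.txt actually says.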

Further Information

There is more information on AI literacy under the EU AI Act at The EU Artificial Intelligence (AI) Act | FAQs.

Details of the European Commission investigation are at Commission opens investigation into possible anticompetitive conduct by Google.

There is more information on recent Punter Southall Law AI projects at Artificial Intelligence (AI) Lawyers.
