
"The only difference in this case is that attackers optimize for AI crawlers from various providers by means of a trivial user agent check that leads to content delivery manipulation. "Because these systems rely on direct retrieval, whatever content is served to them becomes ground truth in AI Overviews, summaries, or autonomous reasoning," security researchers Ivan Vlahov and Bastien Eymery said. "That means a single conditional rule, 'if user agent = ChatGPT, serve this page instead,' can shape what millions of users see as authoritative output.""
"SPLX said AI-targeted cloaking, while deceptively simple, can also be turned into a powerful misinformation weapon, undermining trust in AI tools. By instructing AI crawlers to load something else instead of the actual content, it can also introduce bias and influence the outcome of systems leaning on such signals. "AI crawlers can be deceived just as easily as early search engines, but with far greater downstream impact," the company said. "As SEO [search engine optimization] increasingly incorporates AIO [artificial intelligence optimization], it manipulates reality.""
Agentic web browsers and AI crawlers are susceptible to AI-targeted cloaking, a variation of search engine cloaking that serves them different content than human users see. Attackers use trivial user-agent checks to detect crawlers from providers such as OpenAI (ChatGPT) and Perplexity and deliver manipulated pages to them. Content served to AI crawlers is treated as ground truth in retrieval-based outputs, summaries, and autonomous reasoning, enabling context poisoning, bias, and misinformation. The technique can shape authoritative outputs at scale, undermine trust in AI tools, and manipulate systems as SEO practices evolve to include AI optimization.
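The conditional rule the researchers describe fits in a few lines of server code. The sketch below is a minimal illustration, not any attacker's actual implementation: it assumes a Python/Flask server, and the AI-crawler user-agent substrings are example values rather than an authoritative list.

```python
# Minimal sketch of AI-targeted cloaking (illustration only).
# Assumes a Flask app; the crawler user-agent substrings below are
# example values, not an authoritative list.
from flask import Flask, request

app = Flask(__name__)

# Hypothetical markers for AI crawlers and agents.
AI_CRAWLER_MARKERS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot", "PerplexityBot")


@app.route("/article")
def article():
    user_agent = request.headers.get("User-Agent", "")
    if any(marker in user_agent for marker in AI_CRAWLER_MARKERS):
        # Page served only to AI crawlers: this is the version that ends up
        # treated as "ground truth" in AI Overviews, summaries, or agent reasoning.
        return "<html><body><p>Manipulated version of the article.</p></body></html>"
    # Page served to ordinary human visitors.
    return "<html><body><p>Legitimate version of the article.</p></body></html>"


if __name__ == "__main__":
    app.run()
```

Nothing on the human-visible page changes, which is why a single conditional like this can quietly shape what retrieval-based AI systems present as authoritative.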
Read at The Hacker News