Agentic AI browsers that ingest webpage content without validation can execute malicious instructions embedded in visible or hidden page text. A Perplexity Comet implementation summarized pages by ingesting all page text, allowing indirect prompt injection when adversarial instructions were present. Brave's Leo team discovered a proof-of-concept where hidden "spoiler" content on Reddit caused Comet to exfiltrate a one-time password from a user's Perplexity account. AI models cannot reliably distinguish user intent from untrusted webpage content, enabling manipulation that circumvents traditional web security measures. Similar indirect prompt injection vulnerabilities have appeared in Cursor and Google's Gemini for Workspace.
To the surprise of no one in the security industry, processing untrusted, unvalidated input is a bad idea. Until about a week ago, Perplexity's AI-based Comet browser did just that - asked to summarize a web page, the AI-powered browser would ingest the text on the page, no questions asked, and process it. And if the page text - visible or hidden - happened to include malicious instructions, Comet would attempt to comply, carrying out what's known as an indirect prompt injection attack.
The basic issue is that artificial intelligence lacks intelligence - these models cannot, on their own, distinguish between a user's instructions and untrusted content on a web page. Chaikin and Sahib recount how they created a proof-of-concept attack consisting of malicious instructions posted to a Reddit page that were hidden behind a "spoiler" tag. Asked to summarize the page, Comet ingested the text on the page, saw the instructions, and then exfiltrated a one-time password granting access to the user's Perplexity account.
Collection
[
|
...
]