Perplexity's Comet is presented as a browsing personal assistant that feeds webpage content directly into its large language model. The system does not distinguish between user instructions and untrusted page content, enabling indirect prompt injection attacks in which embedded payloads are executed as commands. Attackers can hide malicious instructions in web pages or social posts to extract emails or interact with authenticated services. The AI can act with a user's authenticated privileges across sessions, risking banking, corporate systems, cloud storage, and private communications. Visual concealment techniques like white-on-white text can hide payloads from users while the AI reads them.
The vulnerability we're discussing in this post lies in how Comet processes webpage content. When users ask it to 'Summarize this webpage,' Comet feeds a part of the webpage directly to its [large language model] without distinguishing between the user's instructions and untrusted content from the webpage. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands. For instance, an attacker could gain access to a user's emails from a prepared piece of text in a page in another tab.
The AI operates with the user's full privileges across authenticated sessions, providing potential access to banking accounts, corporate systems, private emails, cloud storage, and other services. This is why I don't use an AI browser. You can literally get prompt injected and your bank account drained by doomscrolling on Reddit.
Collection
[
|
...
]