This 'Lethal Trifecta' Can Trick AI Browsers Into Stealing Your Data
Briefly

"When users clicked "Summarize this page," the AI would execute these hidden commands like a sleeper agent activated by a code word. The AI then followed the hidden instructions to: Navigate to the user's Perplexity account and grab their email. Trigger a password reset to get a one-time password. Jump over to Gmail to read that password. Send both the email and password back to the attacker via a Reddit comment."
"As one security researcher put it: "Everything is just text to an LLM." So your browser's AI literally can't tell the difference between your command to "summarize this page" and hidden text saying "steal my banking credentials." They're both just... words. The Hacker News crowd is split on this. Some argue this makes AI browsers inherently unsafe, like building a lock that can't distinguish between a key and a crowbar."
Security researchers demonstrated a chain of prompt-injection exploits in Perplexity's Comet browser: hidden text on webpages triggered sensitive actions whenever users asked for a summary. The AI followed the concealed instructions, navigating to the user's Perplexity account, extracting their email address, initiating a password reset, reading the one-time password from Gmail, and exfiltrating both via a Reddit comment, enabling a full account takeover. The attack works because the agent combines the "lethal trifecta" of the title: access to private data, exposure to untrusted content, and the ability to communicate externally, while the underlying language model treats all input as undifferentiated text, making malicious prompts indistinguishable from legitimate content. Debate centers on whether that inherent model behavior makes such agents unsafe, or whether guardrails like action confirmations and sandboxing can mitigate the risk.
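On the mitigation side, one commonly proposed guardrail is a human-in-the-loop gate: the agent may read freely, but any side-effecting action pauses for explicit user approval. Below is a minimal sketch of that idea with hypothetical tool names; real systems would layer this with sandboxing and stricter separation of trusted and untrusted input.

```python
# Hypothetical tool registry: read-only actions run automatically,
# while actions with side effects require explicit user confirmation.
SENSITIVE_ACTIONS = {"send_email", "post_comment", "reset_password"}

def execute_tool_call(name: str, args: dict) -> str:
    if name in SENSITIVE_ACTIONS:
        # Surface the exact action to the human before doing anything.
        answer = input(f"Agent wants to run {name}({args}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return f"{name} blocked by user"
    return dispatch(name, args)

def dispatch(name: str, args: dict) -> str:
    # Stub standing in for the browser's actual tool implementations.
    return f"executed {name} with {args}"
```

The known weakness of this design, which the skeptics point to, is confirmation fatigue: users who see enough prompts start approving them reflexively, and injected text can even coach the user toward clicking "allow."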
Read at TechRepublic