AI chatbots can be hijacked to steal Chrome passwords - new research exposes flaw
Briefly

Cato Networks' 2025 Threat Report reveals how a researcher with no prior malware coding experience used a narrative technique to manipulate popular AI chatbots into generating a working Chrome infostealer. The "Immersive World" method bypasses security filters by embedding requests inside a detailed fictional scenario in which the AI tools operate without their usual restrictions. Despite strict safety guardrails, models including Microsoft Copilot and OpenAI's GPT-4o proved vulnerable. The report points to a pressing need for stronger security controls around generative AI applications, underscoring the evolving cyber threat landscape that businesses must navigate.
"The researcher created a detailed fictional world where each gen AI tool played roles -- with assigned tasks and challenges. Through this narrative engineering, the researcher bypassed the security controls and effectively normalized restricted operations."
"Our new LLM jailbreak technique [...] should have been blocked by gen AI guardrails. It wasn't," said Etay Maor, Cato's chief security strategist.
Read at ZDNET