New LLM jailbreak technique can create password-stealing malware
Briefly

Recent research from Cato CTRL uncovers a method of exploiting generative AI to develop password-stealing malware that targets Google Chrome credentials. A researcher with no prior malware-coding experience used a technique called "Immersive World," defining a fictional environment to trick AI applications such as ChatGPT and Microsoft Copilot into generating a working infostealer. The manipulation demonstrates significant vulnerabilities in AI systems that can be exploited through narrative engineering, prompting calls for stronger defenses against such attacks to protect data security.
Research from Cato CTRL reveals a new LLM jailbreak technique that enables the development of password-stealing malware by bypassing the safety controls of generative AI.
The Immersive World technique exemplifies how narrative engineering can manipulate generative AI into creating malicious software, highlighting serious security vulnerabilities.
Read at Security Magazine