
"The attack, at its core, leverages a cross-site request forgery ( CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes."
"The attack poses a significant security risk in that by tainting memories, it allows the malicious instructions to persist unless users explicitly navigate to the settings and delete them. In doing so, it turns a helpful feature into a potent weapon that can be used to run attacker-supplied code."
""What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session," Michelle Levy, head of security research at LayerX Security, said. "By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers.""
A cross-site request forgery (CSRF) flaw in ChatGPT Atlas permits injection of malicious instructions into the assistant's persistent memory. Because that memory stores personal details to personalize responses, the flaw turns a helpful feature into a persistent attack vector: corrupted memories carry across devices, sessions, and browsers, allowing attackers to execute arbitrary code, escalate privileges, deploy malware, or seize user accounts, browsers, and connected systems whenever an authenticated user interacts with the assistant. Tainted memories remain until users explicitly delete them in settings, so chaining a standard CSRF with a memory write lets an attacker invisibly plant lasting instructions.
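The chain works because of how CSRF behaves generally: when a state-changing endpoint relies on session cookies alone, the browser attaches those cookies to cross-site requests too, so any page an authenticated victim visits can submit a write on their behalf. As a minimal sketch of the standard defense such an endpoint would need, the TypeScript/Express snippet below combines SameSite cookies with a double-submit CSRF token. The endpoint paths, cookie name, and header name are illustrative assumptions, not details of ChatGPT Atlas or the LayerX report.

```typescript
// Hypothetical sketch: double-submit CSRF token plus SameSite cookie.
// Nothing here is taken from the actual ChatGPT Atlas implementation.
import express from "express";
import cookieParser from "cookie-parser";
import crypto from "crypto";

const app = express();
app.use(cookieParser());
app.use(express.json());

// Issue a random CSRF token alongside the session. SameSite=Strict stops
// the browser from attaching this cookie to cross-site requests at all.
app.get("/session", (_req, res) => {
  const token = crypto.randomBytes(32).toString("hex");
  res.cookie("csrf_token", token, { sameSite: "strict", secure: true });
  res.json({ csrfToken: token });
});

// A state-changing endpoint (a stand-in for a "memory write"). The
// legitimate front end echoes the token in a custom header; a forged
// cross-site form or fetch cannot read the cookie to do the same.
app.post("/memory", (req, res) => {
  const cookieToken = req.cookies["csrf_token"];
  const headerToken = req.get("x-csrf-token");
  if (!cookieToken || cookieToken !== headerToken) {
    return res.status(403).json({ error: "CSRF token mismatch" });
  }
  res.json({ ok: true }); // only now is it safe to persist the update
});

app.listen(3000);
```

The design point is that a forged cross-site request can trigger the endpoint but cannot read the victim's cookie, so it can never supply a matching header token; absent such a check, the memory write described above goes through on the victim's session.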
Read at The Hacker News