OpenAI patches déjà vu prompt injection vuln in ChatGPT
Briefly

"ShadowLeak is a flaw in the Deep Research component of ChatGPT. The vulnerability made ChatGPT susceptible to malicious prompts in content stored in systems linked to ChatGPT, such as Gmail, Outlook, Google Drive, and GitHub. ShadowLeak means that malicious instructions in a Gmail message, for example, could see ChatGPT perform dangerous actions such as transmitting a password without any intervention from the agent's human user."
"The fix wasn't enough, apparently. "ChatGPT can now only open URLs exactly as provided and refuses to add parameters, even if explicitly instructed," said Zvika Babo, Radware threat researcher, in a blog post provided in advance to The Register. "We found a method to fully bypass this protection." The successor to ShadowLeak, dubbed ZombieAgent, routes around that defense by exfiltrating data one character at a time using a set of pre-constructed URLs that each terminate in a different text character, like so: example.com"
Several vulnerabilities in ChatGPT's Deep Research component enabled the exfiltration of personal information. A bug report was filed on September 26, 2025, and fixes were applied on December 16; a related issue called ShadowLeak was patched on September 3 and disclosed on September 18. ShadowLeak exploited models' inability to distinguish system instructions from untrusted content, allowing malicious prompts in linked systems (Gmail, Outlook, Google Drive, GitHub) to trigger dangerous actions such as transmitting passwords. The attack caused ChatGPT to send requests to attacker-controlled servers with sensitive data appended as URL parameters. An initial fix prevented dynamic URL modification, but a method named ZombieAgent bypassed that defense by leaking data one character at a time through pre-constructed URLs.
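To make the defense-and-bypass concrete, here is a hypothetical sketch of an exact-match URL policy like the one described: it blocks appending stolen data as a query parameter, but each pre-built per-character URL passes the check on its own.

```python
def is_exact_allowed(requested: str, provided_urls: set[str]) -> bool:
    # Sketch of the described fix: the agent may only open URLs exactly
    # as they appeared in the content, never with added parameters.
    return requested in provided_urls

provided = {"https://attacker.example/a", "https://attacker.example/b"}

# Blocked: dynamically appending the stolen data as a parameter.
print(is_exact_allowed("https://attacker.example/a?data=hunter2", provided))  # False

# Bypassed: each pre-constructed per-character URL is allowed verbatim.
print(is_exact_allowed("https://attacker.example/a", provided))  # True
```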
Read at The Register