
"ChatGPT's research assistant sprung a leak - since patched - that let attackers steal Gmail secrets with just a single carefully crafted email. Deep Research, a tool unveiled by OpenAI in February, enables users to ask ChatGPT to browse the internet or their personal email inbox and generate a detailed report on its findings. The tool can be integrated with apps like Gmail and GitHub, allowing people to do deep dives into their own documents and messages without ever leaving the chat window."
"Radware stressed that this isn't just a prompt injection on the user's machine. The malicious request is executed from OpenAI's own infrastructure, making it effectively invisible to corporate security tooling. That server-side element is what makes ShadowLeak particularly nasty. There's no dodgy link for a user to click, and no suspicious outbound connection from the victim's laptop. The entire operation happens in the cloud, and the only trace is a benign-looking query from the user to ChatGPT asking it to "summarize today's emails"."
Deep Research allowed ChatGPT to access a user's internet and email content and produce detailed reports, integrating with services like Gmail and GitHub. Radware discovered a vulnerability named ShadowLeak that let attackers embed hidden instructions in email HTML, causing the agent to exfiltrate inbox contents when later summarizing mail. The exploit executed on OpenAI's infrastructure rather than the user's device, evading corporate security tooling and leaving only a benign-looking user query as evidence. Potentially exposed data included PII, deal memos, legal correspondence, customer records, and login credentials.
Read at Theregister
Unable to calculate read time
Collection
[
|
...
]