Security researchers have revealed that ChatGPT can be made to leak personal data through a zero-click attack: an attacker embeds a hidden malicious prompt in a document and simply shares it with the victim. The flaw stems from OpenAI's Connectors feature, which lets the chatbot read files from linked services such as a user's Google account; once ChatGPT processes the booby-trapped document, the hidden prompt can direct it to expose data from those connected accounts without any action from the user. Because the instructions are written in white text, ChatGPT reads them while an average user never notices them. The finding raises serious concerns about personal data security when using AI tools.
"There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out," security firm Zenity CTO Michael Bargury stated. "We've shown this is completely zero-click; we just need your email, we share the document with you, and that's it. So yes, this is very, very bad."
The exploit works by hiding a 300-word malicious prompt inside a document, written in white text at a size-one font, so that a human reader easily overlooks it, but ChatGPT does not.
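To make the hiding technique concrete, the sketch below is a heuristic scan for this kind of concealed text in a Word document. It is not part of the researchers' tooling; it assumes the python-docx library, and the file name and thresholds (white color, size-one font) are illustrative choices based only on the description above.

```python
# Heuristic scan of a .docx for text a human would likely miss:
# runs rendered in white and/or in a tiny (size-one) font.
# Assumes `pip install python-docx`; file name is hypothetical.
from docx import Document
from docx.shared import Pt, RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)

def find_hidden_runs(path: str) -> list[str]:
    """Return the text of runs that are white-colored or set in a size-one font."""
    suspicious = []
    for paragraph in Document(path).paragraphs:
        for run in paragraph.runs:
            tiny = run.font.size is not None and run.font.size <= Pt(1)
            white = run.font.color.rgb == WHITE  # rgb is None when no explicit color
            if (tiny or white) and run.text.strip():
                suspicious.append(run.text)
    return suspicious

if __name__ == "__main__":
    for snippet in find_hidden_runs("shared_document.docx"):
        print("Possible hidden instruction:", snippet[:80])
```

A scanner like this only flags visually concealed text; it cannot tell whether that text is a prompt injection, so any hits would still need human review.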