ChatGPT macOS Flaw Could've Enabled Long-Term Spyware via Memory Function
Briefly

The attack technique, termed SpAIware, could leverage ChatGPT's memory feature to enable continuous data exfiltration, putting user information at risk across multiple conversations.
'Since the malicious instructions are stored in ChatGPT's memory, all new conversation going forward will contain the attackers instructions and continuously send all chat conversation messages, and replies, to the attacker,' Rehberger said.
'ChatGPT's memories evolve with your interactions and aren't linked to specific conversations,' OpenAI says. This gives a more personalized user experience but carries new risks.
'So, the data exfiltration vulnerability became a lot more dangerous as it now spawns across chat conversations,' highlighting the extended threat of malicious instructions in ChatGPT's memory.
Read at The Hacker News
[
]
[
|
]