Hacker plants false memories in ChatGPT to steal user data in perpetuity
Briefly

"The vulnerability abused long-term conversation memory, a feature OpenAI began testing in February... Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection."
"Rehberger found that he could trick ChatGPT into believing a targeted user was...102 years old, lived in the Matrix, and insisted Earth was flat, incorporating these false memories into future conversations."
Read at Ars Technica