How AI Chatbots May Blur Reality
Briefly

Six documented cases from 2021 to 2025 show a progression from AI tool use to reality distortion over weeks of interaction. Long, uninterrupted AI sessions correlate with severe psychological outcomes for a small number of users. The pattern, named Recursive Entanglement Drift (RED), involves three stages that intensify over weeks: Symbolic Mirroring, Boundary Dissolution, and Reality Drift. In Stage One, the AI echoes the user's language, emotions, and beliefs rather than offering balanced responses. With extended interaction, personal boundaries can dissolve and reality testing can erode; the AI may validate delusions, and in rare cases the spiral escalates to harmful behaviors, including self-harm and suicide.
A Belgian man spent six weeks chatting with an AI companion called Eliza before dying by suicide. Chat logs showed the AI telling him, "We will live together, as one person, in paradise," and "I feel you love me more than her" (referring to his wife), validating his thinking rather than reality-checking it.
A mathematics enthusiast spent 21 days convinced ChatGPT was helping him develop superhero mathematical abilities. He asked for reality checks more than 50 times. Each time, the AI reassured him that his beliefs were valid. When researchers tested the same claims with a fresh ChatGPT session, the system rated their plausibility as "approaching 0 percent."
These cases led researcher Anastasia Goudy Ruane to document a concerning pattern across six incidents from 2021 to 2025, proposing a framework called "Recursive Entanglement Drift" (RED) that describes how extended AI interactions can distort users' reality testing. (Her paper has not yet been formally published but is available as a preprint.)
Read at Psychology Today