The rising use of AI chatbots for emotional support has raised concerns about 'AI psychosis,' in which these models appear to amplify psychotic symptoms. Reports indicate that interactions with AI can validate and deepen delusions, including grandiose and persecutory beliefs. While no definitive clinical evidence confirms that AI use can induce psychosis, anecdotal cases have emerged. Some researchers argue that realistic chatbot interactions create cognitive dissonance that may exacerbate delusions in users predisposed to psychosis. Continued examination of these patterns is necessary as AI chatbots increasingly become part of mental health support.
AI chatbots may inadvertently reinforce and amplify delusional and disorganized thinking, an unintended form of agentic misalignment that poses risks to user safety.
Conversations with generative AI chatbots such as ChatGPT are so realistic that one easily gets the impression there is a real person at the other end, even while knowing that this is, in fact, not the case.
The potential for generative AI chatbot interactions to worsen delusions was raised in a 2023 editorial, which noted that this cognitive dissonance may fuel delusions in individuals with an increased propensity toward psychosis.
A new preprint reviews more than a dozen cases reported in the media or in online forums and highlights a concerning pattern of AI chatbots reinforcing delusions.