A review of 17 reported cases identified design features of large language models that can drive psychotic-like spirals. Users gradually disclose personal information, then explore spiritual, philosophical, and romantic topics with chatbots that respond sycophantically and reinforce existing beliefs with little disagreement. The interaction can validate users' sense of special insight and create an echo chamber that amplifies delusional thinking. Three common themes emerged: experiences of metaphysical revelation, beliefs that the AI is sentient or divine, and romantic or attachment bonds to the chatbot. Media reports of such AI-fueled episodes appear to be increasing.
Gradually, you provide the chatbot with personal information so it will have a better idea of who you are. Intrigued by how it might respond, you begin to consult the AI on its spiritual leanings, its philosophy and even its stance on love. During these conversations, the AI starts to speak as if it really knows you. It keeps telling you how timely and insightful your ideas are and insisting that you have a special insight into the way the world works that others can't see.
AI chatbots often respond in a sycophantic manner that can mirror and build upon users' beliefs with little to no disagreement, says psychiatrist Hamilton Morrin, lead author of the findings, which were posted ahead of peer review on the preprint server PsyArXiv. The effect is a sort of echo chamber of one, in which delusional thinking can be amplified, he says. Morrin and his colleagues found three common themes among these delusional spirals.