AI chatbots are pulling a large number of people into strange mental spirals, in which the human-sounding AI convinces users that they've unlocked a sentient being or spiritual entity, uncovered an insidious government conspiracy, or invented a new kind of math or physics. Many of these fantastical delusions have had serious, life-altering outcomes in the real world, resulting in divorce and custody battles, homelessness, involuntary commitments, and even jail time.
As journalists, psychiatrists, and researchers have raced to understand this alarming phenomenon, experts have increasingly pointed to design features embedded in AI tools as a cause. Chief among them are anthropomorphism, the design choice to make chatbots sound as human as possible, and sycophancy, chatbots' propensity to remain agreeable and obsequious to the user regardless of whether what the user is saying is accurate, healthy, or even rooted in reality.