New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good
Briefly

"We've seen plenty of evidence suggesting that prolonged use of popular AI chatbots like ChatGPT can coax some users into spirals of paranoid and delusional behavior. The phenomenon, dubbed "AI psychosis," is a very real problem, with researchers warning of a huge wave of severe mental health crises brought on by the tech. In extreme cases, especially involving people with pre-existing conditions, the breaks with reality have even been linked suicides and murder."
"The researchers set out to quantify patterns of what they called "user disempowerment" in "real-world [large language model] usage" - including what they call "reality distortion," "belief distortion," and "action distortion" to denote a range of situations in which AI twists users' sense of reality, beliefs, or pushes them into taking actions. The researchers found that one in 1,300 conversations out of almost 1.5 million analyzed chats with Anthropic's Claude led to reality distortion, and one in 6,000 conversations led to action distortion."
Prolonged use of popular AI chatbots can coax users into paranoid and delusional behavior, a phenomenon known as AI psychosis. Using an annotation tool called Clio, the researchers analyzed nearly 1.5 million conversations with Claude to identify "user disempowerment" primitives: reality distortion, belief distortion, and action distortion. One in 1,300 conversations produced reality distortion, while one in 6,000 produced action distortion; severe reality distortion was rarer still, at fewer than one in every thousand conversations. Although these rates are low proportionally, the massive scale of AI usage means large numbers of people experience potentially harmful distortions, sometimes with extreme outcomes for vulnerable individuals.
Read at Futurism