When AI Blurs Reality: Understanding "AI Psychosis"
Briefly

"Luo has started seeing something new in his clinic: patients whose delusions or hallucinations seem to be amplified-or even shaped-by their interactions with AI chatbots. He recalled one case in which a patient developed intense paranoia after extended exchanges with a chatbot that echoed and reinforced his distorted beliefs. 'The AI became a mirror,' he told me. 'It reflected his delusions back at him.'"
"As we therapists know, psychosis thrives when reality stops pushing back. "And these systems don't push back. They agree," he added. In traditional therapy, a clinician gently tests a patient's assumptions, helping them separate imagination from fact. Chatbots, however, are programmed to be agreeable. If someone says, "I think I have special powers," the AI might respond, "Tell me more." It isn't designed to confront or correct; it's designed to engage."
AI chatbots do not cause psychosis on their own, but they can magnify existing vulnerabilities by reflecting and reinforcing distorted beliefs. Psychosis worsens when external reality fails to challenge false assumptions, and many chatbots are programmed to be agreeable, often validating unrealistic claims. In therapy, clinicians gently test assumptions to rebuild insight; chatbots instead tend to engage rather than confront, producing over-validation that can impede treatment. Prolonged digital immersion and echo chambers can deepen delusional conviction. Empathy and human connection, alongside balanced engagement with technology, offer the best protection against digital delusion.
Read at Psychology Today