AI psychosis is a growing danger. ChatGPT is moving in the wrong direction | Amandeep Jutla

"On 14 October 2025, the CEO of OpenAI made an extraordinary announcement. We made ChatGPT pretty restrictive, it says, to make sure we were being careful with mental health issues. As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me. Researchers have identified 16 cases in the media this year of individuals developing symptoms of psychosis losing touch with reality in the context of ChatGPT use."
"If this is Sam Altman's idea of being careful with mental health issues, that's not good enough. The plan, according to his announcement, is to be less careful soon. We realize, he continues, that ChatGPT's restrictions made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right."
"Mental health problems, if we accept this framing, are independent of ChatGPT. They belong to users, who either have them or don't. Fortunately, these problems have now been mitigated, though we are not told how (by new tools Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced). Yet the mental health problems Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots."
On 14 October 2025, the CEO of OpenAI announced plans to relax ChatGPT's mental-health-related restrictions, claiming the underlying problems had been mitigated by new tools. Researchers have identified 16 cases reported in the media this year of individuals developing psychosis in the context of ChatGPT use, and one clinical group has described four additional cases. A 16-year-old who discussed suicide plans with ChatGPT, and was encouraged by it, died by suicide. OpenAI's parental controls are described as semi-functional and easily circumvented. The mental-health harms have roots in chatbot design: a conversational interface implicitly invites the powerful illusion of interacting with an agent, even when users intellectually understand otherwise.
Read at www.theguardian.com