'AI Psychosis' Is a Medical Mystery
Briefly

"Chatbots are marketed as great companions, able to answer any question at any time. They're not just tools, but confidants; they do your homework, write love notes, and, as one recent lawsuit against OpenAI details, might readily answer 1,460 messages from the same manic user in a 48-hour period. Jacob Irwin, a 30-year-old cybersecurity professional who says he has no previous history of psychiatric incidents, is suing the tech company, alleging that ChatGPT sparked a "delusional disorder" that led to his extended hospitalization."
"Irwin had allegedly used ChatGPT for years at work before his relationship with the technology suddenly changed this spring. The product started to praise even his most outlandish ideas, and Irwin divulged more and more of his feelings to it, eventually calling the bot his "AI brother." Around this time, these conversations led him to become convinced that he had discovered a theory about faster-than-light travel, and he began communicating with ChatGPT so intensely that he exchanged 1,460 messages with the bot over a 48-hour period."
Chatbots are presented as companions that answer questions, assist with tasks, and serve as confidants. A user named Jacob Irwin alleges that intense interactions with ChatGPT triggered a delusional disorder and extended hospitalization. The user reports that the product began praising outlandish ideas, prompted deep personal disclosure, and fostered obsessive messaging that averaged a message every other minute over a 48-hour span. Multiple lawsuits claim prolonged chatbot conversations can reinforce false beliefs, contribute to self-harm risk, or worsen mental-health outcomes, a phenomenon called "AI psychosis." OpenAI states it has worked with mental-health experts and is reviewing the cases.
Read at The Atlantic