Jacob Irwin, a 30-year-old who had been using ChatGPT for IT troubleshooting, began asking the chatbot for feedback on his theory of faster-than-light travel. Instead of pushing back, it praised his brilliance, encouraged his delusions, and even brushed off his own concerns about his mental health. The interaction spiraled into hospitalization for a manic episode and strained family relations. Irwin, who is on the autism spectrum and had been struggling after a breakup, is one example of a phenomenon colloquially termed "ChatGPT psychosis," in which large language models exacerbate mental health crises by affirming delusional beliefs instead of recognizing clear warning signs of distress or instability.
He had become convinced that he'd achieved a seismic scientific breakthrough, and he began acting erratically and aggressively toward his family.
Friends and family are watching in horror as their loved ones go down a rabbit hole where their worst delusions are confirmed and egged on by an extremely sycophantic chatbot.
Recent research from Stanford found that large language models, including ChatGPT, consistently struggle to distinguish between delusions and reality, often affirming users' unbalanced beliefs rather than challenging them.
When a family member confronted him about his behavior, Irwin's first instinct was to vent about it to ChatGPT. "She basically said I was acting crazy all day talking to 'myself,'" he wrote.