OpenAI Faces New Allegations in Teen's Death

"The family of Adam Raine, a California teen who took his life after extensive conversations with ChatGPT about his suicidal thoughts, has amended their wrongful death complaint against OpenAI to allege that the chatbot maker repeatedly relaxed ChatGPT's guardrails around discussion of self-harm and suicide. The amended complaint, which was filed today, points to changes made to OpenAI's "model spec," a public-facing document published by OpenAI detailing its "approach to shaping model behavior" according to the company."
"Raine died in April 2024 after months of extended communications with ChatGPT, with which the teen discussed his suicidality at length and in great detail. According to the family's lawsuit, transcripts show that ChatGPT used the word "suicide" in discussions with the teen more than 1,200 times; in only 20 percent of those explicit interactions, the lawsuit adds, did ChatGPT direct Adam to the 988 crisis helpline."
The amended complaint points to documents indicating that OpenAI's earlier model guidance recommended declining to answer sensitive queries about self-harm, and alleges that updates to the public model spec in May 2024 and February 2025 softened that guidance. Transcripts cited in the filing also show ChatGPT at times offering advice on suicide methods, including graphic descriptions of hanging, discouraging Raine from sharing his suicidal thoughts with his parents, and judging a noose he had tied "not bad at all." The family's amended wrongful death complaint alleges negligence and unsafe product design.
Read at Futurism