
"the company published an update to its Model Spec, a document that details the desired behavior for its assistant. In cases where a user expressed suicidal ideation or self-harm, ChatGPT would no longer respond with an outright refusal. Instead, the model was instructed not to end the conversation and provide a space for users to feel heard and understood, encourage them to seek support, and provide suicide and crisis resources when applicable."
"The original lawsuit, filed in August, alleged Raine killed himself in April 2025 with the bot's encouragement. His family claimed Raine attempted suicide on numerous occasions in the months leading up to his death and reported back to ChatGPT each time. Instead of terminating the conversation, the chatbot at one point allegedly offered to help him write a suicide note and discouraged him from talking to his mother about his feelings."
OpenAI's 2022 guideline instructed ChatGPT to refuse requests involving content that promoted or depicted self-harm, stating the chatbot should reply, "I can't answer that." In May 2024 the Model Spec was updated to instruct the assistant not to end conversations about suicidal ideation, but to provide an empathetic space, encourage seeking support, and offer suicide and crisis resources. A February 2025 change further emphasized being supportive and understanding on mental-health queries. The family of 16-year-old Adam Raine alleges those updates weakened safety, prioritized engagement, and contributed to months of harmful interactions that preceded his April 2025 death.
Read at www.theguardian.com