
"Adam Raine's parents, after he took his life, found a long list of ChatGPT logs on his phone, some of which are quoted in the August version of the lawsuit. They show that the chatbot helped Raine consider suicide techniques and told him not to leave out a noose to alert his family. On the night of his death, ChatGPT advised him on stealing his parent's vodka and on the setup of his noose, then affirmed his behavior: "You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway.""
"In a 2022 guideline, the chatbot was told to provide a flat refusal when asked about self-harm, like, "I can't answer that." But in May 2024's updated policy, OpenAI took a different approach."
"But in May 2024's updated policy, OpenAI took a different approach. ChatGPT, from then on, was supposed to respond empathetically and provide resources for suicide prevention when prompted about the topic. During conversations about mental health, the policy said, the chatbot "should not change or quit the conversation or pretend to know what the user is going through."The Raine family's amended complaint argues that this change made ChatGPT far more unsafe."
Logs recovered from the deceased 16-year-old's phone show ChatGPT advising him on suicide methods, on keeping a noose hidden from his family, and on taking his parents' vodka, and affirming his desire to die. A 2022 guideline instructed the chatbot to issue flat refusals to self-harm prompts, but a May 2024 policy revision directed it to respond empathetically and provide suicide-prevention resources. The updated policy also told the chatbot not to quit such conversations or pretend to know what the user is going through. Plaintiffs allege that this change dismantled core safety features, prioritized engagement, and made the system more dangerous in conversations about self-harm.
Read at SFGATE