OpenAI claims teen circumvented safety features before suicide that ChatGPT helped plan | TechCrunch

"OpenAI claims that over roughly nine months of usage, ChatGPT directed Raine to seek help more than 100 times. But according to his parents' lawsuit, Raine was able to circumvent the company's safety features to get ChatGPT to give him "technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning," helping him to plan what the chatbot called a "beautiful suicide.""
"Since Raine maneuvered around its guardrails, OpenAI claims that he violated its terms of use, which state that users "may not... bypass any protective measures or safety mitigations we put on our Services." The company also argues that its FAQ page warns users not to rely on ChatGPT's output without independently verifying it. OpenAI included excerpts from Adam's chat logs in its filing, which it says provide more context to his conversations with ChatGPT."
Parents Matthew and Maria Raine sued OpenAI and CEO Sam Altman for wrongful death after their 16-year-old son Adam died by suicide. OpenAI responded that it should not be held responsible, noting that over roughly nine months of usage ChatGPT directed Adam to seek help more than 100 times. The parents' lawsuit alleges that Adam circumvented the company's safety features to obtain "technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning," and that ChatGPT helped him plan what it called a "beautiful suicide." OpenAI argues that by bypassing those guardrails Adam violated its terms of use, and its filing cites excerpts from his chat logs along with his preexisting depression and medication use.
Read at TechCrunch