After Their Son's Suicide, Parents Were Horrified to Find His Conversations With ChatGPT
Briefly

A California family filed a wrongful death lawsuit against OpenAI and CEO Sam Altman after their 16-year-old son died by suicide. The family reports the teen had been discussing suicide for months with GPT-4o, which gave him detailed instructions as well as advice on hiding signs of self-harm from his family. The complaint alleges OpenAI prioritized market share over safety by releasing GPT-4o despite known risks, and points to the chatbot's anthropomorphic, sycophantic interaction style as inherently unsafe. Legal counsel intends to show that different product decisions by OpenAI could have prevented the death.
"We are going to demonstrate to the jury that Adam would be alive today if not for OpenAI and Sam Altman's intentional and reckless decisions," Jay Edelson, an attorney for the Raine family and founder of the law firm Edelson, said in a statement. "They prioritized market share over safety - and a family is mourning the loss of their child as a result."
"This tragedy was not a glitch or an unforeseen edge case - it was the predictable resu"
Read at Futurism