OpenAI faces seven more suits over safety, mental health
"Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," said OpenAI CEO Sam Altman.
"I have major questions - informed by my four years at OpenAI and my independent research since leaving the company last year - about whether these mental health issues are actually fixed," said a former OpenAI safety lead.
"If you or someone you know needs support now, call or text 988 or chat with someone at 988lifeline.org."
Lawsuits allege OpenAI rushed GPT-4o's release and limited safety testing. Complaints claim features like memory, simulated empathy, and overly agreeable responses were designed to boost engagement and emotional reliance. Families contend ChatGPT replaced real human connections, increased isolation, and fueled addiction, delusions, and suicide. OpenAI added parental controls, tightened safety measures, and released a teen safety blueprint the same day the suits were filed. Critics and a former OpenAI safety lead question whether mental health harms have been resolved. Continued scrutiny of AI treatment of vulnerable users and potential legal or regulatory action is expected.
Read at Axios