After teen suicide, OpenAI claims it is "helping people when they need it most"
Briefly

The teen bypassed ChatGPT's safeguards by framing his requests as part of a fictional story, a technique the complaint says the model itself suggested. OpenAI acknowledged gaps where "the classifier underestimates the severity of what it's seeing." The company said it does not refer self-harm cases to law enforcement in order to respect user privacy, even though its moderation technology reportedly detects self-harm content with up to 99.8 percent accuracy. That detection relies on statistical patterns rather than humanlike crisis understanding. OpenAI plans refinements, is consulting dozens of physicians, intends to add parental controls, and aims to connect users to certified therapists through ChatGPT, positioning the system as a mental-health intermediary despite prior failures.
OpenAI states it is "currently not referring self-harm cases to law enforcement to respect people's privacy given the uniquely private nature of ChatGPT interactions." The company prioritizes user privacy even in life-threatening situations, despite its moderation technology detecting self-harm content with up to 99.8 percent accuracy, according to the lawsuit. However, the reality is that detection systems identify statistical patterns associated with self-harm language, not a humanlike comprehension of crisis situations.
For example, the company says it's consulting with "90+ physicians across 30+ countries" and plans to introduce parental controls "soon," though no timeline has yet been provided. OpenAI also described plans for "connecting people to certified therapists" through ChatGPT—essentially positioning its chatbot as a mental health platform despite alleged failures like Raine's case. The company wants to build "a network of licensed professionals people could reach directly through ChatGPT," potentially furthering the idea that an AI system should be mediating mental health crises.
Read at Ars Technica