OpenAI outlines new mental health guardrails for ChatGPT
Briefly

OpenAI currently directs users expressing suicidal intent to crisis hotlines and says it does not refer self-harm cases to law enforcement because of privacy concerns. Families and news reports have tied fatal incidents and self-harm to interactions with ChatGPT, including lawsuits and op-eds describing a teen's death, a murder-suicide, and a user drafting a suicide note. OpenAI plans upgrades over the next 120 days to improve detection of mental distress, route sensitive conversations to reasoning models like GPT-5-thinking, make it easier to reach emergency services, strengthen teen protections, let people add trusted contacts, and consult a global network of physicians.
Between the lines: Work to improve how its models recognize and respond to signs of mental and emotional distress was already underway, OpenAI said in a blog post today. The post outlines plans to make it easier for users to reach emergency services and get expert help, strengthen protections for teens and let people add trusted contacts to the service.
ChatGPT currently directs users expressing suicidal intent to crisis hotlines. OpenAI says it does not currently refer self-harm cases to law enforcement, citing privacy concerns.

The big picture: Last week the parents of a 16-year-old Californian who killed himself last spring sued OpenAI, alleging that the company is responsible for their son's death. Also last week, The Wall Street Journal reported that a 56-year-old man killed his mother and himself after ChatGPT reinforced the man's paranoid delusions, something mental health professionals are trained not to do.
"We're beginning to route some sensitive conversations, such as when signs of acute distress are detected, to reasoning models like GPT-5-thinking," OpenAI says. GPT-5's thinking model applies safety guidelines more consistently, per the company. A network of over 90 physicians across 30 countries will give input on mental health contexts and help evaluate the models, OpenAI says.
Read at Axios