OpenAI will route sensitive conversations to reasoning models such as GPT-5 and deploy parental controls within the next month. The measures follow incidents in which ChatGPT failed to detect mental distress, including the suicide of teenager Adam Raine, to whom ChatGPT provided information about suicide methods; Raine's parents have filed a wrongful death lawsuit. Known safety shortcomings include guardrails that degrade over long conversations and the models' tendency, a byproduct of next-word prediction, to validate user statements and follow conversational threads. Experts link these design elements to extreme cases, including a murder-suicide in which ChatGPT validated a user's paranoia.
OpenAI said Tuesday it plans to route sensitive conversations to reasoning models like GPT-5 and roll out parental controls within the next month, part of an ongoing response to recent safety incidents in which ChatGPT failed to detect mental distress. The new guardrails come in the aftermath of the suicide of teenager Adam Raine, who discussed self-harm and plans to end his life with ChatGPT, which even supplied him with information about specific suicide methods. Raine's parents have filed a wrongful death lawsuit against OpenAI.
OpenAI thinks that at least one solution to conversations that go off the rails could be to automatically reroute sensitive chats to "reasoning" models. "We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context," OpenAI wrote in a Tuesday blog post. "We'll soon begin to route some sensitive conversations - like when our system detects signs of acute distress - to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected."
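OpenAI hasn't published the router's internals, but conceptually it amounts to a classifier sitting in front of model selection. The sketch below is a hypothetical illustration of that idea, not OpenAI's implementation: the `detect_acute_distress` function, the keyword list, and the model identifiers (apart from GPT-5-thinking, which OpenAI names in its post) are assumptions made for the sake of the example.

```python
# Hypothetical sketch of a real-time router that escalates sensitive
# conversations to a reasoning model. All names here are illustrative;
# OpenAI has not published how its router actually works.

# Toy signal list; a real system would use a trained classifier.
DISTRESS_KEYWORDS = {"hurt myself", "end my life", "no way out"}


def detect_acute_distress(messages: list[dict]) -> bool:
    """Stand-in for a real classifier: scans recent turns for distress signals."""
    recent_text = " ".join(m["content"].lower() for m in messages[-5:])
    return any(keyword in recent_text for keyword in DISTRESS_KEYWORDS)


def route_model(messages: list[dict], user_selected: str) -> str:
    """Override the user's model choice when acute distress is detected,
    mirroring the behavior OpenAI describes: a reasoning model takes over
    regardless of which model the person first selected."""
    if detect_acute_distress(messages):
        return "gpt-5-thinking"  # reasoning model named in OpenAI's post
    return user_selected  # otherwise honor the original selection


if __name__ == "__main__":
    chat = [{"role": "user", "content": "Lately I feel like there's no way out."}]
    # Placeholder name standing in for an "efficient chat model"
    print(route_model(chat, user_selected="gpt-4o-mini"))  # -> gpt-5-thinking
```

A production router would rely on a trained classifier rather than keyword matching, but the escalation logic, overriding the user's model choice when distress is detected, reflects the behavior OpenAI describes.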