On Tuesday, OpenAI announced plans to roll out parental controls for ChatGPT and route sensitive mental health conversations to its simulated reasoning models, following what the company has called "heartbreaking cases" of users experiencing crises while using the AI assistant. The moves come after multiple reported incidents where ChatGPT allegedly failed to intervene appropriately when users expressed suicidal thoughts or experienced mental health episodes.
The planned parental controls represent OpenAI's most concrete response yet to concerns about teen safety on the platform. Within the next month, OpenAI says, parents will be able to link their accounts with their teens' ChatGPT accounts (minimum age 13) through email invitations, control how the AI model responds with age-appropriate behavior rules that are on by default, manage which features to disable (including memory and chat history), and receive notifications when the system detects their teen experiencing acute distress.