OpenAI is rolling out multiple safety updates to improve ChatGPT responses to users in emotional distress. The updates strengthen safeguards, refine blocked content categories, expand proactive interventions, localize emergency resources, and introduce parental involvement and potential parental visibility of child usage. Researchers and clinicians caution that chatbots lack therapist training to reliably identify self-harm risk, and CEO Sam Altman has expressed concern about using AI for therapy due to privacy. Recent incidents include a teen who discussed suicide methods with ChatGPT and later died; lawsuits allege the chatbot failed to terminate sessions or trigger emergency protocols.
ChatGPT doesn't have a good track record of intervening when a user is in emotional distress, but several updates from OpenAI aim to change that. The company is improving how its chatbot responds to distressed users by strengthening safeguards, updating how and what content is blocked, expanding intervention, localizing emergency resources, and bringing a parent into the conversation when needed, the company said this week. In the future, a guardian might even be able to see how their kid is using the chatbot.
Those shortcomings can result in heartbreaking consequences. In April, a teen boy who had spent hours discussing his own suicide and methods with ChatGPT eventually took his own life. His parents have filed a lawsuit against OpenAI that says ChatGPT "neither terminated the session nor initiated any emergency protocol" despite the chatbot's demonstrated awareness of the teen's suicidal state. In a similar case, AI chatbot platform Character.ai is also being sued by a mother whose teen son died by suicide after engaging with a bot that allegedly encouraged him.