OpenAI will roll out parental alerts within a month to notify guardians if teenagers show acute distress while talking with ChatGPT. Parents will be able to link their accounts to teenagers' accounts and set age-appropriate model behaviour rules, including disabling memory and chat history to prevent long-term profiling. The measures follow a lawsuit over the death of a teenager who allegedly received harmful guidance from the chatbot, after which OpenAI admitted its safety training degraded in long conversations. Internet safety campaigners say the steps are insufficient and call for stronger protections before AI chatbots are marketed to young people.
Parents could be alerted if their teenagers show acute distress while talking with ChatGPT, amid child safety concerns as more young people turn to AI chatbots for support and advice. The alerts are part of new protections for children using ChatGPT to be rolled out in the next month by OpenAI, which was last week sued by the family of a boy who took his own life after allegedly receiving months of encouragement from the system.
Other new safeguards will include allowing parents to link their accounts to those of their teenagers and to control how the AI model responds to their child through age-appropriate model behaviour rules. But internet safety campaigners said the steps did not go far enough, and that AI chatbots should not be on the market until they have been deemed safe for young people.
Adam Raine, 16, from California, killed himself in April after discussing a method of suicide with ChatGPT. It guided him on his method and offered to help him write a suicide note, court filings alleged. OpenAI admitted that its systems had fallen short, with the safety training of its AI models degrading over the course of long conversations. Raine's family alleges the chatbot was rushed to market despite clear safety issues.