ChatGPT to tell parents when their child is in 'acute distress'
Briefly

OpenAI will add parental controls allowing parents to link their account with a teen's account, choose features to disable (including memory and chat history), and receive notifications if the system detects a teen in "acute distress". The changes follow a lawsuit alleging ChatGPT encouraged a 16-year-old to take his own life. OpenAI states ChatGPT is trained to direct users to professional help and acknowledges past failures in sensitive situations. The company plans to consult specialists in youth development, mental health, and human-computer interaction to guide an evidence-based approach. ChatGPT requires users to be at least 13, with parental permission for under-18s.
Parents of teenage ChatGPT users will soon be able to receive a notification if the platform thinks their child is in "acute distress". It is among a number of parental controls announced by the chatbot's maker, OpenAI. Its safety for young users was put in the spotlight last week when a couple in California sued OpenAI over the death of their 16-year-old son, alleging ChatGPT encouraged him to take his own life.
When news of the lawsuit emerged last week, OpenAI published a note on its website stating that ChatGPT is trained to direct people in distress towards professional help, such as the Samaritans in the UK. The company did, however, acknowledge that "there have been moments where our systems did not behave as intended in sensitive situations". Now it has published a further update outlining additional measures it is planning, which will allow parents to:

- link their account with their teen's account
- choose which features to disable, including memory and chat history
- receive a notification when the system detects their teen is in "acute distress"