ChatGPT adds parental controls, but teens must agree

"OpenAI says it is introducing parental controls to ChatGPT that will help improve the safety of teenagers using its AI chatbot. The new protections are designed to help ChatGPT identify when a teenager chatting with it might be thinking about harming themselves or otherwise be in distress. OpenAI is adding the features after facing criticism and a high-profile lawsuit alleging its chatbot contributed to a teenager's death."
""If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone," the company said in a blog post introducing the controls. OpenAI hasn't gone into details about "the system" it's using to detect identifiers of potential harm in inputs made by teenage users - but the company promises that flagged interactions with ChatGPT will be reviewed by what it describes as "a small team of specially trained people" who will decide if an alert is sent to parents."
"Worried parents can choose to "customize" - as OpenAI describes it - their teen's chatbot usage by setting specific times of day ChatGPT can't be used, turning off ChatGPT's voice mode or turning off the LLM's memory, so it won't draw upon previous interactions when responding. Parents can also remove image generation, preventing kids from using ChatGPT to create pictures. Additionally, OpenAI is introducing the ability to opt teen accounts out of model training, so their conversations aren't used to train ChatGPT."
In short: OpenAI is introducing parental controls for ChatGPT aimed at improving teen safety by detecting signs of self-harm or distress and alerting parents. Flagged interactions will be reviewed by a small team of specially trained people, who will decide whether to notify parents via email, text message, or push alert. The controls let parents schedule when ChatGPT can be used, disable voice mode and memory, turn off image generation, and opt teen accounts out of model training so conversations aren't used to train OpenAI's models. The measures aim to reduce teens' exposure to harm, though they also raise privacy and oversight questions.
Read at The Register