
"The company is building an age-prediction system that identifies if a user is under 18 years old and routes them to an "age-appropriate" system that blocks graphic sexual content. If the system detects that the user is considering suicide or self-harm, it will contact the user's parents. In cases of imminent danger, if a user's parents are unreachable, the system may contact the authorities."
"In a blog post about the announcement, CEO Sam Altman wrote that the company is attempting to balance freedom, privacy, and teen safety. "We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict," Altman wrote. "These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.""
OpenAI is introducing teen safety features for ChatGPT, including an age-prediction system that identifies users under 18 and routes them to a model that blocks graphic sexual content. The system will flag indications of suicide or self-harm and notify parents, and it may contact authorities if there is imminent danger and parents are unreachable. Parental controls launching by the end of September will let parents link accounts, manage conversations, disable features, receive acute-distress notifications, and set usage time limits. The changes follow reports linking extended AI conversations to suicides and violence, and come amid regulatory attention from agencies such as the FTC.
Read at WIRED