
"Chatbot users are increasingly confiding in tools like ChatGPT for sensitive or personal matters, including mental health issues. The consequences can be devastating: A teen boy took his own life after spending hours chatting with ChatGPT about suicide. Amidst a resulting lawsuit and other mounting pressures, OpenAI has reevaluated its chatbot's safeguards with changes that aim to protect teens, but will impact all users."
"In a posted on Tuesday, OpenAI CEO Sam Altman explored the complexity of implementing a teen safety framework while adhering to his company's principles, which conflict when weighing freedom, privacy, and teen safety as individual goals. The result is a new age prediction model that attempts to verify the age of the user to ensure teens have a safer experience -- even if it means adult users may have to prove they're over 18, too."
"For teens specifically, OpenAI appears to be prioritizing safety over privacy and freedom with added measures. ChatGPT is intended for users 13 years or older; The first step of verification is differentiating between users who are 13 through 18 years old. OpenAI is building an age-prediction system that estimates a user's age based on how they use ChatGPT. When in doubt, ChatGPT will boot users down to the under-18 experience, and in some cases and countries, will ask for an ID."
Chatbot use increasingly includes sensitive, personal, and mental health conversations, prompting a safety reevaluation after a teen suicide and related lawsuit. OpenAI plans to prioritize teen safety over privacy and freedom by building an age-prediction system that estimates user age from usage patterns and differentiates 13–18-year-olds. ChatGPT will default to an under-18 experience when age is uncertain and may request ID in some countries. Adults may need to verify they are over 18. The changes aim to protect teens but will affect all users, a privacy trade-off Altman argues is justified.
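The decision logic described is deliberately conservative: estimate a user's age from behavior, and fall back to the restricted experience whenever confidence is low, with ID verification as the only way back. A minimal sketch of that fallback policy follows; every name and threshold here is hypothetical, and nothing reflects OpenAI's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical output of an age-prediction model: an estimated age
# plus the model's confidence in that estimate. Names and the
# threshold below are illustrative, not OpenAI's actual system.
@dataclass
class AgeEstimate:
    estimated_age: int
    confidence: float  # 0.0 to 1.0

ADULT_AGE = 18
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for trusting the estimate

def select_experience(estimate: AgeEstimate, id_verified_adult: bool = False) -> str:
    """Choose which ChatGPT experience to serve.

    Mirrors the policy the article describes: when in doubt, default
    to the under-18 experience; a verified ID restores the adult one.
    """
    if id_verified_adult:
        return "adult"
    # Low confidence or an under-18 estimate both trigger the safer default.
    if estimate.confidence < CONFIDENCE_THRESHOLD or estimate.estimated_age < ADULT_AGE:
        return "under-18"
    return "adult"

# Example: an ambiguous estimate lands in the restricted experience,
# which the user escapes only by proving their age.
print(select_experience(AgeEstimate(estimated_age=20, confidence=0.6)))  # under-18
```

The notable design choice, per the article, is that uncertainty resolves against the adult experience rather than for it, which is why adults may be asked to verify their age.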
Read at ZDNET