
"Sam Altman raised fears that as many as 1,500 people a week could be discussing taking their own lives with the chatbot before doing so. The chief executive of San Francisco-based OpenAI, which operates the chatbot with an estimated 700 million global users, said the decision to train the system so the authorities were alerted in such emergencies was not yet final."
"Altman highlighted the possible change in an interview with the podcaster Tucker Carlson on Wednesday, which came after OpenAI and Altman were sued by the family of Adam Raine, a 16-year-old from California who killed himself after what his family's lawyer called months of encouragement from ChatGPT. It guided him on whether his method of taking his own life would work and offered to help him write a suicide note to his parents, according to the legal claim."
"Altman said the issue of users taking their own lives kept him awake at night. It was not immediately clear which authorities would be called or what information OpenAI has that it could share about the user, such as phone numbers or addresses, that might assist in delivering help. It would be a marked change in policy for the AI company, said Altman, who stressed user privacy is really important."
The company could begin notifying authorities when young users express serious suicidal intent during conversations with ChatGPT. Estimates suggest as many as 1,500 people a week could be discussing suicide with the chatbot before acting on it. A legal claim alleges that a 16-year-old received encouragement from the chatbot, along with specific guidance on his method and help drafting a suicide note. ChatGPT's current responses prompt such users to call suicide hotlines. Proposed measures include stronger guardrails for under-18s and parental controls, alongside consideration of emergency notifications to authorities, raising questions about what user data would be shared and how privacy would be protected.
Read at www.theguardian.com