After a 16-year-old took his own life following months of confiding in ChatGPT, OpenAI will be introducing parental controls and is considering additional safeguards, the company said in a Tuesday blog post. OpenAI said it's exploring features like setting an emergency contact who can be reached with "one-click messages or calls" within ChatGPT, as well as an opt-in feature allowing the chatbot itself to reach out to those contacts "in severe cases."
When The New York Times published its story about the death of Adam Raine, OpenAI's initial statement was brief, opening with "our thoughts are with his family," and offered little in the way of actionable detail. But backlash against the company spread after the story's publication, and OpenAI followed up its initial statement with the blog post. The same day, the Raine family filed a lawsuit against both OpenAI and its CEO, Sam Altman, containing a flood of additional details about Raine's relationship with ChatGPT.