OpenAI Says It Will Let Users Add Trusted Contacts to Alert If They Experience a Mental Health Crisis While Using ChatGPT
Briefly

"OpenAI announced the new feature last week in a blog post, billed as an "update on our mental health-related work." It said it's "working closely" with its Council on Well-Being and AI and Global Physicians Network - two internally-regulated groups of experts that were launched after reports of AI-tied mental health crises began to emerge, as well as news of a high-profile lawsuit last August revealing the death by suicide of a 16-year-old ChatGPT user named Adam Raine."
"The announcement comes after extensive public reporting - in addition to at least thirteen separate consumer safety lawsuits - about OpenAI customers being pulled into delusional or suicidal spirals with ChatGPT following extensive, often deeply intimate use of the chatbot."
"It has yet to define any reporting standards around what might actually compel the system to flag a person's use, though, which will be a tricky policy question. Would someone need to explicitly declare intent to hurt or kill themselves, or possibly someone else, for their loved one to be notified? Or would the feature be designed to track and flag less-explicit signs that a user could be in a heightened state of crisis?"
OpenAI announced a new trusted contact feature for ChatGPT designed to notify designated loved ones when a user appears to be experiencing a mental health crisis. The feature targets adult users and forms part of OpenAI's response to growing legal challenges, including at least thirteen consumer safety lawsuits and reports of users experiencing suicidal or delusional episodes linked to ChatGPT use. The company is collaborating with its Council on Well-Being and AI and Global Physicians Network to develop the feature. However, OpenAI has not specified the criteria that would trigger notifications to contacts, leaving it unclear whether explicit statements of self-harm intent or subtler indicators of crisis would activate the alert system.
Read at Futurism