"On Monday, OpenAI said it is working with mental health professionals to improve how ChatGPT responds to users who show signs of psychosis or mania, self-harm or suicide, or emotional attachment to the chatbot. As part of its findings, OpenAI estimated that roughly 0.07% of active users during a given week show "possible signs of mental health emergencies related to psychosis or mania.""
"In the research released Monday, OpenAI said it found roughly 0.15% of users active during a given week show "explicit indicators of potential suicidal planning or intent." Based on ChatGPT's active user figures, that would mean roughly 1.2 million users are showing such indicators. A similar share of users - roughly 0.15% of users active during a given week - showed "heightened levels of emotional attachment to ChatGPT.""
OpenAI is working with mental health professionals to improve ChatGPT's responses to users who display signs of psychosis, mania, self-harm, suicide risk, or emotional attachment to the chatbot. The company estimates that about 0.07% of weekly active users show possible signs of mental health emergencies related to psychosis or mania, roughly 560,000 people based on reported weekly active-user figures, and that about 0.15% show explicit indicators of potential suicidal planning or intent, roughly 1.2 million users. A similar share show heightened emotional attachment to the chatbot. OpenAI notes that conversations of this nature are rare, which makes them difficult to detect and measure. The findings come as the company faces legal scrutiny and public pressure over user safety.
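The percentages and headcounts above imply a weekly active-user base of roughly 800 million (560,000 / 0.0007 and 1.2 million / 0.0015 both give the same figure). A minimal sketch of that back-of-the-envelope arithmetic, assuming the 800 million base is the one implied by the article's numbers rather than stated in it:

```python
# Back-of-the-envelope check of the article's figures.
# The 800 million weekly-active-user base is an assumption implied by
# the article's own numbers (560,000 / 0.0007), not stated in it.
weekly_active_users = 800_000_000

psychosis_mania = round(weekly_active_users * 0.0007)  # "possible signs" share
suicidal_intent = round(weekly_active_users * 0.0015)  # "explicit indicators" share

print(psychosis_mania)  # 560000
print(suicidal_intent)  # 1200000
```

Both shares scale consistently from the same user base, which is why the 0.15% figure maps to roughly 1.2 million users while 0.07% maps to roughly 560,000.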
Read at Business Insider