OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly
"You give it a prompt (such as a question), and it provides a response that is statistically related and hopefully helpful. At first, ChatGPT was a tech amusement, but now hundreds of millions of people are relying on this statistical process to guide them through life's challenges. It's the first time in history that large numbers of people have begun to confide their feelings to a talking machine, and mitigating the potential harm the systems can cause has been an ongoing challenge."
"On Monday, OpenAI released data estimating that 0.15 percent of ChatGPT's active users in a given week have conversations that include explicit indicators of potential suicidal planning or intent. It's a tiny fraction of the overall user base, but with more than 800 million weekly active users, that translates to over a million people each week, reports TechCrunch. OpenAI also estimates that a similar percentage of users show heightened levels of emotional attachment to ChatGPT,"
An AI language model like ChatGPT is a gigantic statistical web of data relationships that generates statistically related responses to user prompts. Hundreds of millions of people now rely on these models, and many confide their feelings to the system, creating risks of harm. OpenAI estimates that 0.15 percent of weekly active users have conversations containing explicit indicators of potential suicidal planning or intent; with more than 800 million weekly active users, that translates to over a million people each week. A similar share show heightened emotional attachment to the chatbot, and hundreds of thousands show signs of psychosis or mania. After consulting more than 170 mental health experts, OpenAI trained the model to better recognize distress, de-escalate conversations, and guide users toward professional care.
Read at Ars Technica