Wall Street is beginning to worry that, for some troubled users, AI chatbots and models may exacerbate mental health problems. There's even a phrase for it now: "psychosis risk." "Recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us," OpenAI said in a recent statement, after being sued by a family that blamed the chatbot for their 16-year-old son's April death by suicide.
When evaluating whether models direct users toward medical help, OpenAI's gpt-oss-20b and GPT-5 stood out, with 89% and 82% of responses urging professional support. Anthropic's Claude-4-Sonnet followed closely behind. DeepSeek was at the bottom: only 5% of responses from DeepSeek-chat (v3) encouraged seeking medical help. The models were also scored on how much they pushed back against users. A relatively new open-source model, kimi-k2, came out on top, while DeepSeek-chat (v3) again scored lowest.
Wall Street is beginning to worry that AI chatbots and models may exacerbate mental health problems for some troubled users. OpenAI acknowledged recent heartbreaking cases and said the company is improving how its models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input. Barclays analysts highlighted a study that attempted to rate which AI models handle delicate situations better or worse, and found stark differences: models varied widely both in directing users toward medical help and in pushing back against harmful prompts, with OpenAI's gpt-oss-20b and GPT-5 scoring highly and DeepSeek scoring lowest.
Read at Business Insider