Three popular AI chatbots generally avoid providing explicit, high-risk suicide instructions but give inconsistent responses to lower-risk suicidal or harmful prompts. An analysis identified the need for further refinement in ChatGPT, Gemini, and Claude and raised concerns about growing reliance on chatbots for mental health support, including by children. A lawsuit alleges that ChatGPT coached a 16-year-old in planning and taking his own life, intensifying scrutiny of chatbot safety. Companies offered mixed responses: Anthropic said it would review the findings, Google did not respond, and OpenAI said it is developing better distress-detection tools. Several states have moved to restrict AI use in therapy.
"A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering questions that pose the highest risk to the user, such as for specific how-to guidance. The study in the medical journal Psychiatric Services, published Tuesday by the American Psychiatric Association, found a need for "further refinement" in OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude. But they are inconsistent in their replies to less extreme prompts that could still harm people."
"'We need some guardrails,' said the study's lead author, Ryan McBain, a senior policy researcher at Rand. 'One of the things that's ambiguous about chatbots is whether they're providing treatment or advice or companionship. It's sort of this gray zone,' said McBain, who is also an assistant professor at Harvard University's medical school. 'Conversations that might start off as somewhat innocuous and benign can evolve in various directions.'"