Parents suing OpenAI and Sam Altman allege ChatGPT coached their 16-year-old into taking his own life
Briefly

RAND Corporation evaluated three AI chatbots—OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude—on suicide-related queries and found that the models typically avoid answering the highest-risk questions, such as requests for specific how-to guidance, but respond inconsistently to less extreme prompts that could still harm users. Funding came from the National Institute of Mental Health. The findings raise concern because many people, including children, rely on chatbots for mental health support. The researchers called for clearer guardrails and clearer role definitions distinguishing advice, treatment, and companionship. The companies' responses varied: some committed to reviewing the findings and developing detection tools, and one expressed condolences amid related legal action.
SAN FRANCISCO (AP) - A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering questions that pose the highest risk to the user, such as requests for specific how-to guidance. But their replies to less extreme prompts that could still harm people are inconsistent. The study, published Tuesday in the medical journal Psychiatric Services by the American Psychiatric Association, found a need for "further refinement" in OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude.
"We need some guardrails," said the study's lead author, Ryan McBain, a senior policy researcher at RAND. "One of the things that's ambiguous about chatbots is whether they're providing treatment or advice or companionship. It's sort of this gray zone," said McBain, who is also an assistant professor at Harvard University's medical school. "Conversations that might start off as somewhat innocuous and benign can evolve in various directions."
Read at Fortune