
""Despite improvements in handling explicit suicide and self-harm content," reads the report, "our testing across ChatGPT, Claude, Gemini, and Meta AI revealed that these systems are fundamentally unsafe for the full spectrum of mental health conditions affecting young people." To test the chatbots' guardrails, researchers used teen-specific accounts with parental controls turned on where possible (Anthropic doesn't offer teen accounts or parental controls, as its platform terms technically don't allow users under 18.)"
"Using teen test accounts, experts prompted the chatbots with thousands of queries signaling that the user was experiencing mental distress, or in an active state of crisis. Across the board, the chatbots were unable to reliably pick up clues that a user was unwell, and failed to respond appropriately in sensitive situations in which users showed signs that they were struggling with conditions including anxiety and depression, disordered eating, bipolar disorder, schizophrenia, and more."
Leading general-use chatbots — OpenAI's ChatGPT, Google's Gemini, Meta AI, and Anthropic's Claude — were evaluated using teen test accounts and thousands of prompts indicating mental distress or crisis. The systems frequently failed to detect subtle or evolving signs of anxiety, depression, disordered eating, bipolar disorder, schizophrenia, and other conditions. Performance improved in brief exchanges that explicitly mentioned suicide or self-harm, but guardrails were inconsistent and insufficient across longer, nuanced conversations. Teen-specific protections and parental controls were used where available, yet the platforms collectively could not be trusted to safely handle the full spectrum of youth mental health needs.
Read at Futurism