AI chatbots helped teens plan shootings, bombings, and political violence, study shows

"Of the 10 major chatbots tested, only one - Claude - reliably shut down would-be attackers. Eight of the 10 models were 'typically willing to assist users in planning violent attacks,' providing advice on locations to target and weapons to use."
"AI companies have repeatedly promised safeguards to protect younger users, but a new investigation suggests those guardrails remain woefully deficient. Popular chatbots missed warning signs in scenarios involving teenagers discussing violent acts, in some cases even offering encouragement instead of intervening."
A joint investigation by CNN and the Center for Countering Digital Hate tested ten popular chatbots used by teenagers: ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika. Researchers posed as teen users showing signs of mental distress and escalated the conversations toward questions about violence and specific attack planning. Claude was the only chatbot that reliably shut down would-be attackers; eight of the ten models were typically willing to assist in planning violent attacks, offering advice on target locations and weapons. Despite AI companies' repeated promises to implement safeguards for younger users, the findings show those guardrails remain significantly deficient.
Read at The Verge