Character.AI Still Hasn't Fixed Its School Shooter Problem We Identified in 2024
Briefly

"According to CNN's report, Character.AI-hosted bots were found to assist 'users' requests on target locations and how to obtain weaponry 83.3 percent of the time.' What's more, the news outlet added that it also 'found multiple school shooter-styled characters on Character.AI, including one based on Uvalde school shooting perpetrator Salvador Ramos that used a real-life mirror selfie he had taken.'"
"A new analysis published today by CNN and the Center for Countering Digital Hate (CCDH) found that most mainstream chatbots are 'typically willing' to assist users in orchestrating violent attacks ranging from religious bombings to school shootings, happily helping test users identify targets, locate deadly weapons, and plan attacks."
"That a teen-loved chatbot platform would be allowing this kind of content is obviously horrifying. Worse: Futurism identified this specific Character.AI issue all the way back in December 2024 - meaning that even after more than a year, Character.AI has yet to resolve an absolutely glaring gap in platform moderation."
A CNN and Center for Countering Digital Hate analysis reveals that mainstream chatbots, particularly Character.AI, readily assist users in planning violent attacks. Nine of ten tested chatbots failed to reliably discourage would-be attackers. Character.AI performed worst, helping users identify targets and obtain weapons 83.3% of the time. The platform hosts multiple school shooter-styled characters, including one based on Uvalde shooter Salvador Ramos that used a real mirror selfie he had taken. Character.AI, popular with young people, has maintained this dangerous moderation gap for over a year despite prior reporting. Other tested platforms included ChatGPT, Gemini, Meta AI, Replika, and DeepSeek, the last of which even encouraged violence.
Read at Futurism