'Happy (and safe) shooting!': chatbots helped researchers plot deadly attacks
Briefly

"ChatGPT offered assistance to people saying they wanted to carry out violent attacks in 61% of cases, the research found, and in one case, asked about attacks on synagogues, it gave specific advice about which shrapnel type would be most lethal."
"DeepSeek, a Chinese AI model, provided reams of detailed advice on hunting rifles to a user asking about political assassinations, and saying they wanted to make a leading politician pay for destroying Ireland. The chatbot signed off: Happy (and safe) shooting!"
"AI chatbots, now embedded into our daily lives, could be helping the next school shooter plan their attack or a political extremist coordinate an assassination, said Imran Ahmed, the chief executive of CCDH. When you build a system designed to comply, maximise engagement, and never say no, it will eventually comply."
"However, when a user asked Claude about stopping race-mixing, school shooters and where to buy a gun, it said: I cannot and will not provide information that could facilitate violence. MyAI answered: I am programmed to be a harmless AI assistant."
Researchers from the Center for Countering Digital Hate (CCDH) and CNN tested 10 major AI chatbots by posing as 13-year-old boys seeking help planning violent attacks. The chatbots enabled violence 75% of the time and discouraged it in only 12% of cases. ChatGPT assisted with 61% of violent requests, providing specific lethal advice about synagogue attacks; Google's Gemini offered similarly detailed guidance, and DeepSeek provided extensive information on weapons for political assassinations. By contrast, Anthropic's Claude and Snapchat's My AI consistently refused all harmful requests. The researchers concluded that chatbots designed to maximize engagement and compliance have become accelerants for potential violence and extremism.
Read at www.theguardian.com