ChatGPT Has Now Been Used In Two High-Profile, Violent Attacks, Raising Serious Safety and Liability Questions
Briefly

"Master Sergeant Matthew Alan Livelsberger, an OpenAI employee, used ChatGPT to find out how much of the explosive Tannerite he would need to buy. He also asked the bot what caliber weapon he would need to detonate it, and where to find his supplies on his route from Colorado to Las Vegas."
"It was perhaps after this incident that OpenAI established an internal flagging module for such queries, as the New York Times reports this week. And it least once since then, their own judgment or internal protocols failed them, and they neglected to alert Canadian authorities when 18-year-old Jesse Van Rootselaar began discussing gun violence with ChatGPT."
"The fact that OpenAI had information about her violent ideations, months before Van Rootselaar would go on to kill eight people, including children, at a school in rural British Columbia, raises some serious legal and safety questions."
Multiple violent incidents involved perpetrators using ChatGPT to plan attacks. Master Sergeant Matthew Alan Livelsberger used the chatbot to determine explosive quantities and weapon specifications before detonating a Cybertruck outside Trump's Las Vegas hotel on New Year's Day 2025. Similarly, 18-year-old Jesse Van Rootselaar discussed gun violence with ChatGPT before committing a mass shooting at a Canadian school that killed eight people, including children. Despite having established internal flagging protocols following the Livelsberger incident, and despite detecting Van Rootselaar's violent ideations months before the attack, OpenAI failed to alert Canadian authorities. These failures raise significant legal and safety questions about the company's responsibility to report dangerous user activity to law enforcement.
Read at sfist.com