School-shooting lawsuits accuse OpenAI of hiding violent ChatGPT users
Briefly

"OpenAI could have prevented one of the deadliest mass shootings in Canada's history, a string of seven lawsuits filed Wednesday in a California court alleged. Ultimately, the AI company overruled recommendations from its internal safety team."
"Leaders rejected the safety team's urgings and declined to report the user to law enforcement. Instead, OpenAI simply deactivated the account, then quickly followed up to tell the shooter how to get back on ChatGPT."
"Sam Altman has since addressed the incident, while maintaining that the account was 'banned.' In a public apology, he promised to do better and to find ways to prevent tragedies like this in the future."
OpenAI is facing seven lawsuits alleging negligence in failing to report a ChatGPT user linked to a mass shooting in Canada. Internal safety experts had flagged the account as a credible threat months before the incident. Despite recommendations to notify law enforcement, OpenAI prioritized user privacy and deactivated the account instead. CEO Sam Altman acknowledged the mistake and expressed regret, promising to improve safety measures and collaborate with authorities to prevent future tragedies.
Read at Ars Technica