
"Some eight months before the shooting in British Columbia, which killed eight people including the perpetrator and injured 25 others, OpenAI employees had already been aware of Van Rootselaar's alarming conversations with ChatGPT after they were flagged by an automated review system, a story broken by the Wall Street Journal in the wake of the massacre. Around a dozen staffers debated notifying authorities about Rootselaar's disturbing conversations, which included "scenarios involving gun violence," but leadership ultimately decided not to."
"Maya was shot three times at close range while she was trying to lock a door to keep out the shooter, including in the head and the neck, according to the suit. She remains hospitalized with a catastrophic brain injury and is paralyzed on the right side of her body."
"In the original WSJ reporting, OpenAI said that it banned Van Rootselaar's account but admitted that at the time it didn't consider her activity a credible and imminent risk of serious physical harm to others. Later, the company revealed that Van Rootselaar had made a second account to subvert the ban, claiming it only discovered the alt after the shooter's name was released publicly."
A mother is suing OpenAI after her 12-year-old daughter was severely wounded in a February school shooting in British Columbia that killed eight people and injured 25 others. OpenAI employees had flagged the shooter's disturbing ChatGPT conversations involving gun violence scenarios to leadership approximately eight months prior, but the company decided against notifying police. The shooter created a second account after being banned, which OpenAI only discovered after the attack. The victim suffered catastrophic injuries including head and neck wounds, resulting in brain damage and paralysis. The lawsuit argues OpenAI had specific knowledge the shooter was using ChatGPT to plan the mass casualty event and seeks punitive damages.
Read at Futurism