
"OpenAI is not responsible for the crime, but it's hard not to wonder whether a report to the police might have led to a different outcome. Before we point fingers at the company, however, two facts are worth noting. First, the shooter was well known to police, and second, officers had made several mental health calls to her home, and she was involuntarily hospitalized more than once. Police also seized firearms from the household, only to have the lawful owner (not the shooter) petition to have them returned."
"In other words, professionals closest to the situation already knew a great deal about the risks involved. OpenAI, by contrast, knew only what appeared in conversations with a chatbot. Yet the debate that followed has focused on whether the government should compel artificial intelligence companies to report threats of violence to police. Some argue that leaving these decisions to for-profit companies is too risky. If an AI provider believes someone may be about to harm others, the argument goes, it should be required to alert authorities."
"Others caution that such a duty could lead companies to overreport ambiguous signals in order to avoid liability, which, in turn, would erode privacy and trust in these tools. Both concerns are valid. And it is tempting to think we have faced similar trade-offs elsewhere in the law. When doctors suspect a child is being abused, when banks detect suspicious financial transactions, or when internet service providers discover that their systems are being used to host child pornography, Canadian law requires them to report it."
"Why not apply the same approach here? The answer is that AI companies are not in a similar position. Detecting violent intent in chatbot conversations is different from…"
OpenAI had flagged the account of Jesse Van Rootselaar months before the attack, a revelation that surprised the public. Eight people were killed, including six children. OpenAI was not responsible for the crime, but the possibility of earlier police action raised questions. Police already knew the shooter, had made multiple mental health calls to her home, and had her involuntarily hospitalized more than once. Firearms were seized from the household, and the lawful owner later petitioned to have them returned. The professionals closest to the situation already had substantial information about the risks, while OpenAI had only what appeared in chatbot conversations. The debate now centres on whether governments should require AI companies to report threats, balancing public safety against overreporting, privacy, and trust.
Read at The Walrus