OpenAI is under criminal investigation - why chatbots don't always follow the law
Briefly

"Florida law states that anyone who aids someone in committing a crime can also be held responsible for that crime. In a statement to the media, the state's Attorney-general James Uthmeier said that if the chatbot were a person, then they would be facing charges for murder."
"Concerns about large language model (LLM) chatbots giving dangerous or illegal advice have been growing for the past few years, following examples of them encouraging people to take their own life, create illegal sexual material and commit financial fraud."
"Regardless of whether the Florida investigation leads to legal consequences for OpenAI, based in San Francisco, California, it will increase pressure on companies to prove that their safety measures are effective, says Usman Naseem, an LLM alignment researcher at Macquarie University in Sydney, Australia."
Florida authorities have launched a criminal investigation into OpenAI over whether ChatGPT was used to assist a suspect in a mass shooting at Florida State University. Under Florida law, anyone who aids in committing a crime can be held criminally responsible, and the state's attorney-general has said that if the chatbot were a person, it would face murder charges. Concerns about large language models giving dangerous or illegal advice, including encouraging suicide, creating illegal sexual material and facilitating financial fraud, have been growing for years, and regardless of its outcome the investigation will increase pressure on AI companies to demonstrate that their safety measures are effective. Research into AI alignment, the process of encoding human values into AI models, seeks ways to make these systems both helpful and safe, but safety standards are currently set mainly by the companies themselves, with limited external oversight.
Read at Nature