Can one state save us from AI disaster? Inside California's new legislative crackdown

"Originally authored by state Democrat Scott Wiener, the law requires companies developing frontier AI models to publish information on their websites detailing their plans and policies for responding to "catastrophic risk," and to notify state authorities about any "critical safety incident" within fifteen days. Fines for failing to meet these terms can reach up to $1 million per violation. The new law also provides whistleblower protections to employees of companies developing AI models."
"California's new law underscores -- and aims to mitigate -- some of the fears that have been weighing on the minds of AI safety experts as the technology quickly proliferates and evolves. "Unless they are developed with careful diligence and reasonable precaution, there is concern that advanced artificial intelligence systems could have capabilities that pose catastrophic risks from both malicious uses and malfunctions, including artificial intelligence-enabled hacking, biological attacks, and loss of control," wrote the authors of the new law."
California's AI safety law takes effect Jan. 1 and requires companies developing frontier AI models to publish online their plans and policies for addressing catastrophic risk, and to notify state authorities of any critical safety incident within fifteen days. The law imposes fines of up to $1 million per violation and adds whistleblower protections for employees of AI companies. Catastrophic risk is defined to cover scenarios that kill or injure more than 50 people or cause over $1 billion in material damages, including enabling the development of chemical, biological, or nuclear weapons. The law aims to increase transparency and accountability as AI capabilities rapidly evolve.
Read at ZDNET