California's new AI safety law shows regulation and innovation don't have to clash | TechCrunch
""At its core, SB 53 is a first-in-the-nation bill that requires large AI labs to be transparent about their safety and security protocols - specifically around how they prevent their models from catastrophic risks, like being used to commit cyberattacks on critical infrastructure or build bio-weapons. The law also mandates that companies stick to those protocols, which will be enforced by the Office of Emergency Services.""
""The reality is that policy makers themselves know that we have to do something, and they know from working on a million other issues that there is a way to pass legislation that genuinely does protect innovation - which I do care about - while making sure that these products are safe," Billen told TechCrunch."
""Companies are already doing the stuff that we ask them to do in this bill," Billen told TechCrunch. "They do safety testing on their models. They release model cards. Are they starting to skimp in some areas at some companies? Yes. And that's why bills like this are important.""
SB 53 mandates transparency from large AI labs about their safety and security protocols and requires adherence to those protocols, with enforcement by the Office of Emergency Services. The law targets catastrophic risks such as cyberattacks on critical infrastructure and the development of biological weapons. Industry practices like safety testing and the release of model cards already exist, but some companies may be cutting corners under competitive pressure. The bill aims to codify and enforce existing safety commitments so that firms cannot quietly lower their standards. Public opposition to SB 53 was muted compared with the earlier SB 1047 effort.
Read at TechCrunch