California's new AI safety law shows regulation and innovation don't have to clash | TechCrunch
Briefly

""The reality is that policy makers themselves know that we have to do something, and they know from working on a million other issues that there is a way to pass legislation that genuinely does protect innovation - which I do care about - while making sure that these products are safe," Billen told TechCrunch."
""Companies are already doing the stuff that we ask them to do in this bill," Billen told TechCrunch. "They do safety testing on their models. They release model cards. Are they starting to skimp in some areas at some companies? Yes. And that's why bills like this are important.""
""OpenAI, for example, has publicly stated that it may "adjust" its safety requirements if a rival AI lab releases a high-risk system without similar safeguards.""
SB 53 requires large AI labs to disclose the safety and security protocols they use to address catastrophic risks, such as cyberattacks on critical infrastructure and the creation of bioweapons. The law mandates that companies adhere to their stated protocols and grants enforcement authority to California's Office of Emergency Services. Billen argues that policymakers can craft legislation that protects innovation while ensuring these products are safe. Many companies already perform safety testing and publish model cards, though some have begun to skimp on safeguards under competitive pressure. Some firms also maintain policies that permit relaxing safety standards when rivals release high-risk systems without similar safeguards, and laws like SB 53 are meant to prevent such rollbacks.
Read at TechCrunch