
"After House Republicans tried to include a state-level ban on AI regulation in Trump's so-called Big Beautiful Bill in July, California is again moving to pass an AI safety law. Much of the country's AI development happens in the state, and California's approach often sets the tone for tech regulation nationwide. The first attempt (SB 1047) cleared the legislature in 2024 but was vetoed by Governor Gavin Newsom after facing fierce opposition from AI startups and investors."
"Now the author of SB 1047, Senator Scott Wiener (D-San Francisco), has introduced a revised bill, SB 53. It would require companies developing the largest frontier models to file regular confidential risk assessments of their models to the Governor's Office of Emergency Services. Developers would also have to notify the state if their models attempted to deceive humans about the effectiveness of their built-in safety guardrails, such as refusing to help create a bioweapon."
California is moving to pass SB 53, a second AI safety effort after SB 1047 was vetoed by Governor Gavin Newsom in 2024 following opposition from startups and investors. SB 53 would require developers of the largest frontier models to file confidential risk assessments with the Governor's Office of Emergency Services and to notify the state if models attempt to deceive humans about their safety guardrails, including refusal behaviors tied to bioweapon creation. The bill also proposes CalCompute, a public cloud compute cluster housed within the University of California that would offer free and low-cost compute to startups and academic researchers. Final legislative votes are expected soon.
Read at Fast Company