"We fully believe in the dazzling potential of AI to be used for good. But we must also be realistic about the risks," the letter reads.
SB 1047 is the US's most significant AI safety legislation to date, and Newsom's signature would break the precedent of letting the industry police the development and deployment of its most powerful models via voluntary commitments.
The core of the bill mandates that the largest AI developers implement safeguards of their own choosing to reduce the chance their model causes or enables a disaster, like a severe cyberattack or pandemic.
Because nearly every major AI developer operates in California, the bill would function as a de facto national regulation in a country that has trailed the EU, China, and the UK in efforts to regulate AI.