California leads in U.S. efforts to rein in AI | Semafor
Briefly

And that's what this is about. Self-regulation doesn't always work, and having clear, consistent safety standards for all labs developing these incredibly large, powerful models is a good thing.
What we're trying to do is not to supplant or replace any of that, but to say this is what's expected of you in terms of doing due diligence to evaluate the safety of your model and mitigate any safety risks that you detect.