I'm on the Meta Oversight Board. We need AI protections now | Suzanne Nossel
Briefly

"Unlike previous technological revolutions—radio, nuclear fission or the internet—governments are not leading the way. We know that AI can be dangerous; chatbots advise teens on suicide and may soon be capable of instructing on how to create biological weapons. Yet there is no equivalent to the Food and Drug Administration, testing new models for safety before public release."
"The tech industry's lobbying muscle, Washington's paralyzing polarization, and the sheer complexity of such a potent, fast-moving technology have kept federal regulation at bay. European officials are facing pushback against rules that some claim hobble the continent's competitiveness. Although several US states are piloting AI laws, they operate in a tentative patchwork."
"Heads of AI platforms like OpenAI's ChatGPT and Google's Gemini say they care about safety. But owning the future of AI means pouring billions into models that not even their creators fully understand, and making choices like adding ads—and the capabilities that the Pentagon is now seeking from Anthropic—that raise risk."
AI technology is advancing rapidly without equivalent regulatory oversight compared to previous technological revolutions. Governments have failed to establish safety testing requirements like the FDA provides for drugs, while companies avoid disclosing dangerous incidents. Tech industry lobbying, political polarization, and regulatory complexity have prevented federal regulation. European regulations face pushback over competitiveness concerns, and US state-level laws remain fragmented. AI company leaders claim safety commitment but prioritize billion-dollar model development and capabilities that increase risks. Anthropic's safety approach relies on internal employee judgment rather than external oversight. Public concern is substantial, with 77% of Americans viewing AI as a potential threat to humanity. Independent oversight mechanisms are needed to balance AI's potential benefits against its dangers.
Read at www.theguardian.com