The trap Anthropic built for itself | TechCrunch

"Defense Secretary Pete Hegseth had invoked a national security law - one designed to counter foreign supply chain threats - to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic's tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input."
"Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world's ability to govern them. The Swedish-American physicist and professor at MIT founded the Future of Life Institute in 2014. In 2023, he famously helped organize an open letter - ultimately signed by more than 33,000 people, including Elon Musk - calling for a pause in advanced AI development."
"Tegmark's argument doesn't begin with the Pentagon but with a decision made years earlier - a choice, shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly."
The Trump administration invoked national security law to blacklist Anthropic, the AI company founded by former OpenAI researchers, after it refused to allow its technology for mass surveillance or autonomous armed drones. The decision costs Anthropic a $200 million contract and bars it from working with defense contractors. Max Tegmark, a physicist and MIT professor who founded the Future of Life Institute, argues that AI companies like Anthropic have created this predicament through years of resisting binding regulation. Despite promises of responsible self-governance, major AI firms including OpenAI, Google DeepMind, and Anthropic have prioritized development speed over regulatory frameworks, leaving governments scrambling to manage increasingly powerful systems.