
"The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry. If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond."
"If the Pentagon was no longer satisfied with the agreed-upon terms of its contract with Anthropic, the agency could have simply canceled the contract and purchased the services of another leading AI company."

"And it will chill open deliberation in our field about the risks and benefits of today's AI systems."
The Pentagon designated Anthropic a supply chain risk—a label typically reserved for foreign adversaries—after the AI company refused to allow the Department of Defense to use its technology for mass surveillance of Americans or autonomous weapons systems. Anthropic filed lawsuits against the DOD and federal agencies in response. More than 30 employees from OpenAI and Google DeepMind, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief supporting Anthropic's position. The brief argues the government's designation was improper and arbitrary, noting the Pentagon could have simply canceled the contract instead. The filing warns that punishing a leading U.S. AI company will harm American competitiveness and discourage open discussion about AI risks and benefits.
#ai-regulation-and-government-oversight #anthropic-lawsuit-against-pentagon #ai-ethics-and-responsible-deployment #supply-chain-risk-designation #autonomous-weapons-and-surveillance-concerns
Read at TechCrunch