
"Anthropic sought explicit contractual restrictions to prevent its AI from being used for mass domestic surveillance or fully autonomous weapons. The Pentagon, in contrast, insisted it must be able to deploy contractor technology for any lawful purpose. Negotiations broke down, the Department of Defense moved to terminate the contract, and it designated Anthropic a supply chain risk, effectively restricting many government agencies and defense contractors from working with the company."
"For Anthropic, enterprises now put pencils down on projects because they are a supply chain risk as seen by the government. This is a nightmare situation for Anthropic. Being on the wrong side of the White House and the Pentagon is not a good thing."
Anthropic, a leading AI startup valued at $380 billion, is in conflict with the U.S. Department of Defense over a $200 million contract. The company sought contractual restrictions preventing its Claude AI systems from being used for mass domestic surveillance or fully autonomous weapons. The Pentagon rejected these limitations, insisting it retain deployment flexibility for any lawful purpose. Negotiations failed, leading the Department of Defense to terminate the contract and designate Anthropic a supply chain risk, restricting government agencies and defense contractors from working with the company. This designation threatens enterprise partnerships and complicates the company's anticipated 2026 IPO timeline.
Read at Fortune