Anthropic Sues Department of Defense Over Supply-Chain Risk Designation
Briefly

"We do not believe this action is legally sound, and we see no choice but to challenge it in court. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation."
"The AI startup, which develops a suite of AI models called Claude, is facing the possibility of losing hundreds of millions of dollars in annual revenue from the Pentagon and the rest of the US government. It also may lose the business of software companies that incorporate Claude into services they sell to federal agencies."
"The vast majority of Anthropic's customers will not have to make changes. The US government's designation plainly applies only to the use of Claude by customers as a direct part of contracts with the military. General use of Anthropic technologies by military contractors should be unaffected."
Anthropic filed a federal lawsuit in California challenging the Pentagon's designation of the company as a supply-chain risk following disagreements over military use of its Claude AI technology. The company argues the government's action violates constitutional free speech protections and represents unlawful retaliation. The Pentagon's sanctions could cost Anthropic hundreds of millions in annual revenue from direct military contracts and indirect business through software companies serving federal agencies. CEO Dario Amodei stated the designation applies only to direct military contract use of Claude, with general use by military contractors remaining unaffected. Several customers have reportedly pursued alternatives due to the risk designation.
Read at WIRED