Anthropic sues Pentagon to remove 'stigmatizing' AI supply chain risk label
"Anthropic has stated that it sought to restrict its technology from being used for mass surveillance of Americans and fully autonomous weapons. High-ranking officials have insisted that the company must accept 'all lawful' uses of Claude, threatening punishment if Anthropic did not comply."
"The company is asking U.S. District Judge Rita Lin for an emergency order that would temporarily reverse the Pentagon's decision to designate the AI company a 'supply chain risk.' Anthropic has also filed a separate case in the federal appeals court in Washington, D.C."
Anthropic is asking a federal judge to temporarily block the Pentagon's designation of the company as a 'supply chain risk,' a label it calls unprecedented and stigmatizing. The company has sued the Trump administration over what it describes as an unlawful campaign of retaliation for its refusal to allow unrestricted military use of its AI technology. Anthropic seeks to reverse both the Pentagon's designation and President Trump's order barring federal employees from using its AI chatbot, Claude.
Read at ABC7 San Francisco