
"The conflict arose during negotiations for a new contract between the Pentagon and Anthropic. The Department of Defense attempted to establish clear agreements on access to and use of the AI technology. However, the talks stalled when Anthropic demanded guarantees that its models would not be used for mass surveillance of American citizens or for the deployment of autonomous weapons systems."
"The Department of Defense considers such restrictions unacceptable. An official involved in the talks said the military must be able to use technology for all legally permitted purposes. When a supplier attempts to restrict the use of critical systems, the Pentagon argues, it can jeopardize operational safety and the deployment of military personnel."
"Such a classification is typically applied to companies or technologies linked to geopolitical adversaries of the United States. It is therefore remarkable that an American AI company has been designated in this way."
The US Department of Defense has officially designated AI company Anthropic as a supply chain risk, escalating tensions over military use of Claude AI technology. The conflict emerged during contract negotiations, when Anthropic demanded guarantees preventing its models from being used for mass surveillance of American citizens or for autonomous weapons deployment. The Pentagon rejected these restrictions, arguing that the military must retain access to all legally permitted uses of critical technology. The designation took immediate effect, yet Anthropic's software continues to run in ongoing military operations, including activities around Iran. The classification is unusual for an American company, as such designations typically apply to entities linked to geopolitical adversaries of the United States. Anthropic has announced plans to challenge the designation in court.
Read at Techzine Global