Anthropic 'cannot in good conscience accede' to military use of its AI, CEO says
Briefly

"Anthropic said in a statement that it's not walking away from negotiation but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons.""
"Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.""
""We will not let ANY company dictate the terms regarding how we make operational decisions," he said, emphasizing the Pentagon's position that opening up use of the technology would prevent the company from "jeopardizing critical military operations.""
Anthropic CEO Dario Amodei stated the company cannot accept the Pentagon's contract demands for broader use of its Claude AI technology. The company maintains that revised Defense Department contract language fails to adequately prevent Claude's use in mass surveillance of Americans or fully autonomous weapons systems. The Pentagon disputes these concerns, asserting it has no intention of using AI for illegal mass surveillance or autonomous weapons without human control. Pentagon spokesman Sean Parnell emphasized the military wants to use Anthropic's technology for lawful purposes only and will not allow the company to dictate operational limitations. Anthropic remains the last major AI company to resist providing technology to the Pentagon's military network, with Google, OpenAI, and xAI already supplying their systems. Military officials have threatened potential consequences including designating Anthropic as a supply chain risk or invoking the Defense Production Act.
Read at ABC7 San Francisco