Anthropic says it cannot 'in good conscience' allow Pentagon to remove AI checks
Briefly

"The Pentagon has demanded that Anthropic turn off safety guardrails and allow any lawful use of Claude, while Anthropic has pushed back against allowing Claude to be used for mass domestic surveillance or in autonomous weapons systems that can kill people without human input."
"Chief executive Dario Amodei said in a statement that the threats from the defense secretary, Pete Hegseth, would not change the company's position, and that he hoped Hegseth would reconsider. Our strong preference is to continue to serve the Department and our warfighters with our two requested safeguards in place."
"In his statement, Amodei said using AI for autonomous weapons and mass domestic surveillance is simply outside the bounds of what today's technology can safely and reliably do."
Anthropic rejected the Pentagon's ultimatum to disable safety precautions on its Claude AI model and grant unrestricted military access. The Department of Defense threatened to cancel a $200 million contract and designate Anthropic a supply chain risk if the company did not comply by Friday. CEO Dario Amodei stated the company would not remove safeguards preventing Claude's use in autonomous weapons systems and mass domestic surveillance, arguing current AI technology cannot safely support these applications. Anthropic maintained its commitment to national security while refusing to compromise on safety principles, positioning itself as the most safety-conscious major AI firm willing to resist government pressure for potentially harmful uses.
Read at www.theguardian.com