Anthropic Denies It Could Sabotage AI Tools During War
"Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations. Anthropic does not have the access required to disable the technology or alter the model's behavior before or during ongoing operations."
"The Department of Defense is not required to tolerate the risk that critical military systems will be jeopardized at pivotal moments for national defense and active military operations."
An Anthropic executive stated that the company cannot manipulate its AI model Claude once it is operational within the US military. The statement responds to accusations from the Trump administration that Anthropic could tamper with AI tools during wartime. Citing those risks, the Pentagon issued a designation that bars the Department of Defense from using Anthropic's software. Anthropic has filed lawsuits challenging the ban as customers begin canceling contracts.
Read at WIRED