
"Tensions have escalated in recent weeks after a top Anthropic official reportedly reached out to a senior Palantir executive to question how Claude was used in the raid, per The Palantir executive interpreted the outreach as disapproval of the model's use in the raid and forwarded details of the exchange to the Pentagon. (President Trump said the military used a "discombobulator" weapon during the raid that made enemy equipment "not work.")"
""Anthropic has not discussed the use of Claude for specific operations with the Department of War," an Anthropic spokeperson said in a statement to Fortune. "We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters." At the center of this dispute are the contractual guardrails dictating how AI models can be used in defense operations."
Anthropic's $200 million contract with the Department of Defense is under review following concerns about Pentagon use of the Claude AI model during a raid. A top Anthropic official contacted a Palantir executive about Claude's use, and that executive forwarded the exchange to the Pentagon, escalating tensions. Anthropic says it has not discussed Claude's use for specific operations with the Department of War or expressed concerns to industry partners beyond routine technical matters. Contract terms prohibit mass surveillance of Americans and use in fully autonomous weapons. Negotiations over operational guardrails remain contentious as Anthropic emphasizes strict limits on AI use.