
"The company is particularly concerned about Claude being used for mass domestic surveillance or to develop fully autonomous weapons. The use of Claude in the Nicolás Maduro raid deepened tensions. The Pentagon claims an Anthropic executive raised concerns after the operation, though Anthropic denies that. Administration officials say it's unworkable for the military to have to litigate individual use-cases with Anthropic before or after the fact. "We're dead serious," a senior Pentagon official told Axios of the threat to cut off Anthropic and force its vendors to follow suit."
"Crucially, Claude is the only model available in the military's classified systems through Anthropic's partnership with Palantir. Three other models - OpenAI's ChatGPT, Google's Gemini and xAI's Grok - are available in unclassified systems, and have lifted their ordinary safeguards as part of those agreements. Negotiations to bring those companies into the classified domain are now more urgent as the Pentagon ponders how to replace Claude if necessary - a process a senior official conceded would be massively disruptive. Anthropic says it remains committed to working with the Pentagon, despite the public feud, and both sides say they might still come to an agreement."
The Pentagon is threatening to sever its contract with Anthropic and declare the company a supply chain risk because Anthropic refuses to lift restrictions on its Claude model. Anthropic is particularly concerned about Claude enabling mass domestic surveillance or the development of fully autonomous weapons. Tensions rose after Claude's reported use in the Nicolás Maduro raid, with conflicting claims about internal Anthropic reactions. Claude is the only model currently available in classified military systems, through a Palantir partnership, while rival models operate only in unclassified systems; negotiations to bring those models into the classified domain have become more urgent as the Pentagon weighs replacing Claude, a move officials concede would be highly disruptive.
Read at Axios