
""Everything's on the table," including dialing back the partnership with Anthropic or severing it entirely, the official said. "But there'll have to be an orderly replacement [for] them, if we think that's the right answer." An Anthropic spokesperson said the company remained "committed to using frontier AI in support of U.S. national security.""
""It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot," the official said."
"The Anthropic spokesperson flatly denied that, saying the company had not "not discussed the use of Claude for specific operations with the Department of War. We have also not discussed this with any industry partners outside of routine discussions on strictly technical matters.""
""Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy.""
Anthropic enforces hard limits that prevent its models from supporting mass surveillance of Americans or fully autonomous weaponry. The Pentagon says those limits create gray areas and operational friction: negotiating every use case individually is impractical, and Claude could block certain applications outright. The partnership is now under review, with officials weighing whether to scale it back or end it entirely, though any exit would require an orderly replacement. Tensions intensified after an alleged inquiry into Claude's use in the kinetic raid to capture Nicolás Maduro; Anthropic denied discussing specific operations with the department and reiterated that Claude's government use complies with its Usage Policy.