Pentagon Issues Threat to Anthropic
"An Anthropic spokesperson remained tight-lipped on whether "Claude, or any other AI model, was used for any specific operation, classified or otherwise" in a statement to the WSJ, but noted that "any use of Claude - whether in the private sector or across government - is required to comply with our Usage Policies, which govern how Claude can be deployed.""
"The deployment reportedly occurred through the AI company's partnership with the shadowy military contractor Palantir. Anthropic also signed an up to $200 million contract with the Pentagon last summer as part of the military's broader adoption of the tech, alongside OpenAI's ChatGPT, Google's Gemini, and xAI's Grok. Whether the Pentagon's use of Claude broke any of Anthropic's rules remains unclear. Claude's usage guidelines forbid it from being used to "facilitate or promote any act of violence," "develop or design weapons," or "surveillance.""
"Either way, Trump administration officials are now considering cutting ties with Anthropic over the company's insistence that mass surveillance of Americans and fully autonomous weaponry remain off limits, Axios reports. "Everything's on the table," including a dialing back of the partnership, a senior administration official told Axios. "But there'll have to be an orderly replacement [for] them, if we think that's the right answer.""
The US military reportedly used Anthropic's Claude AI chatbot during an operation in Venezuela that included the kidnapping of President Nicolás Maduro. Exact details of Claude's role remain hazy, but the deployment followed Anthropic's partnership with the contractor Palantir and its up-to-$200 million Pentagon contract. Anthropic said that any use of Claude must comply with its Usage Policies, which forbid facilitating violence, developing weapons, and surveillance; whether the Pentagon's use violated those rules remains unclear. Trump administration officials are now considering scaling back ties with Anthropic, with one saying "everything's on the table" while cautioning that any replacement would have to be orderly.
Read at Futurism