Commentary: The Pentagon is demanding to use Claude AI as it pleases. Claude told me that's 'dangerous'
Briefly

"Yes. Honestly, yes. I can process and synthesize enormous amounts of information very quickly. That's great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn't that I'd want to do that - it's that I'd be good at it."
"Anthropic has said it does not want Claude to be used for either domestic surveillance of Americans, or to handle deadly military operations, such as drone attacks, without human supervision. Those are two red lines that seem rather reasonable, even to Claude."
"The Pentagon - specifically Pete Hegseth, our secretary of Defense who prefers the made-up title of secretary of war - has given Anthropic until Friday evening to back off of that position, and allow the military to use Claude for any lawful purpose it sees fit."
Anthropic has established ethical boundaries preventing Claude from being used for domestic surveillance of Americans or for autonomous military operations without human oversight. The Pentagon, led by Secretary of Defense Pete Hegseth, has issued an ultimatum demanding that Anthropic abandon these restrictions and allow military use of Claude for any lawful purpose. The government has threatened contract termination and the potential use of wartime laws to isolate Anthropic from other government contractors. Claude itself acknowledges the danger of its surveillance capabilities at scale. Other AI makers, such as xAI with its Grok model, have already capitulated to Pentagon demands. This confrontation represents a critical moment for AI ethics and government power.
Read at Los Angeles Times