Retired Army Special Forces officer Mike Nelson criticized Hegseth's rhetoric, stating, 'That's a necessary end to achieve goals through military force - you have to kill people to achieve them. That's not the end. It's a weird obsession with death for the sake of it.'
The Iranian choke hold on the Strait of Hormuz evidently had a lot to do with it. By cutting off roughly one-fifth of the world's oil supply over the past five weeks, Iran's blockade of that narrow waterway caused an energy crisis and fears of a global recession that the White House could not abide for long.
The biggest question is: What kind of business partner does the government want to be? They need the AI companies. The government's a superpower, but here it's trying to jam a lot of policy. This reflects the tension between the government's dependence on private AI firms and its desire to impose regulatory requirements through procurement mechanisms rather than traditional legislative channels.
Emil Michael, who oversees the Pentagon's AI efforts, sold his xAI shares for between $5 million and $25 million, having initially valued them at up to $1 million. This sale raised concerns about potential conflicts of interest given his role in negotiating with AI companies.
'Ultimately this is about our warfighters having the best tools to win a fight and you can't trust Claude isn't secretly carrying out Dario's agenda in a classified setting.' This statement, from an administration official, encapsulates the Pentagon's core concern about Anthropic's role in military AI deployment: whether the Claude system can be trusted in sensitive defense contexts.
Anthropic reportedly refused to give the Pentagon unrestricted access to Claude, its frontier AI model and the only one currently running on classified military networks. The company wanted guarantees that there would be no mass surveillance and no autonomous weapons without a human in the loop making the final life-or-death decisions. The Department of War's message was 'remove those restrictions or lose everything.'
I was giving these scenarios, these Golden Dome scenarios, and so on. And he's like, 'Just call me if you need another exception.' And I'm like, 'But what if the balloon's going up at that moment and it's like a decisive action we have to take? I'm not going to call you to do something. It's not rational.'
Claude was the first AI certified to operate on classified systems. Altman, perhaps wisely, thought such work was likely to be more trouble than it was worth. But Amodei wanted Claude to be helpful at the most sensitive level. The national-security agencies do not use Claude in the form of a consumer chatbot; Secretary of War Pete Hegseth does not open the Claude app to ask what's up with the whole Taiwan thing.