
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoD agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of Defense also wanted."
"Under the new arrangement, OpenAI will provide its advanced AI models for use on the Pentagon's classified systems, allowing defense teams to build secure applications for logistics, intelligence analysis, cybersecurity, and operational planning. Unlike the breakdown with Anthropic, OpenAI agreed to a framework that permits lawful military use while maintaining defined safety guardrails and usage policies."
The Trump administration has dramatically shifted its AI strategy, granting OpenAI a new agreement to deploy advanced AI models on the Department of Defense's classified networks while directing all federal agencies to stop using Anthropic's technology. OpenAI negotiated a framework that permits military applications in logistics, intelligence analysis, cybersecurity, and operational planning, while maintaining safety guardrails that prohibit domestic mass surveillance and autonomous weapons without human oversight. The arrangement builds on OpenAI's existing $200 million Defense Department contract from 2025. Anthropic's exclusion followed months of tense negotiations in which the company resisted expanded access over concerns about domestic surveillance and autonomous weapons, leaving it outside the defense partnership as OpenAI secured its position as a key Pentagon partner.
#ai-defense-contracts #pentagon-ai-deployment #openai-vs-anthropic #government-ai-policy #military-ai-ethics
Read at TechRepublic