OpenAI says its US defense deal is safer than Anthropic's, but is it?
Briefly
"OpenAI has struck a deal to supply the US government with AI services, announcing it hours after US President Donald Trump's decision on Friday to ban its AI rival Anthropic from all US government contracts. Anthropic was banned for its refusal to let its product be used for mass surveillance of US citizens or in fully autonomous weapons; OpenAI said its deal contained the same limitations."
"We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs. Those red lines are no use of its technology for mass domestic surveillance; no use of its technology to direct autonomous weapons systems, and no use of its technology for high-stakes automated decisions such as social credit systems."
"The US government wanted Anthropic to allow the use of its technology for all lawful purposes. Anthropic agreed, save for those two exceptions, and on Friday its intransigence got it banned."
OpenAI announced a deal to supply AI services to the US government shortly after the Trump administration banned Anthropic from government contracts over its refusal to permit mass surveillance and autonomous weapons applications. OpenAI claims its agreement includes the same limitations Anthropic held out for, identifying three red lines: no mass domestic surveillance, no direction of autonomous weapons systems, and no high-stakes automated decisions such as social credit systems. However, legal experts are skeptical about how enforceable these restrictions are, given the agreement's broad allowance for "all lawful use." Questions remain about whether OpenAI genuinely secured stronger protections or merely negotiated faster than Anthropic.
Read at Computerworld