OpenAI shares its contract language and 'red lines' in agreement with the Department of War

"We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections."
"Its tech can't be used for mass domestic surveillance or autonomous weapons, OpenAI said. OpenAI's agreement with the federal government comes on the heels of its AI rival, Anthropic, being blacklisted and declared a supply chain risk after refusing to comply with the military's terms of use for the company's frontier model, Claude."
"Anthropic, in a Friday statement, said that 'no amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons' and vowed to 'challenge any supply chain risk designation in court.'"
OpenAI published contract language from its Department of War agreement, highlighting safety protections that restrict its technology from mass domestic surveillance, autonomous weapons, and high-stakes decision systems. OpenAI claims its agreement contains more guardrails than previous classified AI deployments, including stronger protections than Anthropic's rejected terms. The company maintains full discretion over its safety stack, deploys via secure cloud infrastructure, requires cleared personnel involvement, and includes strong contractual protections. This development follows Anthropic's refusal to comply with military terms and subsequent blacklisting as a supply chain risk. OpenAI publicly opposed Anthropic's designation, stating it communicated this position to the government.
Read at Business Insider