OpenAI sweeps in to snag Pentagon contract after Anthropic labeled 'supply chain risk' in unprecedented move | Fortune
"Legal and policy experts said the government's unprecedented decision presents profound questions about the relationship between the government and business in the U.S. It is the first time the U.S. has ever designated an American company a supply chain risk, and the first time the designation has been used in apparent retaliation for a business not agreeing to certain contractual terms."
"In a statement announcing its deal, OpenAI CEO Sam Altman said that its agreement with the Pentagon contains the same two limitations on how the military can use its technology that Anthropic had been insisting on and which the government has said it could not accept."
"It may simply be that the contract language highlights that current U.S. law prohibits the Pentagon from deploying A.I. for mass surveillance of Americans and current U.S. military policy states that humans must retain appropriate levels of human judgment over the use of lethal force."
OpenAI announced a deal with the Pentagon to use its AI models in classified systems, coinciding with the U.S. government's designation of Anthropic as a supply chain risk. It is the first time the government has applied that designation to an American company, and the first time it has been used in apparent retaliation for a company refusing contractual terms. Legal experts say the move raises profound questions about the relationship between government and business in the U.S. According to Sam Altman, OpenAI's agreement contains the same two limitations on military use that Anthropic had insisted on; OpenAI simply framed them differently, agreeing to "lawful purposes" while the contract language notes that existing U.S. law already prohibits the Pentagon from using AI for mass surveillance of Americans and that military policy requires human judgment over the use of lethal force. Anthropic announced plans for legal action to challenge the designation.