
"We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines. It was the morning of February 27. That night, Altman made a deal with the DoW, which did not include any such contractual agreements."
"It's amazing what a company will do for a $200 million contract despite its endless billion-dollar funding announcements and a burn rate of over $9 billion in 2025."
"The problem wasn't that Anthropic was anti-military. It's that Amodei essentially argued he wouldn't kiss Trump's rump."
The Trump administration's Department of Defense demanded that AI companies remove safety protections, specifically requiring the ability to use AI for mass surveillance and autonomous weapons. Anthropic refused these demands and lost all of its government contracts. OpenAI's CEO Sam Altman publicly defended Anthropic's position, stating that AI should not be used for mass surveillance or autonomous lethal weapons. That same night, however, Altman signed a $200 million Department of Defense deal for OpenAI that included no such contractual protections. This apparent contradiction highlights the tension between stated AI safety principles and the commercial incentives of securing government contracts.
#ai-safety-and-ethics #government-contracts-and-military-ai #corporate-accountability #ai-policy-and-regulation
Read at The Register