
""You do not get to make operational decisions," Altman told employees, according to reports by Bloomberg and CNBC. "So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that," Altman reportedly said."
"The Pentagon has demanded AI companies remove safety guardrails on their models to allow a broader range of military applications. AI-enabled systems have reportedly already been used in the US military's operation to seize Venezuelan leader Nicolas Maduro and in targeting decisions in its war against Iran."
"Anthropic, OpenAI's rival and maker of the Claude chatbot, last week refused a deal with the Pentagon over concerns its model could be used for domestic mass surveillance or fully autonomous weapons. Pete Hegseth, the US defense secretary, declared the company a supply-chain risk as a result."
OpenAI CEO Sam Altman told employees that the company cannot make operational decisions about how the Pentagon uses its AI technology in military applications. His remarks come as the AI industry faces growing scrutiny over military deployments and ethical objections from workers. The Pentagon has pressured AI companies to remove safety guardrails from their models to enable broader military use. Anthropic refused a Pentagon deal over concerns about domestic mass surveillance and fully autonomous weapons, prompting Defense Secretary Pete Hegseth to designate the company a supply-chain risk. OpenAI subsequently announced its own Pentagon deal, triggering employee backlash and accusations that it had crossed the ethical lines Anthropic refused to. Altman acknowledged the deal was rushed and attempted damage control.
#ai-military-applications #pentagon-ai-policy #ai-ethics-and-safety #openai-vs-anthropic #defense-technology
Read at www.theguardian.com