
"[R]egardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance. We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines."
"We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons."
Major AI company leaders have taken a unified stance against certain military applications of their technology, establishing red lines around mass surveillance, autonomous lethal weapons, and high-stakes decisions made without human oversight. OpenAI's Sam Altman issued a memo framing this as an industry-wide issue rather than one isolated to Anthropic's Pentagon dispute. Despite this solidarity, Altman indicated OpenAI remains willing to negotiate military contracts for classified environments, provided the Pentagon accepts restrictions on domestic surveillance and autonomous offensive weapons. The Pentagon has demanded that companies agree to "all lawful purposes," a standard Anthropic rejected. ChatGPT already operates in unclassified military systems, and discussions about classified deployment have accelerated amid the Pentagon-Anthropic conflict.
#ai-military-deployment #ethical-guardrails #pentagon-negotiations #autonomous-weapons #mass-surveillance
Read at Axios