
"For months, Anthropic CEO Dario Amodei has insisted that Anthropic's AI model, Claude, must not be used for mass surveillance in the U.S. or to power entirely autonomous weapons, such as a drone that uses AI to kill targets without human approval. He has described those uses as "entirely illegitimate" and says they are "bright red lines" for the company."
"The Pentagon says that it does not intend to use Anthropic's tools for surveillance or autonomous weapons. But it says that it's not up to a contractor like Anthropic to make decisions about how its technology is used, and says AI companies including Anthropic need to allow the U.S. government to use their tools "for all lawful purposes.""
""Legality is the Pentagon's responsibility as the end user," a senior Pentagon official who declined to give their name told NPR this week."
The Pentagon and Anthropic are engaged in a significant dispute over the military use of AI. Anthropic CEO Dario Amodei has set firm boundaries, declaring that Claude must not be used for mass surveillance or autonomous weapons systems and describing those uses as "entirely illegitimate" and "bright red lines." The Pentagon says it does not intend to use Anthropic's tools for those purposes, but insists that AI companies must allow the government to use their tools for all lawful purposes, arguing that legality determinations are the Pentagon's responsibility as the end user. Amodei rejected the Pentagon's latest contract modifications, though he affirmed Anthropic's commitment to defending democracies. The dispute puts at stake hundreds of millions of dollars in potential military contracts for Anthropic and the Pentagon's access to advanced AI capabilities.
#ai-military-use #pentagon-anthropic-conflict #ai-safety-restrictions #autonomous-weapons #government-contracts
Read at www.npr.org