
"Mass domestic surveillance powered by AI poses profound risks to democratic governance, even in responsible hands. Current AI models are not reliable enough to bear the responsibility of making lethal targeting decisions entirely alone, and the risks of their deployment for that purpose require some kind of response and guardrails."
"Using the supply chain risk designation in response to Anthropic's contract negotiations introduces an unpredictability in our industry that undermines American innovation and competitiveness. It chills professional debate on the benefits and risks of frontier AI systems and various ways that risks can be addressed to optimize the technology's deployment."
"We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI makes it possible to assemble scattered, individually innocuous data into a comprehensive picture of any person's life, automatically and at massive scale."
Google and OpenAI employees filed a brief supporting Anthropic's legal challenge against Pentagon policies. They argue that mass domestic surveillance powered by AI threatens democratic governance and that current AI models lack sufficient reliability for autonomous lethal decision-making. The employees contend that the Pentagon's supply chain risk designation creates unpredictability that damages American innovation and competitiveness while chilling professional debate on AI risks. Anthropic CEO Dario Amodei distinguishes between supporting lawful foreign intelligence operations and opposing domestic surveillance, noting that AI can assemble scattered data into comprehensive personal profiles automatically and at scale. The company supports partially autonomous weapons, like those used in Ukraine, but opposes fully autonomous systems that remove human oversight from lethal targeting decisions.
Read at Ars Technica