The Military's AI Fever Is Leading Into Disaster, Critics Say
"These failures could happen even with humans in the loop. Commanders and operators of weapons systems are generally supposed to independently verify and confirm AI-generated targets. In reality, they may become too willing to defer to algorithmic recommendations."
"Additionally, greater reliance on AI reduces the lives of individuals to blips and data points on a screen, which could desensitize soldiers to acts of killing and destruction."
"The US military's accelerated deployment of untested AI could lead to unsafe systems that inflict excessive civilian harm and infringe on privacy and civil liberties."
The Brennan Center warns that the accelerated deployment of AI by the US military poses significant risks to civilian safety and civil liberties. The Department of Defense requested $13.4 billion for autonomy and autonomous systems in 2026, extending beyond weapons to surveillance, maintenance, and administrative operations. Algorithmic errors could enable indiscriminate killings and wrongful arrests. Human oversight may prove insufficient, as commanders can defer excessively to AI recommendations rather than independently verifying targets. Greater reliance on AI also reduces individuals to data points on a screen, potentially desensitizing soldiers to killing. Real-world consequences are already evident: US military attacks using AI-derived intelligence killed at least 1,332 Iranian civilians, including 175 elementary school students and staff.
Read at Futurism