Why we should limit the autonomy of AI-enabled weapons
Briefly

"Weapons capable of identifying and attacking targets automatically have been in use for more than 80 years. An early example is the Mark 24 Fido, a US anti-submarine torpedo equipped with microphones to home in on targets, which was first deployed against German U-boats in 1943. Nature Spotlight: Robotics Such 'first-wave' autonomous systems were designed to be used in narrowly defined scenarios and programmed to act in response to signals such as the radiofrequency emissions of specific targets."
"The past ten years have seen the development of more advanced systems that can use artificial intelligence to navigate, identify and destroy targets with little or no human intervention. This has led to growing calls from human-rights groups to ban or regulate the technologies. Nehal Bhuta, a professor of international law at the University of Edinburgh, UK, has been investigating the legality and ethics of autonomous weapons for more than a decade."
Autonomous weapons have existed for more than 80 years, beginning with systems such as the Mark 24 Fido torpedo, which homed in on submarines acoustically. Early systems operated in narrowly defined scenarios and responded to specific signals. In the past decade, AI-enabled platforms have gained the ability to navigate, identify and engage targets with minimal human intervention, prompting human-rights groups to call for bans or regulation. Nehal Bhuta has investigated the legality and ethics of autonomous weapons for more than a decade and co-authored a report presented to the UN Security Council. AI-enabled systems raise legal and ethical concerns, including responsibility for failures, intrusive collection of civilian data and the risk of an arms race. No legal framework specifically governs autonomy or AI in weapons, although international humanitarian law requires that attacks distinguish between civilian and military targets.
Read at Nature