Stop the use of AI in war until laws can be agreed
Briefly

"Many AI researchers say that even the most advanced technologies, known as frontier AI models, are not yet capable of performing reliably or within the existing laws of war. Employees at OpenAI and Google, two of the technology firms developing such models, have gone public with their concerns. Their warning must be heeded."
"At present, there are no international laws that explicitly mention AI use in war. However, international humanitarian law states clearly that weapons must not be used indiscriminately. Moreover, combatants must take precautions to verify their targets and minimize the risk of civilian casualties. These requirements should apply to AI as much as to any other military technology."
Artificial intelligence is increasingly deployed in military operations, with reports of AI systems prioritizing targets in recent conflicts. Researchers developing frontier AI models warn that these technologies cannot yet operate reliably within the existing laws of war, and employees at major AI firms such as OpenAI and Google have publicly voiced concerns about military applications. No international laws currently address AI in warfare explicitly, but international humanitarian law requires that weapons not be used indiscriminately and that combatants verify targets and minimize civilian casualties. These principles should apply to AI systems as much as to any other military technology. Anthropic's recent dispute with the US Department of Defense over military use of its technology highlights the tensions between AI developers and military interests.
Read at Nature