A new Red Cross report says AI introduces risk of 'unaccountable errors' in warfare
Briefly

The report warns of significant risks of 'unaccountable errors' in military AI systems, stemming from hidden biases, uncertainties, and the mistaken belief that such systems are simply either right or wrong.
The report's author, Michel, stresses that personnel must understand these systems are not infallible and that responsibility for errors cannot be shifted onto the AI itself, warning that this 'faulty narrative' leaves no one accountable when mistakes occur.
Read at Fast Company