Google spikes its explicit 'no AI for weapons' policy
Briefly

Google has released new AI principles that notably drop its prior promise not to develop weapons or surveillance technologies that could violate international norms. The original principles, articulated by CEO Sundar Pichai in 2018, prohibited pursuing technologies likely to cause overall harm or designed to facilitate injury to people. They followed significant employee pushback over Project Maven, a Pentagon contract that used AI to analyze drone footage, which Google subsequently declined to renew. The revised principles emphasize bold innovation and responsible development, but make no mention of the earlier commitments intended to keep AI applications ethical, sparking discussion of corporate responsibility.
Google's updated AI principles omit any mention of its previous commitments to refrain from AI applications that cause harm, raising concerns about accountability.
Google's 2018 AI guidelines included a pledge not to develop technology for weapons, or for surveillance that violates international norms, a commitment now seemingly absent.
Read at The Register