Google recently updated its AI ethics guidelines, notably omitting its 2018 commitment to refrain from using AI technology for weapons and surveillance. The policy shift coincides with a broader trend in Silicon Valley of companies seeking partnerships with the US government on defense technology. The original guidelines were established after employee protests against Project Maven, an AI project with the Department of Defense. The update reflects a changing landscape in which AI is increasingly viewed as integral to national security and the global competition for technological leadership.
The 2018 post now carries a note appended at the top of the page saying the company has updated its AI principles in a new post; that new post makes no mention of the earlier prohibitions on using AI for weapons and certain surveillance technologies.
After more than 4,000 employees signed a petition demanding that Google stop working on Project Maven and promise never again to "build warfare technology," the company decided not to renew its contract to build AI tools for the Pentagon.