A new book about AI has a provocative title: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Eliezer Yudkowsky and Nate Soares argue that the development of artificial intelligence exceeding human intelligence will almost certainly lead to the extinction of our species. How plausible is the scenario they believe will lead to the death of all people?