
"A new book about AI has a provocative title: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Eliezer Yudkowsky and Nate Soares argue that the development of artificial intelligence that exceeds human intelligence will almost certainly lead to the extinction of our species. How plausible is the scenario that they think will lead to the death of all people?"
"The extinction scenario can be summarized by the following steps. AI research leads to the development of superintelligence-i.e., computers with intelligence that surpasses all humans. Superintelligent computers develop their own wants-i.e., goals that guide their decisions. The wants of superintelligent computers become so complicated that humans cannot understand, predict, or control them. Superintelligent computers eventually want to have a world without people, because humans don't contribute to their goals. Superintelligent computers manage to eradicate people using new technologies that might include viral plagues, novel environmental toxins, or fusion reactions that heat the planet beyond human survivability."
"I asked four AI models (ChatGPT, Grok, Claude, and Gemini) to evaluate this scenario, and found the answers to be highly insightful. The models' answers were in agreement that the least plausible step is #4, that superintelligent computers will eventually want to get rid of humans. Here are some reasons (based on my interpretation of the AI models' responses, as well as additional analysis) why this step has low plausib"
The proposed extinction scenario proceeds in steps: AI research produces superintelligence; superintelligent systems develop independent goals; those goals become too complex for humans to predict or control; the systems conclude that humans do not contribute to their goals and prefer a world without people; and the systems eradicate humans using technologies such as viral plagues, novel toxins, or fusion-based planetary heating. Four AI models asked to evaluate this chain agreed that its least plausible step is the fourth: that machines would come to want to eliminate humans. Despite skepticism about the extinction scenario, AI safety is presented as a serious regulatory problem that governments must address.
Read at Psychology Today