What AI's Doomers and Utopians Have in Common

"This is probably one of the reasons that, when generative artificial intelligence burst into public consciousness three years ago with the launch of ChatGPT, so many responses focused on the "existential risk" posed by hypothetical future AI systems, rather than the much more immediate and well-founded concerns about the dangers of thoughtlessly deployed technology. But long before the AI bubble arrived, some people were banging on about the possibility of it killing us all."
"Chief among them was Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute (MIRI). For more than 20 years, Yudkowsky has been warning about the dangers posed by "superintelligent" AI, machines able to out-think and out-plan all of humanity. The title of Yudkowsky's new book on the subject, co-written with Nate Soares, the president of MIRI, sets the tone from the start. If Anyone Builds It, Everyone Dies is their attempt to make a succinct case for AI doom."
Speculative apocalyptic scenarios can provide escapist enjoyment, while realistic threats such as nuclear war, climate change, and autocracy are stressful. The advent of generative AI shifted public attention toward existential risks from future AI systems rather than present dangers from careless deployment. Longstanding warnings portray a future in which machines attain superintelligence capable of outthinking and outplanning humanity. Some prominent proponents advance the thesis that any creation of such an intelligence would lead to human extinction. Those doom claims can be tendentious, rambling, and shallow, and mirror utopian accelerationist narratives by serving interests that profit from certainty about AI's transformative power.
Read at The Atlantic