In "AI 2027" - a document outlining the impending impacts of AI, published in April 2025 - the former OpenAI employee and several peers announced that by April 2027, unchecked AI development would lead to superintelligence and consequently destroy humanity.
Yudkowsky, the founder of the Machine Intelligence Research Institute, sees the real threat as what happens when engineers create a system that's vastly more powerful than humans and completely indifferent to our survival. "If you have something that is very, very powerful and indifferent to you, it tends to wipe you out on purpose or as a side effect," he said in an episode of The New York Times podcast "Hard Fork" released last Saturday.
In 2000, Bill Joy, the co-founder and chief scientist of the computer company Sun Microsystems, sounded an alarm about technology. In an article in Wired titled "Why the Future Doesn't Need Us," Joy wrote that we should "limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge." He feared a future in which our inventions casually wipe us from the face of the planet.