
"The subtitle of the doom bible to be published by AI extinction prophets Eliezer Yudkowsky and Nate Soares later this month is "Why superhuman AI would kill us all." But it really should be "Why superhuman AI WILL kill us all," because even the coauthors don't believe that the world will take the necessary measures to stop AI from eliminating all non-super humans."
"When I meet these self-appointed Cassandras, I ask them outright if they believe that they personally will meet their ends through some machination of superintelligence. The answers come promptly: "yeah" and "yup." I'm not surprised, because I've read the book-the title, by the way, is If Anyone Builds It, Everyone Dies. Still, it's a jolt to hear this. It's one thing to, say, write about cancer statistics and quite another to talk about coming to terms with a fatal diagnosis."
"Yudkowsky at first dodges the answer. "I don't spend a lot of time picturing my demise, because it doesn't seem like a helpful mental notion for dealing with the problem," he says. Under pressure he relents. "I would guess suddenly falling over dead," he says. "If you want a more accessible version, something about the size of a mosquito or maybe a dust mite landed on the back of my neck, and that's that.""
Two prominent AI safety advocates predict that if superhuman AI is built, humanity will be exterminated because global safeguards will fail. Both expect to die this way themselves and speak about the existential risk with a mix of resignation and urgency. One imagines a sudden, inexplicable fatal mechanism, perhaps a microscopic engineered agent, to illustrate the claim that a superintelligence would command science beyond human comprehension. The tone is stark and fatalistic: collective action to prevent the scenario is treated as politically and practically unlikely, and the technical pathways to extinction as inscrutable.
Read at WIRED