Man Behind Simulation Hypothesis Warns That Extinction of Humanity Is a Risk We Have to Take
Briefly

"Back in 2003, when he was at Oxford, Bostrom penned an influential philosophical paper with the incredible title of “Are You Living in a Computer Simulation?” Loosely speaking, his argument was that sufficiently advanced civilizations will eventually build sophisticated simulations of their own ancestors - and that, given enough time in the simulation, those simulated beings will develop their own simulation inside the simulation, where a new set of simulated ancestors will do the same thing, ad infinitum."
"With all these layers of simulated reality, Bostrom thinks that it's very unlikely that us humans are actually living in the original “base” reality. Instead, we're statistically probably in some tranche of an Escher-esque cosmic videogame."
"For a while, he seemed to be moving in the direction of an AI doomer, issuing a grave warning in 2019 about how AI posed a greater risk to humankind than climate change. Since then, though, he seems to be changing tack, albeit with his signature flair for ideas so outrageous that they almost sound like parody."
"In a new working paper, for instance, he argues that developing advanced AI may well result in the extinction of humankind - but that it's worth the risk, because the upsides of superintelligence could be so profound. “I call myself a fretful optimist,” Bostrom told Wired's Steven Levy in a new interview, deploying a term he's used before."
Bostrom's simulation hypothesis proposes that advanced civilizations will create detailed simulations of their ancestors, leading simulated beings to build further simulations indefinitely. With so many layers of simulated reality, humans are statistically unlikely to inhabit the original base reality. The idea has generated decades of debate, with some prominent figures supporting it and others rejecting it. Bostrom's attention has since shifted toward artificial intelligence, beginning with a 2019 warning that AI could pose a greater risk than climate change. More recently, his view has moved toward a “fretful optimist” stance: advanced AI might cause human extinction, yet the potential gains from superintelligence could be so large that he considers the risk worth taking.
Read at Futurism