Ilya Sutskever, OpenAI's former chief scientist, has been vocal about his concerns regarding artificial general intelligence (AGI) and its potential implications. Discussing the challenges AGI could pose, he reportedly suggested building a bunker for protection, adding that entering it would be optional. This sentiment reflects a broader belief among some at OpenAI that the advent of AGI could lead to apocalyptic scenarios, in stark contrast to views that AGI remains a distant goal. Sutskever's position reportedly grew more pronounced as he anticipated AGI advancements.
"Of course, it's going to be optional whether you want to get into the bunker." Sutskever emphasized the seriousness of AGI risks while discussing plans for protective measures.
"There is a group of people - Ilya being one of them - who believe that building AGI will bring about a rapture," reflects on the apocalyptic expectations surrounding AGI development.