Ilya Sutskever, co-founder of OpenAI, proposed constructing a doomsday bunker for the organization's top researchers to safeguard them against potential chaos stemming from the release of artificial general intelligence (AGI). At a meeting in summer 2023, he discussed the anticipated dangers, such as geopolitical conflicts, that could arise once AGI surpasses human intelligence. While Sutskever's comments stirred confusion, several sources confirmed his repeated references to the bunker concept, hinting at deeper anxieties and philosophical concerns surrounding the development of AI technology.
"Once we all get into the bunker..." Sutskever began, highlighting a serious plan to safeguard OpenAI's top minds in case of AGI-induced chaos.
The plan would shelter OpenAI's core scientists from the geopolitical chaos he anticipated once AGI surpassed human capabilities and was released into the world.
"There is a group of people - Ilya being one - who believe that building AGI will bring about a rapture. Literally, a rapture."
This idea underscores the extraordinary anxieties gripping some of the minds behind the most powerful technology, reflecting deep moral and even metaphysical concerns.