What Really Happened When OpenAI Turned on Sam Altman
Briefly

In a summer 2023 meeting, OpenAI co-founder Ilya Sutskever expressed deep concerns about the imminent arrival of artificial general intelligence (AGI). Known for his role in developing the large language models behind ChatGPT, he found himself torn between advancing AI capabilities and ensuring safety. Sutskever's focus shifted toward AI safety, culminating in a controversial proposal to build a 'bunker' for researchers, a plan that reflected both the uncertainty surrounding AGI's implications for humanity and OpenAI's sense of responsibility for the transformation it was driving.
"Once we all get into the bunker-" he began, according to a researcher who was present. "I'm sorry," the researcher interrupted, "the bunker?""
Sutskever had long believed that artificial general intelligence, or AGI, was inevitable. Now, as things accelerated in the generative-AI industry, he believed AGI's arrival was imminent.
What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?
Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety.
Read at The Atlantic