
In "AI 2027," a document outlining the impending impacts of AI published in April 2025, former OpenAI employee Daniel Kokotajlo and several peers predicted that by April 2027, unchecked AI development would lead to superintelligence and consequently destroy humanity.
Daniel Kokotajlo predicted the end of the world would happen in April 2027.
"For a scenario like 'AI 2027' to happen, [AI] would need a lot more practical skills that are useful in real-world complexities," Murray said.
On the other hand, people like Gary Marcus, emeritus professor of neuroscience at New York University, dismissed "AI 2027" as a "work of fiction," even calling various predictions "pure science fiction mumbo jumbo."
Daniel Kokotajlo initially forecast human extinction by April 2027, driven by AI achieving "fully autonomous coding" that would let systems drive their own development. He later revised the timeline, forecasting superintelligence around 2034 while expressing uncertainty about whether or when AI would destroy humanity. The release of ChatGPT in 2022 accelerated near-term AGI predictions and drew attention from political and religious leaders. Some experts dismissed the earlier forecasts as fiction, and observations of uneven AI capabilities have pushed many AGI timelines outward. Major AI labs continue pursuing self-training, self-improving models and internal goals toward greater autonomy.
Read at Fast Company