'Sapiens' author warns AI's real timeline is 200 years - but today's lack of concern is the real danger
Briefly

"Speaking at the World Economic Forum in Davos on Tuesday, the historian and author of "Sapiens: A Brief History of Humankind" said that when he talks about the long-term impact of AI, he's not thinking in years or even decades. "A lot of the conversations here in Davos, when they say 'long term' they mean like two years," Harari said. "When I mean long term, I think 200 years.""
"Harari compared the current moment in AI to the early days of the Industrial Revolution, saying that humanity tends to misunderstand transformative technologies as they unfold. The deepest consequences of industrialization took generations to fully emerge, he said - often through social, political, and geopolitical upheaval that no one could have predicted in advance. "You can test for accidents," he said. "But you cannot test the geopolitical implications or the cultural implications of the steam engine in a laboratory. It's the same with AI.""
AI's long-term impact should be measured in centuries rather than years or decades, with major consequences unfolding over roughly 200 years. Transformative technologies are often misunderstood during their early stages, and their deep social, political, and geopolitical consequences can take generations to fully emerge. Accidents and technical failures can be tested in controlled settings, but cultural and geopolitical implications cannot be reliably simulated or predicted before deployment. Even if AI development were to stop today, the long-term effects already set in motion would remain unpredictable and could cascade across societies. Early deployments have created initial disturbances whose full ripple effects are still unknown, like a stone that has only just struck the surface of a pool.
Read at Business Insider