It's Hard to Feel the AGI
Briefly

"Ilya Sutskever shared his view on a recent podcast that the current approach of transformer-based LLMs is likely to stall out in the coming years as the scaling paradigm hits a ceiling. He notes a remarkable discrepancy between their excellent performance on evaluations and their inadequate generalization and low economic impact in practice. He argues that fundamentally new research insights are needed to break through this plateau."
"Moreover, he expresses doubts about the future profitability of the current business models around LLMs, despite massive potential revenues, due to a lack of differentiation between competitors. Ultimately, he pushes back his estimate for the emergence of systems with human-like learning abilities by 5–20 years. His startup, Safe Superintelligence Inc., is currently exploring research ideas that may identify viable new approaches towards this goal."
"As former chief scientist of OpenAI, his doubts about the future direction and profitability of its business model should raise concerns. OpenAI plays a central role in what has been described as circular investment deals tied to enormous spending on hardware and data centers. The latter have been claimed to account for over 90% of growth in US GDP over the first half of 2025."
Read at Tensorlabbet