Many AI projects fail at scale because their data pipelines cannot move and serve data fast enough, leaving GPUs underutilized and real-time inference stalled.
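To make that bottleneck concrete, here is a minimal, hypothetical sketch of the usual mitigation: prefetching batches on a background thread so data loading overlaps with compute instead of blocking it. The functions load_batch and train_step are stand-ins (simulated with sleeps), not any real pipeline's API.

```python
import queue
import threading
import time

def load_batch(i):
    time.sleep(0.05)          # stand-in for disk/network-bound loading and preprocessing
    return f"batch-{i}"

def train_step(batch):
    time.sleep(0.02)          # stand-in for the GPU forward/backward pass

def run_synchronous(num_batches):
    start = time.perf_counter()
    for i in range(num_batches):
        train_step(load_batch(i))   # the "GPU" sits idle while each batch loads
    return time.perf_counter() - start

def run_prefetched(num_batches, depth=4):
    q = queue.Queue(maxsize=depth)

    def producer():
        for i in range(num_batches):
            q.put(load_batch(i))    # loading now overlaps with training
        q.put(None)                 # sentinel: no more batches

    threading.Thread(target=producer, daemon=True).start()
    start = time.perf_counter()
    while (batch := q.get()) is not None:
        train_step(batch)
    return time.perf_counter() - start

if __name__ == "__main__":
    n = 50
    print(f"synchronous: {run_synchronous(n):.2f}s")
    print(f"prefetched:  {run_prefetched(n):.2f}s")
```

In the synchronous loop the total time is roughly the sum of load time and compute time per batch; with prefetching it approaches the slower of the two, which is the gap the pipeline-bottleneck argument is about.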
In a pair of videos and an accompanying chart, top OpenAI executives made the case that the startup's biggest risk might be not spending enough to secure future compute, even though the company has already committed roughly $1.4 trillion to data center projects over the next eight years and is, according to CEO Sam Altman, five years away from profitability.
Over the past few months, we've seen a surge of skepticism around the phenomenon currently referred to as the "AI boom." The shift began when OpenAI released GPT-5 this summer to mixed reviews, mostly from casual users. We've since had months of breathless claims from pundits and influencers that the era of rapid AI advancement is ending, that AI scaling has hit the wall, and that the AI boom is just another tech bubble. These same voices overuse the phrase "AI slop" to disparage the remarkable images, documents, videos, and code that AI models produce at the touch of a button.
The conventional wisdom goes that the more compute and training data you have, the smarter your AI tool will be. Sutskever said in the interview that, for around the past half-decade, this "recipe" has produced impactful results. It's also attractive to companies because it offers a simple and "very low-risk way" of investing resources compared to pouring money into research that could lead nowhere.
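That recipe is usually summarized as a power law relating loss to model size and data. A minimal sketch, assuming a Chinchilla-style parametric form; the constants below are made up for illustration, not fitted values from any published study.

```python
# Assumed scaling-law form:  L(N, D) = E + A / N**alpha + B / D**beta
# N = parameter count, D = training tokens. Constants are illustrative only.
E, A, B, alpha, beta = 1.7, 400.0, 410.0, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss under the assumed power-law form."""
    return E + A / n_params**alpha + B / n_tokens**beta

for n_params, n_tokens in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"{n_params:.0e} params, {n_tokens:.0e} tokens -> "
          f"loss ~ {predicted_loss(n_params, n_tokens):.3f}")
```

The point of the form is that loss falls predictably as compute and data grow, which is why labs treat bigger training runs as a low-risk bet compared with open-ended research.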
AI labs are racing to build data centers as large as Manhattan, each costing billions of dollars and consuming as much energy as a small city. The effort is driven by a deep belief in "scaling": the idea that adding more computing power to existing AI training methods will eventually yield superintelligent systems capable of performing all kinds of tasks.