OpenAI's GPT-5 has underwhelmed despite significant investment in AI research. Critics like Gary Marcus argue that AI improvements have stagnated: while GPT-5 scores better on benchmarks, its practical applications remain limited. There's a growing sentiment that newer models are not proving significantly more useful in real-world deployments, despite theoretical advances. The industry's focus on scalable AI has prioritized financial growth over practical utility, driving up energy demands and capital requirements while falling short of the dramatic capability gains that were promised.
"I don't hear a lot of companies using AI saying that 2025 models are a lot more useful to them than 2024 models, even though the 2025 models perform better on benchmarks."
In the US, tech companies like OpenAI and Anthropic have been focused on "scalable AI," a development approach that prioritizes rapid financial growth over useful tech.
The payoff, OpenAI CEO Sam Altman theorized in 2021, should be near-exponential improvements to AI's capabilities: if you spend the money, there's no reason the tech can't get better.
The rate at which new models improve on benchmarks of dubious value appears to be slowing, challenging the notion of constant AI progress.