DeepSeek has called into question Big AI's trillion-dollar assumption
Briefly

DeepSeek, a Chinese startup, has disrupted the AI landscape by developing advanced AI models with significantly less computing power and funding than was previously believed necessary. DeepSeek published its findings openly and built models that show their reasoning, achieving top results on benchmark tests and outperforming established models such as those from OpenAI. This challenges the industry's long-held assumption that more computational resources are the key to smarter models, raising questions about future demand for high-powered AI chips and the strategies of the sector's leading companies.
Recently, Chinese startup DeepSeek created state-of-the-art AI models using far less computing power and capital than anyone thought possible.
This surprising work seems to have let some of the air out of the AI industry's main assumption—that the best way to make models smarter is by giving them more computing power.
The assumption behind all this investment is theoretical: the so-called scaling laws, which hold that when you double compute, the quality of your models increases.
OpenAI CEO Sam Altman said last year that he would need to raise $7 trillion to build the data centers required to reach AGI.
Read at Fast Company