Jensen Huang says the 3 elements of AI scaling are all advancing. Nvidia's Blackwell demand will prove it.
Briefly

Huang emphasized that, despite concerns that AI models are plateauing, foundation-model pre-training scaling remains intact and continues to drive AI progress.
Huang also described a broader understanding of scaling: it no longer relies solely on more human-generated data, because AI systems can now generate synthetic training data and check their own responses.
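To make that idea concrete, here is a minimal Python sketch of one common form of such a generate-and-verify loop. This is not Nvidia's or any real lab's pipeline: `noisy_model_answer` is a toy stand-in for a model, and the self-check works only because the toy domain (integer addition) has a computable ground truth.

```python
import random

def noisy_model_answer(a: int, b: int) -> int:
    """Toy 'model' that answers a + b but is sometimes wrong."""
    answer = a + b
    if random.random() < 0.3:          # simulate a 30% model error rate
        answer += random.choice([-1, 1])
    return answer

def self_check(a: int, b: int, answer: int) -> bool:
    """Verifier: in this toy domain the ground truth is computable exactly."""
    return answer == a + b

# Keep only candidate examples that pass the self-check.
synthetic_dataset = []
while len(synthetic_dataset) < 100:
    a, b = random.randint(0, 99), random.randint(0, 99)
    ans = noisy_model_answer(a, b)
    if self_check(a, b, ans):
        synthetic_dataset.append((f"{a} + {b} = ?", str(ans)))

print(f"kept {len(synthetic_dataset)} verified synthetic examples")
```

The design point is the filter: unverified model output would inherit the model's error rate, while the checked dataset contains only examples the verifier could confirm.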
He traced the evolution of model training, noting that while early improvements came from manual human feedback, modern strategies like 'chain-of-thought reasoning' improve model quality significantly.
Huang stated, 'The longer it thinks, the better and higher quality answer it produces,' emphasizing reasoning at inference time as a path to higher-quality outputs.
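As a rough illustration of why more 'thinking' can help, the sketch below samples many independent answers and majority-votes over them (the self-consistency idea); accuracy rises with the sampling budget. All names here (`sample_reasoning_chain`, `answer_with_budget`, `TRUE_ANSWER`) are invented for this example and do not come from the article.

```python
import random
from collections import Counter

TRUE_ANSWER = 42

def sample_reasoning_chain() -> int:
    """Toy stand-in for one sampled chain of thought: right 60% of the time."""
    return TRUE_ANSWER if random.random() < 0.6 else random.randint(0, 99)

def answer_with_budget(n_chains: int) -> int:
    """Sample n chains and return the majority-vote answer."""
    votes = Counter(sample_reasoning_chain() for _ in range(n_chains))
    return votes.most_common(1)[0][0]

for budget in (1, 5, 25, 125):   # more chains ~ more thinking time
    correct = sum(answer_with_budget(budget) == TRUE_ANSWER for _ in range(200))
    print(f"{budget:>3} chains -> accuracy {correct / 200:.2f}")
```

Because wrong answers are scattered while correct ones agree, voting over more chains pushes accuracy up, a simple model of spending more inference-time compute for a better answer.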
Read at Business Insider