During Meta's Q3 earnings call, CEO Mark Zuckerberg shared that Llama 4 models are being trained on a GPU cluster larger than anything competitors have, surpassing 100,000 H100s.
Zuckerberg emphasized the significance of the computing power dedicated to training the Llama 4 AI models, stating it exceeds all current industry offerings and showcases Meta's commitment to AI innovation.
Nvidia's H100 chips, priced between $30,000 and $40,000 each, are essential for AI companies aiming to train large language models, and access to them has even become a factor in recruiting talent within the sector.
Demand for H100 GPUs underscores the competitive landscape, with industry leaders like Elon Musk and Mark Zuckerberg leveraging vast compute resources to establish dominance in AI model development.