Nvidia unveils 288 GB Blackwell Ultra GPUs
Briefly

At GTC, Nvidia launched its Blackwell Ultra GPU architecture, a significant leap in AI performance with up to 15 petaFLOPS of compute and 288 GB of HBM3e memory. The architecture is designed for AI inference workloads, allowing substantially larger models to run efficiently. Nvidia promises a tenfold increase in throughput over the previous generation for reasoning models like DeepSeek-R1, sharply reducing model response times. By moving to taller HBM3e memory stacks, Nvidia has significantly increased capacity while maintaining top-tier bandwidth, keeping the Blackwell Ultra competitive in a fast-moving GPU market.
Nvidia's Blackwell Ultra family of accelerators delivers up to 15 petaFLOPS of performance and expanded memory capacity, significantly improving AI inference throughput.
Blackwell Ultra will run reasoning models like DeepSeek-R1 at ten times the throughput of the previous generation, drastically reducing response times.
With 288 GB of HBM3e memory, the Blackwell Ultra GPU can host substantially larger models, such as Meta's Llama 405B.
By switching to 12-high HBM3e stacks, Nvidia increases memory capacity by 50% while maintaining class-leading bandwidth of 8 TB/s.
Read at The Register