
"HPE is aware of this and advocates for Nvidia-powered AI solutions that are largely plug-and-play, regardless of scale. There are no constraints, HPE representatives say ahead of Nvidia's annual GTC conference. It identifies three groups that currently use or want to use AI. First, AI model builders; they have the most demanding requirements in terms of scale and performance per chip."
"A single blade can support up to 8 nodes, each with two new Nvidia Vera CPUs, resulting in up to 1,408 ARM-based CPU cores per blade. There is also no shortage of system memory, with up to 24.5 TB of LPDDR5 RAM. A single GX5000 rack can support 40 blades, resulting in 640 CPUs and thus 56,320 ARM cores per rack."
"Since the integrated Nvidia Vera Rubin NVL72 by HPE effectively operates as a single system, as many bottlenecks as possible have been eliminated. On-site support and networking help keep the liquid-cooled system (down to the chip "die," the piece of silicon itself) under control."
HPE addresses the challenge of integrating AI hardware with existing infrastructure by promoting Nvidia-powered solutions that require minimal customization. The company identifies three distinct customer segments with different AI needs: AI model builders requiring maximum scale and performance, AI service providers seeking integrated solutions using HPE ProLiant servers and Aruba networking, and regulated organizations HPE calls "sovereigns." HPE introduces the Cray Supercomputing GX240 blade, supporting up to 8 nodes with two Nvidia Vera CPUs each, delivering 1,408 ARM-based cores per blade and 56,320 cores per GX5000 rack. AI service providers, or "neoclouds," benefit from the unified Vera Rubin architecture's ability to scale flexibly while eliminating system bottlenecks through integrated design and liquid cooling.
#ai-infrastructure-integration #hpe-cray-supercomputing #nvidia-vera-architecture #enterprise-ai-solutions #liquid-cooled-systems
Read at Techzine Global