
"Nvidia's Grace Hopper may not be its latest accelerators, but they have proven to be among the most energy-efficient ever made. The parts combine Nvidia's 72-core Grace CPUs with a 144 GB H100 graphics accelerator. Each of these 1,000-watt chips is capable of delivering upwards of 67 teraFLOPS of FP64 matrix math for highly precise scientific workloads, or as much as 4 petaFLOPS of sparse FP8 for machine learning tasks."
"In this fall's Top500 ranking of the most powerful publicly known supers, Olivia's GPU partition took 134th place, delivering 13.2 petaFLOPS in the High-Performance Linpack (HPL) benchmark. The combination of the Nvidia-based GPU partition and the AMD-based CPU section allows the machine to address a broader range of high-performance computing and AI workloads. While many scientific workloads benefit heavily from the highly parallel processing afforded by GPUs, not all do. The two partitions enable Olivia to address both possibilities."
Olivia is a national supercomputer built by Hewlett Packard Enterprise for Sigma2 and housed in the underground Lefdal Mine datacenter. The system combines 504 AMD Turin CPUs and 304 Nvidia Grace Hopper Superchips with 5.3 petabytes of HPE Lustre storage, connected via 200 Gbps HPE Slingshot 11 NICs. Each superchip pairs a 72-core Grace CPU with a 144 GB H100 accelerator, delivering about 67 teraFLOPS of FP64 and up to 4 petaFLOPS of sparse FP8. Olivia's GPU partition achieved 13.2 petaFLOPS in HPL, ranking 134th on the Top500. The machine supports renewable energy, climate, marine, health, and language research, offers access to researchers nationwide, multiplies available computing capacity sixteenfold, and cuts power consumption by roughly 30 percent.
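For scale, here is a quick back-of-the-envelope check in Python using only the figures quoted above; the roughly 65 percent HPL efficiency it yields is derived here, not a published number.

```python
# Sanity-check Olivia's GPU partition against its HPL result,
# using only the figures cited in the article above.

GH200_CHIPS = 304          # Grace Hopper Superchips in the GPU partition
FP64_TFLOPS_PER_CHIP = 67  # FP64 matrix throughput per chip (teraFLOPS)
HPL_PFLOPS = 13.2          # measured High-Performance Linpack result

peak_pflops = GH200_CHIPS * FP64_TFLOPS_PER_CHIP / 1000  # ~20.4 PFLOPS
efficiency = HPL_PFLOPS / peak_pflops                    # ~0.65, a derived estimate

print(f"Theoretical FP64 peak: {peak_pflops:.1f} PFLOPS")
print(f"HPL efficiency: {efficiency:.0%}")
```

That ratio of measured HPL throughput to theoretical peak is in the range typical of large GPU-accelerated systems.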
Read at The Register