SC25 gets heavy with mega power and cooling solutions
Briefly

"SC25 Hydrogen-fueled gas turbines, backup generators, and air handlers probably aren't the kinds of equipment you'd expect on the show floor of a supercomputing conference. But your expectations would be wrong. At SC25, datacenter physical infrastructure took center stage with sprawling dioramas of evaporative cooling towers and coolant distribution units (CDUs) filling massive booths rivaling those of major chip vendors and OEMs like Nvidia, AMD, and HPE Cray. Among the largest of these displays were those from Mitsubishi Heavy Industries and Danfoss,"
"HPC practitioners are no strangers to dense liquid-cooled systems. HPE's Cray EX4000 systems can be specced with up to 512 AMD MI300A APUs, totaling more than 293 kW per cabinet. Lawrence Livermore National Laboratory's El Capitan, the number one ranked system on the Top500 ranking of publicly known supercomputers, features 87 of these cabinets. But while El Capitan's 44,544 APUs or Aurora's 63,744 GPUs are certainly among the largest scientific instruments ever built, they pale in comparison to the AI superclusters being built for OpenAI"
At SC25, datacenter physical infrastructure commanded major exhibit space, with hydrogen-fueled gas turbines, backup generators, air handlers, evaporative cooling towers, and coolant distribution units showcased by industrial OEMs. Large displays from Mitsubishi Heavy Industries and Danfoss emphasized power-plant-scale air handling and facility cooling equipment. Vertiv built a full-scale data hall mockup near Nvidia's booth to demonstrate liquid cooling and power delivery for dense GPU racks. AI datacenter construction has emerged as a major commercial driver, making power and cooling critical bottlenecks for high-density facilities. Dense liquid-cooled HPC systems already exist in scientific supercomputers, but the emerging AI superclusters far exceed them in scale.
Read at The Register