How data centers are making the giant leap to 1 megawatt per rack
"Data centers already require a lot of electricity, but part of that demand is based on inefficiency. The major players in global IT infrastructure have therefore set their sights on streamlining the power supply to server racks, with significantly fewer transformations between AC and DC and higher voltages within data centers themselves. All this was once impractical; IT equipment requires low voltages and has traditionally been focused on energy efficiency."
"The upward trend in power density is most clearly visible on the roadmap of AI chip manufacturer Nvidia. Whereas the A100 GPUs from 2022 reached up to 25 kilowatts per rack, the latest Blackwell generation of AI chips has already increased this to 132 kilowatts per rack. In addition, 72 GPUs are integrated into a single Nvidia system, but Nvidia customers typically place such systems together in large quantities."
Data center power demand is rising as AI workloads drive much higher per-rack consumption. GPUs used for training and inference perform highly parallelized calculations, yielding far greater power density than traditional CPU racks. Leading chipmakers have pushed wattage per rack from tens of kilowatts to well over a hundred kilowatts, with single systems integrating dozens of GPUs and typically deployed in large clusters. The higher power density incentivizes fewer AC-to-DC conversions and higher internal voltages to improve efficiency, while escalating cooling requirements prompt a shift to liquid cooling. Continued chip advances promise further increases in rack power, pushing designs toward megawatt-scale racks.
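The pull toward higher in-rack voltages follows from basic electrical arithmetic: at a fixed power draw, the current (and with it conductor size and resistive loss, which scales with the square of the current) drops as the distribution voltage rises. A minimal sketch, with a hypothetical 1-megawatt rack and a few assumed bus voltages:

```python
# Current required to deliver 1 MW at different distribution voltages (I = P / V).
# The 1 MW rack and the voltage choices are assumptions for illustration.
rack_power_w = 1_000_000

for volts in (48, 400, 800):
    amps = rack_power_w / volts
    print(f"{volts:>4} V bus -> {amps:>8,.0f} A into the rack")
```

At 48 V the rack would need more than 20,000 amps of busbar capacity, which is the practical reason megawatt-class designs lean on higher-voltage distribution.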
Read at Techzine Global