Your datacenter's power architecture called. It's not happy
Briefly

"GPU clusters and AI accelerators don't operate on the old rules. They don't ask for 15 kW. They demand hundreds of kilowatts per rack, an order-of-magnitude leap that legacy electrical and thermal architectures were never designed to survive. The comfortable assumptions baked into decades of datacenter design are now liabilities, and the industry is facing a reckoning it can no longer defer."
"The Nvidia GB200 NVL72 rack-scale system, for example, requires 120 kW per rack. At these power levels, the physics of low-voltage distribution become the bottleneck. Delivering 120 kW at 48V requires currents exceeding 2.5 kA. Handling thousands of amperes within a rack means thick busbars, heavy copper mass, overheating connectors, significant resistive losses, and serviceability headaches."
"One emerging solution to this problem is to increase the distribution voltage (400V or 800V), which reduces the current at the same power level. This is why the industry is now moving to high-voltage DC (HVDC) power architecture for next-generation AI factories."
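The arithmetic behind these figures is straightforward Ohm's-law scaling: current falls linearly with voltage at constant power, and resistive loss falls with the square of the current. A minimal sketch, using the article's 120 kW rack figure; the distribution resistance is an illustrative assumption, not a measured value:

```python
# Current needed to deliver a fixed rack power at different bus voltages,
# and the resulting I^2 * R loss across an assumed distribution resistance.
RACK_POWER_W = 120_000        # 120 kW per rack (Nvidia GB200 NVL72, per the article)
BUS_RESISTANCE_OHM = 0.0001   # 0.1 milliohm -- illustrative busbar/connector resistance

for voltage_v in (48, 400, 800):
    current_a = RACK_POWER_W / voltage_v            # I = P / V
    loss_w = current_a ** 2 * BUS_RESISTANCE_OHM    # P_loss = I^2 * R
    print(f"{voltage_v:>4} V: {current_a:6.0f} A, resistive loss {loss_w:7.1f} W")
```

At 48V the rack draws 2,500 A and the assumed 0.1 mΩ path dissipates 625 W; at 400V the same power needs only 300 A and the loss drops to 9 W, roughly a 69x reduction, because loss scales with the square of the current.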
Traditional datacenter power architectures operating at 12V and 48V were designed around 10-15 kW per rack of steady-state CPU and storage demand. GPU clusters and AI accelerators fundamentally changed this paradigm, requiring hundreds of kilowatts per rack. Systems like Nvidia's GB200 NVL72 demand 120 kW per rack, which means currents exceeding 2.5 kA at 48V. Such extreme current forces thick busbars and heavy copper mass, and brings connector overheating, significant resistive losses, and serviceability problems. The industry is therefore transitioning to high-voltage DC (HVDC) power architectures operating at 400V or 800V, which cut current for the same delivered power and sidestep the physical limits of low-voltage distribution.
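The "thick busbars, heavy copper mass" point can also be made concrete by sizing the conductor. A rough sketch, assuming a conservative current density for a passively cooled copper busbar; the 2 A/mm² figure is an assumption for illustration, since real ratings depend on cooling, duty cycle, and allowable temperature rise:

```python
# Approximate copper busbar cross-section needed to carry the rack current,
# assuming a fixed current density -- an illustrative figure, not a design rule.
RACK_POWER_W = 120_000            # 120 kW per rack, per the article
CURRENT_DENSITY_A_PER_MM2 = 2.0   # assumed conservative density for passive cooling

for voltage_v in (48, 800):
    current_a = RACK_POWER_W / voltage_v
    area_mm2 = current_a / CURRENT_DENSITY_A_PER_MM2
    print(f"{voltage_v:>3} V: {current_a:6.0f} A -> ~{area_mm2:6.0f} mm^2 of copper")
```

Under this assumption, a 48V bus needs on the order of 1,250 mm² of copper (e.g., a 125 mm x 10 mm bar), while an 800V bus needs only about 75 mm², which is the serviceability and weight argument for HVDC in a single comparison.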
Read at The Register