
"The CTO of colocation provider Digital Realty explains that without power, there are no servers, no storage, no GPUs, and none of those AI tokens that have Wall Street in a frency. But power isn't only the limiting factor in the US and much of the world, it has also upended the way datacenters are designed and built."
""More times than not, customers are like, 'Okay, I broke through, and I'm free of the supply constraint, I have my chips,' and I have to say: slow down; there's a lot of other things you're going to need," Sharp said. For X amount of GPUs you now need so many switches, storage servers, power delivery units, and coolant distribution units. In the case of Nvidia's densest systems, existing datacenters may not even be able to support the physical load."
Power availability and rack density now determine datacenter capability, because GPUs and related hardware require far greater electrical and cooling capacity than traditional servers. GPU servers have evolved from modest air-cooled machines to rack-scale liquid-cooled systems that can consume tens to hundreds of kilowatts. This evolution forces upgrades to power delivery, switches, storage, and coolant distribution, and can exceed floor-loading and electrical capacity in existing facilities. Operators must plan infrastructure long before procuring silicon, and datacenter design and construction practices are being rethought to accommodate higher per-rack power, thermal management, and the permanence of built capacity.
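To make that scale concrete, here is a minimal back-of-envelope sketch of the facility-sizing arithmetic the summary describes. All of the per-rack figures (72 GPUs per rack, 120 kW draw, 1,400 kg loaded weight, PUE of 1.2) are illustrative assumptions chosen to resemble a dense liquid-cooled rack-scale system; they are not quotes from the article or vendor specifications.

```python
import math

# Illustrative assumptions for a dense, liquid-cooled rack-scale system.
# These are placeholder figures, not vendor specifications.
GPUS_PER_RACK = 72       # GPUs housed in one rack
RACK_POWER_KW = 120.0    # assumed electrical draw per rack
RACK_WEIGHT_KG = 1400.0  # assumed loaded rack weight (floor-loading check)
PUE = 1.2                # assumed power usage effectiveness (cooling overhead)

def facility_estimate(num_gpus: int) -> dict:
    """Rough power, cooling, and floor-load figures for a GPU count."""
    racks = math.ceil(num_gpus / GPUS_PER_RACK)
    it_load_mw = racks * RACK_POWER_KW / 1000      # IT load alone
    grid_draw_mw = it_load_mw * PUE                # plus cooling/distribution
    return {
        "racks": racks,
        "it_load_mw": round(it_load_mw, 2),
        "grid_draw_mw": round(grid_draw_mw, 2),
        "floor_load_tonnes": round(racks * RACK_WEIGHT_KG / 1000, 1),
    }

if __name__ == "__main__":
    # A mid-size training cluster: ~228 racks, ~33 MW from the grid.
    print(facility_estimate(16_384))
```

Even under these rough assumptions, a cluster of this size demands grid power in the tens of megawatts and hundreds of tonnes of concentrated floor load, which is why operators must secure power and structural capacity long before the chips arrive.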
Read the full article at The Register.