Flex appeal: UK datacenter cuts AI power draw 40% on command
Briefly

"This involved more than 200 simulated grid event notifications, sent to the site to test its ability to dynamically adjust the cluster's power consumption. This was achieved successfully, cutting power demand by up to 40 percent while key tasks continued to run as normal, according to energy provider National Grid."
"A whitepaper provided by National Grid reveals that power control is largely achieved by pausing or deprioritizing jobs running on the GPUs, or shifting workloads to a later time, rather than the blunt instrument of powering down parts of the infrastructure."
"While some AI workloads - such as inference - are latency sensitive, others including training and fine-tuning are more throughput-intensive. These latter tasks also typically include natural "flex points" like checkpoint intervals, where processing can be paused, the whitepaper explains."
A UK datacenter operated by Nebius demonstrated the ability to dynamically reduce power consumption from AI infrastructure in response to grid events. Over five days in December, the facility tested its response to over 200 simulated grid notifications using Nvidia Blackwell Ultra GPUs. The trial achieved up to 40 percent power reduction while maintaining normal operation of critical workloads. Power control was achieved primarily through pausing or deprioritizing jobs and shifting workloads to later times, rather than shutting down infrastructure. The project involved National Grid, Nebius, Emerald AI, and the Electric Power Research Institute. AI workloads like training and fine-tuning, which are throughput-intensive rather than latency-sensitive, proved suitable for flexible scheduling at natural checkpoint intervals.
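The "flex point" idea described above can be sketched as a toy training loop that only reacts to grid events at checkpoint boundaries, deferring work instead of powering hardware down. This is an illustrative assumption of how such a scheduler might behave, not the actual Nebius/Emerald AI implementation; all names here (`FlexTrainer`, `grid_signal`) are hypothetical.

```python
class FlexTrainer:
    """Toy training loop with 'flex points' at checkpoint intervals.

    At each checkpoint boundary the loop polls a grid-event signal;
    while a curtailment event is active, throughput-intensive work
    is deferred rather than the infrastructure being powered down.
    (Hypothetical sketch -- not the whitepaper's implementation.)
    """

    def __init__(self, total_steps, checkpoint_every, grid_signal):
        self.total_steps = total_steps
        self.checkpoint_every = checkpoint_every
        self.grid_signal = grid_signal  # callable -> True while curtailment is active
        self.completed = 0
        self.deferrals = 0

    def run(self):
        while self.completed < self.total_steps:
            # Flex point: the grid signal is consulted only at checkpoint
            # boundaries, so work between checkpoints is never interrupted.
            at_checkpoint = self.completed % self.checkpoint_every == 0
            if at_checkpoint and self.grid_signal():
                self.deferrals += 1  # pause/defer the next chunk of work
                continue
            self.completed += 1  # simulate one training step
        return self.completed


# Simulated grid event: active for the first three polls, then cleared.
events = iter([True, True, True] + [False] * 100)
trainer = FlexTrainer(total_steps=10, checkpoint_every=5,
                      grid_signal=lambda: next(events))
done = trainer.run()
```

In this sketch the loop defers three times while the simulated event is active, then completes all ten steps once it clears, mirroring how a latency-tolerant training job could ride through a grid notification without losing progress.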
Read at The Register