On July 4, 2024, the American public broadcasting news program PBS NewsHour reported a story on the impact of AI on data center power usage. Economics correspondent Paul Solman reported that the roughly 11,000 data centers worldwide consume between two and eight percent of global electricity; the low end of that range, two percent, equals the power consumed by the Netherlands.

Some experts predict that, due to AI workloads, data center energy consumption could double in two years, to a level equal to the energy usage of Japan. Two days earlier, Google reported that it will not meet its goal of reaching net-zero emissions across its operations and supply chain by 2030 because of the impact of AI: its emissions grew 13 percent in 2023 over the previous year and 48 percent compared with the base year of 2019.

Anthropic CEO Dario Amodei has estimated that current AI models will cost up to $1 billion to train, compared with the estimated $100 million OpenAI spent on GPT-4. In 2023, GPT-4 was reported to have been trained on 30,000 GPUs, which OpenAI CEO Sam Altman confirmed cost $100 million. With 3.8 million GPUs shipped to data centers in 2023, the power needed just for those GPUs (assuming 700W per GPU and 61 percent utilization) is estimated to be equivalent to that of 1.3 million homes.
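As a rough sanity check, the arithmetic behind that claim holds up. Here is a minimal sketch; the average-home figure of roughly 10,500 kWh per year (about 1.2 kW of continuous draw) is my assumption, not from the report:

```python
# Back-of-the-envelope check of the GPU power estimate above.
gpus_shipped = 3_800_000          # GPUs shipped to data centers in 2023
watts_per_gpu = 700               # assumed power draw per GPU
utilization = 0.61                # assumed average utilization

gpu_power_watts = gpus_shipped * watts_per_gpu * utilization
avg_home_watts = 10_500 * 1000 / 8760   # ~10,500 kWh/year -> average watts (assumption)

print(f"GPU power draw: {gpu_power_watts / 1e9:.2f} GW")
print(f"Equivalent US homes: {gpu_power_watts / avg_home_watts / 1e6:.2f} million")
# -> about 1.62 GW, or roughly 1.35 million homes
```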

Schneider Electric has published estimates that in 2023, total data center power consumption was 54GW, with AI workloads accounting for about eight percent, or 4.3GW, split 20 percent for training and 80 percent for inference.

Forecasting power consumption in 2028, Schneider Electric estimates that total data center power consumption will reach 90GW, with AI workloads consuming between 13.5GW and 20GW and accounting for 15-20 percent of the total. The distribution of AI workloads is forecast to shift to 15 percent training and 85 percent inference by 2028.
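These splits are straightforward to reproduce from the published figures; a minimal sketch:

```python
# Reproduce the Schneider Electric breakdowns cited above.
total_2023_gw = 54
ai_2023_gw = total_2023_gw * 0.08       # ~8% of total
training_gw = ai_2023_gw * 0.20         # 20% of AI workloads
inference_gw = ai_2023_gw * 0.80        # 80% of AI workloads
print(f"2023 AI total: {ai_2023_gw:.1f}GW "
      f"(training {training_gw:.1f}GW, inference {inference_gw:.1f}GW)")
# 2023 AI total: 4.3GW (training 0.9GW, inference 3.5GW)

print(f"2028 AI share, low end: {13.5 / 90:.0%}")   # 15% of the forecast 90GW
```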

The acquisition of electricity is becoming a limiting factor in running data centers, and hyperscale customers have turned to nuclear power as a way to supply their data centers with zero-carbon generation. However, with the average US nuclear power plant now 42 years old, according to the US Department of Energy, nuclear power does not fundamentally solve the data center power problem.

Equinix’s 260 data centers consume as much electricity as 750,000 US homes, according to estimates. In my own region, the San Francisco Bay Area, PG&E, the utility serving Northern California, has reported requests for 3.5GW of data center power through 2029, demand that will require between $500 million and $1.6 billion in capital projects to meet. Data center operator Digital Core REIT has reported that incremental power allocations for data centers in Northern California are on hold through 2028.

University of Southern California researcher and professor Kate Crawford says that small language models and government regulation have the potential to limit the power consumed by AI workloads.

Equinix’s VP of global sustainability, Christopher Wellise, says that AI also has the potential to save energy, as with customer Air Canada’s route optimization, and that the energy used to train large language models should be thought of as stored energy that can be utilized in the future for other productive work.

LinkedIn founder Reid Hoffman said that AI applied to data center operations 10 years ago saved 15 percent of their power consumption. He believes that AI used to improve the operation of the electrical grid will likewise improve its efficiency.

Google’s DeepMind AI research lab recently published research showing that a new approach to training, dubbed JEST for Joint Example Selection, provides 10 times the power efficiency through batch-selection techniques that use an AI model to guide the training process. Meanwhile, the European Union will require data centers to report their energy and water usage this fall and to list any steps being taken to reduce it.

So, what are data center operators to do?

While there is potential to reduce the power that AI workloads consume through new algorithms and approaches, more power-efficient GPUs, and new sources of power, today direct-to-chip liquid cooling (DLC) offers the most immediate opportunity to reduce PUE and improve power efficiency; a PUE of 1.06 has been achieved in practice with DLC.
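To put that figure in context: PUE is total facility power divided by IT equipment power, so lower is better and 1.0 is the ideal. A minimal sketch, assuming a hypothetical 10MW IT load and an air-cooled baseline PUE of about 1.5 (both numbers are my assumptions for illustration):

```python
# PUE = total facility power / IT equipment power (1.0 is the ideal).
it_load_mw = 10.0                       # hypothetical IT load

for pue in (1.5, 1.06):                 # assumed air-cooled baseline vs. DLC result
    total_mw = it_load_mw * pue
    overhead_mw = total_mw - it_load_mw
    print(f"PUE {pue}: facility draws {total_mw:.1f}MW ({overhead_mw:.1f}MW overhead)")
# PUE 1.5: facility draws 15.0MW (5.0MW overhead)
# PUE 1.06: facility draws 10.6MW (0.6MW overhead)
```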

In addition, the latest high core-count server CPUs have improved performance per watt, allowing data center footprint reduction, and the associated power savings, while achieving the same level of performance as older systems. Many of these systems will also benefit from DLC because of the increased processor TDP that comes with higher core counts.
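A quick consolidation sketch shows why this matters; the server specs below are hypothetical, chosen only to illustrate the footprint math, and do not come from any vendor datasheet:

```python
# Hypothetical consolidation math: hit a fixed core count with fewer,
# denser servers. Specs below are illustrative assumptions only.
cores_needed = 4096

old = {"cores": 32, "watts": 400}       # assumed legacy server
new = {"cores": 128, "watts": 750}      # assumed current-generation server

for name, srv in (("old", old), ("new", new)):
    count = -(-cores_needed // srv["cores"])      # ceiling division
    print(f"{name}: {count} servers, {count * srv['watts'] / 1000:.1f}kW")
# old: 128 servers, 51.2kW
# new: 32 servers, 24.0kW  -> fewer nodes and roughly half the power
```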

While many data center operators (CSPs or on-premises) want the latest and fastest CPU- and GPU-based systems, there is an opportunity to investigate the right match between the agreed-upon SLAs and the energy the servers require.

Matching performance to energy usage for AI workloads will be an important metric moving forward, and each generation of servers built on new CPU and GPU technology is showing a tremendous increase in that metric, as the sketch below illustrates.
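Here is how that performance-per-watt comparison might be computed; the throughput and power figures are hypothetical placeholders, not measurements:

```python
# Hypothetical performance-per-watt comparison across server generations.
systems = {
    "previous_gen": {"throughput": 1000, "watts": 500},   # e.g., inferences/sec
    "current_gen":  {"throughput": 2400, "watts": 700},
}
for name, s in systems.items():
    print(f"{name}: {s['throughput'] / s['watts']:.2f} units of work per watt")
# previous_gen: 2.00, current_gen: 3.43 -> ~1.7x the work per watt
```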