One of the big claims about liquid cooling is that it will deliver hot water good enough to be a valuable resource for district heating systems. As liquid-cooled systems move into real use, I am not sure how well that promise will be fulfilled.

In the past, data centers have tried to share their waste heat, but air-cooled data centers produce only warm air. Modern district heating systems can make use of it, especially with a little extra electricity to boost the temperature using heat pumps. But the heat is low-grade, and it loses value at each step, as you transfer it to water and/or pump it to where it is needed.
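The cost of that temperature boost can be sketched with a simple Carnot-limit estimate. This is a hypothetical calculation, not from the article, and the numbers (source/sink temperatures, the 50 percent-of-Carnot efficiency) are illustrative assumptions; but it shows why source temperature matters: the cooler the captured heat, the more electricity the heat pump burns per unit of heat delivered.

```python
# Rough heat-pump estimate: electricity needed to boost low-grade
# data center heat up to district-heating temperature.
# Hypothetical numbers; real heat pumps reach roughly half of Carnot.

def carnot_cop_heating(source_c: float, sink_c: float) -> float:
    """Ideal (Carnot) coefficient of performance for heating."""
    source_k = source_c + 273.15
    sink_k = sink_c + 273.15
    return sink_k / (sink_k - source_k)

def electricity_per_kwh_heat(source_c: float, sink_c: float,
                             efficiency: float = 0.5) -> float:
    """kWh of electricity per kWh of heat delivered, assuming the
    heat pump achieves `efficiency` times the Carnot COP."""
    cop = efficiency * carnot_cop_heating(source_c, sink_c)
    return 1.0 / cop

# Heat captured at 30 C (air-cooled exhaust) vs 60 C (hot coolant),
# both boosted to a 70 C district-heating loop:
print(round(electricity_per_kwh_heat(30, 70), 2))  # prints 0.23
print(round(electricity_per_kwh_heat(60, 70), 2))  # prints 0.06
```

Under these assumptions, the air-cooled source needs roughly four times the electricity per unit of delivered heat, which is the economic case for hotter coolant in the first place.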

When we first encountered liquid cooling, we were told it would change all that. Heat is taken from the electronics directly into a cooling loop of fluid, and thence to a water system, which can take this good, high-temperature heat and deliver it where it is needed.

The reality of liquid cooling

In practice, things are going to get tricky. It is not always clear where and how liquid cooling will be implemented, and it will be operating under real constraints.

Liquid cooling vendors are lining up to offer products to general data centers, and at one level they are obviously the answer. Air cooling, no matter how hard people work on it, won't keep up with the demands for heat removal, so liquid cooling will have to come in.

But the actual steps to implementing and integrating liquid cooling into data centers are not clear.

Right now, few colocation vendors are successfully offering liquid cooling to customers, and the leaders in liquid cooling tend to be large hyperscalers or HPC operators that can build single-purpose systems.

Most of these are located where land and power are cheap and available, outside the urban centers where district heating systems, and heat customers, exist. Remote, liquid-cooled HPC and hyperscale systems won't find customers for their waste heat.

Data centers in urban areas are likely to have trouble implementing liquid cooling systems that can pump guaranteed, satisfactory heat out to users. For instance, will a colo provider sign a 10-year contract to deliver waste heat, when that delivery depends on continually keeping customers running GPUs in a bath of coolant in the facility?

Even if the data center has a guaranteed supply of heat from GPUs, liquid cooling isn’t a magic solution that will automatically generate high-quality heat output. There’s a fundamental issue that hasn’t really been talked about: the heat output isn’t the primary goal of that system.

Liquid-cooled servers will always be optimized for their primary purpose: removing heat from the electronics, not producing heat as an output. They are designed to protect valuable electronic equipment, and have to do that job before the heat output is even considered.

As Accelsius CEO Josh Claman told me earlier this year: “High-density racks can have up to $1 million of equipment in them.” Liquid cooling will need to preserve that value.

For that reason, in other conversations, I've been hearing that liquid cooling systems that could be generating very hot water may actually be run much cooler, to keep the expensive GPUs well away from their thermal design points (TDPs).

In other words, the flow rate may be increased, with the result that the water output temperature is lower than would be ideal for heating purposes.

It's a logical outcome. For a given heat load, hotter output water means hotter coolant at the chip, and that means allowing the chip temperature to go up. Good-quality waste heat, at some level, runs counter to the performance and longevity of the chips.
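The flow-rate trade-off follows directly from the heat balance Q = ṁ·cp·ΔT: for a fixed heat load, raising the coolant flow lowers the outlet temperature in proportion. A minimal sketch, with hypothetical rack numbers and water as the coolant:

```python
# For a fixed heat load Q, outlet temperature falls as flow rate rises:
# Q = m_dot * cp * (T_out - T_in), so T_out = T_in + Q / (m_dot * cp).

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def outlet_temp_c(heat_load_w: float, inlet_c: float,
                  flow_kg_per_s: float) -> float:
    """Coolant outlet temperature for a given heat load and flow rate."""
    return inlet_c + heat_load_w / (flow_kg_per_s * CP_WATER)

# Hypothetical 100 kW rack fed with 17 C water:
print(round(outlet_temp_c(100_000, 17.0, 2.0), 1))  # prints 28.9
print(round(outlet_temp_c(100_000, 17.0, 6.0), 1))  # prints 21.0
```

Tripling the flow cuts the temperature rise to a third: the chips stay cooler, but the water leaving the rack is far less useful to a heating network.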

Anecdotally, I'm hearing from people who expected their customers to go with the ASHRAE W27 class (water at up to 27°C/81°F) and are instead opting for the W17 option (17°C/63°F). The reality, I'm being told, is counter to the expectation.

On the other side of the coin, warm water will be progress compared with warm air, and My Truong, Field CTO of Equinix and head of the SSIA industry group (formerly Open19), has told me that groups like SSIA and the Open Compute Project will be working to focus attention on the warmer options.

Heard this before?

If the water is cooler than expected, there's a certain irony. Air-cooled data centers have been criticized for wasting energy by over-cooling: air conditioning systems keeping the entire data center at 20°C use more energy than necessary and, incidentally, make the output air cooler.

Now it seems that, when we finally get them into the data centers, liquid cooling systems may also end up over-cooling. And it’s happening for the same reason as over-cooling in air-based systems: to preserve the electronics.

At the same time, these systems will be going into buildings that use air conditioning to vent heat. Data centers will be buildings with hybrid cooling systems, and many of them will simply give up their heat to the building's existing air-cooling plant, so that potentially valuable heat will be vented to the outside world.

It’s clear that liquid cooling systems will, in general, produce more and better heat than air-cooled buildings. But the great revolution that has been promised? That might turn out to be little more than hot air.