Liquid cooling is supposed to be driving a revolution in heat management within data centers. The old way, air cooling, is on the way out, we are told. It must go, to make way for a world where servers are cooled with water, dielectrics, and other fluids.

In real life, revolutions are rarely so neat and tidy.

There is no doubt that the densities of servers in racks are reaching the point where some of them can no longer be cooled efficiently with air. Liquid cooling has a vast set of benefits, including increased efficiency, improved exclusion of dust and dirt, and quieter operation - and it delivers waste heat in a form where it can be used elsewhere.

But still, air cooling vendors have a backlog of orders that shows no sign of diminishing, and new data centers are still being designed around chillers, HVAC units, and other air-cooled equipment.

How do we explain this? And how will today’s air-cooled environments coexist with tomorrow’s liquid-cooled systems?

Palette of cooling

The story that air will give way to liquid cooling is wrong on two counts, says specialist cooling consultant Rolf Brink, the Open Compute Project lead for liquid cooling: “Air cooling will never disappear. And it is also incorrect to say they [data centers] have always been air-cooled. It's not a battle about which technology will be left at the end of the road.”

“You have to look at the IT equipment and see what it needs,” says Brink. “IT equipment has various requirements for cooling, and this is where the palette of cooling technologies that you should be considering is greatly enriched these days.

“Cold-plate is becoming mainstream this year or next,” says Brink. “Immersion is going to take a few more years before it becomes mainstream. But not all IT equipment is suitable for immersion or cold plate or air cooling alone.

“That is the big paradigm shift,” he says. “We're going to see more hybrid environments where the underlying infrastructure and facilities can cater to both air and liquid cooling. And that is what the industry needs to get prepared for.”

“We're in this transition phase, where we see both extended demand for air cooling, and a lot of newer liquid cooling requirements coming in,” says Stuart Lawrence, VP of product innovation and sustainability at Stream Data Centers. “So we find configurability is the most important thing right now.”

As a data center operator, Lawrence has to deal with what his customers - mostly large players taking a whole building at a time - are ready for: “We're seeing some customers playing around with some liquid cooling direct to chip, either single-phase fluids, or phase-changing fluids or cold plates. We aren't seeing a lot of immersion.”

The air perspective

Air-conditioning vendors admit that things must change. “At some point, air cooling has its limitations,” says Mukul Anand, global director of business development for applied HVAC products at Johnson Controls. “There's only so much heat you can remove using air.”

As he explains, it takes a lot of air to cool a high-energy chip: “The velocity of air becomes very high, noise in the white space becomes a challenge, and the server fan power consumption increases - which does not show itself in the PUE calculation.”
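
The reason fan power stays hidden is in how PUE is defined: total facility energy divided by IT energy, with server fans metered as part of the IT load rather than the cooling plant. The minimal sketch below, using purely illustrative load figures rather than anything from Anand, shows how rising fan power can leave the reported PUE flat or even slightly better.

def pue(it_kw, cooling_kw, other_facility_kw):
    """PUE = total facility power / IT power (all figures in kW, illustrative)."""
    return (it_kw + cooling_kw + other_facility_kw) / it_kw

# Hypothetical 1MW-class hall: compute load plus server fans, plus facility loads.
compute_kw, fan_kw, cooling_kw, other_kw = 900, 50, 300, 50
print(f"Baseline PUE:   {pue(compute_kw + fan_kw, cooling_kw, other_kw):.3f}")

# Hotter chips push the server fans to draw twice as much power. The extra 50kW
# sits inside the IT denominator, so the reported PUE barely moves - it even
# improves slightly - although the facility burns more energy per unit of compute.
fan_kw = 100
print(f"Higher-fan PUE: {pue(compute_kw + fan_kw, cooling_kw, other_kw):.3f}")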

He sees direct-to-chip, immersion, and two-phase cooling growing, and notes that air-cooled systems often have a water circuit, as well as using water in evaporative systems. Data centers are trying to minimize water consumption while switching off compressors when possible, and water cooling inside the white space can make their job easier.

“We've seen a distinct shift of municipalities and communities away from using water for data center cooling,” says Anand. “A shift from direct evaporative cooling technologies towards either air-cooled chillers or water-cooled chillers and dry coolers.”

As liquid cooling comes inside the white space, he says: “We have to make sure we completely understand the fluids that will be used (water, glycol, etc.) and make sure that we converge on an agreed liquid cooled server technology, and use economization as much as possible.”

“One of the direct consequences is to use the chilled fluid temperature as high as the IT equipment will allow. 30°C (86°F) is being looked at as a median number. That is certainly higher than the chilled water fluid used in data center air cooling systems today.”

Air cooling systems will have to adapt, he says: “We must launch and use products that are just as comfortable and efficient providing chilled fluid at 30°C.”

With that temperature in their cooling systems, data centers can spend more time on free cooling using outside air. “That allows for a whole lot of hours in the free cooling method where the compressors do not consume any significant amount of power. In places like Loudoun County, Virginia, and in Silicon Valley, we're using as much economization as possible.”
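
How many free-cooling hours a 30°C setpoint buys depends on the local climate, but the arithmetic is simple: count the hours when outdoor air, minus an approach margin, can deliver the supply temperature on its own. The sketch below uses a synthetic temperature profile and an assumed 5°C approach rather than real weather data, so the counts are illustrative only.

import math

def free_cooling_hours(hourly_temps_c, supply_temp_c, approach_c=5.0):
    """Count hours where outdoor air alone can produce the chilled-fluid setpoint."""
    return sum(1 for t in hourly_temps_c if t <= supply_temp_c - approach_c)

# Stand-in for a year of hourly dry-bulb temperatures (8,760 values); a real
# study would use TMY weather data for the site in question.
hourly_temps_c = [
    12 + 12 * math.sin(2 * math.pi * h / 8760)      # seasonal swing
    + 5 * math.sin(2 * math.pi * (h % 24) / 24)     # day-night swing
    for h in range(8760)
]

for setpoint_c in (18, 24, 30):
    hours = free_cooling_hours(hourly_temps_c, setpoint_c)
    print(f"Supply at {setpoint_c}°C: ~{hours:,} compressor-free hours per year")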

In this world, 10 percent of the racks in a data center can move to liquid. “You have a cooling architecture that can cool 90 percent air cooled servers, and gradually convert this data center to more and more liquid cooled.”

In the best case, many of the liquid cooling configurations defined by ASHRAE rarely need chillers and mechanical cooling, and those chillers become a backup system for emergencies, says Anand: “It is for those warm afternoons. You must have a generator for the times when you don't have power. You must have chillers for the few hours in a year that you cannot get economization to do the cooling job for you.”

Those chillers could still be challenged, he says, because as well as running denser white space, “owners and operators are leaning towards multi-story data centers.”

These chillers will need to be built with greater concern for embodied carbon, both in the equipment itself and in its supply chain: “If you're using less metal and lighter pieces of equipment, the carbon generated through the fabrication and shipping processes is lower.”

Chillers are placed on the roof of the building, and denser, multi-story facilities mean they are packed together more tightly, causing a “heat island” problem: “When condensing units or chillers with condensers are spread far apart on a roof, one is not influenced by the other. When you have 32 or 64 chillers close together on a rooftop space, the discharge air from one goes into the condenser of the next one, adversely impacting its efficiency and capacity.”

Extending air cooling

Back inside the white space, Lawrence sees a lot of liquid cooling implementations as simply extending the air cooling provided in the building: “It's direct liquid to chip, but the liquid goes to a rear door heat exchanger or a sidecar heat exchanger.”

Precision cooling from companies like Iceotope, where servers remain in regular racks and liquid gets to the specific parts that need cooling, is a mid-point between direct-to-chip or cold plate and the more extreme idea of total immersion in tanks sold by the likes of GRC and Asperitas.

Fan wall – Facebook

Direct-to-chip and precision liquid cooling products can be installed in an air cooled environment, says Lawrence: “They reject heat by means of an air-to-liquid heat exchange system within an air cooled data center.”

That may be disappointing to liquid cooling revolutionaries, but there’s a reason, says Lawrence: “Most colocation facilities aren't really ready to go direct liquid.”

He sees liquid cooling as additive, where it is required: “I think we will get this extension of air cooling where they will take 10kW racks and make four rack positions into 40kW racks.” Those high-density racks have an extra heat exchanger or “sidecar.”

“In the last 10 years, the majority of the products that I've deployed are air cooled with an internal liquid cooling loop,” says Dustin Demetriou, IBM Systems leader for sustainability and data center innovation. “As far back as 2016 we were doing this in a financial services company because they had basically DX chiller systems with no chilled water, but they needed a high power rack.”

“The great part about direct-to-chip liquid cooling is that it uses the same IT architecture and the same rack form factor as air-cooled servers,” says Anand. “The cooling distribution units can be in the white space, or sometimes in the racks themselves. Using this technology, transitioning at least a portion of the data center for the intense compute load can be done relatively quickly.”

When things move to immersion cooling tanks, there may be a division. Expelling the heat from an immersion tank into an air-cooled system might require the compressors to be turned on, or changes to the immersion system, says Anand.

He explains: “The power that's consumed by the servers in the immersion tub gets converted to heat and that heat has to be removed. In a bath, we can probably remove that heat using warmer temperature fluid. And the lower temperatures that mandate the operation of a compressor are probably not needed.”
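
A back-of-envelope check supports that: the heat a water loop carries away depends only on flow rate and temperature rise, not on how cold the water is, so warm facility water can do the job without a compressor. The figures below are illustrative assumptions, not Anand's.

# Heat carried by a water loop: Q = m_dot * c_p * dT. With warm water (say,
# 40°C in, 50°C out) a modest flow removes a hypothetical 100kW immersion tank's
# heat - warm enough to skip the compressor, and warm enough to reuse elsewhere.

WATER_CP = 4186          # specific heat of water, J/(kg*K)
WATER_DENSITY = 997      # kg/m^3

def flow_needed_lpm(heat_kw, delta_t_k):
    """Water flow (litres/minute) needed to carry heat_kw with a given temperature rise."""
    mass_flow = heat_kw * 1000 / (WATER_CP * delta_t_k)   # kg/s
    return mass_flow / WATER_DENSITY * 1000 * 60          # L/min

print(f"{flow_needed_lpm(100, 10):.0f} L/min at a 10 K rise")   # ~144 L/min
print(f"{flow_needed_lpm(100, 5):.0f} L/min at a 5 K rise")     # ~288 L/min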

Losing the benefit

There’s one obvious downside to this hybrid approach. One of the most vaunted benefits of liquid cooling is the provision of waste heat in the concentrated form of higher-temperature water.

If the heat gets rejected to the air-cooling system, then it is lost, just as before. Running the liquid bath at this lower temperature removes the benefit of useful waste heat. It’s like a re-run of the bad practice of over-cooled air-conditioned data centers.

“The sad part about it from a sustainability perspective is you are not raising any temperatures,” says Lawrence. “So we're not getting the real sustainability benefits out of liquid cooling by utilizing this air extension technology.”

Demetriou points out that there are still sustainability benefits: “If you look at it in terms of performance per watt, a product with 5GHz chips, if it was strictly air cooled, would have probably given half the performance. So [with liquid] you would need fewer servers to do the work. You're not getting all of the benefits of liquid, but I think you're getting a lot.”
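
The arithmetic behind that claim is straightforward. The figures below are illustrative assumptions rather than IBM's numbers, with the liquid-cooled server sustaining roughly twice the throughput of its air-cooled equivalent while drawing somewhat more power per box.

workload_units = 10_000              # arbitrary units of work to be served

air_perf, air_kw = 1.0, 0.8          # per-server throughput and draw (assumed)
liq_perf, liq_kw = 2.0, 1.0          # liquid-cooled: ~2x throughput (assumed)

air_servers = workload_units / air_perf
liq_servers = workload_units / liq_perf

print(f"Air-cooled:    {air_servers:,.0f} servers, {air_servers * air_kw:,.0f} kW")
print(f"Liquid-cooled: {liq_servers:,.0f} servers, {liq_servers * liq_kw:,.0f} kW")
# Fewer servers for the same work is a gain in itself, even when the heat is
# ultimately rejected through an air-cooling extension.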

Demetriou also sits on the ASHRAE 9.9 technical committee, a key developer of cooling guidelines and standards: “This is an area we spend a lot of time on, because it's not all liquid or all air. There are intermediate steps.”

Funneling

Another reason that all-liquid data centers are complex to imagine is the issue of “funneling,” getting enough power into the racks, says Lawrence.

“If I take a 40MW, 400,000 sq ft data center, made up of 25,000 sq ft data halls, I can get all my electrical lineups to deliver power to each data hall without much trouble. If I start doubling the density to make that 400,000 sq ft data center 200,000 sq ft or 100,000 sq ft, then I have a really big challenge.

“I have to make that building really long and thin to actually get all the electrical lineups to funnel correctly. If I make it small and square I end up having really big problems getting the actual electrical power into the space. The funneling becomes too much of a challenge.

“Not a lot of people are talking about that right now, but I think it's going to be a pretty big problem. The challenge with liquid cooling is to design the facility in such a way that you don't run into funneling issues to get the power into the space.”
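
The density arithmetic behind that concern is easy to reproduce from the figures Lawrence quotes: the same 40MW spread over a shrinking footprint.

def power_density_w_per_sqft(total_mw, area_sqft):
    """Average floor power density in watts per square foot."""
    return total_mw * 1_000_000 / area_sqft

for area in (400_000, 200_000, 100_000):
    density = power_density_w_per_sqft(40, area)
    print(f"{area:>7,} sq ft -> {density:,.0f} W/sq ft")

# 400,000 sq ft works out to 100 W/sq ft; 100,000 sq ft to 400 W/sq ft. Every
# electrical lineup still has to reach the halls, but through a quarter of the
# building perimeter and riser space - the routing problem Lawrence describes.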

Placing small quantities of high density racks within an air-cooled facility actually avoids this problem, he says: “If you're working with an air cooled space, you've got a lot of space to route your power around. When you make the building appropriately sized for liquid cooling, you run into all sorts of electrical funneling issues that you hadn't had to even think about before.”

Equipment lifecycles

One major reason why air-cooled systems will remain is that they are very rugged and enduring pieces of equipment. A chiller system placed on the roof of a data center is expected to last for 20 to 25 years, a period that could see four different generations of chip hardware, all with different cooling needs.

Johnson Controls’ Anand says this is possible: “If your HVAC architecture is designed to provide the cooling required by liquid cooled servers, we will not have to change the cooling architecture through the life of the data center.

“The time period from when a data center is designed in one part of the world to when it is built and brought online in another part of the world might be several years,” he says. “We do not want to wait for liquid cooling technology to be adopted all across the world for the next architectural design of the building to materialize it in construction.”

It’s not just the equipment, it’s the building, says Lawrence: “Hyperscalers are signing leases of over 10 years, and we are seeing IT refreshes in the four- to five-year range. That boggles my mind. If you're signing a lease today, it’s going to last three IT refreshes. The IT equipment that you're putting in is either going to be air cooled for that 15-year period, or you're going to have some form of liquid-to-air system in the rack or in the white space.”

Server makers like Dell and HP are producing liquid-cooled versions of their hardware, and are predicting that in 10 years' time data centers will be 50 percent liquid-cooled. Not every application has such high demands for cooling, and this means that half the servers can still be air-cooled.

Things can also get complicated because of demarcation, where the building owner provides overall cooling with air and tenants want to add liquid cooling. Lawrence explains: “It gets complicated if you bring liquid straight to a CDU (cooling distribution unit) on the rack or an in-row liquid cooler.”

Forcing the issue

Rolf Brink thinks that it may take education, and even regulation, to push data center designs more quickly to liquid: “It still happens too often that new facilities are not yet designed for the future ecosystem. This is one of the core problems in the industry. And this is where regulation can really be beneficial - to require facilities to at least be prepared for liquid infrastructures in the white space.”

Brink says: “As soon as the data center is built and becomes operational, you're never going to rebuild the white space. You are not going to put water pipes into the white space in an operational environment. It is just impossible.”

Because liquid is not included in the design phase, this creates “resistance” from the industry to adding it later, he says: “People have neglected to make the necessary investments to make sure that they are future-proofed.”

This may be due to the way that facilities are financed and refinanced at various times during the build phase, he says, “or it may be lack of ambition or not believing in the evolution of liquid cooling.”

The problem, says Brink, is that this creates an environment in which it is still going to be very difficult to become more sustainable. Data centers won’t take a risk and spend a bit more “just in case.”

Some of this can be changed by education. ASHRAE has brought out papers describing different stages of using liquid cooling (see Box), and OCP has also done educational work. But in the end, he says, “legislation can really make a significant difference in the industry by requiring the preparation for liquid.”

Compulsory pipework?

At this stage, there’s no prospect of a law to require new data centers to include pipes in the white space, although the German Energy Efficiency Act does attempt to encourage more waste heat reuse.

Early in its development, the Act tried to mandate that 30 percent of the heat from new data centers should be reused elsewhere. This was pushed back, because Germany doesn’t have sufficient district heating systems in the right place to make use of that heat.

But the requirement to at least consider waste heat reuse could mean that more data centers in Germany are built with heat outlets, and it is a logical step to connect those up with more efficient heat collection systems inside the white space.

Across Europe, the Energy Efficiency Directive will require data centers to report data on their energy consumption and efficiency in 2024, and the European Union will consider what efficiency measures are reasonable to impose in 2025.

Whatever intervention is imposed could have a big impact on the hand-over between air and liquid cooling.