Liquid cooling is still very much a revolution-in-waiting. It is more energy efficient than air cooling, but it has never been quite ready for mainstream computing: too complex, too expensive, or just too different.

The main issue, really, is being different. Liquid cooling works very well in niches - even quite large niches like supercomputing. There are few of these systems, but their performance is crucial, so they can bear the cost of equipment that is made in small production runs, and which requires specialized handling and maintenance.

A liquid cooled server – Asetek

Chicken and egg

In the mainstream, liquid cooling won’t get adopted till the price and complexity come down, and that won’t happen till it’s being sold in sufficient volumes. So it’s in a bit of a chicken-and-egg situation.

Till the breakthrough comes, liquid cooling companies are concentrating on making their equipment more reliable, simpler to use, and/or cheaper. In the last week, I’ve heard from three companies with different approaches to the conundrum.

Asperitas, of the Netherlands, has a liquid cooling system that uses total immersion in a dielectric fluid. This is an approach also used by Green Revolution Cooling of the US, and the benefit is that you don’t need a piped circulation of cooling fluid. The electronics just sits in a bath.

To the north, Asetek in Denmark is taking a completely different approach. It pipes coolant to heatsinks mounted directly on the processors – a “direct-to-chip” cooling approach which emerged from the world of games enthusiasts, who overclock their processors and have a lot of heat to remove.

And in the UK, Iceotope is readying some interesting products and partnerships. The products are still in the pipeline, but the company already has a partnership with service provider 2bm.

Asperitas is aiming for simplicity in the coolant system: it has no circulation pumps, because the bath is designed so that convection currents remove the heat. It has also opted for a “medicinal oil”, which it says can be 20 to 40 times cheaper than the specialized fluids specified in some other systems. What is a medicinal oil? Asperitas says in its white paper that “it’s the same product as Vaseline, although with a different viscosity.”

But all this makes for a unit that bears little relation to traditional racks. It looks like a deep freeze.

Asetek, meanwhile, has a fairly traditional rack, but one with a third circulatory system alongside the electrical power and networking infrastructure. Liquid has to be piped through reliable connectors to reach those direct-to-chip cooling units.

That circulation is a crucial part of the system, and Asetek is keen to reassure customers of the reliability of its circulating pumps (a claimed 200 million hours of fault-free operation in data centers).

Iceotope has blades with sealed casings holding a cooling fluid, and a water circulation system to take the heat away.

Floodgates to open?

All three systems require their electronics to be packaged and handled differently. Asperitas requires specialized modules, which are bound to be pricier than standard 19in systems (though the company says it is working with wholesale hardware manufacturers). They also have to be hoisted from the tank and drained before they can be serviced.

Asetek blades look more like conventional hardware, with the addition of hose connectors, which are designed to ensure there are no leaks when modules are pulled and swapped. Iceotope also requires modification of standard hardware.

Who is ahead? It’s too early to tell. But Asetek has a strong foothold in the supercomputer world, and this week it announced what could be a step outside that world: a “major partner” – an unnamed data center hardware provider – will be shipping its product later this year.

If this is a well-known brand, it could give liquid cooling the legitimacy it needs.

A version of this article appeared on Green Data Center News