Motivair Corporation cooling systems will be used in all three of the upcoming US exascale supercomputers.

The company's new Exascale CDU can cool four '150kW plus' racks at a time. Motivair also announced a partnership with iM Data Centers to develop modular facilities for the high-performance computing market.


Exascale CDU – Motivair

The 1 exaflops Aurora (2021), 1.5 exaflops Frontier (2021) and 1.5 exaflops El Capitan (2022) will all use the new coolant distribution unit.

Each system is built by Cray, which chose Motivair after a global bidding process, Motivair CEO Rich Whitmore told DCD. Due to the sheer density of the power involved, Motivair had to use Cray's testbeds to verify that its CDUs could handle the load.

"We can go up to certain points [at Motivair], but we had to work with Cray for this, they had to put a special test stand in their lab," Whitmore said. "Nobody has that type of power dedicated to one little spot. The temperatures are so extreme, you couldn't even put this into a typical chiller testing laboratory because nothing's configured to handle those types of peaks. They're actually operating at quite warm temperatures - we're doing the cooling with warm water.

"So technically you could get more out of these things if you started using more traditional chilled water on the primary loop. But these are designed for W4 water conditions. It's quite warm water and coming back quite hot from the chipset."

This extreme-scale power means that cooling needs to be redundant. "If you look at a typical hyperscale data center, even if they were to have a massive cooling system shutdown, and the whole chiller plant went down for some reason, the servers would shut down and protect themselves.

"These exascale systems can do the same, but with the cooling, it's just so intense. You have to have it there all the time. So we had to develop multiple levels of redundancy and resiliency, that goes far beyond what we would normally do for a CDU."

Whitmore expects to see the technologies developed in building these exascale machines start to spread to the wider data center industry in the years to come. "There's no question that the use of HPC is growing and well documented," he said. "And the interesting thing will be how the market prepares for that. Imagine 600 plus kilowatts in a ten-foot by six-foot square, where are there data centers in the world that have that infrastructure and power routed that densely?"
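Whitmore's footprint arithmetic can be sketched in a few lines. The conventional-facility comparison figure (~150 W per square foot) is a commonly cited ballpark, not something from the article:

```python
# Whitmore's scenario: 600+ kW concentrated in a ten-foot by six-foot
# footprint, compared against a rough figure for a conventional
# data center floor (the 150 W/sq ft baseline is an assumption).
RACK_FOOTPRINT_SQFT = 10 * 6
EXASCALE_POWER_W = 600_000
TYPICAL_W_PER_SQFT = 150  # ballpark for a conventional facility

density = EXASCALE_POWER_W / RACK_FOOTPRINT_SQFT
print(f"Exascale footprint: {density:,.0f} W/sq ft")
print(f"Roughly {density / TYPICAL_W_PER_SQFT:.0f}x a conventional floor")
```

At 10,000 W per square foot, the density is on the order of fifty to a hundred times what a typical raised floor is provisioned for, which is the gap Whitmore is pointing at.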

To handle that level of power and resultant heat, liquid cooling will increasingly dominate the HPC space, Whitmore believes. "The HPC industry is growing faster than it ever has. And all of those systems will be liquid-cooled."

But outside HPC, you "simply don't need liquid cooling until it gets really, really dense," he said.

For Motivair, the cutoff is around 75kW. "With the densities at that point, in the actual chassis of these computers, it becomes difficult to put enough fans in there to move the airflow that's required. You get to a point where, if you need to have so much airflow, the fans that would be required to move it just don't make sense. The fans become physically bigger than the rack."
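The airflow scaling behind that cutoff can be sketched with the standard sensible-heat relation. The air properties and the 12 K temperature rise across the rack are illustrative assumptions, not Motivair figures:

```python
# Volumetric airflow needed to air-cool a rack of a given heat load,
# using Q = m_dot * cp * delta_T with typical air properties.
AIR_DENSITY = 1.2        # kg/m^3, near sea level (assumption)
AIR_CP = 1005.0          # J/(kg.K), dry air (assumption)
M3S_TO_CFM = 2118.88     # m^3/s -> cubic feet per minute

def required_airflow_cfm(heat_load_w: float, delta_t_k: float) -> float:
    """Airflow (CFM) needed to carry heat_load_w watts at a given air delta-T."""
    mass_flow = heat_load_w / (AIR_CP * delta_t_k)   # kg/s
    return (mass_flow / AIR_DENSITY) * M3S_TO_CFM

for load_kw in (15, 75, 150):
    cfm = required_airflow_cfm(load_kw * 1000, 12)
    print(f"{load_kw} kW rack: ~{cfm:,.0f} CFM")
```

A 75kW rack at a 12 K air-side rise needs on the order of 11,000 CFM, and the requirement scales linearly with load, which is why fan size rather than fan count becomes the binding constraint.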

The company has experimented with different forms of liquid cooling, including two-phase with Novec solutions. "But a two-phase system becomes very complex to control," Whitmore said. "There are vendors out there that are experimenting with it and looking at it. Some of those fluids become extremely efficient down at the chip level, but if you turn it into a two-phase you boil it and then you need to condense it back into its liquid form, and in its liquid form it's half as efficient as water.

"While you're getting very efficient down at the chip level, you're half as efficient out at the fluid level. And that is a fundamental problem in and of itself."

As for single-phase immersion cooling, "it's pretty neat," Whitmore said. "You'll always have some adoption, there's always gonna be somebody who wants to try it and play with it. But to get any traction, it would require a fundamental shift in just the mentality of how you build the data center. And I think that's 15-20 years away, at best."

For now, Whitmore believes that water will remain the market leader in HPC for "at least the next five years."

iM in this together

Motivair has ambitions to become more of a player in the HPC space, partnering with iM Data Centers for turnkey modular data center builds.

"Our expertise is IT systems cooling, cooling infrastructure and customization, and Michael Roark's [iM CEO] specialty is high-end data center construction. And so we got together and packaged a system. With the implementation of our chilled door rack cooling system that's allowed us to take a 15kW rack load up to 75kW just in stride."

The company also offers its 150kW-capable CDUs as part of these modular deployments.

"It dawned on us a few years ago that most data centers that are currently in existence, they can have plenty of power and plenty of cooling, but they're not in close enough proximity or concentrated enough in certain areas to be able to handle some of these really dense systems that customers want to deploy. And we think we can bring some value to the market.

"There's plenty of manufacturers around the world that are doing, 'modular containerized data centers,' but nobody has addressed the market of HPC for that. And because that's an area that we specialize in, we started right at the IT level and developed a modular data center that can handle up to 150 kilowatts per rack, basically designed to handle an exascale-class system."