Innovation, expertise, execution. Three non-negotiables when bringing a new product to market, and qualities industry newcomer Accelsius has in spades. Although new in name – the company came to fruition in June 2022 – it certainly isn’t new when it comes to know-how.

Formed around a set of technologies purchased from tech giant Nokia Bell Labs – specifically two-phase direct-to-chip liquid cooling – Accelsius recognized the industry’s need for a robust, reliable cooling solution, particularly as compute-intensive workloads and power constraints continue to proliferate.

“Cooling challenges are increasing quickly and dramatically,” says Josh Claman, CEO at Accelsius. “We’ve developed a solution that is safe, pragmatic and elegant compared to other emerging technologies. We’ve taken the Bell Labs design and focused on creating robust products and enterprise-grade services for the data center market.”

This sense of urgency has been the driving force behind Accelsius’ innovation, with the company having already developed a lab-based proof of concept – complete with engineering enhancements – which will start shipping to various partners and clients in the first quarter of 2024.

But despite the seemingly speedy turnaround from Accelsius, taking this kind of complex technology and turning it into a viable product designed specifically for the data center industry doesn’t just happen overnight.

“Getting the technology to this point has taken years,” says Claman. “Two-phase cooling is complicated, which is why Nokia Bell Labs had a research lab devoted to it. It’s also why we built specialized engineering and R&D teams, including a number of engineering PhDs, with extensive heat transfer and scaled development experience.

“Two-phase, direct-to-chip cooling has huge potential. Based on feedback from data center operators, ecosystem channel partners and industry analysts, we believe it’s the right solution for data centers, but it’s not easy and requires a commitment to technological and operational innovations.”

It’s this combination of innovation and expertise that undoubtedly sets Accelsius apart from the competition, as Claman explains.

“Our experience is everything. Our initial engineering team was hired from the industry: server architects from the big OEMs, data center specialists and more. They know how to create a product with the resilience and reliability that operators require.”

Refreshingly, Accelsius’ offering – dubbed NeuCool™ – has also been designed with serviceability in mind, providing enough space in the data center rack to hot-swap power supplies, pumps and core electronic control units when required.

“We’ve designed a pragmatic solution that data center operators, who value uptime above all else, can buy, install and still be able to sleep at night.”

That said, those operating in mission critical environments tend to be averse to change. But for those clutching onto technologies of old, whether out of fear or finance, we asked Claman, has air cooling had its day?

“There are two perspectives. One being: was air cooling ever the right choice for the industry? When you look at power usage effectiveness (PUE) around the world, the average is 1.5, which means about 40 percent of the power brought into a data center is not used to process data; it’s used for cooling.

“The other perspective is that there’s a density at which liquid cooling is required. We understand that air cooling is viable for lower-end compute workloads. However, air cooling is unfocused and less efficient at transporting the heat generated by rapidly advancing chip technologies and more compute-intensive applications like AI. Yes, air cooling has incrementally become more efficient, but it is clearly not the right technology for the changing data center landscape.”
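For readers checking the arithmetic, the overhead share follows directly from the definition of PUE as total facility power divided by IT power. Strictly speaking, a PUE of 1.5 implies roughly a third of facility power going to cooling and other overhead; the 40 percent figure corresponds to a PUE of about 1.67:

\[
\text{overhead fraction} = \frac{\mathrm{PUE}-1}{\mathrm{PUE}}, \qquad \frac{1.5-1}{1.5}\approx 33\%, \qquad \frac{1.67-1}{1.67}\approx 40\%.
\]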

According to ASHRAE, a chip drawing more than 400 watts cannot practically be cooled by air. Today, CPUs of up to 400 watts and GPUs drawing anywhere between 700 and 1,000 watts are not uncommon, so traditional air-cooling systems simply aren’t up to the task, prompting operators with high-density racks to explore alternatives.
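To put those wattages in context, here is a minimal back-of-envelope sketch, in Python, of how quickly a dense GPU rack overwhelms air cooling. The server configuration, overhead factor and per-rack air-cooling ceiling are illustrative assumptions, not figures from Accelsius or ASHRAE:

# Back-of-envelope rack heat estimate. All counts and limits below are
# illustrative assumptions, not Accelsius or ASHRAE figures.
CPU_TDP_W = 400        # high-end CPU wattage cited in the article
GPU_TDP_W = 700        # low end of the 700-1,000 W GPU range cited
CPUS_PER_SERVER = 2    # assumed dual-socket GPU server
GPUS_PER_SERVER = 8    # assumed
SERVERS_PER_RACK = 8   # assumed
OVERHEAD = 1.2         # assumed +20% for memory, NICs, storage and fans

server_w = (CPUS_PER_SERVER * CPU_TDP_W + GPUS_PER_SERVER * GPU_TDP_W) * OVERHEAD
rack_kw = SERVERS_PER_RACK * server_w / 1000
AIR_LIMIT_KW = 20      # assumed practical ceiling for an air-cooled rack

print(f"Estimated rack load: {rack_kw:.1f} kW")            # ~61 kW
print(f"Beyond assumed air-cooling ceiling: {rack_kw > AIR_LIMIT_KW}")

At roughly 60 kW, such a rack sits about three times beyond the assumed air-cooling ceiling – squarely in the territory where liquid cooling becomes necessary.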

The tortoise and the hare

When it comes to tech, the consensus, rightly or wrongly, is that in order to succeed you don’t necessarily need to be the best, but you do have to be the first. However, this is a concept Accelsius has turned on its head.

“There were some early entrants in the liquid cooling market. For example, single-phase direct-to-chip, which pumps water through your servers. Understandably, that makes a lot of people uncomfortable. Water is conductive, so if there’s a leak, you’ve destroyed the most expensive component of your server – the CPU or GPU – making it an impractical solution.”

And if potentially destroying millions of dollars’ worth of IT equipment wasn’t off-putting enough, with single-phase water cooling, the higher the density, the more water you need to pump. This is not only an inefficient use of resources but further pressurizes the loops inside your server, compounding the chances of disaster.
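That scaling falls directly out of the sensible-heat equation: at a fixed coolant temperature rise, the required flow grows linearly with heat load. As a rough illustration, assuming water absorbing 60 kW across a 10 K rise:

\[
\dot{m} = \frac{Q}{c_p\,\Delta T} = \frac{60{,}000\ \text{W}}{4{,}186\ \text{J/(kg·K)}\times 10\ \text{K}} \approx 1.4\ \text{kg/s} \approx 86\ \text{L/min},
\]

and doubling rack density doubles that flow, while the pressure needed to drive it climbs even faster.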

“Immersion was also an early entrant,” says Claman, “which captured the press’s attention, given the novelty of submerging servers in a tub. But it also takes up a lot of room, is expensive and isn’t backwards compatible with legacy infrastructure. So while immersion will find some use cases, the consensus in the market is that there won’t be broad-based adoption.

“So, when we looked at this landscape, we thought the market deserved better: a more elegant solution designed specifically for the data center environment. And that’s what we’ve done.”

Accelsius’ NeuCool Platform uses a non-conductive dielectric fluid in relatively low-pressure loops. The phase change occurs over the chip in a specially designed evaporative plate capable of removing a tremendous amount of heat, so as chip technology evolves and chips run hotter, NeuCool can continue to remove that heat safely.
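The physics behind that claim is the latent heat of vaporization: boiling absorbs far more energy per kilogram of fluid than merely warming a liquid does. As an illustrative comparison, assuming a dielectric working fluid with a latent heat on the order of 150 kJ/kg against single-phase water with a 10 K temperature rise:

\[
q_{\text{two-phase}} = \dot{m}\,h_{fg} \approx \dot{m}\times 150\ \text{kJ/kg}, \qquad q_{\text{single-phase}} = \dot{m}\,c_p\,\Delta T \approx \dot{m}\times 42\ \text{kJ/kg},
\]

so each kilogram of fluid that boils carries several times the heat, which is why two-phase loops can run at low flow rates and low pressures.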

“The magic of two-phase direct-to-chip is that it’s a proximate cooling technology. It removes heat directly from the heat-producing components while also preserving the lifespan of those components.

“The other benefit of our technology is that we use very little fluid. A 60kW rack only requires about four gallons, compared to immersion, which can use up to 600 gallons of liquid.”

NeuCool(ing) requirements

Once upon a time, it was enough to simply keep your servers cool. But today, it’s not only the temperature of the IT equipment operators need to worry about, but that of the planet as well. As more onus is placed upon minimizing the environmental impact of the data center, and ESG regulations begin to ramp up, cooling considerations become far more complex.

“Heat re-use requirements are also coming at operators in the near future, and there will be caps on power use for data centers in much of Western Europe.”

And let’s not forget, PUE is one of many (many) sustainability metrics operators are going to have to keep tabs on, if not now, then certainly in the future. After all, you can’t manage what you don’t monitor.

“These metrics need to be refined so they cannot be gamed. For example, in the US, some operators are touting low PUEs, but not publishing that they’re using millions of gallons of water. We need a balanced scorecard of a few metrics so that we get a realistic picture of sustainability. PUE isn’t enough.”
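One candidate for that scorecard already exists: water usage effectiveness (WUE), defined by The Green Grid as site water consumption per unit of IT energy, which makes exactly the trade-off Claman describes visible:

\[
\text{WUE} = \frac{\text{annual site water usage (litres)}}{\text{IT equipment energy (kWh)}}
\]

A facility achieving a low PUE by evaporating large volumes of water would show a correspondingly high WUE, so reporting the two together gives the more balanced picture he calls for.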

And to compound the complexity, where do operators stand when it comes to distributed computing? Servers have, historically, been housed in a traditional data center, but as workloads requiring low latency become more commonplace, they are moving increasingly closer to the end user in the form of Edge and modular data centers. But it seems Accelsius has thought of everything.

“We have a configuration for Edge or modular data centers, which requires no water. It's a refrigerant loop going to the roof of that module, or cell tower, for example. Which reduces the space requirement and the need for water in a remote location.”

Decisions, decisions

It’s no secret that those operating mission critical facilities – wherever they happen to be – are, rightfully, a cautious bunch, and highly prudent when it comes to evaluating which cooling technology is right for them. Chances are, they’ll have one or two solutions they want to test over time, which suggests that, at least for a while, operators will adopt hybrid solutions until they’re confident enough to make a definitive decision.

“Hybrid cooling is going to be with us for a while,” says Claman. “Our hope is that operators will see the benefit of our solution and implement it at scale, but I don’t think we can hold it against the industry for being a little reticent. They sacrifice pushing the envelope to limit downtime, and I completely understand that.”

But unfortunately for indecisive operators, the clock is ticking. As wattages continue to climb, Claman is of the opinion that we’re now in a situation where we’ll see the industry grapple with caution versus urgency.

“AI workloads are pushing the envelope very quickly. These timeframes can be hard to wrap your head around. For example, ChatGPT only came out within the last 12 months and suddenly data center operators are being asked to support these high density, hot workloads.

“I’m not sure that data center operators have ever seen dynamics like this before, so there’s going to be a quick transition relative to what has historically been a fairly slow-moving industry.

“It's mainly about resetting data center architecture to utilize technologies that should have always been in use. It’s about coming up with a solution that is purpose-built for the data center. We believe the market deserves the right answer, not just the first answer.”

To find out more, please visit accelsius.com.