How to best cool a data center is perhaps one of the most important ongoing discussions in the IT industry. The issue is simple to comprehend: in a poorly cooled facility, heat is the enemy, placing significant stress on every rack-mounted device, including Power Distribution Units (PDUs). Excessive heat damages IT equipment, leading to downtime and, inevitably, higher operating costs.

Exacerbating this problem are the ever-increasing temperatures around server racks caused by High-Performance Computing (HPC) power densities, which can quickly climb from a typical 15 kW per rack toward 100 kW per rack. The combination of rising temperatures and more confined spaces presents significant challenges for data center managers. However, there are innovations on the market to help chill these HPC devices out. They range from the futuristic to the simple, but all make a considerable contribution to dissipating the heat output of rack-mounted devices.
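To put those densities in perspective, here is a back-of-the-envelope sketch (not from the article) of the heat load and airflow a single rack demands. It uses the standard conversions of 3,412 BTU/hr per kW and roughly 1.08 BTU/hr per CFM per degree Fahrenheit of air temperature rise; the assumed 20 °F rise across the rack is purely illustrative.

```python
# Back-of-the-envelope heat-load estimate for a single rack.
# Assumptions (illustrative): essentially all input power becomes heat,
# and the cooling air warms by 20 degF as it passes through the rack.

BTU_PER_HR_PER_KW = 3412        # 1 kW of heat ~= 3,412 BTU/hr
BTU_PER_HR_PER_TON = 12000      # 1 ton of refrigeration = 12,000 BTU/hr
SENSIBLE_HEAT_FACTOR = 1.08     # BTU/hr per CFM per degF for standard air

def rack_cooling_requirements(rack_kw: float, delta_t_f: float = 20.0):
    """Return (BTU/hr, tons of cooling, CFM of airflow) for a given rack load."""
    btu_hr = rack_kw * BTU_PER_HR_PER_KW
    tons = btu_hr / BTU_PER_HR_PER_TON
    cfm = btu_hr / (SENSIBLE_HEAT_FACTOR * delta_t_f)
    return btu_hr, tons, cfm

for kw in (15, 100):
    btu_hr, tons, cfm = rack_cooling_requirements(kw)
    print(f"{kw:>3} kW rack: {btu_hr:,.0f} BTU/hr ({tons:.1f} tons), ~{cfm:,.0f} CFM")
```

At 15 kW, a rack needs roughly 2,400 CFM of air at that temperature rise; at 100 kW it needs close to 16,000 CFM, which is exactly why the industry is looking beyond air alone.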

Futuristic cooling

Liquid cooling

The liquid cooling method started to take hold when cryptocurrency servers were turned on in volume and data center managers noticed how much power they consumed. Techcrunch.com writes that “bitcoin miners are expected to consume roughly 130 Terawatt-hours of energy (TWh), which is roughly 0.6 percent of global electricity consumption. This puts the bitcoin economy on par with the carbon dioxide emissions of a small, developing nation like Sri Lanka or Jordan. Jordan, in particular, is home to 10 million people.” Liquid cooling was an innovation whose time had arrived, and it would soon be taken to extremes. [A better solution, of course, would be not burning the planet up with pointless bitcoin-related emissions - Editor]

Microsoft took liquid cooling a step further by boiling liquid inside a steel holding tank packed with computer servers. According to a Microsoft blog, “Inside the tank, the vapor rising from the boiling fluid contacts a cooled condenser in the tank lid, which causes the vapor to change to liquid and rain back onto the immersed servers, creating a closed loop cooling system.” This liquid cooling project ties into Microsoft’s Project Natick, which seals a data center inside “submarine-like tubes” and places it on the ocean floor.

Direct-to-chip liquid cooling

Another futuristic form of cooling is direct-to-chip cooling. In this method, a liquid coolant is brought directly to the processor via a series of tubes. This elegant and highly efficient means of heat dispersion absorbs heat right at the source so it can be easily carried away before temperatures climb.

A ZutaCore collaboration produced an HPC direct-to-chip cooling solution: a direct-on-chip, two-phase, waterless design that combines modular IT enclosures with ZutaCore’s evaporative cooling technology. The new liquid cooling solution is scalable and, according to the ZutaCore site, “pushes the boundaries of cooling to 1000W chips and beyond.”
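To get a feel for why bringing liquid to the chip is so effective, consider a simplified single-phase water loop. This is only an illustrative sketch of the general physics (Q = ṁ · cp · ΔT); ZutaCore’s actual system is two-phase and waterless, so the numbers below do not describe their product.

```python
# Illustrative single-phase liquid-loop sizing (NOT ZutaCore's two-phase,
# waterless design): how much water flow removes a given chip heat load?
# Governing relationship: Q = m_dot * c_p * delta_T

WATER_CP_J_PER_KG_K = 4186      # specific heat of water
WATER_KG_PER_LITRE = 1.0        # ~1 kg of water per litre

def coolant_flow_l_per_min(chip_watts: float, delta_t_c: float = 10.0) -> float:
    """Litres per minute of water needed to absorb chip_watts at a delta_t_c rise."""
    kg_per_s = chip_watts / (WATER_CP_J_PER_KG_K * delta_t_c)
    return kg_per_s / WATER_KG_PER_LITRE * 60.0

# A 1,000 W chip (the figure ZutaCore cites) with a 10 C coolant temperature rise:
print(f"{coolant_flow_l_per_min(1000):.2f} L/min")   # ~1.4 L/min
```

Roughly 1.4 litres per minute of water carries away a full kilowatt of chip heat at a 10 °C rise, whereas moving that same kilowatt with air at a 20 °F rise takes on the order of 160 CFM. That difference in heat-carrying capacity per unit volume is the whole argument for direct-to-chip cooling.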

Tried and true cooling methods

For those data center managers who are not brave enough, or simply cannot afford, to submerge their facilities on the ocean floor, several well-proven heat solutions are within reach of more modest budgets.

Aisle containment

The data center aisle containment approach to heat management separates the cold supply airflow from the hot exhaust air leaving the equipment. Because each data center has a unique set of parameters, containment is an ideal fit thanks to its highly customizable form factor.

Like other data center cooling systems, the containment solution has also evolved and is now easier to install and more efficient. With legacy aisle containment systems, the design and measurements are performed onsite, typically by the data center manager. The measurements are then sent to the manufacturer, which fabricates the panels according to its own interpretation of the data center manager’s drawings. Needless to say, this often leads to errors.

The more evolved approach to data center containment follows a two-by-four/sheetrock type of setup. Some systems have panels readily available in standard sizes that can ship right away. In addition to the panels, the containment door is also configurable to row specifications. Every manufacturer has its own door designs, and costs can range from $1,000 to $5,000. Data center managers need to consider the containment door’s purpose carefully before ordering, asking questions such as:

  • Will it be just a panel to block airflow?
  • Will it often be used to enter and exit the aisle for server rack modifications?
  • Does it have to be tall enough to roll the cabinets in through the door? 
  • What is the size of the carts that are being brought into the containment area?

Modular aisle containment providers who work with their customers to develop the drawings along the way are a sign of how far flexible hot and cold aisle containment solutions have progressed.

CRAC and CRAH

Rounding out the tried and true cooling methods are Computer Room Air Conditioners (CRACs) and Computer Room Air Handlers (CRAHs). The big difference between the two is that one has a compressor and the other does not. CRACs are similar to conventional air conditioners in that they use a compressor as the cooling mechanism. The operation is simple: the unit blows air over cooling coils that contain refrigerant. The downside of CRACs is that they are not very efficient; the upside is that they are inexpensive.

By contrast, CRAHs are part of a more extensive system: a chiller supplies a cooling coil with cold water, and fans move air across that coil. Because the chiller plant can take advantage of cool outside air (free cooling), this option is more efficient, especially when the data center sits in a cooler climate.
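The same heat-balance arithmetic sizes a CRAH’s chilled-water loop. As a rough sketch, assuming a conventional 7 °C supply / 13 °C return loop (an illustrative assumption, not a figure from the article), the water flow needed for a given room load works out as follows.

```python
# Rough CRAH sizing sketch: chilled-water flow for a given room heat load.
# Assumes a conventional 7 C supply / 13 C return loop (delta_T = 6 C);
# these setpoints are illustrative, not taken from the article.

WATER_CP_KJ_PER_KG_K = 4.186    # specific heat of water in kJ/(kg*K)

def chilled_water_flow_l_per_s(room_load_kw: float, delta_t_c: float = 6.0) -> float:
    """Litres per second of chilled water needed to absorb room_load_kw."""
    return room_load_kw / (WATER_CP_KJ_PER_KG_K * delta_t_c)

# A small row of ten 20 kW racks (200 kW of heat):
print(f"{chilled_water_flow_l_per_s(200):.1f} L/s")   # ~8 L/s
```

The chiller plant’s job is to keep that loop cold, and the more hours it can do so with cool outdoor conditions instead of running compressors, the bigger the CRAH’s efficiency advantage.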

Conclusion

As new generations of power-hungry processors continue to hit the market, the efficiency of cooling systems will remain a top priority for data center managers. Facilities operating in HPC environments may gravitate toward newer liquid cooling systems as economies of scale kick in and prices fall. But for data centers that are not dealing with an abundance of artificial intelligence and machine learning processors, the new generation of hot and cold aisle containment solutions provides a quick and economical answer to heat issues. The common denominator to look for, in both the futuristic and the tried-and-true systems, is ongoing innovation and evolution that promote efficiency while decreasing OPEX.
