The best way to beat the heat is to do as little as possible; any lizard knows that, and so do the coolest data center managers. Overworking just wastes energy.

But a lot of data centers are still doing just that – overworking and overcooling their spaces. In the process, they are wasting vast quantities of energy and – ironically – contributing to global warming and melting the world’s polar ice caps.


Cooling for the tape era?

Chilly data centers date back to the 1950s, when tape drives could not stand high temperatures, and humidity crashed the system by making punchcards stick together. We are no longer in that era, and yet a lot of data centers still have aggressive and unnecessary cooling regimes, rigidly keeping their ambient temperature at 21°C (70°F).

Things started to change when ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) defined where temperatures should be measured, with a specific recommendation for the server inlet temperature. That recommendation has risen as the improved reliability of IT kit has become more widely accepted, and it now stands at 27°C (80°F).

Web-scale data centers pay attention to this and, through a process of trial and error, are operating at higher temperatures still. But enterprise data centers are seriously lagging.

In early 2015, IDC surveyed 404 data center managers in the US, all of whom have at least 100 physical servers, and who have an average IT budget of $1.2m. Fully 75 percent of them were operating below 24°C, and only five percent were at 27°C or above.

These facilities have PUE (power usage effectiveness) ratings of around 2.4 to 2.8 – meaning that roughly 58 to 64 percent of the power they consume doesn’t reach their IT equipment.
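For anyone who wants to check that arithmetic, here is a minimal sketch in Python, assuming the standard definition of PUE as total facility power divided by IT power:

```python
# Fraction of facility power that never reaches the IT equipment,
# assuming PUE = total facility power / IT equipment power.
def overhead_fraction(pue: float) -> float:
    return 1.0 - 1.0 / pue

for pue in (2.4, 2.8):
    print(f"PUE {pue}: {overhead_fraction(pue):.0%} of power is overhead")
# PUE 2.4: 58% of power is overhead
# PUE 2.8: 64% of power is overhead
```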


The result is doubly shocking when you consider two facts. First, IDC found that these IT managers are spending 10 percent of their budget on cooling, out of a 24 percent segment for power and cooling combined. On an average IT budget of $1.2m, that means each of these organizations is spending around $120,000 a year on cooling, much of which may be unnecessary.
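A back-of-the-envelope check, using the survey’s own averages, shows where that figure comes from:

```python
# Rough check on the cooling spend, using the IDC survey's averages:
# a $1.2m IT budget with 10 percent of it going to cooling.
it_budget = 1_200_000   # average annual IT budget in USD (from the survey)
cooling_share = 0.10    # share of the budget spent on cooling
print(f"Cooling spend: ${it_budget * cooling_share:,.0f} per year")
# Cooling spend: $120,000 per year
```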

The other fact to consider is that, while the efficient web-scale cloud providers get the media attention, they are only a small percentage of the data centers of the world. At least half the world’s racks are in those overcooled enterprise sites. To make an impact on global emissions, these are the data centers that need to change.

Paranoia, or just being careful?

So why are data centers still too cool? Ian Bitterlin of Critical Facilities Consulting is in no doubt that fear is what drives it: “It’s paranoia.” People are excessively risk-averse.

But it might be more rational than that. At least one study has found that raising the air inlet temperature actually increased the amount of energy used in cooling.

“We went in fully sure we would be saving energy,” says Victor Avelar, senior research analyst at Schneider Electric, describing a study that compared the cooling energy needed at different temperatures for data centers in Chicago, Miami and Seattle. “But we found that above 27°C cooling took more energy and capital expense.”

The Schneider study – due to be published shortly – compared data centers with a standard chiller unit. The surprising result came about because of the complexity of the system. At higher temperatures, server fans come into play and more energy is used moving air around.
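A rough illustration of why fans matter so much – not a figure from the Schneider study – is the fan affinity laws, under which fan power rises roughly with the cube of fan speed, so even a modest ramp in server fans costs disproportionate energy:

```python
# Fan affinity laws: power scales roughly with the cube of fan speed.
# The 10 W baseline per server is an assumption for illustration only.
def fan_power(base_power_w: float, speed_ratio: float) -> float:
    return base_power_w * speed_ratio ** 3

base = 10.0  # watts per server at the baseline fan speed (assumed)
for ratio in (1.0, 1.2, 1.5):
    print(f"{ratio:.0%} fan speed -> {fan_power(base, ratio):.1f} W per server")
# 100% fan speed -> 10.0 W per server
# 120% fan speed -> 17.3 W per server
# 150% fan speed -> 33.8 W per server
```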

Technology options 

If you look into this, you will need to know your technology options. We are mostly starting – as the Schneider study did – from cooling using a traditional air-conditioning unit with a compressor, often referred to as a “direct expansion” (DX) unit.

Keeping your inlet temperature below 24°C – that’s paranoia

Ian Bitterlin, Critical Facilities Consulting

In most locations, there’s no other way to maintain the ultra-low temperatures that many people still think are necessary, and in many places the DX is in the mix to cover extreme conditions and reassure the service users.

If this is what you have, there are two main things you can do to cut your energy bills before you think of changing your cooling technology. First, as ASHRAE pointed out, you can feed your servers warmer air, thus cutting down the amount of cooling you do – though Schneider also stresses that if you do this, you should know what the fans in your servers will be doing.

If you let the inlet temperature go up to 27°C, the temperature in the hot aisle at the back of the servers will be around 35°C. You will want to make sure all the connectors are on the front of the system, as no one will want to spend much time in the hot aisle.

Secondly, any cooling system works more efficiently when it is working on a high temperature difference (delta-T). That’s slightly counter-intuitive, but it’s basic thermodynamics: there’s a bigger driving force to move the heat when delta-T is greater.

This is one reason why it’s good to contain the hot air coming out of the back of your servers and exclude the cool air that slips past the racks. Hot-aisle containment means your cooling system is only working on the air that needs to be cooled.
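The relationship behind both points is the air-side heat balance, Q = ṁ·cp·ΔT: for a fixed heat load, a wider delta-T means less air has to be moved, and a 27°C inlet with a typical 8K rise across the servers is what puts the hot aisle around 35°C. A short sketch with assumed figures:

```python
# Air-side heat balance: Q = m_dot * c_p * delta_T. For a fixed IT
# heat load, a wider delta-T means less air (and less fan energy)
# is needed to carry the heat away. Figures below are illustrative.
C_P_AIR = 1.006   # kJ/(kg*K), specific heat capacity of air
RHO_AIR = 1.2     # kg/m^3, approximate density of air

def airflow_m3_per_s(heat_load_kw: float, delta_t_k: float) -> float:
    mass_flow_kg_s = heat_load_kw / (C_P_AIR * delta_t_k)
    return mass_flow_kg_s / RHO_AIR

for dt in (8.0, 12.0):                    # e.g. 27C inlet to 35C hot aisle is an 8K rise
    flow = airflow_m3_per_s(100.0, dt)    # assumed 100 kW of IT load
    print(f"delta-T {dt:.0f} K: {flow:.1f} m^3/s of air to move")
# delta-T 8 K: 10.4 m^3/s of air to move
# delta-T 12 K: 6.9 m^3/s of air to move
```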


Once you have done all that, your DX system will be doing less work, and you could have a partial PUE (pPUE) of around 1.16, according to Bitterlin. Alternatively, a chilled water system (where the refrigeration unit’s cooling is distributed using water) can get down to a pPUE of 1.12.
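To put those figures in context, here is a rough comparison on an assumed 1MW IT load, taking the cooling subsystem’s pPUE as (IT power plus cooling power) divided by IT power:

```python
# What the pPUE figures mean in absolute terms, assuming
# pPUE = (IT power + cooling power) / IT power and a 1 MW IT load.
def cooling_power_kw(it_load_kw: float, ppue: float) -> float:
    return it_load_kw * (ppue - 1.0)

it_load = 1000.0                            # kW of IT load (assumed)
dx = cooling_power_kw(it_load, 1.16)        # contained DX system
chilled = cooling_power_kw(it_load, 1.12)   # chilled water system
saving_mwh = (dx - chilled) * 8760 / 1000   # hours in a year, kWh -> MWh
print(f"DX: {dx:.0f} kW, chilled water: {chilled:.0f} kW, "
      f"saving about {saving_mwh:.0f} MWh a year")
# DX: 160 kW, chilled water: 120 kW, saving about 350 MWh a year
```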

Doing without DX

But do you need your DX at all? ASHRAE publishes maps showing where in the world the climate is cool enough that outside air can be used to cool a data center all year round. Most of the US is in this zone, and so is the UK, where the record dry bulb temperature is 34°C and the highest wet bulb temperature (with evaporation) is 23°C.

This is the world of “outside air” cooling, “free” cooling or “adiabatic” cooling – all terms that mean cooling without using the air-con. Sometimes filtered outside air is circulated through the data center (“direct” free cooling) and sometimes a secondary circuit is set up (“indirect” free cooling). Evaporating water over the cooling coils may be needed when the outside temperature is higher.

This might get you to a pPUE of 1.05, says Bitterlin, but there are some complications. One issue is that PUE depends on the utilization of a data center. If there are unused servers, this can increase the PUE, but adiabatic cooling has an opposite trend: “Under a partial load, adiabatic gets better,” he says. This means that beyond a certain point, chasing a lower PUE can be counter-productive. “We caution against being enslaved to PUE and having all your future strategies dictated by it,” says IDC research manager Kelly Quinn.
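A toy model makes the utilization point concrete. The numbers are assumptions, not measurements: a cooling plant with a largely fixed overhead sees its PUE deteriorate at partial load, while an overhead that tracks the IT load – closer to adiabatic behaviour – holds roughly steady or, as Bitterlin says, gets better.

```python
# Toy comparison of how PUE responds to utilization, with assumed
# figures: a fixed 300 kW cooling overhead versus an overhead that
# tracks the IT load at 10 percent.
def pue_fixed_overhead(it_kw: float, fixed_kw: float = 300.0) -> float:
    return (it_kw + fixed_kw) / it_kw

def pue_load_tracking(it_kw: float, ratio: float = 0.10) -> float:
    return (it_kw + it_kw * ratio) / it_kw

for it_load in (1000.0, 500.0):   # full load, then half load
    print(f"IT load {it_load:.0f} kW: "
          f"fixed-overhead PUE {pue_fixed_overhead(it_load):.2f}, "
          f"load-tracking PUE {pue_load_tracking(it_load):.2f}")
# IT load 1000 kW: fixed-overhead PUE 1.30, load-tracking PUE 1.10
# IT load 500 kW: fixed-overhead PUE 1.60, load-tracking PUE 1.10
```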

PUE isn’t everything

Avelar agrees: “PUE has done great things for the industry, but it is important to not look at that blindly.” In his example, when the server fans switched on, the PUE of the data center would go down, even while the overall energy used to cool it was going up and its efficiency was getting worse.
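A simple worked example, with assumed figures rather than Avelar’s data, shows the effect: because server fan power counts as IT load, PUE can improve even as total consumption rises.

```python
# Avelar's point in numbers (illustrative figures, not his data):
# server fan power counts as IT load, so ramping fans can lower PUE
# even though the site as a whole draws more power.
def pue(it_kw: float, cooling_kw: float) -> float:
    return (it_kw + cooling_kw) / it_kw

before = pue(it_kw=1000.0, cooling_kw=200.0)   # cool setpoint, fans quiet
after = pue(it_kw=1050.0, cooling_kw=170.0)    # warmer setpoint: chillers save
                                               # 30 kW, server fans add 50 kW
print(f"PUE: {before:.2f} -> {after:.2f}")     # 1.20 -> 1.16, looks better
print(f"Total power: {1000.0 + 200.0:.0f} kW -> {1050.0 + 170.0:.0f} kW")
# Total power: 1200 kW -> 1220 kW, actually worse
```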


Avelar warns that adiabatic cooling kit can raise availability concerns. These might be “paranoid,” but there are physical limits to what outside air can achieve, and in some parts of the world the concern will be justified.

More practically, adiabatic units are big and heavy, and will be tricky to retrofit into older data centers. New sites will try to put them on the roof, although they have to be fairly close to the IT floor.

Sound complicated? It all boils down to keeping a cool head and doing the math while your servers get warmer.

This article appears in the July/August 2015 issue of DatacenterDynamics magazine