For at least 15 years, there's been a consensus in the data center industry that the way to run data centers more efficiently is to run them warmer. Now, just when some of the more conservative parts of the industry are finally taking action in response, there are signs that things may not be so simple.

On the face of it, the logic is obvious. When the industry started to take efficiency seriously in the mid-2000s, engineers noticed that data centers were spending half their energy (or more) on cooling the building, with only half (or less) actually reaching the racks of IT servers.

If that overhead could be reduced, the data center as a whole would use less energy. The industry created the PUE (power usage effectiveness) metric, and from then on focused on cutting the energy used to cool data centers.
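For reference, PUE is simply the ratio of total facility energy to the energy delivered to the IT equipment. Here is a minimal sketch of the calculation, with invented figures:

```python
# PUE (power usage effectiveness): total facility energy divided by
# the energy delivered to the IT equipment. An ideal facility scores 1.0.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# A facility losing as much to cooling and overhead as it delivers to
# the racks - the mid-2000s picture described above - scores 2.0:
print(pue(total_facility_kwh=10_000, it_equipment_kwh=5_000))  # 2.0
```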

World of warmcraft

One way to spend less energy cooling the data center is simply to let it warm up. Operators had been keeping data centers below 20°C, which came to be seen as a waste of energy, because IT hardware works just fine at higher temperatures.

In 2004, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) recommended an operating temperature range from 20°C to 25°C. In 2008, it suggested that temperatures could be raised to 27°C. A further revision has taken this to 32°C (89.6°F) depending on conditions.

The US General Services Administration says data centers save four percent of their total energy for every degree they allow the temperature to climb.
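Taken at face value, that adds up quickly. The sketch below uses invented numbers, and treating the savings as compounding per degree is an assumption here - only the per-degree figure comes from the GSA:

```python
# Hypothetical illustration of the GSA's ~4%-per-degree figure applied
# to a five-degree setpoint increase. Treating the savings as
# compounding (rather than linear) is an assumption made here.
baseline_kwh = 1_000_000          # invented annual energy use
savings_per_degree = 0.04
degrees_raised = 5                # e.g. a 22°C setpoint raised to 27°C

remaining_kwh = baseline_kwh * (1 - savings_per_degree) ** degrees_raised
print(f"{baseline_kwh - remaining_kwh:,.0f} kWh saved "
      f"({1 - remaining_kwh / baseline_kwh:.1%})")
# -> 184,627 kWh saved (18.5%)
```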

Hyperscale companies picked this up: Facebook took its Prineville and Forest City data centers to 29.4°C, while Google went up to 26.6°C. Joe Kava, Google's vice president of data centers, said: “Google runs data centers warmer than most because it helps efficiency.”

For ten months in 2008, Intel took 900 servers and ran half of them in a traditionally cooled data center, while the other 450 got no external cooling, reaching temperatures up to 33.3°C (92°F). There were no ill effects, and Intel claimed it had saved 67 percent of its power budget.

The bulk of the industry didn't seem to take this on board. It took until this year for colocation giant Equinix to announce it would "adjust its thermostats" and make its data centers warmer. Even then, the announcement only promised to push temperatures up towards 27°C - well within the range ASHRAE set 14 years ago.

And it's just a proposal at this stage, with no immediate effect. Equinix says it will notify customers at some point in the future, and negotiate the temperature increase with them, according to a "multi-year global roadmap."

Maybe cooling is cool

It's clear that Equinix is readying a process in which it will still have to convince customers to budge from the levels of cooling that have been the norm. It is pitching this as a way for customers to help Equinix reduce its own emissions, and thereby reduce their Scope 3 emissions - the emissions in their supply chain.

But, surprisingly, there's some evidence that those customers may have a point. Allowing temperatures to rise is actually a mixed blessing.

Professor Jon Summers, of Research Institutes of Sweden (RISE), suggests that much of the benefit claimed for hotter air temperatures may be completely spurious - perhaps an artifact of the outmoded PUE metric.

Optimizing PUE treats energy used in the racks as "good," while energy used outside the racks counts as waste. Professor Summers has carried out research suggesting that cutting the energy consumed by cooling systems outside the racks actually increases the energy servers use to carry out the same calculations.

Warmer servers fire up their internal fans, which draw electricity, and their processors also lose energy and effectiveness through leakage currents.

“Increasing temperatures will improve the ISO PUE of a DC, which a vast majority appear to cite as a measure of efficiency,” says Summers. “At RISE Research Institutes of Sweden, in the ICE data center we have researched the effect of supply temperature on DC IT equipment using wind tunnels, full air-cooled data centers, direct-to-chip, and immersion systems connected to well-controlled liquid cooling testbeds. The upshot is that irrespective of the cooling method, the microprocessors draw more power when operated hotter for the same digital workload, due to current leakages.”
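A back-of-the-envelope sketch shows how PUE can mask this. All the figures below are invented for illustration: cooling overhead falls, server draw rises, the PUE "improves" - and total energy still goes up:

```python
# Invented numbers, purely to illustrate the point: PUE can improve even
# as total energy consumption gets worse, because PUE counts every kWh
# drawn inside the racks (including fans and leakage) as useful IT load.
def pue(total_kwh: float, it_kwh: float) -> float:
    return total_kwh / it_kwh

# Cooler facility: heavy cooling overhead, lower server draw.
cool_it, cool_overhead = 1_000, 500
# Warmer facility: cooling overhead halved, but servers draw more for
# the same workload as fans spin up and leakage currents grow.
warm_it, warm_overhead = 1_300, 250

print(pue(cool_it + cool_overhead, cool_it))    # 1.50
print(pue(warm_it + warm_overhead, warm_it))    # ~1.19 - a "better" PUE
print(cool_it + cool_overhead, warm_it + warm_overhead)  # 1500 vs 1550
```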

So running hotter shifts some energy usage from the building to the IT racks - yet another example of the buck being passed between the old silos of OT (operational technology) and IT.

Summers and his colleague Tor Björn Minde say that data centers should be run "as cold as possible," especially if there's a supply of cool outside air available.

It sounds to me like there's a trade-off here, between energy used in one part of the building and another.

Before long, we should have more insight on this from Interact, the TechBuyer spin-off which analyzes server energy use (and which won a DCD Award). There's a sneak peek of its own experiments here.

In the New Year, we could find ourselves working out a new consensus about data center cooling.
