I haven’t blogged on this site for some time and was in danger of getting out of the habit. But to blog I need a seed, which usually comes from a spin-rich press release, and there has been a dearth of those recently. But then, just like red buses, three came along at once.

Thermometer – Pixabay / Jarmoluk

Non-compliance with ASHRAE?

The first of the three items that caught my attention involved a temperature sensor company announcing that it had surveyed nearly 200 data centres in the UK and found that 80 percent ‘didn’t comply with ASHRAE Thermal Guidelines’.

Now, from my perspective, although I was impressed by the amount of work that would go into auditing such a large sample, the first thing that struck me was that they quoted ‘18°C to 27°C’ as if these were limits set in stone. That is the current ‘Recommended’ range, but the press release said nothing about the ‘Allowable’ range, nor what class of hardware was installed.

For example, if you purchase Dell servers, they are all rated for an Allowable range of 5°C to 40°C, and I almost refuse to accept that even a few (if any) of those data centres exceeded a 40°C inlet. Not that deliberately raising the temperature above 30°C makes any sense in the UK anyway.
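To illustrate the distinction, here is a minimal sketch of my own (not from the press release) that classifies a measured inlet temperature against the ASHRAE Recommended range of 18°C to 27°C and the Class A3 Allowable range of 5°C to 40°C; other hardware classes have different Allowable limits and would need to be looked up.

```python
# Minimal sketch: classify a server inlet temperature against ASHRAE ranges.
# Recommended range 18-27 degC; the Allowable range shown is Class A3 (5-40 degC).
# Other hardware classes (A1, A2, A4) have different Allowable limits.

RECOMMENDED = (18.0, 27.0)    # degC, ASHRAE Recommended range
ALLOWABLE_A3 = (5.0, 40.0)    # degC, ASHRAE Class A3 Allowable range

def classify_inlet(temp_c, recommended=RECOMMENDED, allowable=ALLOWABLE_A3):
    """Return 'recommended', 'allowable' or 'out of range' for one reading."""
    if recommended[0] <= temp_c <= recommended[1]:
        return "recommended"
    if allowable[0] <= temp_c <= allowable[1]:
        return "allowable"
    return "out of range"

if __name__ == "__main__":
    for t in (16.0, 22.0, 31.0, 42.0):
        print(f"{t:5.1f} degC -> {classify_inlet(t)}")
```

On that basis a facility running at, say, 16°C inlet is ‘non-compliant’ with the Recommended range but comfortably inside the Allowable one – which is rather different from the headline.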

The release does not appear to say whether the temperature excursions were beyond the upper limit. In fact, I would be less surprised if most of the facilities were under the 18°C Recommended ‘limit’, as data centre temperatures are, in my experience, far more often too cold than too hot!

Luckily, it doesn’t mention humidity since, these days, we don’t need to worry about anything other than a -9°C dew-point, so humidity sensors, controls and humidifiers have become redundant in Europe. Also, as each server measures its own inlet temperature (along with more than 30 other internal sensor points), the cooling control system should be driven by the servers, not by sensors in the aisle.
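As a simplified illustration of that control idea (a sketch of mine, not any vendor’s implementation), the supply-air setpoint could be trimmed from the server-reported inlet temperatures, however those readings are gathered in practice (IPMI, Redfish or the OEM’s own management agent):

```python
# Simplified sketch: trim a cooling setpoint from server-reported inlet temps.
# How the readings are gathered (IPMI, Redfish, vendor agents) is out of scope;
# the point is that the servers, not aisle sensors, provide the control input.

def new_supply_setpoint(current_setpoint_c, inlet_temps_c,
                        target_max_inlet_c=27.0, step_c=0.5):
    """Nudge the supply-air setpoint up or down based on the hottest inlet."""
    hottest = max(inlet_temps_c)
    if hottest > target_max_inlet_c:
        return current_setpoint_c - step_c   # hottest inlet too warm: cool harder
    if hottest < target_max_inlet_c - 2.0:
        return current_setpoint_c + step_c   # comfortable margin: save energy
    return current_setpoint_c

if __name__ == "__main__":
    readings = [21.5, 23.0, 26.1, 24.4]      # example inlet temps from four servers
    print(new_supply_setpoint(18.0, readings))
```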

The other common enterprise data centre ‘thermal disease’ is too low a delta-T across the IT load, which results in a low ‘hot return’ temperature and minimises the opportunity for free-cooling. That said, these 200 facilities could easily be advised to comply with best-practice air management (including aisle containment, blanking plates and hole stopping) before they worry about server inlet temperature. Then, if any hot-spots remain, they should employ an experienced computational fluid dynamics (CFD) practitioner with a good set of CFD tools to maximise air-management efficiency and minimise energy consumption.
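The effect of delta-T is easy to quantify. The back-of-envelope sketch below uses my own example figures and standard air properties: raising the delta-T reduces the airflow needed for a given IT load and lifts the return-air temperature, which widens the window for free-cooling.

```python
# Back-of-envelope sketch: airflow and return temperature versus delta-T.
# Standard air properties assumed: density ~1.2 kg/m3, cp ~1005 J/(kg.K).

AIR_DENSITY = 1.2     # kg/m3
AIR_CP = 1005.0       # J/(kg.K)

def airflow_m3s(it_load_kw, delta_t_k):
    """Volumetric airflow needed to remove it_load_kw at a given delta-T."""
    return (it_load_kw * 1000.0) / (AIR_DENSITY * AIR_CP * delta_t_k)

if __name__ == "__main__":
    supply_c = 22.0                       # example supply-air temperature
    for dt in (8.0, 12.0, 20.0):
        flow = airflow_m3s(100.0, dt)     # example 100 kW IT load
        print(f"dT {dt:4.1f} K: {flow:5.2f} m3/s, return air {supply_c + dt:.1f} degC")
```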

If the press release is accurate then not enough of the facilities are thermally managed correctly.

So, what do I think of the ‘ASHRAE non-compliance’ press-release? Well, any publicity is good publicity but it doesn’t make much sense as it stands.

Good things about ‘edge’?

This ‘seed’ was neither funny nor particularly wrong, but it is a great example of how a lack of joined-up strategy can occur in a big organisation.

A comment piece referring to White Paper 174 published by Schneider (possibly revised recently?) extolled the good things about ‘edge’ computing in facilities of up to 10kW – i.e. one or two racks. I seem to remember that the list of good things was three items long but, at my age, I could be mistaken. Regardless, the one good thing that was NOT mentioned was the recovery of the 10kW of waste heat by applying liquid cooling. In my opinion, this is the most important factor in future ‘edge’ applications for the IoT, driverless cars, smart grids, smart transport, smart buildings etc. Conversely, no-one can use hundreds of MW of waste low-grade air-cooled hyperscale heat in remote cold regions where power is plentiful and cheap – but in the city centre, hot water at 70°C is highly usable.
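The arithmetic behind that point is straightforward. The sketch below is my own rough illustration: the 10kW load is taken as continuous, and the household heat demand figure is an assumed, illustrative value rather than a sourced statistic.

```python
# Rough sketch: annual heat available from a continuously loaded 10 kW edge rack.
# The household heat demand figure is an illustrative assumption, not a sourced value.

EDGE_LOAD_KW = 10.0
HOURS_PER_YEAR = 8760
HOUSEHOLD_HEAT_KWH_PER_YEAR = 12000.0   # assumed illustrative figure for one home

annual_heat_kwh = EDGE_LOAD_KW * HOURS_PER_YEAR            # ~87,600 kWh/year
homes_equivalent = annual_heat_kwh / HOUSEHOLD_HEAT_KWH_PER_YEAR

print(f"Recoverable heat: {annual_heat_kwh:,.0f} kWh/year "
      f"(~{homes_equivalent:.0f} homes' worth, on the assumed demand figure)")
```

Even on those rough numbers, a single ‘edge’ cabinet rejecting its heat as 70°C water is worth capturing in a city-centre building; the same heat diluted into warm air in a remote hyperscale hall is not.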

So, what did I see as strange? Well, an investment arm of Schneider has investments in Iceotope, the UK-based and globally leading start-up in liquid cooling. Surely someone in marketing needs to get the spin spun?

DCIM is a success? That’s news to me…

Intel has been pushing its Data Centre Manager software which, I guess, is potentially as close to a true DCIM product as anything can be. Since the M in DCIM (data centre infrastructure management) doesn’t actually manage anything, but simply informs the human operator so that they can make the (perhaps better) decisions and take any action, the Intel idea, based on the chip and therefore, presumably, on ‘live’ processor load, could be a huge step forward.
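To show what ‘based on the chip’ could mean in practice, here is a minimal sketch of mine (not the Intel product itself) that derives live CPU package power from the on-chip RAPL counters that recent Intel processors expose under Linux; it assumes an Intel CPU, the /sys/class/powercap interface being present, and sufficient privileges to read it.

```python
# Minimal sketch: estimate 'live' CPU package power from Intel RAPL counters.
# Assumes a Linux host exposing /sys/class/powercap/intel-rapl:0/energy_uj and
# permission to read it; this illustrates the idea only, not Intel DCM itself.
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(RAPL_ENERGY) as f:
        return int(f.read().strip())

def package_power_watts(interval_s=1.0):
    e0 = read_energy_uj()
    time.sleep(interval_s)
    e1 = read_energy_uj()
    # The counter is cumulative microjoules and wraps; ignore negative deltas here.
    delta_uj = max(e1 - e0, 0)
    return delta_uj / 1e6 / interval_s

if __name__ == "__main__":
    print(f"CPU package power: {package_power_watts():.1f} W")
```

Feed that sort of live, per-server figure upwards and the infrastructure could, in principle, be matched to the real IT load rather than to nameplate ratings.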

My only reservation is that I hope the Intel initiative won’t go the same way as much of their on-chip thermal management firmware – turned off or overwritten by the server OEM before shipment to the user. However, before too much thought could be applied to that idea, I was flabbergasted to read an Intel article (more or less) entitled ‘DCIM much more successful than expected’.

Wow! Who did they ask? Anyone in Europe? They even claimed that 80 percent of the surveyed users had a DCIM product in service. If that were true, why is a high proportion of the world’s DCIM OEMs trying to find a buyer for their business before they must invest more cash in better products?

There is, as far as I can tell, no simple return on investment (ROI) for DCIM. There is no doubt that it can be an attractive nice-to-have, but it does no more than a combination of a BMS, an EMS and the MMIs from the UPS, cooling, fire and security systems. Most NOCs have multiple screens for clarity on each system, so what is the advantage of a ‘single window’?

It’s not as if DCIM products ‘fail’. They don’t; they work. But what do they do? The point is that they generally don’t replace anything (apart from, possibly, an asset management tool kept on spreadsheets), cost quite a lot to install and integrate, and are quite hard to sell. Who do you sell it to? And when in the data centre build cycle?

And it’s not just me. Everywhere you look you will find debate on the DCIM hype-cycle and whether or not it will survive long enough to mature – probably into something like the Intel product, based on live processor load or the instantaneous UPS output. So, the Intel article claiming that ‘80 percent of the survey has a DCIM’ really took me by surprise. Or do you know better?

Ian Bitterlin is a consulting engineer at Critical Facilities Consulting Ltd and a visiting professor to the University of Leeds, School of Mechanical Engineering. He also coaches data center staff at DCPRO.