The past few weeks have seen the latest editions of two series of reports analysing key data centre trends. The first, Cushman & Wakefield's Data Centre Risk Index, returns with a 2016 edition after an absence of three years. The second, the LBNL analysis of American data center energy usage, researched and authored by a panel of respected American academics and practitioners, returns after an initial report was presented to Congress in 2007 and an update to that report was issued by Jonathan Koomey in 2011.

2016 Data Centre Risk Index: Cushman & Wakefield


United States Data Center Energy Usage Report: Lawrence Berkeley National Laboratory

These reports are written for different purposes – one to raise questions, and one to provide answers. They also apply different levels of analytic rigour, but each in its way demonstrates the changing science of data center market analytics. The Cushman & Wakefield report presents a risk profile of 37 countries, ranking them on factors that were weighted on the basis of interviews with 4,000 clients at the end of 2015. This is a statistical process subject to strong variation between customer types, and Cushman & Wakefield sensibly offers the possibility of adapting the weightings to meet an individual company's risk requirements, along with a Data Centre Advisory team to assist in this.
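To make the mechanics concrete, here is a minimal sketch of how a weighted index of this kind combines factor scores into a single ranking value. The factor names, weights and country scores below are invented for illustration; they are not Cushman & Wakefield's actual methodology or data.

```python
# Minimal sketch of a weighted risk index. All factors, weights and
# scores are invented for illustration, not Cushman & Wakefield's data.

# Survey-derived weights, normalized to sum to 1.0
weights = {
    "energy_cost": 0.30,
    "connectivity": 0.25,
    "political_stability": 0.25,
    "natural_disaster": 0.20,
}

# Factor scores per country on a 0-100 scale (higher = lower risk)
countries = {
    "Iceland": {"energy_cost": 90, "connectivity": 60,
                "political_stability": 85, "natural_disaster": 55},
    "USA":     {"energy_cost": 70, "connectivity": 95,
                "political_stability": 80, "natural_disaster": 65},
}

def index_score(scores, weights):
    """Combine factor scores into a single weighted index value."""
    return sum(weights[f] * scores[f] for f in weights)

# A client with different priorities can re-weight the same scores,
# which is effectively what the report's adaptable weighting offers.
for name, scores in countries.items():
    print(f"{name}: {index_score(scores, weights):.1f}")
```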


The risks of risk assessment

There are some possible problems with the approach as demonstrated in this publication. Large data center markets, particularly the USA but also notably India, Canada, Russia and Brazil, are a patchwork of different sub-markets. Most markets of any size divide their data centers between central city areas, fringe urban areas, smaller towns and remote locations, and one would imagine that each has different risk components.

The 2016 report is 50 percent shorter than the 2013 version, and there is no demonstration of how rankings combine to form an overall score (the kind of itemized performance diagnostics that make publications such as the World Economic Forum's Global Information Technology Report more navigable). The very useful thumbnail sketches of each market from the 2013 report have been replaced by simple maps, in case you didn't remember where Hong Kong was, and the more detailed risk profiling of 2013 has given way to more generalized discussions of data center topics.

The weighting criteria are mostly risks – but not all of them are. How is sustainability a risk? Is it the risk of inadequate sources of renewable energy, or the risk (actual or perceived) associated with the reliability of sustainable energy? The same goes for GDP per capita – a lower figure may not necessarily be bad so long as it is growing and the market is able to purchase IT hardware and services. And there is no mention of workforce capabilities and skills as a risk factor, even though this factor has usually rated third or fourth in the perceived risks listed by the DCD annual Data Center Census, not far behind issues that do appear in the Cushman & Wakefield index: operational costs, power and labour costs as components of opex, power availability and adequacy of connectivity.


Did the Nordics improve?

This brings us to the ranking outputs, which reflect a shift between 2013 and 2016 from the major data center markets of Western Europe and the United States to the cleaner, cooler, renewables-richer countries of Iceland, Sweden, Norway, Finland, Switzerland and Canada. The drop in position for three of the five largest data center markets in the world – the United States, the United Kingdom and Germany – from 2013 to 2016 is emphatic. Sure, the weighting process has been firmed up, but this should affect all countries equally. There is no 'real' data to indicate whether the major markets have got worse as the Nordics have improved, or whether the latter have simply capitalised far better on changing requirements.

The short discussions that round out this report actually indicate what is missing from the statistics: that data center decision making is a staged process, that different factors carry different weights at different stages of that process, and that risk is usually traded against cost and opportunity (there are, after all, risks associated with not doing something and missing an opportunity). The sketch below suggests what stage-dependent weighting looks like in practice.
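Reusing the `countries` and `index_score` definitions from the illustrative sketch above (again with invented weights), the same factor scores can produce different rankings at a shortlisting stage, where political risk might dominate, than at a final costing stage, where energy cost might dominate:

```python
# Hypothetical stage-specific weights applied to the same invented
# country scores; reuses countries and index_score from the sketch above.
stage_weights = {
    "shortlisting":  {"energy_cost": 0.10, "connectivity": 0.20,
                      "political_stability": 0.50, "natural_disaster": 0.20},
    "final_costing": {"energy_cost": 0.50, "connectivity": 0.20,
                      "political_stability": 0.15, "natural_disaster": 0.15},
}

for stage, w in stage_weights.items():
    # Rank countries from best to worst score under this stage's weights
    ranking = sorted(countries, key=lambda c: index_score(countries[c], w),
                     reverse=True)
    print(stage, ranking)
```

With these invented numbers the USA leads at shortlisting while Iceland leads at final costing, which is exactly the kind of stage-to-stage reversal a single static index cannot express.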

The report itself acknowledges upfront that it is intended as one input among a number of others. Yet the Cushman & Wakefield Risk Index will certainly provoke discussion and further analysis of locational options for organisations that have some freedom to choose the location of their facilities or of their data – at least for as long as geography remains a factor in the data center world.


Nitty-gritty power study

While the Cushman & Wakefield publication demonstrates the value of simplicity, the second review publication – the Ernest Orlando Lawrence Berkeley National Laboratory's United States Data Center Energy Usage Report – is necessarily an object lesson in dotting 'i's and crossing 't's. After the quite substantial but necessary revisions to the forecasts of the original 2007 report, the 2016 document is very much more step-by-step than its predecessor as it works its way through the complex and changing data center energy consumption process.

The report provides a careful path through databases, sources, working analyses, equations and algorithms. Technical definitions have been updated across all equipment types to reflect the evolution of the industry, and the report also describes the assumptions made at key points in the analytic process: the frequency of IT refresh, server utilisation, port speed, the volume of 'unbranded' equipment, and power draw by server class. A number of these items are flagged for further study in the closing chapter.
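The overall shape of such a bottom-up estimate is straightforward, even if the report's actual inputs are far more granular. Here is a minimal sketch, with invented installed-base counts and power draws rather than LBNL's figures:

```python
# Minimal bottom-up sketch of annual server electricity use by class.
# The installed-base counts and average power draws are invented for
# illustration; they are not LBNL's figures or methodology.

HOURS_PER_YEAR = 8760

# (installed units, average active power draw in watts) per server class
server_classes = {
    "volume":    (10_000_000, 250),
    "mid-range": (500_000, 600),
    "high-end":  (50_000, 5000),
}

def annual_twh(units, avg_watts):
    """Annual energy in terawatt-hours: units * watts * hours / 1e12."""
    return units * avg_watts * HOURS_PER_YEAR / 1e12

total = sum(annual_twh(units, watts)
            for units, watts in server_classes.values())
print(f"Illustrative server IT energy: {total:.1f} TWh/year")
```

In the real analysis each of those inputs (refresh rates, utilisation, unbranded volumes) shifts the class counts and power draws, which is why the report documents its assumptions so carefully.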

While the ease of reading this report will depend on the reader's grasp of the technology and analytics described, there are some possibly unintended gems. The careful attention paid to drawing out the lessons of the previous research, and to the changes in assumptions and definitions, means the report can be read laterally as a history of American data centers from the turn of the century on. An earlier 2007 report produced on behalf of AMD by one of the authors, Jonathan Koomey, computed energy consumption by relevant classes of server worldwide; while the focus of the updates has been on the United States, any attempt to apply the processes more widely across other countries would now be a fascinating and worthwhile exercise.

According to the LBNL 2016 report, the much reduced rate of growth in data center energy consumption will be driven by the shift away from 'on premise' to hyperscale data centers. We can assume that the United States is well ahead of the rest of the world in this transition: it accounts for some 70 percent of the facilities in the world that are large enough to offer the economies of scale implicit in hyperscale. So, is the US now effectively on a different planet? Which of the consumption and efficiency paths are the markets in the slipstream of the United States now following?

Solid conclusions 

There is little to argue with in such a thorough and well-staged report; where there might be doubt, this is pointed out. The road to hyperscale is less a mass exodus than a series of ducking and weaving manoeuvres as companies work out the environments that best suit their IT requirements, but the LBNL report is concerned with the overall direction rather than individual journeys. The very largest data centers in the United States use (reports suggest) only 5 to 6 percent of total US data center electricity, and the report points out that half of servers are in server rooms and closets, where the potential and the incentive to save energy may be far less than in the larger facilities (short of decommissioning those rooms and closets, of course).

It is possible that there is a ceiling for the use of public cloud, and also a point beyond which energy efficiency cannot go much further (although the PUE assumptions for 2016 are based on only a 'modest' improvement over 2007, suggesting there is still room to improve).
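PUE is the ratio of total facility energy to IT equipment energy, so even a modest assumed improvement rescales the whole national estimate. A back-of-envelope illustration, using invented figures rather than the report's assumed values:

```python
# PUE = total facility energy / IT equipment energy.
# Both figures below are hypothetical, for illustration only.
it_energy_twh = 40.0          # hypothetical national IT equipment energy

for pue in (2.0, 1.8, 1.5):   # hypothetical average PUE levels
    print(f"PUE {pue}: total facility energy "
          f"{it_energy_twh * pue:.0f} TWh/year")
```

On these made-up numbers, moving the average PUE from 2.0 to 1.8 shaves 8 TWh a year off the total without touching the IT load at all, which is why the 'modest' assumption matters.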

The key overall finding – that US data centers are not 'hogs' – is perhaps counter-intuitive and surprising. That may be partly the result of a media heritage that sought out examples of energy wastage like truffles. Possibly a report in 2025 or thereabouts will surprise us again, hopefully in a positive way. But the LBNL report is constructed for the post-cloud prediction era: modular, so that assumptions can be replaced as soon as they are updated.


Nick Parfitt is senior global analyst at DCD Intelligence