
Five years ago, the nascent Data Center Infrastructure Management (DCIM) industry was thrilled to offer its earliest adopters the ability to model and view, in high fidelity, the floor plans and rack elevations found inside a data center.

In this first phase of the DCIM marketplace, it was awe-inspiring that such an accurate visual representation of the data center could be created by description alone, manipulated on screen and then reported in whatever manner suited the user's needs. Demo after demo was given by a dozen or more of the earliest DCIM vendors, where the main criterion reviewed by prospects was the simple ability to render photo-realistic images of racks.

End users appreciated what they saw in these early DCIM solutions, but to a large degree they struggled to place a specific value on what that new set of capabilities meant to their business challenges. In some cases the ability to easily manipulate and document their IT structure was enough, and solutions were purchased for the visualization features alone, but in general the market was waiting for the big business value to present itself.

In the second and current phase of DCIM, it has become crucial to connect the DCIM model to the outside world in real time. The core business value of any DCIM solution is rooted in how well it is integrated with the overall forward-looking data center management domain: not just connections to power and temperature sensors, but to all IT service management (ITSM) processes and systems as well. More information simply yields better decisions, and in a world focused on creating cost-effective Service Catalogs, this connectivity becomes the only means to determine the true cost of delivering any given service.
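
As a rough illustration of the cost roll-up that this connectivity makes possible, consider the minimal Python sketch below. The asset data, rates, and field names are all hypothetical and do not represent any vendor's actual API; the point is simply that measured power draw plus per-asset fixed costs can be summed into a per-service figure.

```python
# Minimal sketch of a per-service cost roll-up enabled by DCIM connectivity.
# All names, rates, and data shapes are hypothetical illustrations.

POWER_COST_PER_KWH = 0.12   # assumed utility rate, USD
HOURS_PER_MONTH = 730

# Assets supporting a single catalog service, as reported by an integrated DCIM model.
service_assets = [
    {"name": "web-01", "avg_kw": 0.45, "monthly_maintenance": 40.0},
    {"name": "db-01",  "avg_kw": 0.80, "monthly_maintenance": 90.0},
]

def monthly_service_cost(assets):
    """Combine measured power draw with fixed per-asset costs into one estimate."""
    power = sum(a["avg_kw"] * HOURS_PER_MONTH * POWER_COST_PER_KWH for a in assets)
    maintenance = sum(a["monthly_maintenance"] for a in assets)
    return power + maintenance

print(f"Estimated monthly cost: ${monthly_service_cost(service_assets):,.2f}")
```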

With costing in mind, DCIM today is about enabling better decisions in the optimization of the data center and its ongoing operations. DCIM is being used to enable and enforce best practices and to add consistency across the enterprise and all of its data centers. DCIM is about creating highly supportable structures, where all of the processes used to generate and maintain those IT structures can be optimized, defended and repeated, regardless of who builds them or where they are found as companies grow.

Today, a number of key trends have emerged in the broad DCIM category. Here are ten.

1: The formation of two camps:
As of now, two major camps have formed in the DCIM market segment: 1) the management of IT assets within racks over long periods of time, and 2) mechanical, electrical and plumbing (MEP) or facilities-style management of power and cooling.

Most vendors are beginning to acknowledge the existence of both of these camps and to state their intention to deliver solutions that cross the divide. Today each vendor's delivered solution is rooted predominantly in one of these camps, and in most cases they 'peek' across the line to provide some limited functionality on the other side as well. Will these camps converge? Time will tell. The promise is there, but the delivery of such truly converged DCIM offerings has eluded all players to date.

2: DCIM as a strategic tool:
As the technology has transitioned from curiosity to high-priority consideration, most end users realize that DCIM must become part of their management strategy rather than be treated as yet another tactical tool.

Data centers have grown up over the years on tool after tool. Today, the trend is to think bigger and more strategically and to identify higher-value solutions for all stakeholders. In the most successful DCIM deployments, large user populations across many disciplines begin to take advantage of DCIM solutions and treat DCIM as a "must-have" rather than a "nice to have." Some of today's DCIM deployments are used by hundreds of users across IT, facilities, finance and even the executive suite. DCIM succeeds when it is viewed strategically and when its value can be tied directly to the costs to build and maintain the data center.

3: System of Record status:
As the data center digital convergence continues, we see the desire for fewer and fewer tools. The trend is to find deeper and more integrated solutions which share information. Tools that require duplication of data are on their way out. Integration at the configuration item (CI) level is critically important. In the most successful DCIM deployments, the integrity of the DCIM repository has become so high that the DCIM system itself is treated as the "System of Record" for data center assets.

DCIM essentially becomes the most accurate representation of the data center, even more accurate in many cases than the general ledger maintained in Finance.

4: Data center change through process re-engineering:
Data centers have always been an accumulation of change. What starts out as a crisp, clean data center design quickly becomes a fairly chaotic accumulation of changes that are poorly documented and in most cases highly inefficient. The trend today is to design and schedule all of the change that occurs in the data center.

Conceptually, the efficiency of a data center can remain high as long as disciplined change management processes are used. As long as change happens with the support of the change management processes found in DCIM, the DCIM model remains an accurate reflection of the state of the data center itself, and the data center stays optimized. The trend today is therefore to expect answers to new business questions by reviewing this model, rather than by walking the data center aisles and trying to answer those questions by observing what actually exists on the floor.

5: SaaS is gaining in popularity:
While DCIM deployments initially began as on-premise solutions, most DCIM vendors have since begun to offer Software-as-a-Service (SaaS) versions of their solutions. DCIM-as-a-service capabilities are becoming identical to each vendor's existing on-premise solution.

One reason for the increasing trend to use SaaS versions of DCIM is that SaaS offerings can be funded using operational expenses rather than capital expenses. While the approval process may be similar, the perception is that SaaS offerings are much more open and flexible. It is important to note that while the trend is to consider DCIM as a service, the level of commitment must be the same for on-premise and SaaS delivery in order to realize a successful DCIM deployment.

6: DCIM is no longer considered an island:
Whereas early functionality was derived from stand-alone capabilities, integration with ITSM systems, including change management and CMDBs, is now critically important to make DCIM a strategic part of the data center management framework. The trend today is to build the desired higher-value integration capabilities into the selection process for a DCIM supplier. Above all, the approach of deploying DCIM today and figuring out how to integrate it later is all but a thing of the past.

Complete thinking about how DCIM can be leveraged by other core management solutions now happens at the very beginning of the DCIM journey.

7: Tight integrations with virtualization solutions taking place:
More than half of all servers today are virtualized, so it only makes sense that the most successful DCIM solutions must be able to tightly integrate with the industry’s top virtualization platforms.

The trend today is for prospects to look for DCIM solutions that can provide real-time representation of the existence and status of virtual servers on each of the base hardware devices. DCIM must maintain a view of the hardware itself as well as the guest operating systems in real-time. The trend is to use this connected view of virtualization to do capacity planning from the operating systems down to the server, and ultimately down to the physical resources required (space, power, cooling).
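
To make that roll-up concrete, here is a minimal, hypothetical sketch walking from guest VMs down to hosts, racks and rack-level power. The data shapes, names and limits are invented for illustration and do not reflect any particular virtualization platform or DCIM product.

```python
# Illustrative sketch only: rolling a virtualization inventory down to physical capacity.
# Names, data shapes, and limits are hypothetical.

vms = [
    {"name": "app-vm-1", "host": "esx-01", "vcpus": 8},
    {"name": "app-vm-2", "host": "esx-01", "vcpus": 4},
    {"name": "db-vm-1",  "host": "esx-02", "vcpus": 16},
]

hosts = {
    "esx-01": {"rack": "R12", "cores": 32, "rated_kw": 0.9},
    "esx-02": {"rack": "R12", "cores": 48, "rated_kw": 1.1},
}

rack_budget_kw = {"R12": 6.0}   # assumed usable power per rack

# vCPU demand rolled up from guests to each host.
vcpu_demand = {}
for vm in vms:
    vcpu_demand[vm["host"]] = vcpu_demand.get(vm["host"], 0) + vm["vcpus"]

# Power demand rolled up from hosts to each rack.
rack_demand_kw = {}
for name, host in hosts.items():
    rack_demand_kw[host["rack"]] = rack_demand_kw.get(host["rack"], 0.0) + host["rated_kw"]

for name, host in hosts.items():
    print(f"{name}: {vcpu_demand.get(name, 0)}/{host['cores']} vCPUs committed")
for rack, kw in rack_demand_kw.items():
    print(f"{rack}: {kw:.1f} kW of {rack_budget_kw[rack]:.1f} kW budget")
```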

8: Increased usage of built-in sensors:
There has always been a desire to understand the environmental metrics found in the data center, but deploying these types of sensor solutions has always been costly and cumbersome. The trend today is for IT gear manufacturers to build hardware sensors into each piece of enterprise-class gear they make.

These sensors can report power usage, various internal temperatures and physical security status.

The trend is to take full advantage of these built-in sensors, and then combine them with additional sensors (typically wireless) for areas that are not part of an active gear location. The most successful DCIM software solutions combine these sensor sources in real time to keep the DCIM model accurate.
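
As an illustration of what that combination might look like, the following sketch merges readings from built-in and supplemental wireless sensors, keeping the freshest reading per location. The feed formats and location names are assumptions, not a specific product's API.

```python
# A minimal sketch, assuming hypothetical reading feeds: merge built-in device
# sensors with supplemental wireless sensors, keeping the freshest reading per location.

from datetime import datetime, timedelta

now = datetime.now()

built_in = [
    {"location": "R12-U20", "temp_c": 24.5, "ts": now - timedelta(seconds=30)},
    {"location": "R12-U32", "temp_c": 27.1, "ts": now - timedelta(minutes=10)},
]
wireless = [
    {"location": "R12-U32", "temp_c": 26.4, "ts": now - timedelta(seconds=15)},
    {"location": "cold-aisle-3", "temp_c": 21.0, "ts": now - timedelta(seconds=45)},
]

def merge_latest(*feeds):
    """Keep the most recent reading seen for each location across all feeds."""
    latest = {}
    for feed in feeds:
        for reading in feed:
            loc = reading["location"]
            if loc not in latest or reading["ts"] > latest[loc]["ts"]:
                latest[loc] = reading
    return latest

for loc, r in merge_latest(built_in, wireless).items():
    print(f"{loc}: {r['temp_c']} C at {r['ts']:%H:%M:%S}")
```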

9: CI/Asset accuracy via multiple means:
There is a growing trend within the enterprise data center to provide some form of check-and-balance process to continuously ensure that the DCIM model accurately reflects the data center floor itself. This check and balance takes many forms: from very manual audit approaches using spreadsheets, to handheld barcode and/or automated RFID-based location technology, to combining in information from discovery or CMDB connectors.

In any case, the trend is to raise the accuracy of these data center models in all ways that are practical, through ITSM integrations, integrations with virtualization systems, as well as location-based technologies.
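
A minimal sketch of one such check and balance, assuming hypothetical asset and discovery data, might simply diff what the DCIM repository records against what a discovery scan actually sees.

```python
# Hedged sketch of one "check and balance": diff the assets recorded in the DCIM
# repository against assets seen by a discovery scan. Identifiers are hypothetical.

dcim_assets = {"SN-1001": "R12-U20", "SN-1002": "R12-U22", "SN-1003": "R14-U05"}
discovered  = {"SN-1001": "R12-U20", "SN-1003": "R14-U07", "SN-2000": "R14-U10"}

missing_from_floor = set(dcim_assets) - set(discovered)   # modeled but not seen
undocumented       = set(discovered) - set(dcim_assets)   # seen but not modeled
moved = {sn for sn in set(dcim_assets) & set(discovered)
         if dcim_assets[sn] != discovered[sn]}             # location mismatch

print("Missing from floor:", sorted(missing_from_floor))
print("Undocumented:", sorted(undocumented))
print("Possible moves:", sorted(moved))
```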

10: Enterprise-wide adoption of DCIM:
DCIM solutions started as demonstrations of visualizations across a handful of racks. As the business value from a strategic standpoint became clearer, DCIM deployments dramatically increased in scale, to the point that some DCIM vendor solutions are now seen supporting tens of thousands of racks, across dozens of data center sites.
Remember that in-house, modular and co-location sites all appear as data center sites within a DCIM offering, so it is quite common to have dozens of sites that must be represented in a single instance of a DCIM solution. The trend in IT is to look at all resources at the same time, at any scale, regardless of where they exist.

Today, we can see a changing landscape in the DCIM market, due primarily to the shift from tactical toolsets to a well-conceived data center strategy. DCIM vendors are finding that they must focus on delivering deep value in one or more areas, rather than resting on demonstrations of fairly thin visualization capabilities alone. End users are finding themselves in the position of having to articulate their specific DCIM needs in much more detail than ever before. In fact, IT organizations are now being asked to comprehensively articulate their service portfolios, identifying each of the specific services found in the related service catalogs. DCIM becomes a critical component in creating those service catalog items, since DCIM is all about managing the costs to deliver service.

So what does the next phase of DCIM look like? The next phase of DCIM will all be about control and orchestration. It will leverage the modeling and connectivity perfected in the first two phases, and then add a growing level of automation using the available control mechanisms, replacing manual and human processes with automated ones.

In most cases automation engines will become the means to execute complex sets of rules, based upon vast amounts of performance metric information. While this next phase will take much more time to mature due to the multi-vendor nature of the data center, it is this next phase of DCIM that enables the most cost-effective supply of processing to be more closely aligned with the demand for it.
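
As a purely speculative sketch of that kind of rule execution, with invented metrics, thresholds and actions, an automation engine might evaluate conditions against a metric snapshot and propose the actions for the control layer to carry out.

```python
# Speculative sketch of rule evaluation in an automation engine.
# Metrics, thresholds, and actions are invented for illustration only.

metrics = {"rack_R12_inlet_c": 29.5, "rack_R12_utilization": 0.35}

rules = [
    # (condition over the metric snapshot, action to propose)
    (lambda m: m["rack_R12_inlet_c"] > 28.0, "raise cooling setpoint ticket for R12"),
    (lambda m: m["rack_R12_utilization"] < 0.40, "flag R12 hosts for consolidation"),
]

def evaluate(rules, metrics):
    """Return the actions whose conditions hold for the current metric snapshot."""
    return [action for condition, action in rules if condition(metrics)]

for action in evaluate(rules, metrics):
    print("Proposed action:", action)
```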