DCIM, or data center infrastructure management, has been around for a long time at this point. After all, as companies’ IT needs became more complex, so did the task of staying on top of them.
In the data center industry, as elsewhere in technology, version numbers seem to march upward across the board: Industry 3.0 to Industry 4.0, 3G to 5G (and beyond), and now DCIM 1.0, 2.0, and 3.0. But what does this progression look like, and why was it necessary?
Schneider Electric’s Stephen Brown is in the perfect position to tell us. Currently the segment director of Cloud and Service Providers, and previously head of Product Management, he sat down with us at our recent DCD>Connect London event for a conversation whose recurring theme was change.
“In 2012 when I started working, it was a lot of enterprise data centers. That was the big focus, and we're talking about two and a half to five megawatts as some of the biggest data centers around. One of my first projects was our first reference design for a 20-megawatt data center. We were thinking, why are we investing this time? No one's gonna build this big,” said Brown.
“Now, of course, the entire thing has changed. Now we're clearly in a hybrid world. People moving to the cloud, that's driving so much of the way that IT is managed today, but that's no longer the only piece, you know, now we're coming back towards the Edge.”
The result is that many organizations now rely on ‘distributed IT’, with their IT spread across many locations of different kinds, making data center infrastructure management more nuanced and more complicated.
According to Brown, in the sprawling enterprise IT architecture of today, around 60 percent of workloads sit in the public cloud and 20 percent in retail colocation, while the remainder makes up a significant on-premises footprint.
For Brown, the breakdown of DCIM 1.0, 2.0, and 3.0, is less about the offers of Schneider Electric, and more about how the company sees the category of capabilities evolving.
So what do the different phases of DCIM look like?
“1.0 is actually even prior to that. That's where we think of this idea of the client-server as the predominant form of enterprise management, then you were looking at pockets of small to medium data centers. That drove the bulk of IT, and the DCIM that you needed, the infrastructure management, looked a lot different,” explained Brown.
“It was a lot simpler, in some ways, because you had a lot of onsite support, you had people there, you needed some level of monitoring and management, but it was really just all about resiliency.
“Then as we go into phase two, data centers were getting much larger and more complex. You had a whole host of new issues. It went beyond just monitoring, and extended to planning and modeling. We needed to know how we could leverage solutions to understand, ‘where is the best place for a server? I'm getting 50 of them every day, I need space, power, and cooling availability.’
“It got much more complex than the simple DCIM 1.0 requirements. I also think DCIM 2.0 is where we had a lot of friction around the term DCIM. What is it? Why do we need it? It's brittle and complex, it's overbuilt, etc. But there was also a lot of hype around that at the same time as well.”
This friction has not prevented DCIM software from being widely used throughout the data center industry. When asked whether Schneider Electric would continue to distance itself from a name that had been somewhat tarnished, Brown said that the company instead intends to reclaim the term as an engine for enterprise IT progress.
Regardless of what you call data center infrastructure management software, it now needs to handle new challenges, and take on another role within the data center.
“When you get to 3.0, this hybrid world where the requirements have changed, you've got IT disparate across a large geography, and you've got different complex workloads running in different environments. We thought, hey, there is a new category emerging; a new version of DCIM.”
It is in the transition between DCIM 2.0 and 3.0 that the change is most noticeable.
“[For DCIM 2.0], the monitoring didn't need to evolve too much. It became much more about layering and modeling some level of context so that you would know how to manage this data center, whether I have enough cooling, whether I have enough power, but it was still contextualized by an inflexibility with how that could be deployed. It was still on-premises software, still within the four walls.
“Now, a data center no longer has four walls. Now, a data center is a concept more than a physical place for an enterprise. Now, the constraints are more around ‘how do we make this more flexible, more customized, and focusing on the problems of today?’
“Resiliency is still there, it's always going to be there. Now, there’s cybersecurity, everything's connected and ‘part of the furniture’. How are we going to make sure that we protect that infrastructure? Then, most importantly, is sustainability.
“I was just reading a recent McKinsey publication which said that enterprise IT is about half of the carbon emissions of aviation. I think as we rationalize not just the large data centers and their impact, but look into the corporations driving so much of that consumption, those teams, the CIO organizations, aren't equipped to make the right decisions today around sustainability, resiliency, and security.”
DCIM 3.0 is, hopefully, going to change the game. The customer is no longer just the data center manager. Within this ecosystem of tools focused on those three vectors, resilience, security, and sustainability, DCIM 3.0 will act as the CIO of the organization, seeking to elevate the way the data center network as a whole is managed.