It’s an inescapable fact that data centers are changing. That monolith at the side of the freeway may look unchanging at first sight, but beyond the threshold, the way we consume data and the form it takes have transformed, not just over the past two decades, but even over the past two years. Meanwhile, the effect that these facilities have on the environment has never been under more scrutiny. Put together, these factors are driving out-of-the-box thinking to balance exponential demand with protecting the global ecosystem.

Eaton has a century of experience in industrial power management. With the launch of a new digital platform for data centers, it seemed high time we caught up with Mike Jackson, Eaton’s global director of product, data center, and distributed IT software, to look at where we’ve been, where we’re going, and how we’ll get there. 

Drivers of change

He begins by explaining what has driven this dramatic change in the needs of data center operators and their users: “Two big things have changed the industry. One was Covid-19, which changed the landscape of how businesses operate. I don’t think Covid changed how a data center operates, other than the need to operate with a reduction in physically present personnel, but there’s been a big shift for consumers.” 

“During Covid, many businesses that had little to no digital presence shifted their models. For example, moving to online ordering and curbside pickup required a significant amount of IT infrastructure refresh at the store level. What used to be a point-of-sale server and inventory system is now a comprehensive picking and packing e-commerce operation. Because of this shift, we saw more businesses become reliant on data centers to operate, increasing data center consumption from what we would consider non-traditional businesses.”

Machine-led change

As Jackson points out, the real change hasn’t been in data centers – it’s been in us. Nevertheless, facilities have been turned on their heads to accommodate these changes that were on no one’s radar and yet had to be adopted in record time. But it’s been more than the rise of humans. It’s also very much about the rise of the machines: 

“The second thing that’s changed the industry is generative artificial intelligence (GenAI). We’re seeing significantly larger power draws because of GenAI, so much so that we’ve seen customers pause a data center build because they want to get more power into the building and need to rework the design. New data centers that started as 20-megawatt designs now want 40 or even 50 megawatts out of the same envelope. Data center operators understand that GPUs and the things that make GenAI work are power-hungry, which means they’re also cooling-hungry, but the massive uptick in GenAI demand is driving data center operators to react in every way possible to fill the demand. The utility companies are starting to feel the strain of this increased draw as well.”

To pivot is cool

This means that simply pulling out an old CPU-powered rack and replacing it with GPUs is not going to cut it. Cooling dense GPU arrays running at full tilt requires a whole new approach, often a pivot to an entirely different form of cooling altogether, as Jackson explains:

“Not everything in a data center is converted to GPUs. But you’ll have certain parts, whether it’s a row or a cluster or a couple of rows of GPU-based racks, that are significantly more power-dense. We’re seeing companies, especially colocation data centers, get creative with cooling management, specifically with liquid cooling, where we’re seeing an increase in direct-to-chip cooling, full immersion (which is increasingly becoming an option), and rear-door heat exchangers.”

If they’re not careful, data centers can push the limits of the local power grid and the availability of water for cooling while expanding capacity to meet demand. But they are also having to find ways to reduce carbon dioxide and greenhouse gas emissions as the world battles climate change. As Jackson tells us, it’s a delicate balancing act, but there are some clever workarounds:

“Those familiar with the Toyota Production System know about the idea of opposing metrics – you can’t have one metric go up without the other one going down. We’re seeing that situation play out in data centers – the two most “sustainably challenging items,” which are power and cooling, are going up. Which raises the question: How do you keep your progress toward sustainability from going down?

“There are a few solutions. One is understanding when you should be training models, outside the peak demand portion of the day, or when there’s more availability of renewables. Another is to move workloads around – “follow the sun”, as it were. If you have a process that’s eight hours long, you can run it in a data center on the other side of the world where it’s night, and you could potentially have free cooling.”
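
For illustration, here is a minimal sketch of that kind of follow-the-sun placement, assuming hypothetical site names and carbon-intensity figures; a real scheduler would also weigh power prices, capacity, and data locality.

```python
# A minimal sketch of "follow the sun" workload placement, using hypothetical
# sites and illustrative grid carbon figures; not any vendor's implementation.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Site:
    name: str
    utc_offset_hours: int         # crude stand-in for the site's local time zone
    grid_carbon_g_per_kwh: float  # current grid carbon intensity (illustrative)

def is_off_peak(site: Site, now_utc: datetime) -> bool:
    """Treat 22:00-06:00 local time as off-peak / likely free-cooling hours."""
    local = now_utc + timedelta(hours=site.utc_offset_hours)
    return local.hour >= 22 or local.hour < 6

def pick_site_for_batch_job(sites: list[Site], now_utc: datetime) -> Site:
    """Prefer off-peak sites, then the one with the cleanest grid right now."""
    candidates = [s for s in sites if is_off_peak(s, now_utc)] or sites
    return min(candidates, key=lambda s: s.grid_carbon_g_per_kwh)

sites = [
    Site("us-east", -5, 420.0),
    Site("eu-west", +1, 310.0),
    Site("ap-southeast", +8, 520.0),
]
print(pick_site_for_batch_job(sites, datetime.now(timezone.utc)).name)
```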

The sustainability push

Those relatively easy wins are just the beginning, and advances in technology have meant that, with the right infrastructure, you can push even further toward a sustainable facility: 

“All data centers have UPSs (uninterruptible power supplies). A traditional UPS is typically used as a safeguard if the power goes out, to sustain the gap between the outage and the generator kicking in. But now, battery technology has advanced. So, a four-hour battery pack, for example, can feed up to two megawatts of UPS. With lithium-ion, the UPS can sustain significantly more cycles versus lead acid. That means data centers can use UPSs, like Eaton’s Energy Aware UPS, to supplement the grid.” 

“If we have high power demand, we can use the energy that's stored in the lithium-ion batteries to offset the data center power draw until lower cost or renewable energy is available. Once on renewable power, you can also recharge the batteries for the next peak offset. The controller or our power management software maintains a failsafe, so if the batteries are used to increase sustainability, they will always retain enough power to ride through an outage of whatever duration the data center operator decides.”
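
As a rough sketch of the idea, the snippet below dispatches a UPS battery for peak shaving while always preserving a ride-through reserve, using the illustrative figures from the quote (a 2MW UPS with roughly 8MWh of four-hour storage). The reserve logic and names are assumptions, not Eaton's controller.

```python
# A minimal peak-shaving dispatch sketch under assumed figures; not Eaton's
# Energy Aware UPS logic.

BATTERY_CAPACITY_MWH = 8.0      # 2 MW x 4 hours, as in the quoted example
RIDE_THROUGH_RESERVE_MWH = 1.0  # energy the operator insists on keeping for outages
MAX_DISCHARGE_MW = 2.0          # UPS power rating

def discharge_for_peak_shaving(soc_mwh: float, site_demand_mw: float,
                               peak_threshold_mw: float, hours: float) -> float:
    """Return MW to supply from the battery this interval, never dipping
    below the ride-through reserve the operator has configured."""
    excess_mw = max(0.0, site_demand_mw - peak_threshold_mw)
    power_mw = min(excess_mw, MAX_DISCHARGE_MW)
    usable_mwh = max(0.0, soc_mwh - RIDE_THROUGH_RESERVE_MWH)
    return min(power_mw, usable_mwh / hours)  # cap so the reserve is preserved

# Example: 30-minute interval, demand 1.5 MW above the peak threshold
print(discharge_for_peak_shaving(soc_mwh=3.0, site_demand_mw=21.5,
                                 peak_threshold_mw=20.0, hours=0.5))  # -> 1.5
```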

Building intelligence

In other words, the idea of “follow the sun” can take many forms. The secret is to build intelligence into the data center through a combination of intelligent hardware that incorporates sensors and a reliable and robust software management system. Enter Eaton:

“Historically, a data center that is not well monitored can run over-cooled for long periods. Without a way to know this is happening, the facility can use a lot of unnecessary extra power. Once this is identified, the data center can dial back the cooling, but it takes time to react and regulate, and only then will they see the savings. When you combine intelligent hardware (UPS, rack PDU, CRAH, etc.), sensors, and software working together, you enable data center operators to identify and respond with remediation efforts much faster. You're no longer delaying the reaction between being alerted to an out-of-bounds operating condition and ultimately resolving it. Software, in combination with intelligent hardware, shrinks that time so a data center will spend less time in a sub-optimal operating state,” says Jackson.
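
A minimal sketch of that detect-and-respond loop might look like the following, with illustrative supply-air thresholds in the spirit of the ASHRAE recommended range; the sensor names and actions are made up for the example.

```python
# A minimal sketch of spotting over-cooling from sensor readings.
# Thresholds and device names are illustrative assumptions, not Eaton defaults.
from datetime import datetime, timezone

SUPPLY_AIR_LOW_C = 18.0   # below this, the space is likely over-cooled
SUPPLY_AIR_HIGH_C = 27.0  # ASHRAE-style upper bound for the recommended range

def check_cooling(readings_c: dict[str, float]):
    """Flag sensors reporting outside the allowed supply-air band."""
    alerts = []
    for sensor, temp_c in readings_c.items():
        if temp_c < SUPPLY_AIR_LOW_C:
            alerts.append((sensor, temp_c, "over-cooled: raise CRAH setpoint"))
        elif temp_c > SUPPLY_AIR_HIGH_C:
            alerts.append((sensor, temp_c, "too warm: investigate airflow/load"))
    return alerts

detected_at = datetime.now(timezone.utc)
for sensor, temp_c, action in check_cooling({"row3-rack12": 16.4, "row3-rack14": 22.1}):
    # In a real deployment this would raise an alarm or a work order; here we print.
    print(f"{detected_at.isoformat()} {sensor}: {temp_c} C -> {action}")
```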

Data is changing

The challenge comes because every data center is different – from the stalwarts with two decades of reliable service who are suddenly being asked to operate in ways they were never designed for, right through to state-of-the-art facilities. The problem is that the form of data is changing, regardless of a data center’s ability to cope with it. 

Eaton is a leader in power provision for the data center industry – Sebastian Moss

As Jackson explains, “You have data centers across the world with vastly different levels of maturity. On one end, you have data centers that are intelligent and operate extremely well with low PUEs, and on the other, you have data centers that have very little insight into their operation’s real-time performance. There should be a lot of focus on data centers that have been operating for two or more decades because a lot of the infrastructure that they have isn't capable of delivering the sustainability targets desired. We love helping customers in this situation navigate their digital journey. Upgrading their facilities with new hardware and software is truly game-changing for their operations.” 

“Don’t forget, hardware has come a long way in the last 10 to 20 years. A transformer used to be, at best, 92 percent efficient. Now, they’re up to 98-99 percent efficiency. Same thing with the UPS: if it was installed 20 years ago, it’s providing, at best, sub-90 percent efficiency, whereas overall losses for new data center builds are more like three percent across the entire data center.”
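
Taking the quoted figures at face value, a back-of-the-envelope calculation shows why those few percentage points matter; the 20MW load and year-round operation are assumptions for illustration.

```python
# A rough worked example of what the quoted efficiency gains mean, using
# assumed figures: a 20 MW load fed through a transformer at 92 percent
# versus 98.5 percent efficiency, running all year.

IT_LOAD_MW = 20.0
HOURS_PER_YEAR = 8760

def annual_transformer_loss_mwh(efficiency: float) -> float:
    # Input power needed to deliver the load, minus the load itself.
    input_mw = IT_LOAD_MW / efficiency
    return (input_mw - IT_LOAD_MW) * HOURS_PER_YEAR

old = annual_transformer_loss_mwh(0.92)    # ~15,200 MWh lost per year
new = annual_transformer_loss_mwh(0.985)   # ~2,700 MWh lost per year
print(f"saved ~{old - new:,.0f} MWh/year") # roughly 12,500 MWh per year
```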

“It’s about how to balance old versus new. It used to be that data centers were very stable loads. With GenAI, you're testing the data center's ability to spin up and spin down, and that's not the way data centers have historically operated. There's a significant change in the power draw in a data center, which is not the normal operating state that they're used to, and dealing with those changes has injected a lot of complexity for data center operators.”

The Brightlayer approach

For Jackson, the key to solving this conundrum is visibility. Eaton’s Brightlayer Data Centers suite includes a new all-in-one solution for monitoring, controlling, and optimizing data center operations, from the core to the Edge. The platform is the first in the industry to unite asset management, IT and operational technology (OT) device monitoring, power quality metrics, automation, and advanced electrical supervision in a single, native application. 

He explains, “When you can see your operations holistically and clearly, you can make intelligent decisions about managing workloads. You need the intelligence to report on what's going on – the metrics and the KPIs (key performance indicators). If you can't measure that, you don't know what to change and you won't know the impact of any changes. The software gathers information from sensors and intelligence from the infrastructure devices themselves. Then, it aggregates all of these data points together, so you can quickly see trends or anomalies.”
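
As a simple sketch of that aggregation step, the snippet below pools readings per metric and flags values that drift well outside recent history; the data model and threshold are illustrative assumptions, not the Brightlayer implementation.

```python
# A minimal sketch of aggregating device telemetry and flagging anomalies.
from collections import defaultdict, deque
from statistics import mean, pstdev

WINDOW = 60          # keep the last 60 readings per metric
Z_THRESHOLD = 3.0    # flag points more than 3 standard deviations from the mean

history: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(metric: str, value: float) -> bool:
    """Store a reading and return True if it looks anomalous vs recent history."""
    window = history[metric]
    anomalous = False
    if len(window) >= 10:  # need some history before judging
        mu, sigma = mean(window), pstdev(window)
        if sigma > 0 and abs(value - mu) / sigma > Z_THRESHOLD:
            anomalous = True
    window.append(value)
    return anomalous

# Example: a rack PDU branch current suddenly jumping
for amps in [12.1, 12.0, 12.3, 11.9, 12.2] * 3 + [19.5]:
    if ingest("pdu-07/branch-2/current_a", amps):
        print(f"anomaly: {amps} A")
```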

“From there, we can look at moving from supervised automation to something fully autonomous, where the software identifies an issue and makes the change automatically. This allows the value capture from remediations to be almost instantaneous.”

“Combining everything in a single platform, where customers can manage the IT and facility assets in their data center along with the IT assets at tens to thousands of distributed sites, is a game changer. When you know what you have, and you know what you're trying to achieve, you can optimize your operations and physically move or change things in the data center through software.”

Predicting change, built-in

The next stage of automation is predictive analytics, and that too is covered in Eaton’s new offering. 

“Analytics is another key customer need we’re addressing with our platform. We have a lot of ‘state of health’ analytics today. Take a UPS, for example. We have analytics that can predict when a fan, capacitor, or battery in one of our UPSs is going to impact reliability. Our algorithms can provide a six-month warning and then a 60-day warning. This is just one example of how we’re harnessing the power of analytics. We have a team of data scientists working hard to develop new algorithms as quickly as possible to drive even more value for our customers.”
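
In the same spirit as those six-month and 60-day alerts, here is a deliberately simple sketch that extrapolates a component's health trend and maps the remaining life to a warning level; the linear model and thresholds are assumptions, not Eaton's algorithms.

```python
# A minimal sketch of threshold-based health warnings using a linear
# extrapolation; illustrative only.

def days_until_end_of_life(health_pct: float, decline_pct_per_day: float,
                           end_of_life_pct: float = 60.0) -> float:
    """Extrapolate how many days remain before health crosses end-of-life."""
    if decline_pct_per_day <= 0:
        return float("inf")
    return max(0.0, (health_pct - end_of_life_pct) / decline_pct_per_day)

def warning_level(days_remaining: float):
    if days_remaining <= 60:
        return "60-day warning: schedule replacement now"
    if days_remaining <= 182:
        return "six-month warning: plan replacement"
    return None

days = days_until_end_of_life(health_pct=72.0, decline_pct_per_day=0.08)
print(days, warning_level(days))  # 150.0 six-month warning: plan replacement
```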

Another benefit of the platform: it can be deployed in stages. Rather than trying to implement the entire software platform at once, an operator can solve the most important challenges of today and then expand the platform as more capabilities are needed. This can be done in a modular fashion with a simple license key update, which adds the new capabilities to tackle the next use case. 

Eaton’s platform approach also means that customers can start with traditional data center infrastructure management (DCIM) today and add an electrical power monitoring system (EPMS) and/or edge capabilities in the future. The platform can scale and expand as budget or staffing support allows.
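
One way to picture that modular expansion is an entitlement check gated by the license key; the sketch below uses hypothetical module names and a made-up licensing format.

```python
# A minimal sketch of license-gated modules; not Eaton's licensing scheme.

ENTITLEMENTS = {"dcim"}  # modules unlocked by the current license key

def enabled(module: str) -> bool:
    return module in ENTITLEMENTS

def apply_license_update(new_modules: set[str]) -> None:
    """A license key update simply extends the set of enabled modules."""
    ENTITLEMENTS.update(new_modules)

print(enabled("epms"))           # False: not licensed yet
apply_license_update({"epms"})   # operator adds EPMS capability later
print(enabled("epms"))           # True: capability now available
```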

Providing what were traditionally three siloed software applications – DCIM, EPMS, and Edge – in a single platform offers other advantages, like reducing the learning curve for staff, though, of course, Eaton is on hand with support and advice in many forms.

“We've built lots of self-help capabilities into our software. There's a full library of how-to videos, more than 50 today and growing, and formal training is available to customers as well. But we also provide deployment services for our customers, where we can take on as much or as little of the process as customers desire, from deploying infrastructure to creating navigation trees and floor views to building KPI dashboards and integrating with third-party software systems,” says Jackson.

“There are a lot of capabilities inside the platform, which is fantastic for an admin or power user, but not everybody needs that much information. We can personalize views at the user level, so users can see everything or only what they're responsible for. So, data center staff can log into the software, quickly do what they need to do, and then move on to their next priority. When you scale that kind of efficiency across the organization, it can have a big impact.”

Gradual change is possible

Of course, if you are onboarding in stages, it is vital to ensure that the new software works seamlessly with the old. Jackson recognizes the importance of integrating with third-party solutions:

“The core purpose of our software is to cover a data center’s entire physical infrastructure, but we are acutely aware that other systems in the data center are also critical to operations. For example, we have an out-of-box integration with ServiceNow. So, if an alarm is triggered by our solution, we can send it to ServiceNow, which generates a ticket. Once the work’s complete and the ticket’s closed, ServiceNow will send it back so we can resolve the issue in our system. Our software is extremely flexible from that perspective. With third-party integration, we can associate multiple inputs to create even more value for our customers.”
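
As a rough sketch of that kind of round trip, the snippet below raises an incident through ServiceNow's Table API with placeholder credentials and instance URL; how the closure flows back (webhook, polling) depends on the deployment, and this is not Eaton's out-of-box integration.

```python
# A minimal sketch of alarm-to-ticket integration with ServiceNow's Table API.
import requests

INSTANCE = "https://example.service-now.com"  # placeholder instance
AUTH = ("integration_user", "secret")         # placeholder credentials

def open_incident(alarm_id: str, description: str) -> str:
    """Create an incident for an alarm and return its sys_id."""
    resp = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        json={
            "short_description": description,
            "correlation_id": alarm_id,  # lets the closure be matched back to the alarm
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]

def on_incident_closed(alarm_id: str) -> None:
    """Called when ServiceNow reports the ticket closed; resolve the local alarm."""
    print(f"resolving alarm {alarm_id} in the monitoring platform")
```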

Knowledge is power. We know it. Eaton knows it. But so do hackers. So it’s vitally important to ensure that your software solution is as secure as it can be, says Jackson. 

“Each of our customers has a corporate Active Directory list. When they remove a staff member, that user is automatically removed from our system. Also, every user action is tracked in our software, and it's auditable. If you uncover a bad actor, you can trace every step they took through the system to identify the culprit, gather evidence, and mitigate the impact. 

“Eaton is also hypersensitive to cybersecurity, to the point that sometimes it costs us speed to market, but only because we want to ensure that our products are as secure and capable as possible.”

As data center operators navigate an industry that has changed beyond recognition in an almost incomprehensibly short amount of time, with infrastructure spread across not just facilities, but entire continents, the value of a complete software solution will become ever more important. As intimidating as it can sound, it’s reassuring to know that a lot of the challenges we face can be mitigated with a solid dose of intelligence. 

Read Eaton’s white paper to learn more about how their new digital platform can provide you with an operational and productivity edge.