As data centers continue to consume an ever-greater share of the world’s electricity, CIOs are coming under increasing financial and moral pressure to reduce power consumption in their facilities.

The exponential growth in data volumes, together with the relentless rise in energy prices, requires data center managers to investigate new approaches, technologies and architectures to rein in spiraling energy costs.

In the first installment of our guide to reducing data center energy costs, we will highlight the steps that every operator can take to effect immediate and lasting electricity savings.

Step 1: Consider alternative locations, and delivery and management models
Once you’ve considered what hosting model best suits your business and what applications you want to run inside your own facilities, the next step is to reassess the number of physical data centers that you need to own and operate.

By optimizing the delivery of applications over the network, you can reduce the number of data centers required.

Then there’s the issue of data center location.

Traditionally, businesses built data centers in close proximity to their employees.

A more modern approach is to position data centers in areas where the network performs optimally and to base employees in offices elsewhere.

If you invest in automation and remote management tools, you no longer need people based at your data center facilities.

Another approach is to increase the temperature at which your data center operates.

Many data centers can achieve significant overall operational cost savings by widening the temperature and humidity ranges for equipment.

The latest industry research shows how facilities can run their equipment in hotter, more humid environments with negligible effect on IT reliability and service availability.

Increasing the temperature at which you run your data center by just 5% can translate into cooling savings upwards of 10%.
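As a rough, back-of-the-envelope illustration of that claim (a minimal sketch only; the baseline energy use and electricity price below are hypothetical assumptions, not measured data):

```python
# Illustrative only: estimates annual cooling savings from a modest set-point increase.
# The baseline figures are hypothetical assumptions, not measured data.

annual_cooling_kwh = 2_000_000      # assumed yearly cooling energy use (kWh)
price_per_kwh = 0.12                # assumed electricity price (USD/kWh)
cooling_savings_fraction = 0.10     # ~10% savings cited above for a modest increase

saved_kwh = annual_cooling_kwh * cooling_savings_fraction
saved_usd = saved_kwh * price_per_kwh

print(f"Estimated cooling energy saved: {saved_kwh:,.0f} kWh/year")
print(f"Estimated cost saved: ${saved_usd:,.0f}/year")
```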

Step 2: Virtualize and consolidate
Virtualization and consolidation are steps in the right direction towards an energy-efficient data center.

Many servers today still use only 5% to 15% of their capacity to service a single application.

With appropriate analysis and consolidation, many of these devices can be combined onto a single physical server that consumes only a fraction of the power of the original machines, cutting costs while creating a more environmentally sustainable data center.
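As an illustrative sketch of the consolidation arithmetic (the utilization levels, server counts and wattages below are hypothetical assumptions used only to show the shape of the calculation):

```python
# Illustrative only: estimates how many lightly used servers could be
# consolidated onto fewer hosts, and the resulting power reduction.
# All figures are hypothetical assumptions for the sake of the example.

servers = 20                 # existing physical servers, one application each
avg_utilization = 0.10       # within the 5-15% range cited above
old_power_w = 400            # assumed draw per legacy server (watts)

target_utilization = 0.70    # assumed safe utilization ceiling on a new host
new_host_power_w = 800       # assumed draw of one consolidated host (watts)

workloads_per_host = int(target_utilization / avg_utilization)
hosts_needed = -(-servers // workloads_per_host)   # ceiling division

old_total_w = servers * old_power_w
new_total_w = hosts_needed * new_host_power_w

print(f"Hosts needed after consolidation: {hosts_needed}")
print(f"Power before: {old_total_w} W, after: {new_total_w} W "
      f"({(1 - new_total_w / old_total_w):.0%} reduction)")
```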

Virtualization and consolidation projects are complex, but the benefits are compelling: improved application availability, stronger business continuity and independence from the underlying hardware and operating systems, among others.

Step 3: Design a best-practice floor plan
The Uptime Institute surveyed 19 data centers and reported that, on average, only 40% of the cold air supplied went directly towards cooling the servers in the room; the rest, along with the power used to produce it, was effectively wasted.

So, whether you’re designing a new data center or upgrading your existing environment, make use of existing best practices in data center floor plan designs.

Examples include:
Hot aisle/cold aisle layout
By implementing a hot aisle/cold aisle layout, equipment is spared from ingesting recirculated hot exhaust air, reducing the risk of outages caused by heat-related device failure.

Also, a common hot aisle gives you the ability to contain areas where heat density is high – such as racks with blade servers – and to deal with the heat in a specific manner. This allows for multiple heat-rejection methods to be in use within one data center.

Distributing power across racks
Distributing power equally across racks minimizes hotspots and the need for sporadic hot-aisle containment.


Ideally, power will be balanced per rack to within a 10-15% variance.
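A minimal sketch of how that balance might be checked, assuming per-rack power readings are available (the rack names, readings and tolerance below are hypothetical sample values, not a prescribed tool):

```python
# Illustrative only: flags racks whose measured draw deviates from the
# room average by more than a chosen tolerance (e.g. the 10-15% band above).
# The readings are hypothetical sample data.

rack_power_kw = {
    "rack-01": 5.2,
    "rack-02": 4.9,
    "rack-03": 6.4,   # likely hotspot candidate
    "rack-04": 5.0,
}
tolerance = 0.15  # 15% allowable variance from the average

average_kw = sum(rack_power_kw.values()) / len(rack_power_kw)

for rack, kw in rack_power_kw.items():
    deviation = (kw - average_kw) / average_kw
    status = " - out of band" if abs(deviation) > tolerance else ""
    print(f"{rack}: {kw:.1f} kW ({deviation:+.0%} vs average){status}")
```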

Often, data center designers place servers performing related functions in the same racks, but the benefit of having these servers in close proximity is counteracted by the heat density this may cause.

The exception to this approach is to isolate dense server configurations in their own area, where those units can be operated at a higher temperature and cooled accordingly.

Minimize or eliminate underfloor cabling
It’s imperative for organizations with static pressure cooling to minimize or eliminate underfloor cabling.

If you can’t avoid it, use conduit, cable trays, and other structured methods for running cabling.

This minimizes obstructions between CRAC units and perforated tiles, improving airflow and overall cooling system efficiency.

Step 4: Redesign the data center network
Technology, architectures, and approaches for data center networks have evolved significantly as organizations and the industry have put more focus on ensuring that the network is the platform for the modern data center.

Networking can contribute significantly to energy savings: the deployment of specialist data center network hardware offers significant benefits over general-purpose network hardware.

For example:
- front-to-back airflow to support hot/cold aisle layouts

- higher-efficiency power supplies that dramatically reduce power consumption per port

- convergence functionality to enable the consolidation of multiple devices into a single appliance, which in turn reduces the number of cable runs and improves airflow through the entire data center
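To make the per-port efficiency point concrete, here is a minimal, illustrative comparison (the port counts and wattages below are hypothetical assumptions, not vendor specifications):

```python
# Illustrative only: compares power per port when several general-purpose
# switches are replaced by one higher-density, higher-efficiency appliance.
# All figures are hypothetical assumptions, not vendor specifications.

general_purpose = {"units": 4, "ports_per_unit": 48, "watts_per_unit": 350}
converged = {"units": 1, "ports_per_unit": 192, "watts_per_unit": 900}

def power_per_port(cfg):
    """Total watts divided by total ports for a switch configuration."""
    total_watts = cfg["units"] * cfg["watts_per_unit"]
    total_ports = cfg["units"] * cfg["ports_per_unit"]
    return total_watts / total_ports

print(f"General-purpose: {power_per_port(general_purpose):.2f} W/port")
print(f"Converged:       {power_per_port(converged):.2f} W/port")
```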

Our next installment of tips will look at some of the specific equipment and technologies that data center operators can implement to drive energy costs down even further.

The opinions expressed in the article above are those of the author and do not reflect those of Datacenter Dynamics, its employees or affiliates.