Power Usage Effectiveness (PUE) does not define the energy efficiency of the data center because it does not account for energy losses within the power and cooling train internal to the IT equipment. A combination of facility (MEP) efficiency and IT efficiency (compute, network and storage) is required (Figure 1). Conceptually, overall data center energy efficiency is the product of the facility energy efficiency and the sum of the individual energy efficiencies of compute, network and storage, each weighted according to its share of overall IT power.

Previous discussion of IT energy performance has centered, for compute, on the number of instructions, transactions or clock cycles per unit of power; for network, on bit rate or packet rate per unit of power; and for storage, on stored bytes or I/O rate per unit of power, with numerous other permutations for each category. We propose that, to be consistent with the PUE definition, overall data center energy efficiency should consider only energy efficiency, i.e. losses in the power train prior to the data state change, rather than measures of useful work per unit of power.

Here we consider the IT power train, i.e. power transmission or conversion prior to delivery to an IT component that performs a data state change. The majority of energy inefficiencies in IT occur in switched mode power supplies, internal cooling fans and DC-DC converter stages (Figure 2).

PUE can be skewed considerably when power or cooling functions that are usually part of the facility infrastructure are located within the IT infrastructure. For example, in an enclosed aisle containment system the computer room air handling units may be replaced with overhead cooling coils, with the IT equipment's internal fans providing all of the energy for air movement within the aisles and through the equipment racks; energy that used to sit on the facility side of the PUE calculation then becomes part of the IT load. A similar issue arises when the conventional UPS within the facility is replaced by a battery and charger integral to the server. These and other examples demonstrate that the delineation between where the facility ends and the IT infrastructure begins is becoming unclear.

The general form of the equation for calculating the energy efficiency of a particular device is given by Equation 1.
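
Based on the definitions in the following paragraph, Equation 1 plausibly takes the form:

    ηdevice = (1/M) × Σ η(k,p,r)    (Equation 1, summed over the M discrete samples)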

Where the mean energy efficiency of the IT device (ηdevice) is the mean of the discrete energy efficiencies (η(k,p,r)) of the device taken over M samples, based on the variables (k, p and r) that drive variation in energy efficiency for that type of device. The number and type of variables depend on the type of IT device. Computer simulation can then be used to interpolate between the known energy efficiencies at discrete data points to provide energy efficiency across the entire operating range of the IT device.

For the IT sub-system the energy efficiency is given by Equation 2.
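
On the same basis, Equation 2 plausibly takes the form:

    ηsubsys = (1/N) × Σ ηdevice    (Equation 2, summed over the N devices in the sub-system)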

Where the mean efficiency of the sub-system (ηsubsys) is the mean of the energy efficiencies of the individual IT devices in the sub-system (ηdevice), taken over the N devices.

Switched mode power supplies

Contemporary high efficiency servers, network switches, routers, and storage devices have internal switched mode power supplies that typically operate at their highest energy efficiency at around half of their rated electrical load. Power supplies are designed this way by hardware manufacturers because this is the point at which the equipment is expected to operate most of the time.

Compute energy efficiency

Compute energy efficiency has increased substantially in recent years primarily due to virtualization of servers and the introduction of much improved switched mode power supply efficiencies. Other significant improvements in compute energy efficiency include the adoption of server energy management tools, power capping and varying CPU clock speed.

For compute devices operating in normal energy mode, the quiescent power at 0% CPU utilization is in excess of 50% of the device's full load power. This is a significant challenge for manufacturers because it represents a substantial unproductive energy overhead in compute environments. One response is to utilize sleep states, e.g. Intel C and D sleep states; however, there is a trade-off in the time required to wake from these states.

Server virtualization has provided a significant increase in overall data center energy efficiency. It is reasonable therefore that any new measure for server energy efficiency should include the impact of virtualization.

For enterprise-class servers, CPU (and GPU) utilization has become less dominant from an energy standpoint, primarily due to multi-core processors. The additional processing capacity has brought both memory and I/O capacity into play as factors driving energy consumption and efficiency, in conjunction with CPU/GPU utilization. For compute, therefore, the variables that drive energy efficiency are CPU utilization, memory configuration and I/O configuration.

The required data often already exists in the form of energy calculators that show the relative power consumption from 0-100% utilization for most configurations. Examples include: IBM Systems Energy Estimator, HP Power Advisor, Cisco Power Calculator, Oracle Calculator, Dell ESSA and Fujitsu System Architect.
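
As a sketch of how such calculator output could feed the methodology described above, power figures published at discrete utilization points can be linearly interpolated to estimate consumption at intermediate operating points. The values and function below are illustrative only and are not taken from any vendor tool:

    # Hypothetical power figures for a single server configuration, of the kind a
    # vendor energy calculator might publish at discrete CPU utilization points.
    # Utilization (%) -> input power (W); the numbers are illustrative only.
    power_at_utilization = {0: 210, 25: 270, 50: 330, 75: 390, 100: 450}

    def interpolate_power(util, samples):
        """Linearly interpolate input power at an arbitrary utilization level."""
        points = sorted(samples.items())
        if util <= points[0][0]:
            return points[0][1]
        if util >= points[-1][0]:
            return points[-1][1]
        for (u0, p0), (u1, p1) in zip(points, points[1:]):
            if u0 <= util <= u1:
                return p0 + (p1 - p0) * (util - u0) / (u1 - u0)

    print(interpolate_power(60, power_at_utilization))  # 354.0 W for this data set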

Network energy efficiency

Network equipment does not always exhibit an obvious correlation between network activity and electrical load:

  • Routers and switches may have multiple processors to manage traffic, not just a central processor. There may be multiple line cards, a management processor, and traffic management components.
  • If the equipment is providing Power over Ethernet (PoE), there are power losses in the Ethernet cabling serving downstream devices and in the downstream devices themselves.
  • Energy efficiency initiatives may switch power distribution across the network switch stack, allowing switch devices to share power and improving reliability through a common distributed power system.
  • Each type of node interconnect methodology, i.e., fiber versus copper, 1 Gb/s versus 10 Gb/s, has a unique coefficient of efficiency.

Network variables that drive energy efficiency are primarily packet size, packet rate and interface type. Irrespective of the type and complexity of the network, if the network node power efficiencies are known at discrete intervals for each packet size, packet rate and interface type, then interpolation can be used to determine node power efficiency at any operating condition.
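
For illustration, the sketch below applies such an interpolation for a single interface type; the packet sizes, packet rates and efficiency values are hypothetical and serve only to demonstrate the technique:

    # Hypothetical efficiency samples for one interface type, measured at discrete
    # packet sizes (bytes) and packet rates (kpps). Values are illustrative only.
    packet_sizes = [64, 512, 1500]          # bytes
    packet_rates = [100, 500, 1000]         # thousands of packets per second
    efficiency = [                          # efficiency[i][j] at sizes[i], rates[j]
        [0.55, 0.63, 0.68],
        [0.60, 0.70, 0.76],
        [0.62, 0.73, 0.80],
    ]

    def bilinear(size, rate):
        """Bilinear interpolation of node efficiency at an arbitrary operating point."""
        def bracket(axis, value):
            for i in range(len(axis) - 1):
                if axis[i] <= value <= axis[i + 1]:
                    return i, (value - axis[i]) / (axis[i + 1] - axis[i])
            raise ValueError("operating point outside measured range")

        i, t = bracket(packet_sizes, size)
        j, u = bracket(packet_rates, rate)
        top = efficiency[i][j] * (1 - u) + efficiency[i][j + 1] * u
        bot = efficiency[i + 1][j] * (1 - u) + efficiency[i + 1][j + 1] * u
        return top * (1 - t) + bot * t

    print(round(bilinear(1000, 750), 3))    # ~0.747 for this data set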

Storage energy efficiency

Energy consumption in storage systems is a function of the capacity of the storage device, number of disk drives and I/O rate.

Storage manufacturers have introduced numerous measures to reduce energy consumption that frequently also improve resource utilization, e.g. storage virtualization, auto-tiering, lower power disks and thin provisioning. It is significant to note, however, that these measures primarily address the efficiency of the IT data state change itself, and are thus less relevant to the energy efficiency analysis.

Other measures introduced to drive energy efficiency, e.g. high efficiency switched mode power supplies and high efficiency disk actuators, do have an impact on the overall energy efficiency prior to the data state change.

Data center energy efficiency

With the energy efficiency determined for compute (ηc), network (ηn) and storage (ηs), the energy efficiency of the IT physical infrastructure (ηIT) can be calculated based upon the respective power allocation.

Consider the power and cooling load coefficients across the entire data center: kc for compute, kn for network and ks for storage, such that kc + kn + ks = 1. The IT energy efficiency is then given by Equation 3.
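
Based on this description, Equation 3 plausibly takes the form of a load-weighted sum:

    ηIT = kc × ηc + kn × ηn + ks × ηs    (Equation 3)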

And the overall energy efficiency of the data center (ηDC) is given by Equation 4.
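
Consistent with the opening description of overall efficiency as the product of facility efficiency and weighted IT efficiency, Equation 4 plausibly takes the form:

    ηDC = ηfacility × ηIT    (Equation 4)

where ηfacility is the energy efficiency of the facility (MEP) infrastructure.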

Next steps

Variables that drive energy efficiency may change over time. A recent example is the hitherto dominant role of CPU utilization in enterprise servers, which has diminished with the advent of multi-core processing; this in turn has increased the significance of memory and I/O with regard to server energy efficiency.

The methodology of interpolating energy efficiency data sets measured at discrete operating conditions (provided by the IT hardware OEM community, should they be willing), combined with data extraction and analysis (provided by the management software community, again should they be willing), can give data center owners and operators knowledge of the overall data center energy efficiency and should benefit end users and operators alike. The key question, and potential obstacle, is whether IT hardware manufacturers are willing to provide the energy efficiency data sets. Computer simulation of the energy efficiency data and data set extraction should be relatively straightforward thereafter.
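
As a brief sketch of the final roll-up described by Equations 3 and 4, the fragment below combines sub-system efficiencies and load coefficients into an overall figure; every value, including the facility efficiency, is hypothetical and illustrative only:

    # Sketch of the overall roll-up (Equations 3 and 4) using hypothetical values.
    # In practice the sub-system efficiencies would come from interpolating the
    # OEM-supplied data sets at the monitored operating conditions.

    eta_c, eta_n, eta_s = 0.82, 0.74, 0.78   # compute, network, storage efficiency
    k_c, k_n, k_s = 0.70, 0.10, 0.20         # shares of the IT power and cooling load
    assert abs(k_c + k_n + k_s - 1.0) < 1e-9

    eta_it = k_c * eta_c + k_n * eta_n + k_s * eta_s   # Equation 3

    eta_facility = 0.85                      # hypothetical facility (MEP) efficiency
    eta_dc = eta_facility * eta_it           # Equation 4

    print(f"IT efficiency = {eta_it:.3f}, data center efficiency = {eta_dc:.3f}")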

About the authors: Ed Ansett and James Dow are principals at i3 Solutions Group.

This article originally appeared in DatacenterDynamics FOCUS magazine.

Disclaimer: views expressed in the article above are those of its authors and do not necessarily reflect the views of DatacenterDynamics.