Enterprise adoption of Infrastructure as a Service (IaaS) is accelerating rapidly in a crowded and highly competitive supply-side market.

IaaS and its subset, ‘Bare Metal as a Service’, are all about providing flexibility. But is there a missing piece in what can and should be delivered?

Nothing runs without power. As firms grow used to ‘as a service’ models, one question for suppliers is how long customers will be willing to pay for power they don’t use. For data center space suppliers trying to maintain margin, the question becomes: can we continue to spend capex and opex on static power infrastructure and operations in an era of flexibility and customer demand?

‘As a service’ defined

If we borrow some definitions from IBM we see that “IaaS is a form of cloud computing that delivers fundamental compute, network, and storage resources to consumers on-demand, over the internet, and on a pay-as-you-go basis. IaaS enables end users to scale and shrink resources on an as-needed basis…IaaS providers manage large data centers, typically around the world, that contain the physical machines required to power the various layers of abstraction...In most IaaS models, end users do not interact directly with the physical infrastructure, but it is provided as a service to them.”

Whichever service is chosen, all forms of IaaS abstract away the physical infrastructure while giving users complete flexibility in accessing IT capacity. Cloud flexibility is a fundamental benefit that users now expect.

They want to be charged for what they use, and they expect the supplier to seamlessly accommodate scaling up and down as required.

A reasonable question to put to colocation and cloud providers is: “If IaaS is pay-per-use, then why is the data center power price fixed whether it is used or not?”

Can power provision flex like a cloud?

Provisioning power ‘as a service’, in a cloud-like way, is something that could, and some would argue should, be considered at the design and construction phases of a data center.

Running a data center in a modular fashion is nothing new. Designing and operating the modular power infrastructure in a genuinely flexible fashion is new, and it would address many of the flexibility issues now surfacing in modern data center operations.

Data centers are, and will continue to be, designed as physical buildings broken down into halls. So, for example, a 12MW data center is typically split into 6 x 2MW data halls.

Traditionally, if a contract for all halls exists, infrastructure is deployed by designing, sourcing, paying for and rolling in the full 12MW of genset, switchgear, UPS, PDU and ancillary equipment up front, and then waiting for the demand to arrive.

As all data centers start with a zero-load requirement, rolling in traditional modules up front is wastefully expensive and inefficient.

A modular approach to power provision allows for a staggered infrastructure rollout that responds to end-user load rather than to fully built-out building design criteria.

A worked example

A typical use case, where power is responsive to end-user needs, requires the ability and flexibility to serve power across multiple halls. For example, 2MW of infrastructure in Hall 1 can power Halls 1-3 at partial load, while 2MW in Hall 6 does the same for Halls 4-6. As demand increases, additional modules can be deployed in Hall 3, then Hall 4, and so on until all halls are fully powered.

In this 12MW scenario, once three or more halls are in use and utilization across the occupied halls reaches around 50 percent, the equipment cost deferred by using fewer modules reaches 35 percent of the overall power cost. Even as utilization rises above 50 percent towards 67 percent, deferred cost savings remain high, at around 20 percent of total power cost.
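As a rough illustration of the module arithmetic behind this example, the short Python sketch below compares a flexible rollout with a day-one full build. It is a simplified model only: the 2MW module size and six-hall layout follow the example above, but the hall loads are hypothetical, capex is assumed to scale with the number of modules rolled in, and spare module capacity is assumed to be freely shareable between halls. It is not intended to reproduce the 35 and 20 percent figures quoted, which come from the worked example itself.

```python
import math

MODULE_MW = 2.0   # capacity of one power module (per the example above)
HALLS = 6         # a 12MW facility split into 6 x 2MW data halls

def modules_needed(hall_loads_mw, module_mw=MODULE_MW):
    """Smallest number of modules whose combined capacity covers the total
    load, assuming spare module capacity can be routed to any hall."""
    return math.ceil(sum(hall_loads_mw) / module_mw)

def deferred_fraction(hall_loads_mw, halls=HALLS, module_mw=MODULE_MW):
    """Fraction of module capex deferred versus a day-one full build,
    assuming capex scales with the number of modules rolled in."""
    return (halls - modules_needed(hall_loads_mw, module_mw)) / halls

# Early occupancy: Halls 1-3 partially loaded, Halls 4-6 lightly loaded.
early = [0.6, 0.6, 0.6, 0.5, 0.5, 0.5]   # roughly 3.3MW of IT load in total
print(f"{modules_needed(early)} of {HALLS} modules cover {sum(early):.1f}MW; "
      f"{deferred_fraction(early):.0%} of module capex deferred")

# All six halls occupied at around 50% of their 2MW design load.
later = [1.0] * HALLS                    # 6MW of IT load in total
print(f"{modules_needed(later)} of {HALLS} modules cover {sum(later):.1f}MW; "
      f"{deferred_fraction(later):.0%} of module capex deferred")
```

In this simplified model, two modules cover the early occupancy phase and three cover all six halls at 50 percent utilization; as utilization climbs towards design load, the deployed count converges on the traditional full build.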

Any initial premium invested in new module control and management is rapidly offset by deferred cost savings as the gap widens between the number of modules actually required and the number a traditional full build would install.

With all halls occupied, once utilization rises above 40 percent, end-state savings of 35 percent of overall power costs are achieved.

A modular, flexible approach to power provision means that as data halls become occupied, fill with compute, storage and networking, and ramp up their power draw, the ability to access existing redundant power can bring substantial savings and give customers flexibility by matching infrastructure deployment and use more closely to actual demand.

In the real world, not everything is linear. No data center operator, whether owner-occupied, commercial, or cloud, opens one hall, fills it to 70 percent of available capacity and then moves on to the next one. It is simply not feasible.

In fact, for a variety of reasons familiar to all data center operators, occupancy levels in different data halls will vary. Yet traditional design and equipment dictate that each hall can only access the power infrastructure with which it is associated.

New developments in power systems management allow predetermined redundancy levels to be provisioned to IT loads, with the power derived from unused capacity. And for availability, IT load prioritization can serve different hierarchies of application need in the event of an outage.
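One way to picture this kind of IT load prioritization is a simple shedding routine: when the shared power pool shrinks during an outage, lower-priority loads are dropped first so the most critical applications stay within the remaining envelope. The Python sketch below is a hypothetical illustration of the concept only; the load names, priority tiers and capacity figures are invented for the example and do not describe any particular power management product.

```python
from dataclasses import dataclass

@dataclass
class ITLoad:
    name: str
    demand_mw: float
    priority: int  # 1 = most critical; higher numbers are shed first

def loads_to_keep(loads, available_mw):
    """Keep the most critical loads that fit within the remaining capacity,
    shedding lower-priority loads first."""
    kept, used_mw = [], 0.0
    for load in sorted(loads, key=lambda l: l.priority):
        if used_mw + load.demand_mw <= available_mw:
            kept.append(load)
            used_mw += load.demand_mw
    return kept

loads = [
    ITLoad("payments platform", 0.8, priority=1),
    ITLoad("customer portal",   0.6, priority=2),
    ITLoad("batch analytics",   0.9, priority=3),
    ITLoad("dev/test",          0.5, priority=4),
]

# During an outage the shared pool drops from, say, 4MW to 1.5MW: the two
# most critical loads are kept (1.4MW) and the rest are shed or throttled.
for load in loads_to_keep(loads, available_mw=1.5):
    print(f"keep {load.name}: {load.demand_mw}MW at priority {load.priority}")
```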

Making power elastic

CIOs are desperate to tap into the flexibility offered by the cloud. As their experience grows, they will want flexibility in all things. With power measurement and cost control growing in importance, they are less willing to accept that IaaS can only rest on inelastic power infrastructure.

Customers are more likely to engage with a provider that can answer the reasonable question: “How can I get the transparency needed to inform my pay-as-you-go ‘as a service’ business decisions across IaaS, PaaS and SaaS when power provision and costing is fixed and inflexible?”

Data center operators using the latest developments in power systems have the answer.

For more information contact: www.I3.solutions