Ahead of DCD>Dallas, Craig Pennington, Vice President of Design Engineering at Equinix, sat down for a Q&A with DCD’s Kisandka Moses, who is producing this year’s conference, to discuss what's driving colo capacity.

Q: Equinix are historically a retail-focused colocation giant, but you’ve recently unveiled a hyperscale infrastructure team and strategy. Can we now argue that enterprise capacity is no longer enough to fill data halls?

A: No, I don't think that's true. If we stay within our core retail business, there's more than sufficient demand. However, we are aware that a significant amount of data center requirement is driven by hyperscalers such as Microsoft, Amazon and Facebook. We have long-term relationships with all of those clients, and we don't want to lose them if their requirements in a certain market grow beyond what we can deliver; hence the hyperscale product offering. The hyperscale team is a recognition of the increasing importance of hyperscale players, their very exacting needs, which differ from those of our retail customers, and our desire to continue our relationships with them.

Q: Facebook recently announced they are entering the cryptocurrency market, with the creation of a coin called Libra. Do you think that these cutting-edge applications will drive colocation uptake in the future?

A: Definitely. In terms of cryptocurrency, they are using what is effectively a very process-intensive application. We know that the minting of cryptocurrency uses an awful lot of compute power. We're not necessarily trying to read too much into what they're trying to deliver with their services, but if they want to deliver cryptocurrency and others want to do the same thing, we want to be there to provide the infrastructure in which that can happen. As they broaden and deepen their product offerings to include B2C, I can only see that driving more and more data center requirements.

Q: You’re at the helm of the ‘data center of the future’ initiative. Can you tell us how that initiative came to be, your vision for the project and how it initially got off the ground?

A: Our CEO, Charles Meyers, previously headed up our Strategy, Services and Innovations (SSI) team in his capacity as COO. Within that team, there were a number of strategic areas of investigation we were looking into. Our key focus was identifying what we need to be able to deliver, not for the next generation of data centers, but for the generation after that, and it covered a myriad of different things. Some of them were software related, such as being able to do crypto key management, while others were about broadening our portfolio so that we could offer edge data centers.

With the data center of the future initiative, we focused on being radical, not being held back by convention, and on what we could do to deliver our data centers faster and cheaper, but just as reliably.

The initiative is looking at technologies, instrumentation, operational support processes and applications such as artificial intelligence, in a bid to see how far we can push a data center design and drive the delivery cost down.

We started the initiative just over a year ago and are including our major customers, because we are a very customer-centric organization and want to make sure that we are responding to their needs as much as to our own vision of the future. We are now implementing that in the construction of our DC12 facility (Ashburn, Virginia) to showcase some of the technologies we believe are going to be important for the future.

Q: What do you think has been your most groundbreaking discovery as part of this initiative so far? How much do you attribute this to Equinix's R&D strategy?

A: I would say Equinix are ahead on R&D because we are a big enough organization to have engineers who can look out beyond the next data center. Many of our colocation competitors do not design their own facilities, they go to external design houses. I think we're in a very privileged and unique position, where we can afford to spend money on real R&D to drive some of this innovation.

The technology I'm betting on is liquid cooling. All of the others are, I think, great innovations, but they will come anyway; the use of artificial intelligence for maintenance operations, for example, is something everyone is doing. I am a very strong believer that if we're going to drive efficiencies and reduce the cost of operating, one way to do that is to minimize the overhead of mechanical cooling.

We're all going to generate heat in a smaller footprint, which will drive up heat density. When you try to extract that heat, liquid cooling is the way forward. I remember Kevin Brown at Schneider saying clearly at our Technology Advisory Board that liquid cooling is the cooling mechanism of the future. The need for liquid cooling has expanded beyond the supercomputing environment: mainstream compute is now driving densities well above 20 kilowatts per cabinet, and at that level of density the only sensible cooling approach to consider is liquid cooling.

The purpose of our co-innovation facility, where we're setting up all of these technologies, is to demonstrate to our customers that these technologies are now mainstream, sufficiently robust and reliable. The level of interest across the entire industry, from the server manufacturers, the chip manufacturers, all the way to the data center players has reached a height where I can’t see the adoption of liquid cooling decelerating.

Q: At DCD>Dallas, you’ll present a case study on the journey to building the four-storey facility in Northern Virginia. The four-storey data center is becoming more sought after in highly populated cities where land cost is exorbitant. Are there any markets where you think multi-storey data centers would not work?

A: The balance we always keep front of mind when we're thinking about the form factor and type of data center we're building is the cost of land versus speed of uptake, i.e. our ‘fill rate’ for that facility. If we're in a land-constrained, expensive and highly dynamic market where there are lots of customers, we're going to build tall, because that's the sensible way to get the largest number of cabinets onto the smallest footprint of land.

If your facility is out in the middle of Phoenix and you don't need to support a latency-dependent set of services, then you can take a different view and decide whether it's easier and cheaper to build horizontally rather than vertically, as the cost of land is relatively affordable. If land prices were the same everywhere, it would be cheaper to build horizontally.

The other factor we have to take into account is network density. Our customers want to go where they can get access to the maximum number of carriers. If the land we are occupying is not served by a lot of network providers, and the network providers are not keen to come out and establish a point of presence in your building, that's another dynamic you have to work into your site selection strategy. Our four-storey build in Dallas is next door to the Infomart, so we can leverage all of the network density we own in that building. Additionally, the amount of land we had available more or less dictated that we maximize the cabinet count for the plot we have, and as a result we are building two four-storey buildings on that land.

Q: If you had to choose between affordable land, competitive power rates, unmatched connectivity, or little to no risk of natural disasters, which factor would be most pivotal to Equinix from a site selection perspective?

A: We are not Facebook or Amazon; we are not building data centers for our own consumption, using our own compute. For them, cheap power and cheap land are always going to be driving factors. They will look at our facilities when they need low latency, because they need to ensure that they can get their services to their end customers in the major markets and cities globally.
For us, when we're looking at where we want to build, access to network is really important. We want to make sure that our buildings are incredibly network-dense, because we want to help our customers build their ecosystems and encourage interaction between customers under the roof of our buildings. We also want them to be able to leverage the network density that we have to interface with our other buildings globally and with their solutions, whether those are deployed on their own premises or in other facilities in other countries. In short, network density is incredibly important to us.

Craig will join us at DCD>Dallas on 21-22 October to share his insights into "Everything’s bigger in Texas; how are hyperscale demands impacting campus design and facility readiness?" and "If VA’s two-million dollar acre is near, can multi-storey construction help drive down CAPEX?"