Ahead of DCD>Dallas, Sharif Fotouh, Managing Director of EdgePoint at Compass Datacenters, sat down for a Q&A with DCD's Kisandka Moses, who is producing this year's conference, to discuss placing edge at the center of low-latency workloads.

Q: Edge data centers are clearly becoming big business, with the likes of EdgeMicro and Vapor IO centering their business models on building infrastructure to support low-latency applications. As the Managing Director of EdgePoint at Compass Datacenters, how are you differentiating your business from those types of companies? How do you see EdgePoint being able to monetize the edge?

A: I'll preface by saying that I don't think anybody has a crystal ball and a 100 percent picture of how edge will pan out from a financial and capital deployment perspective. In my opinion, it's still in the nascent stages of being figured out. However, you can see some trends clearly taking shape. The likes of EdgeMicro are beginning to operate as more traditional colocation operators at the edge, while Vapor IO goes a step further with its orchestration software, almost providing a virtualized environment for the edge. Those are two models that will make sense for many customers, but EdgePoint's approach is fundamentally different because Compass' roots are as a wholesale data center provider.

We are looking at edge very agnostically, with a model that helps customers get to the edge in a way that works with their existing data center infrastructure and supports whatever their edge use cases will be. That wholesale data center mindset is unique in the edge market, and it allows us to work flexibly with any type of customer, any edge application, and any scope of program. The idea is to make the edge as frictionless as possible for potential customers, who can range from mobile network operators and fixed network operators to enterprises and a long tail of other potential users.

Q: At the moment the data center market appears to be moving in two different directions. On one hand, we've got data centers being designed to meet the demands of hyperscale use. On the other, we've got data centers being constructed to meet the demands of edge use, or low-latency applications. The middle ground typically belongs to the everyday enterprise, ranging from the Fortune 500 to SMBs. Where do you predict enterprise workloads will live in the future?

A: At Compass, we like to start by thinking about the final application rather than the vertical or the classification of customer. There's a very important reason for that: the application dictates the hardware, the hardware dictates the infrastructure, and all of those things combined dictate geography, revenue and cost modeling.

That means people should start their edge planning process by asking: what applications are going to be needed? That will spark key follow-up questions, such as: what applications are gaining widespread adoption, and how will their demands trickle down into infrastructure? I think as long as we look through an application-centric lens, we'll be well aligned with where various verticals are going.

So what does that mean for specific verticals? Let's take healthcare as an example. Rather than thinking about healthcare as a sector, we should think about the spectrum of applications the industry uses, each with very different requirements at the hardware level and at the infrastructure level. So let's break those up, group the healthcare applications with similar requirements that might exist in industrial manufacturing or banking, and identify which applications are relatively static in terms of utilization growth. Is there potential for something like IoT or an emerging technology to dramatically ramp up the needs of that application? From there we can forecast how the infrastructure will need to evolve and grow, or even contract, based on those projections.
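To make that grouping concrete, here is a minimal sketch of an application-centric inventory. The application names, latency figures, and growth labels are hypothetical illustrations, not Compass data; the point is just that applications cluster by requirement profile rather than by vertical:

```python
from collections import defaultdict

# Hypothetical application inventory; all values are illustrative.
applications = [
    {"name": "medical imaging analysis", "vertical": "healthcare",
     "latency_ms": 10,  "compute": "GPU",     "growth": "IoT-driven"},
    {"name": "visual defect detection",  "vertical": "manufacturing",
     "latency_ms": 10,  "compute": "GPU",     "growth": "IoT-driven"},
    {"name": "records archiving",        "vertical": "healthcare",
     "latency_ms": 500, "compute": "storage", "growth": "static"},
    {"name": "real-time fraud scoring",  "vertical": "banking",
     "latency_ms": 20,  "compute": "GPU",     "growth": "steady"},
]

# Group by requirement profile (hardware class plus latency class) rather
# than by vertical: imaging, defect detection and fraud scoring land in
# the same bucket even though they come from three different industries.
groups = defaultdict(list)
for app in applications:
    latency_class = "low-latency" if app["latency_ms"] <= 50 else "relaxed"
    groups[(app["compute"], latency_class)].append(
        f'{app["name"]} ({app["vertical"]})')

for profile, names in sorted(groups.items()):
    print(profile, "->", names)
```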

When you ask whether the enterprise middle ground is being eroded, I think it's still very hazy. The application lever is very dynamic: in a single announcement, one product can take the world by storm, impact multiple verticals almost overnight, and immediately shift the infrastructure requirements of enterprises across those verticals.

Suppose you suddenly realize that an application everybody is using, let's take facial recognition as an example, requires localized GPUs and FPGAs, is compute-heavy, and needs to be relatively dynamic and low latency. If every store or every facility starts employing facial recognition, you're going to see data centers flock closer and become more local, and you're going to see the infrastructure mimic that requirement.

Q: We know that an edge data center currently requires little to no IT personnel, meaning remote monitoring and automation will become pretty critical to how that facility functions. Do you think the edge facility will at any point become fully autonomous? Are we nearly there?

A: I think everything in the world is going to become fully autonomous, given enough time. Eventually, we will have robots building robots taking care of robots. That's definitely the current trajectory. With that said, it's one of the challenges I faced while I was at Google. As you distribute your infrastructure and go from having 10 megawatts under a single roof to 100 sites with 100 kilowatts each, suddenly you are facing a very different problem set, and looking at the total cost of operations becomes absolutely critical.

To date, as our facilities have grown larger and more consolidated, the human efficiency factor has fallen even further off the radar. The same operations center that was taking care of a two megawatt facility ten years ago is now taking care of a 25 or 50 megawatt facility. Chris Crosby coined the term 'HUE', human utilization efficiency, which refers to human effectiveness, and that curve has naturally plateaued as facilities have become more consolidated. Most people in our industry are not using operational efficiency as a guiding principle for their data center design, software choices, and overall IT program, but they should be, because inefficiency quickly balloons out of control when you're looking at hundreds of sites.

I learned this the hard way. For instance, at an edge site, choosing a unit that drives your PUE, let's say, five percent lower, but requires a truck roll every six weeks for a filter change or some small adjustment will quickly put you upside down on TCO if you don't think hard about this type of operational efficiency at scale.
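To see why, here is a minimal back-of-the-envelope sketch of that trade-off. The site load, energy price, visit cadence, and truck roll cost are purely illustrative assumptions, not figures from EdgePoint or Google:

```python
# Illustrative TCO comparison for a hypothetical 100 kW edge site.
# Every figure below is an assumption for the sake of the arithmetic.

IT_LOAD_KW = 100          # IT load of the edge site
ENERGY_COST = 0.10        # assumed utility rate, $/kWh
HOURS_PER_YEAR = 8760
TRUCK_ROLL_COST = 1500    # assumed $ per dispatch (travel + labor)

def annual_energy_cost(pue: float) -> float:
    """Total facility energy cost implied by a given PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * ENERGY_COST

# Baseline unit: PUE 1.50, four scheduled visits per year.
baseline = annual_energy_cost(1.50) + 4 * TRUCK_ROLL_COST

# "Efficient" unit: PUE 5% lower, but a truck roll every six weeks.
efficient = annual_energy_cost(1.50 * 0.95) + (52 / 6) * TRUCK_ROLL_COST

print(f"Baseline:  ${baseline:,.0f}/year")   # ~$137,400
print(f"Efficient: ${efficient:,.0f}/year")  # ~$137,830

# Under these assumptions the 5% PUE gain saves about $6,570/year in
# energy but adds about $7,000/year in dispatches: a net loss per site,
# multiplied across hundreds of sites.
```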

HUE is factored into a lot of the design decisions we've made at EdgePoint. A lot of people are designing without redundancy, relying on a mesh of nearby sites and the ability to fail capacity over between them. In my opinion, you're still going to have to dispatch somebody immediately when you have a failure, and that type of program doesn't scale; you need to have the resiliency built in. With redundancy, infrastructure failures can be handled on a routine round of site visits rather than becoming an emergency fire drill, and that's the only way you get from having five or ten sites to having 500 or 1,000 sites.

When I was at Google, I started with a bare-bones facility design, with saving on infrastructure cost as the primary driver, and over the course of five years I got burned. When you look at the design we have built today, everything is redundant. Part of the decision-making behind this is the need to drive uptime, but really the major lever here is to minimize emergency maintenance and truck rolls.

Q: Speaking of design, how important do you think liquid cooling will become to the evolution of edge facility design? Is widespread adoption at the facility level inevitable at this stage?

A: Interestingly, liquid cooling has been a nerdy personal interest of mine since the early days, when PC hardware enthusiasts were dropping their boxes into mineral oil to go fully submerged. It's come a long way since then. Now we're using discrete liquid cooling on specific components, which is really sophisticated and is generating interesting results.

I absolutely think there's opportunity in the edge space to improve on our mechanical design and cooling solutions. There are still facilities which rely on rudimentary wall-mounted packaged units plagued with performance and reliability concerns inherent to their design, regardless of the manufacturer. That simply isn't good enough for edge deployments, especially when you consider that so many of these are envisioned as "lights off" sites that need to stay up and running without people on hand ready to fix something the moment it stops working.

That's why our team went with a custom industrial cooling unit that we developed for our EdgePoint shelter. It doesn't use liquid cooling, but I can see that kind of technology becoming viable in the future, either as submerged liquid cooling or as discrete component-level liquid cooling. Neither is quite mature enough for edge data centers, in my opinion.

There are some important questions organizations will need to ask themselves when the timing is right to start considering liquid cooling. For example, when you look at liquid cooling for discrete components: will your facility be able to supply chilled water directly to the rack level? What work is needed, and does it justify the cost incurred? Are you really going to save on your total airflow by deploying that solution?

And there are important questions regarding submerged liquid cooling as well. There is some really cool technology coming out in that space that has very clear advantages, but your organization will have to address a whole host of operational challenges if you go down this path. For example, you will need to analyze how much work it takes to swap out a discrete component on a storage shelf, given that the process for any equipment maintenance or change includes letting the unit drip dry for an hour after pulling it from the liquid. I think it really takes a critical mass for the market to attack these challenges and for end users to say, "okay, we're not only going to adopt this, not only test this, we're going to standardize on it," but we're just not there yet in terms of maturity.

Q: What are your thoughts on the role of the edge data center in a post-carbon world? Is sustainability a key metric for EdgePoint, or a set of aims which may become more prominent as the company develops?

A: My answer may be fairly polarizing, especially for our friends out in San Francisco, but I think sustainability exists in a duality where it is absolutely and unarguably critical and a primary concern in every industry. If you're not looking at sustainability, you're effectively mortgaging your industry's future for profits today.

Now, the flip side of that coin is that sustainability has to be justified by long-term economics. That provides the business case that makes it a fundamental operational commitment for businesses rather than something that gets abandoned down the road.

I support solutions which might involve an upfront cost increase but that prove themselves over the lifespan of the unit, typically by providing lower TCO or other benefits. However, not every technology has an ROI that justifies its adoption. Some will require the organization to go out of its way economically to pursue that solution. I do support the early adopters and the people promoting energy smart technologies, but it doesn't always make business sense. The reality is, if you're going to be the one person that's going to adopt an upside-down sustainable solution, you're going to have a dozen competitors who are going to opt for the most economically feasible solution, and they'll swallow you.

With sustainability, you must look at it through the lens of total economic feasibility: through a macro lens with a timeline inclusive of hundreds of years, as opposed to one where you are simply maximizing your profits over the next two to three years. As far as edge data centers go, there are some really interesting opportunities in distributing infrastructure and marrying sustainable technologies to it. The edge is very well suited to ground-up grid and microgrid solutions, because lots of small power generation packages provide much better overall reliability across your footprint than betting the farm on one gigantic fuel cell system for a hyperscale-size facility.
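As a rough illustration of that reliability argument, here is a minimal sketch with assumed numbers; the one percent outage probability and the site counts are hypothetical, not figures from the interview:

```python
from math import comb

P_DOWN = 0.01   # assumed probability that any given generation unit is down
N_SITES = 100   # hypothetical footprint: 100 edge sites at 100 kW (10 MW total)

def prob_loss_at_least(frac: float) -> float:
    """P(at least `frac` of the distributed footprint is offline at once)."""
    k_min = int(frac * N_SITES)
    return sum(comb(N_SITES, k) * P_DOWN**k * (1 - P_DOWN)**(N_SITES - k)
               for k in range(k_min, N_SITES + 1))

# One giant system serving a single 10 MW facility loses ALL capacity
# with probability P_DOWN. With 100 independent small packages, even a
# 5% simultaneous loss is several times rarer, and a total loss is
# astronomically unlikely.
print(f"P(lose 100%, centralized) = {P_DOWN:.4f}")                    # 0.0100
print(f"P(lose >=5%, distributed) = {prob_loss_at_least(0.05):.4f}")  # ~0.0034
```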

I have looked at fuel cell and geothermal solutions for cooling, but unfortunately there is a chicken-and-egg problem: the solutions haven't reached a maturity point that fits our needs for reliability or for rapid deployment at scale, and that maturity is blocked by the fact that nobody is adopting them at scale.

On the topic of energy procurement, while a couple of states (California, for example) have very advanced energy purchase programs, a lot of states still haven't even scratched the surface of those programs. You have to ask yourself: am I willing to put the brakes on my overall operation to fight the fight for energy procurement on a national level? Or do I prioritize my customers and their current requirements while hoping that one day those solutions will catch up and be available?

The key, again, is knowing that your goal is to build a business case for as many of these sustainability-focused choices as possible. It's that level of rigor that will guide smart choices balancing technical needs, business realities and sustainability goals.

Sharif will join us at DCD>Dallas on 21-22 October to share his insights in the session "Do milliseconds still matter & does the edge require core sites or small cells?"