In a world drowning in data, managing network traffic is already a complex task. But the rise of Edge computing will complicate matters all the more, requiring bigger and smarter networks than ever before.
"There's a lot of content that's being generated outside the walls of the data center," Commscope's hyperscale and cloud solutions architect Alastair Waite explains. "The Internet of Things, smart cities, etc. - that's all generating data in a distributed fashion."
This article appeared in our Future of Data Center Connectivity Supplement.
While there has always been local data creation, the sheer quantity being produced at the network edge is a new - and rapidly growing - phenomenon. "The network is having to adapt to be able to manage these huge pools of data that are now being pushed around the network," Waite says. "Before, we saw data being generated centrally, in large data centers."
This data is not just being created; it also demands a back-and-forth exchange with the network. Workloads like artificial intelligence (AI) or augmented reality (AR) send data off into the network, but also expect a response - and fast.
That's a problem for two reasons. Current bandwidth constraints mean that it is often not technically or financially feasible to send all that data back to a central facility - and even if you do, the round trip may introduce too much latency to be useful.
Here the Edge is presented as a way of killing two birds with one stone. Not only can it process data for a low-latency response, but it can also filter and compress the data that needs to be sent back to the larger data center.
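To illustrate that filtering role, the sketch below (hypothetical sensor data, field names, and threshold - none of these come from the article) keeps only anomalous readings plus a local summary at the edge, so far fewer bytes cross the network to the central data center:

```python
import json
import random

def edge_filter(readings, threshold=50.0):
    """Keep only anomalous readings; summarize the rest locally."""
    anomalies = [r for r in readings if r["value"] > threshold]
    summary = {
        "count": len(readings),
        "mean": sum(r["value"] for r in readings) / len(readings),
    }
    return {"anomalies": anomalies, "summary": summary}

random.seed(0)
# Pretend these are raw readings generated at an edge location
raw = [{"sensor": i, "value": random.uniform(0, 60)} for i in range(1000)]

payload = edge_filter(raw)
raw_bytes = len(json.dumps(raw))
sent_bytes = len(json.dumps(payload))
print(f"raw: {raw_bytes} B, after edge filtering: {sent_bytes} B")
```

The central data center still sees every anomaly and an accurate summary, but the bulk of the raw stream never leaves the edge site.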
"With AR and virtual reality (VR), latency can make a person sick," Tilly Gilbert, a senior consultant at telecoms advisory STL Partners, says. "So that application really is reliant on that low latency under 50 milliseconds or so. And then there are those really high bandwidth use cases where processing at the Edge can make them cheaper or more efficient by filtering information out rather than streaming all raw data to the centralized cloud."
This is still mostly a dream of tomorrow. "Today's networks are not reasonably accessible," Yuval Bachar, the former principal hardware architect of the Microsoft Azure platform, says.
"If you try to send data from point A to point B, which are not on the same carrier, you're going to be exchanged somewhere that can be two miles away, but also could be 1,000 miles away. Your latency is completely unpredictable."
He adds: "So the current network does not give us a predictable latency that we need for the future applications of the Edge. And the current network also cannot handle the very, very large datasets which are being generated at the endpoints. So there will be a complete hard need for processing units which will sit close to the endpoints to reduce that volume of bandwidth that needs to go back to the cloud."
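Bachar's point about predictability concerns tail latency, not averages. A toy simulation (every number here is assumed, purely for illustration) shows how an occasional hand-off through a distant carrier exchange inflates the 99th percentile even when the median looks fine:

```python
import random
import statistics

def simulated_rtt(base_ms, jitter_ms, detour_prob=0.0, detour_ms=80.0):
    """Model a round trip that sometimes routes via a faraway exchange."""
    rtt = base_ms + random.uniform(0, jitter_ms)
    if random.random() < detour_prob:
        rtt += detour_ms  # inter-carrier exchange happens far away
    return rtt

random.seed(1)
# Distant data center, with a 10% chance of a long detour between carriers
central = [simulated_rtt(40, 10, detour_prob=0.1) for _ in range(10_000)]
# Nearby edge site on a short, single-carrier path
edge = [simulated_rtt(5, 2) for _ in range(10_000)]

for name, samples in [("central", central), ("edge", edge)]:
    p50 = statistics.median(samples)
    p99 = statistics.quantiles(samples, n=100)[98]
    print(f"{name}: p50={p50:.1f} ms, p99={p99:.1f} ms")
```

The median central-path latency may look tolerable, but the detoured tail is what makes the experience "completely unpredictable" in Bachar's terms; the short edge path removes the detour entirely.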
We have all heard similar arguments for the Edge for some time, but the shift has been gradual, given the scale of the change and the need for customers who actually make the business proposition viable.
But Bachar is keen to share an actual example of the Edge in action, at another of his former employers: "LinkedIn has a limited global data center footprint," he says. "As a result, some users had a great experience, with extremely fast loads. But in some regions, like parts of Europe and Asia Pacific, the experience was not sufficient."
The problem is that every homepage load is unique, requiring specific processing for every person, every time they visit the page. "It requires touching the data center constantly," he says.
"We decided to actually build an Edge platform. We built a micro data center that we're actually placing in strategic areas, enabling faster response to what the data center can actually provide to the end user. And by that enable a low latency environment, even though the data center is much further away."
He continues: "That's created a dramatic improvement in the experience that the end users had, specifically in Europe." It also, he claims, allowed the company to roll out richer features it would otherwise not have felt comfortable deploying. "But this is an early-stage development."
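A rough way to picture that setup (the fragment names and round-trip times below are invented for illustration, not LinkedIn's actual architecture): cache the shared page shell at the edge site, so only the personalized fragment still travels to the distant origin data center:

```python
import time

ORIGIN_RTT_S = 0.060  # assumed round trip to a distant origin data center
edge_cache = {}       # fragment cache inside the edge site

def fetch_from_origin(fragment):
    time.sleep(ORIGIN_RTT_S)  # simulate the long haul to the origin
    return f"<div>{fragment}</div>"

def render_at_edge(user_id):
    """Serve the shared shell from the edge cache; only the personalized
    fragment still crosses the network to the origin data center."""
    shell = edge_cache.get("shell")
    if shell is None:
        shell = edge_cache["shell"] = fetch_from_origin("shell")
    feed = fetch_from_origin(f"feed:{user_id}")  # unique on every load
    return shell + feed

start = time.perf_counter()
render_at_edge("alice")   # cold: two origin round trips
cold = time.perf_counter() - start

start = time.perf_counter()
render_at_edge("bob")     # warm: shell served locally at the edge
warm = time.perf_counter() - start

print(f"cold load: {cold*1000:.0f} ms, warm load: {warm*1000:.0f} ms")
```

Every page remains personalized, but the warm path pays only one long round trip instead of two - the kind of improvement the micro data centers were built to deliver.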
Bachar is convinced that this Edge case is not just an edge case, but rather a hint of what is to come. That is, he admits, if people can make the numbers work.
"On paper, we understand what needs to be done," he says. "But it's all tied to a business model - if we don't have a way to monetize it, then the big players will not jump in there, and there are a few very large companies in the world that can actually make this investment."
If and when they do make that investment, "whoever is going to take the first step is going to be dominating this market, just like what happened with the cloud," he predicts.
Here, again, the network demands will be crucial in defining the business model. The vast scale of the network overhaul means that cloud providers or other data center companies will not be able to go it alone, and will likely have to partner with network operators, argues Caroline Puygrenier, director of strategy and business development, connectivity, at Digital Realty's Interxion.
"With 5G, Edge, new network architectures, satellite constellations, and so on, we need there to be a greater collaboration between the network operators and the actual cloud service providers," she says. "We all benefit from that implementation of new technology, it's not just one segment of the verticals, that's going to develop or pay for the implementation."
Her company appears to be hoping to cash in on this potential collaboration, investing in AtlasEdge, and installing a former Digital Realty exec as CEO. AtlasEdge is a joint venture between DigitalBridge and telco conglomerate Liberty Global to turn thousands of sites at telco locations into Edge data centers.
There are issues. "Some of the cloud providers are much more interested in getting access to telco networks so they can get access to telco customers, more so than partnering long term," Mark Thiele, CEO of data center procurement company Edgevana, says. "Many of the initial solutions have huge gaps in opportunity - from a cloud provider standpoint, they're too expensive, and they are not autonomous from a centralized network.
"But people are working on it."
When they do solve this challenge, it will have a profound impact on the network of tomorrow, bringing high bandwidth to Edge locations, and offloading processing to those sites.
That doesn't mean that's it for the centralized data center, though.
"As this data is being created at the Edge locations, a lot of it is going to have to come back to somewhere," Commscope's Waite says. "So it's going to be extremely important to make sure that your cloud data center, whether that's in a multi-tenant data center or within your own premises, has the correct level of bandwidth being provisioned."
In conversations with cloud and hyperscale providers, it's "all about 400 gigabits and beyond," he says. "They're asking 'what's next?' because they want to be able to deliver that seamless experience that's really driving the bandwidth at the moment."