The industry saw some extremely interesting developments throughout 2018. One trend took center stage: the vast majority of compute and storage continued to be funneled into the largest hyperscale and centralized data centers.

We’ve seen the Internet giants make a targeted move to occupy colocation facilities as tenants, using these large facilities to deploy critical infrastructure and applications closer to customers. The move is driven by insatiable demand for cloud computing and local compute, together with the need to quickly reduce latency and Internet costs. The result is the emergence of ‘regional’ edge computing – something that we describe as localized versions of ‘cloud stacks’.

But this change won't stop at the regional edge. As the trend continues to gain popularity, it’s likely that we will begin to see more of these ‘cloud stacks’ deployed in the most unlikely of places. After much deliberation on its definition, we believe 2019 is the year that we will really see the edge playing a dominant role, spilling into the Channel and creeping ever closer to users in more commercial and retail applications.

Hyperscalers need faster deployments

Given everything that’s happened in 2018, it’s clear that the demand for cloud computing will neither subside, nor slow down. The next 12 months will see it accelerate further, meaning the Internet giants will continue to build greater levels of compute capacity in the form of hyperscale data centers.

The market demand will mean these giants also need to build their facilities increasingly quickly. In some cases, 10MW to 100MW projects may even need to be designed, built and become operational in less than twelve months.

One key to accomplishing such aggressive timeframes is the use of prefabricated, modular power skids, which combine UPS, switchgear and management software in one predictable, factory-built and pre-tested package. A good example of prefabricated infrastructure in today’s data centers is Green Mountain’s recent choice to add 35MW of capacity to its Stavanger and Telemark sites.

Since the lead time for this type of power equipment can, in some regions, run to 12 months, having a solution built and ready to deploy eliminates delays on the critical path of the design and construction phase. In this case it has enabled Green Mountain to complete the first element of its staged project at the Rjukan site by April 1st, 2019.

One might also consider that within the data halls of other colocation and hyperscale providers, compute capacity will also become more modular, allowing the user to simply roll new racks and IT infrastructure into place. A solution such as this will need to be in some respects vendor neutral, allowing racks and IT to be quickly deployed, thereby removing the complexity and any accompanying timing challenges for the user.

IT and telco data centers will continue to collide

The discussion around 5G has continued to move forward, but in order for it to deliver on the promise of sub-one-millisecond latency, it will need a distributed cloud computing environment that is scalable, resilient and fault-tolerant. This distributed architecture will become virtualized in a new way – namely cloud-based radio access networks (cRAN) – that move processing from base stations at cell sites to a group of virtualized servers running in an edge data center. In that respect, we believe significant build-outs will need to occur on a global scale so that metro core clouds become available throughout 2019 and thereafter.

These facilities could be classed as ‘regional data centers’, ranging from 500kW to 2MW in size. They will combine telco functionality (data routing and flow management) with IT functionality (data caching, processing and delivery).

While they will enable vast performance improvements, it’s unlikely that they alone will deliver on the promise of sub-one-millisecond latency, simply because of where they sit. With increasing urbanization, it’s not easy to find space for new (and large) data centers within today’s cities. It’s more likely that the world will begin to see sub-one-millisecond latency when edge core cloud deployment happens in 2021 and after.

This is where localized micro data centers will provide the vehicle for ultra-low latency, delivering high levels of connectivity and availability for both 5G providers and their customers.
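A back-of-the-envelope calculation shows why physical proximity matters so much here. Light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200 km per millisecond, so propagation delay alone sets a hard floor on round-trip latency. The distances below are illustrative assumptions, not figures from the article:

```python
# Propagation-only latency floor over optical fiber.
# Assumes signal speed of ~200,000 km/s (about 2/3 of c in vacuum),
# i.e. ~5 microseconds per kilometer, one way.

FIBER_SPEED_KM_PER_MS = 200.0


def round_trip_latency_ms(distance_km: float) -> float:
    """Round-trip propagation time in milliseconds. Ignores switching,
    queuing and processing delays, which only add to this floor."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS


# A regional data center ~150 km away (hypothetical distance) blows
# a 1 ms budget on propagation alone:
print(round_trip_latency_ms(150))  # 1.5 ms

# A micro data center ~20 km away (hypothetical) leaves headroom
# for radio and processing delays:
print(round_trip_latency_ms(20))   # 0.2 ms
```

Since real deployments also incur radio-interface, switching and processing delays on top of this floor, the usable radius for a sub-one-millisecond service is even tighter than the raw numbers suggest.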

AI and liquid cooling


As AI continues to gain prominence, springing from research labs into today’s business and consumer applications, it brings massive processing demands that are placed on data centers worldwide.

AI applications are often so compute-heavy that IT hardware architects have begun to use GPUs for core or supplemental processing. The heat profile for GPU-based servers can be double that of more traditional servers, with a TDP (thermal design power) of 300W versus 150W, which is one of the many drivers behind the renaissance of liquid cooling.
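The rack-level impact of that doubling is easy to quantify, since essentially all electrical power drawn by IT equipment ends up as heat the cooling system must reject. The per-server TDP figures below come from the text; the server count per rack is an assumption for illustration:

```python
# Illustrative rack heat load using the article's per-server TDP figures.
# SERVERS_PER_RACK is a hypothetical density chosen for the example.

TRADITIONAL_TDP_W = 150   # traditional server, per the text
GPU_SERVER_TDP_W = 300    # GPU-based server, per the text
SERVERS_PER_RACK = 20     # assumed rack density


def rack_heat_kw(tdp_w: float, servers: int = SERVERS_PER_RACK) -> float:
    """Worst-case heat a rack must reject, in kW (electrical power in
    equals heat out for IT equipment)."""
    return tdp_w * servers / 1000


print(rack_heat_kw(TRADITIONAL_TDP_W))  # 3.0 kW per rack
print(rack_heat_kw(GPU_SERVER_TDP_W))   # 6.0 kW per rack
```

Doubling per-server TDP doubles the rack's heat load, and at high densities that load can exceed what air cooling handles comfortably, which is precisely the opening for liquid cooling.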

Liquid cooling has of course been used within high performance computing (HPC) applications for some time, but AI as a core application is now placing similarly intensive demands on mainstream deployments, which therefore need a more advanced, efficient and reliable cooling mechanism. Liquid cooling is one innovative way to meet that need as AI continues to gain momentum.


Cloud based data center management

DCIM (data center infrastructure management) was originally deployed as an on-premises software system, designed to gather and monitor information from infrastructure solutions in a single data center.

Newer management applications are instead deployed in the cloud, enabling the user to collect larger volumes of data from a broader range of IoT-enabled products. What’s more, the same software can be used across large data centers and smaller localized ones alike, deployed in thousands of geographically dispersed locations.

This new software, described by Schneider Electric as data center management as a service, uses big data analytics that enable the user to make more informed, data-driven decisions, mitigating unplanned events or downtime far more quickly than traditional DCIM solutions can. Being cloud-based, the software leverages pools of data, or “data lakes”, which store the collected information for future trend analysis, helping to plan operations at a more strategic level.
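To make the analytics idea concrete, here is a minimal sketch of the kind of fleet-wide trend check a cloud-based service can run over pooled telemetry. This is not Schneider Electric's API; the function name, the sensor values and the threshold are all hypothetical, chosen only to illustrate flagging an outlier site from aggregated data:

```python
# Hypothetical fleet-wide anomaly check over pooled telemetry.
# A z-score test against the fleet mean is a deliberately crude stand-in
# for the big data analytics a real DMaaS platform would apply.
from statistics import mean, stdev


def flag_anomalies(readings: list[float], z_threshold: float = 2.0) -> list[int]:
    """Return indices of readings more than z_threshold standard
    deviations from the fleet mean -- a simple early-warning signal."""
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, r in enumerate(readings)
            if sigma > 0 and abs(r - mu) / sigma > z_threshold]


# UPS inlet temperatures (degrees C) pooled from six sites; values are
# invented for the example. One site is running noticeably hot:
temps = [24.1, 24.3, 23.9, 24.0, 31.5, 24.2]
print(flag_anomalies(temps))  # [4]
```

The point of pooling data across sites is exactly this: a single on-premises tool sees one building's readings in isolation, whereas a cloud service can compare a site against the whole fleet's baseline and surface drift before it becomes downtime.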

Cloud-based systems simplify the task of deploying new equipment within existing installations, or upgrading them, including software updates for data centers in different regions or locations. Managing such upgrades site by site with only on-premises management software, especially at the edge, leaves the user in a challenging, resource-intensive and time-consuming position.