It might seem premature to talk about 400Gbps and 800Gbps cabling systems when the vast majority of the data center world is only just starting to think about laying 100Gbps wiring. Yet the sheer volume of information that hosting facilities will have to store and process over the next few years suggests that future-proofing networks against looming capacity bottlenecks should be a priority for many owners and operators.

Cisco’s latest Global Cloud Index predicts that global data center traffic will reach 19.5 zettabytes by 2021, for example, more than triple the six zettabytes recorded in 2016 and representing a compound annual growth rate of 27 percent. What’s more, 95 percent of that traffic is expected to be driven by the cloud as more businesses and consumers store and process information in centrally hosted environments.
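
For readers who want to check the growth rate for themselves, the arithmetic is straightforward. The short sketch below uses only the figures quoted above (6 zettabytes in 2016, 19.5 zettabytes in 2021) and confirms the roughly 27 percent compound annual growth rate.

```python
# Quick sanity check of the growth figures quoted above, using only the
# article's own numbers: 6 zettabytes in 2016 rising to 19.5 zettabytes in 2021.
start_zb, end_zb, years = 6.0, 19.5, 5
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 27 percent
```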

That trend has led to a widespread expectation that hyperscale cloud hosting companies like Amazon Web Services (AWS), Facebook, Google and Microsoft will start implementing 400Gbps cabling systems as early as 2019, whilst simultaneously preparing themselves for 800Gbps upgrades at a later date. Much of that activity will be focused on the leaf and spine core of the network, as well as on inter-data center links.

Even owners and operators of more modest hosting facilities - tier two cloud providers, systems integrators, large enterprises and government departments, for example - will find that the increased use of 100Gbps cabling systems at the top-of-rack level will inevitably create choke points within current network architectures, hastening upgrades to core backbones.

A question of form

The bigger question is not so much if or when 400Gbps or 800Gbps networks will be needed, but what form they will take. And the answer varies according to individual requirements for data speed, cabling distance and component size/power, combined with planned expansion strategies and the type of network/server/storage architecture already in place.

When it comes to server interconnects, Ethernet seems to have become the specification of choice amongst large scale hosting providers and enterprise data centers focused on cloud service delivery, whereas InfiniBand is favored by the high performance computing (HPC) community. For linking external storage arrays, however, Fibre Channel (FC) is widely used due to its considerable legacy install base.

Both InfiniBand and Fibre Channel have roadmaps which push existing capacity further. The T11 specifications for 64GFC and 256GFC products delivering maximum bi-directional throughput of 128Gbps and 512Gbps were completed last year, for example, with commercial products expected in 2019. Those will give data centers with existing FC infrastructure at the network edge and server interconnect level additional capacity when upgrading the 16/32/128GFC components already in situ over OM3/OM4 MMF cabling systems, particularly for linking servers to external data storage devices.
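
The throughput figures above follow from simple arithmetic once the lane structure is known. The sketch below assumes, based on the T11 naming convention, that 64GFC is a single serial lane and 256GFC runs four such lanes in parallel; those lane counts are an illustrative assumption rather than something stated in the article.

```python
# Illustrative arithmetic behind the Fibre Channel throughput figures quoted above.
# Lane counts are assumptions based on the T11 naming convention (256GFC treated
# as four parallel 64GFC lanes); they are not taken from the article itself.
variants = {
    "64GFC":  {"lanes": 1, "gbps_per_lane": 64},
    "256GFC": {"lanes": 4, "gbps_per_lane": 64},
}
for name, v in variants.items():
    one_way = v["lanes"] * v["gbps_per_lane"]
    print(f"{name}: {one_way}Gbps each way, {2 * one_way}Gbps bi-directional")
```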

The InfiniBand Trade Association (IBTA) has also published a roadmap outlining 1x, 4x and 12x port widths, with bandwidth reaching 600Gbps in 2017, again for server interconnects over short-distance passive and active copper cables (up to 30m) or optical cables (up to 10km).
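
To show how those port widths scale, the minimal sketch below assumes the roughly 50Gbps-per-lane HDR signalling rate that puts a 12x port at the 600Gbps figure quoted above; the per-lane rate is an assumption for illustration, not a number from the article.

```python
# A minimal sketch of how IBTA port widths scale aggregate bandwidth, assuming
# a ~50Gbps-per-lane HDR signalling rate (which puts a 12x port at 600Gbps).
hdr_lane_gbps = 50
for width in (1, 4, 12):
    print(f"{width}x port: ~{width * hdr_lane_gbps}Gbps aggregate")
```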

But it is Ethernet which appears to have stolen a march on both FC and InfiniBand, at least in terms of the capacity it expects to support in the nearer term. In March this year the Ethernet Alliance, a consortium of vendors, industry experts, academics and government professionals committed to the success and expansion of Ethernet, released the latest Ethernet roadmap, mapping out future iterations of the technology.

It expects to see 400 gigabit Ethernet (GbE) links deployed in hyperscale data centers by 2020, with 800GbE and 1.6 terabit Ethernet (TbE) connectivity appearing within five years or so. Of course, any timescale for end user deployment depends on when individual manufacturers can get suitable components onto the market, and on how affordable they are.

The 400GbE and 200GbE specifications were ratified by the IEEE 802.3 Ethernet Working Group in December 2017. Rather than making it optional, the 802.3bs architecture embeds Reed-Solomon forward error correction (FEC) in the physical coding sublayer (PCS) for each rate, effectively requiring manufacturers to develop 200GbE and 400GbE extender sublayers to support the future development of other PCS sublayers that can use other types of FEC for greater efficiency at a later date.
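
Mandatory FEC is not free: it consumes a slice of the line rate. The sketch below assumes the RS(544,514) Reed-Solomon code widely associated with 200GbE and 400GbE (an assumption for illustration, not a figure taken from the article) to show the order of overhead such a scheme adds.

```python
# The mandatory FEC in the 802.3bs PCS is Reed-Solomon based; the parameters below
# assume the RS(544,514) code widely associated with 200GbE/400GbE (an assumption
# for illustration) to show the overhead it adds.
n_symbols, k_symbols = 544, 514   # total codeword symbols vs payload symbols
parity = n_symbols - k_symbols
overhead = n_symbols / k_symbols - 1
print(f"Parity symbols per codeword: {parity}")
print(f"Relative line-rate overhead: {overhead:.1%}")  # about 5.8 percent
```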

Three 200GbE standards - 200GBASE-DR4 (500m), 200GBASE-FR4 (2km) and 200GBASE-LR4 (10km) - all use single mode fiber (SMF) at 50Gbps per lane to achieve the desired throughput. An equivalent SMF-based 400GbE standard - 400GBASE-DR4 (500m) - boosts that to 100Gbps over four lanes, whilst 400GBASE-FR8 (2km) and 400GBASE-LR8 (10km) use eight lanes at 50Gbps. A fourth 400GbE specification - 400GBASE-SR16 - combines 16 strands of MMF at 25Gbps per lane to push 400Gbps signals over distances of up to 100m.
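
The per-lane arithmetic above is easier to scan in one place. The short summary below draws only on the lane counts, per-lane rates and reach figures quoted in the text.

```python
# The per-lane arithmetic described above, gathered into one place for comparison.
standards = [
    # (name, fiber, lanes, Gbps per lane, reach)
    ("200GBASE-DR4",  "SMF",  4,  50, "500m"),
    ("200GBASE-FR4",  "SMF",  4,  50, "2km"),
    ("200GBASE-LR4",  "SMF",  4,  50, "10km"),
    ("400GBASE-DR4",  "SMF",  4, 100, "500m"),
    ("400GBASE-FR8",  "SMF",  8,  50, "2km"),
    ("400GBASE-LR8",  "SMF",  8,  50, "10km"),
    ("400GBASE-SR16", "MMF", 16,  25, "100m"),
]
for name, fiber, lanes, per_lane, reach in standards:
    print(f"{name:<15} {fiber}  {lanes:>2} x {per_lane:>3}Gbps = {lanes * per_lane}Gbps  ({reach})")
```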

Several new optical I/O form factors designed to meet those standards have now emerged, including CFP8, OSFP, QSFP-DD and COBO. Each targets different types of MMF or SMF wiring and electrical interfaces, and is optimized against various metrics within the data center, most notably transmission distance requirements, backwards compatibility with existing systems, and the component space, heat and power consumption constraints of densely populated compute and network architecture.
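
A rough side-by-side view helps place those names. The electrical lane counts and packaging notes in the sketch below are typical published figures treated here as assumptions for illustration; none of them come from the article itself.

```python
# A rough comparison of the I/O options named above. Lane counts and packaging
# notes are typical published figures treated here as assumptions for illustration.
form_factors = {
    "CFP8":    {"electrical_lanes": "16 x 25G or 8 x 50G", "style": "pluggable, largest of the four"},
    "OSFP":    {"electrical_lanes": "8 x 50G",             "style": "pluggable"},
    "QSFP-DD": {"electrical_lanes": "8 x 50G",             "style": "pluggable, cage accepts existing QSFP modules"},
    "COBO":    {"electrical_lanes": "8 or 16 lanes",       "style": "on-board optics rather than a faceplate pluggable"},
}
for name, props in form_factors.items():
    print(f"{name:<8} {props['electrical_lanes']:<22} {props['style']}")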

Some manufacturers are already well advanced in their plans for these form factors. 400GbE-compliant CFP8 transceivers using 50G pulse amplitude modulation (PAM4) technology have been demonstrated by various companies, including Finisar and NeoPhotonics. The transceivers have been modified for compliance with the 400GBASE-FR8 SMF standard, pushing maximum transmission distance out to 2km for campus data center networks.
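
PAM4 is what lets a single lane carry 50Gbps without doubling the signalling rate: each symbol encodes two bits across four amplitude levels rather than one bit across two, as with NRZ. The minimal sketch below illustrates the idea; the Gray-coded bit-to-level mapping is an assumption chosen for illustration.

```python
# Minimal illustration of why 50G PAM4 doubles per-lane throughput versus NRZ:
# each symbol carries two bits (four amplitude levels) instead of one, so a
# ~26.6GBd lane carries ~50Gbps. The Gray-coded mapping below is assumed.
pam4_levels = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}
bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = [pam4_levels[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
print(symbols)  # four symbols carry eight bits
```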

Mellanox too has indicated its intention to introduce ASICs supporting 400GbE at some point in 2018, whilst Huawei late last year completed tests of 400G optical network technology in partnership with China Telecom and Spirent for commercial use in access, metro and data center networks.

This article appeared in the April/May issue of DCD Magazine.