When it comes to optical connectivity, suppliers are bound by a Multi-Source Agreement (MSA) that specifies minimum standards of quality and performance. Compliance with an MSA is just the first step, covering checkbox items such as speed or interoperability. Data centers are entering one of their most intensive and demanding eras, in which many small but crucial decisions about optical components will make or break their competitive standing. This is being exacerbated by the Covid-19 pandemic, which has data centers scrambling to provide the speed and capacity required to support more video streaming and conferencing due to the many “stay at home” orders across the United States. Early estimates show a 50 to 70 percent increase in total Internet usage amidst the crisis.

Differentiation between providers will be the deciding factor in whether your data center can scale and keep up with rising data demands for HPC applications and hyperscale connectivity. Below are the criteria to look for in a supplier, and the right questions to ask when planning.

How can I manage speed vs. power?

Even the most advanced data center can’t fight physics, and as business demands push speeds from 100G to 200G, 400G and beyond, power dissipation will rise accordingly. As power consumption grows, the technology required to support greater speeds also grows in cost and complexity, with more optical lanes and components to consider. Complexity is the enemy of reliability, and suddenly high speed feels like a necessary evil rather than a benefit. What does the vendor in question do to address these concerns? One example is to embed the technology into the devices, a move that we’ve seen consistently save as much as 30 percent on power in optical products.
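
To frame the trade-off, total module power can be approximated as energy per bit multiplied by data rate. The short Python sketch below uses purely illustrative energy-per-bit figures (assumptions, not vendor specifications) to show how power scales with line rate and what a 30 percent saving looks like in absolute terms.

```python
# Illustrative sketch only: the energy-per-bit figure below is an assumed
# placeholder, not a vendor specification.
def module_power_watts(data_rate_gbps: float, energy_pj_per_bit: float) -> float:
    """Power (W) = data rate (bits/s) x energy per bit (J/bit)."""
    return data_rate_gbps * 1e9 * energy_pj_per_bit * 1e-12

for rate_gbps in (100, 200, 400):
    baseline = module_power_watts(rate_gbps, energy_pj_per_bit=30.0)  # assumed figure
    embedded = baseline * 0.7  # the roughly 30 percent saving described above
    print(f"{rate_gbps}G: ~{baseline:.1f} W vs. ~{embedded:.1f} W with a 30% saving")
```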

Can I keep my existing infrastructure?

As the drive to faster speeds continues, many enterprises struggle with hardware issues and can’t even begin to consider a project that requires ripping and replacing existing fiber infrastructure. For niche applications, providers should be able to offer non-MSA products, helping data centers take advantage of higher speeds and connectivity, all on the same infrastructure, in a sustainable way. It’s true that higher speeds don’t come with the same interoperability and ‘quality of life’ benefits as optical equipment for 100-200G speeds, but that’s why hot-swappable pluggable transceivers are designed to make up the difference.

What do you offer for low latency?

Latency is a true game-changer, especially for high performance computing centers. Higher speeds are incredibly important as well, but it’s latency that will enable new applications that simply could not exist before, such as true remote surgery or widespread AR/VR workplace tools that accelerate remote workforces. To fully understand how latency is going to affect business operations, ask about Forward Error Correction (FEC). By reducing FEC levels, you can reduce latency, and a superior Bit Error Rate (BER) allows data centers to move away from strong FEC. This can reduce latency to as low as 80 nanoseconds from as high as 250 nanoseconds. It’s just another way to keep power low and sustainability front and center.
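
Those nanoseconds add up across a path. The minimal sketch below (Python) uses the per-hop figures cited above, with an assumed hop count chosen only for illustration, to show the cumulative effect of lighter FEC.

```python
# Minimal sketch using the per-hop transceiver latencies cited above
# (roughly 250 ns with strong FEC vs. 80 ns with reduced FEC); the hop
# count is an assumption for illustration.
STRONG_FEC_NS = 250
REDUCED_FEC_NS = 80

def path_latency_ns(hops: int, per_hop_ns: float) -> float:
    """Cumulative transceiver latency across a path (other latency sources excluded)."""
    return hops * per_hop_ns

hops = 6  # e.g. a leaf-spine round trip; purely illustrative
saved = path_latency_ns(hops, STRONG_FEC_NS) - path_latency_ns(hops, REDUCED_FEC_NS)
print(f"Over {hops} hops, reduced FEC saves roughly {saved:.0f} ns of transceiver latency")
```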

Is the technology cost-effective?

To understand whether components are cost-effective, data centers must really focus on reliability. Does the technology offer enough reliability that the total cost of ownership is going to be lower than a competitor’s? This is determined by a few factors, from the power consumption itself to the cost of replacing equipment over time. Good questions to pose concern the Mean Time to Failure (MTTF) and the Time to 1% Failure (TT1%F); these are metrics the vendor should have readily available. The sooner the technology must be replaced, the less cost-effective the solution is going to be overall.
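
To relate these metrics to replacement cost, a common simplification is to assume a constant failure rate (an exponential failure model). Under that assumption, which a vendor’s real reliability data may or may not support, TT1%F and the expected number of replacements over a service life follow directly from MTTF, as the Python sketch below shows with an illustrative MTTF.

```python
import math

# Sketch under an assumed constant-failure-rate (exponential) model; a vendor's
# actual TT1%F should come from their own reliability data, not this approximation.
def tt1pct_from_mttf(mttf_hours: float) -> float:
    """Time by which roughly 1% of units are expected to fail (exponential assumption)."""
    return -mttf_hours * math.log(0.99)

def expected_replacements(units: int, mttf_hours: float, service_hours: float) -> float:
    """Expected failures over the service life, under the same assumption."""
    return units * (1 - math.exp(-service_hours / mttf_hours))

mttf = 2_000_000  # hours; illustrative figure only
print(f"TT1%F ~ {tt1pct_from_mttf(mttf):,.0f} hours")
print(f"Expected failures in 5 years per 10,000 modules: "
      f"{expected_replacements(10_000, mttf, 5 * 8760):.0f}")
```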

How can I improve my yield?

As data center operators cope with faster speeds, additional power consumption and more devices to handle the growth in data traffic, gaining consistently higher yields can be challenging. To achieve higher yields with pluggable optical modules, data centers must focus their inquiries into these technologies on the right areas. The main factor in achieving higher yield is the effectiveness of the alignment technology, because the other two main benchmarks of optical performance, optical design and optical fabrication, have essentially reached the limits of improvement. Better alignment of the transceivers means lower manufacturing cost, less assembly labor and a more repeatable manufacturing process that drives higher reliability, all of which can mean better yield for a data center over time.
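
One simple way to see why yield feeds directly into cost: every scrapped module is paid for by the ones that pass. The sketch below uses assumed build-cost and yield figures, not industry data, purely to illustrate the relationship.

```python
# Illustrative sketch: the build cost and yield figures are assumptions, not industry data.
def effective_unit_cost(build_cost: float, yield_fraction: float) -> float:
    """Cost per good module once scrapped units are spread over the passing ones."""
    return build_cost / yield_fraction

for y in (0.85, 0.92, 0.98):
    print(f"yield {y:.0%}: effective cost ${effective_unit_cost(100.0, y):.2f} per good module")
```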

How else can I drive down costs?

There is plenty of competition in the optical components market, which allows data centers to drive costs down and helps offset the additional OPEX of supporting burgeoning data traffic. However, to take advantage of a wider selection of components, including lower-performing ones, data center operators must ensure their optical transceivers can make up the difference through improved light coupling between the VCSELs and the fiber.
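
The intuition is a link-budget one: every dB recovered at the VCSEL-to-fiber interface is a dB of margin that can absorb a cheaper, lower-performing component elsewhere. The Python sketch below is a rough link-budget illustration in which every figure is an assumed placeholder rather than a datasheet value; real designs must follow the relevant standard’s budget.

```python
# Rough link-budget sketch; every figure here is an assumed placeholder, not a
# datasheet value.
def link_margin_db(launch_dbm: float, coupling_loss_db: float, connector_loss_db: float,
                   fiber_atten_db_per_km: float, length_km: float,
                   rx_sensitivity_dbm: float) -> float:
    """Received power minus receiver sensitivity, all in dB terms."""
    received = launch_dbm - coupling_loss_db - connector_loss_db - fiber_atten_db_per_km * length_km
    return received - rx_sensitivity_dbm

baseline = link_margin_db(-1.0, 2.0, 1.5, 3.0, 0.1, -9.0)         # assumed short-reach values
better_coupling = link_margin_db(-1.0, 1.0, 1.5, 3.0, 0.1, -9.0)  # 1 dB better VCSEL-to-fiber coupling
print(f"Margin: {baseline:.1f} dB baseline vs. {better_coupling:.1f} dB with improved coupling")
```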

A transceiver is at the heart of every data center. It must be of the highest quality and performance, since a single poorly performing transceiver can bring down an entire network line, making the other linked transceivers vulnerable and leaving many broken links. Ensure that vendors will be able to support the next generation of products, including 800G, so you can transition smoothly without the need to qualify a new vendor.