When it comes to networks, higher speeds and greater throughput are better. It’s no surprise that many organizations – especially those in performance-sensitive industries such as finance, healthcare and Big Tech – are well into the process of upgrading their equipment.

Yet despite the hype around 40Gbps and 100Gbps network speeds, 10Gbps remains an integral part of many corporate datacenters today. That creates a problem: if IT and NetOps teams neglect visibility and monitoring of these older network segments, they’ll create blind spots and security holes in the corporate network. Conversely, organizations that are early in the upgrade journey will want to future-proof their solutions to accommodate 40G/100G, or they’ll have blind spots at the higher end.

In either case, maintaining overall network performance and security means IT should build a visibility infrastructure that can broker, capture and analyze traffic at a variety of speeds without unnecessarily dropping packets.

10G remains relevant for the enterprise

10G infrastructure is likely to stick around for the foreseeable future, for a number of reasons. First and foremost, many industries and companies that don’t require higher speeds will be slow to upgrade, and when they do, they won’t replace the entire network at once. The refresh cycle for hardware switches and similar equipment runs several years, which stretches out upgrades. In the meantime, those organizations will need to keep monitoring 10G.

Even companies that have upgraded much of their infrastructure to 40 or 100G will typically focus on particular segments, such as north-south traffic, and reserve slower 10G links for east-west or edge traffic. IT and security tools complicate this further, as many network detection and response tools, firewalls and other devices can’t ingest traffic at speeds greater than 10G. All this means that, in practice, most organizations will be working with a mix of speeds indefinitely, which can create a barrier to visibility.
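One common way to bridge that gap is for a packet broker to load-balance a faster link across several 10G tool ports using a symmetric flow hash, so no tool has to ingest more than 10G but each tool still sees both directions of every conversation it is handed. The sketch below is illustrative only; the port names and flow fields are assumptions, not any particular vendor’s API.

    # Illustrative sketch: spread traffic from a 40G link across four 10G tool
    # ports with a symmetric flow hash, so no tool ingests more than 10G and
    # both directions of a conversation land on the same tool.
    # Port names and Flow fields are illustrative assumptions.
    import hashlib
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        src_ip: str
        dst_ip: str
        src_port: int
        dst_port: int
        proto: int

    TOOL_PORTS = ["tool-10g-1", "tool-10g-2", "tool-10g-3", "tool-10g-4"]

    def tool_port_for(flow: Flow) -> str:
        # Sort the two endpoints so A->B and B->A hash to the same value.
        endpoints = sorted([(flow.src_ip, flow.src_port), (flow.dst_ip, flow.dst_port)])
        key = f"{endpoints}|{flow.proto}".encode()
        idx = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(TOOL_PORTS)
        return TOOL_PORTS[idx]

    print(tool_port_for(Flow("10.0.0.5", "10.0.1.9", 51514, 443, 6)))
    print(tool_port_for(Flow("10.0.1.9", "10.0.0.5", 443, 51514, 6)))  # same tool port

Real packet brokers do this in hardware at line rate, but the principle is the same: flow-affinity hashing lets slower tools keep serving faster segments.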

Maintaining visibility from 10G to 100G

Visibility is binary – you either have it or you don’t. If parts of the network aren’t included, IT doesn’t really have visibility. The ability to monitor all traffic is central to avoiding blind spots that would otherwise leave an organization open to security exploits and performance issues. In terms of monitoring equipment, this means organizations need packet brokers or capture devices that offer ports at a range of speeds, so they can distribute or capture packets at the correct rate for a given need or destination.
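To make the blind-spot idea concrete, here is a rough sketch of the kind of coverage check a team might run against its own inventory: every monitored link should be matched to a broker or capture port that can keep up with it. The link and port data below are made-up examples, not a real device’s data model.

    # Rough coverage check: match each monitored link (speed in Gbps) to a
    # free broker/capture port of at least that speed; anything left over is
    # a blind spot. The inventories below are made-up examples.
    links = {"core-uplink-1": 100, "dc-spine-2": 40, "branch-edge-7": 10, "lab-switch-3": 10}
    broker_ports = {"p1": 100, "p2": 40, "p3": 10}

    free_ports = dict(broker_ports)
    blind_spots = []
    for link, speed in sorted(links.items(), key=lambda kv: -kv[1]):  # fastest first
        candidates = [p for p, s in free_ports.items() if s >= speed]
        if candidates:
            # Use the smallest adequate port so faster ports stay available.
            free_ports.pop(min(candidates, key=lambda p: free_ports[p]))
        else:
            blind_spots.append(link)

    print("blind spots:", blind_spots or "none")  # -> ['lab-switch-3'] here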

Performing real-time analytics or capturing packets to disk at speeds greater than 10G requires specialized hardware components; otherwise, packets will be dropped. Using 10G-rated equipment on newer, faster network segments will result in degraded visibility because of those lost packets.
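Some back-of-the-envelope arithmetic shows why: capturing a fully utilized link to disk at line rate demands sustained write throughput well beyond what a single commodity storage device delivers. The ~3 GB/s per-drive figure below is an assumed ballpark for illustration only.

    # Back-of-the-envelope arithmetic: sustained write throughput needed to
    # capture a fully utilized link to disk. The 3 GB/s single-NVMe figure is
    # an assumed ballpark, not a measured spec.
    LINE_RATES_GBPS = [10, 40, 100]
    ASSUMED_DRIVE_WRITE_GB_PER_S = 3.0

    for gbps in LINE_RATES_GBPS:
        gb_per_s = gbps / 8                      # bits -> bytes
        tb_per_hour = gb_per_s * 3600 / 1000     # GB/s -> TB/hour
        drives_needed = gb_per_s / ASSUMED_DRIVE_WRITE_GB_PER_S
        print(f"{gbps:>3}G: {gb_per_s:5.2f} GB/s sustained, "
              f"~{tb_per_hour:5.1f} TB/hour, ~{drives_needed:.1f}x the assumed drive")

At 100G that works out to roughly 12.5 GB/s, or about 45 TB per hour, which is why capture at these speeds typically leans on specialized capture hardware and striped storage rather than a single commodity drive.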

Conversely, neglecting to monitor 10G segments at all should not even be an option, as doing so creates gaping holes in network coverage. One might think it’s possible to simply run older equipment on these older segments alongside newer hardware on the faster segments, and to an extent that’s true. But what you’re functionally doing is creating and maintaining a duplicate monitoring infrastructure, which adds overhead and can also degrade visibility as IT and security teams context-switch from segment to segment.

Building a visibility infrastructure is complex, especially for hybrid cloud enterprises that must access packet data from public or private clouds as well as on-prem environments. Trying to build and manage two at once is a recipe for disaster. This is another argument in favor of future-proofing visibility: an infrastructure that supports a range of speeds won’t need to be replaced in another few years as more of the network is upgraded.

A single visibility infrastructure

The best and easiest approach is a single visibility infrastructure that simplifies management and applies the same analysis to every link, regardless of speed. This requires packet brokering and capture solutions with the internal hardware and external ports to handle a variety of speeds simultaneously. They’re not common, but they are out there. This not only creates a unified monitoring architecture, it also gives users the latest features and functionality whether the link in question is 10G, 40G or 100G. Moreover, newer equipment is significantly denser, saving valuable datacenter space. Ultimately, this approach allows organizations to continue safely using 10G segments, equipment and software, prolonging those IT investments.

It wasn’t that long ago that the conversation was about how to add faster 40G and 100G speeds to the network while maintaining existing observability. As those speeds become increasingly prevalent in the datacenter, the conversation is shifting to how best to maintain visibility for older 10G segments. It’s an important discussion to have.
