Traffic monitoring and analysis solutions still lag behind the virtualization we have seen across many other areas of the IT industry over the last decade. Management applications that gather logs, events, and alarms have been designed in software and virtualized, but the heavy lifters that analyze network traffic and produce this information still follow the heavy-iron approach: processing traffic in real time with expensive, complex, and inflexible proprietary appliances. Even the hardware-centric networking sector is virtualizing switches and routers, yet many network analysis solutions still depend on proprietary hardware.

However, next-generation network visibility platforms now enable the virtualization of traffic analysis applications, ushering in the age of software-defined analysis apps for network, application, and security operations. This allows for more flexible analysis, better understanding, and, most importantly, a more economical and scalable approach – something the security industry badly needs.

Today - heavy iron for network traffic analysis

Monitoring network traffic today is predominantly hardware-intensive and difficult to scale across an enterprise's many delivery platforms and locations. Yet monitoring the traffic is crucial for assessing network and application performance and, in particular, for detecting security threats. Because each has its own analysis objective, many solutions analyze the same traffic on their own proprietary hardware, duplicating that traffic and requiring still more hardware to process and store ever-growing volumes.

The question has long been how network and security operations can build a scalable, cost-effective system to decrypt and analyze ever-expanding network traffic. For decades, server and storage infrastructure has virtualized compute and storage successfully and efficiently, and over the last several years even switches, routers, and firewalls have been virtualized. Virtualization in traffic monitoring and analysis, however, is still emerging. Network and application management systems, SIEMs, and SOARs run as virtualized software because the data volumes they handle are manageable and less time-sensitive – but even they can struggle when too much data needs to be processed. In the majority of cases proprietary hardware is still required, especially once network traffic surpasses 10 Gbps.

Replicating this model across the many applications that require real-time traffic analysis leads to a proliferation of "stovepipe" solutions. Each captures and analyzes the same traffic using specialized NICs, runs tailored analysis engines on FPGAs or across multiple processors in parallel just to keep up with incoming traffic in real time, then stores its results and discards the underlying traffic.

The cost and size of these technologies restrict IT's insight into network operations: they limit how much traffic can be analyzed and keep only summary details of any event or observation, while the underlying data that would be invaluable for later root-cause analysis is discarded. Deployments therefore tend to concentrate on critical aggregation sites such as network ingress-egress points (north-south), even though visibility would also be beneficial or necessary elsewhere, such as in internal core networks or around high-value content. For IT organizations that rely heavily on event and flow data generated by network or endpoint devices, this adds yet another layer of abstracted metadata: more data with less insight.

Scalability and economic viability

Network packet brokers were the first step toward addressing this issue. The IT sector learned long ago that tapping network links individually for each application was inefficient and created additional points of failure. This prompted the introduction of packet brokers, which aggregate traffic from multiple points in the network, filter out unneeded traffic, timestamp packets, replicate and load-balance traffic for each tool, and decrypt communications. Although packet brokers have taken over some analysis tasks, such as NetFlow analysis, much of the analysis load still rests on the analysis applications and their own hardware.
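
To make the broker's role concrete, here is a minimal Python sketch of those core functions (filtering, timestamping, and replication to attached tools). It is a toy illustration: the class, the stand-in tools, and the addresses are hypothetical and do not correspond to any vendor's API.

    import time
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Packet:
        src: str            # source IP address
        dst: str            # destination IP address
        payload: bytes
        ts: float = 0.0     # timestamp applied by the broker

    @dataclass
    class PacketBroker:
        # Toy model of a packet broker: filter, timestamp, replicate to tools.
        filters: List[Callable[[Packet], bool]] = field(default_factory=list)
        tools: List[Callable[[Packet], None]] = field(default_factory=list)

        def ingest(self, pkt: Packet) -> None:
            # Drop traffic that any filter rejects (e.g., streaming video).
            if any(not keep(pkt) for keep in self.filters):
                return
            # Stamp the packet once, centrally, so every tool sees the same clock.
            pkt.ts = time.time()
            # Replicate the packet to every attached analysis tool.
            for tool in self.tools:
                tool(pkt)

    # Example wiring: drop traffic to an illustrative address range and feed
    # two stand-in "tools"; names and addresses are hypothetical.
    broker = PacketBroker(
        filters=[lambda p: not p.dst.startswith("203.0.113.")],
        tools=[lambda p: print("APM tool saw", p.src, "->", p.dst, "at", p.ts),
               lambda p: print("IDS tool saw", len(p.payload), "bytes")],
    )
    broker.ingest(Packet(src="10.0.0.5", dst="10.0.0.9", payload=b"\x00" * 64))

Even in this miniature form, the broker only aggregates, filters, and fans traffic out; the heavy analysis still happens in the tools it feeds.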

Furthermore, filtering out traffic deemed unimportant and not analyzing it (e.g., excluding YouTube streaming from Application Performance Monitoring) works for certain applications. For security, however, all traffic is open to exploitation these days, so restricting visibility excludes traffic from analysis and raises the risk of missing the one attack that leads to large-scale exploitation. The bigger challenge with this approach is that the NVF still passes traffic surges and spikes straight through, and traffic is reproduced at high rates, so the analysis applications must rely on enormous processing power to evaluate data properly at peak rates.

Flipping network analysis!

We are at a point where we need to consider virtualizing network traffic analysis to take advantage of the scalability of virtualized environments. Essentially, we need to rethink the traffic analysis architecture.

Today, each traffic analysis application captures traffic, analyzes it in real time, and stores its metadata findings. Going forward, we need to separate traffic capture and storage from analysis and centralize them further, allowing the monitoring application to analyze in near real time. This new strategy consolidates traffic capture, decryption, and storage on a single platform and distributes traffic through software APIs. Existing NVF technologies only centralize traffic aggregation; by also storing the data, this approach extends NVF capability to control time, combining the benefits of packet brokers and packet capture solutions into a single, unifying platform.

This creates a Network Traffic Visibility platform that holds all the information applications need to examine while permitting reliable traffic distribution and consumption. If further processing capacity is needed, a new virtual instance of the analysis engine can simply be spun up: because traffic is reliably captured and readily available in near real time or back in time, the new instance picks up whenever it starts, without ever losing a packet.
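
As a rough illustration of that division of labor, the following Python sketch models a capture-and-store platform that timestamps and retains every packet and serves it back through a simple API, so an analysis instance spun up later can still read the traffic it missed. The CaptureStore class and its methods are hypothetical: an in-memory stand-in for what a real platform would implement with high-rate capture hardware and persistent storage.

    import time
    from bisect import bisect_left, bisect_right
    from typing import Iterator, List, Tuple

    class CaptureStore:
        # Hypothetical visibility platform: timestamp and retain every packet,
        # then serve it to analysis apps through a software API.

        def __init__(self) -> None:
            self._ts: List[float] = []      # packet timestamps, in arrival order
            self._pkts: List[bytes] = []    # raw packet bytes

        def capture(self, raw: bytes) -> None:
            # Centralized capture: stamp once and store, never drop.
            self._ts.append(time.time())
            self._pkts.append(raw)

        def replay(self, start: float, end: float) -> Iterator[Tuple[float, bytes]]:
            # Back-in-time read: an analysis instance spun up later can still
            # consume all traffic captured in the window it cares about.
            lo = bisect_left(self._ts, start)
            hi = bisect_right(self._ts, end)
            for i in range(lo, hi):
                yield self._ts[i], self._pkts[i]

    # A new virtual analysis instance attaches after the fact and still sees
    # every packet captured in its window of interest.
    store = CaptureStore()
    for n in range(3):
        store.capture(bytes([n]) * 64)

    t_now = time.time()
    for ts, pkt in store.replay(t_now - 60, t_now):
        print(f"{ts:.3f}: {len(pkt)}-byte packet")

The design choice this sketch highlights is that capture and retention happen once, centrally, while any number of consumers read through the API at their own pace.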

This significantly alters the approach. Rather than large analysis and compute infrastructures that lose information whenever they cannot keep up with traffic, analysis applications can now operate in real time or near real time without ever losing traffic. The added advantage is that the actual network traffic surrounding any event is available for detailed forensic analysis, providing greater insight into activity before and after the event. Such back-in-time analysis is particularly relevant in cybersecurity, where new attack methods may have been present but undetected for some time.
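
A small, assumed example of how such back-in-time access might be used: given an alert timestamp, compute the window of stored traffic to replay for pre- and post-event context. The window sizes and the query step are illustrative, not a specific product's workflow.

    from datetime import datetime, timedelta, timezone
    from typing import Tuple

    def forensic_window(event_time: datetime,
                        before: timedelta = timedelta(minutes=15),
                        after: timedelta = timedelta(minutes=15)) -> Tuple[datetime, datetime]:
        # The interval of stored traffic to pull back for pre- and post-event context.
        return event_time - before, event_time + after

    # Example: an alert fires; request the surrounding half hour from the capture store.
    alert = datetime(2023, 5, 2, 14, 37, tzinfo=timezone.utc)
    start, end = forensic_window(alert)
    print("replay stored traffic from", start.isoformat(), "to", end.isoformat())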

Centralizing traffic collection with the ability to store and distribute at extremely high rates allows resource-intensive traffic analysis to be virtualized and run on less expensive servers, eliminating costly proprietary hardware. Traffic spikes and growth are absorbed by the network visibility platform, so designing for average rather than peak traffic becomes the norm, prolonging the life of existing monitoring and analytics equipment and postponing otherwise necessary upgrades. Anytime data access provides rapid packet-level access around any event, pre- and post-incident, giving the context to determine a threat's or issue's severity, impact, and mitigation. Virtualizing analysis enables software-defined traffic monitoring and analysis, creating better visibility and insight while reducing total cost of ownership.
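
A back-of-the-envelope sketch, using assumed figures, of why buffering in the visibility platform lets the analysis tier be sized for average rather than peak traffic:

    # All numbers below are assumptions chosen only to illustrate the point.
    avg_rate_gbps = 10          # assumed sustained average traffic
    peak_rate_gbps = 40         # assumed burst rate during a spike
    spike_seconds = 120         # assumed duration of the spike

    # Backlog that accumulates while analysis keeps draining at the average rate.
    backlog_gb = (peak_rate_gbps - avg_rate_gbps) * spike_seconds / 8

    # Time for the analysis tier to catch up once traffic returns to average,
    # assuming it has a little headroom above the average rate.
    headroom_gbps = 2
    catch_up_seconds = backlog_gb * 8 / headroom_gbps

    print(f"buffer needed: ~{backlog_gb:.0f} GB")
    print(f"catch-up time at {avg_rate_gbps + headroom_gbps} Gbps: ~{catch_up_seconds / 60:.0f} min")

Under these assumed numbers a two-minute spike to 40 Gbps requires roughly 450 GB of buffering and about half an hour of catch-up time, capacity that sits in the platform's storage rather than in every analysis application.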
