Software-defined networking (SDN) holds the potential to radically alter data center architecture in the long term, but adoption and implementation rates may remain low for now, with buyers wary of a new, unproven and arguably overhyped technology that as yet has little to offer in the way of standardization.

Brad Casemore, research director for data center networks at research firm IDC, says that a lot of SDN standards and specifications are currently being thrashed out, but they will take time. It’s important to remember that, by its very nature, SDN is developer-driven, and the predominantly hardware-orientated standardization frameworks of the past are not the best fit for this kind of evolution.

“The old ways of doing things in standards from the IETF or the IEEE are no longer as applicable in this API, software and device-driven world, and the networking industry is still adapting to that,” he said.


Software decoupled from hardware

Yet various analyst forecasts suggest that SDN-enabled products will take an increasingly large slice of the overall enterprise networking market in the future. IDC calculated that SDN-enabled physical network infrastructure, controller and network virtualization software, network and security services and related applications, and professional services, will be worth $8bn by 2018, representing a compound annual growth rate (CAGR) of 89 percent.
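For a sense of what an 89 percent CAGR means in practice, the short arithmetic sketch below compounds that rate over a four-year run-up to 2018; the four-year horizon and the implied starting figure are illustrative assumptions, not numbers taken from the IDC forecast.

```python
# Illustrative arithmetic only: compound an 89% annual growth rate.
# The four-year horizon is an assumption for the sake of the example,
# not a figure from the IDC forecast.
cagr = 0.89
years = 4
multiplier = (1 + cagr) ** years          # roughly 12.8x over four years
print(f"Growth multiple over {years} years at {cagr:.0%}: {multiplier:.1f}x")

# Working backwards, an $8bn market at the end of such a run would have
# started at roughly 8 / multiplier billion dollars.
print(f"Implied starting size for an $8bn end point: ${8 / multiplier:.2f}bn")
```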

The significant potential of SDN lies in its decoupling of software functions from the underlying network hardware to build virtual overlays that span multiple types and brands of switches, routers and other equipment. This gives network managers the ability to control the forwarding and transport planes from a central console and to spin up new network services and applications on the fly, often automatically, without having to touch the physical hardware.

But because SDN relies on software overlays, the underlying network hardware no longer needs much functionality built into its operating system or into specialized ASICs. The physical and virtual switches that the central SDN controllers program and direct traffic between can be very basic ‘white box’ switches or routers, or even virtual, software-based equivalents sitting on standardized x86 servers elsewhere in the network.
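To make that split between control and forwarding concrete, here is a minimal sketch in plain Python, using hypothetical class names rather than any vendor’s real API: a central controller object decides a match/action policy once and pushes it to every switch in the overlay, whether a physical white box or a virtual switch on an x86 server.

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match_dst: str   # destination prefix to match, e.g. "10.20.0.0/16"
    out_port: int    # port the switch should use when the rule matches

@dataclass
class Switch:
    name: str
    flow_table: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        # In a real network this would be an OpenFlow message or a vendor
        # API call; here it simply records the rule in a local table.
        self.flow_table.append(rule)

class Controller:
    """Central control plane: it decides forwarding, the switches obey."""

    def __init__(self, switches):
        self.switches = switches

    def create_virtual_network(self, subnet: str, out_port: int) -> None:
        # Push the same match/action policy to every switch in the overlay,
        # regardless of brand or whether it is physical or virtual.
        rule = FlowRule(match_dst=subnet, out_port=out_port)
        for sw in self.switches:
            sw.install(rule)

if __name__ == "__main__":
    fabric = [Switch("tor-1"), Switch("tor-2"), Switch("vswitch-on-x86")]
    ctrl = Controller(fabric)
    ctrl.create_virtual_network("10.20.0.0/16", out_port=3)
    for sw in fabric:
        print(sw.name, [(r.match_dst, r.out_port) for r in sw.flow_table])
```

In production the install step would be an OpenFlow or proprietary API call, but the division of labor is the same: the switches only forward packets, while the controller makes the decisions.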

That capability has understandably struck fear into the hearts of traditional network vendors such as Cisco, Juniper Networks, Extreme Networks, HP and others, which see the potential for a multibillion-dollar market for expensive, full-function network hardware to shrink considerably over the next decade. Rather than fight SDN’s momentum, all these companies have chosen to embrace the technology in a bid to keep themselves relevant and profitable in the long term. In some cases that means contributing to open-source SDN initiatives espousing interoperable SDN controllers able to manage any brand of equipment; in others it means simultaneously developing slightly different versions of the same code, which provide some competitive advantage through close integration with their own operating systems or physical hardware.

Giants collide

Indeed, Casemore points out that two major alternatives currently being put forward for next-generation data center networks come from networking and virtualization giants that previously joined forces to address the data center market through their Virtual Computing Environment (VCE) platform. Having parted company, Cisco now offers its Application Centric Infrastructure (ACI), based on technology acquired through its $863m purchase of Insieme Networks, while VMware claims 400 customers for its rival NSX platform, built from its $1.26bn buyout of Nicira in 2012.

Elsewhere, Juniper Networks has weighed in with its Contrail SDN controller, and has built into its other products features that allow third-party SDN controllers to manage its hardware in virtualized networks. HP has its own distribution of OpenStack, called Helion, while Extreme Networks has released its own SDN platform based on an OpenDaylight controller. Yet all these established vendors face a threat from startups such as Cumulus Networks, which has partnered with PLUMgrid to deliver an OpenStack-based SDN platform running on a Linux OS for bare-metal or white-box switches.

The old ways of doing things are no longer as applicable, and the networking industry is still adapting to that

Brad Casemore, IDC

For the moment, SDN has been implemented largely in single data centers, but there are moves to extend its reach between hosting facilities for customers that need to bridge virtual networks via wide-area network (WAN) links. Casemore believes the SDN WAN will be much easier for many enterprise data centers to justify, given the pain points they already encounter with SaaS and the cloud: it removes the obstacle of having to connect different silos of network and storage architecture, and the political tugs of war that can provoke.

“Right away you can see that MPLS hops are an issue for them, and they are looking for ways to mitigate that,” he said. “So it [SDN WAN] fits a compelling business need, and that is why you see so much investment in a lot of SDN startups, plus activity from established vendors such as Cisco, Riverbed, Silver Peak and Citrix.”


A world of overlays

Nuage Networks is busy extending its virtual overlays over the IP and Ethernet WAN connections within telecommunications and cloud service provider networks with its Virtualized Services Assurance Platform (VSAP), while a host of smaller vendors – including Glue Networks, CloudGenix, Viptela and Anuta Networks – are addressing the SDN WAN orchestration problem.

The big question remains whether increased adoption of SDN software platforms and SDN-enabled switches and routers will result in reduced spending on dedicated or customized equivalents featuring optimized ASICs and operating systems that Cisco, Juniper and others have been selling into data centers for as long as they have been in operation.

David Noguer Bau, head of service provider marketing at Juniper, thinks not. Rather, he expects any shortfall in sales to be offset by data center expansion, as hosting companies add more ports through 10Gbit/s or 100Gbit/s upgrades, or add higher-speed interfaces to existing switches. “If you walk into a data center today, one quarter of it will have empty spaces in which to put more servers,” he said.

“Growth does not come from ripping out and replacing old hardware in this market anyway – it is still about new hardware and the more you put in, the more you have to invest in other layers, whether top-of-rack switches, core networks or MPLS.”

This article is from the June 2015 issue of the DatacenterDynamics print edition.