Guy Churchward, president, data protection and availability, EMC
I struggle with the catch-all that the software-defined data center (SDDC) has become, because when I talk to people about SDDC it ranges from “I have a hardware version and can virtualize it” through to “I have a purely fluid multitenant environment that gives me flexibility”. To me, when you get to an SDDC, it is about having the ability to drive redundancy, compliance, applications and provisioning through an interface and have that full fluidity.

When someone has really embraced SDDC, it is about that simplification, the access points and everything that is abstracted. It is lightning-fast provisioning and having everything in a model that allows you not to worry about where anything is. There are a few companies we are working with today that do predictive security out of data protection or backup software using our metadata. This is the sort of area fluidity can take you into.

I did a blog about what data is going to look like in the next ten years, and it is going to be driven by applications. So what should the future app look like? It will need to be agile and fluid.

We will also have small data sprawl. Apps will be about billions upon billions of small data sets that move quickly, and that means you need a hyperscale data center that won’t necessarily be under the control of company ‘X’ operating out of just one location. It will be a mesh. For this you will need a mesh of storage, for example, and it will need a different control plane that spans different storage systems and different data centers.

Craig Huitema, director of systems engineering, Cisco
Software-defined networking (SDN) has gone through a huge hype cycle over the last couple of years. We are now starting to get to real deployments. We see a lot of customers looking at our Nexus 9000 Series Switch (the foundation of Cisco’s SDN play called Application Centric Infrastructure – ACI).

They are adopting it rapidly because it fits their existing environment and network designs and addresses some basic critical needs, whether they are moving from 1G to 10G or adopting new technologies like VXLAN (Virtual Extensible LAN, a network overlay technology). This is where they start deploying ACI.

The early use cases we see are around bringing automation to the network – physical and virtual – making it faster to deploy apps and services, whether they are security policies or policies around load balancing.

Today it is really about eliminating pain points and bringing the physical and virtual together under one operational model. We also see some customers doing simulations to understand how apps are behaving on infrastructure so they can optimize them.

Mike Smart, cyber strategist, Symantec
When we talk about the modern agile data center – the SDDC, where you are seeing the abstraction between the physical and virtual – it becomes a challenge because traditional security mechanisms are not always effective for this environment. There is a whole market now that says we can have security embedded in the virtualization layer but you also have to look at preventing an attack before it happens. People are now realizing their data center has changed a lot and wondering if they can use antivirus at all.

What Symantec and other organizations have developed is a way of plugging into the virtualization layer to speed up that process. Instead of putting security on every single image, you put it in once on a physical device and protect all those machines – an approach that integrates with things like VMware’s vSphere or NSX (its network virtualization and security platform). This is still, however, looking for the ‘bad stuff’.

There is another approach. What if we said virtual machines and images are fixed-function by definition? Let’s say a mail server. If it is fixed-function then it is easier to say this mail server only needs to send and receive mail. So we can take away everything we don’t need on that server and only allow it to do the things that mail server should be doing – and nothing else. This is what I call a positive approach – whitelisting. By physically or virtually locking down the device, it can protect itself. The good thing is this does not need to be updated, it does not need to be scanned all the time for known ‘bad things’ and it has a very low footprint. In future it won’t necessarily be a security person switching on security, so you need guidance and off-the-shelf configurations.
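A minimal Python sketch of that default-deny idea is below; the executables and ports are hypothetical placeholders for what a fixed-function mail server might be permitted to do, not a real product configuration. Anything observed outside the allowlist is flagged rather than matched against a signature database.

```python
# Illustrative sketch of the "positive" whitelisting model described above.
# The allowlist and observed activity are invented placeholders.

# Everything a locked-down mail server is allowed to run or listen on.
ALLOWED_EXECUTABLES = {"/usr/sbin/postfix", "/usr/sbin/dovecot", "/usr/sbin/sshd"}
ALLOWED_LISTEN_PORTS = {25, 587, 993, 22}

def review_activity(running_executables, listening_ports):
    """Return anything observed on the host that falls outside the allowlist.

    With a default-deny policy there is nothing to update or scan for:
    whatever is not explicitly permitted is flagged (or blocked).
    """
    rogue_processes = set(running_executables) - ALLOWED_EXECUTABLES
    rogue_ports = set(listening_ports) - ALLOWED_LISTEN_PORTS
    return rogue_processes, rogue_ports

if __name__ == "__main__":
    # Hypothetical snapshot of what the virtual machine is doing right now.
    observed_procs = ["/usr/sbin/postfix", "/tmp/cryptominer"]
    observed_ports = [25, 4444]
    procs, ports = review_activity(observed_procs, observed_ports)
    print("Processes outside the allowlist:", procs)  # {'/tmp/cryptominer'}
    print("Ports outside the allowlist:", ports)      # {4444}
```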

Ron Irving, global portfolio consulting director for mobility and networking, HP
[There is] intense pressure on IT to enable rapid rollout of new services, for new consumers, in new locations — IT agility. So how have our data center colleagues in servers and storage responded? If you look at what they have done over the past decade, you see:

Phase 1: Dedicated infrastructure - Each application required its own infrastructure with little sharing. Efficiency was poor — typical server utilization ran about 10% — so IT overprovisioned to meet unanticipated needs. Worse, new applications or major changes required that IT acquire and deploy additional infrastructure. IT responds in months.

Phase 2: Virtualization - Virtualization let multiple applications share the same infrastructure and greatly increased server and storage utilization. That was a benefit, but it wasn’t the main one. What virtualization did was allow administrators to dynamically provision new servers and storage in response to business needs. IT responds in days.

Phase 3: Application awareness - This is what cloud has brought us. APIs enable applications to interact with infrastructure to request the server and storage resources they require, in the locations and volumes they require them. IT responds in minutes.

Networking is stuck in Phase 1. The network is brittle—we acquire and deploy network infrastructure in response to new programs and applications. We overprovision to account for unanticipated volumes. Just as virtualization and application awareness enabled server and storage technology to step up to business demands, they will enable networks to do that as well.

SDN is virtualized networking that can make networks application-aware and able to respond to new demands in minutes.
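To make that concrete, here is a hedged Python sketch of the network analogue of Phase 3: an application (or an orchestration tool acting for it) asking an SDN controller over a REST API to provision connectivity. The controller URL, endpoint and payload fields are hypothetical illustrations, not any particular vendor’s interface.

```python
# Hypothetical sketch: an application requesting network resources from an
# SDN controller through a REST API, instead of waiting for manual changes.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # hypothetical address

def request_app_network(app_name, segments, bandwidth_mbps, permit_tcp_ports):
    """Ask the controller to carve out connectivity and policy for a new app."""
    payload = {
        "application": app_name,
        "segments": segments,                  # isolated network segments to create
        "bandwidth_mbps": bandwidth_mbps,      # bandwidth guarantee per segment
        "permit_tcp_ports": permit_tcp_ports,  # security policy applied up front
    }
    response = requests.post(f"{CONTROLLER}/network-requests", json=payload, timeout=30)
    response.raise_for_status()
    return response.json()  # e.g. assigned segment IDs and policy handles

if __name__ == "__main__":
    print(request_app_network("order-processing", segments=2,
                              bandwidth_mbps=500, permit_tcp_ports=[443, 8443]))
```

The specific fields matter less than the workflow: the request is made programmatically, in minutes, rather than by raising a ticket and waiting for switches to be recabled and reconfigured.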

The rest of the business is waiting for us. But standing where we are and looking at where we need to be can feel like standing at the edge of a huge chasm trying to view the other side.

Ramesh Menon, chief architect and cloud technology leader for the federal sector, IBM Software
Businesses are increasingly turning to private and hybrid cloud solutions to reduce costs and enable more flexible and scalable business processes. These cloud-enabled infrastructures involve complex management and operational challenges. A new generation of software-defined environments makes it possible to deliver common cloud services for compute, storage and network while supporting multiple hypervisors and multi-vendor platforms. This approach is one of the most dynamic innovations in private and hybrid clouds, bringing advanced automation and orchestration for both systems management and software distribution. Moreover, it provides self-service provisioning and an accountability mechanism that allows IT to keep track of who is doing what.

Many federal agencies are adopting this strategy, which can address the need to both optimize and innovate.
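As a loose illustration of the self-service provisioning and accountability mechanism described above, the Python sketch below records the requester’s identity alongside every provisioning request before handing off to the underlying services. The function names, fields and file path are invented for illustration, not drawn from any IBM product.

```python
# Hypothetical sketch: self-service provisioning with an audit trail so IT
# can keep track of who is doing what. All names and fields are illustrative.
import json
from datetime import datetime, timezone

AUDIT_LOG = "provisioning-audit.jsonl"  # placeholder location for the trail

def provision(requester, resource_type, size, environment):
    """Record who asked for what, then hand off to the actual provisioner."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "requester": requester,          # e.g. a federated user identity
        "resource_type": resource_type,  # "vm", "volume", "network", ...
        "size": size,
        "environment": environment,      # "private" or "hybrid"
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
    # ...the call to the underlying compute/storage/network service goes here...
    return record

if __name__ == "__main__":
    provision("alice@example.gov", "vm", "4 vCPU / 16 GB", "private")
```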

Jayshree Ullal, CEO, Arista
The common view is that SDN is a controller or a set of network management products based on virtualization technologies or OpenFlow. At Arista we have a more pragmatic view. To us, SDN is a programmatic suite of open interfaces that allows applications to drive networking actions. Unlike the misconception that SDN is just a controller, I believe SDN is about scaling the control, management and data plane with programmatic and open interfaces.

This means customizing the network with high-level scripting and programmatic languages, structured and machine-readable APIs, standards-based protocols and interoperability with controller-friendly networks.
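As a small example of what those structured, machine-readable APIs look like in practice, here is a Python sketch of sending a command to a switch over a JSON-RPC interface in the style of Arista EOS eAPI. The host, credentials and exact field names are illustrative and should be checked against the eAPI documentation before use.

```python
# Sketch of driving a switch through a JSON-RPC style command API
# (in the style of Arista EOS eAPI); details here are illustrative.
import requests

SWITCH_URL = "https://switch1.example.com/command-api"  # hypothetical switch
AUTH = ("admin", "password")                            # placeholder credentials

def run_commands(commands):
    """Run CLI commands on the switch and get structured JSON back."""
    payload = {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": commands, "format": "json"},
        "id": "1",
    }
    # verify=False keeps the sketch self-contained; use real certificates in practice.
    response = requests.post(SWITCH_URL, json=payload, auth=AUTH,
                             verify=False, timeout=30)
    response.raise_for_status()
    return response.json()["result"]

if __name__ == "__main__":
    result = run_commands(["show version"])
    print(result[0])  # structured data an application can act on directly
```

Because the reply comes back as structured data rather than CLI screen text, an application can act on it directly – which is the sense in which applications ‘drive networking actions’.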

In 2014, we are witnessing the deployment of SDN via Arista EOS and associated programmable network applications such as Advanced Telemetry, OpenWorkload and Smart System Upgrade (SSU). The hype is indeed settling down to a few meaningful use cases and the reality is that SDN can become a $2B market in three to five years. The realization has hit that SDN cannot be deployed in isolation and that it must be built in hybrid configurations, co-existing with open IP fabrics.

Brad Casemore, research director for datacenter networks, IDC
Cloud service providers and a growing number of large enterprises understand that the right blend of network infrastructure and operational processes, properly automated and orchestrated, can deliver substantive benefits.

In a recent survey of cloud service providers and enterprises, IDC found that respondents considering or implementing SDN cited the need for the network to possess greater agility and support virtualized applications and cloud as a primary motivation. Other primary motivations related to network support for new applications and services, better network programmability for operational efficiency and faster provisioning of infrastructure to support application workloads.

The same survey discovered that those who have deployed SDN for the reasons just cited are already deriving real business benefits. Nearly 46% of cloud providers and enterprises that have deployed SDN indicated they have realized OPEX savings of 10 to 20%, and another 37% indicated savings exceeding 30%. Those considering SDN deployments anticipate similar efficiency gains and business benefits.

For these reasons, we are stressing the symbiotic relationship that exists between cloud and SDN. While virtualization and cloud clearly were precursors to SDN, the latter is now providing a network alternative and infrastructure foundation for the growth and prosperity of cloud business models.

Erik Giesa, SVP of marketing and business development, ExtraHop Networks
With the data center moving to virtualization and SDN, the traditional approaches to managing such complex, dynamic environments no longer apply or work.

The traditional view of network performance management – looking at packets, bit rates and network congestion – no longer helps identify problems and address them before big issues arise.

An app in the data center used to be defined as comprising all of those elements, and if one link in that application delivery chain failed, the end user would be impacted. That is why we now need operational intelligence: comprehensive visibility not just of performance but of how things are working right now, plus the ability to collect data so you can make better business decisions – for example around how you implement capacity or SDN – while ensuring any changes don’t impact the end-user experience.

This article first appeared in FOCUS Issue 36.