Enterprises are rapidly moving their data centers from fixed on-premises facilities to more scalable, virtualized, hybrid-cloud infrastructures. Their security specialists are trying to stay ahead of that trend, searching for solutions to protect mission-critical applications and workloads running in these dynamic, heterogeneous environments.

Traditional network-based boundary security is no longer effective when the boundaries keep shifting. Attackers are breaching perimeter defenses seemingly at will, judging from what feels like daily news accounts. Once inside, they blend into east-west traffic, spreading laterally and looking for vulnerabilities. Unguarded applications, spanning a variety of bare metal servers, VMs and containers, are ripe targets, collectively comprising a huge attack surface.

Turning to Micro-Segmentation

Increasingly, security experts and analysts cite micro-segmentation as a best-practice solution for securing data center assets and implementing a “zero trust” security model. Micro-segmentation involves setting granular security policies around individual or logically grouped applications. Those policies dictate which applications can and cannot communicate with each other; any unauthorized communication attempt is not only blocked, but also triggers an alert that an intruder may be present.
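
To make that model concrete, here is a minimal, vendor-neutral sketch of the "default deny" behavior in Python, using hypothetical workload names. Real micro-segmentation products enforce this at the host, hypervisor or network layer rather than in application code; the point is simply that anything not explicitly allowed is both blocked and alerted on.

```python
# Minimal, vendor-neutral sketch of an allow-list ("default deny") policy check.
# Workload names and flows are hypothetical, not any particular product's API.

ALLOWED_FLOWS = {
    ("web-frontend", "orders-api"),   # the frontend may call the orders service
    ("orders-api", "orders-db"),      # the orders service may reach its database
}

def alert(message: str) -> None:
    print(f"[ALERT] {message}")       # stand-in for a SIEM or notification integration

def evaluate_flow(source: str, destination: str) -> str:
    """Return the action for an observed east-west connection attempt."""
    if (source, destination) in ALLOWED_FLOWS:
        return "allow"
    # Anything not explicitly allowed is blocked *and* surfaced as an alert,
    # since it may indicate lateral movement by an intruder.
    alert(f"Unauthorized flow blocked: {source} -> {destination}")
    return "block"

print(evaluate_flow("web-frontend", "orders-api"))   # allow
print(evaluate_flow("web-frontend", "orders-db"))    # block + alert
```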

Technology analyst firm Gartner has identified micro-segmentation as a top ten priority security project, particularly for organizations “that want visibility and control of traffic flows within data centers,” further noting that “the goal is to thwart the lateral spread of data center attacks.”

In view of all the attention micro-segmentation has received, why has it not been more widely adopted? A few misconceptions make security officers hesitant to jump in. One is that it is only for large enterprises that can dedicate armies of security professionals to implementing and managing a micro-segmentation project. Another is that it is an “all or nothing” proposition that requires every last asset to be secured in a single, massive project, a near impossible task in a DevOps environment of continuous application deployment.

It’s important to cast these myths aside, and to take some lessons from enterprises that have successfully incorporated micro-segmentation into their IT operations. These organizations have taken a phased approach, initially focusing on a few manageable projects with easily defined objectives. Common challenges that can be solved through micro-segmentation include:

  • Compliance. A key driver of micro-segmentation: regulatory standards such as SWIFT, PCI, GDPR and HIPAA frequently specify that certain processes must be separated from general network traffic.
  • DevOps. Applications in development, testing or quality assurance environments need to be separated from those in the production environment.
  • Restricted access. Limiting which data center assets or services outside users or Internet of Things devices can reach.
  • Sensitive systems. Separating systems that run highly sensitive equipment (for example, medical devices in hospitals) from general enterprise systems.
  • Ring-fencing. Separating the most critical applications from less critical ones.

By establishing a hierarchy of priorities and starting small, you can score some “quick wins” and start seeing tangible results in fairly short order.

Essential Attributes of a Micro-Segmentation Solution

For micro-segmentation to be both effective and practical to manage, it needs to meet certain basic requirements. These include:

Process-level visibility: Lack of visibility is usually the first stumbling block organizations run into – they can’t see everything that’s running in their data centers. Gaining total visibility is the essential prerequisite in order to identify logical groupings of applications for segmentation.
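
As a rough, single-host illustration of what process-level (rather than purely IP-level) visibility means, the sketch below uses the psutil library to list which local processes currently hold network connections. A data-center-wide solution aggregates this kind of telemetry continuously across every server; the snippet may also require elevated privileges on some platforms.

```python
# Rough, single-host illustration of process-level visibility using psutil
# (pip install psutil). May need elevated privileges on some operating systems.
import psutil

def connection_snapshot():
    """Yield (process name, local address, remote address) for active connections."""
    for conn in psutil.net_connections(kind="inet"):
        if conn.pid is None or not conn.raddr:
            continue  # skip listening sockets and connections with no known owner
        try:
            name = psutil.Process(conn.pid).name()
        except psutil.NoSuchProcess:
            continue  # the process exited between the two calls
        yield name, f"{conn.laddr.ip}:{conn.laddr.port}", f"{conn.raddr.ip}:{conn.raddr.port}"

for process, local, remote in connection_snapshot():
    print(f"{process:<20} {local:<22} -> {remote}")
```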

Platform-agnostic performance: As applications migrate among heterogeneous environments, policies governing their communications must be able to follow them and protect them wherever they go.

Labeling: The ability to properly classify or label assets in preparation for monitoring and policy creation is foundational. To take advantage of auto-scaling in dynamic environments, consider labeling methodologies that apply labels automatically as workloads scale up or down.
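
One common approach, sketched below with assumed naming and image conventions, is to derive labels from workload metadata at discovery time, so that instances created by auto-scaling pick up the right labels without manual tagging.

```python
# Hypothetical sketch of rule-based labeling: labels are derived from workload
# metadata, so instances created by auto-scaling are labeled automatically.

def derive_labels(workload: dict) -> dict:
    """Assign environment and role labels from naming and image conventions (assumed)."""
    labels = {}
    labels["env"] = "production" if workload["name"].startswith("prod-") else "non-production"
    if "postgres" in workload["image"]:
        labels["role"] = "database"
    elif "nginx" in workload["image"]:
        labels["role"] = "web"
    else:
        labels["role"] = "app"
    return labels

# A newly scaled-up instance is labeled the moment it is discovered.
new_instance = {"name": "prod-orders-api-7f9c", "image": "registry.local/nginx:1.25"}
print(derive_labels(new_instance))   # {'env': 'production', 'role': 'web'}
```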

Flexible policy creation: Operators should be able to create custom hierarchies for easy compound rule creation, understanding that different stakeholders will want to organize and create rules differently.

Automation: The solution should also allow much of the process of policy creation, modification and management to be automated, so that as new workloads are deployed, they are automatically allocated into the appropriate micro-segments and policies.
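
The hypothetical sketch below ties these last two attributes together: rules are expressed as selectors over labels rather than over individual workloads or addresses, so a broad rule can be written at the environment level and a narrower one at the application tier, and a workload deployed later with matching labels falls under the right policies with no rule edits.

```python
# Hypothetical sketch of label-selector rules. Rules match on labels, not on
# individual workloads or IPs, so policy follows workloads automatically.

def matches(selector: dict, labels: dict) -> bool:
    """A workload matches a selector when every selector key/value appears in its labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# Compound rules at different levels of the hierarchy: one broad environment-level
# rule, one narrow application-tier rule.
RULES = [
    {"name": "env-separation", "action": "block",
     "source": {"env": "non-production"}, "destination": {"env": "production"}},
    {"name": "orders-db-access", "action": "allow",
     "source": {"app": "orders", "role": "api"},
     "destination": {"app": "orders", "role": "database"}},
]

def applicable_rules(src_labels: dict, dst_labels: dict) -> list:
    return [r["name"] for r in RULES
            if matches(r["source"], src_labels) and matches(r["destination"], dst_labels)]

# A workload deployed later with these labels needs no rule changes at all:
new_api_instance = {"env": "production", "app": "orders", "role": "api"}
orders_db = {"env": "production", "app": "orders", "role": "database"}
print(applicable_rules(new_api_instance, orders_db))   # ['orders-db-access']
```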

The Implementation Process

Implementation of micro-segmentation can generally be broken down into six phases:

  • Discovery and identification: Find and identify all the applications running in the data center. Process-level visibility is critical here.
  • Dependency mapping: Figure out which applications need to be able to communicate with each other. This process can be greatly accelerated with the aid of graphic visualization and mapping tools.
  • Grouping of applications for rules: With an understanding of application dependencies, start putting applications into logical groups for the creation of security policies. Avoid over-segmenting (having too many discrete groupings) or under-segmenting (creating groups so broad that policies will lack precision). A simplified sketch of these two steps appears after this list.
  • Policy creation: Once the logical groupings have been defined, policies or rules can be created, tested and refined for each defined group.
  • Deployment: Put the policies into effect.
  • Monitoring and enforcement: The solution should be able to monitor every port and all east-west traffic for anomalies. Policy violations should automatically trigger enforcement mechanisms for threat blocking and containment.
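
As a simplified illustration of the dependency-mapping and grouping steps, the sketch below uses made-up flow records and labels: it collapses workload-to-workload flows into group-to-group dependencies, which is the granularity at which allow rules are ultimately written. Real tools build this map from continuously collected east-west telemetry rather than a static list.

```python
# Simplified sketch of dependency mapping and grouping, using made-up flow records.
from collections import defaultdict

# Observed flows: (source workload, destination workload)
observed_flows = [
    ("web-frontend-1", "orders-api-1"),
    ("web-frontend-2", "orders-api-1"),
    ("orders-api-1", "orders-db-1"),
    ("orders-api-1", "payments-api-1"),
]

# Group labels assigned during the discovery and labeling phases (hypothetical).
workload_group = {
    "web-frontend-1": "web", "web-frontend-2": "web",
    "orders-api-1": "orders", "orders-db-1": "orders-db",
    "payments-api-1": "payments",
}

# Collapse workload-level flows into group-level dependencies; these group-to-group
# edges become the candidate allow rules for the policy-creation phase.
group_dependencies = defaultdict(set)
for src, dst in observed_flows:
    group_dependencies[workload_group[src]].add(workload_group[dst])

for src_group, dst_groups in group_dependencies.items():
    for dst_group in sorted(dst_groups):
        print(f"allow {src_group} -> {dst_group}")
```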

Implementing micro-segmentation is not a small decision and it does take an organizational commitment. However, by taking a phased, hierarchical approach with specific near-term goals, you can start seeing value on key priorities immediately, and the learning curve will flatten out quickly as users gain experience with the process.

Above all, your organization can reap the benefits of cloud-enabled business agility and efficiency with confidence that the risk of compromise is dramatically reduced.