It is indisputable that containers are one of the hottest tickets in open source technology, with 451 Research projecting more than 250% growth in the market from 2016 to 2020. It’s easy to see why: container technology combines speed and density with the security of traditional virtual machines, while requiring a far smaller operating system footprint to run.

However, despite the hype, there still seems to be a disconnect between interest in containers and their adoption in enterprise production.

Of course, it’s still early days in the budding enterprise container market, and similar question marks faced OpenStack on its path to market maturity and widespread revenue generation. Customers are still asking: “Can any of this container stuff actually be used securely in production in an enterprise environment?”

From virtual machines to containers

First, some background. For years, virtual machines have offered many companies a route to expanding workloads and reducing costs, but they have their limits. For example, a virtual machine host has far smaller capacity than a container host in terms of the number of applications it can pack into a single physical server. Virtual machines are also heavy on system resources: each one runs a full copy of an operating system, along with a virtual copy of all the hardware that operating system needs in order to function.

Containers offer a new form of virtualisation, providing levels of resource isolation almost equivalent to those of a traditional hypervisor, but with lower overhead: a smaller memory footprint and higher efficiency. This means higher density can be achieved – simply put, you get more out of the same hardware.

Enterprise adoption

The telco industry has been at the bleeding edge of adopting container technology. Part of the catalyst for this trend has been the NFV (network function virtualisation) revolution – telcos shifting what were traditionally welded-shut proprietary hardware appliances into virtual machines.

We certainly do see virtual machines in production at some telcos, but containers are actually a stronger fit, offering even better performance for NFV applications.

Developers in enterprise environments are aware that containers offer both higher performance for the end user and operational efficiency for the cloud administrator. However, many CIOs remain unsure that containers are the right technology for them, owing to wider market misconceptions. For example, some believe that by using one particular type of container, they will tie themselves to a specific vendor.

Security worries

Another common misconception, and a potential obstacle to enterprise adoption, concerns security. In fact, several controls are in place that enable us to say, with confidence, that an LXD container is just as secure as, for example, a VMware guest.

One of these is resource control, provided inside the Linux kernel by a technology called cgroups (control groups), originally engineered at Google in 2006. Cgroups is the fundamental kernel mechanism for grouping processes so that they can be tracked and managed as a single unit. This is essentially what a Docker or LXD container is – an illusion that the Linux kernel creates around a group of processes to make them look as though they belong together.
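
To make this concrete, the short C sketch below prints the cgroup membership of the calling process, read from /proc. Run inside an LXD or Docker container, the path it reports is the container’s cgroup rather than the host’s root group; the example output in the comment is illustrative.

    /* A minimal sketch: from the kernel's point of view, a container is
     * just a set of processes placed in the same control group. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/cgroup", "r");
        if (!f) {
            perror("fopen");
            return 1;
        }

        char line[512];
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout); /* e.g. "0::/lxc.payload.demo" inside LXD */

        fclose(f);
        return 0;
    }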

Within LXD and Docker, cgroups allow you to assign limits to parameters such as CPU, memory, disk storage or throughput, keeping any one container from taking all of the resources away from the others. From a security perspective, this is what ensures that a given container cannot mount a denial of service (DoS) attack against the containers running alongside it, thereby providing quality-of-service guarantees.
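
As a hedged illustration of what sits beneath those limits, the sketch below drives the cgroup v2 filesystem directly – essentially what LXD and Docker do on your behalf. It assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the cpu and memory controllers enabled for child groups, it must run as root, and the group name “demo” is purely illustrative.

    /* Create a control group, cap its CPU and memory, and join it. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Write a single value into a cgroup control file. */
    static int write_file(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fputs(value, f);
        return fclose(f);
    }

    int main(void)
    {
        /* Create the group; container runtimes do the same via mkdir. */
        if (mkdir("/sys/fs/cgroup/demo", 0755) != 0 && errno != EEXIST) {
            perror("mkdir");
            return 1;
        }

        /* Allow 50ms of CPU time per 100ms period: half of one core. */
        write_file("/sys/fs/cgroup/demo/cpu.max", "50000 100000");

        /* Cap memory at 256 MiB (the value is in bytes). */
        write_file("/sys/fs/cgroup/demo/memory.max", "268435456");

        /* Move this process into the group; children inherit the limits. */
        char pid[32];
        snprintf(pid, sizeof(pid), "%d", (int)getpid());
        if (write_file("/sys/fs/cgroup/demo/cgroup.procs", pid) != 0) {
            perror("cgroup.procs");
            return 1;
        }

        puts("now constrained by the demo group's CPU and memory limits");
        return 0;
    }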

The kernel also provides security to containers via AppArmor, SELinux, kernel capabilities and seccomp. These prevent, amongst other things, a process running as root in a container from having full access to the system (for instance, to hardware), and help ensure that processes cannot escape the container in which they run. Containers offer two complementary forms of access control. Firstly, discretionary access control (DAC) mediates access to resources based on user-applied policies, so that individual containers cannot interfere with each other and can be run securely by non-root users.
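
Seccomp in particular is easy to demonstrate. The hedged sketch below uses the kernel’s strict mode, the simplest form of system call filtering: after the prctl() call, only read(), write() and _exit() are permitted, and any other system call kills the process. Container runtimes install richer BPF filters, but the principle of shrinking the kernel’s attack surface is the same.

    /* Demonstrate seccomp strict mode: lock the process down to a
     * handful of system calls, then trip the filter deliberately. */
    #include <stdio.h>
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <unistd.h>

    int main(void)
    {
        printf("entering seccomp strict mode\n");
        fflush(stdout); /* flush before the lockdown */

        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("prctl");
            return 1;
        }

        /* write(2) is still on the permitted list... */
        const char msg[] = "still alive: write() is permitted\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);

        /* ...but fopen() is not: one of its underlying system calls
         * trips the filter, and the kernel kills the process. */
        fopen("/etc/hostname", "r");
        return 0; /* never reached */
    }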

Secondly, mandatory access control (MAC) ensures that neither the container code itself, nor the code run within the containers, has a greater degree of access than the process requires, so the privileges granted to a rogue or compromised process are minimised.
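
The MAC policies themselves live in AppArmor or SELinux profiles, but the kernel capabilities mentioned above apply the same least-privilege principle and can be sketched in a few lines of C. The example below, which assumes it is run with root privileges (strictly, CAP_SETPCAP), irreversibly drops CAP_SYS_ADMIN from the process’s capability bounding set, so the capability can never be regained across an execve(), even by a compromised child process.

    /* Shrink the capability bounding set as a least-privilege measure. */
    #include <stdio.h>
    #include <linux/capability.h>
    #include <sys/prctl.h>

    int main(void)
    {
        /* Drop CAP_SYS_ADMIN from the bounding set; this cannot be
         * undone and is inherited by every child process. */
        if (prctl(PR_CAPBSET_DROP, CAP_SYS_ADMIN, 0, 0, 0) != 0) {
            perror("prctl(PR_CAPBSET_DROP)");
            return 1;
        }

        /* PR_CAPBSET_READ returns 1 while the capability is present. */
        int present = prctl(PR_CAPBSET_READ, CAP_SYS_ADMIN, 0, 0, 0);
        printf("CAP_SYS_ADMIN in bounding set: %s\n",
               present == 1 ? "yes" : "no");
        return 0;
    }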

Container technology can therefore offer kernel-enforced isolation that ensures one containerised machine cannot access another. There may be situations where a virtual machine is required for particularly sensitive data, but for the most part containers deliver the security enterprises need. Canonical designed LXD from day one with security in mind.

Why containers?

Container technology has brought about a step change in virtualisation. Organisations implementing containers see considerable opportunities to improve agility, efficiency, speed and manageability within their IT environments. Containers promise to improve data centre efficiency and performance without additional investment in hardware or infrastructure.

For Linux-on-Linux workloads, for JaaS (Juju-as-a-Service) and PaaS (Platform-as-a-Service) cloud services, and for anyone simply looking to run a container as if it were a physical or virtual machine, containers offer a faster, more efficient and more cost-effective way to build an infrastructure. Companies using these technologies can take advantage of brand-new code, written using modern advances in technology and development discipline. We see many start-ups adopting container technology as they build from scratch, but established companies can also take advantage of it, with Canonical’s LXD improving the performance and density of traditional workloads.

Dustin Kirkland is responsible for Ubuntu products and strategy at Canonical