Each container is an executable package of software running on top of a host OS. A single host may support many containers (tens, hundreds or even thousands) concurrently, as in the case of a complex micro-service architecture that uses numerous containerized ADCs. This setup works because every container runs as a minimal, resource-isolated process that other containers cannot access.

Think of a containerized application as the top layer of a multi-tier cake:

  1. At the bottom, there is the hardware of the infrastructure in question, including its CPU(s), disk storage and network interfaces.
  2. Above that, there is the host OS and its kernel – the latter serves as a bridge between the software of the OS and the hardware of the underlying system.
  3. The container engine and its minimal guest OS, which are particular to the containerization technology being used, sit atop the host OS.
  4. At the very top are the binaries and libraries (bins/libs) for each application and the apps themselves, running in their isolated user spaces (containers).

Cgroups and LXC

Containerization as we know it evolved from cgroups (control groups), a Linux kernel feature for isolating and controlling resource usage – for example, how much CPU and RAM, and how many threads, a given process can access. Cgroups grew into Linux containers (LXC), which added more advanced features for namespace isolation of components such as routing tables and file systems. An LXC container can do things such as:

  • Mount a file system
  • Run commands as root
  • Obtain an IP address

It performs these actions in its own private user space. While it includes the special bins/libs for each application, an LXC container does not package up the OS kernel or any hardware, meaning it is very lightweight and can be run in large numbers even on relatively limited machines.
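Real cgroup limits require a Linux host and root privileges, but the underlying idea – the kernel refusing a process resources beyond a configured cap – can be sketched without privileges using POSIX rlimits, a per-process cousin of cgroups. The following Python sketch lowers this process's open-file limit and shows the kernel enforcing it (Unix-only; the cap of 32 is an arbitrary illustrative value):

```python
import resource  # Unix-only; exposes POSIX per-process resource limits

# Lower this process's open-file cap, analogous to a cgroup capping
# CPU, RAM or thread usage for a container.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (32, hard))  # cap at 32 fds

opened, hit_limit = [], False
try:
    while True:
        opened.append(open("/dev/null"))  # eventually refused by the kernel
except OSError:
    hit_limit = True  # EMFILE: the cap was enforced
finally:
    for f in opened:
        f.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))  # restore

print(f"kernel enforced the cap after {len(opened)} extra open files")
```

Cgroups apply the same kind of kernel-enforced accounting to whole groups of processes, which is what lets a container engine fence each container off from its neighbours.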

Key benefits of containerization

1.     More agile, DevOps-oriented software development

Compared to VMs, containers are simpler to set up, whether a team is using a UNIX-like OS or Windows. The necessary developer tools are universal and easy to use, allowing for the quick development, packaging and deployment of containerized applications across OSes.

2.     Less overhead and lower costs than VMs

A container doesn’t require a full guest OS or a hypervisor. That reduced overhead translates into faster boot times, smaller memory footprints and generally better performance. It also helps trim costs: organizations can cut some of the server and licensing spend that would otherwise have gone toward supporting a heavier deployment of multiple VMs.

3.     Excellent portability across digital workspaces

Containers make the ideal of “write once, run anywhere” a reality. Each container has been abstracted from the host OS and will run the same in any location. As such, it can be written for one host environment and then ported and deployed to another, as long as the new host supports the container technologies and OSes in question.
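The "write once, run anywhere" idea can be made concrete with a minimal Dockerfile sketch. The base image, paths and commands below are illustrative assumptions, not taken from the original; the point is that the same file builds the same container on any host with a compatible engine:

```dockerfile
# Pinning a public base image means every host builds from the
# same starting point; only the kernel is shared with the host.
FROM python:3.12-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# The image carries its own bins/libs, so this command behaves the
# same wherever the container runs.
CMD ["python", "app.py"]
```

Because everything above the kernel is packaged inside the image, moving the workload between hosts is a matter of shipping the image, not reinstalling dependencies.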

4.     Fault isolation for applications and micro-services

If one container fails, others sharing the OS kernel are not affected, thanks to the user space isolation between them. That benefits micro-services-based applications, in which potentially many different components support a larger program.

5.     Easier management through orchestration

Container orchestration via a solution such as Kubernetes makes it practical to manage containerized apps and services at scale. Using Kubernetes, it’s possible to automate rollouts and rollbacks, orchestrate storage systems, perform load balancing and restart failing containers.
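Several of those orchestration features can be seen in a single, minimal Kubernetes Deployment manifest. The names, image and probe endpoint below are hypothetical examples for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                        # hypothetical name
spec:
  replicas: 3                           # Kubernetes keeps three copies running
  strategy:
    type: RollingUpdate                 # automated rollouts, reversible via rollback
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: example.com/demo-app:1.0 # hypothetical image
        livenessProbe:                  # failing containers are restarted
          httpGet:
            path: /healthz
            port: 8080
```

Applied with `kubectl apply -f`, this one file declares the desired state; Kubernetes continuously reconciles the cluster toward it, replacing any container that dies or fails its liveness probe.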

 

Piyush Jaiswal, Solution Architect, Messaging Solutions, Comviva