LXC (LinuX Containers) is an OS-level virtualization technology that allows the creation and running of multiple isolated Linux virtual environments (VEs) on a single control host. These isolated containers can be used either to sandbox specific applications or to emulate an entirely new host. LXC relies on the Linux kernel's cgroups functionality, introduced in version 2.6.24, to limit and account for resource usage (CPU, memory, disk I/O, and so on), and on kernel namespaces to give each container its own isolated view of the system (process IDs, networking, mounts, and more). Note that a VE is distinct from a virtual machine (VM), as we will see below.
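To make that isolation concrete, here is a minimal Python sketch (Linux only, no container libraries assumed) that inspects the namespaces and cgroup membership of the current process by reading the /proc filesystem; run inside a container, these values differ from the host's.

```python
import os

# Each entry in /proc/self/ns is a symlink that names a namespace type
# (pid, net, mnt, uts, ...) and the inode identifying it; two processes
# share a namespace only when these inode numbers match.
for ns in sorted(os.listdir("/proc/self/ns")):
    print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))

# /proc/self/cgroup lists the cgroup hierarchies this process belongs to,
# which is how the kernel accounts for and limits its resource usage.
with open("/proc/self/cgroup") as f:
    print(f.read())
```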

Docker began as a side project at dotCloud (a company that later renamed itself Docker, Inc.) and was only open-sourced in 2013. It is essentially an extension of LXC's capabilities, achieved through a high-level API that provides a lightweight virtualization solution for running processes in isolation. Docker is developed in the Go language and utilizes LXC, cgroups, and the Linux kernel itself. Because it builds on LXC, a Docker container does not include a separate operating system; instead it relies on the kernel functionality provided by the underlying host. Docker therefore acts as a portable container engine, packaging an application and all of its dependencies into a virtual container that can run on any Linux server.

VE vs. VM

So what, one may ask, is the difference between these VEs and a traditional VM? The main difference is that a VE has no preloaded emulation or hypervisor software the way a VM does. In a VE, the application (or OS) is spawned in a container and runs with no added overhead beyond a usually minuscule VE initialization process. There is no hardware emulation, so aside from a small memory overhead, LXC delivers near bare-metal performance because it packages only the applications it needs. And the OS itself is just another application that can be packaged too. Contrast this with a VM, which packages the entire OS and machine setup, including hard drives, virtual processors, and network interfaces. The resulting bloated mass usually takes a long time to boot and consumes a lot of CPU and RAM.

Advantage: VE. So why haven't VMs already gone the way of the dinosaur? The problem with VEs is that, up to now at least, they could not be neatly packaged into ready-made, quickly deployable machines – think of the flexibility and time savings offered by Amazon's myriad AWS machine configurations. This also means they cannot be easily managed via GUI management consoles, and they lack other conveniences of VMs such as IaaS setups and live migration.

So the VE crowd is not unlike the overclockers and modders of the CPU and computer hardware world – they extract more utility from standard, off-the-shelf machines. But doing so calls for advanced technical skills, and the result is a highly customized machine that is not guaranteed to interoperate with others. And if you don't know what you're doing, you can royally mess up your machine.

What They Do

Think of LXC as a supercharged chroot on Linux. It lets you isolate not only individual applications but an entire OS. Its helper scripts focus on creating containers as lightweight machines – basically servers that boot faster and need less RAM. There are two user-space implementations of containers, each exploiting the same kernel features:

  • Libvirt, which allows the use of containers through the LXC driver by connecting to 'lxc:///'. This can be very convenient, as it supports the same usage as libvirt's other drivers (see the sketch after this list).
  • Another implementation, called simply 'LXC', which is not compatible with libvirt but is more flexible, with more userspace tools. It is possible to switch between the two, though there are peculiarities that can cause confusion.
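As a rough illustration of the libvirt route, the sketch below uses the libvirt Python bindings (the libvirt-python package) to connect to the LXC driver and list any containers it already knows about. It assumes the driver is installed and containers have been defined; depending on your libvirt version, the URI may need to be 'lxc:///system'.

```python
import libvirt  # provided by the libvirt-python package

# Connect to the libvirt LXC driver; the URI style is the same one used
# for libvirt's other drivers (e.g. qemu:/// for KVM), just a different scheme.
conn = libvirt.open("lxc:///")

# List the containers (libvirt "domains") this driver knows about.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
    print(f"{dom.name()}: {status}")

conn.close()
```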

Docker, on the other hand, can do much more than this, offering the following capabilities:

  • Portable deployment across machines: you can use Docker to create a single object containing all your bundled applications. This object can then be transferred and quickly installed onto any other Docker-enabled Linux host.
  • Versioning: Docker includes git-like capabilities for tracking successive versions of a container, inspecting the diff between versions, committing new versions, rolling back etc.
  • Component reuse: Docker allows the building and stacking of already created packages. For instance, if you need to create several machines that all require Apache and a MySQL database, you can create a ‘base image’ containing these two items, then build new machines with these items already installed.
  • Shared image libraries: A public registry (http://index.docker.io/) already hosts thousands of useful containers uploaded by other users. Again, think of AWS’s common pool of different configurations and distros – this is very similar.

For a great list of Docker’s capabilities, see this thread on Stack Overflow: https://stackoverflow.com/questions/17989306/what-does-docker-add-to-just-plain-lxc
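To make the portability, versioning, and component-reuse points above concrete, here is a hedged sketch using the Docker SDK for Python (the docker package); the image change and the repository name are placeholders, not a recommended workflow.

```python
import docker  # Docker SDK for Python ("docker" on PyPI)

client = docker.from_env()  # talk to the local Docker daemon

# Portable deployment: pull a base image from the public registry.
client.images.pull("ubuntu", tag="22.04")

# Component reuse / versioning: run a container, change it, and commit the
# result as a new tagged image that any other Docker host can then pull.
container = client.containers.run(
    "ubuntu:22.04",
    command="touch /opt/provisioned",  # placeholder for installing Apache, MySQL, etc.
    detach=True,
)
container.wait()
container.commit(repository="myorg/base-image", tag="v1")  # hypothetical name
container.remove()

print([image.tags for image in client.images.list(name="myorg/base-image")])
```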

Popularity

If popularity were the only criterion for deciding between these two containerization technologies, Docker would handily beat LXC and its companion manager, LXD. It’s easy to see why, with Docker taking the DevOps world by storm since its launch back in 2013. Docker’s popularity is not an event in isolation, however; the application containerization that Docker champions happens to be a model that tech giants such as Google, Netflix, Twitter, and other web-scale companies have gravitated to for its scaling advantages.

LXC, while older, has not been as popular with developers as Docker has proven to be. This is partly due to the difference in use cases the two technologies focus on: LXC is aimed at sysadmins, much like Solaris Zones on the Solaris operating system, OpenVZ on Linux, and BSD Jails on FreeBSD. These solutions provide OS containers for a whole system, typically by providing a different root filesystem and creating environments that are isolated from each other and can’t share state.

Docker went after a different target market, developers, and sought to take containers beyond the OS level to the more granular world of the application itself. While it started out being built on top of LXC, Docker later moved beyond LXC containers to its own execution environment called libcontainer. Unlike LXC, which launches an operating system init for each container, Docker provides one OS environment, supplied by the Docker Engine, and enables developers to easily run applications that live in their own application environment, as specified by a Docker image. Just as with LXC, these images can be shared among developers, with a Dockerfile, in Docker’s case, automating the sequence of commands for building an image.
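Building an image from a Dockerfile can also be scripted with the same Python SDK used above; the directory path and tag below are hypothetical, and the sketch assumes ./myapp contains a valid Dockerfile.

```python
import docker

client = docker.from_env()

# Build an image from the Dockerfile in ./myapp (hypothetical path) and tag it.
image, build_logs = client.images.build(path="./myapp", tag="myorg/myapp:1.0")

# Run the freshly built image and print whatever it writes to stdout.
output = client.containers.run(image, remove=True)
print(output.decode())
```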

The Docker user base is large and continues to grow, with ZDNet estimating the number of containerized applications at more than 3.5 million, and billions of them having been downloaded using Docker. Linux powerhouses such as Red Hat and Canonical, the backers of Ubuntu, are firmly on the Docker bandwagon, as are even bigger tech companies like Oracle and Microsoft. With such adoption, Docker is likely to continue outstripping LXC in popularity, though system containers like LXC still have their place in virtualizing traditional applications that are difficult to port to the microservice architecture that’s popular these days.

Tooling and CLI 

LXC tooling sticks close to what system administrators running bare-metal servers are used to, with direct SSH access allowing you to reuse the automation scripts your team may have written for bare metal or for VMs running on VirtualBox and other virtualized production environments. This makes migrating an application from a Linux server to LXC containers fairly seamless, provided you are not using a containerization solution already. The LXC command line provides the essential commands that cover routine management tasks, including creating, launching, and deleting LXC containers. LXD images can be obtained from the built-in image remotes, by supplying your own LXD remote, or by manually importing a Linux image from a tarball. Once you’ve created and launched a container from an image, you can run Linux commands inside it.
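As a rough sketch of those routine tasks, the example below uses the pylxd client library to drive LXD’s REST API from Python; it assumes a locally initialized LXD daemon, and the container name and image alias are placeholders.

```python
from pylxd import Client  # pylxd talks to the local LXD daemon's REST API

client = Client()

# Create a container from an image on a public remote (the alias is a
# placeholder; use one that actually exists on the remote you point at).
config = {
    "name": "demo-container",
    "source": {
        "type": "image",
        "protocol": "simplestreams",
        "server": "https://images.linuxcontainers.org",
        "alias": "ubuntu/22.04",
    },
}
container = client.containers.create(config, wait=True)

# Launch it and run a command inside, much as you would over SSH.
container.start(wait=True)
exit_code, stdout, stderr = container.execute(["uname", "-a"])
print(stdout)

# Routine management: stop and delete the container when you're done.
container.stop(wait=True)
container.delete(wait=True)
```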

Docker’s tooling is centered on the Docker CLI, with commands for listing, fetching, and managing Docker images. A public image registry, Docker Hub, provides access to a variety of images for commonly used applications. Notably, you can also download OS images, which lets you run, say, a full Linux system in a Docker container – functionality you would typically associate with LXC containers, which let you run an OS without needing a VM. Docker containers, however, aim to be even lighter weight in order to support the fast, highly scalable deployment of applications built with a microservice architecture.

Ecosystem and Cloud Support  

With backing from Canonical, LXC and LXD have an ecosystem tightly bound to the rest of the open-source Linux community. The entire range of tools that work on VMs and Linux systems tends to work for LXC as well; after all, the containers on an LXC host have full Linux OS instances running inside them. This means your team won’t need to find an additional vendor for LXC-specific tooling, since the tools you already use on Linux will work when your applications run in LXC containers. For managing your LXC containers, which may live on a single server or across potentially thousands of nodes, the LXD hypervisor provides a clean REST API. LXD is implemented in Go for performance and concurrency, and integrates well with OpenStack and other Linux server systems.

In contrast, Docker requires much more specialized support and has spawned a sizable ecosystem, since the application-container deployment model it pursues is a relatively novel concept in the timeline of software deployment. The application container space is younger than the VM scene, and that results in a lot more fluidity. Docker now runs on Windows and is supported by major cloud providers such as AWS, IBM, Google, and Microsoft Azure.

Docker’s ecosystem includes the following set of tools:

  • Docker Swarm - An orchestration tool to manage clusters of Docker containers
  • Docker Trusted Registry - A private registry for trusted Docker images
  • Docker Compose - A tool for launching applications with numerous containers that need to exchange data
  • Docker Machine - A tool for creating Docker-enabled virtual machines

There are more tools that help to fill out the entire stack, providing specialized functionality to support your Docker deployments. Docker Hub, Docker’s official open image registry, contains over 100,000 container images from open source contributors, vendors, and the Docker community.  

Ease of Use

Docker and LXC both provide ample documentation, with helpful guides for creating and deploying containers. Bindings and libraries exist for languages such as Python and Java, making both even easier for development teams to use. When comparing the two technologies, however, Docker’s ever-growing ecosystem takes much more effort to manage. Docker may have become the standard for running containerized applications, with tools like Kubernetes and Docker Swarm providing the orchestration, but that ecosystem comes with additional complexity.

Part of this has to do with Docker’s key innovation of single-process containers, as opposed to the standard multi-process containers that LXC provides. When Docker introduced this model, it inevitably created downstream complexity for teams porting traditional applications to a non-standard operating system environment. Much more planning, architectural decision-making, and scripting is needed to support such applications.

With LXC, a large part of this complexity is avoided, since LXC runs a standard OS init for each container, providing a standard Linux operating system for your apps to live in. As a result, migrating from a VM or bare-metal server is often easier if you are moving to LXC containers than if you are moving to Docker containers. On the other hand, Docker’s approach makes working with containers easier for developers, since they don’t have to use raw, low-level LXC themselves. This split between a sysadmin focus and a developer focus continues to characterize adoption of these tools.

Related Container Technology 

As alluded to above, the world of containers is particularly dynamic, with plenty of innovation around LXC and Docker as well as alternative containerization technologies. What is considered standard practice today can become old and substandard fairly fast, which is why you want to be aware of the bigger world of virtualization and containerization. These are some of the container technologies to watch:

  • Kubernetes - Drawing from Google’s experience of running containers in production over the years, Kubernetes facilitates the deployment of containers in your data center by representing a cluster of servers as a single system.
  • Docker Swarm - Swarm is Docker’s clustering, scheduling and orchestration tool for managing a cluster of Docker hosts. 
  • rkt - Part of the CoreOS ecosystem of containerization tools, rkt is a security-minded container engine that can use KVM for VM-based isolation and packs other enhanced security features.
  • Apache Mesos - An open source kernel for distributed systems, Apache Mesos can run different kinds of distributed jobs, including containers. 
  • Amazon ECS - Elastic Container Service is Amazon’s service for running and orchestrating containerized applications on AWS, with support for Docker containers.

Conclusion

LXC offers the advantages of a VE on Linux, mainly the ability to isolate your own private workloads from one another. It is a cheaper and faster solution to implement than a VM, but doing so requires a bit of extra learning and expertise.

Docker is a significant improvement on LXC’s capabilities. Its obvious advantages are gaining Docker a growing following of adherents. In fact, it comes dangerously close to negating the advantage of VMs over VEs, thanks to its ability to quickly and easily transfer and replicate any Docker-created package. Indeed, it is not a stretch to imagine that VM providers such as Cisco and VMware may already be glancing nervously at Docker – an open-source startup that could seriously erode their VM profit margins. If so, we may soon see such providers develop their own commercial VE offerings, perhaps targeted at large organizations as VM-lite solutions. As they say, if you can’t beat ‘em, commercially join ‘em.

