BoxBoat Blog

Service updates, customer stories, and tips and tricks for effective DevOps



Containerization Crash Course - What is a Container?

by BoxBoat | Tuesday, Dec 11, 2018 | Docker


Docker container adoption is up 500 percent year over year, and two-thirds of organizations that try Docker containers triple their use within three months (ref: Datadog). It’s quite clear that containers are popular.

But despite the rapid adoption rate, many do not know the basics of containers: how they differ from virtual machines, their advantages and limitations, and the tools you need to run them effectively.

In this article, we’ll explain all of this, starting with the basics:

Containerization Crash Course: What is a Container?

A container is a unit of software that packages up code along with all of its dependencies so that it can run quickly, independently, and reliably in any computing environment, including local machines, physical data centers, and public and private clouds.

Each container is created from a container image: a standalone, lightweight, executable package that includes everything needed to run the containerized application, including the code, runtime, system tools, and system libraries.

Containers can be used to run both Linux and Windows-based applications. The software inside a container is isolated from its surrounding environment, allowing it to behave consistently regardless of where it is running.
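As a quick, illustrative example, if you have Docker installed you can pull and run a public web-server image with a single command. The nginx:alpine image, the port mapping, and the container name below are arbitrary choices for the sake of the sketch:

    # Pull the nginx:alpine image (if not already cached) and run it as a
    # container, mapping port 8080 on the host to port 80 inside the container.
    docker run -d -p 8080:80 --name demo-web nginx:alpine

    # The same command behaves the same way on a laptop, a data center server,
    # or a cloud VM, because everything the app needs ships inside the image.
    docker ps             # list running containers
    docker stop demo-web  # stop the container
    docker rm demo-web    # remove it when you are done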

Running a Container: What is Docker?

When people talk about containers, you’ll often hear the word “Docker”, and sometimes people assume Docker and containers are the same thing. But there is a distinction.

Docker is an open source project, licensed under Apache 2.0, that provides a set of tools to help you create and run containers.

The Docker project began life in 2010 under the name dotCloud and was renamed Docker in 2013. Since then, Docker has grown quickly in popularity and its dedicated community has grown exponentially. Docker, Inc. is the company behind the majority of the development of the Docker project. Docker provides basic levels of support for the free and open source Docker Community Edition (Docker CE), while also offering enterprise-level solutions through Docker Enterprise Edition (Docker EE).

Recommended Reading: Docker CE vs Docker EE: Everything You Need to Know

Due to its usefulness in developing and deploying software, its growing popularity, and the timely backing of big industry names such as Microsoft, Docker became the de facto industry standard for building containers.

Containers vs. Virtual Machines: The Key Differences

Containers and virtual machines (VMs) differ in a number of ways, but the primary difference is that containers virtualize the operating system, so many containers can run on a single OS kernel, whereas VMs virtualize the hardware, so a single machine can run multiple OS instances.

Let’s take a look into this in a bit more detail.

[Diagram: containers vs. virtual machines. Source: Docker]

What are Virtual Machines?

With the demand for server processing power increasing, it became less feasible to continuously procure server space to run individual applications. To make effective use of server space, virtual machines were born.

As mentioned, a VM virtualizes the hardware so a single server can run multiple OS instances. A hypervisor, which sits on top of the physical infrastructure, performs this virtualization.

Since each VM has its own operating system, it can sit next to other VMs on the same physical server. Each VM has its own set of binaries, libraries, and applications to run its software.

This VM architecture provided a number of benefits, primarily the ability to consolidate applications onto a single system. But this setup had some drawbacks.

Because each VM has its own OS, it adds a large overhead in both memory and storage, often tens of gigabytes, which leads to slower load times and extra complexity during all stages of development. This approach also limits the portability of applications between different server environments, including public cloud, private cloud, and physical data centers.

How Do Containers Differ from VMs?

As previously stated, containers are an isolated abstraction of the application layer that packages code and dependencies together. Instead of having their own OS, containers share the host’s OS kernel while operating independently of one another. The OS sits directly on top of the infrastructure.

With this setup, containers take up much less space, with each container image typically being tens of MB in size. This smaller footprint gives containers faster startup, greater agility, and better portability.
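You can see the shared kernel for yourself on a Linux host with Docker installed; this quick check is purely illustrative:

    # Print the kernel version on the host...
    uname -r

    # ...then print it from inside a container. Both commands report the same
    # kernel, because the container shares the host's kernel rather than
    # booting its own operating system.
    docker run --rm alpine uname -r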

The Benefits of Containerization

Let’s go through some of the notable benefits of using containers.

  • Since the average size of containers is much smaller than VMs, the server can host more containers.
  • Containers require fewer resources to run, allowing you to add more computational workload to the same server.
  • Containers provide isolation, allowing you to run your development instances and your test instances on the same hardware without issue (see the sketch after this list).
  • Containers make it much quicker to develop, test, and deploy your applications and services.
  • Containerization is a cost-effective solution due to the lower requirement of resources, which reduces operating costs.
  • Containers are portable, which means you can test and debug them in any environment, whether a local machine, a test server, or production.
  • Containerization is a good option for microservices, continuous deployment, and DevOps.
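As a minimal sketch of the isolation point above, the commands below run a “dev” and a “test” instance of the same public image side by side on one host; the container names, ports, and image are illustrative only:

    # Two isolated instances of the same application on one server,
    # each with its own filesystem, process space, and published port.
    docker run -d --name app-dev  -p 8081:80 nginx:alpine
    docker run -d --name app-test -p 8082:80 nginx:alpine

    # Changes inside one container (files, processes, crashes) do not
    # affect the other.
    docker ps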

The Limitations of Containerization

As with anything in life, containers do have some disadvantages that you need to be mindful of.

  • Since containers share an OS kernel, a vulnerability in that kernel could potentially affect all the containers running on it. VMs, on the other hand, share only the hypervisor, which has far less functionality and is less prone to attack. That said, most OS providers regularly review and audit their systems to iron out such flaws.
  • The other disadvantage of sharing an OS kernel is reduced flexibility. Even though containers can run in a multitude of environments, if you want to run containers that require different operating systems, you will have to start a new server for each operating system. With VMs, you can have various OSs sitting next to each other on the same server.
  • Running an application based on a handful of containers is pretty straightforward. However, if you are running a complex enterprise application with containers, you may have to look after hundreds or even thousands of them. Managing that many containers can be overwhelming, which is why you need tooling that helps you manage them efficiently and effectively.

Container Orchestration: How Are Containers Managed?

As mentioned in the previous section, you will need some tools to help you manage your containers. The container technology market has rapidly evolved in recent years, so there are a plethora of options for you to choose from.

Below, we highlight the main tools which have proven to be popular with the container development community.

Docker

As mentioned previously, the open source Docker project provides many tools to help you build, ship, and deploy individual components of your software as stand-alone containers. From the Docker Compose file that defines your application’s blueprint to the Docker Engine that runs your containers, Docker is a must-have.
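As a minimal, illustrative sketch, a Compose file for a two-service application might look like the following; the service names and images are assumptions for the example:

    # docker-compose.yml - declares the services that make up the application
    services:
      web:
        image: nginx:alpine     # front-end web server
        ports:
          - "8080:80"           # host port 8080 -> container port 80
      cache:
        image: redis:alpine     # in-memory cache used by the app

    # Start everything with one command:
    #   docker compose up -d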

Kubernetes

Kubernetes is an open-source container orchestration platform that was designed to deploy, scale, and operate application containers. Kubernetes provides a rich ecosystem of plugins which can be customized to fit the needs of your organization.
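For a taste of what that looks like in practice, the kubectl commands below deploy and scale a container on an existing cluster; the deployment name and image are placeholders:

    # Create a Deployment that runs the container image
    kubectl create deployment web --image=nginx:alpine

    # Scale it out to three replicas; Kubernetes schedules them across nodes
    kubectl scale deployment web --replicas=3

    # Expose the Deployment inside the cluster on port 80
    kubectl expose deployment web --port=80

    # Check what is running
    kubectl get deployments,pods,services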

Docker Swarm

Docker Swarm is a clustering tool for Docker containers. With Docker Swarm (or just Swarm as it is commonly called), both administrators and developers can manage and maintain a cluster of Docker nodes as a single virtual system.
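A minimal sketch of that model, assuming Docker is already installed on the nodes and using an illustrative service name and image:

    # Turn the current Docker node into a Swarm manager
    docker swarm init

    # Run a replicated service across the cluster
    docker service create --name web --replicas 3 -p 8080:80 nginx:alpine

    # Swarm keeps three replicas running and lets you manage them as one unit
    docker service ls
    docker service ps web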

Jenkins

Jenkins is an open source continuous integration platform. Continuous integration involves constantly merging ongoing development work into the main source branch so that each change can be built and tested against the others as soon as it lands.
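In a containerized workflow, a Jenkins job typically automates steps like the following on every commit; the registry address, image name, test script, and GIT_COMMIT variable are assumptions for the sake of the sketch:

    # Build an image for this commit
    docker build -t registry.example.com/myapp:${GIT_COMMIT} .

    # Run the test suite inside the freshly built image
    docker run --rm registry.example.com/myapp:${GIT_COMMIT} ./run-tests.sh

    # If the tests pass, publish the image for deployment
    docker push registry.example.com/myapp:${GIT_COMMIT}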

OpenShift

OpenShift is an enterprise-supported container platform developed by Red Hat. It is a distribution of Kubernetes that has been optimized for continuous application development and multi-tenant deployment. OpenShift layers developer and operations tooling on top of Kubernetes to allow for much faster application development, deployment, and scaling, as well as long-term life-cycle maintenance.
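As a brief, illustrative example with the OpenShift oc CLI, using the well-known hello-openshift demo image and an assumed application name:

    # Create a new application from an existing container image
    oc new-app openshift/hello-openshift --name=hello

    # Expose it to traffic from outside the cluster via a route
    oc expose service/hello

    # Review what was created
    oc status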

BoxOps

As you can see from the above, there are many tools involved, and operating them individually can lead to lost productivity and developer frustration. BoxOps provides a unified DevOps approach to building containers by giving users access to all of their container tools on a single platform. This allows for more streamlined development, faster time-to-market, and improved productivity.

Containerization is the Future

Containers are becoming the go-to choice for developing and deploying applications thanks to their portability, light weight, and efficient use of resources. But for developing large, complex applications, you’ll need a large number of containers.

To help you manage these containers, you’ll need access to a number of container tools. BoxOps helps you to manage all of these tools via a single interface, saving your developers from having to log in to multiple solutions.

For information on how BoxBoat can help you with your containers, get in touch with us today.

Feature image credit: Burst