What To Do If Your App is Delivered in a Container
Even if you have no containerization experience, there is no need to panic if your vendor has supplied the application you ordered as a container.
There are many ways to run and orchestrate containers, and these solutions offer powerful features for managing applications automatically. You can ensure applications remain online, monitor them, and automatically manage capacity, scaling up to handle increased demand and scaling down to save money when demand decreases.
A Crash Course on Containers
Containerization is a virtualization method that’s application-specific, providing individual apps with dedicated environments to run in. But instead of deploying a whole virtual machine for each app, a containerized environment silos applications at the operating system level, using one OS for a multitude of containerized apps.
Docker has become the go-to container platform for companies large and small, while tools such as Docker Swarm and Kubernetes—container orchestration platforms—work in tandem with Docker to give brands the functionality required to manage their containerized applications.
Identifying Your Container Requirements
Your container should come with documentation that tells you what you need to run it. Important things to note are which ports you need to expose and any data you need to share with the container for it to function. For example, if you wanted to run a MySQL server container, you would need to expose port 3306 so other applications can talk to the MySQL server.
The MySQL server container will also need somewhere to store its data. Because the contents of a Docker container are ephemeral (they are not maintained after the container is taken down), MySQL will need to store its data outside of its container. You can achieve this by mapping in an external file system using the Docker volume feature. An example of this is shown below using the -v command line flag.
docker run --name db -v /my/own/datadir:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest
Docker containers are defined using a file called a Dockerfile. If no documentation is available, you can inspect the Docker container and take clues from its Dockerfile. The keyword EXPOSE defines which ports are exposed to other containers or networks, and the keyword VOLUME defines the volumes the container uses. If you wish to run the Docker container on your workstation or laptop for experimental purposes, you can easily install Docker for Mac, Windows, or Linux.
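These keywords are easy to spot in practice. A minimal sketch of what a MySQL-style Dockerfile might contain (illustrative only, not the official image’s actual file):

```dockerfile
# Hypothetical Dockerfile fragment for a database image
FROM debian:stable-slim

# Port exposed to other containers and networks
EXPOSE 3306

# Directory the container expects to be backed by external storage
VOLUME /var/lib/mysql
```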
For more advanced experimentation, a simple tool called Docker Compose is available. It is a convenient way to store command line arguments so you do not need to remember them each time you run a container, and it also allows you to run multiple containers with one command.
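As a sketch, the docker run command from earlier could be captured in a docker-compose.yml like this (the service name and values simply mirror the MySQL example above):

```yaml
# docker-compose.yml -- equivalent of the earlier docker run command
version: "3"
services:
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: my-secret-pw
    volumes:
      - /my/own/datadir:/var/lib/mysql
```

Running docker-compose up -d then starts the container with all of these arguments applied, and further services can be added to the same file and started with the same single command.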
However, you will need to run your containers using a container orchestration tool for a more permanent solution.
How Are Containers Managed?
Container orchestration tools manage which containers are run in an environment, ensure they remain running, and facilitate container needs such as communication, scaling, and configuration.
There are quite a few open source and commercial tools available in the market for container orchestration, the most popular being Kubernetes. Kubernetes was originally developed at Google, but is now an open source project maintained by the Cloud Native Computing Foundation and a community of volunteers and other interested parties.
Kubernetes is available as a managed service from most cloud providers, including Google Cloud, where it is called Google Kubernetes Engine.
Container orchestration tools almost always follow the same basic patterns.
They have the following concepts:
Cluster: The group of machines or cloud instances that runs the containers.
Scheduler: Part of the container orchestration tool, the scheduler is a service that runs on the cluster and manages which containers run where and when.
Pod: The pod defines the container or containers to be run inside the cluster.
Network: Clusters have their own networking, allowing pods to easily find each other using service discovery via DNS lookups. For example, you can connect to a MySQL container using the DNS name “mysql” rather than needing to know an IP address.
Ingress: The ingress defines how traffic from outside the cluster talks to the services contained within the cluster. For example, website traffic going from the internet to a web service.
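As an illustration, an ingress routing website traffic to a hypothetical web service might look like the following (this uses the extensions/v1beta1 API of this document’s era; newer clusters use networking.k8s.io/v1, and the host and service names are made up):

```yaml
# Hypothetical Ingress sending internet traffic for www.example.com
# to a service named "web" on port 80
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
```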
Managed Kubernetes services such as Google Kubernetes Engine manage the cluster, scheduler, networking and ingresses for you allowing you to focus on simply running your application.
There are many ways to create a Kubernetes cluster, and there is ample documentation to cover this. A functioning cluster, however, still requires a Docker image registry from which to pull the container images it runs!
How to Manage Your Containers
Vendors can provide their Docker images to you in several different ways: through the community-supported Docker Hub, through another third-party Docker image registry such as Quay.io, or simply by providing the Docker image files to you like software deliveries of the past. Once you have these images in hand, however, you will need a Docker image registry of your own to store them. You can use SaaS providers like those listed previously, or run your own registry, such as Docker Trusted Registry in Docker Enterprise Edition.
Docker Enterprise Edition (EE) allows vendors to offer Containers-as-a-Service (CaaS) and uses Docker Content Trust to ensure containers come from who you think they do. Docker Enterprise Edition supports the official CNCF-certified Kubernetes and any OCI (Open Container Initiative) compliant Docker images (which almost all are!).
Recommended Reading: Docker Community Edition Vs Docker Enterprise Edition: Everything You Need to Know
Docker EE provides an excellent web-based user interface for managing your cluster and supports both Docker Swarm and Kubernetes schedulers. It is a full implementation of Kubernetes so all the usual features of Kubernetes are supported.
You can use the Docker EE web-based UI to add your container to the cluster. These are called “workloads” in the Docker EE documentation. Alternatively, if you wish to automate or script the deployment of containers, you can use the command line tool kubectl. You can install and set up kubectl using this guide.
Kubernetes objects are deﬁned in YAML.
For example, to deploy our MySQL example from earlier, you would create the following:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  selector:
    matchLabels:
      app: mysql
  replicas: 2
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:latest
        ports:
        - containerPort: 3306
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    nodePort: 32768
  selector:
    app: mysql
You can then apply the object to the cluster using the web-based user interface or with kubectl using the command:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
The learning curve for deploying a container to Kubernetes can be steep, but there is plenty of documentation online, and the result is a stable and easy-to-manage hosting solution.