Run Your Microservice Applications in High Availability Mode with Docker Swarm
Reliability and upgradability are standard expectations for production software today, and two of the most effective ways to achieve them are microservices and High Availability (HA) architectures. Microservice applications are lightweight, fault tolerant, and easy to maintain. HA means running multiple instances of your applications in parallel to handle increased load and survive failures. Both paradigms fit naturally into Docker Swarm, the orchestrator built into Docker. Deploying your applications this way improves your uptime, which translates to happy users.
Docker Swarm makes it extremely easy to deploy your applications in HA mode. When I say deploy in HA mode, I mean running multiple instances of each piece of software in parallel. Since we are talking about Docker Swarm, our applications are deployed in Docker containers. Therefore, we will scale our deployment to run multiple instances of each container at the same time and let Docker Swarm's built-in load balancer handle the rest. Let's see how this works with a quick exercise.
We will follow the same instructions to set up our play-with-docker session that we previously used in Deploy your Stateful Web Applications in Docker Swarm using Traefik Sticky Sessions. To summarize:
- Go to play-with-docker, and verify that you are human.
- Click on the wrench, select 3 Managers and 2 Workers
- Open a terminal on a manager node (the filled-in blue icon)
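Before deploying anything, it's worth confirming the cluster formed correctly. A quick sketch, run from the manager node's terminal:

```shell
# List every node in the swarm; you should see 5 nodes:
# 3 with a MANAGER STATUS of "Leader" or "Reachable", and 2 workers.
docker node ls
```

If a node is missing or shows as `Down`, recreate the session before continuing.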
Now, we’ll deploy a sample application in HA mode. The voting app is the perfect example because it is simple, plus it deploys the visualizer to show exactly where your containers are running. To get the app up and running, first we need to get the source code. All of the following commands should be executed from the command line on the manager node in your play-with-docker Swarm cluster.
First, clone the repository and move into the source directory:

```
git clone https://github.com/dockersamples/example-voting-app.git
cd example-voting-app
```

You will see a bunch of files, including `docker-stack.yml`. This file contains all of our application configuration. To start the app, execute:

```
docker stack deploy --compose-file=docker-stack.yml voting
```
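Once the stack is deployed, you can watch the services come up from the same terminal. A short sketch (the service names assume the stack was named `voting`, as in the deploy command above):

```shell
# Show each service in the stack and how many replicas are running
# (e.g. 2/2 once the vote service is fully up).
docker stack services voting

# Show which node each replica of the vote service was scheduled on.
docker service ps voting_vote
```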
After 30-60 seconds, you will see open ports at the top of the page. Click on `8080`, and it will bring you to a high-level overview of your Docker Swarm cluster.
We can see that several of the services are replicated across our cluster. If one of these replicated containers went down, the application as a whole would remain perfectly fine, and Docker Swarm would reschedule the stopped container on a new host. Now go back to your play-with-docker instance and click on `5000`. This will bring us to the voting screen. At the bottom of the page, you'll see the hash of the container that served the content. Refresh the page and this will change. You can keep refreshing and see how Docker Swarm routes the traffic to a different container each time.
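You can also demonstrate this round-robin routing from the terminal instead of the browser. A rough sketch, assuming the vote service is published on port 5000 of the node you are logged into, and that the page footer reads "Processed by container ID ..." as in the sample app:

```shell
# Fetch the voting page several times and pull out the serving
# container's ID; successive requests should show different IDs
# as Swarm's ingress load balancer rotates between replicas.
for i in 1 2 3 4; do
  curl -s http://localhost:5000 | grep -o 'Processed by container ID [^<]*'
done
```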
Docker Swarm is automatically load balancing the application for us. It does this by default any time you deploy an application to Docker Swarm and specify multiple replicas for a container. Now that you've deployed a Dockerized application in High Availability mode, let's see how we did this. Open `docker-stack.yml`, the configuration file that describes our application.
The first thing that you'll notice is how tiny this file is for describing a distributed application this large. In less than 100 lines of configuration, you've deployed 6 microservices, created 2 networks, and even made a new Docker-managed data volume. I've reproduced the section that configures the voting front-end of the application. The `replicas: 2` line is all it takes to replicate your application across the cluster. You can set it to a larger value to scale up if you think there will be increased load, or decrease it if the service doesn't need to be replicated.
```
vote:
  image: dockersamples/examplevotingapp_vote:before
  ports:
    - 5000:80
  networks:
    - frontend
  depends_on:
    - redis
  deploy:
    replicas: 2
    update_config:
      parallelism: 2
    restart_policy:
      condition: on-failure
```
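You don't have to edit the file and redeploy just to change the replica count; `docker service scale` adjusts it on a running service. A sketch, again assuming the stack was named `voting`:

```shell
# Scale the vote front-end from 2 replicas up to 5 to absorb load.
docker service scale voting_vote=5

# Scale it back down when the extra capacity is no longer needed.
docker service scale voting_vote=2
```

Note that the stack file remains the source of truth: redeploying the stack will reset the service to the `replicas` value in `docker-stack.yml`.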
Hopefully you can see how easy Docker Swarm makes it to deploy a distributed microservice application in HA mode. Although we used play-with-docker to set up our cluster for us, setting up your own test Docker Swarm cluster can be done in less than half an hour, giving you plenty of time to experiment with the technology. If you are looking for more guides, check out our Docker Swarm, WordPress, and Traefik walkthrough to deploy WordPress in HA with sticky sessions to Docker Swarm.