BoxBoat Blog

Service updates, customer stories, and tips and tricks for effective DevOps


What to watch with Docker Swarm 1.12 [Updated]

by Brandon Mitchell | Monday, Aug 1, 2016 | Docker


Docker 1.12 Rolls Out a New Swarm

With the announcement of the new Swarm mode coming in Docker 1.12, there's a huge incentive to make the upgrade. The streamlined creation of the Swarm replaces the past complexity of running your own distributed key/value store. The orchestration ensures that your environment continuously adapts to reach a target state. Together, these two features provide the fault tolerance that was missing when running previous versions of Swarm with the Docker Compose tool.

As with any new feature that's being actively developed, there are pieces that haven't been finished yet. So before migrating your environment, review the components below that are still being worked on in the 1.12 release candidates. As with other new features from Docker, none of this prevents you from continuing to use the capabilities that existed prior to 1.12. Indeed, you can continue to use the container-based Swarm solution in 1.12, instead of or in addition to the new native Swarm mode.

The Listener Address

When starting your swarm nodes, if you don't provide a listener address, a restarted node may not be able to automatically reconnect to the swarm because the cluster won't recognize the host attempting to log in. If you do provide a listener address, that same address is also advertised to the rest of the swarm as the IP to use for incoming traffic, which means you can't use the standard 0.0.0.0 address for your listener. The fix for this will be a separate --advertise-addr option to specify what host to advertise to the Swarm, leaving --listen-addr to control the interfaces where the daemon listens. Follow the progress on issue 23828.

Update: This one was resolved by the GA release. The listener now defaults to listening on all interfaces with 0.0.0.0:2377. You can now run the following to initialize your swarm with your public IP, then follow the output directions to set up the rest of your nodes:

docker swarm init --advertise-addr ${ip}:2377
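
The init command prints a matching join command for the other nodes. As a rough sketch (the token and manager address below are placeholders, not output from a real swarm), each additional node joins with:

$ docker swarm join --token <worker-token> <manager-ip>:2377

If you lose that output, running docker swarm join-token worker on a manager will print it again.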

Maintaining Quorum

With a distributed key/value store, there's a need to maintain quorum to ensure the values you pull are current. Once quorum is lost, you lose the ability to manage the swarm. However, any containers that are already running will continue to run.

The easiest way to lose quorum currently is to demote a manager in a two-node swarm to a worker. Quorum requires more than 50% of the managers, so with two managers the quorum is two; demoting one leaves the remaining manager unable to reach that majority. The current fix will be to prevent the removal of a node that would result in a quorum loss, as seen in issue 24065. A solution that allows a two-manager Swarm to be downgraded to a single manager looks unlikely for 1.12.

Update: The GA release of 1.12 now handles demoting one of two managers in your swarm cleanly.
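
For example (the node name here is hypothetical), you can confirm which nodes are managers and demote one from any manager node:

$ docker node ls
$ docker node demote manager2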

As always, make sure you have a DR solution in place to rebuild your Swarm in case of a failure, which could help with this quorum loss issue. At present, there's no published way to export and import the Swarm configuration. Until that happens, either script your own solution, or look into the experimental DAB files and the docker stack CLI.

Volume Support

Volumes don't easily transition into Swarm. The biggest issue is that a volume on one host won't contain the same data as a volume on another host unless you're using a distributed or network-accessible volume driver. The docker-compose bundle command doesn't even support them:

$ docker-compose bundle
WARNING: Unsupported top level key 'volumes' - ignoring
WARNING: Unsupported key 'volumes' in services.counter - ignoring
WARNING: Unsupported key 'depends_on' in services.counter - ignoring
Wrote bundle to counter.dab

With docker service create you need to look into the --mount option. The syntax is very different from docker run -v and is being actively discussed in issue 24334. If you want to use your own volume plugin, keep an eye on issue 23619.
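
As a minimal sketch (the volume, target path, and image names are placeholders, and the exact key names are part of that ongoing discussion), a named volume is attached to a service roughly like this:

$ docker service create --name db --mount type=volume,source=db-data,target=/var/lib/data myimage

Keep in mind that unless db-data is backed by a shared volume driver, each node that runs a task gets its own local copy of the volume.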

Registries with Logins

If you store your images in a registry behind a login of some kind, you'll need a way to pass those login credentials into the Swarm. This was recently added to SwarmKit in issue 783, and the --registry-auth option is now showing up in the experimental release.

A workaround is to run a second registry service on the same data volume, mounted read only, but without authentication. That allows any Docker host to pull from the second instance without a login, while preventing unauthorized changes to your repositories.

Update: The fix for this one made the 1.12 GA release. You can now include authentication by passing the --with-registry-auth option to the service create/update.
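
For example, assuming a private registry at registry.example.com (a placeholder address, as is the image name), you would log in on a manager and pass the credentials along when creating the service:

$ docker login registry.example.com
$ docker service create --with-registry-auth --name web registry.example.com/myteam/web:1.0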

Built-in Service Discovery

The built-in service discovery in Swarm 1.12 makes it extremely quick and easy to set up a publicly accessible port for your service. Connections are routed over a built-in mesh network to a container running the service, even if that container is on a different node. With 1.12, the only load balancing algorithm implemented is a simple round robin.
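
As a simple illustration using the stock nginx image, publishing a port makes the service reachable on every node in the swarm:

$ docker service create --name web --replicas 3 --publish 8080:80 nginx

Hitting port 8080 on any node, even one not running a web task, routes the connection over the mesh to one of the three containers in round robin order.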

There was a discussion at DockerCon 2016 about developing a “shortest expected delay” algorithm. This would route traffic to local containers first, until the host reaches capacity. However, this won't be implemented until a later release.

Other suggestions have been made to enhance this load balancing algorithm to provide sticky sessions, where a user who connects to a container through the load balancer once has all future traffic directed to the same container. Allowing IP-based sticky sessions is being considered by Docker, but anything at a higher network level, such as HTTP cookies, will need to be implemented elsewhere. This is because the provided service discovery works at the IP layer of the network stack, without visibility into the TCP session.

Pushing Multi-Container Applications

If you're used to Docker Compose with the prior container-based Swarm solution, your workflow will need to change. Because the calls changed from docker run to docker service, running docker-compose up will spin up your containers on the local Docker host, outside of the swarm.
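
As a rough illustration of that change (using a throwaway alpine container), the run command only affects the local host, while the service command hands the work to the swarm scheduler:

$ docker run -d --name pinger alpine ping docker.com
$ docker service create --name pinger alpine ping docker.com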

The experimental release has added the ability to deploy and manage application stacks from DAB definitions. These DAB files can be generated from recent versions of Docker Compose using the docker-compose bundle command. Docker is leaving this feature experimental for now while it solicits community feedback on how these DAB files should look. As an experimental feature, it may be completely redesigned in a later release.

For those not running experimental, Docker won't have a docker-compose up equivalent in 1.12.

Control of Rolling Updates

When you do a rolling update, there's no control over the order of operations, only the rate of deployment. The order currently used is to first stop an old container, pull the new image, and then start the new container. If that pull fails (such as when there's a typo in the image name), your environment will be left waiting with containers down until you resolve the issue.

If you need to maintain a minimum number of containers, consider scaling your swarm task up by your rolling update batch size before you start your update. Then scale it back down when the update completes. Keep an eye on issue 24447 for any updates from Docker.
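
A rough sketch of that workaround, assuming a hypothetical web service running 10 replicas and updating 2 at a time:

$ docker service scale web=12
$ docker service update --update-parallelism 2 --image myimage:v2 web
$ docker service scale web=10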

Support for Canaries and Rollback

A common test scenario is to deploy a canary container into the swarm and test it for any issues before migrating the rest of the swarm containers to a new version. Implementations typically either redirect a portion of traffic as part of the round robin mix, or expose the change at a separate service discovery location for testing. At present, Docker Swarm hasn't implemented native support for these scenarios.

You can manually spin up a separate stack on a different port for testing. But once you reconfigure the target state for your service, the orchestration will change every container to that state at the rate you've requested.
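
A minimal sketch of that manual approach (the service and image names are placeholders): run the canary as its own service on an alternate published port, test against it, then remove it before rolling the change into the main service.

$ docker service create --name web-canary --publish 8081:80 myimage:v2
$ docker service rm web-canary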

Balancing Your Workload

If you bring up additional nodes in your Swarm, or simply restart a worker in your swarm, it will typically be idle until the orchestration needs to schedule a new container. There is no built-in ability to automatically redistribute workloads, or to bin-pack the existing workload on the fewest number of nodes. Redistributing your workload will need to be an external solution you implement.

While designing your workload distribution strategy, consider constraints and memory/CPU limits to restrict how many containers run and where they run. These limits should also be used to prevent cascading failures, where rescheduled workload from one failed node causes other nodes to become overloaded and fail themselves. Ensure that any constraints defined still allow the workload to be rescheduled during an outage.
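
As one example (the label, limits, and image are all placeholders), a node label, a placement constraint, and resource limits can be combined when the service is created:

$ docker node update --label-add tier=frontend node1
$ docker service create --name app --constraint 'node.labels.tier==frontend' --limit-memory 512M --reserve-memory 256M myimage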

Health Check Support

Health checks are a new way to monitor containers in Docker. They allow you to define a command in your Dockerfile to monitor your application and report back whether the application is healthy, still starting, or in an unhealthy state. Since this feature is new, there is still work in progress on adding that support into Swarm, as seen in issue 23962.
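
For example, a minimal health check for a web server (assuming curl is available in the image, and with the endpoint being just an illustration) looks like this in the Dockerfile:

HEALTHCHECK --interval=30s --timeout=3s --retries=3 CMD curl -f http://localhost/ || exit 1

With that in place, docker ps reports the container as starting, healthy, or unhealthy in its status column.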

Update: GitHub shows this one has been resolved in the GA release of 1.12. We're looking forward to updating our images with this new feature.

Transition Plan

This is a long list, which is to be expected with a new product under active development. Production environments that already have the container-based Swarm implemented may continue to use it, but that caution comes at the cost of missing out on the latest orchestration capabilities.

Rather than seeing this as an either-or decision, consider a transition plan. Implement the new Swarm solution in parallel with your existing solution. Then consider what workload can be transitioned or what new projects can be implemented with it. Lastly, keep an eye on future Docker releases, because with their rapid development pace, it won't be long before they've solved these and created even more reasons to migrate.

Update: Many of the biggest concerns above were resolved by the 1.12 GA release. The pace of development from Docker continues to amaze and we look forward to taking advantage of the new orchestration capabilities.