
Exploring Docker Datacenter – Part 1 – Universal Control Plane

by Will Kinard | Friday, Aug 12, 2016 | Docker


Docker Datacenter (DDC) was introduced in February of this year after much anticipation following earlier releases of Docker Trusted Registry (formerly Docker Hub Enterprise). Docker Inc.'s pure-play offering to the container orchestration ecosystem builds upon their core technologies: Docker Compose, Docker Swarm, Docker Registry, and a newly “commercially supported” version of the Docker engine to compete at the enterprise level. In this multi-part series, I'll break down the components and features of DDC to help you understand where it fits in this new Containers as a Service (CaaS) world.

Docker Datacenter is the overall name for a suite of tools that includes:

- Universal Control Plane (UCP)
- Docker Trusted Registry (DTR)
- Commercially Supported (CS) Docker Engine

Versions 1.0, 1.4.3, and 1.10, respectively, debuted at release, while subsequent feature releases, bug fixes, and upgrades have led to the current (as of this posting) versions of 1.1.2, 2.0.2, and 1.12.

UCP is an enterprise on-premise solution that enables IT operations teams to deploy and manage their Dockerized applications in production, while giving developers the agility and portability they need, all from within the enterprise firewall. While DTR provides the operational facilities for Docker images, UCP is all about deployed containers and the applications they make up.

Its out-of-the-box feature set provides a compelling case to make UCP your go-to application for running containers, both in production and in development.

System requirements are fairly minimal at 2GB of RAM and 3GB of disk space. In this example, I am using six Ubuntu 14.04 Vagrant machines with 2GB of RAM and 40GB disks. Three hosts will be used as HA controllers and the other three as nodes. The CS Docker Engine has already been installed on all six.
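
Before installing, a quick sanity check on each host doesn't hurt; a minimal example (the version and free-space values are simply what my hosts should report):

$ docker version --format '{{.Server.Version}}'   # expect a 1.11.x-cs engine
$ df -h /var/lib/docker                           # confirm at least 3GB free for UCP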

Some housekeeping:

Let's start the actual install. To make things easy, Docker packages the installation in a “bootstrap” container to be run on each host. How great is that? We begin by logging into the first host and running the following:

vagrant@node1:~$ docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install -i \
  --host-address 10.0.1.14

Where:

- -v /var/run/docker.sock:/var/run/docker.sock mounts the host's Docker socket so the bootstrap container can drive the local engine
- install -i runs the installer interactively, prompting for the admin password and any additional SANs
- --host-address is the address that other nodes (and your browser) will use to reach this controller

Let's check our output:

INFO[0000] Verifying your system is compatible with UCP
INFO[0000] Your engine version 1.11.2-cs3, build c81a77d (3.13.0-92-generic) is compatible
WARN[0002] Your system uses devicemapper.  We can not accurately detect available storage space.  Please make sure you have at least 3.00 GB available in /var/lib/docker
Please choose your initial UCP admin password:
Confirm your initial password:
INFO[0033] Pulling required images... (this may take a while)
WARN[0394] None of the hostnames we'll be using in the UCP certificates [node1 127.0.0.1 172.17.0.1 10.0.1.14] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
INFO[0046] Installing UCP with host address 10.0.1.14 - If this is incorrect, please specify an alternative address with the '--host-address' flag
INFO[0000] Checking that required ports are available and accessible
INFO[0009] Generating UCP Cluster Root CA
INFO[0037] Generating UCP Client Root CA
INFO[0043] Deploying UCP Containers
WARN[0057] Configuration updated. You will have to manually restart the docker daemon for the changes to take effect.
INFO[0057] UCP instance ID: RQSM:O4F4:U2GI:XK26:VQNB:HSIM:O26G:W2HA:NG3W:MKCU:3ALU:FXGX
INFO[0057] UCP Server SSL: SHA-256 Fingerprint=E3:4E:5B:9B:B4:24:9B:02:C4:9A:1B:14:6F:FB:70:97:16:24:87:59:3D:45:9B:04:FD:FE:6E:B1:AC:96:86:7C
INFO[0057] Login as "admin"/(your admin password) to UCP at https://10.0.1.14:443

As my Vagrant machines are not using fully qualified domain names, we see the warning above on TLS validation. Use of your own CA will most likely require FQDNs for your machines. We are also prompted to restart our Docker daemon as UCP has configured multi-host networking for us, which in Ubuntu is a quick sudo service docker restart.
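
On node1 that's simply:

vagrant@node1:~$ sudo service docker restart
vagrant@node1:~$ docker info    # quick check that the daemon came back up cleanly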

Before navigating to our host address, 10.0.1.14, to check out the new installation, let's break down the running containers that comprise the UCP application:

vagrant@node1:~$ docker ps --format '{{printf "%-25s %-30s %-25s %-25s" .Names .Image .Command .Ports}}'
ucp-controller            docker/ucp-controller:1.1.2    "/bin/controller serv"    0.0.0.0:443->8080/tcp
ucp-auth-worker           docker/ucp-auth:1.1.2          "/usr/local/bin/enzi "    0.0.0.0:12386->4443/tcp
ucp-auth-api              docker/ucp-auth:1.1.2          "/usr/local/bin/enzi "    0.0.0.0:12385->4443/tcp
ucp-auth-store            docker/ucp-auth-store:1.1.2    "rethinkdb --bind all"    0.0.0.0:12383-12384->12383-12384/tcp
ucp-cluster-root-ca       docker/ucp-cfssl:1.1.2         "/bin/cfssl serve -ad"    8888/tcp, 0.0.0.0:12381->12381/tcp
ucp-client-root-ca        docker/ucp-cfssl:1.1.2         "/bin/cfssl serve -ad"    8888/tcp, 0.0.0.0:12382->12382/tcp
ucp-swarm-manager         docker/ucp-swarm:1.1.2         "/swarm manage --tlsv"    0.0.0.0:2376->2375/tcp
ucp-swarm-join            docker/ucp-swarm:1.1.2         "/swarm join --discov"    2375/tcp
ucp-proxy                 docker/ucp-proxy:1.1.2         "/bin/run"                0.0.0.0:12376->2376/tcp
ucp-kv                    docker/ucp-etcd:1.1.2          "/bin/etcd --data-dir"    2380/tcp, 4001/tcp, 7001/tcp, 0.0.0.0:12380->12380/tcp, 0.0.0.0:12379->2379/tcp

Seven total images are pulled in support of ten running containers:

Image                          Description
docker/ucp-proxy:1.1.2         A TLS proxy for access to the Docker Engine.
docker/ucp-controller:1.1.2    The UCP application itself.
docker/ucp-swarm:1.1.2         Swarm cluster manager.
docker/ucp-swarm:1.1.2         Heartbeat to record on the key-value store that this node is alive. If the node goes down, this heartbeat stops, and the node is removed from the cluster.
docker/ucp-auth:1.1.2          The centralized API for identity and authentication used by UCP and DTR.
docker/ucp-auth:1.1.2          Performs scheduled LDAP synchronizations and cleans data on the ucp-auth-store.
docker/ucp-auth-store:1.1.2    Stores authentication configurations, and data for users, organizations, and teams.
docker/ucp-etcd:1.1.2          Used to store the UCP configurations. Internal use only.
docker/ucp-cfssl:1.1.2         Certificate Authority for joining new nodes and administrator client bundles.
docker/ucp-cfssl:1.1.2         Certificate Authority to sign client bundles.

Each of these containers also mounts at least one named volume, mostly in support of persisting certificates (these volumes can also be created beforehand with your own certificates, together with the --external-server-cert flag at install time):

vagrant@node1:~$ docker volume ls
DRIVER              VOLUME NAME
local               ucp-auth-api-certs
local               ucp-auth-store-certs
local               ucp-auth-store-data
local               ucp-auth-worker-certs
local               ucp-auth-worker-data
local               ucp-client-root-ca
local               ucp-cluster-root-ca
local               ucp-controller-client-certs
local               ucp-controller-server-certs
local               ucp-kv
local               ucp-kv-certs
local               ucp-node-certs
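
As an aside, if you want UCP to present certificates signed by your own CA, the rough flow is to pre-create the ucp-controller-server-certs volume from the list above, drop your ca.pem, cert.pem, and key.pem into it, and then install with --external-server-cert. A sketch, assuming the stock local volume driver path (it may differ on your system):

$ docker volume create --name ucp-controller-server-certs
$ sudo cp ca.pem cert.pem key.pem \
    /var/lib/docker/volumes/ucp-controller-server-certs/_data/
$ docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install -i \
  --host-address 10.0.1.14 \
  --external-server-cert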

Now we can finally go to the browser and make sure the app is up and running. In this case, we navigate to https://10.0.1.14, login, and upload our license.

We're presented with the dashboard – everything is good!
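
If you prefer to confirm from the shell first, a quick check I use (assuming the controller's _ping health endpoint, the same one typically used for load balancer health probes; -k skips verification since we're still on self-signed certs):

vagrant@node1:~$ curl -ks -o /dev/null -w '%{http_code}\n' https://10.0.1.14/_ping
200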

As previously stated, UCP supports a High Availability configuration by adding controller replicas. Standard quorum sizing applies here (an odd number of controllers so a majority survives a single failure), so we will be going with three.

1. To support this configuration, certificates from our controller need to be copied to the new replicas (once installed). We start that process with UCP's “backup” command:

vagrant@node1:~$ docker run --rm -i --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp backup \
    --interactive \
    --root-ca-only \
    --passphrase "boxboat" > /tmp/ucpcerts.tar

Output is as follows:

INFO[0000] Your engine version 1.11.2-cs3, build c81a77d (3.13.0-92-generic) is compatible
INFO[0000] We're about to temporarily stop the local UCP CA containers for UCP ID: RQSM:O4F4:U2GI:XK26:VQNB:HSIM:O26G:W2HA:NG3W:MKCU:3ALU:FXGX
Do you want proceed with the backup? (y/n): y
INFO[0006] Temporarily stopping the local CA containers to ensure a consistent backup
INFO[0000] Beginning backup
INFO[0000] Backup completed successfully
INFO[0008] Resuming stopped UCP containers

Our UCP containers were briefly stopped but restarted once the backup completed.

2. The tar archive of certs will now need to be copied to the next host that is to become a controller replica.
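
For example, from the first controller (the destination path matches the volume mount used in the next step):

vagrant@node1:~$ scp /tmp/ucpcerts.tar vagrant@10.0.1.15:/home/vagrant/ucpcerts.tar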

3. Log in to the next host and run the UCP “join” command with the --replica flag:

vagrant@node2:~$ docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/vagrant/ucpcerts.tar:/backup.tar \
  docker/ucp join \
  --interactive \
  --replica \
  --passphrase "boxboat" \
  --host-address 10.0.1.15

The accompanying output is similar to the first install:

INFO[0000] Your engine version 1.11.2-cs3, build c81a77d (3.13.0-92-generic) is compatible
WARN[0002] Your system uses devicemapper.  We can not accurately detect available storage space.  Please make sure you have at least 3.00 GB available in /var/lib/docker
Please enter the URL to your UCP server: https://10.0.1.14
UCP server https://10.0.1.14
CA Subject: UCP Client Root CA
Serial Number: 3dcd050a9c6e613f
SHA-256 Fingerprint=87:C8:8D:74:B5:14:CE:40:84:A2:82:FD:D7:E9:B8:F5:BF:20:31:64:1E:7F:4D:47:36:82:A9:88:91:1A:48:FA
Do you want to trust this server and proceed with the join? (y/n): y
Please enter your UCP Admin username: admin
Please enter your UCP Admin password:
INFO[0014] All required images are present
WARN[0014] None of the hostnames we'll be using in the UCP certificates [node5 127.0.0.1 172.17.0.1 10.0.1.15] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
INFO[0000] This engine will join UCP and advertise itself with host address 10.0.1.15 - If this is incorrect, please specify an alternative address with the '--host-address' flag
INFO[0000] Verifying your system is compatible with UCP
INFO[0000] Checking that required ports are available and accessible
INFO[0039] Starting local swarm containers
INFO[0040] Starting UCP Controller replica containers
INFO[0055] New configuration established.  Signalling the daemon to load it...
INFO[0056] Successfully delivered signal to daemon

4. Repeat steps 2 and 3 for the third replica (10.0.1.16 for me).
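
For reference, the join on that third host is identical apart from the host address (prompt shortened here since the hostname isn't important):

$ docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/vagrant/ucpcerts.tar:/backup.tar \
  docker/ucp join \
  --interactive \
  --replica \
  --passphrase "boxboat" \
  --host-address 10.0.1.16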

5. One final step is a key-value store discovery process that needs to run on each controller node. This process ensures the controllers all know about one another. I imagine this will no longer be necessary in the future as usability is constantly improved. Running the following on each node with its respective host address will do the trick:

$ docker run --rm -it \
    --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp engine-discovery \
    --update \
    --host-address 10.0.1.14

A prompt for a Docker daemon restart will follow.

We now have three UCP controllers in HA! Log in to each one to verify identical functionality. In a future post, I will detail load balancing across these three from a single endpoint using Nginx.

We now have our controllers in a highly available configuration, but these are just the managers – what about the workers? UCP makes it painless to add new hosts as a UCP “node” where containers and applications will run:

$ docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp join -i \
  --host-address 10.0.1.17

I'll spare you the output this go-around as it's very similar to the previous installs, but it is worth taking a look at the running containers on a UCP node:

vagrant@node4:~$ docker ps --format '{{printf "%-25s %-30s %-25s %-25s" .Names .Image .Command .Ports}}'
ucp-swarm-join            docker/ucp-swarm:1.1.2         "/swarm join --discov"    2375/tcp
ucp-proxy                 docker/ucp-proxy:1.1.2         "/bin/run"                0.0.0.0:12376->2376/tcp

We now only have two containers running – “ucp-swarm-join” acting as a swarm agent, and “ucp-proxy”, which once again provides secure communications with the Docker daemon on that specific host.

Stay tuned for Part 2 for detailed functionality and further configuration of running UCP!