BoxBoat Blog

Service updates, customer stories, and tips and tricks for effective DevOps


Exploring Docker Datacenter – Part 2 – Universal Control Plane

by Will Kinard | Monday, Sep 26, 2016 | Docker


Welcome back! This is the second part of our series covering Docker Datacenter. Check out our first post on Universal Control Plane for installation and configuration.

Docker Datacenter (DDC) was introduced in February of this year after much anticipation following earlier releases of Docker Trusted Registry (formerly Docker Hub Enterprise). Docker Inc.'s pure-play offering to the container orchestration ecosystem builds upon their core technologies, Docker Compose, Docker Swarm, Docker Registry, and a newly "commercially supported" version of the Docker engine to compete at the enterprise level. In this multi-part series, I'll break down the components and features of DDC to help you understand where it fits in this new Containers as a Service (CaaS) world.

Docker Datacenter is the overall name for a suite of tools that includes:

  • Universal Control Plane (UCP)
  • Docker Trusted Registry (DTR)
  • the Commercially Supported (CS) Docker Engine

Client Access

All access to UCP is handled via its role-based access control (RBAC) mechanisms, which can be managed by the "Administrator" role through the web interface or API. All requests to the system are authenticated via certificates that fall into one of two categories:

  • Admin user certificates: allow running docker commands on the Docker Engine of any node
  • User certificates: allow running docker commands through a UCP controller node

As covered in our first post, UCP ships with its own certificate authorities (CAs) to secure access between the cluster components as well as client connections to the cluster. When administrators bring their own certificates (or those of a third party), they replace UCP's client CA, but not the cluster CA. These CAs provision the "certificate bundles" used for access: the cluster CA provisions admin bundles, while the client CA provisions user bundles.

Bundle Provisioning

Let's take a look now at provisioning a new client bundle from the “Profile” screen under our “admin” user through the web client UI:

Selecting “Create a Client Bundle” will kick off a few things:

  1. Generate a new private key
  2. Provision a new public certificate from the UCP client CA using that key
  3. Download both, along with the UCP client CA certificate and scripts that set the Docker environment variables

As can be seen in the above picture, a key/certificate has already been issued for our admin user. It can also easily be revoked through this screen by an Admin or that user through a menu option in case of compromise or other scenario.

These same steps can also be accomplished from the command line against the REST API:

# Create an environment variable with the user security token
$ AUTHTOKEN=$(curl -sk -d '{"username":"admin","password":"admin"}' https://10.0.1.14/auth/login | jq -r .auth_token)
# Download the client certificate bundle
$ curl -k -H "Authorization: Bearer $AUTHTOKEN" https://10.0.1.14/api/clientbundle -o bundle.zip

You can see here that we first authenticate to the UCP REST API with our "admin" username and password before we are able to provision a new user bundle. This may seem confusing to beginners at first (we have to authenticate in order to authenticate), but it's actually quite straightforward: the UCP client bundle we are creating will be used by the Docker CLI to authenticate its commands against UCP, while the initial authentication is needed to run commands against the REST API (in this case, to provision a new bundle). Remember that the Docker CLI uses REST as well, so potentially any or all CLI commands could be recreated this way, but that would be a waste of a perfectly good CLI!

Bundle Use

Here's what came down in our new “client bundle” (uncompressed):

  1. ca.pem  –>  UCP client CA certificate
  2. cert.pem  –>  user certificate
  3. cert.pub  –>  user public key
  4. env.cmd  –>  Windows environment script
  5. env.ps1  –>  Windows PowerShell script
  6. env.sh  –>  Shell script
  7. key.pem  –>  user private key

All three of the scripts perform the same functions within their respective environment:

  1. set the DOCKER_TLS_VERIFY environment variable to only allow secure connections to the daemon (UCP in this case)
  2. set the DOCKER_CERT_PATH environment variable to your current path, where the certificates are located
  3. set the DOCKER_HOST environment variable to the UCP host on port 443, directing all traffic to UCP and not your local daemon
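
All three scripts boil down to the same few assignments. Here is a minimal sketch of what env.sh sets, assuming our lab controller at 10.0.1.14; the real script is generated by UCP with your cluster's values:

```shell
# Sketch of env.sh as generated by UCP; the host below is our lab
# controller at 10.0.1.14 -- substitute your own cluster's address
export DOCKER_TLS_VERIFY=1                 # only allow TLS-verified connections
export DOCKER_CERT_PATH=$(pwd)             # ca.pem, cert.pem, key.pem live here
export DOCKER_HOST=tcp://10.0.1.14:443     # send docker commands to UCP
```

Note that you "source" it (`. env.sh`) rather than execute it, so the variables land in your current shell.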

With these scripts it becomes very easy to run commands against UCP. Simply "source" the script to set these variables and the docker CLI will point to UCP, authenticating with your certificates. If you then wish to run against your local daemon again, simply open a new session or unset these variables.

User Interface

Docker provides a robust web UI that can perform almost any function available from the command line. In some instances and environments a UI can surface details and perform actions faster than an administrator firing up the command line. It's also important to realize that not every administrator is familiar with the Docker command-line set; some would rather interact through a browser. Docker understands this and has accounted for it in the product.

Dashboard

Upon entering the application at https://10.0.1.14, we find ourselves at the “Dashboard”. Here, basic information is provided at a glance including the number of running containers, stored images, and health status for each UCP node. It's great to be able to see this information to take action upon if necessary.

Resources

The "Resources" section is comprised of the most important bits of your cluster. Everything is here, from known images to running containers, created networks and volumes, and attached nodes. Each of these sections is fairly self-explanatory as to its function (which is how a UI should be), but it's still worth going over a few things:

  1. “Containers”
    • Upon installation, UCP shows no containers running. But wait, didn't we launch UCP as containers? Docker made a smart UI move here and hid what they refer to as “System Containers” away from the user unless asked for. They can be shown by selecting “View Options” –> “System Containers”.
    • If you wish to deploy a single one-off image, that can be done from here as well. Select “+Deploy Container” and you will be presented with the same command-line options you are all too familiar with. In similar behavior, UCP will pull from Docker Hub if the image specified is not found locally.
  2. “Nodes”
    • Check out the “+ Add Node” button here as it presents command line text to copy that already fits your cluster's environment settings. Use this when you need to add your next node.
  3. “Volumes”
    • Docker here does not use the same “system” resource filtering as in the “Containers” view. System-specific volumes are all shown and cannot be hidden. You will see each of the necessary volumes that were created upon install (see first post) for each node in the cluster.
    • “+ Create Volume” lets you specify a specific volume driver if you are using a solution other than local volume storage.
  4. “Networks”
    • Every network here is shown from every node – including the “Host”, “Bridge”, and “None” networks that come with each Engine install.
    • “+ Create Network” offers similar options, but also includes IPAM settings
  5. “Images”
    • UCP does not break down image layers (and perhaps there isn't much need for this in the UI). I would still expect to see this in later versions, perhaps with a tree topology or similar view.
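
Most of the UI actions above have familiar CLI equivalents that, with a client bundle sourced, run against the whole cluster. A rough session sketch (image, volume, and network names are illustrative):

```shell
# "+ Deploy Container": one-off run; UCP pulls nginx from Docker Hub
# if it is not already present in the cluster
docker run -d --name web nginx

# "+ Create Volume" with an explicit driver ("local" is the default)
docker volume create --driver local --name app-data

# "+ Create Network" with IPAM options (subnet chosen for illustration)
docker network create --subnet 10.10.0.0/24 appnet
```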

Users and Teams

Docker's documentation does a good job of breaking down how user permissions to resources and team access are handled. In short, a user's "default permissions" control access to non-labeled resources, such as images, networks, and volumes. Users can also be added to "teams". A team defines the permissions its users have for containers that carry the "com.docker.ucp.access.label" label.

Let's demonstrate these permissions by creating a new “boxboat” user. We will then cycle through the “Default Permissions” levels to see the resulting effects.

NA = “No Access”; VO = “View Only”; RC = “Restricted Control”; FC = “Full Control”

| Visible | Create |
|:-------------:|:------------:|
| NA/VO/RC/FC | RC/FC |
| NA/VO/RC/FC | RC */FC ** |
| NA/VO/RC/FC | RC/FC |
| VO/RC/FC | RC/FC |
| VO/RC/FC | RC/FC |
| VO/RC/FC | RC/FC |
* User can create containers, but can't see other users' containers, run docker exec, or run containers that require privileged access to the host.

** User can create containers without any restriction, but can't see other users' containers.

Creating teams, adding users to them, and specifying access to specific container labels are where things get interesting. Let's create a new "Commercial Services" team and add the user "boxboat" to it. You can see that reflected here:

We can now apply permissions to the team in the form of “Resource Labels”:

Here we have created three resource labels: "crm", "comservices", and "buslayer". In this setup, any member of the "Commercial Services" team has full control over containers provisioned with the "crm" label, restricted control over containers with the "buslayer" label, and can only view containers with the "comservices" label. This is only one example of how these permissions may be set up; other combinations are possible depending on your environment. Perhaps, for example, the resource labels could track containers as they make their way through the CI pipeline: "dev", "staging", and "prod". Different teams can have access to the same application, but with different permissions for the environment the application is running in. Expect even more functionality around this paradigm in future releases of UCP.
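
Resource labels are applied when a container is launched. A minimal sketch, assuming the "crm" label from above (container and image names are illustrative):

```shell
# Launch a container carrying the "crm" access label; per the setup
# above, the Commercial Services team then has full control over it
docker run -d --name crm-app \
  --label com.docker.ucp.access.label=crm \
  nginx
```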

Settings

The UCP settings are fairly limited (through the UI) at this point, but straightforward. Tabbed from the top, we see options for:

  • Logs
  • Auth
  • DTR
  • License
  • Usage Reporting
  • Scheduler

Logs

Provides configuration for a remote syslog server and the log level of messages to pass. Nothing crazy here; supported levels:

DEBUG, INFO, NOTICE, WARN, ERR, CRIT, ALERT, and EMERG

Auth

Provides selection of the method of authentication to UCP: “Managed” or “LDAP”. “Managed” is configured by default, providing user accounts locally. Selecting “LDAP” provides a myriad of options to configure connection to an LDAP server, including the ability to apply filtering to username search.

DTR

Lets UCP know about a DTR instance so that commands run against UCP will pull images (if necessary) from DTR. There is more to making this work than this screen alone, as PKI authentication needs to be taken into account, but more on that in a later post.

License

Your UCP instance license ID, engine count, and expiration. You can also upload a new license from here.

Usage Reporting

Options to allow (or disallow) usage reports to Docker Inc. for improvements and stats.

Scheduler

Two options are provided here: "Allow Administrators to deploy containers on UCP controllers" and "Allow Users to deploy containers on UCP controllers". By default both are allowed, but a production installation should isolate the UCP system containers on their own hosts; uncheck these before moving forward.

In our next post, we'll cover the installation and configuration of Docker Trusted Registry, as well as finally launch an application!