Docker on Windows Server 1709

by Caleb Lloyd | Wednesday, Nov 8, 2017 | Docker

Microsoft has split Windows Server into two release channels: the Long-Term Servicing Channel (LTSC) and the Semi-Annual Channel. Microsoft has in-depth documentation explaining each channel. The first Windows Server release in the Semi-Annual Channel is out, called Windows Server, version 1709.

Windows Server 1709 enables support for some long-awaited features for Windows Containers in Docker, but it also includes a backwards-compatibility break that forces users to choose between portability and density with Windows Containers.

New Docker Features

Some Docker features have been limited in Windows Containers due to the lack of Windows Kernel compatibility, and Windows Server 1709 brings a fresh set of Kernel updates. New features include:

Ingress Routing Mesh for Docker Swarm: Windows Server 1709 now supports overlay networks, which enables the ingress routing mesh for Windows Container services exposed in a Docker Swarm. This feature works today in the Docker EE 17.10 preview branch (sketched below).

Mounting Named Pipes: Previous versions of Windows did not support mounting named pipes into containers, meaning that accessing the Docker Engine from inside a Windows Container required exposing the engine over TCP. Named pipes can now be mounted into containers on Windows Server 1709 (also sketched below).

Linux Containers on Windows (LCOW): This feature will eventually support running Windows Containers and Linux Containers side-by-side on a Windows host. It uses Hyper-V containers with a small 13MB LinuxKit OS to run Linux Containers. Windows Server 1709 ships the platform support for LCOW, but the feature is still under heavy development in the Docker Engine and remains alpha-quality.

Docker has a great blog post explaining these features in greater detail, including examples of how to use them.
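As quick sketches of the first two features (the service name, published port, and images here are our own examples, not Docker's): publishing a port on a Windows service in Swarm mode now routes traffic through the ingress mesh on every node:

# Hypothetical service; the published port is reachable on any swarm node
docker swarm init
docker service create --name web --publish 8080:80 microsoft/iis:1709

And the Docker Engine's named pipe can now be mounted into a Windows Container with the -v flag (the container still needs a Docker CLI inside it to actually use the pipe):

# Mount the engine's named pipe into a Windows Server Core container
docker run --rm -it -v \\.\pipe\docker_engine:\\.\pipe\docker_engine microsoft/windowsservercore:1709 cmd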

Backwards Compatibility Break with Windows Container Images

With Windows Server 1709, Microsoft has made a breaking change: to run in process isolation mode, base Windows Container images must be upgraded to the 1709 tag; otherwise, they must run in hyper-v isolation mode. One caveat to note is that base images using the 1709 tag will not run on Windows Server 2016 LTSC at all. The compatibility rules (also available in Microsoft's documentation) are:

Host OS                      | 2016-based image   | 1709-based image
Windows Server 2016 (LTSC)   | process or hyper-v | not supported
Windows Server, version 1709 | hyper-v only       | process or hyper-v

Portability Requires Hyper-V Isolation

If your goal with Windows Containers is to maximize portability, you must use hyper-v isolation and develop against Windows Server Core or Nano Server images targeting Windows Server 2016 LTSC. You can continue to build Docker images against microsoft/windowsservercore:latest and microsoft/nanoserver:latest. In its virtualization blog post, Microsoft has stated:

From now on, the “latest” tag will follow the releases of the current LTSC product, Windows Server 2016

Docker on Windows Server 1709 defaults to process isolation, so to change it to hyper-v isolation you will have to edit C:\ProgramData\docker\config\daemon.json to include:

{
    "exec-opts":["isolation=hyperv"]
}

Once you have made these changes, restart the Docker Engine:

Stop-Service docker
Start-Service docker
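
The isolation mode can also be overridden per container instead of changing the daemon default; here is a minimal sketch (the image and command are just examples):

# Run one container with hyper-v isolation, overriding the daemon default
docker run --rm --isolation=hyperv microsoft/windowsservercore:latest cmd /c ver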

Density Requires Process Isolation

We ran a density test on a Windows Server 1709 host using both process isolation and hyper-v isolation. The test environment was a Windows Server 1709 VM on KVM with nested virtualization, 2 vCPUs, and 8 GB RAM. The Docker Engine was put into Swarm mode and a service running the microsoft/iis:1709 image was started.

Isolation Type | IIS Replicas Started
process        | 50
hyperv         | 9

We were able to start 50 replicas in process isolation mode, and could have started more since the machine had not yet run out of resources. However, HTTP response times were already starting to lag under load with this many containers running, so 50 is a reasonable practical estimate.

We were only able to start 9 replicas in hyperv isolation mode before the VM ran out of memory. Hyper-V pre-allocates memory to each container in this mode. We did not specify the amount of memory to allocate with the -m flag, but 9 hyperv isolation containers plus the Host OS's memory requirements consumed all 8GB of the VM's memory.
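
For reference, a density test like this one can be driven with a few Swarm commands. This is a minimal sketch (the service name and replica count are our own) rather than our exact procedure:

# Create a single-replica IIS service, then scale it up to probe container density
docker swarm init
docker service create --name iis microsoft/iis:1709
docker service scale iis=50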

Hyper-V Isolation Requires Nested Virtualization when the Host OS is a VM

Hyper-V is a hypervisor, so it requires virtualization support. If the Host Windows Server OS is running on Bare Metal with a CPU that supports virtualization, it should be able to run Hyper-V. If the Host Windows Server OS is running in a VM, then the hypervisor running the Host must support nested Hyper-V virtualization.
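
For example, if the hypervisor running the host is itself Hyper-V, nested virtualization must be explicitly enabled on the VM before it can run hyper-v isolated containers; a sketch, assuming a VM named docker-host:

# Run on the outer Hyper-V host while the VM is powered off
Set-VMProcessor -VMName "docker-host" -ExposeVirtualizationExtensions $true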

Cloud: Certain Azure VM types support nested Hyper-V, making Azure your best bet for nested Hyper-V virtualization in the cloud. Neither AWS nor Google Cloud currently supports nested Hyper-V.

On-Premises: ESXi should have nested Hyper-V virtualization support. KVM supports nested Hyper-V virtualization, but we had to upgrade to the 4.13 Kernel to get the necessary fixes for Windows Server 1709. VirtualBox does not support Nested Virtualization.

BoxBoat's Recommendation is Hyper-V Isolation

We favor portability over density for a long-term Docker solution. The ability to develop an application on any supported version of Windows 10, and run it on any supported version of Windows Server is extremely important.

Hopefully, Microsoft will commit to not making breaking changes like this in the future, so that developers can run Windows Containers with process isolation while still having portability guarantees. We have opened moby issue #35247, and Microsoft appears to be working on finding the “right solution”.

In the meantime, BoxBoat recommends hyperv isolation for most Windows Container strategies. Contact Us to talk to a Windows Container expert today about the best strategy for your Windows Server application!