Does Patching Have You Down? How Moving Your Critical Apps to Containers Changes the Game
We’ve all been there. It’s late Friday (or maybe Saturday) night, and you have a list of 30 servers that need OS or application patching. You feel pretty good about it: after all, you’ve tested everything in QA and pre-prod with no issues, and that patch management software you implemented a few years ago helps with some of the heavy lifting. All that said, you know Monday morning has a better-than-average chance of being what we call in IT “A Bad Day.”
I used to call this “Patch Anxiety,” and anyone who’s worked in Enterprise IT knows what it feels like. There’s some good news here, however. As the Enterprise accelerates its adoption of containerization technology using products like Docker EE from Docker and OpenShift from Red Hat, one of the beneficial side effects is a significant reduction in “Patch Anxiety.” To understand why, it’s important to understand how deploying apps in containers differs from deploying to traditional VMs.
The traditional way to stand up a brand-new app is to find a server with some capacity, create a VM (or several), attach some storage, install the binaries (.exe, .jar, .war, etc.), make sure the environment variables are all correct for the target environment, then tweak things until everything works. When it’s time to patch, you obviously can’t do all of that again over and over, so you just touch the part of the stack that needs to be updated. Then you test it, wait for a change window to make the same changes in production, and hope everything works the same way it did in test (which it always does, because pre-prod/staging is always exactly the same as production, right?). As they say, “your results may vary.”
Containers abstract a lot of that work away and introduce some guard rails and benefits intrinsic to the technology. Let’s follow the same logic for deploying a new containerized app.
There’s no VM to build or provision because you already have a Swarm- or Kubernetes-managed cluster running in all your major environments with plenty of capacity. Instead, you write a Dockerfile to tell the engine what to pull into the container image (usually a tiny slice of the OS filesystem, the app, and a few dependencies), you build the container image, then drop it into the cluster, where it runs in its own little world without any idea it’s one of perhaps hundreds of other workloads running in the same space.
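The Dockerfile step above might look like the following minimal sketch. The base image, paths, and app name here are illustrative assumptions, not a prescription for any particular stack:

```dockerfile
# Start from a slim base image: only the OS pieces the app actually needs
FROM openjdk:11-jre-slim

# Copy the application binary into the image
COPY target/myapp.jar /opt/myapp/myapp.jar

# Environment-specific values get injected at deploy time, not baked in here
ENV JAVA_OPTS=""

# The port the app listens on
EXPOSE 8080

# Run the app as the container's main process
ENTRYPOINT ["java", "-jar", "/opt/myapp/myapp.jar"]
```

From there, a `docker build -t myapp:1.0 .` produces the image, and the same image gets pushed to a registry and pulled into every environment's cluster.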
Let’s pause here, because the benefits are already pretty big at this point. First, you’ve only included the smallest parts of the OS filesystem you need in order to run the app, so no more having to patch a whole server when Microsoft releases a Minesweeper vulnerability hotfix. This reduced OS footprint significantly cuts down both the number of applicable patches and the vulnerabilities that show up on your security team’s scans. Second, the container will run exactly the same way in your production cluster as it does in your test cluster. No more making the rounds tweaking config files or settings.
Things get even better when you do need to patch.
Instead of patching one small component of the application stack running in prod and hoping it goes as planned, you just rebuild the container image with the latest patches and test it. Once you’re as confident as possible, you remove the container running in production and replace it with the one you just tested. That’s pretty cool.

But it gets cooler: if your application has been written (or changed) to support running as multiple instances, then instead of waiting for a change window to swap out the containers, you can deploy the new one next to the old one and send just enough work to the new container to verify it works. When you’re comfortable that all is well, just let connections bleed off the old container and destroy it, leaving the new container (or containers) to carry on serving your customers. In fact, as your development team starts to embrace this architecture (and the microservice-centric development style that makes full use of containerization), the benefits continue to add up. But that’s a topic for another blog post.
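On Kubernetes, the "deploy the new one next to the old one, then bleed off the old" pattern described above is exactly what a Deployment's rolling update strategy automates. Here's a minimal sketch; the app name, image tag, and registry are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # bring up one new (patched) pod alongside the old ones
      maxUnavailable: 0    # never drop below the desired replica count mid-rollout
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0.1   # the rebuilt, patched image
```

Changing the image tag (for example with `kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.0.2`) triggers the rollout, and `kubectl rollout undo deployment/myapp` rolls back if the new container misbehaves. Docker Swarm offers the equivalent via `docker service update --image`.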
Contact BoxBoat today to get started with your container journey.
BoxBoat Technologies is an authorized CNCF Kubernetes Solutions Partner and Docker Premier Partner. BoxBoat offers services to accelerate enterprise adoption of modern DevOps tool chains, container technologies, and cloud solutions.