BoxBoat Blog

Service updates, customer stories, and tips and tricks for effective DevOps

When to Containerize Legacy Applications — And When Not to

by Will Kinard | Tuesday, Aug 2, 2016 | Docker


As first posted at The New Stack – August 2, 2016


Sitting across the table in early talks with new customers, I often find myself thinking of the Blendtec marketing campaign that started almost 10 years ago, whereby the creator of Blendtec blenders discovers through his YouTube channel which objects will blend in his line of blenders. The opening tagline was always, “Will it blend?” He then goes on to show that the chosen object of the day will, in fact, blend in the Blendtec. What results is usually a smoky, soupy mess, but it did, in fact, blend!

In many of BoxBoat’s new engagements, we are presented with legacy applications and asked a similarly simple question: “Can it be containerized?” With few exceptions, my answer is almost always “yes.” Our goal is then to demonstrate a viable path forward to migrate these applications and deliver the incredible benefits of containerization without the hot, soupy mess.

Why is it that “all” applications eventually become “legacy” and hard to maintain? Most developers understand how difficult it is to maintain pace, control scope creep, and keep code organized while juggling business requirements and the necessary speed to market.

Microservice architectures help to address these issues, but what of your current development efforts, including the ones that have long been code-complete? “Legacy” applications, meaning those that have become technologically out of date and are due for replacement, or those that simply don’t fit the latest fad of distributed architectures, can still find a place to live in this “New Stack” environment. A strategic path forward, along with an understanding of the implications of containerizing legacy applications, can go a long way in moving your organization forward.

A Word on Docker and Microservices

Misconceptions arise in all up-and-coming technologies, especially those in a constant state of improvement. Markets rise and fall to the tone of hype, and the promises of Docker have been no different. So no, Docker doesn’t make your app automatically scalable (you need to code for that), it can’t run “everywhere” (binaries still need to run on their compiled architecture), and end-to-end development workflow is still maturing. For the rest of the world that understands Docker isn’t just a lightweight virtual machine, many are still coming to grips with what it looks like for their legacy applications to be containerized, or whether it’s even beneficial to do so.

Microservices are an architectural pattern in which complex applications are composed of small, independent processes that communicate with each other using language-agnostic APIs. In other words, not the monolithic application you just finished that three-week commit on, which spanned from the UI through multiple service layers and a database schema change for it all to mesh together! The microservice architecture is riding the rising slope of the hype cycle, but for noticeable reasons (just see what Amazon did); at scale, it just makes sense. The benefits in deployment, team code ownership, maintainability, and scaling outweigh the downsides of managing many individual services. In fact, the ability of Docker and containerization to keep this architecture sane is one of its key benefits.

Unfortunately, not all of us have the luxury of starting fresh on new applications. That’s OK, as tremendous benefit can still be derived from adopting containerization, and perhaps ultimately the microservices model, with our legacy applications.

Everyone Owns a Legacy Application

Anything can be containerized. Just because it can be, however, doesn’t mean it should be.

An application running in a container is, at the end of the day, still a Linux process managed by the host. There are now fairly robust mechanisms for handling networking, monitoring, and persistent storage for stateful containerized applications. Here are some areas that container technologies handle fairly well:

Networking

With the introduction of overlay networking in Docker, private multi-host networks are created by the daemon itself, allowing easy communication between containers and services while providing granularity for security. Popular orchestration frameworks such as Kubernetes and Mesos/Marathon also provide mature container networking out of the box. Legacy applications can take advantage of this without altering their current network stack: to the application, a network is just a network, while management tooling handles the details outside the scope of its runtime.
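As a concrete sketch (the image, container, and network names here are illustrative assumptions), a multi-host overlay network lets a legacy app and its database find each other by name without any change to the application’s own network code:

```shell
# Create a private multi-host overlay network (on modern Docker this
# needs Swarm mode plus --attachable; older releases used an external
# key-value store such as Consul)
docker network create --driver overlay --attachable legacy-net

# Attach the database and the legacy app; Docker's embedded DNS lets
# them resolve each other by container name
docker run -d --net legacy-net --name legacy-db postgres:9.5
docker run -d --net legacy-net --name legacy-app my-legacy-app:1.0
```

The application simply connects to the hostname `legacy-db`; where that container physically runs is the daemon’s concern, not the application’s.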

Monitoring

Important to any seasoned operator, application health and performance monitoring contribute to uptime and provide feedback for development performance tweaks. It is important for monitoring agents to treat containers as first-class citizens, since containers can move around between physical (and virtual) hosts. Several mainstay solutions claim container monitoring, with Sysdig coming to mind as a robust monitoring solution with a focus on containers and their services. Once again, from a legacy application’s perspective, monitoring is no different than it has always been.
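Even without a dedicated agent, the Docker CLI itself exposes the basic signals a monitoring pipeline consumes; a minimal sketch:

```shell
# Point-in-time CPU, memory, and I/O figures for all running containers
docker stats --no-stream

# Stream lifecycle events (such as containers dying) that an external
# monitoring agent can subscribe to
docker events --filter event=die
```

Container-aware agents build on exactly these primitives, correlating per-container metrics and events even as containers are rescheduled across hosts.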

Persistent Storage

This one seems to trip people up the most. How can a containerized application that can be moved between physical hosts maintain any type of state or storage? Docker and accompanying orchestration solutions provide robust third-party plugin support to take the complexity out of such a task. To name a few, GlusterFS, Flocker, and AWS S3 support distributed volume needs, each with its own implementation benefits. Containerize a database? Yep. Preserve web session state? That can be handled too.
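At its simplest, state survives container replacement through named volumes; the volume and image names below are illustrative:

```shell
# Create a named volume; swap "local" plugins in (e.g. Flocker or
# GlusterFS drivers) when the data must follow the container across hosts
docker volume create --name pgdata

# Mount it at the database's data directory; the container can then be
# destroyed and recreated without losing the data
docker run -d --name legacy-db -v pgdata:/var/lib/postgresql/data postgres:9.5
```

The legacy application still just writes to a directory; the volume driver decides where those bytes actually live.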

The Value Add

As discussed above, the containerized application itself doesn’t need to know how things are being handled from the outside; but that is the point, isn’t it? Containerizing inherently decouples the application’s file system and runtime from its host, allowing for the benefits we know and love. With the right approach, a legacy application, or almost any application for that matter, can be containerized with these immediate benefits:

  1. Platform support: Your application is now supported on any Linux platform that runs Docker, as well as the popular cloud platforms (AWS, Azure, GCE).

  2. Installation, upgrade, and rollback processes come built-in: Error-prone custom upgrade routines are no longer a necessity as Docker seamlessly handles this for you.

  3. The “Lift and Shift”: Rearchitecting applications can be costly and time-consuming, but moving a legacy app to the cloud, along with containerized portability, can be a major cost win for apps that just aren’t worth touching any longer.

  4. Instant environment replication for development and test: The ability to quickly recreate copies of legacy apps for integration development or testing scenarios is game changing.
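To make points 1 and 2 concrete, here is a minimal, hypothetical Dockerfile for a legacy Java service (the base image, package, paths, and jar name are all assumptions); tagged images then give you upgrade and rollback essentially for free:

```dockerfile
# Hypothetical legacy Java service; names and paths are illustrative
FROM ubuntu:16.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends openjdk-8-jre-headless \
    && rm -rf /var/lib/apt/lists/*
COPY legacy-app.jar /opt/legacy/legacy-app.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/legacy/legacy-app.jar"]
```

Building this as `my-legacy-app:1.1` makes an upgrade a matter of starting a container from the new tag, and a rollback a matter of starting one from `my-legacy-app:1.0`; no custom upgrade routine required.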

Where the Bottom Falls Out (Today)

For the most part, I’ve made this topic sound like nothing but blue skies ahead. And it can be, for most legacy applications, with the right approach. However, it’s important to note environments where containerization is either not a viable path forward, or where the result is simply not worth the effort:

  1. Microsoft Windows applications and services. Containerization is a technology born from primitives added to the Linux kernel. Microsoft has been very quick to catch up, but Windows native containerization is currently only available in preview releases of Windows Server 2016. Look for this area to change rapidly.

  2. Client-side GUI applications.

  3. Applications with a hard dependency on a special hardware architecture: Think mainframes.

What’s My Point

At OSCON 2016, Dell senior Linux software engineer Jose De La Rosa presented on legacy application containerization. He explained that, while perhaps not explicitly following best practices, containerizing a legacy application is well worth it if it solves an existing problem. In his case, installation, upgrade, and rollback are now easy, and his application runs on any Linux platform that supports Docker. It wasn’t the smallest image, it carried host dependencies in the form of a kernel module, and it needed privileged access, but the point remained: there was a benefit to be gained. For the average business use case, this is a win. And perhaps in the future, steps will be taken to create a more container-friendly solution.

Microservices are fancy, but sometimes we just need to solve the problems at hand and move on. Most legacy applications can quickly be containerized in their current form, creating several advantages with no downsides relative to the previous implementation. In BoxBoat’s engagements, we often find this to be the case.

BoxBoat Accelerator

Learn how to best introduce Docker into your organization. Leave your name and email, and we'll get right back to you.
