BoxBoat Blog

Service updates, customer stories, and tips and tricks for effective DevOps



Deploying Kubernetes Applications with Rancher

by Forester Vosburgh | Tuesday, Oct 22, 2019 | Rancher Kubernetes



In our last post on Rancher, we saw how quickly we can get Rancher up and running. We explored how easily we can enable monitoring and alerting, and we also saw how to deploy standard applications like WordPress using the Catalog feature.

In this post, we'll take our exploration of Rancher one step further and see how we can deploy our own Kubernetes applications into a Rancher managed cluster. We'll dig into what kinds of control an administrator has over deployments such as upgrade and rollback strategies.

Deployments with Rancher

As with exploring any new technology, we want to keep things simple to start out. So here we'll begin by deploying a hello-world application into our cluster. If you'd like to follow along, you'll need to have a Kubernetes cluster already in place with Rancher installed.

Deploying Hello-World

To start, let's navigate to the cluster we want to deploy into. From the Rancher home dashboard, click on “Clusters” and choose which cluster you want to use. Our setup only has one cluster, “local”, that we'll use.

Select a Cluster

Once you've selected a cluster, navigate to the “Projects and Namespaces” tab for that cluster. Notice that for our cluster, we already have a few namespaces and projects defined.

Projects and Namespaces

With Rancher, we can actually organize and manage multiple namespaces under a single Project entity. Projects can be configured to enforce certain resource quotas and limits. Likewise, Projects can be scoped to certain users with certain roles.

Configure a Project

So many useful features! However, let's still try to keep things simple and click on the default project.

Empty Project

We can see that there's nothing listed in our “Workloads” tab and that we're being prompted to create a deployment. Let's go ahead and do so by clicking on the “Deploy” button.

There are a lot of deployment configuration options available to us! In fact, these options match what we can configure in a Kubernetes Deployment manifest. For our purposes, let's simply set the name of the deployment and define our container image to be rancher/hello-world. Once set, we'll click on “Launch” to start the deployment.

Configure Deploy

If you're experienced with developing Kubernetes Deployment manifests, you'll notice that all of these options are things you may configure in your YAML files. What's nice about this feature is that you can avoid the hassle of developing YAML directly and use the Rancher deployment feature to scaffold your manifests by exporting a created deployment to a YAML file.
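For reference, the exported YAML for a deployment like ours would resemble a standard Kubernetes Deployment manifest. The sketch below is illustrative, not Rancher's exact export output — the labels and namespace are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default          # a namespace under our default Project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world        # illustrative label; Rancher generates its own
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: rancher/hello-world
```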

And that's it! We now have a running workload named “hello-world” in our default Project. We can add additional resources such as load balancers or volumes to our project and tie them to our new deployment, but we'll save those topics for another blog post. For now, let's see how we can apply updates and perform rollbacks for our new deployment.

Upgrading Hello-World

Our Hello World deployment was as simple as we could possibly make it: one replica of a naked container. No environment variables, no port mappings, no health checks, taints, tolerations, labels, annotations, etc. Let's go ahead and add some basic configuration to our deployment and apply our update. From the list of workloads, select “hello-world” and then click on “edit”.

Edit Deploy

Let's change the deployment to use 3 replicas and also add a new environment variable to each of our pods. Once we've got our new configuration in place, we'll hit the “Upgrade” button and watch our upgrade take place from the Workloads dashboard.
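In manifest terms, this edit corresponds to bumping the replica count and adding an env entry to the pod template. The variable name and value here are just examples, not anything Rancher requires:

```yaml
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: hello-world
          image: rancher/hello-world
          env:
            - name: GREETING            # example variable; use whatever you set in the UI
              value: "hello from rancher"
```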

During the upgrade, we saw that the number of containers jumped up to 4 and then dropped back down to our desired count of 3. What controls this behavior? It turns out that's the default upgrade policy that Rancher uses for deployments. Whenever an upgrade is performed, by default the new pods will be started and verified to be healthy, and then the old pods will be removed. More nice defaults out of the box!

You can also specify what kind of upgrade strategy you want to use when you edit the deploy in the Rancher UI.
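These strategies correspond to the standard Kubernetes Deployment strategy settings. The behavior we observed above (one extra pod during the upgrade, never fewer than our desired three) would come from a RollingUpdate configuration along these lines; the exact values Rancher sets by default may differ:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # start at most 1 new pod above the desired count
      maxUnavailable: 0    # never take an old pod down before its replacement is ready
```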

Upgrade Strategies

Rolling Back Hello-World

We've just seen how easy it is to perform a rolling upgrade on a workload running in our cluster, and we saw our Upgrade Strategy in action. Now let's consider the counterpart to upgrades: rollbacks. As with everything else we've done with Rancher, rollbacks are painless and straightforward.

Navigate back to the Workloads list for our default Project. Expand the option for our hello-world workload and select Rollback.

Rollback Workload

Rollback Workload Options

We can see that we can choose which revision in our revision history to roll back to. This is similar to using kubectl to perform a rollback against a Deployment object with kubectl rollout undo --to-revision. For now, let's select the only revision in our revision history that we can roll back to and hit “Rollback”.

Similar to our upgrade process, we're taken back to the Workloads dashboard and can see the updates being applied to our workload in real time.

And that's it! We've just successfully rolled back our deployment.

Final Words

We've just seen how simple it is to deploy and manage a Kubernetes application directly from the Rancher UI. This is just a small foray into the multitude of sensible defaults and management options that Rancher affords a Kubernetes shop. Stay tuned for more content around exploring Rancher!

Please reach out to us at BoxBoat if you want to know more about how Rancher can help your organization accelerate Kubernetes adoption. In addition, we offer a Managed Kubernetes service called BoxOps that is built using Rancher and other open source tools. Your engineers can keep developing code; we'll handle the rest, from continuous integration all the way to production deployment and high availability.