BoxBoat Blog

Service updates, customer stories, and tips and tricks for effective DevOps

Istio Part 1 - Kubernetes Weight Based Traffic Management

by Jesse Antoszyk | Monday, Sep 23, 2019 | Kubernetes Service Mesh Istio

Service mesh is a technology being rapidly adopted by organizations leveraging Kubernetes and other container platforms for their application deployments. A service mesh provides advanced routing, observability, and security between your microservices.

Istio is a platform-independent service mesh that provides a series of Custom Resource Definitions (CRDs) for use with Kubernetes. This allows for a declarative, configuration-based model for traffic management, a powerful capability to enhance the security and function of your microservices.

In this blog post, we will explore the basics of traffic management using weight-based routing with Istio.

Related: What is Istio - Intro to Kubernetes Service Mesh

What is Weight-Based Routing?

Weight-based routing allows you to split traffic between versions of an application. This technique simplifies A/B testing, where a subset of users receives a new version of an application while the remaining users receive the current version. It also enables canary deployments, where a small percentage of traffic is routed to a new version of an application to validate it before sending all traffic to it and decommissioning the old version.
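As a preview of what this looks like in practice, a weight split is expressed declaratively. A hypothetical VirtualService (names and percentages are purely for illustration) sending 90% of traffic to one version and 10% to another might look like:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app            # hypothetical application name
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app
        subset: v1        # 90% of traffic stays on the current version
      weight: 90
    - destination:
        host: my-app
        subset: v2        # 10% goes to the candidate version
      weight: 10
```

We will build up to a working version of this configuration for our demo application over the course of this post.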

Kubernetes Istio Setup

This tutorial uses Istio 1.2.2 to demonstrate some of Istio’s traffic management capabilities. If you want to follow along with the blog post, there is an accompanying Katacoda scenario, or you can install Istio on Minikube as described in the Istio Docs. The Katacoda scenario features a minimal Istio install to reduce the resource requirements.

We recommend using the Katacoda scenario to guide you through the installation process; it comes with everything necessary to run through the code, including your own cloud-based Kubernetes cluster.

All of the code used for this project can be found on GitHub. If you are running locally, clone this repository somewhere convenient.

Demo Istio Application

The typical Istio demo uses the Bookinfo application to describe many of Istio's concepts. Bookinfo is a very useful demo application, but I found that a much simpler application helped to isolate the concepts described here.

Instead, our application consists of 3 components: the colors application, a logging backend, and a logging frontend. The colors application will be used to demonstrate the weight-based routing concepts. Colors is a very basic Express application that takes 2 environment variables: a color and a logging backend. When a user visits the site, the specified color is displayed as the background color, and the request is logged to the logging backend. The logging backend keeps a tally of how many times each color has been served, which the logging frontend uses to display the percentage of times each color was served. Istio's weight-based routing rules will be applied to the colors application; the logging applications exist only to display traffic statistics.

You can find the source code in the ./demo-app/ directory of our GitHub repository for this blog post.

Vanilla Kubernetes Resources

We will start by creating some Kubernetes objects. It is strongly recommended that you have a firm grasp of Kubernetes concepts before proceeding.

Related: Kubernetes Fundamentals Resources

Each of our applications requires a Deployment and a Service. We will be creating 5 Deployments: a logger-backend, a logger-frontend, and 3 versions of the colors application. Each colors Deployment has a version label specifying whether it is version 1, 2, or 3. These labels will be used later to route to specific versions.

Each of these Deployments also has an associated Service. It is important to name the ports with the protocol used in order to take advantage of Istio's routing features. Also of note: there is only one Service for colors even though we have 3 versions. By default, Kubernetes will route to all 3 backends when this Service is used.
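To illustrate the port-naming convention, a Service for colors might look roughly like the following (the port numbers and selector labels here are assumptions; see services.yaml in the repository for the actual definitions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: colors
spec:
  selector:
    app: colors        # matches all 3 versions; the version label is deliberately NOT in the selector
  ports:
  - name: http         # the "http" name tells Istio to treat this as HTTP traffic
    port: 80
    targetPort: 8080   # assumed container port
```

Because the selector omits the version label, all 3 colors Deployments sit behind this one Service.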

Apply both the deployments and the services by running the following:

$ kubectl apply -f ./traffic-weighting/services.yaml
$ istioctl kube-inject ./traffic-weighting/deployments.yaml | kubectl apply -f -

Where Did this Extra Container Come From?

You may have noticed that each of our Deployment YAMLs only specified one container, but our pods show 2 containers. What gives?

The istioctl kube-inject command generates a deployment that defines an init container and a sidecar container used to run Istio's proxy alongside our application.

The sidecar container contains Envoy, an open source proxy created by Lyft and used by Istio. Envoy seamlessly intercepts all requests to and from the application and handles the actual routing of the requests. This allows it to inspect network traffic and make informed decisions about where traffic needs to go and which rules need to be applied to it. Injection can also be done automatically by labeling a namespace with istio-injection=enabled. To take advantage of automatic injection, the sidecarInjectorWebhook option must be enabled.
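If you prefer automatic injection over running istioctl kube-inject by hand, labeling a namespace looks like the following (assuming the default namespace and that the injection webhook is enabled in your install):

```shell
# Enable automatic sidecar injection for the default namespace
kubectl label namespace default istio-injection=enabled

# Confirm the label was applied
kubectl get namespace default --show-labels
```

With the label in place, any pod created in that namespace gets the init and sidecar containers injected at admission time, with no changes to your Deployment YAML.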

Istio Resources

Istio Gateway (Ingress)

While the Kubernetes Ingress provides a lot of power and flexibility, especially when combined with LetsEncrypt as we show in our Automatic Let's Encrypt Certificates blog, Istio makes use of its own custom resource for managing ingress traffic. Istio's Ingress Gateway allows Istio to apply its monitoring and routing rule facilities to ingress traffic.

We are now going to create a gateway for our frontends. This will act as an ingress for our traffic. We will be using path-based routing for our services so only one domain or IP address is needed. We have defined the gateway to accept traffic from any host header. Notice that the gateway.yaml defines its hosts as "*".
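A Gateway matching this description would look roughly like the following (the metadata name is an assumption; the exact fields may differ from the repository's gateway.yaml):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gateway       # assumed name
spec:
  selector:
    istio: ingressgateway  # bind to Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                  # accept traffic regardless of host header
```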

Apply the gateway YAML like so:

kubectl apply -f ./traffic-weighting/gateway.yaml

Istio Virtual Services

Now we will create a Virtual Service. A Virtual Service binds to a gateway and defines routes to the upstream hosts in Kubernetes. Our Virtual Service defines the external host name and the path the Service will respond to, as well as the upstream to send the requests to.
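A sketch of what such a VirtualService might contain, assuming the path-based routing described earlier (the gateway name here is hypothetical; consult virtualservice.yaml in the repository for the real definition):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: colors
spec:
  hosts:
  - "*"                    # match any external host name
  gateways:
  - demo-gateway           # assumed gateway name; binds this routing to the ingress gateway
  http:
  - match:
    - uri:
        prefix: /colors    # path-based routing: /colors goes to the colors Service
    route:
    - destination:
        host: colors       # the Kubernetes Service created earlier
```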

Apply the virtual service YAML like so:

kubectl apply -f ./traffic-weighting/virtualservice.yaml

In a browser, open the IP or DNS name of your load balancer. The load balancer's IP can be found by running: kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'. Our logger frontend responds to requests on / and our application is served on /colors. Open up the application and refresh the page a few times. If you refresh colors enough, you will notice that the requests are being distributed in a round-robin fashion. The logger frontend should show an even split for the requests.
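If you would rather exercise the application from a terminal, the steps above can be scripted (this assumes your cluster exposes the ingress gateway through a load balancer with an external IP):

```shell
# Capture the ingress gateway's external IP
INGRESS_IP=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Send a batch of requests; with no weighting rules applied yet,
# they should round-robin evenly across the three color versions
for i in $(seq 1 30); do curl -s "http://${INGRESS_IP}/colors" > /dev/null; done

# The logger frontend at / shows the resulting split
echo "Open http://${INGRESS_IP}/ to see the traffic statistics"
```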

Istio Destination Rules (Traffic Weighting)

We are now going to create Istio destination rules. Destination rules allow us to define subsets, which filter endpoints based on labels. For this example, we define 3 subsets, one for each version of our application. The subsets use the version labels that were added to our Deployments to determine the appropriate application version to route to.

We also update our virtual service definition for colors to make use of these subsets. By adding multiple route destinations to our virtual service and providing weight values for them, we can control the flow of traffic to these services.
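The subsets might be defined along these lines, assuming the version label values are v1, v2, and v3 (see destinationrule.yaml in the repository for the real definition):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: colors
spec:
  host: colors             # the Kubernetes Service these subsets carve up
  subsets:
  - name: v1
    labels:
      version: v1          # pods labeled version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```

The weights themselves live in the VirtualService: each route destination names one of these subsets and carries a weight, and the weights across the destinations should sum to 100.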

kubectl apply -f ./traffic-weighting/destinationrule.yaml

In a browser, open the logger interface and click the “Delete Logs” button to clear the previous statistics. In another window, open the application at /colors. If you refresh colors enough, you will notice that the requests are being distributed according to the weights defined.

What Happened?

At a very high level, traffic came in to the Istio ingress gateway, which is backed by the Envoy proxy. Upon seeing the request, and knowing the virtual service rules, the proxy rolls a weighted die to decide which backend to route to. It then routes to a backend based on the list of endpoints it has for the selected subset. With enough repetitions of this process, the percentage of traffic going to each backend will approach the percentages we assigned.

Routing Users Consistently

In this example, we created objects that allow us to manage traffic to multiple backends based on defined percentages. In the case of A/B testing or canary deployments, however, it is often desirable to ensure that the same user gets the same page each time they refresh. To achieve this, we want to set up sticky sessions. With Istio this is a straightforward process: in the destination rule we can define a consistent hash load balancing policy to provide session affinity based on the user's request. Below is an example of a policy that hashes based on the source IP of the request.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: colors
spec:
  host: colors
  trafficPolicy:
    loadBalancer:
      consistentHash:
        useSourceIp: true
    ...
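Source-IP hashing can misbehave behind NATs or corporate proxies, where many users share one IP. An alternative is to hash on an HTTP cookie, which Envoy will generate for the client if it is absent. A sketch (the cookie name and TTL here are arbitrary choices, not values from our repository):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: colors
spec:
  host: colors
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: session-affinity   # arbitrary cookie name
          ttl: 3600s               # cookie lifetime; Envoy sets it if missing
```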

Summary

We have shown how Istio can be used to manage traffic to multiple backends based on defined percentages, but this just scratches the surface. Weighting rules can be combined with other routing rules to allow for more advanced use cases such as sticky sessions. Using the basics of traffic weighting alongside other routing features, A/B testing or canary deployments can be achieved quickly.

Beyond the routing rules, Istio also provides features for observability and security, making it a holistic approach to traffic management and inspection. In future updates, this series will explore these features of Istio.

Istio is Just One Piece of the Puzzle

Istio is an amazingly powerful tool, but it is just one piece of the DevOps puzzle. There is a lot of work to get from code to deployment to management and maintenance.

The DevOps specialists at BoxBoat can help you and your organization learn, accelerate, and maintain your DevOps tooling and workflows at scale. We have experts in Docker, Kubernetes, Istio, and many more proven technologies.

If you are looking to get the benefits of containerization and orchestration but don’t want the hassle of managing infrastructure, BoxOps provides a managed DevOps stack so you can get back to your applications.

For more information on Istio or other container technologies, please get in touch with a BoxBoat DevOps specialist today.
