
Monitoring Kubernetes with Prometheus

by Bryton Hall | Thursday, Aug 8, 2019 | Kubernetes


Having Kubernetes up and running is great. However, failing to properly monitor the health of a cluster (and the applications it orchestrates) is just asking for trouble!

Fortunately, there are many tools for the job; one of the most popular tools is Prometheus:

an open-source systems monitoring and alerting toolkit

While Prometheus can scale to handle a large volume of metrics in a variety of complex setups (see the diagram below), in this post we're going to focus on the simple task of monitoring, and alerting on, a single metric from one application.

(Figure: The Prometheus Architecture)

Installation

There are several ways to install Prometheus but we're going to assume that you have a running Kubernetes cluster and are using Helm to install your apps. If you don't have Helm installed, then they have a Quickstart Guide that should get you going fairly . . . quickly.
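
As a rough sketch only (this post uses Helm 2, as the --name flag below gives away, and these exact commands assume a cluster with RBAC enabled; cluster-admin is convenient for a demo but far too broad for production), getting Helm's server-side component, Tiller, onto a cluster looks something like this:

$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
$ helm init --service-account tiller    # installs Tiller into the cluster and configures the local helm client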

For a free environment that already has Kubernetes running and Helm pre-installed, see our accompanying Katacoda scenario!


With that set up, let's use the stable/prometheus Helm chart. First, make sure you have the latest list of charts:

$ helm repo update

Now we can install the prometheus chart with the following command:

$ helm install --name prometheus stable/prometheus

If you're not familiar with Helm, heads up: this command will output a wall of text! There will be some information about the chart along with a list of resources (ConfigMaps, PersistentVolumeClaims, Pods, etc.) the chart has installed onto your cluster. Following that, there will be some help text provided by the chart authors. You can also retrieve this same information with helm status by first finding the name of your chart with helm list:

$ helm list
NAME            REVISION        UPDATED                         STATUS          CHART                   APP VERSION   NAMESPACE
prometheus      1               Wed Jul 10 09:51:19 2019        DEPLOYED        prometheus-8.14.1       2.11.1        default

Then execute helm status <chart-name>; in our case (since we named the release “prometheus” with the --name flag), that would be:

$ helm status prometheus

Persistent Volume Claims

Let's take a moment to make sure that both Prometheus and Alertmanager have successfully bound their PersistentVolumeClaims. We can check by listing all of the pods in our current namespace:

$ kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
prometheus-alertmanager-5c55869589-tb4tc         0/2     Pending   0          5m
prometheus-kube-state-metrics-76cd4b4cf9-7qhbh   1/1     Running   0          5m
prometheus-node-exporter-4lmxr                   1/1     Running   0          5m
prometheus-node-exporter-p2q29                   1/1     Running   0          5m
prometheus-node-exporter-rc4cm                   1/1     Running   0          5m
prometheus-node-exporter-w2x2p                   1/1     Running   0          5m
prometheus-pushgateway-76dbc6588c-tnl4w          1/1     Running   0          5m
prometheus-server-58cdb8b6b-dzfts                0/2     Pending   0          5m

Uh-oh! The Prometheus server and Alertmanager pods are just sitting there waiting to be scheduled (that is, they are Pending)! This is (very likely) because they both require an available PersistentVolume in order to bind their PersistentVolumeClaims; we can verify our suspicion by checking the status condition of the pods:

$ kubectl get pod -l app=prometheus,component=server -o jsonpath='{.items[].status.conditions[].message}'
pod has unbound immediate PersistentVolumeClaims (repeated 4 times)
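
If jsonpath isn't your thing, kubectl describe should surface the same message in the pod's Events section:

$ kubectl describe pod -l app=prometheus,component=server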

NOTE: if you're not familiar with PVs and PVCs, then check out the official docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/ Don't worry! You won't need to be an expert just yet: we'll give you steps below that will make this a problem for another day.

There are a large number of storage options in the Kubernetes ecosystem (including some that will dynamically provision a PersistentVolume on demand!). For now, let's take the quick-n-easy way out and create two local storage PVs that use directories on your nodes.

On whatever node(s) your cluster is running, create two directories; we'll put ours under /mnt but anywhere with some free space should be fine.

# mkdir /mnt/pv{1,2}

Then execute the following command (the whole block!); it will create two PersistentVolumes that your pods' claims can bind to.

$ kubectl create -f - <<EOF
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume1
spec:
  storageClassName:            # intentionally blank: matches claims that don't request a StorageClass
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv1"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume2
spec:
  storageClassName:            # intentionally blank: matches claims that don't request a StorageClass
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv2"
EOF
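
You can confirm the two volumes exist (and see whether they've been claimed yet) with:

$ kubectl get pv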

Now if we check the status of our PersistentVolumeClaims, we'll see that they are Bound and our pods are starting up!

$ kubectl get pvc
NAME                      STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-alertmanager   Bound    pv-volume1   10Gi       RWO                           5m
prometheus-server         Bound    pv-volume2   10Gi       RWO                           5m
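
And a quick re-check of the pods (the label selector below should match everything installed by the chart) will show the server and Alertmanager pods leaving the Pending state:

$ kubectl get pods -l app=prometheus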

Accessing the Web UIs

If we go back and take a look at the chart help text (the one from our helm status command), then we'll find instructions to expose the Prometheus UI:

export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090

Execute these either in another shell or append & to the last line and you'll be able to access the Prometheus interface at http://localhost:9090.

(Figure: the Prometheus dashboard)

While we're at it, let's do the same thing for Alertmanager:

export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9093

This will expose the Alertmanager UI at http://localhost:9093.

(Figure: the Alertmanager dashboard)

NOTE: kubectl port-forward is a dev-only technique! In a production environment, we would configure Ingress resources to expose our Services to the world outside of our cluster.
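
For reference, a bare-bones Ingress for the Prometheus server might look something like the sketch below; the hostname is made up, the Service name and port are what the stable/prometheus chart creates by default for a release named “prometheus”, and in real life you'd also want TLS and some authentication in front of it.

apiVersion: networking.k8s.io/v1beta1    # use extensions/v1beta1 on clusters older than 1.14
kind: Ingress
metadata:
  name: prometheus-server
spec:
  rules:
    - host: prometheus.example.com       # hypothetical hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: prometheus-server   # Service created by the chart
              servicePort: 80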

Now that we have some spiffy dashboards to reassure our boss the cluster is being monitored, let's actually configure these tools to monitor our cluster!

Configuring Prometheus

The stable/prometheus Helm chart comes with some initial configuration (you can view it right in the Prometheus dashboard at http://localhost:9090/config). For example, we can query the Prometheus database for the amount of free memory our nodes have in bytes; this metric has been named node_memory_MemFree_bytes. If we type it into the Prometheus expression browser and click on the graph tab, we can get a feel for how much unused memory our nodes have.

(Figure: a graph of free memory in Prometheus)
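
Raw bytes aren't the friendliest unit, and the expression browser will happily do arithmetic for you; as a quick example (using another standard node-exporter metric), the following query graphs free memory as a percentage of total memory on each node:

node_memory_MemFree_bytes / node_memory_MemTotal_bytes * 100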

NOTE: the built-in graphing capabilities of the Prometheus dashboard are a great way to get immediate insight into your metrics (and for testing out your fancy Prometheus queries!) but for a full-fledged visualization experience, we're better off using a tool like Grafana.

But how does Prometheus know how much free memory our nodes have in the first place? In your configuration, there is a long list of objects under the scrape_configs key. This is where we define how and where Prometheus will scrape metrics from our applications (and the Helm chart authors have been kind enough to create some sensible defaults for us!).
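
To give you a feel for what one of those entries looks like, here's a minimal scrape job in Prometheus' configuration format; the job name and target address are invented for illustration:

scrape_configs:
  - job_name: my-app                     # hypothetical job name
    scrape_interval: 30s                 # how often Prometheus polls this target
    static_configs:
      - targets: ['my-app.default.svc.cluster.local:8080']   # hypothetical endpoint serving /metrics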

You can find a list of the endpoints (what Prometheus calls “targets”) in the UI as well: http://localhost:9090/targets. This page will give you a status of all of the targets that Prometheus is configured to scrape and is an indispensable tool for debugging broken endpoints.
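
The same information is also available from Prometheus' HTTP API, which is handy for scripting (this assumes the port-forward from earlier is still running):

$ curl http://localhost:9090/api/v1/targets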

Configuring Alertmanager

Up until now, I've mentioned Alertmanager but haven't really said what it's doing here.

Prometheus is configured with alerting rules, which define the conditions on your metrics that should trigger an alert (along with some basic information about that alert). However, Prometheus is not meant to be your complete alerting solution; it won't manage your alerts, silence them, aggregate duplicates, or integrate with your other messaging tools (email, chat, on-call notification systems, etc.).

This is where Alertmanager comes in. If you scroll back up to that complicated diagram I breezed over, then you'll see Alertmanager in there, doing its due diligence in connecting your monitoring solution with real, live human intervention!

Before we configure Alertmanager, we want to add an alert to Prometheus; let's create a values.yaml file to configure our helm chart and add a single alert to it.

serverFiles:
  alerts:
    groups:
      - name: Instances
        rules:
          - alert: InstanceDown
            expr: up == 0
            for: 1m
            labels:
              severity: page
            annotations:
              description: '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute.'
              summary: 'Instance {{ $labels.instance }} down'

This alert will trigger if a target is down for 1 minute. Now we can update our Helm chart to use this new configuration:

$ helm upgrade -f values.yaml prometheus stable/prometheus

Once that completes, the new alert will be listed in the Prometheus UI at http://localhost:9090/alerts.

(Figure: an alert in the Prometheus dashboard)

To see this alerting business in action, let's destroy one of our target endpoints. I'm going to delete the pushgateway Deployment (because we won't be using it here anyway).

$ kubectl delete deployment -l app=prometheus,component=pushgateway

If we wait for the 1 minute that we configured the alert for, we'll find that Prometheus did in fact send the alert off to Alertmanager.
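
You can watch this from the Prometheus side too: Prometheus exposes a synthetic ALERTS time series for every active alert, so a query like the following in the expression browser will show our rule moving through its pending and firing states:

ALERTS{alertname="InstanceDown"}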

(Figure: an alert in the Alertmanager dashboard)

Getting Alerts Where You Want Them

Nobody wants to stand around watching the Alertmanager UI, waiting for something to go wrong, so let's set it up to forward our alerts to Slack. There are several available receivers, and even a generic webhook to integrate that-obscure-tool-that-no-one-else-uses.

To add a Slack receiver, let's update our values.yaml chart configuration by adding the following snippet.

alertmanagerFiles:
  alertmanager.yml:
    route:
      receiver: slack-me

    receivers:
      - name: slack-me
        slack_configs:
          - channel: ''
            api_url: ''
            send_resolved: true

Be sure to change the following values under receivers[].slack_configs:

  • channel: any valid Slack #channel or @user name
  • api_url: the URL from your Incoming Webhooks configuration

Now upgrade the Helm chart and, if your alert is firing, you'll get an earful from Slack!

(Figure: a Slack message from Alertmanager!)

Of course, this is just the minimal possible setup. For a real alert, you'll want to add more information by customizing the message with pertinent details and/or links to other tools (whether that's a Grafana dashboard, or a Kibana search/visualization/dashboard, or something else entirely).

I haven't talked about routes but, in short, they offer a decision tree for alert processing and routing. For example, you can notify your database team in their Slack channel for database alerts without polluting the frontend team channel with irrelevant messages.
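
As a rough sketch (the team label, receiver names, and channels here are all invented; in our values.yaml this would live under alertmanagerFiles.alertmanager.yml just like before), such a route tree could look something like this:

route:
  receiver: slack-me                     # default receiver when no child route matches
  routes:
    - match:
        team: database                   # hypothetical label attached by your alerting rules
      receiver: slack-database-team

receivers:
  - name: slack-me
    slack_configs:
      - channel: '#alerts'               # hypothetical channel
        api_url: ''                      # your Incoming Webhook URL
        send_resolved: true
  - name: slack-database-team
    slack_configs:
      - channel: '#database-alerts'      # hypothetical channel
        api_url: ''
        send_resolved: true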

Conclusion

Now that you know how to use Prometheus and Alertmanager to bring potential cluster and application issues directly to the correct team of problem-solvers, you can go wild implementing various monitoring techniques that give you unparalleled insight into the health of your Kubernetes cluster (not too wild, though; there is such a thing as too many alerts!).

And, as always, BoxBoat is here to train personnel, provide consultation, support your services, and/or manage your custom Kubernetes solution. Just ping us with your details and we can talk about what solution(s) will best help you provide value to your customers!