
Admission Controller for Secure Supply Chain Verification - Kyverno

by Parth Patel | Monday, Dec 6, 2021 | Secure Supply Chain | Open-Source Security

Admission controllers are an important part of ensuring that production clusters only deploy signed and trusted applications. Running these tools within your cluster allows for automated detection and enforcement of your organization's policies. They can be especially useful when dealing with supply chain security! Open Policy Agent Gatekeeper has become one of the standards used for a variety of validating and mutating webhooks. But another tool, Kyverno, has been growing in popularity (and functionality) to meet the challenges of supply chain security. Let's discuss how this tool can be used in conjunction with Tekton Chains from my previous post to automatically ensure that only verified and trusted images run in production! Be sure to read the earlier post regarding Tekton Chains for a background understanding of where the image signing occurs and how it helps meet the SLSA (Supply-chain Levels for Software Artifacts) security levels.

Kyverno

Kyverno is a policy engine designed with simplicity and Kubernetes in mind. It allows the user to define a set of policies as Kubernetes resources, and it can be used to validate, mutate, and generate additional Kubernetes resources. Plus, these policies don't require the user to learn a new programming language, as they're written in YAML or JSON. While these are all useful in a production environment, we are focusing on how Kyverno can help us with supply chain verification.

Recently, Kyverno has been updated to add multiple image verification capabilities; these can help ensure that images are signed and carry proper provenance. The two ClusterPolicies that we will be looking at today verify image signatures and verify image attestations. To supply the keys the policies will evaluate against, we will use a ConfigMap shared between the policies. The Kyverno verifyImages rule uses Cosign to verify container image signatures and attestations stored in an OCI registry. The rule matches an image reference (wildcards are supported) and specifies a public key to be used to verify the signed image or attestations.

Verify Image Signature

The verify image signature policy, as the name implies, checks that an image is signed with a valid key pair before the image is allowed to run. Let's look at this policy in depth:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image
spec:
  validationFailureAction: enforce
  background: false
  webhookTimeoutSeconds: 30
  failurePolicy: Fail
  rules:
    - name: check-image
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
      - image: "ghcr.io/kyverno/test-verify-image:*"
        key: |-
          -----BEGIN PUBLIC KEY-----
          MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE8nXRh950IZbRj8Ra/N9sbqOPZrfM
          5/KAQN0/KjHcorm/J5yctVd7iEcnessRQjU917hmKO6JWVGHpDguIyakZA==
          -----END PUBLIC KEY-----          
      - image: "gcr.io/tekton-releases/github.com/tektoncd/*"
        key: |-
          -----BEGIN PUBLIC KEY-----
          MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEnLNw3RYx9xQjXbUEw8vonX3U4+tB
          kPnJq+zt386SCoG0ewIH5MB8+GjIDGArUULSDfjfM31Eae/71kavAUI0OA==
          -----END PUBLIC KEY-----     

The important pieces to take away: validationFailureAction is set to enforce, which tells Kyverno to block (enforce) the resource being validated if the policy is not met; the other option is to allow it but report the violation (audit). failurePolicy defines how unrecognized errors should be handled by the admission controller. resources:kinds defines what the policy will be run against; in this case, we are telling Kyverno that we want to check Pods in all namespaces. We can constrain this to specific namespaces by adding namespaces under resources and specifying the namespace names, as in the sketch below. Under verifyImages, image defines the image reference (wildcards supported) that the rule matches, and key defines the public key of the key pair that was used to sign the image. Once the policy is active, it will not allow any images pulled from ghcr.io/kyverno/test-verify-image unless they are signed by the specified key pair.
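
For example, a minimal sketch of the same rule constrained to a single namespace could look like the following (the prod namespace name is illustrative, and the public key is elided):

rules:
  - name: check-image
    match:
      resources:
        kinds:
          - Pod
        namespaces:      # only evaluate Pods created in these namespaces
          - prod         # illustrative namespace name
    verifyImages:
    - image: "ghcr.io/kyverno/test-verify-image:*"
      key: |-
        -----BEGIN PUBLIC KEY-----
        <same public key as above>
        -----END PUBLIC KEY-----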

Verify Image Attestations

The previous cluster policy allows us to verify that the image is signed by a trusted source. But what about checking and verifying the attestations for the image? The below ClusterPolicy allows for checking such provenance files. If you recall from the previous post, Tekton Chains can be used to sign an image and also create a provenance file based on the SLSA Provenance format. Please refer to my previous post, where I go into more detail about what the SLSA provenance is and how we can use Tekton Chains to create it for us. Let's take a deeper look at the policy and break it down:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: attest-code-review
  annotations:
    pod-policies.kyverno.io/autogen-controllers: none
spec:
  validationFailureAction: enforce
  background: false
  webhookTimeoutSeconds: 30
  failurePolicy: Fail
  rules:
    - name: attest
      match:
        resources:
          kinds:
            - Pod
          namespaces:
            - prod
      verifyImages:
      - image: "registry.io/org/*"
        key: |-
          -----BEGIN PUBLIC KEY-----
          MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEHMmDjK65krAyDaGaeyWNzgvIu155
          JI50B2vezCw8+3CVeE0lJTL5dbL3OP98Za0oAEBJcOxky8Riy/XcmfKZbw==
          -----END PUBLIC KEY-----          
        attestations:
          - predicateType: https://slsa.dev/provenance/v0.2
            conditions:
              - all:
                - key: "{{ builder.id }}"
                  operator: Equals
                  value: "tekton-chains"        
                - key: "{{ recipe.type }}"
                  operator: Equals
                  value: "https://tekton.dev/attestations/chains@v1"

Note: Each verifyImages rule can be used to verify signatures or attestations, but not both. To use both, two ClusterPolicies need to be created.

Similar to the above policy, there are validationFailureAction, failurePolicy, and resources:kinds, which we have discussed previously. Let's focus on the differences between the two policies. verifyImages:image is the location where the provenance file is stored. Remember to set your Tekton Chains configuration to store artifacts in OCI so the provenance is automatically uploaded when the image is pushed (more information about how to set this in my previous post). The key defines the public key of the key pair that was used to sign the provenance file. The attestations section drills into specific fields of the provenance to check that expected values are set. Firstly, the predicateType must be specified. As of this post, the most up-to-date SLSA provenance version is v0.2, but other formats and predicate types can also be checked. The conditions block lists the key/value checks applied to the keys and values present in the provenance file. The operator defines how the values are evaluated, such as Equals, NotEquals, In, GreaterThan, etc.
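
For instance, a hedged variation of the conditions block above that only asserts that some builder id is present, rather than pinning it to a specific value, might look like this:

attestations:
  - predicateType: https://slsa.dev/provenance/v0.2
    conditions:
      - all:
        - key: "{{ builder.id }}"
          operator: NotEquals
          value: ""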

ConfigMap for Public Keys

Specifying the public key inside each cluster policy is inefficient; if the key changes, it will need to be updated in multiple places. Kyverno's new feature allows you to store the public keys for multiple repositories in a ConfigMap that can be referenced from within the policy. Below is an example of how the ConfigMap should be set up:

apiVersion: v1
kind: ConfigMap
metadata:
  name: keys
data:
  org: |-
    -----BEGIN PUBLIC KEY-----
    MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEHMmDjK65krAyDaGaeyWNzgvIu155
    JI50B2vezCw8+3CVeE0lJTL5dbL3OP98Za0oAEBJcOxky8Riy/XcmfKZbw==
    -----END PUBLIC KEY-----
  kyverno: |-
    -----BEGIN PUBLIC KEY-----
    MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE8nXRh950IZbRj8Ra/N9sbqOPZrfM
    5/KAQN0/KjHcorm/J5yctVd7iEcnessRQjU917hmKO6JWVGHpDguIyakZA==
    -----END PUBLIC KEY-----
  tektoncd: |-
    -----BEGIN PUBLIC KEY-----
    MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEnLNw3RYx9xQjXbUEw8vonX3U4+tB
    kPnJq+zt386SCoG0ewIH5MB8+GjIDGArUULSDfjfM31Eae/71kavAUI0OA==
    -----END PUBLIC KEY-----
  projectsigstore: |-
    -----BEGIN PUBLIC KEY-----
    MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEhyQCx0E9wQWSFI9ULGwy3BuRklnt
    IqozONbbdbqz11hlRJy9c7SG+hdcFl9jE9uE/dwtuwU2MqU9T/cN0YkWww==
    -----END PUBLIC KEY-----

Multiple repositories can be added, each with its own public key. For example, if you wanted to check that all the pods running Tekton resources were signed and trusted, you could add the public key for Tekton, which the cluster policy will check against, as in the sketch below. This ensures that only trusted, signed Tekton resources are allowed to run in your cluster.
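
As a sketch of that idea (assuming the ConfigMap above is exposed to the policy through a context entry named keys, as in the full policy shown next), a verifyImages entry for Tekton images could reference the tektoncd key like this:

verifyImages:
- image: "gcr.io/tekton-releases/github.com/tektoncd/*"
  key: "{{ keys.data.tektoncd }}"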

Now that the ConfigMap has been created, the Cluster Policy will change slightly to pull from the ConfigMap instead of directly defining the key within the policy itself.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image
spec:
  validationFailureAction: enforce
  background: false
  webhookTimeoutSeconds: 30
  failurePolicy: Fail  
  rules:
    - name: check-image
      match:
        resources:
          kinds:
            - Pod
      context:
      - name: keys
        configMap:
          name: keys
          namespace: default 
      verifyImages:
      - image: "*"
        key: "{{ keys.data.org }}"
      - image: "ghcr.io/kyverno/test-verify-image:*"
        key: "{{ keys.data.kyverno }}"

The key is replaced with "{{ keys.data.org }}", which looks up the value stored for that repository and uses that key. This provides you with a central place to manage your public keys.

DEMO

Setup

As these features are still in development and being tested, let's run through a quick demo of how they will work once they are fully released. As a prerequisite, you must have a Kubernetes cluster running. You may use any of the following, or choose another you prefer: Docker Desktop Kubernetes, k3d, minikube. Be sure to check out my previous post, where I walk through installing and configuring Tekton Pipelines and Tekton Chains, as they will be needed for this demo.
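
If you don't already have a cluster available, a quick local one can be created with a tool like k3d (the cluster name below is arbitrary):

# Creates a small local Kubernetes cluster running in Docker
k3d cluster create kyverno-demo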

Firstly, I installed Kyverno using the definitions files from the official repository:

kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/config/install.yaml

Next, I need to set up secrets so that Kyverno and Tekton can authenticate to the specific private repositories.
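
In the commands below, the $DOCKER_CONFIG_JSON variable is assumed to point at a Docker config.json file that already contains credentials for your registry, for example:

# Hypothetical path; adjust to wherever your registry credentials live
export DOCKER_CONFIG_JSON="$HOME/.docker/config.json"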

kubectl create secret generic secret-dockerconfigjson --type=opaque --from-file=config.json="$DOCKER_CONFIG_JSON"

# NOTE: Pull secret needs to exist in both kyverno namespace as well as the tekton task namespace (default in this case)

kubectl create secret generic regcred --type=kubernetes.io/dockerconfigjson --from-file=.dockerconfigjson="$DOCKER_CONFIG_JSON" -n kyverno  

kubectl create secret generic regcred --type=kubernetes.io/dockerconfigjson --from-file=.dockerconfigjson="$DOCKER_CONFIG_JSON"

Next, I patched the deployment to use the regcred secret that we created above. Using kubectl patch we can modify the existing Kyverno deployment and add in these values.

kubectl patch \
  deployment kyverno \
  -n kyverno \
  --type json --patch-file patch_container_args.json

patch_container_args.json:

[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--imagePullSecrets=regcred"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--webhooktimeout=15"}
]

Finally, I slightly modified the above ConfigMap and ClusterPolicies (for both image signature verification and attestation verification) so that Kyverno verifies the pods I deploy in my cluster. Make sure to change the image to point to the specific repository that you are working from. You can also generate a key pair using cosign to create the public key to add to your policy, as shown below.
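
As a minimal sketch, cosign's generate-key-pair command produces a cosign.key/cosign.pub pair whose public half can be pasted into the policy or the shared ConfigMap (the ConfigMap command below is just one illustrative way to wire it in, reusing the org key name from earlier):

# Generates cosign.key (private) and cosign.pub (public) in the current directory
cosign generate-key-pair

# Example only: store the public key in a ConfigMap entry that the policy can reference
kubectl create configmap keys --from-file=org=cosign.pub --dry-run=client -o yaml | kubectl apply -f -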

In this demo, I used the buildpacks Tekton pipeline to build an image, and Tekton Chains to sign the image that was produced and create the provenance. An ephemeral image registry called ttl.sh was used to temporarily store the image, signature, and provenance file.

To use the buildpacks pipeline, I first need to install the shared tasks that the pipeline uses. This can be done with:

kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.4/git-clone.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/buildpacks/0.4/buildpacks.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/buildpacks-phases/0.2/buildpacks-phases.yaml

Next, I installed the pipeline using the following command:

kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/pipeline/buildpacks/0.1/buildpacks.yaml

The pipeline needs a shared volume to pass information between its tasks. Thus, I configured the below PersistentVolumeClaim (copy and paste into a yaml file and use kubectl apply -f):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cache-image-ws-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

Finally, I used the following PipelineRun to run the buildpacks pipeline (copy and paste into a yaml file and use kubectl apply -f):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: cache-image-pipelinerun-
  labels:
    app.kubernetes.io/description: PipelineRun
spec:
  pipelineRef:
    name: buildpacks
  params:
  - name: BUILDER_IMAGE
    value: docker.io/cnbs/sample-builder:bionic@sha256:6c03dd604503b59820fd15adbc65c0a077a47e31d404a3dcad190f3179e920b5
  - name: TRUST_BUILDER
    value: "true"
  - name: APP_IMAGE
    value: ttl.sh/buildpackspoc
  - name: SOURCE_URL
    value: https://github.com/buildpacks/samples
  - name: SOURCE_SUBPATH
    value: apps/ruby-bundler
  - name: CACHE_IMAGE
    value: ttl.sh/buildpackspoc
  workspaces:
  - name: source-ws
    subPath: source
    persistentVolumeClaim:
      claimName: cache-image-ws-pvc
  - name: cache-ws
    emptyDir: {}

Running the Demo

If Kyverno detects that an image being pulled from the specified repository is not signed by the expected key, it will log an error like the one shown below and not allow the pod to start up.
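
For example, deploying an image from the watched repository that was never signed should be rejected at admission time (the deployment name and image reference below are placeholders; substitute an unsigned image from your own repository):

# Hypothetical unsigned image; Kyverno should block this and report a signature mismatch
kubectl create deployment test123 --image=ttl.sh/buildpackspoc:unsigned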

Note: To view the logs from the Kyverno pod use the following command:

kubectl logs -n kyverno <Name of the Kyverno Pod>

Image Signature Verification Failed:

E1206 13:13:24.559003       1 server.go:327] WebhookServer/MutateWebhook "msg"="image verification failed" "error"="\n\nresource Deployment/default/test123 was blocked due to the following policies\n\nverify-image:\n  autogen-verify-image: 'image signature verification failed for ttl.sh/0f318dd1a84650369e8d669d875daf5d/slsapoc:latest:\n    signature mismatch'\n" "gvk"="apps/v1, Kind=Deployment" "kind"="Deployment" "name"="test123" "namespace"="default" "operation"="CREATE" "uid"="124a16f5-bcbf-4dfa-92c0-68bb5d1c7b3e"

Image Signature Verification Check Passed:

The below log shows that the image signature verification check passed for an image used by a TaskRun within the buildpacks pipeline.

I1206 13:17:42.770969       1 imageVerify.go:162] EngineVerifyImages "msg"="verifying image" "kind"="Pod" "name"="cache-image-pipelinerun-zdm5f-r-z5c7v-build-trusted-9vtpk-2v5bg" "namespace"="default" "policy"="verify-image" "image"="gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint:v0.29.0@sha256:66958b78766741c25e31954f47bc9fd53eaa28263506b262bf2cc6df04f18561"

Attestation Checks Failed:

The below log shows the attestation check failing because the given image does not have an attestation attached. This blocks the deployment of any pod using that image.

E1206 15:22:41.599489       1 server.go:327] WebhookServer/MutateWebhook "msg"="image verification failed" "error"="\n\nresource Pod/prod/ was blocked due to the following policies\n\nattest-code-review:\n  attest-code-review: 'failed to fetch attestations for ttl.sh/0f318dd1a84650369e8d669d875daf5d/slsapoc:latest:\n    not found'\n" "gvk"="/v1, Kind=Pod" "kind"="Pod" "name"="" "namespace"="prod" "operation"="CREATE" "uid"="c7d63edf-e508-4f91-af4c-6f1b3d73af91"

Attestation Checks Passed:

Once the buildpacks pipeline completes, it creates an image that is stored in ttl.sh. This image is signed by Chains, and a provenance file is created. We can then pull the image down and run it in our cluster. This allows us to verify that Kyverno checks the validity of the image's signature and the signature of the provenance file, and that the values within the provenance file match both the predicateType and the conditions we have specified. Below is the log we should see within Kyverno when all of those checks have passed:

I1206 13:19:22.463332       1 imageVerify.go:251] EngineVerifyImages "msg"="attestation checks passed for ttl.sh//buildpackspoc:latest" "kind"="Pod" "name"="" "namespace"="prod" "policy"="attest-code-review"
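
To produce a passing check like the one above, you could run the freshly built image in the prod namespace, for example (the pod name is arbitrary, and the image reference should match whatever your PipelineRun pushed to ttl.sh):

# Assumes the prod namespace exists and the image was built, signed, and attested by the pipeline
kubectl run buildpackspoc --image=ttl.sh/buildpackspoc:latest -n prod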

Conclusion

While some of these features are still in development and haven't been officially released, Kyverno shows that the open-source community is acknowledging supply chain security and prioritizing tools that automate detection and prevention. Be on the lookout for Kyverno v1.6.0, when these features will hit GA!

Stay tuned for further updates and blog posts on secure supply chain where we explore additional tools and advancements in this burgeoning industry!

Here at BoxBoat we are always pushing the boundary and looking for new ways to improve your workflow. Security is always at the forefront of our minds. Our engineers are ready to implement DevSecOps in your organization using various tools and industry-leading security practices. Contact us for more information, or take a look at some of the work we're doing for the community.