BoxBoat Blog

Service updates, customer stories, and tips and tricks for effective DevOps



Supply Chain Security By Verification - Mitigating Supply Chain Attacks

by Cole Kennedy | Tuesday, May 4, 2021 | Security Kubernetes


At BoxBoat, we have been helping high-compliance, high-assurance industries adopt DevSecOps practices for the last five years. In-band compliance checks, security checks, and scans form the basis of a secure software delivery pipeline. However, recent supply chain attacks such as SUNBURST have highlighted the need for a new approach to supply chain security. We have been working with the Cloud Native Computing Foundation's sig-security on guidance for implementing an evidence-based trust system for secure software delivery, one that mitigates key and root credential loss.

DevSecOps Pipeline

A DevSecOps pipeline is the series of steps or processes required to build, deploy, and test a software artifact. At BoxBoat, we partner with companies like GitLab and Aqua Security to provide this framework for our customers.

After the SolarWinds supply chain attack, several BoxBoat engineers informally threat modeled these types of pipelines. We found that against highly skilled adversaries, like those involved in the SolarWinds attack, relying on the RBAC security baked into CI systems is not sufficient: loss of credentials for any user with administrative or push rights could undermine the entire system's security.

At BoxBoat, we are open source experts and looked first to the community to provide a solution. The CNCF community is where we met the maintainers of the in-toto project and started thinking about the concept of out-of-band verification and how to use this concept to mitigate complex supply chain attacks in an automated fashion.

A suitable solution required a few attributes. Without these properties, malicious actors may weaponize CI/CD systems.

  • Fully Automated: Any solution must allow fully automated deployments. Signing and verification should be fully automated processes.

  • Signed SBOM: The system should be auditable with cryptographic attestations. We need to be able to trace what processes generated which artifacts on what machines. Our customers have strict build requirements such as country of origin requirements and static code analysis. They need the ability to verify these processes out-of-band and across network air-gaps.

  • Cloud Native: Any solution should be easy to deploy in a cloud native environment. At BoxBoat, we leverage containers and Kubernetes to solve complex system engineering problems quickly. We want to be able to leverage this skill set to solve the Software Supply Chain problem.

  • Resilient to key loss, trojanized build agents, and MITM attack: A compromise of any piece of the build infrastructure should not result in a compromised artifact.

Software Supply Chain Verification

Integrity checks of compiled software, source code, and materials via SHASUM verification and cryptographic signatures align with industry best practices and are prescribed by MITRE's ATT&CK framework.
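The hash-comparison half of that check is simple to sketch. The snippet below is illustrative, not BoxBoat's implementation: it computes a file's SHA-256 digest and compares it against a recorded value.

```python
import hashlib


def sha256sum(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path: str, expected_digest: str) -> bool:
    """Compare the computed digest against a recorded SHASUM."""
    return sha256sum(path) == expected_digest
```

A digest alone only proves the file is unchanged since the hash was recorded; the cryptographic signatures discussed below are what bind that hash to a trusted identity.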

At BoxBoat, we wanted to build a system that would do just this. The in-toto project provides a method for supply chain verification. It is compatible with any automated CI system or manual process and outputs metadata that can be used to generate and validate SBOMs cryptographically.

To verify a build, in-toto requires two types of files: a Layout and Link-Metadata.

The Layout file represents the build policy: it encodes the organization's release policy. If your organization requires static code analysis, an out-of-band code sign-off, and a passing result from a staging environment, the layout encodes those requirements.
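As a rough illustration (the step names, commands, and artifact rules below are invented, and many required fields are omitted; consult the in-toto specification for the authoritative schema), a layout encoding a static-analysis step followed by a build step might look like:

```json
{
  "signed": {
    "_type": "layout",
    "expires": "2021-12-31T00:00:00Z",
    "steps": [
      {
        "name": "static-analysis",
        "expected_command": ["gosec", "./..."],
        "expected_products": [["CREATE", "report.json"]],
        "threshold": 1
      },
      {
        "name": "build",
        "expected_materials": [["MATCH", "*", "WITH", "PRODUCTS", "FROM", "static-analysis"]],
        "expected_products": [["CREATE", "app.tar.gz"]],
        "threshold": 1
      }
    ]
  },
  "signatures": []
}
```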

Link-Metadata is the signed metadata file generated by a build step that has been wrapped with the in-toto command.
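A link file records what a step consumed (materials) and produced (products). A heavily abbreviated, hypothetical example, with digests and signature values elided:

```json
{
  "signed": {
    "_type": "link",
    "name": "build",
    "command": ["make", "all"],
    "materials": { "main.go": { "sha256": "..." } },
    "products": { "app.tar.gz": { "sha256": "..." } },
    "byproducts": {},
    "environment": {}
  },
  "signatures": [ { "keyid": "...", "sig": "..." } ]
}
```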

The in-toto verify command takes a set of link metadata files and a build layout policy file. It outputs the result of signature verification and file integrity checks. If the link metadata files satisfy the requirements of the build layout policy, verification passes; if not, in-toto reports the reasons it did not.
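Conceptually, the check works like the following toy sketch (a simplified model, not the real in-toto verification logic): every step named in the policy must have link metadata from an authorized signer, and the recorded artifact digests must match the policy's expectations.

```python
def verify(layout: dict, links: dict) -> list[str]:
    """Return a list of policy violations; an empty list means verification passed.

    Toy model: `layout` lists steps with authorized signers and expected
    product digests; `links` maps step names to recorded link metadata.
    """
    errors = []
    for step in layout["steps"]:
        link = links.get(step["name"])
        if link is None:
            errors.append(f"missing link metadata for step {step['name']!r}")
            continue
        # Only metadata signed by a key the policy authorizes counts.
        if link["signer"] not in step["authorized_keys"]:
            errors.append(f"step {step['name']!r} signed by unauthorized key")
        # Recorded product digests must match what the policy expects.
        for path, digest in step.get("expected_products", {}).items():
            if link["products"].get(path) != digest:
                errors.append(f"product mismatch for {path!r} in step {step['name']!r}")
    return errors
```

The real tool additionally verifies the signatures cryptographically and evaluates in-toto's full artifact-rule language; this sketch only shows the shape of the policy-versus-evidence comparison.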

in-toto may seem like a perfect fit to solve the problem. However, it does not currently support Certificate Authority based verification or signing.

Since the early days of BoxBoat, understanding enterprise PKI has been essential to our engagement success. Enterprises have specific ways of distributing and verifying certificates, and each organization has its own process, some secure, some insecure. Most, however, are prolonged manual processes.

in-toto requires the private key to be present when the metadata is signed and the corresponding public key to be available at the verification stage. In our design, we wanted to verify signatures based on certificate constraints, which allow us to accept or reject signatures based on certificate attributes. If you have ever administered Active Directory, you will be familiar with the concept.
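The idea can be sketched in a few lines. The attribute names and constraint format below are illustrative, not the actual schema of our fork: a signature is accepted only if the signer's certificate attributes satisfy every constraint in the policy, with glob patterns allowed for fields like the SPIFFE ID.

```python
import fnmatch


def satisfies(cert_attrs: dict, constraint: dict) -> bool:
    """True if every constrained attribute matches (glob patterns allowed).

    Hypothetical constraint check: accept a signer only when each
    certificate attribute named in the constraint matches its pattern.
    """
    return all(
        fnmatch.fnmatch(cert_attrs.get(key, ""), pattern)
        for key, pattern in constraint.items()
    )


# Example policy: trust any build runner under this SPIFFE ID prefix.
constraint = {
    "uri_san": "spiffe://example.org/ci/build-*",  # hypothetical SPIFFE ID pattern
    "organization": "Example Corp",                # hypothetical subject attribute
}
```

The benefit over raw public keys is that the verifier only needs the CA chain and the constraint policy, not an out-of-band copy of every signer's key.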

BoxBoat forked the Go implementation of the in-toto project and added a CLI and certificate-constraint support. This addition allowed us to fully integrate in-toto with existing enterprise PKI policy.

Key Loss Resiliency

However, the system was not yet fully automated nor resilient to private key loss or compromise. This is where we reached for one of our favorite tools, SPIRE.

SPIRE is the reference implementation of the SPIFFE specification. It allows administrators to distribute identity, in the form of X.509 certificates, to workloads based on verified selectors. Administrators create registration entries that map selectors (such as container SHASUM, the user the process runs as, or machine identity) to an identity string known as the SPIFFE ID, a URI encoded in the X.509 certificate. SPIRE attests workloads over the Workload API, which is exposed over a UNIX domain socket. When a call is made over that socket, the caller's process ID is exposed; SPIRE uses this information to identify selectable attributes of the workload, such as container SHASUM and process name, and matches them against registration entries to select the identities it issues. These identities are normal X.509 certificates that can be used to terminate mTLS connections or sign metadata files.
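The Workload API's caller identification relies on a kernel feature of UNIX domain sockets. This Linux-only snippet demonstrates the underlying primitive, SO_PEERCRED, which reports the peer's process ID, UID, and GID; this is the kind of kernel-provided information an agent like SPIRE can use as a starting point for workload attestation.

```python
import os
import socket
import struct

# Linux-specific: over a UNIX domain socket, SO_PEERCRED lets each end
# ask the kernel for the peer's credentials (pid, uid, gid as 3 ints).
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
creds = b.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED, struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)

# Both socket ends live in this process, so the peer PID is our own PID.
print(pid == os.getpid())
a.close()
b.close()
```

Because the credentials come from the kernel rather than the caller, they cannot be spoofed by the workload itself, which is what makes them usable as trust-establishing selectors.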

To build an initial proof of concept that satisfied almost all of our constraints, we:

  • implemented our in-toto fork with hooks into the SPIRE Workload API;

  • deployed a SPIRE server and agent;

  • configured our GitLab runner with access to the Workload API;

  • wrapped every command in our GitLab pipeline with in-toto;

  • registered our build container with SPIRE.

Use of Short-Lived Keys

We still have one additional problem to solve. SPIRE assumes the use of short-lived keys, while the in-toto spec requires that the signing certificate be valid when the signature is verified. Our current system therefore uses longer-lived keys, allowing enough time for the CI process to complete. We are investigating RFC 3161 timestamping and TUF as patterns to further mitigate key loss. Please follow our project on GitHub or reach out to us directly to offer feedback or for implementation help.
