Guest post originally published on Medium by Charles-Edouard Brétéché

In this story we are going to deploy a local Kubernetes cluster using kind, then we will deploy Kyverno and use it to verify the signatures of the Kubernetes control plane images.

What is Kyverno?

Kyverno is an open-source policy engine for Kubernetes that allows you to define, validate, and enforce policies for your cluster.

It is designed to be easy to use and flexible, allowing you to define policies using simple, declarative configuration files that can be managed and version-controlled like any other code.

Kyverno can be used to enforce a wide range of policies, including security, compliance, and operational best practices, and can help you ensure that your cluster is always in a known, compliant state.
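
As an illustration (not related to image verification yet), here is a minimal sketch of what such a declarative policy looks like. The policy name and the required label are made up for the example:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  # hypothetical policy name, for illustration only
  name: require-team-label
spec:
  validationFailureAction: Audit
  rules:
  - name: check-team-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "The label `team` is required on every Pod."
      pattern:
        metadata:
          labels:
            # any non-empty value is accepted
            team: "?*"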

Create a local cluster

First we need a local cluster; run the command below to create one with kind.
Please note that we are going to create a Kubernetes cluster using version 1.26.0.

kind create cluster --image "kindest/node:v1.26.0" --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
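
Once kind has finished creating the cluster, you can check that it came up with one control plane node and three worker nodes (node names will vary):

kubectl get nodes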

Control plane pods live in the kube-system namespace and you can list them with kubectl get pods -n kube-system.

NAME                                          READY   STATUS    RESTARTS   AGE
coredns-787d4945fb-bdvsk                      1/1     Running   0          149m
coredns-787d4945fb-j4qs9                      1/1     Running   0          149m
etcd-kind-control-plane                       1/1     Running   0          149m
kindnet-666x4                                 1/1     Running   0          149m
kindnet-bw2jb                                 1/1     Running   0          149m
kindnet-rlqkb                                 1/1     Running   0          149m
kindnet-s49tp                                 1/1     Running   0          149m
kube-apiserver-kind-control-plane             1/1     Running   0          149m
kube-controller-manager-kind-control-plane    1/1     Running   0          149m
kube-proxy-5snpj                              1/1     Running   0          149m
kube-proxy-h2mfn                              1/1     Running   0          149m
kube-proxy-tcmf5                              1/1     Running   0          149m
kube-proxy-w9qjw                              1/1     Running   0          149m
kube-scheduler-kind-control-plane             1/1     Running   0          149m

Deploy Kyverno

Now that we have a local cluster running, we can deploy Kyverno with Helm using the command below (this will install Kyverno version 1.10.0-alpha.1).

helm upgrade --install --wait --timeout 15m --atomic \
--version 3.0.0-alpha.1 \
--namespace kyverno --create-namespace \
--repo https://kyverno.github.io/kyverno kyverno kyverno

Once the Helm chart finishes installing, you can verify Kyverno is running with kubectl get pods -n kyverno.

NAME                                             READY   STATUS    RESTARTS   AGE
kyverno-admission-controller-5b478d89db-gswnl    1/1     Running   0          98s
kyverno-background-controller-7478c9c5cc-9ldgx   1/1     Running   0          98s
kyverno-cleanup-controller-6c84b74fc8-6k9p8      1/1     Running   0          98s
kyverno-reports-controller-7565cff47b-kfnbc      1/1     Running   0          98s
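
If you want to double-check which chart and Kyverno versions actually got installed, you can ask Helm (the output will depend on the versions you installed):

helm list -n kyverno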

What is Sigstore?

Sigstore is an open-source project that aims to improve software supply chain security by providing a transparent and secure way to sign, verify, and distribute software artifacts.

The project was initiated by Red Hat, Google, and other industry leaders, and it is currently maintained by the Linux Foundation.

By using Sigstore, developers can ensure that the software they are distributing is authentic and has not been tampered with. This can help to prevent a wide range of security threats, such as supply chain attacks, malware distribution, and data breaches.

Overall, Sigstore aims to provide a simple, secure, and scalable solution for software signing and verification, and to promote transparency and trust in the software supply chain.

Kyverno policy to verify Kubernetes images

All the images powering the Kubernetes control plane pods come from registry.k8s.io.
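
You can check this yourself by listing the images used in the kube-system namespace (the exact image list and tags will vary with your kind and Kubernetes versions):

kubectl get pods -n kube-system -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n' | sort -u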

Starting from version 1.24, Kubernetes is adopting the Sigstore free software signing service for signing artifacts and images, and for verifying signatures.
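
Before wiring this into Kyverno, you can verify one of these images manually with cosign. This is only a sketch: it assumes cosign v2.x is installed (where the --certificate-identity and --certificate-oidc-issuer flags are available) and reuses the issuer and subject that Kubernetes release images are expected to be signed with:

cosign verify \
  --certificate-oidc-issuer https://accounts.google.com \
  --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
  registry.k8s.io/kube-apiserver:v1.26.0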

We should be able to create a Kyverno policy to verify that the images used by control plane pods are correctly signed.

Run the script below to create such a policy:

kubectl apply -f - <<EOF
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: verify-k8s-images
  # policy lives in the same namespace as control plane pods
  namespace: kube-system
spec:
  # don't reject pod creation but create a report entry
  validationFailureAction: Audit
  # run policy evaluation in the background
  background: true
  rules:
  - name: verify-k8s-images
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - imageReferences:
      # applies to all containers running an image from the k8s registry
      - registry.k8s.io/*
      verifyDigest: false
      required: false
      mutateDigest: false
      attestors:
      - entries:
        - keyless:
            # verifies issuer and subject are correct
            issuer: https://accounts.google.com
            subject: krel-trust@k8s-releng-prod.iam.gserviceaccount.com
            rekor:
              url: https://rekor.sigstore.dev
EOF

Things to be noted from the policy above:

- the policy is a namespaced Policy created in kube-system, the same namespace as the control plane pods
- validationFailureAction is set to Audit, so failing pods are not rejected; a report entry is created instead
- background: true makes Kyverno evaluate the policy against existing pods, not only newly admitted ones
- the rule applies to all containers running an image from registry.k8s.io
- signatures are verified keyless against the expected issuer and subject, using the public Rekor transparency log

Viewing the results

Once the above policy is created, Kyverno will start verifying images and producing reports.

You can list reports in the kube-system namespace with:

kubectl get policyreports.wgpolicyk8s.io -n kube-system

NAME                    PASS   FAIL   WARN   ERROR   SKIP   AGE
pol-verify-k8s-images   12     1      0      0       0      14m

This shows we have one report for the verify-k8s-images policy; the report contains 12 entries that satisfied the policy requirements and 1 entry that failed them.

We can look at the report details with:

kubectl describe policyreports.wgpolicyk8s.io -n kube-system pol-verify-k8s-images

Name:         pol-verify-k8s-images
Namespace:    kube-system
# ...
Results:
  # ...
  Message:  failed to verify image registry.k8s.io/etcd:3.5.6-0: .attestors[0].entries[0].keyless: subject mismatch: expected krel-trust@k8s-releng-prod.iam.gserviceaccount.com, received k8s-infra-gcr-promoter@k8s-artifacts-prod.iam.gserviceaccount.com
  Policy:   kube-system/verify-k8s-images
  Resources:
    API Version:  v1
    Kind:         Pod
    Name:         etcd-kind-control-plane
    Namespace:    kube-system
    UID:          4be45338-7d30-4cac-a536-34165ba84fba
  Result:  fail
  Rule:    verify-k8s-images
  Scored:  true
  Source:  kyverno
  Timestamp:
    Nanos:    0
    Seconds:  1681461047
# ...

The registry.k8s.io/etcd:3.5.6-0 image is signed with a subject that doesn't match the one expected by our policy.
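
If you only want the failing entries instead of the full describe output, one option is to filter the report results with jq (assuming jq is available on your machine):

kubectl get policyreports.wgpolicyk8s.io -n kube-system pol-verify-k8s-images -o json \
  | jq -r '.results[] | select(.result == "fail") | .message'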

Improving the policy

To fix the subject mismatch with the registry.k8s.io/etcd image, we need to modify the policy to apply the following strategy:

- verify kube-* and coredns/* images against the krel-trust@k8s-releng-prod.iam.gserviceaccount.com subject
- verify etcd:* images against the k8s-infra-gcr-promoter@k8s-artifacts-prod.iam.gserviceaccount.com subject

We can easily configure Kyverno to apply the new strategy by adding a new entry in the verifyImages stanza:

kubectl apply -f - <<EOF
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: verify-k8s-images
  namespace: kube-system
spec:
  validationFailureAction: Audit
  background: true
  rules:
  - name: verify-k8s-images
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    # verify kube-* and coredns/* images
    - imageReferences:
      - registry.k8s.io/kube-*
      - registry.k8s.io/coredns/*
      verifyDigest: false
      required: false
      mutateDigest: false
      attestors:
      - entries:
        - keyless:
            issuer: https://accounts.google.com
            subject: krel-trust@k8s-releng-prod.iam.gserviceaccount.com
            rekor:
              url: https://rekor.sigstore.dev
    # verify etcd:* images
    - imageReferences:
      - registry.k8s.io/etcd:*
      verifyDigest: false
      required: false
      mutateDigest: false
      attestors:
      - entries:
        - keyless:
            issuer: https://accounts.google.com
            subject: k8s-infra-gcr-promoter@k8s-artifacts-prod.iam.gserviceaccount.com
            rekor:
              url: https://rekor.sigstore.dev
EOF

Kyverno should update the report shortly, and all images should now pass signature verification 🎉

kubectl get policyreports.wgpolicyk8s.io -n kube-system

NAME                    PASS   FAIL   WARN   ERROR   SKIP   AGE
pol-verify-k8s-images   13     0      0      0       0      44m