Part II: A Platform Integration Example

In Part I, we explored the architecture of the CRI-O credential provider and walked through a manual setup. In this part, we’ll see how platforms like OpenShift and its upstream open-source project OKD integrate the credential provider natively, making deployment simpler. OpenShift ships the credential provider starting with version 4.21, and each release since deepens the integration toward fully native platform support.

Using the credential provider with OpenShift 4.21

OpenShift 4.21 ships the crio-credential-provider RPM package along with CRI-O v1.34, the minimum version required for credential provider support; earlier CRI-O versions do not support namespace-scoped auth files. Since OpenShift 4.21 has no Custom Resource Definition or API for managing the credential provider configuration, users must manually create a MachineConfig resource to deploy the configuration files. Overriding the existing ECR credential provider configuration file causes the kubelet to use the CRI-O credential provider automatically, without any additional configuration. This approach works on all OpenShift installations regardless of the underlying cloud provider.

Enable feature gate

Enable the KubeletServiceAccountTokenForCredentialProviders feature gate. Be aware that switching the cluster to the CustomNoUpgrade feature set cannot be undone and prevents cluster upgrades, so only do this on clusters dedicated to testing:

kubectl patch FeatureGate cluster --type merge --patch '{"spec":{"featureSet":"CustomNoUpgrade","customNoUpgrade":{"enabled":["KubeletServiceAccountTokenForCredentialProviders"]}}}'
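
If you want to confirm the gate is active before proceeding, the enabled-gates list can be read back from the FeatureGate resource. A minimal sketch, assuming oc access; the gate_enabled function is our own helper, not part of any CLI:

```shell
# Helper: succeed when the enabled-gates list names our feature gate.
# Mirrors what the jsonpath query in the usage comment should print
# after the patch above has been applied.
gate_enabled() {
  grep -q 'KubeletServiceAccountTokenForCredentialProviders'
}

# Usage against a live cluster:
#   oc get featuregate cluster \
#     -o jsonpath='{.spec.customNoUpgrade.enabled}' | gate_enabled && echo "gate enabled"
```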


Create Butane Config

Create a file named machine-config.bu with the following Butane config. This configuration creates both the credential provider configuration and the registry mirror configuration on worker nodes. Note that it will overwrite both /etc/kubernetes/credential-providers/ecr-credential-provider.yaml and /etc/containers/registries.conf, so check which credential providers and registry mirrors are currently configured on your nodes before applying it, to avoid breaking existing setups.

Note: The configuration file is intentionally named ecr-credential-provider.yaml to override OpenShift’s existing ECR credential provider configuration. While the KubeletConfig API supports configuring kubelet settings, the imageCredentialProviderConfigFile field is passed as a command-line flag to the kubelet and isn’t currently configurable through the KubeletConfig resource. By reusing the existing file path that OpenShift’s kubelet is already configured to use, we avoid needing to modify kubelet flags or systemd units. This approach simplifies deployment and works across all OpenShift installations.
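
To see which path the kubelet actually reads, you can inspect its command line on a node. A hedged sketch: --image-credential-provider-config is the upstream kubelet flag name, and the provider_config_path function below is just a text filter of our own:

```shell
# Helper: pull the value of --image-credential-provider-config out of a
# kubelet command line (split into one word per line, then strip the flag prefix).
provider_config_path() {
  tr ' ' '\n' | sed -n 's/^--image-credential-provider-config=//p'
}

# Usage on a node (for example inside `oc debug node/<node>` after `chroot /host`):
#   pgrep -a kubelet | provider_config_path
```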

variant: openshift
version: 4.20.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-crio-credential-provider-config
storage:
  files:
    - path: /etc/kubernetes/credential-providers/ecr-credential-provider.yaml
      mode: 0644
      overwrite: true
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1
          kind: CredentialProviderConfig
          providers:
            - name: crio-credential-provider
              matchImages:
                - docker.io
              defaultCacheDuration: "1s"
              apiVersion: credentialprovider.kubelet.k8s.io/v1
              tokenAttributes:
                serviceAccountTokenAudience: https://kubernetes.default.svc
                cacheType: "Token"
                requireServiceAccount: false
    - path: /etc/containers/registries.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          unqualified-search-registries = ["docker.io", "registry.access.redhat.com"]

          [[registry]]
          location = "docker.io"

          [[registry.mirror]]
          location = "localhost:5000"
          insecure = true


Compile to MachineConfig

The Butane config needs to be compiled into a MachineConfig using Butane so that the Machine Config Operator can apply it:

podman run --rm -it -v "$(pwd)":/w:z -w /w quay.io/coreos/butane:release machine-config.bu -o machine-config.yml

This should result in:

# Generated by Butane; do not edit
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-crio-credential-provider-config
spec:
  config:
    ignition:
      version: 3.5.0
    storage:
      files:
        - contents:
            compression: gzip
            source: data:;base64,H4sIAAAAAAAC/1SPMW+EMAyFd36FxU5Ot1XZTnTpVqmn7jnHHFYgoY6D1H9fEUFps8V+z+97buFPkswpWgjlQROpwRQHfprwkg2ny3ptAkdvoRfyFJXd9C5pZU/SV2Gz7N9sG4AOopvJAgqnDn8t3SFqAABmpzi+ze5J1bO9DnzCQGI41YmnwZVJe4cjvRZxWhHba27r+i/3mXKEmKPK2WEzaQoUb6rCj6JndCZZGemGmErUexUVzxSRLIyqS7aXy3ZRIills6OZvOJ+ATfK+/dCFtrqb/eF0FdhoY9/CRYGN2VqfgIAAP//tTtvdHwBAAA=
          mode: 420
          overwrite: true
          path: /etc/kubernetes/credential-providers/ecr-credential-provider.yaml
        - contents:
            compression: gzip
            source: data:;base64,H4sIAAAAAAAC/1zMsaoDIRCF4d6nGKzvyja3CfgkYiHjJA5xHTKjRd4+BBJY0h34P84aj1U6X5nqZlQU26Z0Y5vKZBAh+Sp4Jw0s/g/8pz1DQSSzoFRbmQHl8Nm5lL49Z9cFy2QZEOH0cUbhYFXRH/vevYnNy/++797xMMKlBBGmLnKvAAAA//92BbzesgAAAA==
          mode: 420
          overwrite: true
          path: /etc/containers/registries.conf

Note: The base64-encoded data in the generated MachineConfig will reflect the actual file paths from your Ignition Config. The example above is for reference only.
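
If you want to double-check what Butane produced, each source field is just a gzip-compressed, base64-encoded data URL that can be decoded back to the original file contents. A small sketch; the decode_data_url helper is our own:

```shell
# Helper: turn a Butane-generated "data:;base64,..." URL back into plaintext.
decode_data_url() {
  sed 's/^data:;base64,//' | base64 -d | gunzip
}

# Usage against the generated MachineConfig:
#   grep -o 'data:;base64,[A-Za-z0-9+/=]*' machine-config.yml | while read -r url; do
#     printf '%s' "$url" | decode_data_url
#     printf '\n---\n'
#   done
```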

Apply MachineConfig

Apply the MachineConfig to deploy the configuration files to worker nodes:

kubectl apply -f machine-config.yml

Wait for the Machine Config Operator to roll out these changes to worker nodes. This process involves node reboots. You can monitor the rollout status:

oc get machineconfigpool worker -w

Once all nodes show UPDATED=True and UPDATING=False, the MachineConfig rollout is complete and the kubelet is configured to use the CRI-O credential provider.
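
For scripting, the same readiness information is available from the pool’s status conditions. A sketch; pool_settled is our own helper that filters the two condition values the watch output shows:

```shell
# Helper: succeed when the worker pool reports Updated=True and Updating=False.
pool_settled() {
  grep -qx 'True False'
}

# Usage:
#   oc get machineconfigpool worker \
#     -o jsonpath='{.status.conditions[?(@.type=="Updated")].status} {.status.conditions[?(@.type=="Updating")].status}' \
#     | pool_settled && echo "rollout complete"
```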

Prepare a selected node for testing

Now that the MachineConfig has been applied, prepare a worker node for testing. You need to start the local registry mirror on the target node and apply cluster-level RBAC for that node’s identity, similar to the Configure RBAC section in the earlier example:

# Select any worker node for the demo
export NODE_NAME=$(kubectl get node -l node-role.kubernetes.io/worker --output jsonpath='{.items[0].metadata.name}')

# Enter the node
oc debug "node/$NODE_NAME"

Now on the node itself, execute the following commands:

# Access the host filesystem
chroot /host

# Verify that the crio-credential-provider is available on the node
/usr/libexec/kubelet-image-credential-provider-plugins/crio-credential-provider --version

# Clone the repository
git clone --depth=1 https://github.com/cri-o/crio-credential-provider ~/crio-credential-provider

# Start the local test registry
~/crio-credential-provider/test/registry/start

Apply RBAC

Exit the debug session back to your workstation, then apply the required RBAC to the cluster:

# Clone the repository
git clone --depth=1 https://github.com/cri-o/crio-credential-provider
cd crio-credential-provider

# Update RBAC to use the actual node name
sed -i 's;system:node:127.0.0.1;system:node:'"$NODE_NAME"';g' test/cluster/rbac.yml

# Apply RBAC and secret to the cluster
kubectl apply -f test/cluster/rbac.yml -f test/cluster/secret.yml
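
As a quick sanity check, you can impersonate the node’s identity and ask the API server whether it may now read secrets. A sketch, assuming the test namespace is default (as seen later in the provider logs); answered_yes is just our own filter for the literal yes:

```shell
# Helper: succeed when `kubectl auth can-i` answered exactly "yes".
answered_yes() {
  grep -qx 'yes'
}

# Usage (requires permission to impersonate the node identity):
#   kubectl auth can-i get secrets -n default \
#     --as="system:node:$NODE_NAME" | answered_yes && echo "RBAC in place"
```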


Test credential provider

Test the credential provider by using a node selector:

# Label the node for testing
kubectl label nodes "$NODE_NAME" app=test

# Add node selector to pod spec
sed -i "s;spec:;spec:\n  nodeSelector:\n    app: test;g" test/cluster/pod.yml

# Deploy the test pod
kubectl apply -f test/cluster/pod.yml

View logs

Inspect the credential provider logs using journald on the node:

journalctl _COMM=crio-credential
… crio-credential[…]: app.go:33: Running credential provider
… crio-credential[…]: app.go:45: Reading from stdin
… crio-credential[…]: app.go:62: Parsed credential provider request for image "docker.io/library/nginx"
… crio-credential[…]: app.go:64: Parsing namespace from request
… crio-credential[…]: app.go:71: Matching mirrors for registry config: /etc/containers/registries.conf
… crio-credential[…]: app.go:84: Got mirror(s) for "docker.io/library/nginx": "localhost:5000"
… crio-credential[…]: app.go:86: Getting secrets from namespace: default
… crio-credential[…]: k8s.go:119: Using API server host: api-int.ci-ln-62qi4bb-76ef8.aws-4.ci.openshift.org:6443
… crio-credential[…]: app.go:101: Got 1 secret(s)
… crio-credential[…]: auth.go:87: Parsing secret: my-secret
… crio-credential[…]: auth.go:97: Found docker config JSON auth in secret "my-secret" for "http://localhost:5000"
… crio-credential[…]: auth.go:112: Checking if mirror "localhost:5000" matches registry "localhost:5000"
… crio-credential[…]: auth.go:115: Using mirror auth "localhost:5000" for registry from secret "localhost:5000"
… crio-credential[…]: auth.go:48: Wrote auth file to /etc/crio/auth/default-7e59ad64326bc321517fb6fc6586de5ee149178394d9edfa2a877176cdf6fad5.json with 8 number of entries
… crio-credential[…]: app.go:108: Auth file path: /etc/crio/auth/default-7e59ad64326bc321517fb6fc6586de5ee149178394d9edfa2a877176cdf6fad5.json


Also, running podman logs registry on the node should show that the image was pulled through the local mirror.

OpenShift 4.22+: CRIOCredentialProviderConfig API

OpenShift 4.22 is planned to introduce a CRIOCredentialProviderConfig Custom Resource Definition (config.openshift.io/v1alpha1) that will provide a declarative API for managing credential provider configuration. The Machine Config Operator will handle all the configuration details automatically, eliminating the need for manual MachineConfig resources.

Example usage:

apiVersion: config.openshift.io/v1alpha1
kind: CRIOCredentialProviderConfig
metadata:
  name: cluster
spec:
  matchImages:
    - "docker.io"

The Machine Config Operator will validate the configuration, generate the credential provider config files on all nodes, configure the kubelet, and perform rolling restarts automatically.

Conclusion

Throughout this two-part series, we’ve explored how the CRI-O credential provider addresses a real operational problem: maintaining namespace-scoped security boundaries while using private registry mirrors in Kubernetes. The implementation leverages existing Kubernetes primitives (service account tokens, secrets, and RBAC) to provide a solution that integrates naturally with cluster security policies.

The kubelet credential provider API has been stable since Kubernetes 1.26, and the service account token feature is available behind a feature gate since Kubernetes 1.33. The CRI-O credential provider ships in OpenShift 4.21, with declarative API support coming in 4.22.

This enables organizations running multi-tenant platforms to use authenticated registry mirrors and pull-through caches without breaking namespace isolation. Teams can manage their own registry credentials as standard Kubernetes Secrets, rotate them independently, and maintain strict security boundaries between projects. For air-gapped environments and organizations with compliance requirements, this means registry mirrors can be used safely without exposing credentials globally at the node level, with access control handled by the same Kubernetes RBAC that governs the rest of the cluster.

The project is actively maintained as part of the CRI-O ecosystem. Source code, documentation, and end-to-end tests are available at github.com/cri-o/crio-credential-provider. For OpenShift users, the evolution from manual configuration in 4.21 to the declarative CRIOCredentialProviderConfig API in 4.22 shows the path toward fully automated, operator-managed setup.

Thanks for reading! We’d love to hear your thoughts on this feature and how it might help with your registry authentication challenges. Feel free to open issues or contribute to the project on GitHub, or join the discussion in the #cri-o channel on Kubernetes Slack.