Guest post originally published on Xenit’s blog by Philip Laine, DevOps Engineer at Xenit

Attempting to debug a Pod and realizing that you can’t install curl due to security settings has to be a meme at this point. Good security practices are always nice, but they often come at the cost of usability. To the point where some may even solve this problem by installing debug tools into their production images. Shudders.

Meme saying, "You can't develop vulnerabilities if they are a pain to develop"

Kubernetes has introduced a new concept called ephemeral containers to deal with this problem. Ephemeral containers are temporary containers that can be attached after a Pod has been created. Rejoice! We can now attach a temporary container with all the tools we desire. While the application’s container may have “annoying security features” like a read-only file system, the ephemeral container can enjoy all the freedom that writing files entails. I love this feature, so I need to upgrade my cluster immediately!

Digging Deeper

Now that we have the new feature we can start an ephemeral container in any Pod we like.

kubectl run ephemeral-demo --restart=Never
kubectl debug -it ephemeral-demo --image=busybox:1.28

We get a shell and life is now much simpler, but wait a minute. This post is not about how to use ephemeral containers, there are enough of those already, but rather about the security implications of enabling them. Let’s have a look at the YAML for the Pod that we created the ephemeral container in.

apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo
spec:
  ephemeralContainers:
  - name: debugger-r59b7
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    stdin: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    tty: true

Interesting, there is a new field called ephemeralContainers in the Pod definition. This new field contains a list of containers, similar to initContainers and containers. It is not identical, as certain options are not available; refer to the API documentation for more information. It does however allow configuration of the container’s security context, which could in theory allow a bad actor to escalate the container’s privileges. This should not affect those of us who use a policy enforcement tool, right? The answer is both yes and no, depending on the tool and version being used. It also depends on whether you are using policies from the project’s library or policies developed in house.

OPA Gatekeeper

OPA Gatekeeper does not require any code changes, as all of its policies are written in Rego. Its sub-project, the Gatekeeper Library, does however have to be updated. The library contains an implementation of the common Pod Security Policies, including policies like not allowing containers to run in privileged mode. The issue with all of these policies is that they currently only check containers specified in initContainers and containers, while ignoring ephemeralContainers entirely.

The good news is that this is a pretty easy fix; the bad news is that it requires end users to update the policies pulled from the library.

Kyverno

Kyverno seems to have resolved the issue faster. Compared to OPA Gatekeeper, however, it did require a small code change, which means that version 1.5.3 or later is needed to write policies for ephemeral containers. The Kyverno policy library has also been updated to check ephemeral containers. Kyverno has done a great job solving these issues quickly, but it does still require end users to update.
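As a rough sketch of what such a rule can look like in Kyverno 1.5.3 or later (the policy name and pattern below are illustrative; consult the Kyverno policy library for the canonical rules), a policy rejecting privileged ephemeral containers might be written as:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-ephemeral
spec:
  validationFailureAction: enforce
  rules:
  - name: privileged-ephemeral-containers
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Privileged ephemeral containers are not allowed."
      pattern:
        spec:
          =(ephemeralContainers):
          - =(securityContext):
              =(privileged): "false"
```

The `=()` anchors make the fields optional, so Pods without ephemeral containers are unaffected, while any ephemeral container that sets privileged to true is rejected.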

Pod Security Policies

Pod Security Policies used to be the default policy tool for Kubernetes, and a lot of projects have rules based on Pod Security Policies (PSP). However, if you are relying on PSP in a modern cluster you should really start looking at other options like OPA Gatekeeper or Kyverno. PSP has been deprecated since Kubernetes v1.21 and will be removed in v1.25.

If PSP is your only policy tool and you are planning to upgrade to v1.23, don’t. As PSP is deprecated, no new features have been added to it, and that includes policy enforcement on ephemeral containers. This means that any security context in an ephemeral container is allowed, no matter the PSPs in the cluster. The PSP below will have no effect on a privileged ephemeral container added to a Pod.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: default
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'


Disallowing ephemeral containers with RBAC could be an option if the feature is not needed and it is not possible to disable it completely. KEP-277: Ephemeral Containers states the following about using RBAC to restrict ephemeral containers.

Cluster administrators will be expected to choose from one of the following mechanisms for restricting usage of ephemeral containers:

RBAC is additive, which means that it is not possible to remove permissions from a role. This type of mitigation obviously does not matter if all users are cluster admins, which they should not be, so we assume that new roles are created for the cluster consumers. In that case, having a look at the existing roles can be enough to make sure that the pods/ephemeralcontainers subresource is not included in the role.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: edit
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/attach
  - pods/exec
  - pods/portforward
  - pods/proxy
  verbs:
  - create
  - delete
  - deletecollection
  - patch
  - update

Checking Policy Enforcement

Let’s say that you have upgraded your cluster and informed all end users of the great new feature. How do you know that the correct policies are enforced in accordance with your security practices? You may have been aware of the API changes and taken the correct precautionary steps. Or you may just have updated Kyverno and its policies out of pure happenstance. Either way, it is good to trust but verify that it is not, for example, possible to create a privileged ephemeral container. Annoyingly, the debug command does not expose any options to set security context configuration, so we need another option. Ephemeral containers cannot be defined in a Pod when it is created, nor can they be added with a normal update. We need some other method to create these specific ephemeral containers.

Ephemeral containers are created using a special ephemeralcontainers handler in the API rather than by adding them directly to pod.spec, so it’s not possible to add an ephemeral container using kubectl edit.

The simplest method to add an ephemeral container with a security context to a Pod is to use the Go client. A couple of lines of code can add a new ephemeral container running as privileged, or with any other security context setting of your liking.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	if len(os.Args) != 4 {
		panic("expected three args")
	}
	podNamespace := os.Args[1]
	podName := os.Args[2]
	kubeconfigPath := os.Args[3]

	// Create the client
	client, err := getKubernetesClients(kubeconfigPath)
	if err != nil {
		panic(fmt.Errorf("could not create client: %w", err))
	}
	ctx := context.Background()

	// Get the Pod
	pod, err := client.CoreV1().Pods(podNamespace).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		panic(fmt.Errorf("could not get pod: %w", err))
	}

	// Add a new privileged ephemeral container
	trueValue := true
	ephemeralContainer := corev1.EphemeralContainer{
		EphemeralContainerCommon: corev1.EphemeralContainerCommon{
			Name:  "debug",
			Image: "busybox",
			TTY:   true,
			SecurityContext: &corev1.SecurityContext{
				Privileged:               &trueValue,
				AllowPrivilegeEscalation: &trueValue,
			},
		},
	}
	pod.Spec.EphemeralContainers = append(pod.Spec.EphemeralContainers, ephemeralContainer)

	// Write the change through the ephemeralcontainers subresource
	pod, err = client.CoreV1().Pods(pod.Namespace).UpdateEphemeralContainers(ctx, pod.Name, pod, metav1.UpdateOptions{})
	if err != nil {
		panic(fmt.Errorf("could not add ephemeral container: %w", err))
	}
}

func getKubernetesClients(path string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", path)
	if err != nil {
		return nil, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	return client, nil
}

Run the program and pass the namespace, pod name, and path to a kube config file. We assume that the ephemeral-demo Pod is still running.

go run main.go default ephemeral-demo $KUBECONFIG

If it completes with no error a privileged ephemeral container should have been added to the Pod. Exec into it and list the host’s devices to prove that it is a privileged container.

kubectl exec -it ephemeral-demo -c debug -- sh
ls /dev


If there is one takeaway from this post, it is that any policy tool that has not been updated in the last couple of months will not enforce rules on ephemeral containers. This also includes all policies written in house; it is not enough to update the community policies.

Some may argue that this type of oversight is not really an issue. Ephemeral containers can’t mount host paths or access the host’s namespaces; all they can do is set the common container security context. That is a fair comment, because it’s true. Being able to create a privileged container is however still not ideal, and there are methods to escalate privileges when this is possible. Either way, it is important to be aware of how policies are enforced and which security contexts are allowed.

I am still not sure how much of an issue this will be short term. Cloud providers are currently in the process of rolling out Kubernetes v1.23 in their managed offerings, and it is still possible that they choose to disable ephemeral containers. Those rolling their own clusters may have already upgraded to v1.23 without being aware of the new feature. That is the biggest issue really: the platform administrator has to be aware of the existence of ephemeral containers. The fact that kubectl does not expose the option to set a security context will make even fewer people aware that it is still possible to set one by other means. Investing in a security audit six months ago is only valuable as long as the same Kubernetes version is used. Kubernetes is by design not secure by default, so each new feature introduced has to be analyzed. The fact that upgrading from Kubernetes v1.22 to v1.23 could make your cluster less secure is part of the difficulty of working with Kubernetes, requiring platform administrators to always stay on top of things. The reality is that these types of things are easy to miss, so hopefully this post has helped someone make their cluster a bit more secure.