This Member Blog was originally published on the Middleware blog and is republished here with permission.

kubectl is the command-line interface for managing Kubernetes clusters. It lets you manage pods, deployments, and other resources from the terminal, helping you troubleshoot Kubernetes issues, check pod health, and scale applications easily. Most kubectl commands follow a simple structure.

For example, kubectl get pods lists running pods, and kubectl delete pod <pod-name> removes a pod.

Many users wonder how to restart a Kubernetes pod using kubectl. Contrary to popular belief, there is no direct kubectl restart pod command. Instead, Kubernetes expects you to work with higher-level objects, such as Deployments.

This guide covers the safest and most effective methods for restarting pods, including rollout restarts, deleting pods, scaling replicas, and updating environment variables, helping you manage pod restarts in a predictable and controlled way.

When should you restart a Kubernetes pod?

Knowing when to restart a Kubernetes pod is key to maintaining application stability and performance. Here are the most common scenarios that require a pod restart:

1. Configuration changes

When you update your application’s settings (such as environment variables or resource limits), the pod continues to use the old configurations. Restarting ensures the new settings take effect.

2. Recover from application failure

If your app crashes but the container stays in a “Running” state, or the pod shows as running but isn’t functioning, a restart forces a clean start to recover the service.

3. Debugging application issues

Restarting the pod helps resolve temporary issues or confirms persistent problems while troubleshooting why the application isn’t behaving as expected.

4. Pod stuck or not responding

A pod may stop responding to traffic while Kubernetes still reports it as healthy. Restarting resolves frozen states or resource leaks and restores responsiveness.
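When one of these situations comes up, a quick triage with kubectl usually confirms it before you restart anything. A minimal sketch, assuming kubectl is pointed at a live cluster and `<pod-name>` is a placeholder:

```shell
# Quick triage before deciding to restart:
kubectl get pods                      # check the STATUS and RESTARTS columns
kubectl describe pod <pod-name>       # recent events, e.g. OOMKilled or failed probes
kubectl logs <pod-name> --previous    # logs from the previous (crashed) container
```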

What are the different pod states in Kubernetes?

Understanding the different Kubernetes pod states enables you to monitor your application’s health and take the necessary actions when needed. Here are the key pod states you should know:

1. Pending

Kubernetes has accepted the pod, but it has not yet been scheduled and started. This occurs while Kubernetes is downloading container images or still looking for a suitable node to run your pod. A pod stuck in Pending for a long time typically indicates a configuration issue or insufficient resources.

2. Running

Your pod has at least one active container. The containers are running, but that doesn’t mean everything is functional: your application may still be misbehaving even though the pod reports Running.

3. Succeeded

You typically see this state with jobs or one-time tasks that are designed to run once and finish. It means all containers in the pod have completed their tasks successfully and won’t restart.

4. Failed

The Failed state means one or more containers in the pod have stopped, either because of an error or because the system terminated them. It indicates that something went wrong with your application, or that the container couldn’t restart correctly. Failed pods often need a restart.

5. Unknown

This indicates that the node where your pod should be running has lost contact with the Kubernetes control plane. Node failures, network problems, or other infrastructure issues are the usual causes. When you see this state, it’s hard to tell what is actually happening to the pod.
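You can read these states straight from the API. A small sketch, assuming kubectl is pointed at a live cluster and `<pod-name>` is a placeholder:

```shell
# Read a single pod's phase (Pending, Running, Succeeded, Failed, or Unknown):
kubectl get pod <pod-name> -o jsonpath='{.status.phase}'

# Or list every pod alongside its phase:
kubectl get pods -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
```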

How to restart pods in Kubernetes using kubectl

When you search for how to restart a Kubernetes pod using kubectl, the first thing that comes to mind is the command:

kubectl restart pod 

However, that command does not exist. Instead, there are several reliable methods to restart Kubernetes pods using kubectl. Below are the most effective and commonly used approaches:

1. Restart pods using kubectl rollout restart

This is the most commonly used method and follows Kubernetes best practices for restarting pods managed by a deployment. It performs a controlled restart without downtime by creating new pods before removing the old ones.

Commands to use:

kubectl rollout restart deployment/my-app

This command replaces existing pods with new ones: it starts the new pods, waits for them to become ready, and only then removes the old ones. This approach keeps your app available during the restart.

To restart pods in a deployment within a specific namespace:

kubectl rollout restart deployment/my-app -n your-namespace

To check the status of your restart:

kubectl rollout status deployment/my-app

Choose this strategy when you want minimal downtime, the safest option, or when your pods are managed by a deployment.
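Under the hood, rollout restart simply stamps the pod template with a `kubectl.kubernetes.io/restartedAt` annotation; Kubernetes sees the template change and performs a normal rolling update. A sketch you can use to verify this, assuming a deployment named my-app:

```shell
kubectl rollout restart deployment/my-app

# The annotation added by the restart (an RFC 3339 timestamp):
kubectl get deployment my-app \
  -o jsonpath='{.spec.template.metadata.annotations.kubectl\.kubernetes\.io/restartedAt}'
```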

2. Delete individual pods to force restart

With this method, you delete pods to force Kubernetes to recreate them. It’s simpler than a rollout restart, but you need to be careful about which pods you delete.

If the pod is managed by a deployment, replica set, or equivalent controller, Kubernetes immediately creates a new one when you delete the existing one. However, this may temporarily disrupt service.

Here’s how to go about it:

# List all pods to see what you're working with

kubectl get pods

# To delete a specific pod

kubectl delete pod <pod-name>

# To delete multiple pods at once

kubectl delete pod <pod-1> <pod-2>

# To delete and wait until the pod is fully removed

kubectl delete pod <pod-name> --wait=true

# To force delete a stuck pod (use with caution)

kubectl delete pod <pod-name> --grace-period=0 --force

Delete only controller-managed pods. A standalone pod that isn’t managed by anything is gone for good once deleted; nothing will recreate it.
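Before deleting, you can check whether a pod actually has a controller behind it by looking at its `ownerReferences`. A sketch, with `<pod-name>` as a placeholder:

```shell
# Prints the controller kind (e.g. ReplicaSet) for a managed pod,
# and prints nothing for a standalone pod:
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}'
```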

3. Scale deployment replicas to restart pods

This strategy scales your deployment down to zero replicas for a short time, which stops all the pods, then scales back up to the number you started with. In effect, Kubernetes lets you turn your application off and on again in a controlled way.

1. Check how many replicas you currently have

kubectl get deployment my-app

2. Scale down to zero

kubectl scale deployment my-app --replicas=0

3. Lastly, scale back up to your original number (creates new pods)

kubectl scale deployment my-app --replicas=3

When you scale down to zero, Kubernetes deletes all the pods in that deployment. When you scale back up, it creates new pods from scratch. This approach is more aggressive than rollout restart, but sometimes necessary when you need a complete fresh start.
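If you don’t want to hard-code the replica count, you can read it first and restore it exactly. A sketch, assuming a deployment named my-app:

```shell
# Remember the current replica count, scale to zero, then restore it.
replicas=$(kubectl get deployment my-app -o jsonpath='{.spec.replicas}')
kubectl scale deployment my-app --replicas=0
kubectl scale deployment my-app --replicas="$replicas"
```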

4. Update environment variables to trigger a restart

This is yet another clever method for restarting pods: you modify the deployment’s configuration slightly. Kubernetes treats a change to a deployment’s environment variables as a configuration change and automatically restarts the pods to apply it.

The key here is that you don’t have to change your environment variables in any meaningful way. To trigger the restart, update a timestamp or add a dummy variable.

For instance:

You can set or update an environment variable:

kubectl set env deployment/my-app RESTART_TRIGGER=$(date +%s)

Or

You can also edit the deployment directly:

kubectl edit deployment my-app

Then add or modify any environment variable in the editor.

The benefit of this approach is that it follows the same safe rolling strategy as a rollout restart, so there’s no downtime during the restart process.
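The timestamp trick itself is plain shell and can be checked locally: `date +%s` prints the current Unix time, so each invocation in a new second yields a different value, which is what makes Kubernetes see the pod template as changed. `RESTART_TRIGGER` is an arbitrary name; Kubernetes only cares that the template differs:

```shell
# Build the key=value pair that `kubectl set env` would apply.
# Each run (in a new second) produces a different value.
trigger="RESTART_TRIGGER=$(date +%s)"
echo "$trigger"
```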

5. Replacing pods manually

This method involves deleting specific pods and then manually creating replacements, using either the same configuration or an updated version. It gives you total control over both the deletion and creation steps.

1. Get the pod configuration and save it

kubectl get pod <pod-name> -o yaml > pod-backup.yaml

2. Delete the existing pod

kubectl delete pod <pod-name>

3. Create a new pod using the saved configuration

kubectl apply -f pod-backup.yaml

This method causes downtime because the old pod is removed before the new one starts. Use it only with standalone pods; don’t do this with pods managed by deployments.
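For reference, a standalone pod, the only kind this method is appropriate for, is defined by a manifest like the hypothetical one below; saving such a file and re-applying it is exactly what the steps above do. Note that YAML exported from a live pod also contains server-populated fields such as `status` and `metadata.resourceVersion`, which you may want to strip before re-applying.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-standalone-pod      # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25        # example image
```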

Conclusion

Restarting Kubernetes pods helps you in several practical ways, as discussed in this article, chief among them keeping your applications healthy. Whichever method you choose, the key is to understand when and why you are restarting.

Monitoring your pods also helps you make the right restart decision. Instead of guessing at what’s wrong, proper observability shows you exactly when pods need attention.