Using kubectl to Restart a Kubernetes Pod

Sometimes when something goes wrong with one of your pods (for example, a bug causes it to terminate unexpectedly), you will need to restart your Kubernetes pod. This tutorial will show you how to use kubectl to restart a pod.

In Kubernetes, a pod is the smallest API object, or in more technical terms, it’s the atomic scheduling unit of Kubernetes. In a cluster, a pod represents a running application process. It holds one or more containers along with the resources shared by those containers, such as storage and network.

The status of a pod tells you what stage of the lifecycle it’s at currently. There are five stages in the lifecycle of a pod:

  1. Pending: This state indicates that at least one container within the pod has not yet been created.

  2. Running: All containers have been created, and the pod has been bound to a Node. At this point, the containers are running, or are being started or restarted.

  3. Succeeded: All containers in the pod have been successfully terminated and will not be restarted.

  4. Failed: All containers have been terminated, and at least one container has failed. The failed container exited with a non-zero status.

  5. Unknown: The status of the pod cannot be obtained.
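
To check which phase a pod is currently in, you can read the status field of the pod object directly. One quick way to print just the phase (the pod and namespace names here are placeholders):

# Print only the lifecycle phase of the pod (e.g., Running or Pending)
kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.status.phase}'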

Why You Might Want to Restart a Pod

First, let’s talk about some reasons you might restart your pods:

  • Resource limits aren’t stated correctly, or the software behaves in an unforeseen way. For example, if a container limited to 600 Mi of memory attempts to allocate more, the pod will be terminated with an OOM (out of memory) error. You must restart your pod in this situation after modifying the resource specification (see the sketch after this list).

  • A pod is stuck in a terminating state. You can spot this by looking for pods whose containers have all terminated yet the pod itself is still running. This usually happens when a cluster node is taken out of service unexpectedly, and the cluster scheduler and controller-manager cannot clean up all the pods on that node.

  • An error can’t be fixed.

  • Timeouts.

  • Mistaken deployments.

  • Requesting persistent volumes that are not available.
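
As a reference for the OOM scenario above, here is a minimal sketch of a pod spec with an explicit memory limit. The pod name, namespace, and image are placeholders, not from the original setup:

apiVersion: v1
kind: Pod
metadata:
  name: shop-pod             # hypothetical pod name
  namespace: service
spec:
  containers:
  - name: shop
    image: nginx:1.25        # example image
    resources:
      requests:
        memory: "600Mi"      # the scheduler reserves this much memory
      limits:
        memory: "600Mi"      # allocating beyond this limit gets the container OOM-killed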

Restarting Kubernetes Pods Using kubectl

You can use docker restart {container_id} to restart a container in the Docker process, but there is no equivalent restart command in Kubernetes. In other words, there is no kubectl restart {podname}.

Your pod may occasionally develop a problem and suddenly shut down, forcing you to restart the pod. But there is no effective method to restart it, especially if there is no YAML file. Never fear, let’s go over a list of options for using kubectl to restart a Kubernetes pod.

Method 1: kubectl scale

Where there is no YAML file, a quick solution is to scale the number of replicas using the kubectl scale command and set the replicas flag to zero:

kubectl scale deployment shop --replicas=0 -n service


Note that a Deployment does not manage pods directly; it manages a ReplicaSet object, which is composed of the desired number of replicas and a pod template.
Example: Pod Template Used by ReplicaSet to Create New Pods

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: shop               # ReplicaSet name (placeholder)
  labels:
    app: shop
    tier: frontend
spec:
  # change replicas according to your case
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: shop
        image: nginx:1.25  # example image (placeholder)

This command scales the number of replicas that should be running to zero. Confirm that the pods have been terminated:

kubectl get pods -n service

To restart the pod, set the number of replicas to at least one:

kubectl scale deployment shop --replicas=2 -n service

deployment.apps/shop scaled

Check the pods now:

kubectl get pods -n service

Your Kubernetes pods have successfully restarted. Keep in mind that this method briefly takes the application offline, since all replicas are terminated before new ones are created.

Method 2: kubectl rollout restart

Method 1 is a quicker solution, but the simplest way to restart Kubernetes pods is using the rollout restart command.

The controller kills one pod at a time, relying on the ReplicaSet to scale up new pods until all of them are newer than the restart time. Rolling out a restart is the ideal approach to restarting your pods because your application will not be affected or go down.
To roll out a restart, use the following command (replace the placeholders with your deployment name and namespace):

kubectl rollout restart deployment <deployment_name> -n <namespace>
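
To watch the restart as it progresses, you can follow it with the rollout status command (same placeholders as above):

kubectl rollout status deployment <deployment_name> -n <namespace>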

Method 3: kubectl delete pod

Because Kubernetes is a declarative API, deleting the pod API object with kubectl delete pod <pod_name> -n <namespace> makes the actual state contradict the expected one, so Kubernetes automatically recreates the pod to keep it consistent with the expected state. However, if the ReplicaSet manages a lot of pod objects, deleting them manually one by one is very troublesome. Instead, you can use the following command to delete the ReplicaSet:

kubectl delete replicaset <replicaset_name> -n <namespace>
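
If you don't know the ReplicaSet's name, you can list the ReplicaSets in the namespace first:

kubectl get replicaset -n <namespace>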

Method 4: kubectl get pod

Use the following command:

kubectl get pod <pod_name> -n <namespace> -o yaml | kubectl replace --force -f -

Here, since there is no YAML file and the pod object is already running, it cannot simply be recreated from a manifest, nor can it be scaled to zero, but it can be restarted by the above command. The command gets the YAML definition of the currently running pod and pipes that output to kubectl replace, which reads it from standard input and forcibly replaces (that is, deletes and recreates) the pod, achieving a restart.
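
For example, using the hypothetical shop-pod from earlier:

kubectl get pod shop-pod -n service -o yaml | kubectl replace --force -f -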

Conclusion

In this summary, you were briefly introduced to Kubernetes pods as well as some reasons why you might need to restart them. In general, the most recommended way to ensure no application downtime is to use kubectl rollout restart deployment <deployment_name> -n <namespace>.

While Kubernetes is in charge of pod orchestration, it’s no effortless task to continuously ensure that pods always run on highly available, cost-efficient nodes that are fully utilized.
