Mattias Fjellström

Originally published at mattias.engineer

Kubernetes-101: Pods, part 3

In this article we will revisit the topic of Pods. Remember Pods? Of course you do, we have been using them all along. But just so that we are on the same page: a Pod is sort of a wrapper around one or more containers where our application source code lives.

Previously we have seen how we can interact with our Pods and how we work with Pods together with other objects in a Kubernetes cluster. In this article we will:

  • See how to get the logs from the applications in our Pods
  • See how to use container probes (and define what a container probe is)
  • Explore the Pod manifest

Logs

When we write our applications we usually add logging statements for interesting events in our code [1]. When writing cloud-native applications for a platform such as Kubernetes we should send our logs to standard output. Why? Our application should not have to bother with connections to an external log sink, or with retries when the logs can't be sent to their destination over a bad network connection. Instead we should log to standard output and let someone else deal with handling the logs. Who is this someone else? It could be the Kubernetes platform itself, or, more commonly, a dedicated logging tool installed in the cluster: either globally for the whole cluster, or as a sidecar container next to your application container.

In this section we will not bother with sending the logs to a third-party log-analyzing tool, instead we will see how we can view logs via the Kubernetes platform itself. This is good enough in many cases, especially for quick debugging when a third-party tool is overkill.

For this section I will create a Service and a Deployment with three Pods, all Pods running an Nginx web-server. The manifest for this application looks like this:

# application.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80

I create the Service and the Deployment with kubectl apply:

$ kubectl apply -f application.yaml

service/nginx-service created
deployment.apps/nginx-deployment created

Since I am using Minikube I will expose my Service with minikube service nginx-service --url (see the article on Services for additional details on why this is needed):

$ minikube service nginx-service --url

http://127.0.0.1:54568

If I now go to http://127.0.0.1:54568 in my browser I get the Nginx welcome page:

nginx welcome page

I can see my three Pods with kubectl get pods:

$ kubectl get pods

NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-cd55c47f5-6tfjf   1/1     Running   0          6m2s
nginx-deployment-cd55c47f5-gbwct   1/1     Running   0          6m2s
nginx-deployment-cd55c47f5-mqc8k   1/1     Running   0          6m2s

To see the logs from one of these Pods I can use the kubectl logs <pod name> command:

$ kubectl logs nginx-deployment-cd55c47f5-6tfjf

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/01/05 18:39:11 [notice] 1#1: using the "epoll" event method
2023/01/05 18:39:11 [notice] 1#1: nginx/1.23.3
2023/01/05 18:39:11 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/01/05 18:39:11 [notice] 1#1: OS: Linux 5.15.49-linuxkit
2023/01/05 18:39:11 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/01/05 18:39:11 [notice] 1#1: start worker processes
2023/01/05 18:39:11 [notice] 1#1: start worker process 29
2023/01/05 18:39:11 [notice] 1#1: start worker process 30
2023/01/05 18:39:11 [notice] 1#1: start worker process 31
2023/01/05 18:39:11 [notice] 1#1: start worker process 32
2023/01/05 18:39:11 [notice] 1#1: start worker process 33
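
If a Pod runs more than one container, for example when a logging sidecar is involved, kubectl logs needs to know which container I mean. The container is selected with the -c (or --container) flag:

$ kubectl logs <pod name> -c <container name>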

It is often not enough to see the logs from a single Pod. I can instead point kubectl logs at the Deployment with kubectl logs deployment/<deployment name>. Note that kubectl then picks one of the Deployment's Pods for me, as the first line of the output shows:

$ kubectl logs deployment/nginx-deployment

Found 3 pods, using pod/nginx-deployment-cd55c47f5-gbwct
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/01/05 18:39:08 [notice] 1#1: using the "epoll" event method
2023/01/05 18:39:08 [notice] 1#1: nginx/1.23.3
2023/01/05 18:39:08 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/01/05 18:39:08 [notice] 1#1: OS: Linux 5.15.49-linuxkit
2023/01/05 18:39:08 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/01/05 18:39:08 [notice] 1#1: start worker processes
2023/01/05 18:39:08 [notice] 1#1: start worker process 29
2023/01/05 18:39:08 [notice] 1#1: start worker process 30
2023/01/05 18:39:08 [notice] 1#1: start worker process 31
2023/01/05 18:39:08 [notice] 1#1: start worker process 32
2023/01/05 18:39:08 [notice] 1#1: start worker process 33
172.17.0.1 - - [05/Jan/2023:18:42:21 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:108.0) Gecko/20100101 Firefox/108.0" "-"
2023/01/05 18:42:21 [error] 29#29: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "127.0.0.1:54568", referrer: "http://127.0.0.1:54568/"
172.17.0.1 - - [05/Jan/2023:18:42:21 +0000] "GET /favicon.ico HTTP/1.1" 404 153 "http://127.0.0.1:54568/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:108.0) Gecko/20100101 Firefox/108.0" "-"
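
To see the logs from all three Pods at once I can use a label selector instead of the Deployment name. A minimal sketch, assuming a reasonably recent kubectl (the --prefix flag marks each line with the Pod it came from):

$ kubectl logs -l app=nginx --prefix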

If I want to see the logs arriving in real-time I can add the -f (or --follow) flag to the previous commands.
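
For example, to stream new log lines from the Deployment as they arrive:

$ kubectl logs deployment/nginx-deployment -f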

Another useful flag is --since=<timespan>, which restricts how far back in time logs are fetched. For instance, to see the logs from the last five minutes I add --since=5m, and to see the logs from the last three hours I add --since=3h.

If you don't want to restrict the results by time you can instead specify exactly how many log lines you want with the --tail flag. For instance, to see the last 15 log lines you add --tail=15.

Those are the basics of getting logs from Pods and Deployments. With these commands, combined with regular Linux tools like grep, you can do a lot of debugging. If you are planning to take the Certified Kubernetes Application Developer (CKAD) exam, you will most likely not need to know anything more about logs than what I have covered here. In a production environment you will not use these commands that much; you would rather set up a third-party tool (maybe Elasticsearch) to which you export all your logs.
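
These flags can be combined, and since kubectl writes the logs to standard output you can pipe them through regular shell tools. For instance, to search the last hour of logs for the word error (just an illustrative pattern):

$ kubectl logs deployment/nginx-deployment --since=1h --tail=100 | grep error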

Container probes

What is a container probe? To quote the official Kubernetes documentation [2]:

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.

As the name probe suggests, this is an interaction with the container from the outside. We are probing our containers!

There are three different kinds of probes:

  • Readiness probe
  • Liveness probe
  • Startup probe

In the following subsections I will go through what these different probes are for and show examples of how they are used.

In general a probe tests for a certain condition at a given frequency. A probe can perform one of the following kinds of checks:

  • exec: executes a given command in the container; if the command exits with code 0 the probe attempt is considered successful
  • grpc: performs a gRPC health check call; if the call returns a status of SERVING the probe attempt is considered successful
  • httpGet: performs an HTTP GET request; if the response has a status code in the range 200-399 the probe attempt is considered successful
  • tcpSocket: performs a TCP check against a port; if the port is open the probe attempt is considered successful

Of the four types listed above, httpGet and exec are the most common.
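
Both httpGet and exec appear in the examples later in this section. As a minimal sketch of one of the other types, here is what a tcpSocket readiness probe could look like (the Pod is a hypothetical example reusing the Nginx image):

# tcp-probe-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-tcp-probe
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
      readinessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10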

Readiness probe

When a Pod starts up it is possible that Kubernetes considers it ready to receive traffic before the application inside is actually ready. If Kubernetes allows traffic to be sent to the Pod in this state the caller will be met with an error.

How can we avoid this situation? One solution is to use a readiness probe. A readiness probe checks the status of our container(s) and waits until a certain condition is fulfilled before it reports that the container is ready. If the readiness probe later starts failing, the Pod is removed from the Service endpoints until the probe succeeds again; unlike a failing liveness probe (see below) it does not trigger a restart.

Let me add a readiness probe of type httpGet to my Nginx Pod:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
      readinessProbe:
        httpGet:
          port: 80
          path: "/"
        initialDelaySeconds: 20
        periodSeconds: 5
        successThreshold: 3

This httpGet probe will make a GET request to the root path / on port 80. I have specified an initialDelaySeconds of 20, so the probe waits 20 seconds before it starts probing. I have specified a periodSeconds of 5, which means the probe runs a check every five seconds. I require three successful attempts (successThreshold: 3) before the probe is considered successful and the container can start to receive traffic.

I create this Pod using kubectl apply and then I watch the life of my Pod for a while with kubectl get pod nginx-pod -w; the -w flag activates watch mode, which prints updates to the output:

$ kubectl get pod nginx-pod -w

NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   0/1     Running   0          8s
nginx-pod   1/1     Running   0          36s

I see that after 36 seconds the Pod reports 1/1 containers ready and can start to receive traffic. The timing matches the probe configuration: a 20 second initial delay followed by three successful checks five seconds apart adds up to roughly 30-35 seconds.

Liveness probe

A liveness probe is a probe that continually checks a given condition to determine if the container is alive. If the probe reports an error the container is considered to have died and it will be restarted.

An example of a liveness probe is shown in the following manifest [3]:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
    - name: liveness
      image: registry.k8s.io/busybox
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5

I create this Pod using kubectl apply and then I watch the life of my Pod for a while:

$ kubectl apply -f pod.yaml

pod/liveness-exec created

$ kubectl get pod liveness-exec -w

NAME            READY   STATUS    RESTARTS   AGE
liveness-exec   1/1     Running   0          25s
liveness-exec   1/1     Running   1 (1s ago)   77s
liveness-exec   1/1     Running   2 (1s ago)   2m32s

The liveness probe is configured to run cat /tmp/healthy, and as long as that file exists the exit code of the cat command is 0. But in the .spec.containers[*].args section I configured the command touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600, which creates the file, sleeps for 30 seconds, deletes the file, and then sleeps for another 600 seconds. After the first 30 seconds the probe starts failing, and once it has failed enough times in a row (the failureThreshold, which defaults to 3) the container is restarted; this is the repeating pattern the watch output above shows.

I can also view the events from my Pod to understand what is going on. If I run kubectl describe pod I see the events at the bottom of the output (the output is truncated for clarity):

$ kubectl describe pod liveness-exec

Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  4m28s                 default-scheduler  Successfully assigned default/liveness-exec to minikube
  Normal   Pulled     4m25s                 kubelet            Successfully pulled image "registry.k8s.io/busybox" in 1.827584708s
  Normal   Pulled     3m12s                 kubelet            Successfully pulled image "registry.k8s.io/busybox" in 521.407083ms
  Normal   Created    117s (x3 over 4m25s)  kubelet            Created container liveness
  Normal   Started    117s (x3 over 4m25s)  kubelet            Started container liveness
  Normal   Pulled     117s                  kubelet            Successfully pulled image "registry.k8s.io/busybox" in 554.210958ms
  Warning  Unhealthy  72s (x9 over 3m52s)   kubelet            Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Killing    72s (x3 over 3m42s)   kubelet            Container liveness failed liveness probe, will be restarted
  Normal   Pulling    42s (x4 over 4m27s)   kubelet            Pulling image "registry.k8s.io/busybox"

Startup probe

A startup probe is a probe that waits for a slow-starting container to properly start up before the readiness probe (or liveness probe) takes over. This is useful to avoid having to configure an unreasonably lenient readiness or liveness probe just to handle slow-starting containers. Why would a container be slow-starting? I have no experience of a situation where I have needed a startup probe, so I can't give real-world examples. But it could be that your application performs time-consuming database migrations as part of its startup, and until those are done your application will not respond to anything.

An example of a startup probe is shown in the following manifest:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
      startupProbe:
        httpGet:
          path: "/"
          port: 80
        failureThreshold: 30
        periodSeconds: 10

This startupProbe is similar to the readiness probe we saw above. It makes HTTP GET requests to the root path / on port 80. It makes a request every 10 seconds and it will try up to 30 times (failureThreshold: 30), giving the container up to 300 seconds to start. If the container responds with an HTTP status code between 200 and 399 the probe is considered successful, and if I had defined readiness probes or liveness probes they would take over at this point.
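
Startup and liveness probes are often combined on the same container, since the liveness probe only starts running once the startup probe has succeeded. A minimal sketch of that combination for the same Nginx container (the probe values are just illustrative):

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
      startupProbe:
        httpGet:
          path: "/"
          port: 80
        failureThreshold: 30
        periodSeconds: 10
      livenessProbe:
        httpGet:
          path: "/"
          port: 80
        periodSeconds: 10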

Exploring the Pod manifest

Are there any additional pieces of interest hidden in the Pod manifest? Of course there are. However, not everything is immediately interesting until we actually realize that we need it. There are a few things I would like to mention:

  • Init containers
  • Image pull secrets
  • Restart policy
  • Service account name

In the subsections that follow I will briefly describe these topics and show sample manifests where they are used, but I will only create and explore running instances of a few of them.

Init containers

We already know that Pods can have one or more containers. However, we can also define something known as init containers. An init container runs before our regular containers start, and it runs to completion (sort of like a Job). If you define several init containers they run one by one, and all of them need to complete successfully for the regular containers to start. If one of the init containers fails, the Pod might restart (depending on the restart policy, see below) and retry all the init containers from the beginning.

An example of what init containers look like in a sample manifest is shown below:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx
  initContainers:
    - name: init-container-1
      image: busybox:1.28
      command: ["sh", "-c", "sleep 10"]
    - name: init-container-2
      image: busybox:1.28
      command: ["sh", "-c", "sleep 20"]

This is a simple example with two init containers that each run a sleep command for a different number of seconds. If I create this Pod with kubectl apply and watch its evolution with kubectl get pod nginx -w I see this:

$ kubectl apply -f pod.yaml

pod/nginx created

$ kubectl get pod nginx -w

NAME    READY   STATUS            RESTARTS   AGE
nginx   0/1     Init:0/2          0          4s
nginx   0/1     Init:0/2          0          5s
nginx   0/1     Init:1/2          0          15s
nginx   0/1     Init:1/2          0          16s
nginx   0/1     PodInitializing   0          35s
nginx   1/1     Running           0          37s

I can see in the STATUS column that the init containers run one by one, and once they have both completed my Pod starts up and eventually reaches the status Running.
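
A more realistic use of an init container is to block the main container until some dependency is available. A minimal sketch, assuming the application depends on a Service named my-database in the same namespace (both the Service and its name are hypothetical):

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx
  initContainers:
    - name: wait-for-database
      image: busybox:1.28
      command:
        - sh
        - -c
        - until nslookup my-database; do echo waiting for my-database; sleep 2; done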

Image pull secrets

So far we have only used publicly available images from Docker Hub. Docker Hub is just one of many available image registries, and chances are you will be using something other than Docker Hub. Chances are also that you will be using a private image registry that requires credentials to access. This is where the image pull secret comes in.

To create an image pull secret we create a Secret object in our cluster (see the article on Secrets for details):

apiVersion: v1
kind: Secret
metadata:
  name: my-registry-secret
data:
  .dockerconfigjson: <long secret value>
type: kubernetes.io/dockerconfigjson

The .data section contains a single file-like key, .dockerconfigjson, with the value of the secret. The type of the Secret is kubernetes.io/dockerconfigjson; it won't work with the generic type.
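
Instead of crafting the .dockerconfigjson value by hand you can let kubectl generate this kind of Secret for you. A sketch with placeholder registry details:

$ kubectl create secret docker-registry my-registry-secret \
    --docker-server=private-registry.io \
    --docker-username=<username> \
    --docker-password=<password>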

A sample Pod manifest using the previous secret as an imagePullSecret looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: private-pod
spec:
  containers:
    - name: private-container
      image: private-registry.io/private-container:v1
  imagePullSecrets:
    - name: my-registry-secret

Restart policy

What should happen if a Pod shuts down due to an unrecoverable error? What should happen if one of the init containers in our Pod fails? We can control this with a restart policy.

The possible values for restartPolicy are Always, OnFailure, and Never. The default value is Always. If we set a restart policy of OnFailure the Pod is restarted only if a container exits with a code other than 0. If we set a restart policy of Never the Pod is not restarted no matter how it exits.

An example manifest using a restart policy is shown next:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  restartPolicy: OnFailure
  containers:
    - name: nginx-container
      image: nginx

What happens if we use a Deployment to create our Pods? In that case we must use a restartPolicy of Always. No other value is allowed. This makes sense since the Deployment will strive to keep the number of Pods equal to what we specify in .spec.replicas for the Deployment.

Service Account name

All the Pods we have created so far have used the default Service Account. What is a Service Account? We will explore this topic further in a future article, but for now it suffices to say that a Service Account is an identity that is assigned to our Pod. A Service Account is a Kubernetes object that we can create with a manifest, like any other type of resource. A Service Account can have Roles, which are sets of permissions that state what the Service Account is allowed to do in the Kubernetes cluster.

If we do not assign an explicit Service Account to our Pods then the default Service Account is used.
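
The Service Account itself only takes a small manifest to create. A minimal sketch for the my-service-account used in the Pod manifest below:

# service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account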

An example manifest that explicitly assigns a non-default Service Account named my-service-account to a Pod looks like this (note that the Service Account my-service-account is assumed to have been created before this Pod):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  serviceAccountName: my-service-account
  containers:
    - name: nginx-container
      image: nginx

Summary

This article has been a whirlwind of information related to Pods! We have seen that there is a lot we can configure when it comes to Pods. That is a good thing, since Pods are the fundamental building blocks in Kubernetes!

A summary of all the things we looked more closely at:

  • How to fetch logs from Pods and Deployments using kubectl logs
  • How to set up three different kinds of probes to control the startup behavior and potentially trigger restarts: readiness probes, liveness probes, and startup probes

We looked a bit less closely at:

  • How to add init containers to our Pods to run initialization tasks
  • How to add image pull secrets to our Pods to be able to access images in private repositories
  • How to add a restart policy to a Pod to control what should happen if the Pod fails
  • How to add a service account to our Pod to control what the Pod should be allowed to do in our Kubernetes cluster

In the next article I will start digging into the Helm package manager.


  1. At least we should! 

  2. Fetched from https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes at 2023-01-05. 

  3. The sample manifest is fetched from https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command at 2023-01-05. 
