
Mattias Fjellström

Posted on • Originally published at mattias.engineer

Kubernetes-101: Pods, part 2

In the previous article we established some basics about what a pod is. Let us repeat the definition we ended up with here: a pod is a number of containers (such as Docker containers) clustered together. To come up with that definition I stole the corresponding definition of pod from Merriam-Webster, and replaced the word animals with containers, and the word whales with Docker containers. I am pretty pleased about that. Apart from defining the word pod and talking about what we use pods for in Kubernetes, we also went through how to create a pod using an imperative and a declarative approach.

In this article we will do a few more things with a basic pod before we move on to other Kubernetes objects that will make our lives with pods easier (deployment, service, ingress, etc.). Specifically, we will see how we can interact with our pods from our local machine, and we will look at a few more ways to configure our pods through the pod manifest¹.

Let us get started!

Interacting with a pod

We are not only interested in creating a pod and leaving it to live the rest of its life in isolation without a purpose. We want to be able to interact with our pod. Our Nginx pod² from the previous article wants to serve webpages over HTTP, because it is a web server after all. So there must be a way to do just that: send HTTP requests to our Nginx pod!

However, we have to learn to walk before we can run!

Port-forwarding

The simplest way to expose a pod that we are running in a cluster is to use the kubectl port-forward command. Let us begin by creating our Nginx pod again. I will use the declarative approach to create my pod, so I will need my pod manifest to start with:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:latest
      ports:
        - containerPort: 80

This is the exact same manifest from the previous article, so I will not go through the details of it again. I create the pod from the manifest with the kubectl apply command:

$ kubectl apply -f pod.yaml

pod/web created

Excellent! Next, I use the kubectl port-forward command to forward all traffic from my local port 8080 to port 80 on the web pod inside of my Kubernetes cluster:

$ kubectl port-forward web 8080:80

Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

To see if this actually worked I can visit localhost:8080 in my browser, or I can use the curl command in my terminal:

$ curl localhost:8080

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</head>
<body>
<h1>Welcome to nginx!</h1>
...
</body>
</html>

We have now successfully interacted with our pod for the first time. As you might already suspect, there is a better way to send HTTP traffic to a pod. But, we're still learning how to walk before we can run. We'll look at the Kubernetes service object as well as the ingress object in future articles.

Before we leave this section: when would you use port-forward like this in real life? Most likely only in a development cluster, to troubleshoot some issue you have with your pods. It is possible to do this in your production cluster as well, but be careful³!
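The port-forward command accepts a few useful variations. A quick sketch against the same web pod (the alternative port 9090 is an arbitrary example):

```shell
# Forward a different local port (9090) to container port 80.
# The pod/ prefix makes the resource type explicit.
kubectl port-forward pod/web 9090:80

# Listen on all local interfaces instead of only 127.0.0.1, so
# other machines on your network can reach the forwarded port.
kubectl port-forward --address 0.0.0.0 pod/web 8080:80
```

Both commands keep running in the foreground until you stop them with Ctrl+C, at which point the forwarding is torn down.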

Starting a shell session on our pod

There is another way we can interact with our pod. Imagine if we could start a shell session right inside of our pod and see what it looks like from the inside. Hold on a moment: I keep saying pod when I really mean a container, but as long as we only have a single container in our pod I think that is OK.

Back to our imagination: Actually, we don't have to imagine it, we can just do it. We will use the kubectl exec command:

$ kubectl exec -it web -- /bin/bash

root@web:/#

We are now inside of our container as the root user. Let us break down the kubectl exec command a bit to understand what is going on:

  • kubectl is the Kubernetes CLI tool (duh!)
  • exec means we will execute something
  • -it forwards your terminal input into the pod, and forwards the pod's output to your terminal
  • -- separates the kubectl command from the command you want to run inside of the container
  • /bin/bash is what we want to run inside of the container, in this case we want to start bash

Why would we want to enter our pod in this way? It could be useful when debugging an issue related to environment variables or attached volumes (we have not covered volumes yet in this series; it will take some time before we get there). It could also be useful for debugging network access from the pod, e.g. to test whether the pod can reach the internet or another pod it needs to communicate with.

Note that which commands we can run inside of the pod depends on what is available in the actual container image we stepped into. If you use a very slim image there might not be many Linux tools (ls, cat, vim, curl, etc.) available. Usually, though, you can find the bare minimum of tools needed to perform many common tasks.
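You can also skip the interactive shell entirely and run a single command in the container. As a sketch against our web pod (the directory path below is the default content root of the official Nginx image, but that is an assumption about the image, not something we configured):

```shell
# Run a one-off command without opening an interactive shell
kubectl exec web -- env

# List the default content directory served by the Nginx image
kubectl exec web -- ls /usr/share/nginx/html
```

This is handy in scripts, or when you only need one piece of information and do not want a full shell session.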

Exploring more of the pod manifest

First of all a reminder: a Kubernetes manifest is a YAML file describing a given object in Kubernetes. I say YAML, but in reality it could also be JSON. But no one is crazy enough to use JSON for this purpose, yes?

If you are interested in all the things you could possibly specify in the pod manifest you can go to the official API documentation. In the rest of this section we will take a look at two common things you will encounter in your Kubernetes journey: environment variables, and resource requests and limits.
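You do not even have to leave your terminal to browse that documentation: kubectl has a built-in reference for every manifest field. A quick sketch:

```shell
# Show documentation for the pod spec
kubectl explain pod.spec

# Drill into a nested field, e.g. the env list of a container
kubectl explain pod.spec.containers.env
```

Each invocation prints the field's type and a short description, which is often faster than searching the web.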

Environment variables

It is a good practice to put your application configuration outside of your application code in order to be able to configure the application without having to rebuild the code into a new image and create a new container out of it. One simple way to externalize your configuration is to use environment variables. Providing environment variables to your application through the pod manifest is easy:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:latest
      ports:
        - containerPort: 80
      env:
        - name: ENVIRONMENT_VARIABLE_1
          value: "value 1"
        - name: ENVIRONMENT_VARIABLE_2
          value: "value 2"

Here I have added a new section spec.containers[].env that is a list of name/value pairs. There are alternative ways to provide environment variables⁴, but we will stick to this way for now.
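One such alternative, as a hedged example, is the valueFrom field, which can pull a value from the pod's own metadata instead of hard-coding it (the variable name MY_POD_NAME is just an illustration):

```yaml
# Snippet of spec.containers[].env using a field reference
env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```

With this in place the container sees an environment variable MY_POD_NAME whose value is the pod's name, in our case web.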

If we create our pod using kubectl apply and then enter the pod with kubectl exec we can verify that the environment variables have been set:

$ kubectl apply -f pod.yaml

pod/web created

$ kubectl exec -it web -- /bin/bash

root@web:/# env | grep ENVIRONMENT
ENVIRONMENT_VARIABLE_1=value 1
ENVIRONMENT_VARIABLE_2=value 2

It looks like it worked!

Resource requirements and limits

Our applications consume CPU and memory. How much CPU and memory does your application require to run? There is no generic answer to that question. You either have to know what your application needs from experience, or you have to gain that experience by testing your application and measuring the results.

In the pod manifest you can specify resource requests and limits that say how much CPU and memory your application requires, and how much it can at most consume before Kubernetes puts an end to the consumption. Let us add requests and limits to our Nginx pod and see what it looks like:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:latest
      ports:
        - containerPort: 80
      resources:
        requests:
          cpu: "0.5"
          memory: "100Mi"
        limits:
          cpu: "1.0"
          memory: "200Mi"

Here we have added the new section spec.containers[].resources where we have the two subsections requests and limits. I say that my container requests 0.5 CPU (equivalently written 500m, i.e. 500 "millicores") and 100 MiB of memory. I also say that the limit of how much resources my container can use is 1 CPU and 200 MiB of memory. If my container tries to use more CPU than its limit it will be throttled, and if it tries to use more memory than its limit it will be terminated (OOM-killed) by Kubernetes.

What happens if you do not specify resource limits for your containers? Then there is no upper bound on the amount of resources a container could consume, and a misbehaving container could starve everything else running on the same node. So it is a good idea to provide limits for CPU and memory.
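If you want sensible defaults without repeating them in every manifest, one option is a LimitRange object, which applies default requests and limits to containers created in its namespace. A minimal sketch, reusing the values from the pod above (the object name default-limits is arbitrary):

```yaml
# limitrange.yaml: default requests/limits for containers
# created in the namespace this object lives in
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
    - type: Container
      defaultRequest:   # applied when a container omits requests
        cpu: "0.5"
        memory: 100Mi
      default:          # applied when a container omits limits
        cpu: "1"
        memory: 200Mi
```

Containers that do specify their own requests and limits are unaffected; the defaults only fill in what is missing.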

Summary

We have broadened our knowledge about pods in this article. We looked at two different ways of interacting with our pods: how to port-forward traffic from your machine's localhost to a pod in the Kubernetes cluster, and how to start a shell session inside of our running pod. Next we looked at a few additional parts of the pod manifest that are of interest right now. Specifically, we saw how to provide environment variables to the containers in our pod, as well as how to specify resource requests and limits in terms of CPU and memory.

What is there left to say about pods? In the next article we will learn about a new object in Kubernetes: the deployment. A deployment will make our pods a bit more resilient to failures, and it will simplify the administration of additional copies of our pod. But hold on, there is still a lot more to say about pods themselves! Right now, though, it doesn't make sense to go through the full pod manifest and learn everything there is to know about pods. I will introduce additional details about pods as they become necessary in the future.


  1. If you don't remember: a manifest is simply a YAML file that declaratively specifies what a pod (or any other Kubernetes object) should look like 

  2. I say Nginx pod, but technically Nginx is just one (of potentially many) containers inside of our pod. You know what I mean, though! 

  3. Don't do it! 

  4. Because there are always more than one way to do things, right? 
