Mattias Fjellström

Originally published at mattias.engineer

Kubernetes-101: Deployments, part 2

One thing you might have wondered about when reading my previous article on Deployments in Kubernetes is: why is it called a Deployment? In this article we will update an existing Deployment and see how a new version is rolled out. This will feel more like a traditional deployment in the world of DevOps, which might make the name Deployment more tangible.

Something I failed to mention in the previous article on Deployments is that Deployments, along with ReplicaSets, StatefulSets, DaemonSets, and Jobs, are collectively referred to as workload resources. Why is this important? To be honest, it is not that important. But when you read the documentation you will see the term workload resource a lot, so it is good to be familiar with what it means.
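
If you want to see these workload resource kinds in your own cluster, one way (assuming kubectl is configured against a cluster) is to list the resources in the apps and batch API groups, which is where most of them live:

$ kubectl api-resources --api-group=apps
$ kubectl api-resources --api-group=batch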

In this article we will expand our knowledge about Deployments further. We will see how to update an existing Deployment and go through the details of what happens in the background. Along the way we will have to build Docker images and upload them to Docker Hub!

Updating a Deployment

Containerizing Nginx

To illustrate updating from one version of our application to another, it helps if the application says something like Hello from version 1 and Hello from version 2. To achieve this we will build our own Nginx container. Our Nginx container will serve the following index.html1:

<!-- index.html -->
<html>
  <body>
    <h1>Hello from version 1</h1>
  </body>
</html>

The simplest possible Dockerfile where we copy our index.html into the resulting container looks like this:

FROM nginx:1.23.1-alpine
COPY ./index.html /usr/share/nginx/html/index.html

With our index.html and Dockerfile in place we are ready to build our custom Nginx container and push it to Docker Hub2:

$ docker build -t mattiafj/custom-nginx:v1 .
$ docker push mattiafj/custom-nginx:v1
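
If you want to sanity-check the image before deploying it to Kubernetes, you can run it locally. A minimal sketch, assuming nothing else is listening on port 8080 (the container name nginx-v1-test is arbitrary):

# run the image in the background and map local port 8080 to the container's port 80
$ docker run --rm -d -p 8080:80 --name nginx-v1-test mattiafj/custom-nginx:v1
$ curl localhost:8080

# stop (and, thanks to --rm, remove) the test container
$ docker stop nginx-v1-test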

To prepare for our coming Deployment updates I repeat the steps above, but with the following index.html:

<!-- index.html -->
<html>
  <body>
    <h1>Hello from version 2</h1>
  </body>
</html>

And I tag the image with v2:

$ docker build -t mattiafj/custom-nginx:v2 .
$ docker push mattiafj/custom-nginx:v2
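
Assuming both builds and pushes succeeded, listing the local images for the repository should now show both tags:

$ docker images mattiafj/custom-nginx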

Create our Kubernetes Deployment

Let us create a new Deployment for our Nginx application. The Deployment manifest looks almost identical to what we had in the previous article on Deployments:

# deployment-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: mattiafj/custom-nginx:v1
          ports:
            - containerPort: 80

The difference from what we had before is that I have used the image mattiafj/custom-nginx:v1 that I built in the previous section. Note that this image is fetched from Docker Hub by default, and it works because I have a public repository. If my repository were not public I would have had to make a secret containing registry credentials available to my Pod; we will see examples of that in a later article. To create my Deployment I use kubectl apply:

$ kubectl apply -f deployment-v1.yaml

deployment.apps/nginx-deployment created

If I list my Deployments I see the following:

$ kubectl get deployments

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           33s

And if I list my ReplicaSets I see this:

$ kubectl get replicasets

NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-8646cd8464   3         3         3       52s

And finally, if I list my Pods I get this:

$ kubectl get pods

NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-8646cd8464-9bjlv   1/1     Running   0          1m21s
nginx-deployment-8646cd8464-qr4mw   1/1     Running   0          1m21s
nginx-deployment-8646cd8464-zmnlh   1/1     Running   0          1m21s

To make sure the correct Pods are deployed we can send a GET request to one of the Pods using curl. To be able to do this we must use kubectl port-forward, like we did in the second article about Pods:

$ kubectl port-forward nginx-deployment-8646cd8464-9bjlv 8080:80
$ curl localhost:8080

<html>
  <body>
    <h1>Hello from version 1</h1>
  </body>
</html>

This is the correct content of the index.html file we created earlier, so we are in a good starting position!
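
Another way to confirm which image the Deployment is running, without port-forwarding, is to inspect the Deployment itself. The exact output format may differ slightly between kubectl versions:

$ kubectl describe deployment nginx-deployment | grep Image

    Image:        mattiafj/custom-nginx:v1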

Perform a rolling update of a Deployment

One of the simpler updates we can perform is to change the container image from mattiafj/custom-nginx:v1 to mattiafj/custom-nginx:v2. The new Deployment manifest looks like this:

# deployment-v2.yaml updated
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: mattiafj/custom-nginx:v2 # updated
          ports:
            - containerPort: 80

In this case I created a new file called deployment-v2.yaml, but in reality you might instead have a single file named deployment.yaml and just update the values inside of that file. In future articles we will also see how we can use Helm to create our Kubernetes manifests. When using Helm we would not directly edit our manifest files, but more on that in the future!
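
As a side note, for quick experiments you can trigger the same update without editing any manifest at all, by pointing kubectl directly at the new image tag. We will stick with applying manifests in this article, since that keeps the manifest as the source of truth:

$ kubectl set image deployment/nginx-deployment nginx=mattiafj/custom-nginx:v2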

To perform a rolling update we simply run another kubectl apply using our new Deployment manifest:

$ kubectl apply -f deployment-v2.yaml

deployment.apps/nginx-deployment configured

The output tells us that our existing Deployment with the name nginx-deployment has been configured. If we had set a different name in .metadata.name in deployment-v2.yaml, Kubernetes would have assumed that this was a brand-new Deployment object, and we would have ended up with two parallel Deployments. Since we used the same name (nginx-deployment), Kubernetes correctly assumed that we want to update our existing object.
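
If you want to follow the rollout while it is in progress, kubectl can block until the new version has been fully rolled out. The intermediate output varies, but it should end with a success message along these lines:

$ kubectl rollout status deployment/nginx-deployment

deployment "nginx-deployment" successfully rolled out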

After performing the update, the list of our Deployments now looks like this:

$ kubectl get deployments

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           5m20s

We see that the age of the Deployment indicates that it is still the same Deployment object. Let us check out our ReplicaSets:

$ kubectl get replicasets

NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-6ddd894ff    3         3         3       36s
nginx-deployment-8646cd8464   0         0         0       5m40s

Here we see something interesting. It seems like we have two ReplicaSets. A new one has appeared (nginx-deployment-6ddd894ff), but the old one is still there as well (nginx-deployment-8646cd8464). We see that the old ReplicaSet has a desired/current/ready count of 0 Pods, while the new ReplicaSet has a count of 3 Pods. What has happened during the update of the Deployment is illustrated in the following three images.

A new ReplicaSet is created, but the old ReplicaSet still has three active Pods:

part 1

One Pod is terminated in the old ReplicaSet and a new Pod is created in the new ReplicaSet:

part 2

This process is continued until the new ReplicaSet has three Pods and the old ReplicaSet has zero Pods:

part 3

All of this happens very fast, so we won't be able to follow along with each step in our terminal. This is called a rolling update.
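
We can, however, confirm afterwards that the Deployment has gone through two revisions by looking at its rollout history. The CHANGE-CAUSE column shows <none> because we did not record a change cause; the output should look roughly like this:

$ kubectl rollout history deployment/nginx-deployment

deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>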

Let us check the status of our new Pods:

$ kubectl get pods

NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-6ddd894ff-ggl9b   1/1     Running   0          52s
nginx-deployment-6ddd894ff-nds87   1/1     Running   0          46s
nginx-deployment-6ddd894ff-z7rn2   1/1     Running   0          47s

We can see that the ages of the three Pods are slightly different, due to how the update happened in steps. To verify that the Deployment was successful we can run curl again:

$ kubectl port-forward nginx-deployment-6ddd894ff-nds87 8080:80
$ curl localhost:8080

<html>
  <body>
    <h1>Hello from version 2</h1>
  </body>
</html>

It worked!
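
A nice side effect of the old ReplicaSet being kept around (scaled down to zero Pods) is that we can roll back if the new version turns out to be broken. A minimal sketch of reverting to the previous revision:

# roll back to the previous revision (in our case, back to v1)
$ kubectl rollout undo deployment/nginx-deployment

# or roll back to a specific revision from the rollout history
$ kubectl rollout undo deployment/nginx-deployment --to-revision=1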

Exploring the Deployment manifest

What we saw in the previous section was the default behavior of how an update of a Deployment progresses. You can configure this behavior through the Deployment manifest. Here is a sample Deployment manifest with the relevant parameters:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
  # control the update behavior
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 30%
      maxUnavailable: 30%

A brief explanation of the new manifest properties we have added:

  • .spec.strategy is where we configure the DeploymentStrategy
  • .spec.strategy.type can be either RollingUpdate, which is the default, or Recreate
  • .spec.strategy.rollingUpdate configures the behavior of a rolling update
    • maxSurge sets how many additional Pods can be created in the new ReplicaSet before Pods from the old ReplicaSet are terminated, expressed as a percentage of the desired replica count (or as an absolute number). So if we run 10 Pods and specify a maxSurge of 50%, during an update of our Deployment we could have at most 15 Pods, a mix of old and new Pods. The default value is 25%.
    • maxUnavailable sets how many Pods can at most be unavailable during an update of a Deployment, again as a percentage of the desired replica count (or as an absolute number). So if we run 10 Pods and specify a maxUnavailable of 50%, during an update of our Deployment we will have at least 5 Pods available, a mix of old and new Pods. The default value is 25%.

As we can see, there is not a lot we can configure. The default behavior is usually fine, and we do not have to add or edit anything in the strategy part of the manifest.
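
If you are curious about which strategy an existing Deployment is using, you can read it back from the cluster. The exact output format may vary between kubectl versions, but it should look something like this:

$ kubectl describe deployment nginx-deployment | grep -i strategy

StrategyType:           RollingUpdate
RollingUpdateStrategy:  25% max unavailable, 25% max surge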

You can perform more sophisticated (and safer) updates of Deployments using various tools and processes. We might see more of this in the future, but I want to be clear that what we have seen here is most often enough and it will work for the majority of the workloads you run.

Summary

We have gone through the process of updating a Deployment and we have seen what happens during a rolling update of our Deployment. We briefly explored what parameters we can set in our Deployment manifest to configure the rolling update behavior.

In the next article we will add a new kind of Kubernetes object to our repertoire: the Kubernetes Service. This will give us a single point of contact for a collection of Pods. It will allow us to load balance traffic between all Pods in a Deployment. It will definitely make our Deployments more useful, and it will take us one step closer to having a working application in Kubernetes!


  1. In a real scenario I would provide the index.html file to my Pod through a Volume. But we have not yet discussed Volumes in this series, so in this article I will instead bake the index.html file into the container itself. 

  2. For this to work I have already signed in to my Docker account in my terminal. If you are following along with this article you must update the image names to match your own Docker Hub account. 
