<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Damien Mathieu</title>
    <description>The latest articles on DEV Community by Damien Mathieu (@dmathieu).</description>
    <link>https://dev.to/dmathieu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F58893%2Fc2e65ebd-7673-4015-803f-973729cb140a.jpeg</url>
      <title>DEV Community: Damien Mathieu</title>
      <link>https://dev.to/dmathieu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dmathieu"/>
    <language>en</language>
    <item>
      <title>Dissecting Kubernetes Deployments</title>
      <dc:creator>Damien Mathieu</dc:creator>
      <pubDate>Thu, 22 Feb 2018 17:21:16 +0000</pubDate>
      <link>https://dev.to/heroku/dissecting-kubernetes-deployments--16ej</link>
      <guid>https://dev.to/heroku/dissecting-kubernetes-deployments--16ej</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on Heroku's &lt;a href="https://blog.heroku.com/engineering" rel="noopener noreferrer"&gt;Engineering Blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; is a container orchestration system that originated at Google, and is now being maintained by the &lt;a href="https://www.cncf.io/" rel="noopener noreferrer"&gt;Cloud Native Computing Foundation&lt;/a&gt;. In this post, I am going to dissect some Kubernetes internals—especially, &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noopener noreferrer"&gt;Deployments&lt;/a&gt; and how gradual rollouts of new containers are handled.&lt;/p&gt;

&lt;h2&gt;What Is a Deployment?&lt;/h2&gt;

&lt;p&gt;This is how the Kubernetes documentation describes Deployments:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A Deployment controller provides declarative updates for Pods and ReplicaSets.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="noopener noreferrer"&gt;Pod&lt;/a&gt; is a group of one or more containers which can be started inside a cluster. A pod started manually is not going to be very useful though, as it won't automatically be restarted if it crashes. A &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="noopener noreferrer"&gt;ReplicaSet&lt;/a&gt; ensures that a Pod specification is always running with a set number of replicas. They allow starting several instances of the same Pod and will restart them automatically if some of them were to crash. Deployments sit on top of ReplicaSets. They allow seamlessly rolling out new versions of an application.&lt;/p&gt;

&lt;p&gt;Here is an example of a rolling deploy in a basic app:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd15owr8wkwqn8bhfuxz5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd15owr8wkwqn8bhfuxz5.gif" alt="kuber-blog-post-animation" width="1024" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What we can see in this animation is a 10-Pod Deployment being rolled out, one Pod at a time. When an update is triggered, the Deployment boots a new Pod and waits until that Pod is responding to requests. Only then does it terminate one old Pod and boot the next new one. This continues until all of the old Pods are stopped and we have 10 new ones running the updated Deployment.&lt;/p&gt;
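&lt;p&gt;As a rough sketch of that loop (a toy simulation written for illustration, not the controller's actual code):&lt;/p&gt;

```go
package main

import "fmt"

// simulateRollingUpdate replaces old Pods one at a time: boot a new Pod,
// wait until it responds to requests, then terminate one old Pod.
func simulateRollingUpdate(replicas int) (steps int) {
	oldPods, newPods := replicas, 0
	for oldPods > 0 {
		newPods++ // boot a new Pod and wait until it is ready
		oldPods-- // only then terminate one old Pod
		steps++
	}
	fmt.Printf("done in %d steps: %d old Pods, %d new Pods\n", steps, oldPods, newPods)
	return steps
}

func main() {
	simulateRollingUpdate(10)
}
```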

&lt;p&gt;Let's see how that is handled under the covers.&lt;/p&gt;

&lt;h2&gt;A Trigger-Based System&lt;/h2&gt;

&lt;p&gt;Kubernetes is a trigger-based environment. When a Deployment is created or updated, its new status is stored in &lt;a href="https://github.com/coreos/etcd" rel="noopener noreferrer"&gt;etcd&lt;/a&gt;. But without a controller to act on the new object, nothing will happen.&lt;/p&gt;

&lt;p&gt;Anyone with the proper authorization on a cluster can listen for these triggers and act on them. Let's take the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="s"&gt;"log"&lt;/span&gt;
  &lt;span class="s"&gt;"os"&lt;/span&gt;
  &lt;span class="s"&gt;"path/filepath"&lt;/span&gt;
  &lt;span class="s"&gt;"reflect"&lt;/span&gt;
  &lt;span class="s"&gt;"time"&lt;/span&gt;

  &lt;span class="s"&gt;"k8s.io/api/apps/v1beta1"&lt;/span&gt;
  &lt;span class="n"&gt;metav1&lt;/span&gt; &lt;span class="s"&gt;"k8s.io/apimachinery/pkg/apis/meta/v1"&lt;/span&gt;
  &lt;span class="s"&gt;"k8s.io/apimachinery/pkg/runtime"&lt;/span&gt;
  &lt;span class="s"&gt;"k8s.io/apimachinery/pkg/watch"&lt;/span&gt;
  &lt;span class="s"&gt;"k8s.io/client-go/kubernetes"&lt;/span&gt;
  &lt;span class="s"&gt;"k8s.io/client-go/tools/cache"&lt;/span&gt;
  &lt;span class="s"&gt;"k8s.io/client-go/tools/clientcmd"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c"&gt;// doneCh will be used by the informer to allow a clean shutdown&lt;/span&gt;
  &lt;span class="c"&gt;// If the channel is closed, it communicates the informer that it needs to shutdown&lt;/span&gt;
  &lt;span class="n"&gt;doneCh&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
  &lt;span class="c"&gt;// Authenticate against the cluster&lt;/span&gt;
  &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;getClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c"&gt;// Setup the informer that will start watching for deployment triggers&lt;/span&gt;
  &lt;span class="n"&gt;informer&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewSharedIndexInformer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ListWatch&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// This method will be used by the informer to retrieve the existing list of objects&lt;/span&gt;
    &lt;span class="c"&gt;// It is used during initialization to get the current state of things&lt;/span&gt;
    &lt;span class="n"&gt;ListFunc&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="n"&gt;metav1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ListOptions&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;runtime&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AppsV1beta1&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deployments&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="c"&gt;// This method is used to watch on the triggers we wish to receive&lt;/span&gt;
    &lt;span class="n"&gt;WatchFunc&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="n"&gt;metav1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ListOptions&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;watch&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Interface&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AppsV1beta1&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deployments&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Watch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;v1beta1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deployment&lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="m"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Indexers&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt; &lt;span class="c"&gt;// We only want `Deployments`, resynced every 30 seconds with the most basic indexer&lt;/span&gt;

  &lt;span class="c"&gt;// Setup the trigger handlers that will receive triggerss&lt;/span&gt;
  &lt;span class="n"&gt;informer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AddEventHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ResourceEventHandlerFuncs&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// This method is executed when a new deployment is created&lt;/span&gt;
    &lt;span class="n"&gt;AddFunc&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;deployment&lt;/span&gt; &lt;span class="k"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Deployment created: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;deployment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;v1beta1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deployment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ObjectMeta&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="c"&gt;// This method is executed when an existing deployment is updated&lt;/span&gt;
    &lt;span class="n"&gt;UpdateFunc&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;old&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt; &lt;span class="k"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;reflect&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DeepEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;old&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Deployment updated: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;v1beta1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deployment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ObjectMeta&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

  &lt;span class="c"&gt;// Start the informer, until `doneCh` is closed&lt;/span&gt;
  &lt;span class="n"&gt;informer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doneCh&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Create a client so we're allowed to perform requests&lt;/span&gt;
&lt;span class="c"&gt;// Because of the use of `os.Getenv("HOME")`, this only works on unix environments&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;getClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;kubernetes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Clientset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;clientcmd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BuildConfigFromFlags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"HOME"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="s"&gt;".kube"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"config"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;kubernetes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewForConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you follow the comments in this code sample, you can see that we create an informer which listens for Deployment create and update triggers and logs them to &lt;code&gt;stdout&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Back to the Deployment controller. When it is &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/deployment_controller.go#L100" rel="noopener noreferrer"&gt;initialized&lt;/a&gt;, it configures a few informers to listen on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Deployment creation&lt;/li&gt;
&lt;li&gt;A Deployment update&lt;/li&gt;
&lt;li&gt;A Deployment deletion&lt;/li&gt;
&lt;li&gt;A ReplicaSet creation&lt;/li&gt;
&lt;li&gt;A ReplicaSet update&lt;/li&gt;
&lt;li&gt;A ReplicaSet deletion&lt;/li&gt;
&lt;li&gt;A Pod deletion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, those triggers let the controller handle an entire gradual rollout.&lt;/p&gt;

&lt;h2&gt;Rolling Out&lt;/h2&gt;

&lt;p&gt;For any of the mentioned triggers, the Deployment controller will perform a &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/deployment_controller.go#L561" rel="noopener noreferrer"&gt;Deployment sync&lt;/a&gt;. That method checks the Deployment's status and performs the required action based on it.&lt;/p&gt;

&lt;p&gt;Let's take the example of a new Deployment.&lt;/p&gt;

&lt;h3&gt;A Deployment Is Created&lt;/h3&gt;

&lt;p&gt;The controller receives the creation trigger and performs a sync. After performing all of its checks, it looks up the &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/deployment_controller.go#L641" rel="noopener noreferrer"&gt;Deployment strategy&lt;/a&gt; and triggers it. In our case, we're interested in the rolling update strategy, as it's the one you should use to prevent downtime.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/rolling.go#L33" rel="noopener noreferrer"&gt;&lt;code&gt;rolloutRolling&lt;/code&gt;&lt;/a&gt; method will then create a new ReplicaSet. We need a new ReplicaSet for every rollout, as we want to be able to update the Pods one at a time. If the Deployment kept the same ReplicaSet and just updated it, all Pods would be restarted at once and there would be a few minutes during which we would be unable to process requests.&lt;/p&gt;

&lt;p&gt;At this point, we have at least 2 ReplicaSets. One of them is the one we just created. The other (there can be more if we have several concurrent rollouts) is the old one. The controller will then scale the new ReplicaSet &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/rolling.go#L70" rel="noopener noreferrer"&gt;up&lt;/a&gt; and the old ones &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/rolling.go#L88" rel="noopener noreferrer"&gt;down&lt;/a&gt; accordingly.&lt;/p&gt;

&lt;p&gt;To &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/rolling.go#L70" rel="noopener noreferrer"&gt;scale up&lt;/a&gt; the new ReplicaSet, we start by looking at how many replicas the Deployment expects. If we have scaled enough, we stop there. If we need to keep scaling up, we check the &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge" rel="noopener noreferrer"&gt;max surge value&lt;/a&gt; against the number of running Pods. If too many are already running, the controller doesn't scale up, and instead waits until some old Pods have finished terminating. Otherwise, it boots the required number of new Pods.&lt;/p&gt;
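&lt;p&gt;That scale-up arithmetic can be sketched as follows (a hypothetical helper written for illustration; the controller's real logic in &lt;code&gt;rolling.go&lt;/code&gt; is more involved):&lt;/p&gt;

```go
package main

import "fmt"

// newRSReplicas computes how many replicas the new ReplicaSet may scale to,
// given the Deployment's desired count, its maxSurge setting, the total
// number of Pods currently running, and the new ReplicaSet's current size.
func newRSReplicas(desired, maxSurge, totalPods, newRSCurrent int) int {
	maxTotal := desired + maxSurge
	if totalPods >= maxTotal {
		// Too many Pods already: wait for old Pods to finish terminating.
		return newRSCurrent
	}
	// Scale up by the remaining headroom, capped at the desired count.
	scaleUp := maxTotal - totalPods
	if newRSCurrent+scaleUp > desired {
		return desired
	}
	return newRSCurrent + scaleUp
}

func main() {
	// 10 desired replicas, max surge of 2, 10 Pods running, new ReplicaSet
	// still empty: there is headroom for 2 more Pods.
	fmt.Println(newRSReplicas(10, 2, 10, 0))
}
```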

&lt;p&gt;To &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/rolling.go#L88" rel="noopener noreferrer"&gt;scale down&lt;/a&gt;, we look at how many total Pods are running, subtract the minimum number of Pods that must stay available (derived from the &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable" rel="noopener noreferrer"&gt;max unavailable&lt;/a&gt; setting), then subtract any Pods that haven't fully booted. Based on that, we know how many old Pods can safely be terminated, and terminate that many of them.&lt;/p&gt;
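&lt;p&gt;A hypothetical helper, simplified from the controller's logic, sketches that scale-down arithmetic:&lt;/p&gt;

```go
package main

import "fmt"

// maxScaleDown computes how many old Pods can be terminated right now.
// desired is the Deployment's replica count, maxUnavailable its setting,
// totalPods the number of Pods across all ReplicaSets, and notReady the
// number of Pods that haven't finished booting.
func maxScaleDown(desired, maxUnavailable, totalPods, notReady int) int {
	minAvailable := desired - maxUnavailable
	cleanup := totalPods - minAvailable - notReady
	if cleanup > 0 {
		return cleanup
	}
	return 0
}

func main() {
	// 10 desired, maxUnavailable of 1: at least 9 Pods must stay available.
	// 12 Pods exist and 2 are still booting, so 12 - 9 - 2 = 1 can go.
	fmt.Println(maxScaleDown(10, 1, 12, 2))
}
```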

&lt;p&gt;At this point, the controller has finished for the current trigger. The deployment itself is not over though.&lt;/p&gt;

&lt;h3&gt;A ReplicaSet Is Updated&lt;/h3&gt;

&lt;p&gt;Because the new Deployment just booted new Pods, we will receive new triggers. Specifically, when a Pod goes up or down, the ReplicaSet will send an update trigger. By listening on ReplicaSet updates, we can look for Pods that have finished booting or terminating.&lt;/p&gt;

&lt;p&gt;When that happens, we do the sync dance all over again, looking for Pods to shut down and others to boot based on the configuration, then wait for the next update.&lt;/p&gt;

&lt;h3&gt;A ReplicaSet Is Deleted&lt;/h3&gt;

&lt;p&gt;The ReplicaSet deletion trigger is used to make sure all Deployments are always properly running. If a ReplicaSet is deleted and the Deployment didn't expect it, we need to perform a sync again to create a new one and bring the Pods back up.&lt;/p&gt;

&lt;p&gt;This means if you want to quickly restart your app (with downtime), you can delete a Deployment's ReplicaSet safely. A new one will be created right away.&lt;/p&gt;

&lt;h3&gt;A Pod Is Deleted&lt;/h3&gt;

&lt;p&gt;Deployments allow setting a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds" rel="noopener noreferrer"&gt;&lt;code&gt;ProgressDeadlineSeconds&lt;/code&gt;&lt;/a&gt; option. If the Deployment hasn't progressed (any Pod booted or stopped) after the set number of seconds, it will be marked as failed. This typically happens when Pods enter a crash loop. When that happens, we will never receive the ReplicaSet update, as the Pod never goes online.&lt;/p&gt;

&lt;p&gt;However, we will receive Pod deletion updates—one for each crash loop retry. By syncing here, we can check how long it's been since the last update and reliably mark the Deployment as failed after a while.&lt;/p&gt;

&lt;h3&gt;The Deployment Is Finished&lt;/h3&gt;

&lt;p&gt;If we consider the Deployment to be &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/rolling.go#L60" rel="noopener noreferrer"&gt;complete&lt;/a&gt;, we then &lt;a href="https://github.com/kubernetes/kubernetes/blob/eff9f75f707a0ae9af56a55d08292eb87a632b97/pkg/controller/deployment/sync.go#L525" rel="noopener noreferrer"&gt;clean things&lt;br&gt;
up&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At cleanup, we delete any ReplicaSet that has become too old. We keep a set number of old ReplicaSets (with no Pods running) so we can roll back a broken Deployment.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: ReplicaSets only hold a Pod template. So if you are always using the &lt;code&gt;:latest&lt;/code&gt; tag for your Pod's image (or using the default one), you won't be rolling back anything. To get proper rollbacks, you need to change the image tag every time the container is rebuilt. For example, you could tag containers with the &lt;code&gt;git&lt;/code&gt; commit SHA they were built from.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment" rel="noopener noreferrer"&gt;roll back a Deployment&lt;/a&gt; with the &lt;code&gt;kubectl rollout undo&lt;/code&gt; command.&lt;/p&gt;

&lt;h2&gt;To Infinity and Beyond&lt;/h2&gt;

&lt;p&gt;While Kubernetes is generally seen as a complex tool, it is not difficult to dissect its parts to understand how they work. And its generic design is a strength, making the system extremely modular.&lt;/p&gt;

&lt;p&gt;For example, as we have seen in this post, it is very easy to listen for Deployment triggers and implement your own logic on top of them. Or to reimplement them entirely in your own controller (which would probably be a bad idea). This trigger-based design also keeps things straightforward: a controller doesn't need to poll for updates on the objects it owns; it just listens for the appropriate triggers and performs the appropriate action.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>deployment</category>
    </item>
  </channel>
</rss>
