<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pratik Jagrut</title>
    <description>The latest articles on DEV Community by Pratik Jagrut (@pratikjagrut).</description>
    <link>https://dev.to/pratikjagrut</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F538252%2Fd870764d-ebc6-4bda-a91c-e819b55962a5.jpeg</url>
      <title>DEV Community: Pratik Jagrut</title>
      <link>https://dev.to/pratikjagrut</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pratikjagrut"/>
    <language>en</language>
    <item>
      <title>Kubernetes: Deployments (Part-1)</title>
      <dc:creator>Pratik Jagrut</dc:creator>
      <pubDate>Tue, 08 Oct 2024 20:08:24 +0000</pubDate>
      <link>https://dev.to/pratikjagrut/kubernetes-deployments-part-1-785</link>
      <guid>https://dev.to/pratikjagrut/kubernetes-deployments-part-1-785</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the previous blog, we explored ReplicaSets and their importance in maintaining multiple identical pods for high application availability. However, in Kubernetes, we typically don't create ReplicaSets directly. Instead, we create higher-level objects such as Deployments, DaemonSets, or StatefulSets, which in turn manage ReplicaSets for us. In this blog, we'll delve into Deployments.&lt;/p&gt;

&lt;p&gt;Deployments are a crucial higher-level resource in Kubernetes, designed to manage the deployment and scaling of ReplicaSets, ensuring applications are always running in the desired state. We describe the desired state in the deployment configuration, and then the Deployment controller works to make the current state match the desired state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17skftn9v9bxgobw4097.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17skftn9v9bxgobw4097.png" alt="Deployment image" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Difference Between Deployments and ReplicaSets
&lt;/h2&gt;

&lt;p&gt;Deployments and ReplicaSets play different roles in Kubernetes. A ReplicaSet ensures a specified number of pod replicas are always running, but it lacks advanced update features. Deployments build on ReplicaSets by adding rolling updates and rollbacks. While a ReplicaSet keeps the right number of pods running, a Deployment manages the application lifecycle: updates, rollbacks, and scaling without downtime, making it the better choice for managing stateless applications in Kubernetes.&lt;/p&gt;
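&lt;p&gt;For example, once an application runs under a Deployment, its rollout can be driven directly with &lt;code&gt;kubectl rollout&lt;/code&gt; (shown here against the &lt;code&gt;nginx-deployment&lt;/code&gt; created later in this post); a bare ReplicaSet offers no equivalent:&lt;/p&gt;

```shell
# Watch an in-progress rollout until it completes
kubectl rollout status deployment/nginx-deployment

# Inspect the revision history recorded by the Deployment
kubectl rollout history deployment/nginx-deployment

# Roll back to the previous revision
kubectl rollout undo deployment/nginx-deployment
```

These commands require a running cluster with the Deployment already applied.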

&lt;h2&gt;
  
  
  Configuring a Deployment
&lt;/h2&gt;

&lt;p&gt;Configuring a Deployment involves defining a YAML file that specifies the desired state of the application. This includes details such as the application's image, the number of replicas, any necessary secrets and ConfigMaps, and a strategy for updating the application, ensuring smooth transitions with minimal downtime. Once the configuration file is applied, the Deployment controller works to ensure that the actual state matches the desired state.&lt;/p&gt;

&lt;p&gt;Below is a sample YAML configuration for a Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;maxSurge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Breakdown of the YAML File
&lt;/h2&gt;

&lt;p&gt;When writing any object in Kubernetes, you need to include certain required fields: &lt;code&gt;apiVersion&lt;/code&gt;, &lt;code&gt;kind&lt;/code&gt;, &lt;code&gt;metadata&lt;/code&gt;, and &lt;code&gt;spec&lt;/code&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This field specifies the version of the Kubernetes API that your object adheres to, ensuring compatibility with your Kubernetes cluster. In this case, it uses &lt;code&gt;apps/v1&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This field defines the type of Kubernetes object being created. In our YAML file, it indicates that we are creating a &lt;code&gt;Deployment&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This section provides essential information about the Deployment:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;name:&lt;/strong&gt; This uniquely identifies the Deployment within its namespace (&lt;code&gt;nginx-deployment&lt;/code&gt;). This is the only required field in &lt;code&gt;metadata&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;labels:&lt;/strong&gt; Key-value pairs used to organize and select resources (&lt;code&gt;app: nginx&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
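&lt;p&gt;Labels are what other commands and resources use to find these objects; for example, you can filter by label with the &lt;code&gt;-l&lt;/code&gt; selector flag:&lt;/p&gt;

```shell
# List only the Deployments carrying the app=nginx label
kubectl get deployments -l app=nginx

# The same selector works for the pods the Deployment creates
kubectl get pods -l app=nginx
```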

&lt;ol start="4"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;maxSurge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The &lt;code&gt;spec&lt;/code&gt; section defines the desired state of the Deployment, including its pods and their configurations:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;replicas:&lt;/strong&gt; Specifies the number of pod replicas that the Deployment should maintain (3 in this case).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;selector:&lt;/strong&gt; Defines how the Deployment identifies the pods it manages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;matchLabels:&lt;/strong&gt; A set of key-value pairs used to match the pods (&lt;code&gt;app: nginx&lt;/code&gt;). These must match the labels in &lt;code&gt;template.metadata.labels&lt;/code&gt;; otherwise the API server rejects the Deployment.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;template:&lt;/strong&gt; Describes the pod's configuration to be created.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;metadata:&lt;/strong&gt; Includes labels to be applied to the pods.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec:&lt;/strong&gt; Defines the pod's configuration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;containers:&lt;/strong&gt; Lists the containers within the pod.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;name:&lt;/strong&gt; Identifies the container (&lt;code&gt;nginx&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;image:&lt;/strong&gt; Specifies the Docker image to use (&lt;code&gt;nginx:latest&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ports:&lt;/strong&gt; Indicates which ports should be exposed by the container (&lt;code&gt;containerPort: 80&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;strategy:&lt;/strong&gt; Defines the strategy for updating the Deployment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;type:&lt;/strong&gt; Specifies the update strategy. In this case, it is &lt;code&gt;RollingUpdate&lt;/code&gt;, which replaces pods incrementally so the application stays available during the update.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;rollingUpdate:&lt;/strong&gt; Specifies parameters for the rolling update strategy: &lt;code&gt;maxUnavailable&lt;/code&gt; (the maximum number of pods that can be unavailable during the update) and &lt;code&gt;maxSurge&lt;/code&gt; (the maximum number of extra pods that can be created above the desired replica count).&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
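&lt;p&gt;The alternative to &lt;code&gt;RollingUpdate&lt;/code&gt; is the &lt;code&gt;Recreate&lt;/code&gt; strategy, which terminates all existing pods before starting new ones. It causes downtime, but guarantees that old and new versions never run side by side:&lt;/p&gt;

```yaml
# Deployment update strategy fragment: terminate all old pods first,
# then create the new ones (acceptable when brief downtime is fine)
strategy:
  type: Recreate
```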

&lt;h2&gt;
  
  
  Creating a Deployment
&lt;/h2&gt;

&lt;p&gt;To create a Deployment using the above YAML configuration, save the configuration to a file named &lt;code&gt;nginx-deployment.yaml&lt;/code&gt; and apply it to the Kubernetes cluster using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Managing a Deployment
&lt;/h2&gt;

&lt;p&gt;You can list the Deployments using the &lt;code&gt;kubectl get deployments&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get deployments
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           10s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use the &lt;code&gt;kubectl describe&lt;/code&gt; command to check the state of the Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl describe deployments.apps/nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Mon, 15 Jul 2024 20:04:25 +0530
Labels:                 &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  &amp;lt;none&amp;gt;
    Mounts:       &amp;lt;none&amp;gt;
  Volumes:        &amp;lt;none&amp;gt;
Conditions:
  Type           Status  Reason
  &lt;span class="nt"&gt;----&lt;/span&gt;           &lt;span class="nt"&gt;------&lt;/span&gt;  &lt;span class="nt"&gt;------&lt;/span&gt;
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  &amp;lt;none&amp;gt;
NewReplicaSet:   nginx-deployment-57d84f57dc &lt;span class="o"&gt;(&lt;/span&gt;3/3 replicas created&lt;span class="o"&gt;)&lt;/span&gt;
Events:
  Type    Reason             Age   From                   Message
  &lt;span class="nt"&gt;----&lt;/span&gt;    &lt;span class="nt"&gt;------&lt;/span&gt;             &lt;span class="nt"&gt;----&lt;/span&gt;  &lt;span class="nt"&gt;----&lt;/span&gt;                   &lt;span class="nt"&gt;-------&lt;/span&gt;
  Normal  ScalingReplicaSet  54s   deployment-controller  Scaled up replica &lt;span class="nb"&gt;set &lt;/span&gt;nginx-deployment-57d84f57dc to 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the ReplicaSets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-57d84f57dc   3         3         3       93s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-57d84f57dc-w5n9r   1/1     Running   0          2m11s
nginx-deployment-57d84f57dc-8mf7x   1/1     Running   0          2m11s
nginx-deployment-57d84f57dc-gv24t   1/1     Running   0          2m11s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Scaling the Deployment
&lt;/h3&gt;

&lt;p&gt;You can scale the Deployment to a different number of replicas using the &lt;code&gt;kubectl scale&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl scale &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5 deployment nginx-deployment
deployment.apps/nginx-deployment scaled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   5/5     5            5           4m11s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-57d84f57dc-w5n9r   1/1     Running   0          4m24s
nginx-deployment-57d84f57dc-8mf7x   1/1     Running   0          4m24s
nginx-deployment-57d84f57dc-gv24t   1/1     Running   0          4m24s
nginx-deployment-57d84f57dc-9gjph   1/1     Running   0          27s
nginx-deployment-57d84f57dc-t47qd   1/1     Running   0          27s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
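&lt;p&gt;Note that &lt;code&gt;kubectl scale&lt;/code&gt; is an imperative change: if the YAML file still says &lt;code&gt;replicas: 3&lt;/code&gt;, re-applying it will scale the Deployment back down. To keep the manifest as the source of truth, update &lt;code&gt;replicas&lt;/code&gt; in the file and re-apply it, or patch the live object:&lt;/p&gt;

```shell
# Patch the live Deployment's replica count directly
kubectl patch deployment nginx-deployment -p '{"spec":{"replicas":5}}'
```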



&lt;h3&gt;
  
  
  Deleting the Deployment
&lt;/h3&gt;

&lt;p&gt;To delete the Deployment, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete deployment nginx-deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deleting the Deployment also terminates all the pods it manages. If you want to keep the pods running after deleting the Deployment, use the &lt;code&gt;--cascade=orphan&lt;/code&gt; flag, which leaves the dependent ReplicaSet (and therefore its pods) in place, no longer managed by a Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete deployment nginx-deployment &lt;span class="nt"&gt;--cascade&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;orphan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, Deployments play a crucial role in managing the desired state of your applications by ensuring a specified number of pod replicas are running at all times. This not only enhances the availability and reliability of your applications but also simplifies the management of pods in a Kubernetes environment.&lt;/p&gt;

&lt;p&gt;By defining a Deployment through a YAML file, you can easily control the number of replicas, monitor their status, and scale them as needed, ensuring your applications remain resilient and performant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thank you for reading this blog; your interest is greatly appreciated. I hope this information helps you on your Kubernetes journey. In the next part, we'll explore updating and rolling back Kubernetes deployments.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes: ReplicaSet</title>
      <dc:creator>Pratik Jagrut</dc:creator>
      <pubDate>Sun, 07 Jul 2024 12:59:14 +0000</pubDate>
      <link>https://dev.to/pratikjagrut/replicaset-54gp</link>
      <guid>https://dev.to/pratikjagrut/replicaset-54gp</guid>
      <description>&lt;p&gt;In the last blog, we explored Pods and how they encapsulate containers to run workloads on Kubernetes. While Pods provide useful features for running workloads, they also have inherent issues due to their ephemeral nature—they can be terminated at any time. When this happens, the user application will no longer be available.&lt;/p&gt;

&lt;p&gt;To avoid such situations and ensure the user application is always available, Kubernetes uses &lt;code&gt;ReplicaSets (RS)&lt;/code&gt;. A ReplicaSet creates multiple identical replicas of a pod and ensures a specific number of pods are running at all times—neither fewer nor more.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;ReplicaSet controller&lt;/code&gt; continuously monitors the pods to ensure that the number of available pods always equals the number of desired pods. If a pod fails, the ReplicaSet automatically creates replacement pods. Conversely, if new pods with the same label are added and there are more pods than needed, the ReplicaSet stops the extra pods to bring the count back down.&lt;/p&gt;
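&lt;p&gt;This self-healing behaviour is easy to observe: delete one of the pods managed by the ReplicaSet defined below and a replacement appears almost immediately (the pod name here is illustrative):&lt;/p&gt;

```shell
# Delete one managed pod; the ReplicaSet controller notices the
# shortfall and creates a replacement to restore the desired count
kubectl delete pod nginx-replicaset-abc12   # illustrative pod name
kubectl get pods -l app=nginx-pods          # a new pod is already starting
```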

&lt;h2&gt;
  
  
  Configuring a ReplicaSet
&lt;/h2&gt;

&lt;p&gt;Configuring a ReplicaSet involves defining a YAML file that specifies the desired state for the ReplicaSet. This YAML file includes crucial details such as the number of replicas, the selector to identify the pods managed by the ReplicaSet, and the pod template that defines the pods to be created.&lt;/p&gt;

&lt;p&gt;Below is a sample YAML configuration for a ReplicaSet using an Nginx container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReplicaSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-replicaset&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-rs&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-pods&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-pods&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Breakdown of the YAML File
&lt;/h3&gt;

&lt;p&gt;When writing any object in Kubernetes, you need to include certain required fields: &lt;code&gt;apiVersion&lt;/code&gt;, &lt;code&gt;kind&lt;/code&gt;, &lt;code&gt;metadata&lt;/code&gt;, and &lt;code&gt;spec&lt;/code&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;apiVersion:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This field specifies the version of the Kubernetes API that your object adheres to, ensuring compatibility with your Kubernetes cluster. In this case, it uses &lt;code&gt;apps/v1&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kind:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReplicaSet&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This field defines the type of Kubernetes object being created. In our YAML file, it indicates that we are creating a &lt;code&gt;ReplicaSet&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;metadata:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-replicaset&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-rs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This section provides essential information about the ReplicaSet:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;name:&lt;/strong&gt; This uniquely identifies the ReplicaSet within its namespace (&lt;code&gt;nginx-replicaset&lt;/code&gt;). This is the only field in &lt;code&gt;metadata&lt;/code&gt; that is required.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;namespace:&lt;/strong&gt; Assigns a specific namespace for resource isolation (optional).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;labels:&lt;/strong&gt; Key-value pairs used to organize and select resources (&lt;code&gt;app: nginx-rs&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;annotations:&lt;/strong&gt; These key-value pairs offer additional details about the ReplicaSet, useful for documentation, debugging, or monitoring (optional).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ownerReferences:&lt;/strong&gt; Specifies the controller that owns this object, establishing a relationship hierarchy among Kubernetes resources (optional).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-pods&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-pods&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The &lt;code&gt;spec&lt;/code&gt; section defines the desired state of the ReplicaSet, including its pods and their configurations:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;replicas:&lt;/strong&gt; Specifies the number of pod replicas that the ReplicaSet should maintain (3 in this case).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;selector:&lt;/strong&gt; Defines how the ReplicaSet identifies the pods it manages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;matchLabels:&lt;/strong&gt; A set of key-value pairs used to match the pods (&lt;code&gt;app: nginx-pods&lt;/code&gt;). These must match the labels in &lt;code&gt;template.metadata.labels&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;template:&lt;/strong&gt; Describes the configuration of the pods to be created.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;metadata:&lt;/strong&gt; Includes labels to be applied to the pods.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;spec:&lt;/strong&gt; Defines the pod's configuration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;containers:&lt;/strong&gt; Lists the containers within the pod.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;name:&lt;/strong&gt; Identifies the container (&lt;code&gt;nginx&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;image:&lt;/strong&gt; Specifies the Docker image to use (&lt;code&gt;nginx:latest&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ports:&lt;/strong&gt; Indicates which ports the container exposes (&lt;code&gt;containerPort: 80&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additional optional fields for advanced configurations within the &lt;code&gt;spec&lt;/code&gt; section include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;resources:&lt;/strong&gt; Manages the pod's resource requests and limits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;volumeMounts:&lt;/strong&gt; Specifies volumes to be mounted into the container's filesystem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;env:&lt;/strong&gt; Defines environment variables accessible to the container.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;volumes:&lt;/strong&gt; Describes persistent storage volumes available to the pod.&lt;/li&gt;
&lt;/ul&gt;
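&lt;p&gt;As a sketch, these optional fields slot into the pod template's container spec as shown below (the resource values and the environment variable are hypothetical, chosen only for illustration):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pods
  template:
    metadata:
      labels:
        app: nginx-pods
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        # Hypothetical resource requests and limits
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        # Hypothetical environment variable
        env:
        - name: LOG_LEVEL
          value: "info"
```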

&lt;h2&gt;
  
  
  Creating a ReplicaSet
&lt;/h2&gt;

&lt;p&gt;To create a ReplicaSet using the above YAML configuration, save the configuration to a file named &lt;code&gt;nginx-replicaset.yaml&lt;/code&gt; and apply it to the Kubernetes cluster using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-replicaset.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Managing ReplicaSet
&lt;/h2&gt;

&lt;p&gt;You can list ReplicaSets using the &lt;code&gt;kubectl get replicasets&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl get replicasets
NAME               DESIRED   CURRENT   READY   AGE
nginx-replicaset   3         3         3       2m17s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use the &lt;code&gt;kubectl describe&lt;/code&gt; command to check the state of the ReplicaSet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl describe replicasets.apps/nginx-replicaset
Name:         nginx-replicaset
Namespace:    default
Selector:     &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx-pods
Labels:       &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx-rs
Annotations:  &amp;lt;none&amp;gt;
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx-pods
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  &amp;lt;none&amp;gt;
    Mounts:       &amp;lt;none&amp;gt;
  Volumes:        &amp;lt;none&amp;gt;
Events:
  Type    Reason            Age   From                   Message
  &lt;span class="nt"&gt;----&lt;/span&gt;    &lt;span class="nt"&gt;------&lt;/span&gt;            &lt;span class="nt"&gt;----&lt;/span&gt;  &lt;span class="nt"&gt;----&lt;/span&gt;                   &lt;span class="nt"&gt;-------&lt;/span&gt;
  Normal  SuccessfulCreate  13s   replicaset-controller  Created pod: nginx-replicaset-rgxzx
  Normal  SuccessfulCreate  13s   replicaset-controller  Created pod: nginx-replicaset-prddh
  Normal  SuccessfulCreate  13s   replicaset-controller  Created pod: nginx-replicaset-rwvpn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can list the pods using the &lt;code&gt;kubectl get pods&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-replicaset-xqcjm   1/1     Running   0          10s
nginx-replicaset-pzzhh   1/1     Running   0          10s
nginx-replicaset-k4lp4   1/1     Running   0          10s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Scaling the ReplicaSet
&lt;/h3&gt;

&lt;p&gt;You can scale the ReplicaSet to a different number of replicas using the &lt;code&gt;kubectl scale&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl scale &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5 replicaset nginx-replicaset
replicaset.apps/nginx-replicaset scaled

❯ kubectl get replicasets
NAME               DESIRED   CURRENT   READY   AGE
nginx-replicaset   5         5         3       17m

❯ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-replicaset-xqcjm   1/1     Running   0          17m
nginx-replicaset-pzzhh   1/1     Running   0          17m
nginx-replicaset-k4lp4   1/1     Running   0          17m
nginx-replicaset-79d96   1/1     Running   0          16s
nginx-replicaset-sdb5g   1/1     Running   0          16s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Updating the ReplicaSet
&lt;/h3&gt;

&lt;p&gt;To update the ReplicaSet, such as changing the container image, you can modify the YAML file and apply the changes using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-replicaset.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, use the &lt;code&gt;kubectl set image&lt;/code&gt; command to update the image directly. Note that, unlike a Deployment, a ReplicaSet does not restart existing pods when its template changes; the new image applies only to pods created afterwards:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;set &lt;/span&gt;image replicaset/nginx-replicaset &lt;span class="nv"&gt;nginx&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx:1.19
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deleting the ReplicaSet
&lt;/h3&gt;

&lt;p&gt;To delete the ReplicaSet, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete rs nginx-replicaset
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deleting the ReplicaSet will terminate all the pods it manages. If you want to keep the pods running after deleting the ReplicaSet, use the &lt;code&gt;--cascade=orphan&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete rs nginx-replicaset &lt;span class="nt"&gt;--cascade&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;orphan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Scenarios Where ReplicaSets Are Particularly Useful
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Availability:&lt;/strong&gt; ReplicaSets ensure that a specified number of pod replicas are always running, which is crucial for applications requiring high availability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load Balancing:&lt;/strong&gt; By maintaining multiple replicas of a pod, ReplicaSets help distribute the load evenly across all replicas, improving performance and reliability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fault Tolerance:&lt;/strong&gt; If a pod fails, the ReplicaSet automatically replaces it, ensuring continuous availability of the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rolling Updates:&lt;/strong&gt; ReplicaSets are the building blocks that Deployments use to perform rolling updates, incrementally replacing old pods with new ones without downtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; Easily scale the number of pod replicas up or down based on demand, ensuring efficient use of resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, ReplicaSets play a crucial role in maintaining the desired state of your applications by ensuring a specified number of pod replicas are running at all times. This not only enhances the availability and reliability of your applications but also simplifies the management of pods in a Kubernetes environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of Using ReplicaSets:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Reliability:&lt;/strong&gt; Ensures continuous application availability by maintaining multiple pod replicas.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved Uptime:&lt;/strong&gt; Automatically replaces failed pods to maintain the desired number of running pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified Scaling:&lt;/strong&gt; Allows easy scaling of applications by adjusting the number of replicas.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consistent Performance:&lt;/strong&gt; Distributes the load evenly across replicas, maintaining application performance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By defining a ReplicaSet through a YAML file, you can easily control the number of replicas, monitor their status, and scale them as needed, ensuring your applications remain resilient and performant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thank you for reading this blog; your interest is greatly appreciated. I hope this information helps you on your Kubernetes journey. In the next blog, we'll explore Kubernetes deployments.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>containers</category>
    </item>
    <item>
      <title>Kubernetes Pod 101</title>
      <dc:creator>Pratik Jagrut</dc:creator>
      <pubDate>Thu, 27 Jun 2024 15:26:14 +0000</pubDate>
      <link>https://dev.to/pratikjagrut/kubernetes-pod-101-29ee</link>
      <guid>https://dev.to/pratikjagrut/kubernetes-pod-101-29ee</guid>
      <description>&lt;p&gt;With Kubernetes, our main goal is to run our application in a container. However, Kubernetes does not run the container directly on the cluster. Instead, it creates a Pod that encapsulates the container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54p21ai5oqrszbpb633s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54p21ai5oqrszbpb633s.png" alt="Pod" width="584" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;A Pod is the smallest object that you create and manage in Kubernetes.&lt;/code&gt; A Pod consists of one or more containers that share storage and network resources, all running within a shared context. This shared context includes Linux namespaces, cgroups, and other isolation mechanisms, similar to those used for individual containers.&lt;/p&gt;

&lt;p&gt;In a Kubernetes cluster, Pods use two models to run containers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;One-container-per-Pod model&lt;/em&gt;&lt;/strong&gt;: This is the common use case in Kubernetes. A Pod acts as a wrapper for a container, with Kubernetes managing the Pod instead of the individual container. &lt;em&gt;Refer to diagram POD-A.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Multi-containers Pod model:&lt;/em&gt;&lt;/strong&gt; Pods can run multiple containers that work closely together. These Pods hold applications made up of several containers that need to share resources and work closely. These containers operate as a single unit within the Pod. &lt;em&gt;Refer to diagram POD-B.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq80pqoaxc6bm2icg2t41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq80pqoaxc6bm2icg2t41.png" alt="pod" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Anatomy of a Pod
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Containers
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Main Container&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Primary Role:&lt;/strong&gt; This is the application's primary container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; If you have a web application, the main container will run the web server that serves your application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Sidecar Containers&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Supporting Role:&lt;/strong&gt; These auxiliary containers support the main container, often used for logging, monitoring, or proxying tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; For the same web application, a sidecar container might handle logging by collecting and storing log data generated by the web server.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Storage (Volumes)
&lt;/h3&gt;

&lt;p&gt;Pods can include storage resources known as volumes, which enable data persistence across container restarts. Volumes in a Pod are shared among all containers in that Pod, allowing for data exchange between them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Types of Volumes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;emptyDir:&lt;/strong&gt; A temporary directory that is created when a Pod is assigned to a node and deleted when the Pod is removed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hostPath:&lt;/strong&gt; Maps a file or directory from the host node’s filesystem into a Pod.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;persistentVolumeClaim:&lt;/strong&gt; A request for storage by a user that binds to a PersistentVolume (PV) in the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;configMap:&lt;/strong&gt; Injects configuration data into the Pod as files, environment variables, or command-line arguments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;secret:&lt;/strong&gt; Used to store sensitive data such as passwords, OAuth tokens, and SSH keys.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
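&lt;p&gt;As a sketch of shared storage, the following hypothetical Pod mounts a single &lt;code&gt;emptyDir&lt;/code&gt; volume into two containers, letting the second container read the file the first one writes (all names and commands here are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod   # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox
    # Appends a timestamp to a shared file every 5 seconds
    command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    # Streams the file written by the other container
    command: ["sh", "-c", "tail -f /data/out.log"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}
```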

&lt;h3&gt;
  
  
  Network
&lt;/h3&gt;

&lt;p&gt;Each Pod is assigned a &lt;code&gt;unique IP address&lt;/code&gt;. Containers within the same Pod share the network namespace, which means they can communicate with each other using &lt;code&gt;localhost&lt;/code&gt;. Pods can communicate with each other using their assigned IP addresses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pod IP:&lt;/strong&gt; A unique IP address assigned to each Pod.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DNS:&lt;/strong&gt; Kubernetes automatically assigns DNS names to Pods and services, facilitating network communication within the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
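&lt;p&gt;Because containers in a Pod share a network namespace, a sidecar can reach the main container over &lt;code&gt;localhost&lt;/code&gt; without any Service in between. A hypothetical sketch (container names and the probe loop are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo   # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: probe
    image: curlimages/curl
    # Reaches the nginx container through the shared network namespace
    command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null; sleep 10; done"]
```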

&lt;h3&gt;
  
  
  Pod Lifecycle
&lt;/h3&gt;

&lt;p&gt;Like individual application containers, Pods are considered to be relatively temporary (rather than permanent) entities. Understanding the lifecycle of a Pod is crucial for effective management and troubleshooting.&lt;/p&gt;

&lt;p&gt;Pods can be in one of several phases during their lifecycle. A Pod's &lt;code&gt;status&lt;/code&gt; field is a &lt;code&gt;PodStatus&lt;/code&gt; object, which has a &lt;code&gt;phase&lt;/code&gt; field. The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pending:&lt;/strong&gt; The Pod has been accepted by the Kubernetes system, but one or more of its containers has not been created yet. This includes time spent waiting to be scheduled and downloading container images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Running:&lt;/strong&gt; The Pod has been bound to a node, and all of the containers have been created. At least one container is still running or is in the process of starting or restarting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Succeeded:&lt;/strong&gt; All containers in the Pod have terminated successfully, and the Pod will not be restarted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Failed:&lt;/strong&gt; All containers in the Pod have terminated, and at least one container has terminated in failure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unknown:&lt;/strong&gt; The state of the Pod cannot be obtained, typically due to an error in communicating with the node where the Pod resides.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pod Conditions
&lt;/h3&gt;

&lt;p&gt;Pods have a set of conditions that describe their current state. These conditions are used to diagnose and troubleshoot the status of Pods.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PodScheduled:&lt;/strong&gt; Indicates whether the Pod has been scheduled to a node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Initialized:&lt;/strong&gt; All init containers have been completed successfully.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ready:&lt;/strong&gt; The Pod can serve requests and should be added to the load-balancer pools of all matching Services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ContainersReady:&lt;/strong&gt; All containers in the Pod are ready.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PodReadyToStartContainers&lt;/strong&gt;: (beta feature; enabled by default) The Pod sandbox has been successfully created and networking configured.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pod creation
&lt;/h2&gt;

&lt;p&gt;A Pod can be created using two methods. The first method is by using the &lt;code&gt;kubectl run&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl run &lt;span class="nt"&gt;--image&lt;/span&gt; nginx nginx-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second method is declarative. In this approach, you create a Pod configuration file in YAML and apply it using the &lt;code&gt;kubectl create&lt;/code&gt; or &lt;code&gt;kubectl apply&lt;/code&gt; command. This method is widely used because it allows you to manage multiple versions of an application easily.&lt;/p&gt;

&lt;p&gt;Create a configuration file named &lt;code&gt;nginx-pod.yaml&lt;/code&gt; with the following content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-pod&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can list the pods using the &lt;code&gt;kubectl get pods&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ kubectl get pods
NAME        READY   STATUS    RESTARTS      AGE
nginx-pod   1/1     Running   0             5s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Let's break down the definition of a Pod in Kubernetes
&lt;/h2&gt;

&lt;p&gt;When writing any object in Kubernetes, you need to include certain required fields: &lt;code&gt;apiVersion&lt;/code&gt;, &lt;code&gt;kind&lt;/code&gt;, &lt;code&gt;metadata&lt;/code&gt;, and &lt;code&gt;spec&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  apiVersion
&lt;/h3&gt;

&lt;p&gt;This field specifies the version of the Kubernetes API that your object adheres to, ensuring compatibility with your Kubernetes cluster (e.g., &lt;code&gt;v1&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  kind
&lt;/h3&gt;

&lt;p&gt;This field defines the type of Kubernetes object being created. In our YAML file, it indicates that we are creating a Pod.&lt;/p&gt;

&lt;h3&gt;
  
  
  metadata
&lt;/h3&gt;

&lt;p&gt;This section provides essential information about the Pod:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;name&lt;/code&gt;: This uniquely identifies the Pod within its namespace (e.g., &lt;code&gt;nginx-pod&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;namespace&lt;/code&gt;: Assigns a specific namespace to the Pod for resource isolation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;labels&lt;/code&gt;: These are key-value pairs used to organize and select resources (e.g., &lt;code&gt;app: nginx&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;annotations&lt;/code&gt;: These key-value pairs offer additional details about the Pod, useful for documentation, debugging, or monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ownerReferences&lt;/code&gt;: Specifies the controller managing the Pod, establishing a relationship hierarchy among Kubernetes resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  spec
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;spec&lt;/code&gt; section defines the desired state of the Pod, including its containers and their configurations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;containers&lt;/code&gt;: This list defines each container within the Pod.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;name&lt;/code&gt;: Identifies the container (e.g., &lt;code&gt;nginx-container&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;image&lt;/code&gt;: Specifies the Docker image to use (e.g., &lt;code&gt;nginx:latest&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ports&lt;/code&gt;: Indicates which ports should be exposed by the container (e.g., port &lt;code&gt;80&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Additional Optional Fields&lt;/strong&gt;: For more advanced setups, you can include additional fields within the &lt;code&gt;spec&lt;/code&gt; section:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;resources&lt;/code&gt;: Manages the Pod's resource requests and limits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;volumeMounts&lt;/code&gt;: Specifies volumes to be mounted into the container's filesystem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;env&lt;/code&gt;: Defines environment variables accessible to the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;volumes&lt;/code&gt;: Describes persistent storage volumes available to the Pod.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
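&lt;p&gt;For example, environment variables and resource limits might be added to the container spec like this (the variable name and values are purely illustrative):&lt;/p&gt;

```yaml
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
    # Hypothetical environment variable
    env:
    - name: NGINX_PORT
      value: "80"
    # Hypothetical resource limits
    resources:
      limits:
        cpu: 250m
        memory: 256Mi
```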

&lt;h2&gt;
  
  
  Static pods
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, Static Pods offer a way to directly manage Pods on a node without the need for the Kubernetes control plane. Unlike regular Pods that are managed by the Kubernetes API server, Static Pods are managed directly by the Kubelet daemon on a specific node.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Static Pods Work
&lt;/h3&gt;

&lt;p&gt;Static Pods are defined by creating Pod manifest files on the node itself. These manifest files are usually located in a directory monitored by the Kubelet, such as &lt;code&gt;/etc/kubernetes/manifests&lt;/code&gt;, or a directory specified in the Kubelet's configuration (&lt;code&gt;kubelet.conf&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Characteristics of Static Pods
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Node-Specific Management&lt;/strong&gt;: Each node runs its own instance of the Kubelet, which monitors a designated directory for Pod manifests. When a manifest file is detected or updated, the Kubelet creates, updates, or deletes the corresponding Pod on that node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Kubernetes API Interaction&lt;/strong&gt;: Unlike regular Pods, Static Pods are not managed via the API server. The Kubelet creates a read-only mirror Pod for each Static Pod, so they appear in tools like &lt;code&gt;kubectl&lt;/code&gt;, but they cannot be controlled or deleted through the API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;: Static Pods are useful in scenarios where Pods need to run directly on a node, independent of the Kubernetes control plane. This can include bootstrapping components required for Kubernetes itself, or running critical system daemons that must be available even if the control plane is offline.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Creating Static Pods
&lt;/h3&gt;

&lt;p&gt;To create a Static Pod:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create a Manifest File&lt;/strong&gt;: Write a Pod manifest YAML file specifying the Pod's metadata and spec, similar to how you define regular Pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Place in Watched Directory&lt;/strong&gt;: Save the manifest file in the directory monitored by the Kubelet (&lt;code&gt;/etc/kubernetes/manifests&lt;/code&gt; by default). This directory can be configured in the Kubelet configuration file by setting &lt;code&gt;staticPodPath&lt;/code&gt; to the pod manifests path. Alternatively, it can also be passed to the Kubelet through the &lt;code&gt;--pod-manifest-path&lt;/code&gt; flag, but this flag is deprecated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;If needed, restart the kubelet&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl restart kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
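&lt;p&gt;A static Pod manifest looks like any other Pod manifest; saving a file such as the following into the watched directory is enough for the Kubelet to start it (the file name and Pod name here are illustrative):&lt;/p&gt;

```yaml
# Hypothetical path: /etc/kubernetes/manifests/static-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
```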

&lt;p&gt;Static Pods in Kubernetes are managed directly by the Kubelet and are automatically restarted if they fail. The Kubelet ensures that each Static Pod's state aligns with its specified manifest file. Despite this direct management, Kubelet also attempts to create a mirror Pod on the API server for each Static Pod. This makes the static pod visible to the API server, however, the API server cannot control the pod.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Pods are the core units in Kubernetes, encapsulating containers with shared storage and network resources. They can run single or multiple containers, providing flexibility in application deployment. Understanding Pods' anatomy, lifecycle, and creation methods, including static Pods, is crucial for efficient and scalable application management in Kubernetes environments.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Pods in Kubernetes are inherently ephemeral and can be terminated at any time. Kubernetes uses controllers to effectively manage Pods, ensuring their desired state is maintained. ReplicaSet controllers ensure a specified number of Pod replicas are running. Other controllers like Deployments, StatefulSets, and DaemonSets cater to different use cases.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thank you for reading this blog; your interest is greatly appreciated, and I hope it helps you on your Kubernetes journey. In the next blog, we'll explore Kubernetes controllers that are used to manage Pods.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>opensource</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Creating a Kubernetes Cluster with Kubeadm and Containerd: A Comprehensive Step-by-Step Guide</title>
      <dc:creator>Pratik Jagrut</dc:creator>
      <pubDate>Sun, 23 Jun 2024 14:44:07 +0000</pubDate>
      <link>https://dev.to/pratikjagrut/creating-a-kubernetes-cluster-with-kubeadm-and-containerd-a-comprehensive-step-by-step-guide-1fgo</link>
      <guid>https://dev.to/pratikjagrut/creating-a-kubernetes-cluster-with-kubeadm-and-containerd-a-comprehensive-step-by-step-guide-1fgo</guid>
      <description>&lt;p&gt;Kubeadm is a tool designed to simplify the process of creating Kubernetes clusters by providing &lt;code&gt;kubeadm init&lt;/code&gt; and &lt;code&gt;kubeadm join&lt;/code&gt; commands as best-practice "fast paths." - Kubernetes documentation&lt;/p&gt;

&lt;p&gt;In this blog, we'll go through the step-by-step process of installing a Kubernetes cluster using Kubeadm.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin, ensure you have the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Ensure you have a compatible Linux host (e.g., Debian-based and Red Hat-based distributions). In this blog, we're using Ubuntu which is a Debian-based OS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At least 2 GB of RAM and 2 CPUs per machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Full network connectivity between all machines in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unique hostname, MAC address, and product_uuid for every node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure all the required ports are open for the control plane and the worker nodes. You can refer to &lt;a href="https://kubernetes.io/docs/reference/networking/ports-and-protocols"&gt;Ports and Protocols&lt;/a&gt; or see the screenshot below.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcj9gurz7u9s05jh8t4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcj9gurz7u9s05jh8t4m.png" alt="Ports and Protocols" width="738" height="727"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Disable swap on all the nodes.&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;span class="c"&gt;# disable swap on startup in /etc/fstab&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/ swap / s/^/#/'&lt;/span&gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting up the container runtime (containerd)
&lt;/h2&gt;

&lt;p&gt;To run containers in Pods, Kubernetes uses a container runtime and talks to it through the &lt;code&gt;Container Runtime Interface (CRI)&lt;/code&gt;. Each node needs to have a container runtime installed. In this blog, we'll use &lt;code&gt;containerd&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Run these instructions on all the nodes. I am using Ubuntu on all the nodes.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Enable IPv4 packet forwarding:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# sysctl params required by setup, params persist across reboots&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;# Apply sysctl params without reboot&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;--system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Run &lt;code&gt;sudo sysctl net.ipv4.ip_forward&lt;/code&gt; to verify that &lt;code&gt;net.ipv4.ip_forward&lt;/code&gt; is set to 1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Specify and load the following kernel module dependencies:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe overlay
&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe br_netfilter
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Install containerd:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Add Docker's official GPG key:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;ca-certificates curl
&lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/apt/keyrings
&lt;span class="nb"&gt;sudo &lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.asc
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;a+r /etc/apt/keyrings/docker.asc
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;strong&gt;&lt;em&gt;Add the repository to Apt sources:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; /etc/os-release &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$VERSION_CODENAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;strong&gt;&lt;em&gt;Install containerd&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;containerd.io
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;For more details, refer to the &lt;a href="https://github.com/containerd/containerd/blob/main/docs/getting-started.md#option-2-from-apt-get-or-dnf"&gt;Official installation documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure systemd cgroup driver for containerd&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we need to create a containerd configuration file at the location &lt;code&gt;/etc/containerd/config.toml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/containerd
&lt;span class="nb"&gt;sudo &lt;/span&gt;containerd config default | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/containerd/config.toml
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Now, we enable the systemd cgroup driver for the CRI in &lt;code&gt;/etc/containerd/config.toml&lt;/code&gt;: under the section &lt;code&gt;[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]&lt;/code&gt;, set &lt;code&gt;SystemdCgroup = true&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faih3rxrkar7zj917epzf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faih3rxrkar7zj917epzf.png" alt="Screenshot" width="782" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, we can just run&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"s/SystemdCgroup = false/SystemdCgroup = true/g"&lt;/span&gt; &lt;span class="s2"&gt;"/etc/containerd/config.toml"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Restart containerd&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart containerd
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Containerd should now be running; check its status using:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Install kubeadm, kubelet and kubectl
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Run these commands on all nodes. These instructions are for Kubernetes v1.30.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Install the &lt;/strong&gt;&lt;code&gt;apt-transport-https, ca-certificates, curl, gpg&lt;/code&gt;&lt;strong&gt; packages&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-transport-https ca-certificates curl gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Download the public signing key for the Kubernetes package repositories&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/kubernetes-apt-keyring.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add the &lt;/strong&gt;&lt;code&gt;apt&lt;/code&gt;&lt;strong&gt; repository for Kubernetes 1.30&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /'&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/kubernetes.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Install kubelet, kubeadm and kubectl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubelet kubeadm kubectl
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubelet kubeadm kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Optionally, enable the kubelet service before running kubeadm&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Initialize the Kubernetes control plane
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Run these instructions only on the control plane node&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;kubeadm init&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To initialize the control plane, run the &lt;code&gt;kubeadm init&lt;/code&gt; command. You also need to choose a pod network add-on and deploy a &lt;code&gt;Container Network Interface (CNI)&lt;/code&gt; so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start until a network is installed.&lt;/p&gt;

&lt;p&gt;We will use the Calico CNI, so set &lt;code&gt;--pod-network-cidr=192.168.0.0/16&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.0.0/16
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;At the end of the &lt;code&gt;kubeadm init&lt;/code&gt; output, you'll see a &lt;code&gt;kubeadm join&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo kubeadm join &amp;lt;control-plane-ip&amp;gt;:&amp;lt;control-plane-port&amp;gt; --token &amp;lt;token&amp;gt; --discovery-token-ca-cert-hash &amp;lt;hash&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Copy it and keep it safe.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Run the following commands to set up the kubeconfig so you can access the cluster using kubectl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Now, check the node using&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME             STATUS     ROLES           AGE    VERSION
ip-zzz-zz-z-zz   NotReady   control-plane   114s   v1.30.2
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The node is in the &lt;code&gt;NotReady&lt;/code&gt; state because of the kubelet condition &lt;code&gt;message: 'container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized' reason: KubeletNotReady&lt;/code&gt;. After the CNI is set up, the node should move to the &lt;code&gt;Ready&lt;/code&gt; state.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set Up a Pod Network&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We must deploy a Container Network Interface (CNI) based Pod network add-on so that Pods can communicate with each other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Install the calico operator on the cluster.&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;strong&gt;&lt;em&gt;Download the custom resources necessary to configure Calico.&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;strong&gt;&lt;em&gt;Verify that all the Calico pods are in the Running state.&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6f459db86d-mg657   1/1     Running   0          3m36s
calico-node-ctc9q                          1/1     Running   0          3m36s
calico-typha-774d5fbdb7-s7qsg              1/1     Running   0          3m36s
csi-node-driver-bblm8                      2/2     Running   0          3m3
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;strong&gt;&lt;em&gt;Verify that the node is now in the Ready state&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME             STATUS   ROLES           AGE     VERSION
ip-xxx-xx-x-xx   Ready    control-plane   2m46s   v1.30.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Join the worker nodes
&lt;/h2&gt;

&lt;p&gt;Ensure &lt;code&gt;containerd&lt;/code&gt;, &lt;code&gt;kubeadm&lt;/code&gt;, &lt;code&gt;kubectl&lt;/code&gt;, and &lt;code&gt;kubelet&lt;/code&gt; are installed on all worker nodes, then run &lt;code&gt;sudo kubeadm join &amp;lt;control-plane-ip&amp;gt;:&amp;lt;control-plane-port&amp;gt; --token &amp;lt;token&amp;gt; --discovery-token-ca-cert-hash &amp;lt;hash&amp;gt;&lt;/code&gt;, which you can find at the end of the &lt;code&gt;kubeadm init&lt;/code&gt; command's output.&lt;/p&gt;
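&lt;p&gt;If you lose the join command, you can regenerate it on the control plane with &lt;code&gt;kubeadm token create --print-join-command&lt;/code&gt;. The &lt;code&gt;--discovery-token-ca-cert-hash&lt;/code&gt; value can also be recomputed from the cluster CA certificate. As a sketch, the commands below generate a throwaway CA so they can run anywhere; on a real control plane you would point them at &lt;code&gt;/etc/kubernetes/pki/ca.crt&lt;/code&gt; instead (the &lt;code&gt;/tmp/demo-ca&lt;/code&gt; paths are demo-only assumptions):&lt;/p&gt;

```shell
# Generate a throwaway CA cert purely for demonstration; on a real control
# plane, use /etc/kubernetes/pki/ca.crt instead of /tmp/demo-ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Recompute the discovery hash: sha256 over the DER-encoded public key.
hash="sha256:$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')"
echo "$hash"
```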

&lt;h3&gt;
  
  
  Check the cluster state
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Run these commands on the control-plane node since the worker nodes do not have the kubeconfig file.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Check if the worker nodes are joined to the cluster.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME               STATUS   ROLES           AGE    VERSION
ip-xxx-xx-xx-xx    Ready    &amp;lt;none&amp;gt;          9m9s   v1.30.2
ip-yyy-yy-yy-yy    Ready    &amp;lt;none&amp;gt;          23s    v1.30.2
ip-zzz-zz-z-zz     Ready    control-plane   27m    v1.30.2
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;To add the worker role to a worker node, we can use the &lt;code&gt;kubectl label node &amp;lt;node-name&amp;gt; node-role.kubernetes.io/worker=worker&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME               STATUS   ROLES           AGE     VERSION
ip-xxx-xx-xx-xx    Ready    worker          12m     v1.30.2
ip-yyy-yy-yy-yy    Ready    worker          3m51s   v1.30.2
ip-zzz-zz-z-zz     Ready    control-plane   31m     v1.30.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Check the workloads running on the cluster&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-6fcb65fbd5-n4wsn          1/1     Running   0          35m
calico-apiserver   calico-apiserver-6fcb65fbd5-nnggl          1/1     Running   0          35m
calico-system      calico-kube-controllers-6f459db86d-mg657   1/1     Running   0          35m
calico-system      calico-node-ctc9q                          1/1     Running   0          35m
calico-system      calico-node-dmgt2                          1/1     Running   0          18m
calico-system      calico-node-nw4t5                          1/1     Running   0          9m49s
calico-system      calico-typha-774d5fbdb7-s7qsg              1/1     Running   0          35m
calico-system      calico-typha-774d5fbdb7-sxb5c              1/1     Running   0          9m39s
calico-system      csi-node-driver-bblm8                      2/2     Running   0          35m
calico-system      csi-node-driver-jk4sz                      2/2     Running   0          18m
calico-system      csi-node-driver-tbrrj                      2/2     Running   0          9m49s
kube-system        coredns-7db6d8ff4d-5f7s5                   1/1     Running   0          37m
kube-system        coredns-7db6d8ff4d-qj9r8                   1/1     Running   0          37m
kube-system        etcd-ip-zzz-zz-z-zz                        1/1     Running   0          37m
kube-system        kube-apiserver-ip-zzz-zz-z-zz              1/1     Running   0          37m
kube-system        kube-controller-manager-ip-zzz-zz-z-zz     1/1     Running   0          37m
kube-system        kube-proxy-dq8k4                           1/1     Running   0          9m49s
kube-system        kube-proxy-t2sw9                           1/1     Running   0          18m
kube-system        kube-proxy-xd6nn                           1/1     Running   0          37m
kube-system        kube-scheduler-ip-zzz-zz-z-zz              1/1     Running   0          37m
tigera-operator    tigera-operator-76ff79f7fd-jj4kf           1/1     Running   0          35m
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Setting up a Kubernetes cluster with Kubeadm involves a clear and structured process. You can create a functional cluster by meeting all prerequisites, configuring the container runtime, and installing Kubernetes components. Using Calico for networking ensures seamless pod communication. With the control plane and worker nodes properly configured and joined, you can efficiently manage and deploy workloads across your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thank you for reading this blog; your interest is greatly appreciated, and I hope it helps you on your Kubernetes journey. In the next article, we'll explore running workloads in the Kubernetes cluster.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cka</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Introduction to Kubernetes</title>
      <dc:creator>Pratik Jagrut</dc:creator>
      <pubDate>Wed, 19 Jun 2024 19:24:20 +0000</pubDate>
      <link>https://dev.to/pratikjagrut/introduction-to-kubernetes-3p1i</link>
      <guid>https://dev.to/pratikjagrut/introduction-to-kubernetes-3p1i</guid>
      <description>&lt;h2&gt;
  
  
  What is Kubernetes?
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.&lt;/code&gt; - &lt;em&gt;From Kubernetes Docs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In simpler terms, Kubernetes, also known as an orchestrator, is an open-source platform that automates the management of containers. It grew out of &lt;code&gt;Google&lt;/code&gt;'s experience running its internal container management system, &lt;code&gt;Borg&lt;/code&gt;. Kubernetes was made open-source on &lt;code&gt;June 7, 2014&lt;/code&gt;, and in &lt;code&gt;July 2015&lt;/code&gt; it was donated to the &lt;code&gt;Cloud Native Computing Foundation (CNCF)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Kubernetes means &lt;code&gt;helmsman&lt;/code&gt; or &lt;code&gt;pilot&lt;/code&gt; in Greek, reflecting its role in guiding containerized applications. It is also known as &lt;code&gt;k8s&lt;/code&gt;, a shorthand that represents the &lt;code&gt;8 letters between the 'k' and the 's'&lt;/code&gt; .&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;p&gt;Kubernetes offers a robust set of features, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic Bin Packing:&lt;/strong&gt; Schedules containers automatically based on resource needs and constraints, ensuring efficient utilization without compromising availability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Self-Healing:&lt;/strong&gt; Replaces and reschedules containers from failed nodes, restarts unresponsive containers, and prevents traffic from being routed to them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Horizontal Scaling:&lt;/strong&gt; Scales applications manually or automatically based on CPU or custom metrics utilization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Discovery and Load Balancing:&lt;/strong&gt; Assigns IP addresses to containers and provides a single DNS name for a set of containers to facilitate load balancing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Rollouts and Rollbacks:&lt;/strong&gt; Manages seamless rollouts and rollbacks of application updates and configuration changes, continuously monitoring application health to prevent downtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secret and Config Management:&lt;/strong&gt; Manages secrets and configuration details separately from container images, avoiding the need to rebuild images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Storage Orchestration:&lt;/strong&gt; Automatically mounts storage solutions to containers from local storage, cloud providers, or network storage systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Batch Execution:&lt;/strong&gt; Supports batch execution and long-running jobs, and replaces failed containers as needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Role-Based Access Control (RBAC):&lt;/strong&gt; Regulates access to cluster resources based on user roles within an enterprise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extensibility:&lt;/strong&gt; Extends functionality through Custom Resource Definitions (CRDs), operators, custom APIs, and more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With its extensive array of capabilities, Kubernetes simplifies the management of containerized applications, ensuring optimal efficiency and performance, while also providing scalability, resilience, and flexibility for diverse workloads.&lt;/p&gt;
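&lt;p&gt;As a concrete illustration of the RBAC feature above, a minimal Role granting read-only access to Pods might look like this (a sketch; the &lt;code&gt;pod-reader&lt;/code&gt; name and &lt;code&gt;/tmp&lt;/code&gt; path are hypothetical):&lt;/p&gt;

```shell
# Write a minimal (hypothetical) Role manifest granting read-only Pod access
cat > /tmp/pod-reader-role.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
EOF
# In a live cluster, apply it with: kubectl apply -f /tmp/pod-reader-role.yaml
# and bind it to a user or ServiceAccount with a RoleBinding.
```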

&lt;h2&gt;
  
  
  Cluster Architecture
&lt;/h2&gt;

&lt;p&gt;A Kubernetes cluster has a straightforward yet elegant and efficient architecture, consisting of at least one &lt;code&gt;master node&lt;/code&gt;, also known as a &lt;code&gt;control-plane node&lt;/code&gt;, and at least one &lt;code&gt;worker node&lt;/code&gt;. The diagram below illustrates the cluster architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsz467yk45va18exodfu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsz467yk45va18exodfu.png" alt="K8s Architecture" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Control-plane node
&lt;/h3&gt;

&lt;p&gt;The control plane is responsible for maintaining the overall state of the cluster. The control plane includes components like the API server, scheduler, controller manager, and etcd, which coordinate and manage the cluster operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kube-api-server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes API server serves as the core of the control plane. It exposes an HTTP API through which users, external components, and cluster components securely interact to manage the state of Kubernetes objects. It validates incoming requests before storing them and supports the incorporation of custom API servers to expand its functionality. Highly configurable, it accommodates diverse configurations and extensions to suit specific cluster requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scheduler&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes scheduler watches for newly created pods without assigned nodes and selects suitable nodes for them. It retrieves resource usage data for each worker node from etcd via the API server. It also honours scheduling requirements specified in the pod's configuration, such as a preference for nodes carrying a specific label like &lt;code&gt;disk=ssd&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In short, the scheduler assigns a node to each pod based on available resources and scheduling constraints.&lt;/p&gt;
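&lt;p&gt;The labelling constraint mentioned above can be expressed in a pod spec with &lt;code&gt;nodeSelector&lt;/code&gt;. A minimal sketch (the pod name, label, and image are hypothetical):&lt;/p&gt;

```shell
# A minimal (hypothetical) pod spec whose nodeSelector asks the scheduler
# to place the pod only on nodes carrying the label disk=ssd
cat > /tmp/ssd-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disk: ssd
  containers:
  - name: app
    image: nginx:1.25
EOF
grep -A1 'nodeSelector' /tmp/ssd-pod.yaml
```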

&lt;p&gt;&lt;strong&gt;kube-controller-manager&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The kube-controller-manager is a collection of different Kubernetes controllers, running as a single binary. It ensures that the &lt;code&gt;actual state of objects matches their desired state&lt;/code&gt;. Each controller watches over its objects, maintains their state, and plays a specific role in maintaining the health and desired configuration of the cluster. Key controllers within kube-controller-manager include the &lt;code&gt;ReplicaSet controller, Deployment controller, Namespace controller, ServiceAccount controller, Endpoint controller, and Persistent Volume controller&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cloud-controller-manager&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The cloud-controller-manager includes the &lt;code&gt;Node controller, Route controller, Service controller, and Volume controller&lt;/code&gt;. These controllers are responsible for interfacing with the cloud infrastructure to manage nodes, storage volumes, load balancing, and routing. They ensure seamless integration and management of cloud resources within the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ETCD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ETCD is a distributed key-value store used to persist the state of a Kubernetes cluster. New data is always appended, never replaced, and obsolete data is compacted periodically to reduce the data store size. Only the Kubernetes API server can communicate directly with ETCD to ensure consistency and security. The ETCD CLI management tool provides capabilities for backup, snapshot, and restore.&lt;/p&gt;

&lt;h3&gt;
  
  
  Worker node
&lt;/h3&gt;

&lt;p&gt;A worker node in Kubernetes executes application workloads by hosting and managing containers, providing essential computational resources within the cluster. It consists of components such as the &lt;code&gt;kubelet&lt;/code&gt;, &lt;code&gt;kube-proxy&lt;/code&gt;, and a container runtime such as &lt;code&gt;Docker&lt;/code&gt; or &lt;code&gt;containerd&lt;/code&gt;, which Kubernetes drives through the &lt;code&gt;Container Runtime Interface (CRI)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubelet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The kubelet operates as an agent on each node within the Kubernetes cluster, maintaining communication with the control plane components. It receives pod definitions from the API server and coordinates with the container runtime to instantiate and manage containers associated with those pods. The kubelet also monitors the health and lifecycle of containers and manages resources as per pod specifications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kube-proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The kube-proxy is a network agent running on each node in the Kubernetes cluster. It dynamically updates and maintains the network rules on the node to facilitate communication between pods and external traffic. It abstracts the complexities of pod networking by managing services and routing connections to the appropriate pods based on IP address and port number.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container Runtime&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Kubernetes, a container runtime is essential for each node to handle the lifecycle of containers. Some of the known container runtimes are Docker, CRI-O, containerd, and rkt. These runtimes interface with Kubernetes using the Container Runtime Interface (CRI), ensuring that containers are created, managed, and terminated as needed within the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pod&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Kubernetes, you cannot directly run containers as you would with Docker. Instead, containers are grouped into units called pods. A pod can host multiple containers and is the smallest deployable object in Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does all of this come together?
&lt;/h2&gt;

&lt;p&gt;Let's see this with an example of pod creation:&lt;/p&gt;

&lt;p&gt;First, define the pod by creating a YAML or JSON file that specifies its configuration, including &lt;code&gt;container images, resource requirements, environment variables, storage volumes etc&lt;/code&gt;. This file acts as the pod's blueprint.&lt;/p&gt;

&lt;p&gt;Once you have the pod definition file ready, you submit it to the Kubernetes API server using the &lt;code&gt;kubectl&lt;/code&gt; command-line tool. For instance, you can apply the configuration with a command like &lt;code&gt;kubectl apply -f pod-definition.yaml&lt;/code&gt;. Alternatively, you can create the pod directly with a command such as &lt;code&gt;kubectl run my-pod --image=my-image&lt;/code&gt;.&lt;/p&gt;
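&lt;p&gt;A minimal &lt;code&gt;pod-definition.yaml&lt;/code&gt; to go with the &lt;code&gt;kubectl apply&lt;/code&gt; command above might look like this (a sketch; the pod name, label, and image are hypothetical placeholders):&lt;/p&gt;

```shell
# A minimal (hypothetical) pod definition matching the kubectl apply example
cat > /tmp/pod-definition.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: my-container
    image: my-image:latest
EOF
# Submit it with: kubectl apply -f /tmp/pod-definition.yaml
```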

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xjnszxze0vqros0w57f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xjnszxze0vqros0w57f.png" alt="User-to-api-server" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes API server receives and validates the pod creation request, ensuring it meets all criteria. Once validated, it stores the pod definition in etcd, the cluster's key-value store.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpinafu3t50j9q30jaa8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpinafu3t50j9q30jaa8.png" alt="api-server-to-etcd" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes scheduler watches for newly created &lt;code&gt;pods without assigned nodes&lt;/code&gt;. It interacts with the API server to get the pod configuration, evaluates the resource requirements and constraints, and then selects a suitable worker node for the pod, updating the pod configuration with the &lt;code&gt;nodeName&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7tcjpxwyt2myl8opcps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7tcjpxwyt2myl8opcps.png" alt="scheduler" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the assigned node, the kubelet receives the pod definition from the API server. It manages the pod and its containers: it pulls container images from a registry if needed, uses the container runtime (like Docker or containerd) to create and start the containers as specified, and sets up any required storage. It also monitors container health and restarts containers according to the pod's restart policy. Meanwhile, &lt;code&gt;kube-proxy&lt;/code&gt; configures networking rules so the pod can communicate with other pods and services. When all containers are running and ready, the pod can accept traffic, showcasing Kubernetes' orchestration in maintaining application state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyo0mz1e1mvyjps49ze3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyo0mz1e1mvyjps49ze3.png" alt="worker-node" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes is a robust platform for managing containerized applications, offering features that simplify deployment, scaling, and maintenance. This article covered its origins, core functionalities, and architecture. Leveraging Kubernetes enhances efficiency, resilience, and flexibility, making it essential for modern cloud-native environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thank you for reading this blog; your interest is greatly appreciated, and I hope it helps you on your Kubernetes journey. In the next article, we'll explore how to install Kubernetes using the kubeadm tool.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>cloudnative</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Introduction to Container Orchestration</title>
      <dc:creator>Pratik Jagrut</dc:creator>
      <pubDate>Mon, 17 Jun 2024 06:06:03 +0000</pubDate>
      <link>https://dev.to/pratikjagrut/introduction-to-container-orchestration-352f</link>
      <guid>https://dev.to/pratikjagrut/introduction-to-container-orchestration-352f</guid>
      <description>&lt;p&gt;Containerization has revolutionized how we develop and deploy software, streamlining processes and enhancing scalability. It all began modestly in &lt;code&gt;1979&lt;/code&gt; with the introduction of &lt;code&gt;chroot&lt;/code&gt;, a Unix feature that allowed applications to operate within a confined directory subset. This breakthrough laid the groundwork for application isolation, a crucial concept for modern containerization.&lt;/p&gt;

&lt;p&gt;Building upon chroot, FreeBSD's introduction of &lt;code&gt;jails&lt;/code&gt; in 2000 marked a significant advancement. Jails provided a more robust form of isolation within a FreeBSD environment, enabling multiple applications to run securely on the same host without interference. This development was pivotal in demonstrating the practicality of isolating software environments for enhanced security and efficiency.&lt;/p&gt;

&lt;p&gt;Following FreeBSD, &lt;code&gt;Solaris Containers (2004)&lt;/code&gt;, also known as &lt;code&gt;Zones&lt;/code&gt;, refined containerization by introducing sophisticated resource management capabilities. Zones allowed administrators to allocate specific CPU, memory, and storage resources to each container, optimizing hardware utilization and paving the way for efficient data centre management.&lt;/p&gt;

&lt;p&gt;Google's &lt;code&gt;control groups (cgroups)&lt;/code&gt;, integrated into the Linux kernel in &lt;code&gt;2007&lt;/code&gt;, brought fine-grained resource control to Linux-based containers. This innovation enabled administrators to manage and isolate resource usage among groups of processes, enhancing predictability and performance in containerized environments.&lt;/p&gt;

&lt;p&gt;The culmination of these advancements led to the creation of &lt;code&gt;Linux Containers (LXC) in 2008&lt;/code&gt;, which provided a user-friendly interface for leveraging Linux kernel features like &lt;code&gt;cgroups&lt;/code&gt; and &lt;code&gt;namespaces&lt;/code&gt;. LXC enabled the creation and management of lightweight, isolated Linux environments, marking a significant milestone towards the widespread adoption of container technology.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;In 2013, Docker&lt;/code&gt; revolutionized containerization with its user-friendly platform for creating, deploying, and managing containers. Initially built upon LXC, Docker later introduced its own container runtime, &lt;code&gt;libcontainer&lt;/code&gt;, which leveraged Linux namespaces, control groups, and other kernel features. Docker's standardized container format and tooling simplified application packaging and deployment, accelerating the adoption of containers in both development and production environments.&lt;/p&gt;

&lt;p&gt;Around the same time, the technological landscape experienced a major shift in software architecture. It moved from monolithic applications, where all modules run on a single machine and are tightly coupled, to a more decentralized and scalable model known as microservices architecture. In the 2000s, the rise of microservices architecture and the adoption of cloud computing rapidly accelerated the use of containerization. However, efficiently managing and orchestrating these containers became a significant challenge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges in Container Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Efficiently managing and orchestrating these containers at scale remains a formidable task, presenting challenges such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;: Deploying numerous containers across diverse environments requires meticulous handling of versions, dependencies, and configurations to ensure consistency and reliability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scaling&lt;/strong&gt;: Applications must scale dynamically to meet varying demands, necessitating automated mechanisms that optimize resource usage without manual intervention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Networking&lt;/strong&gt;: Effective networking is essential for seamless service discovery, load balancing, and secure communication among containers, demanding robust management policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Management&lt;/strong&gt;: Efficient allocation of CPU, memory, and storage resources is critical to prevent performance bottlenecks and control operational costs effectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: Ensuring container security requires implementing strict access controls, secure configurations, and isolation strategies to mitigate risks of breaches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Availability&lt;/strong&gt;: Maintaining application availability involves proactive management of container failures, load balancing, and resilient failover strategies to minimize downtime.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Addressing these challenges is crucial for leveraging the full potential of containerization, enabling agility, scalability, and efficiency in software development and deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container Orchestration
&lt;/h3&gt;

&lt;p&gt;While containerization has revolutionized software deployment, efficiently managing and scaling containerized applications across complex environments remains a daunting task. Container orchestration addresses these challenges by automating deployment, scaling, and management processes, ensuring applications run seamlessly from development through to production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container Orchestrators
&lt;/h3&gt;

&lt;p&gt;Container orchestrators are tools that group systems together to form clusters, where container deployment and management are automated at scale while meeting production requirements.&lt;/p&gt;

&lt;p&gt;They provide essential functionalities such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Deployment&lt;/strong&gt;: Simplifying the deployment process with declarative configurations and automated rollouts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Enabling horizontal scaling of applications based on resource demands, ensuring performance and efficiency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Networking Automation&lt;/strong&gt;: Facilitating efficient networking by managing service discovery, load balancing, and network security policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Optimization&lt;/strong&gt;: Optimizing resource allocation and utilization to enhance performance and reduce operational costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security Enhancements&lt;/strong&gt;: Implementing security best practices, including isolation mechanisms, encryption, and access controls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Availability Strategies&lt;/strong&gt;: Ensuring continuous application availability through automated failover, load distribution, and recovery mechanisms.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Popular container orchestrators include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Automatic bin packing, self-healing, horizontal scaling, service discovery, load balancing, and automated rollouts and rollbacks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker Swarm&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Native clustering and orchestration solution for Docker containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Easy setup, Docker CLI compatibility, service discovery, load balancing, scaling, and rolling updates.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Apache Mesos&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An open-source project that abstracts CPU, memory, storage, and other compute resources away from machines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Highly scalable, supports multiple frameworks, resource isolation, fault tolerance, and elasticity.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Nomad&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Developed by HashiCorp, it is a flexible and simple workload orchestrator.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Multi-region, multi-cloud deployment, integrates with Consul for service discovery and Vault for secrets management, easy to use, and supports multiple workloads (Docker, non-containerized, Windows, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;OpenShift&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Developed by Red Hat, built on top of Kubernetes with additional features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Developer and operational tools, automated installation, upgrade management, monitoring, logging, and security policies.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Rancher&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An open-source platform for managing Kubernetes at scale.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Multi-cluster management, integrated monitoring and logging, centralized RBAC, and supports any Kubernetes distribution.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon Elastic Kubernetes Service (EKS)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Managed Kubernetes service by Amazon Web Services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Fully managed, integrated with AWS services, auto-scaling, security, and compliance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Google Kubernetes Engine (GKE)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Managed Kubernetes service by Google Cloud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Fully managed, integrated with Google Cloud services, auto-scaling, security, and compliance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Azure Kubernetes Service (AKS)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Managed Kubernetes service by Microsoft Azure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Fully managed, integrated with Azure services, auto-scaling, security, and compliance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;IBM Cloud Kubernetes Service&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Managed Kubernetes service by IBM Cloud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Fully managed, integrated with IBM Cloud services, auto-scaling, security, and compliance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Alibaba Cloud Container Service for Kubernetes (ACK)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Managed Kubernetes service by Alibaba Cloud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: Fully managed, integrated with Alibaba Cloud services, auto-scaling, security, and compliance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion and Future Articles
&lt;/h3&gt;

&lt;p&gt;In conclusion, containerization has revolutionized software development and deployment, offering scalability, efficiency, and agility crucial in today's dynamic landscape. As we've explored the evolution from chroot to Docker and the challenges of managing containerized environments, it's clear that container orchestration is pivotal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thank you for reading this blog; your interest is greatly appreciated, and I hope it helps you on your Kubernetes journey. In the next blog, we'll explore Kubernetes, covering its features, architecture, and core components.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>containers</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Kubernetes: Hello World</title>
      <dc:creator>Pratik Jagrut</dc:creator>
      <pubDate>Sun, 09 Jun 2024 12:14:45 +0000</pubDate>
      <link>https://dev.to/pratikjagrut/kubernetes-hello-world-268d</link>
      <guid>https://dev.to/pratikjagrut/kubernetes-hello-world-268d</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Deploying software can be a daunting and unpredictable task. Kubernetes, often referred to as &lt;strong&gt;&lt;em&gt;K8s&lt;/em&gt;&lt;/strong&gt;, serves as a proficient navigator in this complex landscape. It is an open-source container orchestration platform that automates the deployment, scaling, and management of applications within containers. These containers are compact, self-sufficient units that house everything an application needs, ensuring consistency across diverse environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prepare the application
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Clone the Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this guide, we're using &lt;a href="https://github.com/pratikjagrut/hello-kubernetes"&gt;&lt;strong&gt;&lt;em&gt;hello-kubernetes&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;, a simple web-based application written in Go. You can find the source code &lt;a href="https://github.com/pratikjagrut/hello-kubernetes"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/pratikjagrut/hello-kubernetes.git
&lt;span class="nb"&gt;cd &lt;/span&gt;hello-kubernetes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Understanding the Code&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"log"&lt;/span&gt;
    &lt;span class="s"&gt;"net/http"&lt;/span&gt;
    &lt;span class="s"&gt;"os"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Received request from %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RemoteAddr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Hello, Kubernetes!"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;port&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"PORT"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"8080"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HandleFunc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Server listening on port %s..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ListenAndServe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":"&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to start the server: "&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;

    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Click on http://localhost:%s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;done&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;done&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Go application sets up an HTTP server that responds with "Hello, Kubernetes!" and logs request details. It runs the server concurrently, keeping the main function active.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Dockerfile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Dockerfile uses a multi-stage build to create a minimal container:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Builds the Go application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creates a minimal final image using &lt;strong&gt;&lt;em&gt;scratch&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exposes port &lt;strong&gt;&lt;em&gt;8080&lt;/em&gt;&lt;/strong&gt; and sets the command to run the Go application.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Builder Stage&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;cgr.dev/chainguard/go:latest&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;
&lt;span class="c"&gt;# Set the working directory inside the container&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="c"&gt;# Copy the application source code into the container&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="c"&gt;# Download dependencies using Go modules&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;go mod download
&lt;span class="c"&gt;# Build the Go application&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;GOOS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linux go build &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-installsuffix&lt;/span&gt; cgo &lt;span class="nt"&gt;-o&lt;/span&gt; main .
&lt;span class="c"&gt;# Final Stage&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; scratch&lt;/span&gt;
&lt;span class="c"&gt;# Copy the compiled application binary from the builder stage to the final image&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/main /app/main&lt;/span&gt;
&lt;span class="c"&gt;# Expose port 8080 to the outside world&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;
&lt;span class="c"&gt;# Command to run the executable&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["/app/main"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Building the Container Image&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt;: Install Docker to create container images for your application. Refer to the official &lt;a href="https://docs.docker.com/get-docker/"&gt;Docker documentation&lt;/a&gt; for installation instructions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image Registry Account&lt;/strong&gt;: Sign up for an account on &lt;a href="https://github.com"&gt;GitHub&lt;/a&gt;, &lt;a href="https://hub.docker.com"&gt;DockerHub&lt;/a&gt;, or any other container image registry. You'll use this account to store and manage your container images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the terminal and navigate to the repository directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Build the container image using the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; ghcr.io/pratikjagrut/hello-kubernetes &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This command builds the container image using the &lt;code&gt;Dockerfile&lt;/code&gt; in the current directory. The &lt;code&gt;-t&lt;/code&gt; flag specifies the image name.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Testing application image&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Once the image is built, run a Docker container from the image:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜ docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:8080 ghcr.io/pratikjagrut/hello-kubernetes
2023/08/08 13:25:24 Click on the &lt;span class="nb"&gt;link &lt;/span&gt;http://localhost:8080
2023/08/08 13:25:24 Server listening on port 8080...
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This command maps port 8080 of your host machine to port 8080 in the container.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open a web browser and navigate to &lt;a href="http://localhost:8080"&gt;&lt;code&gt;http://localhost:8080&lt;/code&gt;&lt;/a&gt;. You should see the &lt;code&gt;Hello, Kubernetes!&lt;/code&gt; message.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Pushing the image to the container registry&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here we've opted for the GitHub container registry. However, feel free to select a registry that aligns with your preferences.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log in to Docker using the GitHub Container Registry:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker login ghcr.io
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;When you run the command, it will prompt for your username and password. For the GitHub Container Registry, use a personal access token with the appropriate packages scope as the password. Enter these credentials to log in to your container registry.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Push the tagged image to the GitHub Container Registry:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker push ghcr.io/pratikjagrut/hello-kubernetes
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify that the image is in your GitHub Container Registry by visiting the &lt;code&gt;Packages&lt;/code&gt; section of your GitHub repository.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, we'll set up a Kubernetes cluster to deploy our containerized application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Kubernetes cluster
&lt;/h3&gt;

&lt;p&gt;Here we'll use KIND (Kubernetes in Docker) as our local k8s cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installing KIND and Kubectl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we dive into setting up the Kubernetes cluster, you'll need to install both KIND and kubectl on your machine.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;KIND (Kubernetes in Docker)&lt;/strong&gt;: KIND allows you to run Kubernetes clusters as Docker containers, making it perfect for local development. Follow the &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/"&gt;official KIND installation guide&lt;/a&gt; to install it on your system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kubectl&lt;/strong&gt;: This command-line tool is essential for interacting with your Kubernetes cluster. Follow the &lt;a href="https://kubernetes.io/docs/tasks/tools/"&gt;Kubernetes documentation&lt;/a&gt; to install kubectl on your machine.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Creating Your KIND Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once KIND and kubectl are set up, let's create your local Kubernetes cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open your terminal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command to create a basic KIND cluster:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check if the cluster is up and running using &lt;code&gt;kubectl get ns&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It should list all the namespaces present in the cluster.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜ kubectl get ns
NAME                 STATUS   AGE
default              Active   3m13s
kube-node-lease      Active   3m14s
kube-public          Active   3m14s
kube-system          Active   3m14s
local-path-storage   Active   3m9s
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Alternative Setup Options:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Minikube&lt;/strong&gt;: If you prefer another local option, &lt;a href="https://minikube.sigs.k8s.io/docs/start/"&gt;Minikube&lt;/a&gt; provides a hassle-free way to run a single-node Kubernetes cluster on your local machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Desktop&lt;/strong&gt;: For macOS and Windows users, &lt;a href="https://www.docker.com/products/docker-desktop"&gt;Docker Desktop&lt;/a&gt; offers a simple way to set up a Kubernetes cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rancher Desktop&lt;/strong&gt;: &lt;a href="https://rancherdesktop.io/"&gt;Rancher Desktop&lt;/a&gt; is another choice for a local development cluster that integrates with Kubernetes, Docker, and other tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud Clusters&lt;/strong&gt;: If you'd rather work in a cloud environment, consider platforms like &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;Google Kubernetes Engine (GKE)&lt;/a&gt; or &lt;a href="https://aws.amazon.com/eks/"&gt;Amazon EKS&lt;/a&gt; for managed Kubernetes clusters.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With your Kubernetes cluster up and running, you're ready to sail ahead with deploying your first application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy application on Kubernetes
&lt;/h3&gt;

&lt;p&gt;Now, we'll deploy our application onto the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Kubernetes Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Deployment&lt;/strong&gt; in Kubernetes serves as a manager for your application's components, known as &lt;em&gt;Pods&lt;/em&gt;. Think of it like a supervisor ensuring that the right number of Pods are running and matching your desired configuration.&lt;/p&gt;

&lt;p&gt;In more technical terms, a Deployment lets you define how many Pods you want and how they should be set up. If a Pod fails or needs an update, the Deployment Controller steps in to replace it. This ensures that your application remains available and runs smoothly.&lt;/p&gt;

&lt;p&gt;To put it simply, a Deployment takes care of keeping our application consistent and reliable, even when Pods face issues. It's a fundamental tool for maintaining the health of your application in a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Here's how we can create a Deployment for our application:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a YAML file named &lt;code&gt;hello-k8s-deployment.yaml&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-k8s-deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-k8s&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-k8s&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-k8s-container&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/pratikjagrut/hello-kubernetes&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This YAML defines a Deployment named &lt;code&gt;hello-k8s-deployment&lt;/code&gt; that runs two replicas of our application.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Apply the Deployment to your Kubernetes cluster:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; hello-k8s-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if you're using the GitHub Container Registry like me, the deployment will fail with an &lt;code&gt;ImagePullBackOff&lt;/code&gt; or &lt;code&gt;ErrImagePull&lt;/code&gt; error (&lt;code&gt;failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized&lt;/code&gt;). By default, images on the GitHub Container Registry are private.&lt;/p&gt;

&lt;p&gt;When you describe the pods, you'll see warning messages in the events section such as &lt;code&gt;Failed to pull image "ghcr.io/pratikjagrut/hello-kubernetes"&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜ kubectl describe pods hello-k8s-deployment-54889c9777-549rn
...
Events:
  Type     Reason     Age                  From               Message
  &lt;span class="nt"&gt;----&lt;/span&gt;     &lt;span class="nt"&gt;------&lt;/span&gt;     &lt;span class="nt"&gt;----&lt;/span&gt;                 &lt;span class="nt"&gt;----&lt;/span&gt;               &lt;span class="nt"&gt;-------&lt;/span&gt;
  Normal   Scheduled  2m40s                default-scheduler  Successfully assigned default/hello-k8s-deployment-54889c9777-549rn to kind-control-plane
  Normal   Pulling    75s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 2m39s&lt;span class="o"&gt;)&lt;/span&gt;  kubelet            Pulling image &lt;span class="s2"&gt;"ghcr.io/pratikjagrut/hello-kubernetes"&lt;/span&gt;
  Warning  Failed     74s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 2m39s&lt;span class="o"&gt;)&lt;/span&gt;  kubelet            Failed to pull image &lt;span class="s2"&gt;"ghcr.io/pratikjagrut/hello-kubernetes"&lt;/span&gt;: rpc error: code &lt;span class="o"&gt;=&lt;/span&gt; Unknown desc &lt;span class="o"&gt;=&lt;/span&gt; failed to pull and unpack image &lt;span class="s2"&gt;"ghcr.io/pratikjagrut/hello-kubernetes:latest"&lt;/span&gt;: failed to resolve reference &lt;span class="s2"&gt;"ghcr.io/pratikjagrut/hello-kubernetes:latest"&lt;/span&gt;: failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized
  Warning  Failed     74s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 2m39s&lt;span class="o"&gt;)&lt;/span&gt;  kubelet            Error: ErrImagePull
  Warning  Failed     50s &lt;span class="o"&gt;(&lt;/span&gt;x6 over 2m39s&lt;span class="o"&gt;)&lt;/span&gt;  kubelet            Error: ImagePullBackOff
  Normal   BackOff    36s &lt;span class="o"&gt;(&lt;/span&gt;x7 over 2m39s&lt;span class="o"&gt;)&lt;/span&gt;  kubelet            Back-off pulling image &lt;span class="s2"&gt;"ghcr.io/pratikjagrut/hello-kubernetes"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This happens because Kubernetes is trying to pull a private image without the credentials required to do so.&lt;/p&gt;

&lt;p&gt;When a container image is hosted in a private registry, we need to provide Kubernetes with credentials to pull the image via &lt;strong&gt;Image Pull Secrets&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Docker registry secret:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create secret docker-registry my-registry-secret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--docker-username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your-username&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--docker-password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your-password&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--docker-server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your-registry-server&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
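&lt;p&gt;Since this post pulls from the GitHub Container Registry, the concrete command would look something like the following. Note that for GHCR the password is a personal access token with the &lt;code&gt;read:packages&lt;/code&gt; scope, not your account password; the placeholders below are for you to fill in:&lt;/p&gt;

```shell
# Create an image pull secret for ghcr.io.
# <github-username> and <github-pat> are placeholders for your own GitHub
# username and a personal access token with the read:packages scope.
kubectl create secret docker-registry my-registry-secret \
  --docker-server=ghcr.io \
  --docker-username=<github-username> \
  --docker-password=<github-pat>
```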



&lt;ol start="2"&gt;
&lt;li&gt;Attach the secret to your Deployment:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;imagePullSecrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-registry-secret&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; hello-k8s-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After applying the updated Deployment, you can see that all the pods are running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
hello-k8s-deployment-669788ccd6-4dbb6   1/1     Running   0          22s
hello-k8s-deployment-669788ccd6-k5gfg   1/1     Running   0          37s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
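&lt;p&gt;You can also ask the Deployment controller directly whether the rollout finished. This is a quick optional check, assuming the Deployment name &lt;code&gt;hello-k8s-deployment&lt;/code&gt; from the manifest above:&lt;/p&gt;

```shell
# Block until the Deployment reports all replicas ready, then show its status.
kubectl rollout status deployment/hello-k8s-deployment
kubectl get deployment hello-k8s-deployment
```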



&lt;p&gt;&lt;strong&gt;Access Your Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the Deployment in place, we can access our application from our local machine. Since we're using KIND, we can use port-forwarding to reach the application:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find the name of one of the deployed Pods:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;hello-k8s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Forward local port 8080 to the Pod:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward &amp;lt;pod-name&amp;gt; 8080:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
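&lt;p&gt;If you'd rather not look up an individual pod name, &lt;code&gt;kubectl&lt;/code&gt; can also pick a pod for you when you pass the Deployment itself. This is a convenience, not a required step:&lt;/p&gt;

```shell
# Forward local port 8080 to port 8080 of a pod managed by the Deployment.
kubectl port-forward deployment/hello-k8s-deployment 8080:8080
```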



&lt;p&gt;Now, if you open a web browser and navigate to &lt;a href="http://localhost:8080"&gt;&lt;code&gt;http://localhost:8080&lt;/code&gt;&lt;/a&gt; or use &lt;code&gt;curl http://localhost:8080&lt;/code&gt; you should see "Hello, Kubernetes!" displayed, indicating your application is running successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜ curl http://localhost:8080
Hello, Kubernetes!%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;NOTE: For production, use a Kubernetes service and Ingress for optimal traffic handling.&lt;/p&gt;
&lt;/blockquote&gt;
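&lt;p&gt;As a rough sketch of that note, a minimal Service selecting our pods might look like this. The name &lt;code&gt;hello-k8s-service&lt;/code&gt; is my assumption here; an Ingress would then route external traffic to this Service:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-k8s-service
spec:
  selector:
    app: hello-k8s        # matches the labels in our Deployment's pod template
  ports:
    - port: 80            # port the Service exposes inside the cluster
      targetPort: 8080    # containerPort of our application
```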

&lt;h3&gt;
  
  
Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, this beginner's guide has walked you through deploying your first application on Kubernetes. But remember, this is just the start. Kubernetes offers vast opportunities for optimizing your application's performance, scalability, and resilience. With features like advanced networking, load balancing, automated scaling, and self-healing, Kubernetes ensures seamless application operation in any environment. So, while this guide ends here, your journey with Kubernetes is only beginning.&lt;/p&gt;

&lt;p&gt;Thank you for reading! Hope you find this helpful!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
