<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lloyd Rivers</title>
    <description>The latest articles on DEV Community by Lloyd Rivers (@lloydrivers).</description>
    <link>https://dev.to/lloydrivers</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2235235%2F70f16613-f706-45dc-a18f-97702e2e43da.png</url>
      <title>DEV Community: Lloyd Rivers</title>
      <link>https://dev.to/lloydrivers</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lloydrivers"/>
    <language>en</language>
    <item>
      <title>CKA Full Course 2024: Day 15/40 Kubernetes Node Affinity Explained</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Fri, 15 Nov 2024 20:51:37 +0000</pubDate>
      <link>https://dev.to/lloydrivers/cka-full-course-2024-day-1540-kubernetes-node-affinity-explained-1klf</link>
      <guid>https://dev.to/lloydrivers/cka-full-course-2024-day-1540-kubernetes-node-affinity-explained-1klf</guid>
      <description>&lt;p&gt;Here we go with another video and fresh labs we need to do. But before we get going, let’s discuss the differences between &lt;strong&gt;Node Affinity&lt;/strong&gt; and &lt;strong&gt;Taints and Tolerations&lt;/strong&gt; because I’ve got to keep it real, I was (more than) a bit confused.&lt;/p&gt;

&lt;p&gt;The key difference between &lt;strong&gt;Node Affinity&lt;/strong&gt; and &lt;strong&gt;Taints and Tolerations&lt;/strong&gt; lies in how they control pod placement on nodes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="noopener noreferrer"&gt;Node Affinity&lt;/a&gt;&lt;/strong&gt;: Specifies &lt;strong&gt;where&lt;/strong&gt; pods can be scheduled based on node labels. It uses "required" or "preferred" criteria to influence pod placement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="noopener noreferrer"&gt;Taints and Tolerations&lt;/a&gt;&lt;/strong&gt;: Used to &lt;strong&gt;prevent&lt;/strong&gt; pods from being scheduled on certain nodes unless they explicitly "tolerate" the node’s taint. Taints are applied to nodes, and tolerations are applied to pods to allow them to be scheduled on tainted nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
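&lt;p&gt;In practice the two are often combined to dedicate nodes to a workload: the taint keeps everything else off the node, and the affinity keeps the pod on it. Here's a minimal sketch of the two stanzas together (the &lt;code&gt;gpu&lt;/code&gt; key is a hypothetical example, not something from the labs below):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;spec:
  # Toleration: allows this pod onto nodes tainted gpu=true:NoSchedule
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  # Affinity: requires that the node actually carries the gpu=true label
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values:
            - "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;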




&lt;h3&gt;
  
  
  Exercises
&lt;/h3&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Create a Pod with Node Affinity&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a pod with &lt;code&gt;nginx&lt;/code&gt; as the image.&lt;/li&gt;
&lt;li&gt;Add a &lt;strong&gt;Node Affinity&lt;/strong&gt; rule with the property &lt;code&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/code&gt;, setting the condition &lt;code&gt;disktype = ssd&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;

  &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disktype&lt;/span&gt;
            &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
            &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ssd&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  2. &lt;strong&gt;Check Pod Status&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Check the status of the pod to understand why it’s not being scheduled.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Command:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Output:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME    READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
nginx   0/1     Pending   0          3m55s   &amp;lt;none&amp;gt;   &amp;lt;none&amp;gt;   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;p&gt;The pod is stuck in &lt;code&gt;Pending&lt;/code&gt; because no node carries the label &lt;code&gt;disktype=ssd&lt;/code&gt;, so the required node affinity rule can never be satisfied.&lt;/p&gt;
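&lt;p&gt;You can confirm this from the scheduler's own events (the exact wording varies by Kubernetes version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe pod nginx
# The Events section should include a FailedScheduling entry along the lines of:
#   0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;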




&lt;h3&gt;
  
  
  3. &lt;strong&gt;Add Node Label&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Add the label &lt;code&gt;disktype=ssd&lt;/code&gt; to the node &lt;code&gt;kind-cka-cluster-worker&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Verify the pod status again to ensure it has been scheduled on &lt;code&gt;kind-cka-cluster-worker&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Command to Add the Label:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl label nodes kind-cka-cluster-worker &lt;span class="nv"&gt;disktype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ssd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Recheck Pod Status:
&lt;/h4&gt;

&lt;p&gt;To verify the change, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the &lt;code&gt;nginx&lt;/code&gt; pod in the &lt;strong&gt;Running&lt;/strong&gt; state, with the &lt;code&gt;NODE&lt;/code&gt; column showing &lt;code&gt;kind-cka-cluster-worker&lt;/code&gt;.&lt;/p&gt;
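&lt;p&gt;A handy cross-check is the &lt;code&gt;-L&lt;/code&gt; flag on &lt;code&gt;kubectl get nodes&lt;/code&gt;, which adds a column showing each node's value for a given label key:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes -L disktype
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Only &lt;code&gt;kind-cka-cluster-worker&lt;/code&gt; should show &lt;code&gt;ssd&lt;/code&gt; in the &lt;code&gt;DISKTYPE&lt;/code&gt; column at this point.&lt;/p&gt;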




&lt;h3&gt;
  
  
  4. &lt;strong&gt;Create a Second Pod with Node Affinity&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a new pod configuration with &lt;code&gt;redis&lt;/code&gt; as the image.
&lt;/li&gt;
&lt;li&gt;Add a &lt;strong&gt;Node Affinity&lt;/strong&gt; rule with the property &lt;code&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/code&gt;, setting &lt;code&gt;disktype&lt;/code&gt; as the condition without specifying a value.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To copy the existing manifest as a starting point:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;cat &lt;/span&gt;pod.yml &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; redis.yml  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then edit the &lt;code&gt;redis.yml&lt;/code&gt; file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change the &lt;code&gt;name&lt;/code&gt; to &lt;code&gt;redis&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Update the &lt;code&gt;image&lt;/code&gt; to &lt;code&gt;redis&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Modify the &lt;code&gt;nodeAffinity&lt;/code&gt; rule by removing the &lt;code&gt;values&lt;/code&gt; section and changing the operator to Exists.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final configuration should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
  &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disktype&lt;/span&gt;
            &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Exists&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Do not deploy this pod yet&lt;/strong&gt;—we will first ensure that the cluster is prepared for this configuration.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add the Label &lt;code&gt;disktype&lt;/code&gt; (with no value) to the Second Worker Node&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
   To add a label with just the key &lt;code&gt;disktype&lt;/code&gt; and no value to the node &lt;code&gt;kind-cka-cluster-worker2&lt;/code&gt;, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl label nodes kind-cka-cluster-worker2 &lt;span class="nv"&gt;disktype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify the Node Labels&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
   Check if the label has been successfully added to the node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get nodes &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look for the &lt;code&gt;disktype&lt;/code&gt; label under &lt;code&gt;LABELS&lt;/code&gt; for &lt;code&gt;kind-cka-cluster-worker2&lt;/code&gt;.&lt;/p&gt;
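&lt;p&gt;If the full &lt;code&gt;--show-labels&lt;/code&gt; output is hard to scan, a label selector on the key alone lists every node that carries &lt;code&gt;disktype&lt;/code&gt;, whatever its value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes -l disktype
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both worker nodes should appear: one labelled &lt;code&gt;disktype=ssd&lt;/code&gt; and one with an empty value.&lt;/p&gt;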

&lt;p&gt;&lt;strong&gt;Deploy the Redis Pod&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
   Ensure that the &lt;code&gt;redis.yml&lt;/code&gt; file is configured with the &lt;code&gt;nodeAffinity&lt;/code&gt; rule that requires the &lt;code&gt;disktype&lt;/code&gt; key to exist.&lt;br&gt;&lt;br&gt;
   Deploy the pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; redis.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Check Pod Scheduling&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
   Verify that the &lt;code&gt;redis&lt;/code&gt; pod is now scheduled on &lt;code&gt;kind-cka-cluster-worker2&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should show &lt;code&gt;kind-cka-cluster-worker2&lt;/code&gt; in the &lt;code&gt;NODE&lt;/code&gt; column for the &lt;code&gt;redis&lt;/code&gt; pod.&lt;/p&gt;




&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Day 15: &lt;a href="https://www.youtube.com/watch?v=5vimzBRnoDk&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=18" rel="noopener noreferrer"&gt;Video Tutorial&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>docker</category>
      <category>learning</category>
    </item>
    <item>
      <title>CKA Full Course 2024: Day 14/40 Taints and Tolerations in Kubernetes</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Sun, 10 Nov 2024 13:54:02 +0000</pubDate>
      <link>https://dev.to/lloydrivers/cka-full-course-2024-day-1440-taints-and-tolerations-in-kubernetes-2ain</link>
      <guid>https://dev.to/lloydrivers/cka-full-course-2024-day-1440-taints-and-tolerations-in-kubernetes-2ain</guid>
      <description>&lt;p&gt;So, if you're following along, you might notice I'm experimenting with the format of these posts (sorry, lol). In this one, we'll tackle all the exercises Piyush has asked for right at the beginning. This way, if anyone needs help or gets stuck, they can find guidance here without having to go through a lot of details first.&lt;/p&gt;




&lt;p&gt;Before you answer the first question, you need to have the cluster up and running. You should be really &lt;strong&gt;REALLY&lt;/strong&gt; familiar with this by now, but just in case you're here randomly, here’s the contents of my &lt;code&gt;config.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cka-cluster&lt;/span&gt;
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
    &lt;span class="na"&gt;extraPortMappings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30001&lt;/span&gt;
        &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30001&lt;/span&gt;
        &lt;span class="na"&gt;listenAddress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0"&lt;/span&gt;
        &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you need to run the following command to create the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; kind-cka-cluster &lt;span class="nt"&gt;--config&lt;/span&gt; config.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
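&lt;p&gt;Once creation finishes, it's worth confirming that all three nodes registered and report &lt;code&gt;Ready&lt;/code&gt; before moving on:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see &lt;code&gt;kind-cka-cluster-control-plane&lt;/code&gt;, &lt;code&gt;kind-cka-cluster-worker&lt;/code&gt;, and &lt;code&gt;kind-cka-cluster-worker2&lt;/code&gt;.&lt;/p&gt;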






&lt;h3&gt;
  
  
  Exercises
&lt;/h3&gt;

&lt;h3&gt;
  
  
  1. Taint both Worker Nodes
&lt;/h3&gt;

&lt;p&gt;To apply taints to the worker nodes, use the following &lt;code&gt;kubectl taint&lt;/code&gt; commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl taint nodes kind-cka-cluster-worker &lt;span class="nv"&gt;gpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;:NoSchedule
kubectl taint nodes kind-cka-cluster-worker2 &lt;span class="nv"&gt;gpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;:NoSchedule
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
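&lt;p&gt;To double-check that both taints landed, a quick &lt;code&gt;grep&lt;/code&gt; over the node descriptions does the job:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe nodes kind-cka-cluster-worker kind-cka-cluster-worker2 | grep Taints
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;One line should show &lt;code&gt;gpu=true:NoSchedule&lt;/code&gt; and the other &lt;code&gt;gpu=false:NoSchedule&lt;/code&gt;.&lt;/p&gt;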






&lt;h3&gt;
  
  
  2. Create a new pod with the image nginx and see why it's not getting scheduled on worker nodes and control plane nodes.
&lt;/h3&gt;

&lt;p&gt;To create a new pod with the &lt;code&gt;nginx&lt;/code&gt; image, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl run nginx &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the pod, but it won't be scheduled anywhere: it has no toleration for the taints on the worker nodes (&lt;code&gt;gpu=true:NoSchedule&lt;/code&gt; and &lt;code&gt;gpu=false:NoSchedule&lt;/code&gt;), and the control plane node carries the default &lt;code&gt;node-role.kubernetes.io/control-plane:NoSchedule&lt;/code&gt; taint.&lt;/p&gt;

&lt;p&gt;To verify that the pod is not scheduled, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see that the pod is stuck in a &lt;code&gt;Pending&lt;/code&gt; state due to the lack of tolerations for the taints.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. Add Toleration to the Pod to Match the Taint on worker01
&lt;/h3&gt;

&lt;p&gt;To add a toleration for the taint, we're going to take a slightly roundabout approach: editing the live pod's YAML directly got tricky with the indentation, so instead we'll delete the pod and recreate it from a manifest.&lt;/p&gt;

&lt;p&gt;First, delete the existing pod with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pod nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, create a new pod with the updated toleration by applying a YAML file. Below is the YAML configuration for the pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpu"&lt;/span&gt;
    &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Equal"&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
    &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NoSchedule"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration ensures that the pod will tolerate the taint &lt;code&gt;gpu=true:NoSchedule&lt;/code&gt; and can therefore be scheduled on &lt;code&gt;kind-cka-cluster-worker&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally, apply the YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify that the pod is scheduled on the correct worker node, you can check the pod’s status and the node it is running on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  4. Remove Taint from Control Plane Node
&lt;/h3&gt;

&lt;p&gt;First, we need to check the taints applied to the control plane node. To do this, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe node kind-cka-cluster-control-plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you search (&lt;code&gt;Cmd+F&lt;/code&gt; on Mac, &lt;code&gt;Ctrl+F&lt;/code&gt; elsewhere) for the word &lt;strong&gt;Taints&lt;/strong&gt;, you'll see output similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Taints:node-role.kubernetes.io/control-plane:NoSchedule
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This indicates that the control plane node currently has the taint &lt;code&gt;node-role.kubernetes.io/control-plane:NoSchedule&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To remove this taint, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl taint nodes kind-cka-cluster-control-plane node-role.kubernetes.io/control-plane:NoSchedule-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-&lt;/code&gt; at the end of the taint command signifies that you're removing the taint, not adding a new one.&lt;/p&gt;

&lt;p&gt;After running the command, you can verify that the taint has been successfully removed by describing the node again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe node kind-cka-cluster-control-plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see that the &lt;code&gt;Taints&lt;/code&gt; section is now empty or no longer includes the &lt;code&gt;node-role.kubernetes.io/control-plane:NoSchedule&lt;/code&gt; taint.&lt;/p&gt;

&lt;p&gt;This chunk of code and the explanation are directly "borrowed" from the docs. I won't lie, sometimes I find it better to do the exercises without watching the video. I find I am retaining more info.&lt;/p&gt;




&lt;h3&gt;
  
  
  Create a New Pod Without Toleration
&lt;/h3&gt;

&lt;p&gt;Create a new file named &lt;code&gt;redis.yml&lt;/code&gt;, and add the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration defines a simple pod named &lt;code&gt;redis&lt;/code&gt; with a container running the &lt;code&gt;redis&lt;/code&gt; image.&lt;/p&gt;

&lt;p&gt;Deploy the Pod:&lt;/p&gt;

&lt;p&gt;To deploy the pod using the YAML file, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; redis.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify Pod Placement:&lt;/p&gt;

&lt;p&gt;Once the pod is created, let’s verify that it’s running on the control plane node. Use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will show which node the &lt;code&gt;redis&lt;/code&gt; pod is running on. Under the &lt;code&gt;NODE&lt;/code&gt; column you should see the control plane node: because the pod has no tolerations for the taints on the worker nodes, the now-untainted control plane is the only node where it can be scheduled.&lt;/p&gt;




&lt;h3&gt;
  
  
  Reapply Taint to Control Plane Node
&lt;/h3&gt;

&lt;p&gt;To reapply the previously removed taint on the control plane node, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl taint nodes kind-cka-cluster-control-plane node-role.kubernetes.io/control-plane:NoSchedule
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command re-establishes the &lt;code&gt;NoSchedule&lt;/code&gt; taint on the control plane node, preventing pods without a matching toleration from being scheduled on it.&lt;/p&gt;
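&lt;p&gt;To confirm the taint is back without reading the full &lt;code&gt;describe&lt;/code&gt; output, you can query it directly with jsonpath:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get node kind-cka-cluster-control-plane -o jsonpath='{.spec.taints}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output is a JSON array that should once again contain the &lt;code&gt;node-role.kubernetes.io/control-plane&lt;/code&gt; taint.&lt;/p&gt;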




&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Day 14: &lt;a href="https://www.youtube.com/watch?v=nwoS2tK2s6Q&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=15" rel="noopener noreferrer"&gt;Video Tutorial&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>docker</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>VS Code YAML Plugin Setup for Kubernetes Beginners</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Mon, 04 Nov 2024 11:10:18 +0000</pubDate>
      <link>https://dev.to/lloydrivers/vs-code-yaml-plugin-setup-for-kubernetes-beginners-10hf</link>
      <guid>https://dev.to/lloydrivers/vs-code-yaml-plugin-setup-for-kubernetes-beginners-10hf</guid>
      <description>&lt;h3&gt;
  
  
  Goal
&lt;/h3&gt;

&lt;p&gt;Setting up the &lt;strong&gt;YAML plugin for VS Code&lt;/strong&gt; to get autocompletion and schema support when working with Kubernetes YAML files.&lt;/p&gt;




&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;Setting up YAML files for Kubernetes can be tricky, especially when you're just getting started. Here’s how to configure the Red Hat YAML plugin in VS Code to make life easier with autocomplete and an outline view.&lt;/p&gt;




&lt;h3&gt;
  
  
  Steps
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Install the YAML Plugin
&lt;/h4&gt;

&lt;p&gt;In VS Code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the &lt;strong&gt;Extensions&lt;/strong&gt; view by clicking on the Extensions icon in the Activity Bar (or press &lt;code&gt;Ctrl+Shift+X&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Search for &lt;strong&gt;"YAML"&lt;/strong&gt; by Red Hat.&lt;/li&gt;
&lt;li&gt;Install the &lt;strong&gt;YAML&lt;/strong&gt; extension by Red Hat (v1.15.0 at the time of writing).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1cujaxbciibefx17b69.png" alt="Image description" width="800" height="187"&gt;
&lt;/h2&gt;

&lt;h4&gt;
  
  
  2. Configure the YAML Plugin
&lt;/h4&gt;

&lt;p&gt;Once the plugin is installed, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to &lt;strong&gt;Settings&lt;/strong&gt; (click on the gear icon at the bottom left corner).&lt;/li&gt;
&lt;li&gt;In the search bar at the top of the Settings panel, type &lt;strong&gt;YAML: Schemas&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;YAML: Schemas&lt;/strong&gt;, click &lt;strong&gt;Edit in settings.json&lt;/strong&gt; to open your &lt;code&gt;settings.json&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmoxlztckrg190wssl95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmoxlztckrg190wssl95.png" alt="Image description" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Add Kubernetes Schema
&lt;/h4&gt;

&lt;p&gt;In &lt;code&gt;settings.json&lt;/code&gt;, add the Kubernetes schema for YAML files so the editor recognizes and autocompletes your Kubernetes configurations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"yaml.schemas"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"kubernetes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*.yaml"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This configuration associates the Kubernetes schema with any YAML file (&lt;code&gt;*.yaml&lt;/code&gt;) in the current workspace.&lt;/li&gt;
&lt;/ul&gt;
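&lt;p&gt;If applying the Kubernetes schema to &lt;em&gt;every&lt;/em&gt; YAML file is too broad (it will flag non-Kubernetes files such as CI configs), the setting also accepts an array of globs so you can scope it to specific directories. The &lt;code&gt;k8s/&lt;/code&gt; and &lt;code&gt;manifests/&lt;/code&gt; paths here are just examples:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;"yaml.schemas": {
    "kubernetes": ["k8s/*.yaml", "manifests/*.yaml"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;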

&lt;h4&gt;
  
  
  4. Reload VS Code
&lt;/h4&gt;

&lt;p&gt;Once you’ve saved &lt;code&gt;settings.json&lt;/code&gt;, &lt;strong&gt;reload VS Code&lt;/strong&gt; for the changes to take effect.&lt;/p&gt;




&lt;h4&gt;
  
  
  5. Using Autocomplete and Outline Features
&lt;/h4&gt;

&lt;p&gt;With the YAML plugin configured, you’ll notice a few helpful features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Autocomplete&lt;/strong&gt;: As you start typing Kubernetes resources, VS Code suggests options based on the schema, making it easier to write accurate YAML configurations. You can also press &lt;strong&gt;Ctrl+Space&lt;/strong&gt; (Control+Space on macOS) to trigger suggestions manually.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6wndvgvawt478d9c84w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6wndvgvawt478d9c84w.png" alt="Image description" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Outline View&lt;/strong&gt;: Under the &lt;strong&gt;Outline&lt;/strong&gt; section in the Explorer pane, you can view the structure of your YAML file, which is especially useful for navigating larger configurations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9po56ye5i82hf7u2mv1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9po56ye5i82hf7u2mv1u.png" alt="Image description" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;This setup enhances your productivity by giving you real-time YAML assistance and structured navigation. It’s an ideal tool for beginners getting started with Kubernetes configurations in VS Code!&lt;/p&gt;

</description>
      <category>vscode</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>beginners</category>
    </item>
    <item>
      <title>CKA Full Course 2024: Day 13/40 Static Pods, Manual Scheduling, Labels, and Selectors in Kubernetes</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Sun, 03 Nov 2024 20:42:05 +0000</pubDate>
      <link>https://dev.to/lloydrivers/cka-full-course-2024-day-1340-static-pods-manual-scheduling-labels-and-selectors-in-kubernetes-29jb</link>
      <guid>https://dev.to/lloydrivers/cka-full-course-2024-day-1340-static-pods-manual-scheduling-labels-and-selectors-in-kubernetes-29jb</guid>
      <description>&lt;h2&gt;
  
  
  Task: Schedule a Pod Manually Without the Scheduler
&lt;/h2&gt;

&lt;p&gt;In this task, we’ll be exploring how to bypass the Kubernetes scheduler by directly assigning a pod to a specific node in a cluster. This can be a useful approach for specific scenarios where you need a pod to run on a particular node without going through the usual scheduling process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;We assume you have a Kubernetes cluster running, created with a KIND (Kubernetes in Docker) configuration similar to the one described in previous posts. Here, we’ve created a cluster named &lt;code&gt;kind-cka-cluster&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; kind-cka-cluster &lt;span class="nt"&gt;--config&lt;/span&gt; config.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we’ve already covered cluster creation with KIND in earlier posts, we won’t go into those details again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Verify the Cluster Nodes
&lt;/h3&gt;

&lt;p&gt;To see the nodes available in this new cluster, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                             STATUS   ROLES           AGE   VERSION
kind-cka-cluster-control-plane   Ready    control-plane   7m    v1.31.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this task, we’ll be scheduling our pod on &lt;code&gt;kind-cka-cluster-control-plane&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Define the Pod Manifest (pod.yml)
&lt;/h3&gt;

&lt;p&gt;Now, let’s create a pod manifest in YAML format. Using the &lt;code&gt;nodeName&lt;/code&gt; field in our pod configuration, we can specify the exact node for the pod, bypassing the Kubernetes scheduler entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pod.yml&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;nodeName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind-cka-cluster-control-plane&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this manifest:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We set &lt;code&gt;nodeName&lt;/code&gt; to &lt;code&gt;kind-cka-cluster-control-plane&lt;/code&gt;, which means the scheduler will skip assigning a node, and the Kubelet on this specific node will handle placement instead.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach is a direct method for node selection, overriding other methods like &lt;code&gt;nodeSelector&lt;/code&gt; or affinity rules. &lt;/p&gt;

&lt;p&gt;According to Kubernetes documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"nodeName is a more direct form of node selection than affinity or nodeSelector. nodeName is a field in the Pod spec. If the nodeName field is not empty, the scheduler ignores the Pod and the kubelet on the named node tries to place the Pod on that node. Using nodeName overrules using nodeSelector or affinity and anti-affinity rules."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For more details, refer to the &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="noopener noreferrer"&gt;Kubernetes documentation on node assignment&lt;/a&gt;.&lt;/p&gt;
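&lt;p&gt;For contrast, the scheduler-mediated equivalent of the manifest above would use &lt;code&gt;nodeSelector&lt;/code&gt; with a node label instead of naming the node outright. A sketch (the &lt;code&gt;kubernetes.io/hostname&lt;/code&gt; label is set automatically on every node):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # The scheduler picks a node matching this label; unlike nodeName,
  # the pod still goes through normal scheduling (taints, resources, etc.).
  nodeSelector:
    kubernetes.io/hostname: kind-cka-cluster-control-plane
```

&lt;p&gt;Note that this version still respects scheduling constraints: on a tainted control plane node the pod would sit in &lt;code&gt;Pending&lt;/code&gt; without a matching toleration, whereas &lt;code&gt;nodeName&lt;/code&gt; bypasses the scheduler entirely.&lt;/p&gt;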

&lt;h3&gt;
  
  
  Step 3: Apply the Pod Manifest
&lt;/h3&gt;

&lt;p&gt;With our manifest ready, apply it to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates the &lt;code&gt;nginx&lt;/code&gt; pod and assigns it directly to the &lt;code&gt;kind-cka-cluster-control-plane&lt;/code&gt; node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Verify Pod Placement
&lt;/h3&gt;

&lt;p&gt;Finally, check that the pod is running on the specified node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should confirm that the &lt;code&gt;nginx&lt;/code&gt; pod is indeed running on &lt;code&gt;kind-cka-cluster-control-plane&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                             NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          28s   10.244.0.5   kind-cka-cluster-control-plane   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This verifies that by setting the &lt;code&gt;nodeName&lt;/code&gt; field, we successfully bypassed the Kubernetes scheduler and directly scheduled our pod on the control plane node.&lt;/p&gt;




&lt;h2&gt;
  
  
  Task: Log in to the control plane node, navigate to the default static pod manifest directory, and restart the control plane components
&lt;/h2&gt;

&lt;p&gt;To access the control plane node of our newly created cluster, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; kind-cka-cluster-control-plane bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to the directory containing the static pod manifests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /etc/kubernetes/manifests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the current manifests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To restart the kube-controller-manager, temporarily move its manifest file out of the directory. The kubelet watches this directory and stops the pod as soon as the manifest disappears:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mv &lt;/span&gt;kube-controller-manager.yaml /tmp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After confirming the component has stopped, return the manifest file to its original location; the kubelet will recreate the pod, completing the restart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mv&lt;/span&gt; /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With these steps, we successfully demonstrated how to access the control plane and manipulate the static pod manifests to manage the lifecycle of control plane components.&lt;/p&gt;




&lt;h3&gt;
  
  
  Confirming the Restart of kube-controller-manager
&lt;/h3&gt;

&lt;p&gt;After temporarily moving the &lt;code&gt;kube-controller-manager.yaml&lt;/code&gt; manifest file to &lt;code&gt;/tmp&lt;/code&gt;, we can verify that the kube-controller-manager has restarted. As mentioned in previous posts, I am using k9s, which clearly shows the restart; for readers without k9s, try the following command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inspect Events&lt;/strong&gt;:&lt;br&gt;
   To gather more information, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl describe pod kube-controller-manager-kind-cka-cluster-control-plane &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look for events at the end of the output. A successful restart will show events similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   Events:
     Type    Reason   Age                    From     Message
     ----    ------   ----                   ----     -------
     Normal  Killing  4m12s (x2 over 8m32s)  kubelet  Stopping container kube-controller-manager
     Normal  Pulled   3m6s (x2 over 7m36s)   kubelet  Container image "registry.k8s.io/kube-controller-manager:v1.31.0" already present on machine
     Normal  Created  3m6s (x2 over 7m36s)   kubelet  Created container kube-controller-manager
     Normal  Started  3m6s (x2 over 7m36s)   kubelet  Started container kube-controller-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The presence of "Killing," "Created," and "Started" events indicates that the kube-controller-manager was stopped and then restarted successfully.&lt;/p&gt;




&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;

&lt;p&gt;Once you have completed your tasks and confirmed the behavior of your pods, it is important to clean up any resources that are no longer needed. This helps maintain a tidy environment and frees up resources in your cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List Pods&lt;/strong&gt;:&lt;br&gt;
   First, you can check the current pods running in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might see output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   NAME    READY   STATUS    RESTARTS   AGE
   nginx   1/1     Running   0          35m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Describe Pod&lt;/strong&gt;:&lt;br&gt;
   To get more information about a specific pod, use the describe command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl describe pod nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will give you details about the pod, such as its name, namespace, node, and other configurations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   Name:             nginx
   Namespace:        default
   Priority:         0
   Service Account:  default
   Node:             kind-cka-cluster-control-plane/172.19.0.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Delete the Pod&lt;/strong&gt;:&lt;br&gt;
   If you find that the pod is no longer needed, you can safely delete it with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl delete pod nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify Deletion&lt;/strong&gt;:&lt;br&gt;
   After executing the delete command, you can verify that the pod has been removed by listing the pods again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure that the nginx pod no longer appears in the list.&lt;/p&gt;

&lt;p&gt;By performing these cleanup steps, you help ensure that your Kubernetes cluster remains organized and efficient.&lt;/p&gt;




&lt;h3&gt;
  
  
  Creating Multiple Pods with Specific Labels
&lt;/h3&gt;

&lt;p&gt;In this section, we will create three pods based on the nginx image, each with a unique name and specific labels indicating different environments: &lt;code&gt;env:test&lt;/code&gt;, &lt;code&gt;env:dev&lt;/code&gt;, and &lt;code&gt;env:prod&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create the Script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we'll create a script that contains the commands to generate the pods. I am using a script for two reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I want to learn bash, and&lt;/li&gt;
&lt;li&gt;if I need to create these 3 pods again, I only have to run the file instead of typing it all out again.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Use the following command to create the script file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi create-pods.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, paste the following code into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# Create pod1 with label env=test&lt;/span&gt;
kubectl run pod1 &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;

&lt;span class="c"&gt;# Create pod2 with label env=dev&lt;/span&gt;
kubectl run pod2 &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev

&lt;span class="c"&gt;# Create pod3 with label env=prod&lt;/span&gt;
kubectl run pod3 &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod

&lt;span class="c"&gt;# Wait for a few seconds to allow the pods to start&lt;/span&gt;
&lt;span class="nb"&gt;sleep &lt;/span&gt;5

&lt;span class="c"&gt;# Verify the created pods and their labels&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Verifying created pods and their labels:"&lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Make the Script Executable&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After saving the file, make the script executable with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x create-pods.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Execute the Script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the script to create the pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./create-pods.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output indicating the creation of the pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pod/pod1 created
pod/pod2 created
pod/pod3 created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Verify the Created Pods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The script will then display the status of the created pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Verifying created pods and their labels:
NAME   READY   STATUS              RESTARTS   AGE   LABELS
pod1   0/1     ContainerCreating   0          5s    env=test
pod2   0/1     ContainerCreating   0          5s    env=dev
pod3   0/1     ContainerCreating   0          5s    env=prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, you can filter the pods based on their labels. For example, to find the pod with the &lt;code&gt;env=dev&lt;/code&gt; label, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get po &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output confirming the pod is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME   READY   STATUS    RESTARTS   AGE
pod2   1/1     Running   0          4m9s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Day 13: &lt;a href="https://www.youtube.com/watch?v=6eGf7_VSbrQ&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=14" rel="noopener noreferrer"&gt;Video Tutorial&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
    <item>
      <title>CKA Full Course 2024: Day 12/40 Daemonsets, Job and Cronjob in Kubernetes</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Thu, 31 Oct 2024 20:41:48 +0000</pubDate>
      <link>https://dev.to/lloydrivers/cka-full-course-2024-day-1240-daemonsets-job-and-cronjob-in-kubernetes-1gma</link>
      <guid>https://dev.to/lloydrivers/cka-full-course-2024-day-1240-daemonsets-job-and-cronjob-in-kubernetes-1gma</guid>
      <description>&lt;p&gt;Since this post is all about deploying an Nginx frontend and gathering metrics from our cluster, let’s start with a brief overview of what we’re building and the tools involved.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Originally, I wasn’t planning to include another introduction, but given the complexity of today’s setup, a recap will ensure everyone has a solid understanding before diving in.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we’re deploying a simple Nginx server as our frontend. It will serve the default Nginx homepage, which you can access via your browser once it’s running. We’ll also configure a health-checking system using a CronJob, which will periodically check whether our Nginx server is up and running, returning a status code as confirmation.&lt;/p&gt;

&lt;p&gt;Additionally, we’ll be setting up a &lt;strong&gt;DaemonSet&lt;/strong&gt; to run a &lt;strong&gt;Node Exporter&lt;/strong&gt; on every node in our cluster. This Node Exporter will gather metrics, giving us insights into the performance and resource usage of our app and cluster. &lt;/p&gt;

&lt;h3&gt;
  
  
  Key Concepts
&lt;/h3&gt;

&lt;p&gt;To make sure we're all on the same page, here’s a breakdown of the main components we’re working with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Nginx&lt;/strong&gt;: &lt;a href="https://nginx.org/en/" rel="noopener noreferrer"&gt;Nginx&lt;/a&gt; is a web server that can serve static content (like HTML) or be configured as a reverse proxy or load balancer. Here, we’re using it to serve the default Nginx homepage, which acts as our app’s frontend.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CronJob&lt;/strong&gt;: In Kubernetes, a &lt;strong&gt;CronJob&lt;/strong&gt; allows you to run a job at specific intervals, just like scheduled tasks. Here, we’re using a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noopener noreferrer"&gt;CronJob&lt;/a&gt; to regularly check the health of our Nginx server. If the Nginx server is up and running, it will return a status code that confirms it’s reachable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DaemonSet&lt;/strong&gt;: A &lt;strong&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noopener noreferrer"&gt;DaemonSet&lt;/a&gt;&lt;/strong&gt; ensures that a specific pod runs on every node in your Kubernetes cluster. In this setup, we’re using it to run a &lt;strong&gt;Node Exporter&lt;/strong&gt; on each node, collecting metrics like CPU and memory usage, which is crucial for monitoring app health and resource consumption.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this structure in mind, we’ll dive into the YAML files needed to set up each component.&lt;/p&gt;
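&lt;p&gt;To make the CronJob idea concrete before we get to the full manifests, here is a minimal sketch of a health check along those lines. The schedule, the &lt;code&gt;curlimages/curl&lt;/code&gt; image, and the &lt;code&gt;nginx-service&lt;/code&gt; name are illustrative assumptions, not the final configuration:&lt;/p&gt;

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nginx-health-check
spec:
  schedule: "*/5 * * * *"   # every five minutes (illustrative)
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: health-check
            image: curlimages/curl   # assumed curl-capable image
            # Print only the HTTP status code returned by the frontend
            args: ["-s", "-o", "/dev/null", "-w", "%{http_code}", "http://nginx-service"]
          restartPolicy: OnFailure
```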




&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before we start, ensure you have a configuration file to create the Kubernetes cluster. Refer to the &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster" rel="noopener noreferrer"&gt;Kind Quick Start guide&lt;/a&gt; for detailed instructions on setting up your Kind cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster Configuration (config.yml)
&lt;/h3&gt;

&lt;p&gt;Create a file named &lt;code&gt;config.yml&lt;/code&gt; with the following content to define your Kind cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cka-cluster&lt;/span&gt;  
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
  &lt;span class="na"&gt;extraPortMappings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30001&lt;/span&gt;  
    &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30001&lt;/span&gt;
    &lt;span class="na"&gt;listenAddress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0"&lt;/span&gt; 
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;  
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command to create the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; kind-cka-cluster &lt;span class="nt"&gt;--config&lt;/span&gt; config.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following command to set the context to the new cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config use-context kind-kind-cka-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Tasks
&lt;/h3&gt;





&lt;h3&gt;
  
  
  Create a DaemonSet
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The task here is to create a DaemonSet in Kubernetes. A DaemonSet ensures that every node in the cluster runs a copy of the specified pod. For this example, I am setting up the Prometheus Node Exporter, which will allow us to monitor metrics from each node. While I follow the video's demonstration here, I encourage you to visit the Kubernetes documentation to read more about DaemonSets and their configurations for a deeper understanding.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Solution
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DaemonSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-node-exporter&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-node-exporter&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-node-exporter&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-node-exporter&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-node-exporter&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prom/node-exporter:v0.16.0&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-node-exporter&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prom-node-exp&lt;/span&gt;
          &lt;span class="c1"&gt;#^ must be an IANA_SVC_NAME (at most 15 characters, ..)&lt;/span&gt;
          &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9100&lt;/span&gt;
          &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9100&lt;/span&gt;
      &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;node-role.kubernetes.io/master"&lt;/span&gt;
        &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NoSchedule"&lt;/span&gt;
      &lt;span class="na"&gt;hostNetwork&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;hostPID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;hostIPC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;prometheus.io/scrape&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;true'&lt;/span&gt;
    &lt;span class="na"&gt;prometheus.io/app-metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;true'&lt;/span&gt;
    &lt;span class="na"&gt;prometheus.io/app-metrics-path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/metrics'&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-node-exporter&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-node-exporter&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clusterIP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;None&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-node-exporter&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9100&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-node-exporter&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Create a CronJob&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This task involves creating a CronJob that runs every 5 minutes. I chose not to follow the video tutorial here. The CronJob requests the app's homepage and logs the HTTP status code (200 when the app is up) together with the returned HTML content.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Solution
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CronJob&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-app-health-check&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*/5&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;  &lt;span class="c1"&gt;# Runs every 5 minutes&lt;/span&gt;
  &lt;span class="na"&gt;jobTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;check-web-server&lt;/span&gt;
              &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appropriate/curl&lt;/span&gt;
              &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/bin/sh&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
                  &lt;span class="s"&gt;status_code=$(curl -s -o /dev/null -w '%{http_code}' http://nginx-app-svc.default.svc.cluster.local)&lt;/span&gt;
                  &lt;span class="s"&gt;homepage_content=$(curl -s http://nginx-app-svc.default.svc.cluster.local)&lt;/span&gt;
                  &lt;span class="s"&gt;echo "Status Code: $status_code"&lt;/span&gt;
                  &lt;span class="s"&gt;echo "Homepage Content: $homepage_content"&lt;/span&gt;
          &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OnFailure&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration runs the health check every 5 minutes against the Nginx app's in-cluster Service address and logs both the status code and the homepage content.&lt;/p&gt;
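&lt;p&gt;The shell logic inside that CronJob boils down to "fetch, then report the status code". Here's a minimal local sketch of the classification step (the &lt;code&gt;check_status&lt;/code&gt; helper is mine, not part of the manifest; in the cluster the status code comes from &lt;code&gt;curl&lt;/code&gt; against the Service DNS name):&lt;/p&gt;

```shell
# Classify an HTTP status code the way the CronJob's script treats it.
# Sketch only: in the cluster, the code comes from
#   curl -s -o /dev/null -w '%{http_code}' http://nginx-app-svc.default.svc.cluster.local
check_status() {
  if [ "$1" -eq 200 ]; then
    echo "healthy"
  else
    echo "unhealthy"
  fi
}

check_status 200   # healthy
check_status 503   # unhealthy
```

&lt;p&gt;You could extend the CronJob's command with the same branch to make the Job fail (and be retried under &lt;code&gt;restartPolicy: OnFailure&lt;/code&gt;) on a non-200 response.&lt;/p&gt;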




&lt;h3&gt;
  
  
  Putting It All Together
&lt;/h3&gt;

&lt;p&gt;In this section, we combine our Kubernetes resources to deploy the Nginx application effectively. Below are the configurations for both the &lt;strong&gt;Deployment&lt;/strong&gt; and &lt;strong&gt;Service&lt;/strong&gt;. &lt;/p&gt;

&lt;h4&gt;
  
  
  Nginx Deployment
&lt;/h4&gt;

&lt;p&gt;This Deployment ensures that we have 3 replicas of our Nginx application running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;3&lt;/span&gt;  
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.23.4-alpine&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Nginx Service
&lt;/h4&gt;

&lt;p&gt;The following Service exposes the Nginx application, allowing external traffic to access it via a specified node port:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-app-svc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-app&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30001&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Common Gotchas
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Label Consistency&lt;/strong&gt;: Ensure that the labels in the Deployment and Service match correctly. In this case, both resources use &lt;code&gt;app: nginx-app&lt;/code&gt; to ensure that the Service can route traffic to the right pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;NodePort Configuration&lt;/strong&gt;: When using &lt;code&gt;NodePort&lt;/code&gt;, make sure the specified node port (e.g., 30001) falls within the cluster's NodePort range (30000-32767 by default) and does not conflict with other services running on the same node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Replica Count&lt;/strong&gt;: The replica count in the Deployment affects availability. In this example, we specified 3 replicas to ensure high availability of the Nginx app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container Port&lt;/strong&gt;: Confirm that the container port in the Deployment matches the target port specified in the Service to ensure proper routing of traffic.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
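&lt;p&gt;On the NodePort point: the kube-apiserver's default &lt;code&gt;service-node-port-range&lt;/code&gt; is 30000-32767, so a quick local sanity check can catch an out-of-range value before &lt;code&gt;kubectl apply&lt;/code&gt; rejects it (the &lt;code&gt;valid_node_port&lt;/code&gt; helper is just an illustration):&lt;/p&gt;

```shell
# NodePort values must fall inside the cluster's service-node-port-range,
# which defaults to 30000-32767.
valid_node_port() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

valid_node_port 30001 && echo "30001 ok" || echo "30001 out of range"
valid_node_port 80    && echo "80 ok"    || echo "80 out of range"
```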




&lt;h3&gt;
  
  
  Final thoughts...
&lt;/h3&gt;

&lt;p&gt;Today, I focused on building components in Kubernetes, specifically DaemonSets, Jobs, and CronJobs. Here are my key takeaways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Understanding DaemonSets&lt;/strong&gt;: DaemonSets ensure that a specific pod runs on all or selected nodes in a Kubernetes cluster. This is particularly useful for monitoring and logging applications that need to be deployed across every node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Utilizing Jobs and CronJobs&lt;/strong&gt;: Jobs are great for executing tasks that run to completion, while CronJobs allow for scheduling tasks at specified intervals. This functionality is essential for automating routine tasks, such as health checks or backups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Leveraging Documentation&lt;/strong&gt;: I realized the importance of the Kubernetes documentation as a crucial resource when building applications. It's empowering to have access to comprehensive guides and examples, which enhance my ability to troubleshoot and implement features effectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deep Learning through Hands-On Practice&lt;/strong&gt;: Engaging in building applications from scratch aligns perfectly with my desire for deep learning. I find that struggling through challenges helps me develop a stronger grasp of concepts, solidifying my knowledge and skills.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Experimentation and Exploration&lt;/strong&gt;: Taking the initiative to explore beyond tutorials fosters a sense of ownership over the learning process. It’s fulfilling to construct solutions independently, reinforcing my understanding of Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Day 12: &lt;a href="https://www.youtube.com/watch?v=kvITrySpy_k&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=1312" rel="noopener noreferrer"&gt;Video Tutorial&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
    <item>
      <title>CKA Full Course 2024: Day 11/40 Multi Container Pod Kubernetes - Sidecar vs Init Container</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Tue, 29 Oct 2024 09:19:11 +0000</pubDate>
      <link>https://dev.to/lloydrivers/cka-full-course-2024-day-1140-multi-container-pod-kubernetes-sidecar-vs-init-container-1df5</link>
      <guid>https://dev.to/lloydrivers/cka-full-course-2024-day-1140-multi-container-pod-kubernetes-sidecar-vs-init-container-1df5</guid>
      <description>&lt;h3&gt;
  
  
  A Note from Me
&lt;/h3&gt;

&lt;p&gt;In this project, I wanted to apply Kubernetes concepts like init containers and sidecar containers, but I didn’t want to just follow along with a tutorial. &lt;/p&gt;

&lt;p&gt;My goal was to build something memorable.&lt;/p&gt;

&lt;p&gt;After some brainstorming, I present to you the &lt;strong&gt;Get Me App&lt;/strong&gt;!&lt;/p&gt;




&lt;h3&gt;
  
  
  What Get Me App Does
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;Get Me App&lt;/strong&gt; is designed to dynamically fetch and display content from various GitHub repositories. The application runs within a Kubernetes pod and comprises three components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Nginx Container&lt;/strong&gt;: This serves as the main application, hosting the dynamically fetched content. It's lightweight and efficient, making it ideal for serving static web pages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sidecar Container&lt;/strong&gt;: This runs alongside the primary nginx container. It is responsible for refreshing the content every 5 seconds by fetching the latest HTML from a randomly selected GitHub page. This ensures that the content served by Nginx is always up to date.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Init Container&lt;/strong&gt;: This initializes the environment before the main containers start. It checks for the availability of the Get Me App Service by resolving its DNS entry. This step ensures that the application is ready to interact with the service once it starts running. Read more &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noopener noreferrer"&gt;here&lt;/a&gt; (I had to)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The nginx container then serves the updated HTML webpage, making it accessible through a browser via a NodePort service.&lt;/p&gt;
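&lt;p&gt;The init container's &lt;code&gt;until nslookup ...&lt;/code&gt; loop is a plain retry pattern: keep probing until the dependency answers, then exit so the main containers can start. A local sketch of the same shape, with a stubbed &lt;code&gt;ready_check&lt;/code&gt; standing in for the DNS probe so it can run without a cluster:&lt;/p&gt;

```shell
# Retry pattern used by the init container: loop until the probe succeeds.
# ready_check is a stand-in here; in the pod it's the nslookup against
# the get-me-app-service DNS name.
attempts=0
ready_check() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # pretend the service resolves on the 3rd probe
}

until ready_check; do
  echo "waiting for get-me-app-service"
  sleep 0   # the real loop sleeps 2 seconds between probes
done
echo "service resolved after $attempts probes"
```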




&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before we start, ensure you have a configuration file to create the Kubernetes cluster. Refer to the &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster" rel="noopener noreferrer"&gt;Kind Quick Start guide&lt;/a&gt; for detailed instructions on setting up your Kind cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster Configuration (config.yml)
&lt;/h3&gt;

&lt;p&gt;Create a file named &lt;code&gt;config.yml&lt;/code&gt; with the following content to define your Kind cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cka-cluster&lt;/span&gt;  
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
  &lt;span class="na"&gt;extraPortMappings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30001&lt;/span&gt;  &lt;span class="c1"&gt;# Change this to match the NodePort&lt;/span&gt;
    &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30001&lt;/span&gt;
    &lt;span class="na"&gt;listenAddress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0"&lt;/span&gt; 
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;  
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command to create the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; kind-cka-cluster &lt;span class="nt"&gt;--config&lt;/span&gt; config.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following command to set the context to the new cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config use-context kind-kind-cka-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1: Create the Project Directory and File
&lt;/h3&gt;

&lt;p&gt;First, set up the directory and file structure for the project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;get-me-app
&lt;span class="nb"&gt;cd &lt;/span&gt;get-me-app
nano get-me-app.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Define the Kubernetes Pod Specification
&lt;/h3&gt;

&lt;p&gt;In the &lt;code&gt;get-me-app.yml&lt;/code&gt; file, we’ll define a Kubernetes pod that includes the nginx container, a sidecar container for content refreshing, and an init container for initial data fetch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;get-me-app&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;get-me-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;initContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;init-myservice&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:1.28&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sh'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;until&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;nslookup&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;get-me-app-service.$(cat&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;do&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;waiting&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;get-me-app-service;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;done"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;workdir&lt;/span&gt;
      &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/share/nginx/html&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;content-refresher&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;while&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;do&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;url=$(shuf&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-n&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-e&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;https://github.com/piyushsachdeva/CKA-2024&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;https://github.com/kubernetes&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;https://github.com/jenkinsci&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;https://github.com/techiescamp/kubernetes-learning-path&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;https://github.com/NotHarshhaa/kubernetes-learning-path);&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;wget&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-O&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/work-dir/index.html&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;$url&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; 
&lt;/span&gt;&lt;span class="s"&gt;5;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;done"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;workdir&lt;/span&gt;
      &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/work-dir"&lt;/span&gt;

  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;workdir&lt;/span&gt;
    &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What Each Part of This Pod Specification Does
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;nginx container&lt;/strong&gt;: This is the primary container, serving content on port 80. The &lt;code&gt;volumeMount&lt;/code&gt; mounts the shared &lt;code&gt;workdir&lt;/code&gt; volume at &lt;code&gt;/usr/share/nginx/html&lt;/code&gt;, so nginx serves whatever the sidecar writes there.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sidecar container&lt;/strong&gt; (&lt;code&gt;content-refresher&lt;/code&gt;): This container runs a &lt;code&gt;while true&lt;/code&gt; loop, downloading the latest version of the webpage every 5 seconds. This ensures that the content in the &lt;code&gt;workdir&lt;/code&gt; volume stays updated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Init container&lt;/strong&gt; (&lt;code&gt;init-myservice&lt;/code&gt;): This waits for the get-me-app-service to become available by continuously performing a DNS lookup. It runs only once during initialization and does not restart after completion.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Volumes&lt;/strong&gt;: The &lt;code&gt;workdir&lt;/code&gt; volume (an &lt;code&gt;emptyDir&lt;/code&gt; type) is shared among the containers, allowing the init container, sidecar, and nginx to access and serve the same content.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
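&lt;p&gt;The random-URL step in the sidecar relies on &lt;code&gt;shuf -n 1 -e&lt;/code&gt;, which prints exactly one of its arguments. A trimmed-down sketch of just that selection step, without the &lt;code&gt;wget&lt;/code&gt; call so it runs offline:&lt;/p&gt;

```shell
# Pick one URL at random, exactly as the sidecar's loop does each cycle.
url=$(shuf -n 1 -e \
  https://github.com/piyushsachdeva/CKA-2024 \
  https://github.com/kubernetes \
  https://github.com/jenkinsci)
echo "would fetch: $url"
```

&lt;p&gt;Each 5-second cycle then overwrites &lt;code&gt;/work-dir/index.html&lt;/code&gt; with that page, which is why a browser refresh shows a different repository.&lt;/p&gt;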




&lt;h3&gt;
  
  
  Step 3: Test Locally with NodePort (Optional)
&lt;/h3&gt;

&lt;p&gt;To make the app accessible through a browser on your local machine, configure a NodePort service to expose the pod’s port 80.&lt;/p&gt;

&lt;p&gt;Add this service definition in &lt;code&gt;get-me-app-service.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;get-me-app-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;get-me-app&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30001&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy the setup with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; get-me-app.yml

kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; get-me-app-service.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access it by visiting &lt;code&gt;http://localhost:30001&lt;/code&gt; in your browser, and you should see one of the GitHub pages. Refresh after 5 seconds or so and you should see a different GitHub page.&lt;/p&gt;




&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;p&gt;This project helped me understand how init containers can initialize shared resources and how sidecar containers keep those resources updated for the main application. It's an engaging way to experiment with and learn about real-time data handling in Kubernetes.&lt;/p&gt;








&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Day 8: &lt;a href="https://www.youtube.com/watch?v=yVLXIydlU_0&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=12" rel="noopener noreferrer"&gt;Video Tutorial&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;End Notes:&lt;/strong&gt; Please remember that this project is not part of the video tutorials. I wanted to build something on my own using the concepts from the tutorial as general guidance.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>kubernetes</category>
      <category>javascript</category>
    </item>
    <item>
      <title>CKA Full Course 2024: Day 10/40 Kubernetes Namespace Explained</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Mon, 28 Oct 2024 12:42:36 +0000</pubDate>
      <link>https://dev.to/lloydrivers/new-title-3gdo</link>
      <guid>https://dev.to/lloydrivers/new-title-3gdo</guid>
      <description>&lt;h3&gt;
  
  
  Note to the Reader
&lt;/h3&gt;

&lt;p&gt;In this post, I'm experimenting with a more concise format. The highly detailed posts are taking 2-3 days to complete, so in this one, I’ll focus on describing Kubernetes Namespaces and their components before moving directly into exercises.&lt;/p&gt;

&lt;p&gt;If you noticed the content of this post shifting around a little, it's because I was getting errors while trying to publish, and some of what I had written had not been saved.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes Namespaces
&lt;/h3&gt;

&lt;p&gt;Namespaces are a way to divide cluster resources between multiple users or applications. They help organize objects in Kubernetes and manage access, often in scenarios where different environments (e.g., development, staging, and production) coexist within the same cluster. With namespaces, we can isolate resources while ensuring that different teams or applications can share the same infrastructure.&lt;/p&gt;

&lt;p&gt;Namespaces are ideal when you need logical separation but want to avoid the complexity of multiple clusters.&lt;/p&gt;




&lt;h3&gt;
  
  
  Create Two Namespaces: &lt;code&gt;ns1&lt;/code&gt; and &lt;code&gt;ns2&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Create the first namespace:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace ns1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create the second namespace:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace ns2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; At this point, it may be easier to work in two terminal windows. In the first terminal, connect to the &lt;code&gt;ns1&lt;/code&gt; namespace. In the second terminal, connect to the &lt;code&gt;ns2&lt;/code&gt; namespace using the following commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For the first terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ns1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;For the second terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ns2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Create a deployment with a single replica in each namespace, using the nginx image, named deploy-ns1 and deploy-ns2 respectively
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Deployment for &lt;code&gt;ns1&lt;/code&gt;&lt;/strong&gt; – Save this configuration as &lt;code&gt;ns1-deployment.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns1&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ns1&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1&lt;/span&gt;  
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns1&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.23.4-alpine&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deployment for &lt;code&gt;ns2&lt;/code&gt;&lt;/strong&gt; – Save this configuration as &lt;code&gt;ns2-deployment.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns2&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ns2&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1&lt;/span&gt;  
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns2&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns2&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.23.4-alpine&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Apply each YAML configuration to create the deployments
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ns1-deployment.yml

kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ns2-deployment.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Get the IP address of each of the pods
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Namespace &lt;code&gt;ns1&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide &lt;span class="nt"&gt;-n&lt;/span&gt; ns1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE   READINESS GATES
   deploy-ns1-68b96c55c4-db4b9   1/1     Running   0          13m   10.244.2.8   cka-cluster-worker   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Namespace &lt;code&gt;ns2&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide &lt;span class="nt"&gt;-n&lt;/span&gt; ns2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE                  NOMINATED NODE   READINESS GATES
   deploy-ns2-7c6646cf97-76xcz   1/1     Running   0          14m   10.244.1.10   cka-cluster-worker2   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Exec into the pod of &lt;code&gt;deploy-ns1&lt;/code&gt; and try to curl the IP address of the pod running on &lt;code&gt;deploy-ns2&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; deploy-ns1-68b96c55c4-db4b9 &lt;span class="nt"&gt;-n&lt;/span&gt; ns1 &lt;span class="nt"&gt;--&lt;/span&gt; curl 10.244.1.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upon successful execution, you should see an HTML response confirming that the &lt;code&gt;deploy-ns1&lt;/code&gt; pod can reach the &lt;code&gt;deploy-ns2&lt;/code&gt; pod and that the NGINX server is up and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;Welcome to nginx!&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;style&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;html&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;color-scheme&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;light&lt;/span&gt; &lt;span class="n"&gt;dark&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nt"&gt;body&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;35em&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="nb"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;font-family&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Tahoma&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Verdana&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Arial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;sans-serif&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/style&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Welcome to nginx!&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;If you see this page, the nginx web server is successfully installed and working. Further configuration is required.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;For online documentation and support, please refer to &lt;span class="nt"&gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"http://nginx.org/"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;nginx.org&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;.&lt;span class="nt"&gt;&amp;lt;br/&amp;gt;&lt;/span&gt;
    Commercial support is available at &lt;span class="nt"&gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"http://nginx.com/"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;nginx.com&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;&lt;/span&gt;Thank you for using nginx.&lt;span class="nt"&gt;&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Scale both deployments from 1 to 3 replicas
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; I chose to scale through the YAML file for source control purposes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Update the &lt;code&gt;replicas&lt;/code&gt; field in each deployment YAML file:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ns1-deployment.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns1&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ns1&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;  &lt;span class="c1"&gt;# Updated to 3 replicas&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns1&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.23.4-alpine&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ns2-deployment.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns2&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ns2&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;  &lt;span class="c1"&gt;# Updated to 3 replicas&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns2&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns2&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.23.4-alpine&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  Create two services to expose both of your deployments and name them svc-ns1 and svc-ns2
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;svc-ns1&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ns1&lt;/span&gt;  
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns1&lt;/span&gt;  
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80&lt;/span&gt;         
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80&lt;/span&gt;   
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;svc-ns2&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ns2&lt;/span&gt;  
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ns2&lt;/span&gt;  
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80&lt;/span&gt;         
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80&lt;/span&gt;   
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Apply the Service Definitions
&lt;/h3&gt;

&lt;p&gt;Once you’ve created both YAML files (let's say &lt;code&gt;svc-ns1.yml&lt;/code&gt; and &lt;code&gt;svc-ns2.yml&lt;/code&gt;), you can apply them using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; svc-ns1.yml

kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; svc-ns2.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verify the Services
&lt;/h3&gt;

&lt;p&gt;After applying, you can verify that your services are up and running by executing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get services &lt;span class="nt"&gt;-n&lt;/span&gt; ns1

kubectl get services &lt;span class="nt"&gt;-n&lt;/span&gt; ns2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should show you the &lt;code&gt;svc-ns1&lt;/code&gt; and &lt;code&gt;svc-ns2&lt;/code&gt; services, along with their cluster IPs and ports.&lt;/p&gt;




&lt;h3&gt;
  
  
  Exec into each pod and try to curl the IP address of the service running on the other namespace.
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;pod-name&amp;gt; &lt;span class="nt"&gt;-n&lt;/span&gt; ns1 &lt;span class="nt"&gt;--&lt;/span&gt; curl &amp;lt;ip-address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This curl should work (it did).&lt;/p&gt;

&lt;h3&gt;
  
  
  Now try curling the service name instead of IP
&lt;/h3&gt;

&lt;p&gt;You will notice that the request fails: curl cannot resolve the host, because a bare service name only resolves within the pod's own namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; deploy-ns1-68b96c55c4-db4b9 &lt;span class="nt"&gt;-n&lt;/span&gt; ns1 &lt;span class="nt"&gt;--&lt;/span&gt; curl svc-ns2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Now use the FQDN of the service and try to curl again
&lt;/h3&gt;

&lt;p&gt;Qualifying the service name with its namespace (&lt;code&gt;svc-ns2.ns2&lt;/code&gt;) lets cluster DNS resolve it, so this should work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; deploy-ns1-68b96c55c4-db4b9 &lt;span class="nt"&gt;-n&lt;/span&gt; ns1 &lt;span class="nt"&gt;--&lt;/span&gt; curl svc-ns2.ns2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
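For reference, &lt;code&gt;svc-ns2.ns2&lt;/code&gt; is a shortened form; the fully qualified name of a Service follows a fixed pattern. A small sketch, assuming the default &lt;code&gt;cluster.local&lt;/code&gt; cluster domain (the service and namespace names are from this exercise):

```shell
# Cluster DNS names for Services follow the pattern:
#   <service-name>.<namespace>.svc.<cluster-domain>
SERVICE=svc-ns2
NAMESPACE=ns2
CLUSTER_DOMAIN=cluster.local      # the default; clusters can override this
FQDN="${SERVICE}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}"
echo "$FQDN"                      # prints: svc-ns2.ns2.svc.cluster.local
```

The shorter &lt;code&gt;curl svc-ns2.ns2&lt;/code&gt; works because the pod's DNS search path fills in the remaining &lt;code&gt;svc.cluster.local&lt;/code&gt; suffix.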



&lt;h3&gt;
  
  
  In the end, delete both the namespaces
&lt;/h3&gt;

&lt;p&gt;This should delete the services and deployments underneath them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete namespace ns1
kubectl delete namespace ns2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Summary of Learning on Kubernetes Namespaces and Services
&lt;/h3&gt;

&lt;p&gt;In this blog post, we explored the concept of Kubernetes namespaces and their role in organizing cluster resources. We went through a series of exercises to solidify our understanding of namespaces, deployments, and services. Here’s a recap of what we covered:&lt;/p&gt;

&lt;h4&gt;
  
  
  Understanding Kubernetes Namespaces
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Definition&lt;/strong&gt;: Namespaces provide a mechanism to divide cluster resources between multiple users or applications, allowing for logical separation without the complexity of managing multiple clusters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases&lt;/strong&gt;: They are particularly useful in scenarios where different environments (development, staging, and production) coexist within the same cluster, enabling teams to share infrastructure while isolating resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Creating Namespaces
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We created two namespaces, &lt;code&gt;ns1&lt;/code&gt; and &lt;code&gt;ns2&lt;/code&gt;, using the following commands:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl create namespace ns1
  kubectl create namespace ns2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Deployments in Namespaces
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We deployed a single replica of Nginx in each namespace:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deployment in &lt;code&gt;ns1&lt;/code&gt;&lt;/strong&gt;: Configuration saved as &lt;code&gt;ns1-deployment.yml&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment in &lt;code&gt;ns2&lt;/code&gt;&lt;/strong&gt;: Configuration saved as &lt;code&gt;ns2-deployment.yml&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;We used the &lt;code&gt;kubectl apply&lt;/code&gt; command to create these deployments.&lt;/p&gt;

&lt;h4&gt;
  
  
  Scaling Deployments
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We scaled both deployments from 1 to 3 replicas using the YAML configuration files for source control, ensuring that our changes were tracked.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Exposing Deployments with Services
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We created two services to expose our deployments:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service for &lt;code&gt;ns1&lt;/code&gt;&lt;/strong&gt;: Configuration saved as &lt;code&gt;svc-ns1.yml&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service for &lt;code&gt;ns2&lt;/code&gt;&lt;/strong&gt;: Configuration saved as &lt;code&gt;svc-ns2.yml&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This step gave our Nginx deployments stable virtual IPs inside the cluster (as ClusterIP services, they are reachable only from within the cluster, not externally).&lt;/p&gt;

&lt;h4&gt;
  
  
  Pod-to-Pod Communication
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We executed commands to curl the IP addresses of the services from within the pods:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;pod-name&amp;gt; &lt;span class="nt"&gt;-n&lt;/span&gt; ns1 &lt;span class="nt"&gt;--&lt;/span&gt; curl &amp;lt;ip-address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;We then attempted to curl the service name directly, which resulted in a resolution error, demonstrating that bare service names only resolve within the same namespace.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Fully Qualified Domain Name (FQDN) Usage
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;By using the FQDN of the service, we successfully accessed the services across namespaces:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; deploy-ns1-68b96c55c4-db4b9 &lt;span class="nt"&gt;-n&lt;/span&gt; ns1 &lt;span class="nt"&gt;--&lt;/span&gt; curl svc-ns2.ns2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Cleanup
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;To maintain a tidy workspace, we deleted both namespaces, which also removed the associated services and deployments:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl delete namespace ns1
  kubectl delete namespace ns2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Namespaces&lt;/strong&gt; are essential for organizing resources and managing access in a Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployments and Services&lt;/strong&gt; work hand-in-hand to provide scalable applications and expose them for external access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FQDN&lt;/strong&gt; is critical for cross-namespace communication, emphasizing the importance of DNS within Kubernetes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cleanup&lt;/strong&gt; matters: deleting unneeded namespaces (and the resources inside them) keeps the cluster easy to manage.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;p&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt; &lt;br&gt;
Day 8 &lt;a href="https://www.youtube.com/watch?v=yVLXIydlU_0&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=12" rel="noopener noreferrer"&gt;video&lt;/a&gt; &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>kubernetes</category>
      <category>containers</category>
    </item>
    <item>
      <title>CKA Full Course 2024: Day 9/40 Kubernetes Services Explained: ClusterIP vs NodePort vs Loadbalancer vs External</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Sun, 27 Oct 2024 09:45:50 +0000</pubDate>
      <link>https://dev.to/lloydrivers/cka-full-course-2024-day-940-kubernetes-services-explained-clusterip-vs-nodeport-vs-loadbalancer-vs-external-28nh</link>
      <guid>https://dev.to/lloydrivers/cka-full-course-2024-day-940-kubernetes-services-explained-clusterip-vs-nodeport-vs-loadbalancer-vs-external-28nh</guid>
      <description>&lt;p&gt;&lt;strong&gt;Kubernetes Services Overview&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener noreferrer"&gt;Services&lt;/a&gt; in Kubernetes provide a way for applications to communicate with each other or with external clients. They allow for stable endpoints that remain constant, even if the underlying pods change.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Types of Kubernetes Services&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ClusterIP&lt;/strong&gt;: The default service type, exposing the service within the Kubernetes cluster only. It's useful for internal communication between services or pods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NodePort&lt;/strong&gt;: Exposes the service on a static port on each node’s IP, enabling access from outside the cluster via &lt;code&gt;&amp;lt;NodeIP&amp;gt;:&amp;lt;NodePort&amp;gt;&lt;/code&gt;. Often used in development for simple external access but can have security risks in production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LoadBalancer&lt;/strong&gt;: Integrates with cloud providers to automatically create an external load balancer, which routes traffic to your services. This type is most common for exposing services to the internet in production environments on managed Kubernetes clusters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ExternalName&lt;/strong&gt;: Maps a service to an external DNS name by returning a CNAME record. Useful when pointing to external services outside of the Kubernetes environment, without complex configurations.&lt;/li&gt;
&lt;/ul&gt;
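&lt;p&gt;Of these four types, &lt;code&gt;ExternalName&lt;/code&gt; is the only one we won't build later in this post, so here is a minimal sketch of what one looks like. The service name and external host below are made up for illustration:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db        # hypothetical name
spec:
  type: ExternalName
  externalName: db.example.com   # hypothetical host; in-cluster lookups of
                                 # external-db return a CNAME to this name
```

&lt;p&gt;Note there is no selector and no ports: the service is purely a DNS alias, so no traffic is proxied by Kubernetes.&lt;/p&gt;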




&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before we begin, ensure you have the following &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster" rel="noopener noreferrer"&gt;configuration&lt;/a&gt; for your KIND cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cka-cluster&lt;/span&gt;  
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
  &lt;span class="na"&gt;extraPortMappings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30001&lt;/span&gt;  &lt;span class="c1"&gt;# Change this to match the NodePort&lt;/span&gt;
    &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30001&lt;/span&gt;
    &lt;span class="na"&gt;listenAddress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0"&lt;/span&gt; 
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;  
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This YAML file sets up a Kubernetes cluster named &lt;code&gt;cka-cluster&lt;/code&gt; with one control plane node and two worker nodes. The &lt;code&gt;extraPortMappings&lt;/code&gt; section maps port 30001 to allow external access to the NodePort service.&lt;/p&gt;




&lt;h3&gt;
  
  
  1. Create a Service named &lt;code&gt;myapp&lt;/code&gt; of type ClusterIP
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Goal:&lt;/strong&gt; This creates an internal network endpoint accessible only within the Kubernetes cluster, allowing internal traffic routing to your application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Details:&lt;/strong&gt; The &lt;code&gt;ClusterIP&lt;/code&gt; type will not be accessible outside the cluster, so it’s used here to facilitate communication between Pods and other resources internally. The Service will map port 80 to the target port 80 on the Pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Steps:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Create the KIND cluster&lt;/strong&gt; (if you haven’t already):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kind create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; kind-cka-cluster &lt;span class="nt"&gt;--config&lt;/span&gt; &amp;lt;kind.yml&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Switch context to your new KIND cluster&lt;/strong&gt; (if required):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl config use-context kind-kind-cka-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create the Service manifest&lt;/strong&gt;:&lt;br&gt;
   Open a new YAML file using your preferred editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   nano service.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Define the Service&lt;/strong&gt;:&lt;br&gt;
   Copy the following YAML configuration into &lt;code&gt;service.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MyApp&lt;/span&gt;  &lt;span class="c1"&gt;# Change this to match your pod labels. I copied this of the k8s docs&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Apply the Service configuration&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; service.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify the Service is running&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get services
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the &lt;code&gt;myapp&lt;/code&gt; Service, of type &lt;code&gt;ClusterIP&lt;/code&gt;, mapping port 80 on the Service to port 80 on the target Pods.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Create a Deployment named &lt;code&gt;myapp&lt;/code&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Goal:&lt;/strong&gt; The Deployment manages Pod replicas for nginx (in this case, version &lt;code&gt;1.23.4-alpine&lt;/code&gt;), ensuring they’re running and available.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Details:&lt;/strong&gt; By setting up a Deployment, you gain automatic scalability, rolling updates, and easier management. You expose the container’s port 80, which the Service will use to route traffic.  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Steps:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Create the Deployment manifest&lt;/strong&gt;: &lt;br&gt;
Open a new YAML file using your preferred editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nano deployment.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Define the Deployment&lt;/strong&gt;: &lt;br&gt;
Copy the following YAML configuration into &lt;code&gt;deployment.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1&lt;/span&gt;  
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MyApp&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MyApp&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.23.4-alpine&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Apply the Deployment configuration&lt;/strong&gt;: &lt;br&gt;
Run the following command to create the Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify the Deployment is running&lt;/strong&gt;: &lt;br&gt;
Check the status by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get deployments
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  3. Scale the Deployment to 2 replicas
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Goal:&lt;/strong&gt; This tests the load-balancing functionality of the Service, as it should distribute traffic across the two Pod instances.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Details:&lt;/strong&gt; You can scale the Deployment either by updating the manifest to set &lt;code&gt;replicas: 2&lt;/code&gt; or by using a &lt;code&gt;kubectl&lt;/code&gt; command.  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Steps:
&lt;/h4&gt;

&lt;p&gt;Update the Deployment manifest: If you want to change the number of replicas in the YAML, edit the &lt;code&gt;replicas&lt;/code&gt; field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;  &lt;span class="c1"&gt;# Update to 2 for scaling&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, use &lt;code&gt;kubectl&lt;/code&gt; to scale the Deployment: Run the following command to scale the Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl scale deployment myapp &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the scaling: Run the following command to check that the new Pods are created and ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  4. Create a Temporary Pod Using busybox and Run &lt;code&gt;wget&lt;/code&gt; Against the Service IP
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Goal:&lt;/strong&gt; This verifies that the Service is reachable from within the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Details:&lt;/strong&gt; You’ll launch a temporary &lt;code&gt;busybox&lt;/code&gt; Pod and run &lt;code&gt;wget&lt;/code&gt; to check if it can access the &lt;code&gt;myapp&lt;/code&gt; Service over its ClusterIP. This step assumes you’re comfortable finding the Service’s ClusterIP (accessible via &lt;code&gt;kubectl get svc&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Important Labeling Warning:&lt;/strong&gt; Ensure that the labels in the Service selector match the Deployment's matchLabels exactly. Misalignment will prevent the Service from routing traffic to the correct Pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  Steps:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Find the Service IP:&lt;/strong&gt;&lt;br&gt;
   Run the following command to retrieve the ClusterIP of &lt;code&gt;myapp&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get services
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
   myapp      ClusterIP   10.96.87.231   &amp;lt;none&amp;gt;        80/TCP    10h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create the Temporary Pod:&lt;/strong&gt;&lt;br&gt;
   Use the following command to create a &lt;code&gt;busybox&lt;/code&gt; Pod and run &lt;code&gt;wget&lt;/code&gt; to test connectivity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl run busybox &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;busybox &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never &lt;span class="nt"&gt;--&lt;/span&gt; wget myapp:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Command Breakdown:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;--image=busybox:&lt;/strong&gt; Specifies &lt;code&gt;busybox&lt;/code&gt; as the image to pull.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-it:&lt;/strong&gt; Interactive terminal, which allows you to access the command line inside the Pod.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--rm:&lt;/strong&gt; Cleans up and removes the Pod after the command completes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--restart=Never:&lt;/strong&gt; Ensures Kubernetes does not attempt to restart the Pod after it finishes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;wget myapp:80:&lt;/strong&gt; Sends a request to the &lt;code&gt;myapp&lt;/code&gt; Service at port 80, confirming connectivity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Expected Output:&lt;/strong&gt;&lt;br&gt;
   You should see output similar to the following upon a successful connection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   Connecting to myapp:80 &lt;span class="o"&gt;(&lt;/span&gt;10.96.87.231:80&lt;span class="o"&gt;)&lt;/span&gt;
   saving to &lt;span class="s1"&gt;'index.html'&lt;/span&gt;
   index.html           100% |&lt;span class="k"&gt;********************************&lt;/span&gt;|   615  0:00:00 ETA
   &lt;span class="s1"&gt;'index.html'&lt;/span&gt; saved
   pod &lt;span class="s2"&gt;"busybox"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This output confirms that the Service is accessible within the cluster and can successfully handle requests from other Pods.&lt;/p&gt;




&lt;h3&gt;
  
  
  5. &lt;strong&gt;Run a &lt;code&gt;wget&lt;/code&gt; Command Against the Service from Outside the Cluster&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Goal:&lt;/strong&gt; This tests if external traffic can reach the Service (spoiler: it won’t, as it’s still a &lt;code&gt;ClusterIP&lt;/code&gt; type).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Steps:&lt;/strong&gt; From an external machine or terminal, try reaching the Service IP. You won’t get a response because &lt;code&gt;ClusterIP&lt;/code&gt; restricts access to within the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Find the Service IP:&lt;/strong&gt;&lt;br&gt;
Run the following command to retrieve the ClusterIP of &lt;code&gt;myapp&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get services
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
myapp      ClusterIP   10.96.87.231   &amp;lt;none&amp;gt;        80/TCP    10h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, attempt to use &lt;code&gt;wget&lt;/code&gt; to access the Service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget myapp:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--2024-10-27 06:49:46--  http://myapp/
Resolving myapp (myapp)... failed: nodename nor servname provided, or not known.
wget: unable to resolve host address ‘myapp’
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  6. &lt;strong&gt;Change the Service Type for External Access&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Goal:&lt;/strong&gt; Now, we want to change the Service type so it’s accessible externally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Details:&lt;/strong&gt; Changing the Service type to &lt;code&gt;NodePort&lt;/code&gt; or &lt;code&gt;LoadBalancer&lt;/code&gt; enables external traffic. Use &lt;code&gt;NodePort&lt;/code&gt; if you want to access it through a specific node’s IP and port, or &lt;code&gt;LoadBalancer&lt;/code&gt; if your cloud provider’s load balancer will handle it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Rationale:&lt;/strong&gt; Instead of executing commands directly in the terminal, we should leverage YAML files for defining our configurations. This approach not only facilitates source control but also ensures that our infrastructure as code (IaC) practices are consistent and reproducible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt; &lt;br&gt;
&lt;strong&gt;Remove the existing service configuration:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;rm &lt;/span&gt;service.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create a new service definition using your preferred text editor:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   nano service.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Copy the following YAML configuration (adapted from the Kubernetes documentation) into &lt;code&gt;service.yml&lt;/code&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MyApp&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30001&lt;/span&gt;  &lt;span class="c1"&gt;# Optional: Define the node port if desired&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deploy the new service configuration:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; service.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more information, you can refer to the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noopener noreferrer"&gt;Kubernetes documentation&lt;/a&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  7. &lt;strong&gt;Access the Service in Your Browser or Using &lt;code&gt;curl&lt;/code&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Goal:&lt;/strong&gt; This confirms external access to the Service once exposed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Details:&lt;/strong&gt; You can visit the nginx homepage in your browser or use &lt;code&gt;curl&lt;/code&gt; to see the response from the service.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Command:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; curl http://localhost:30001
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Discussion Points
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Exposing Pods without a Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Can it be done?&lt;/strong&gt; Yes, it’s possible to create a Service that directly exposes individual Pods by using &lt;code&gt;spec.selector&lt;/code&gt; to match the Pods' labels.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why (or why not)?&lt;/strong&gt; However, Deployments provide benefits like auto-scaling, rolling updates, and ensuring a specified number of Pods are always running, which wouldn’t be handled automatically if only using a Service.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Choosing Service Types:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ClusterIP:&lt;/strong&gt; Use it for internal traffic only, where the Service should only be accessible within the cluster. Great for microservices that need to communicate with each other.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NodePort:&lt;/strong&gt; This makes the Service accessible on each Node's IP at a specific port, allowing limited external access. Often used for development or testing but not ideal for production as it doesn’t offer load balancing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LoadBalancer:&lt;/strong&gt; Ideal for production in cloud environments, as it distributes incoming traffic across multiple nodes or Pods and abstracts the complexity of handling traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ExternalName:&lt;/strong&gt; This type creates an alias for an external hostname, allowing Pods to access external services using Kubernetes DNS. Suitable for situations where in-cluster services need access to an external service, but it’s limited to simple DNS resolution.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
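&lt;p&gt;To make the first discussion point concrete, here is a sketch of exposing a bare Pod with no Deployment. A Service only needs its selector to match labels, so a standalone Pod works just as well; the names and the &lt;code&gt;app: solo&lt;/code&gt; label here are hypothetical:&lt;/p&gt;

```yaml
# A standalone Pod carrying a label...
apiVersion: v1
kind: Pod
metadata:
  name: solo-nginx          # hypothetical name
  labels:
    app: solo               # hypothetical label
spec:
  containers:
  - name: nginx
    image: nginx:1.23.4-alpine
    ports:
    - containerPort: 80
---
# ...and a Service whose selector matches that label directly.
apiVersion: v1
kind: Service
metadata:
  name: solo-nginx-svc      # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: solo
  ports:
  - port: 80
    targetPort: 80
```

&lt;p&gt;If &lt;code&gt;solo-nginx&lt;/code&gt; dies, nothing recreates it, which is exactly the gap a Deployment fills.&lt;/p&gt;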




&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;p&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt; &lt;br&gt;
Day 8 &lt;a href="https://www.youtube.com/watch?v=tHAQWLKMTB0&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=11&amp;amp;ab_channel=TechTutorialswithPiyush" rel="noopener noreferrer"&gt;video&lt;/a&gt; &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>cicd</category>
      <category>beginners</category>
    </item>
    <item>
      <title>CKA Full Course 2024: Day 8/40 Deployment, Replication Controller and ReplicaSet Explained</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Fri, 25 Oct 2024 18:52:03 +0000</pubDate>
      <link>https://dev.to/lloydrivers/cka-full-course-2024-day-840-deployment-replication-controller-and-replicaset-explained-1l1j</link>
      <guid>https://dev.to/lloydrivers/cka-full-course-2024-day-840-deployment-replication-controller-and-replicaset-explained-1l1j</guid>
      <description>&lt;p&gt;Before we dive into the steps, a quick word on &lt;strong&gt;ReplicaSets&lt;/strong&gt;. Initially, I was confused by seeing both &lt;strong&gt;ReplicationControllers&lt;/strong&gt; and &lt;strong&gt;ReplicaSets&lt;/strong&gt; mentioned, but after some research, I realized that ReplicaSets are the modern, recommended approach. According to Kubernetes documentation, the ReplicationController is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Legacy API for managing workloads that can scale horizontally. Superseded by the Deployment and ReplicaSet APIs."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For this reason, I'll only be using &lt;strong&gt;ReplicaSets&lt;/strong&gt; in this example.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;ReplicaSet (from the docs)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="noopener noreferrer"&gt;ReplicaSet's&lt;/a&gt; purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods. Typically, you would define a Deployment and let that Deployment manage ReplicaSets automatically.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Step-by-Step: Creating and Managing a ReplicaSet&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Creating the ReplicaSet YAML file&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;First, I created a directory to store my YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;day-08
&lt;span class="nb"&gt;cd &lt;/span&gt;day-08
nano rs.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside &lt;code&gt;rs.yaml&lt;/code&gt;, I defined the ReplicaSet configuration to create 3 replicas of an nginx pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReplicaSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some important keys in this &lt;strong&gt;ReplicaSet&lt;/strong&gt; YAML configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;apiVersion&lt;/strong&gt;: Specifies the API version used, which here is &lt;code&gt;apps/v1&lt;/code&gt;, suitable for ReplicaSets in Kubernetes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kind&lt;/strong&gt;: Defines the resource type, which is &lt;code&gt;ReplicaSet&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;metadata&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;name&lt;/strong&gt;: Sets a unique name for the ReplicaSet (in this case, "nginx").&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;spec&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;replicas&lt;/strong&gt;: Indicates the desired number of pod replicas to run, set here to 3.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;selector&lt;/strong&gt;: Uses &lt;code&gt;matchLabels&lt;/code&gt; to match pods with the label &lt;code&gt;app: nginx&lt;/code&gt;, identifying the pods that this ReplicaSet will manage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;template&lt;/strong&gt;: Defines the pod template this ReplicaSet uses to create new pods:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;metadata&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;labels&lt;/strong&gt;: Defines labels for pods created by this ReplicaSet. These labels must match the selector labels to ensure proper management.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spec&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;containers&lt;/strong&gt;: Lists container details for each pod, in this case, a single container:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;name&lt;/strong&gt;: Name of the container (also "nginx").&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;image&lt;/strong&gt;: Specifies the container image to use (&lt;code&gt;nginx&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ports&lt;/strong&gt;: Configures the container's ports, with &lt;code&gt;containerPort&lt;/code&gt; set to expose port 80.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;After saving and exiting the editor, I applied the ReplicaSet to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; rs.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To confirm that the 3 replicas were up and running, I used &lt;code&gt;k9s&lt;/code&gt; (or you can use &lt;code&gt;kubectl get rs&lt;/code&gt; and &lt;code&gt;kubectl get pods&lt;/code&gt; for a quick check).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcgnst3avh2zd9beof6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcgnst3avh2zd9beof6k.png" alt="Image description" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;2. Scaling the &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="noopener noreferrer"&gt;ReplicaSet&lt;/a&gt;&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;I then wanted to increase the number of replicas:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Scaling replicas to 6 from the command line&lt;/strong&gt;:
To dynamically &lt;a href="https://kubernetes.io/docs/reference/kubectl/generated/kubectl_scale/" rel="noopener noreferrer"&gt;scale&lt;/a&gt; up to 6 replicas without modifying the YAML file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl scale &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;6 rs/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
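
The same scale-up can also be done declaratively. A minimal sketch, assuming you edit the `rs.yaml` created above and re-apply it; only the changed field is shown:

```yaml
# rs.yaml -- unchanged except for the replica count
spec:
  replicas: 6
```

Running `kubectl apply -f rs.yaml` again lets the ReplicaSet controller reconcile the pod count up to 6, and the change stays recorded in the file.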



&lt;p&gt;I then verified the new count with &lt;code&gt;k9s&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vf07b0lmc5tu0jo3jyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vf07b0lmc5tu0jo3jyg.png" alt="Image description" width="800" height="209"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Deployments (from the docs)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noopener noreferrer"&gt;Deployment&lt;/a&gt; manages a set of Pods to run an application workload, usually one that doesn't maintain state. A Deployment provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Deployments, ReplicaSets, and Pods: How They Work Together&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In Kubernetes, a &lt;strong&gt;Deployment&lt;/strong&gt; is a higher-level abstraction that adds functionality on top of a &lt;strong&gt;ReplicaSet&lt;/strong&gt;. Here’s how they work together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;: A Deployment manages and oversees the ReplicaSets. When you create a Deployment, it automatically generates and manages one or more ReplicaSets, which in turn manage the actual Pods. This structure enables powerful features like rolling updates, rollbacks, and scaling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ReplicaSet&lt;/strong&gt;: The ReplicaSet ensures that the desired number of Pods are running at all times. It serves as an intermediary, managing the pods based on the Deployment’s instructions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pods&lt;/strong&gt;: The smallest deployable unit in Kubernetes, Pods contain your containerized application. They are created, maintained, and monitored by the ReplicaSet under the Deployment’s guidance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In essence, the &lt;strong&gt;Deployment&lt;/strong&gt; acts as the top-level controller, managing the &lt;strong&gt;ReplicaSet&lt;/strong&gt; layer, which then directly handles the &lt;strong&gt;Pods&lt;/strong&gt;. This layered approach provides a robust way to handle containerized applications in production, making scaling and updating more efficient.&lt;/p&gt;
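
This ownership chain is visible on the objects themselves: a ReplicaSet created by a Deployment carries an `ownerReferences` entry pointing back at the Deployment (and each Pod, in turn, points at its ReplicaSet). An illustrative metadata fragment, with a placeholder ReplicaSet name since the suffix is generated:

```yaml
# Fragment of a ReplicaSet's metadata, as seen with `kubectl get rs <rs-name> -o yaml`
metadata:
  name: nginx-deployment-7d9c6bf6b9   # generated name; placeholder here
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
    controller: true
```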




&lt;h3&gt;
  
  
  Creating a Deployment
&lt;/h3&gt;

&lt;p&gt;To create a Deployment using the &lt;code&gt;nginx&lt;/code&gt; image with 3 replicas, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create the YAML file&lt;/strong&gt;:
Open a terminal and move into the directory we created earlier:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;cd &lt;/span&gt;day-08
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, create a new YAML file for the Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   nano deployment.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Write the Deployment configuration&lt;/strong&gt;:
Inside the &lt;code&gt;nano&lt;/code&gt; editor, paste the following configuration (watch out for correct indentation if you are copying it from here):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
      &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
        &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.23.0&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and exit the editor by pressing &lt;code&gt;CTRL + X&lt;/code&gt;, then &lt;code&gt;Y&lt;/code&gt;, and finally &lt;code&gt;ENTER&lt;/code&gt;. Note that the &lt;code&gt;selector.matchLabels&lt;/code&gt; (&lt;code&gt;app: v1&lt;/code&gt;, &lt;code&gt;tier: backend&lt;/code&gt;) match the Pod template’s labels exactly—a Deployment’s selector must match its template labels, or the apply will be rejected.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Apply the Deployment&lt;/strong&gt;:
Run the following command to apply the configuration and create the Deployment:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the &lt;code&gt;apply&lt;/code&gt; command, you should see your Deployment created successfully. To confirm, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get deployments
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;code&gt;k9s&lt;/code&gt;, the deployment should appear as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvok1omk54p2yh8ztpja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvok1omk54p2yh8ztpja.png" alt="Deployment View in K9s" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command lists all deployments in your current namespace, allowing you to verify that &lt;code&gt;nginx-deployment&lt;/code&gt; is active and correctly configured.&lt;/p&gt;




&lt;h3&gt;
  
  
  Updating the Deployment Image and Annotating the Change Cause in Kubernetes
&lt;/h3&gt;

&lt;p&gt;In Kubernetes, managing updates to Deployments effectively involves more than just changing an image version. Documenting these changes is important for maintaining an organized rollout history and providing context for each deployment update. &lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Update the Image
&lt;/h4&gt;

&lt;p&gt;First, we’ll update the &lt;code&gt;nginx&lt;/code&gt; container image in our &lt;code&gt;nginx-deployment&lt;/code&gt; Deployment. Here, we bump the version from &lt;code&gt;1.23.0&lt;/code&gt; to &lt;code&gt;1.23.4&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;set &lt;/span&gt;image deploy/nginx-deployment &lt;span class="nv"&gt;nginx&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx:1.23.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes will initiate a rolling update across all replicas of the &lt;code&gt;nginx-deployment&lt;/code&gt;. This ensures that each replica gradually switches to the specified image version, minimizing downtime.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Verify the Rollout
&lt;/h4&gt;

&lt;p&gt;After setting the image, it’s important to check that the update has been applied to all replicas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe deployment nginx-deployment | &lt;span class="nb"&gt;grep &lt;/span&gt;Image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, to list the Pods and their images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands help confirm that the Deployment’s replicas are now running the correct image version.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Annotate the Deployment with the Change Cause
&lt;/h4&gt;

&lt;p&gt;Since the &lt;code&gt;--record&lt;/code&gt; flag is deprecated, we’ll use the &lt;code&gt;kubectl annotate&lt;/code&gt; command to document the reason for this deployment update manually. This adds context to the change, allowing it to appear in the rollout history:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl annotate deployment nginx-deployment kubernetes.io/change-cause&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Pick up patch version"&lt;/span&gt; &lt;span class="nt"&gt;--overwrite&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--overwrite=true&lt;/code&gt; flag is included to ensure the annotation is updated if there’s already an existing annotation for &lt;code&gt;change-cause&lt;/code&gt;.&lt;/p&gt;
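
Under the hood, the annotate command above simply writes a key into the Deployment's metadata. A sketch of the equivalent manifest fragment:

```yaml
metadata:
  name: nginx-deployment
  annotations:
    kubernetes.io/change-cause: "Pick up patch version"
```

Because it is just an annotation, you could also set it declaratively in your YAML before applying a change.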

&lt;h4&gt;
  
  
  Step 4: Check the Rollout History
&lt;/h4&gt;

&lt;p&gt;Finally, let’s view the rollout history to confirm that our annotation has been recorded. This step also helps validate that our deployment is tracking each update effectively:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout &lt;span class="nb"&gt;history &lt;/span&gt;deployment/nginx-deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command shows each revision, including the change cause for each update. Proper annotation is invaluable for future reference, as it makes rollback decisions easier and provides insight into the deployment history.&lt;/p&gt;




&lt;h3&gt;
  
  
  Scaling, Viewing Deployment History, and Rolling Back Changes
&lt;/h3&gt;

&lt;p&gt;After updating and annotating the image version, it’s crucial to know how to manage the Deployment’s scale, view the history of updates, and roll back if necessary. This section will walk through these final tasks:&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 5: Scale the Deployment to 5 Replicas
&lt;/h4&gt;

&lt;p&gt;Scaling up a Deployment adjusts the number of Pods to meet the demand. Here’s how to scale &lt;code&gt;nginx-deployment&lt;/code&gt; to 5 replicas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl scale deployment nginx-deployment &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command increases the number of Pods under &lt;code&gt;nginx-deployment&lt;/code&gt; to five, ensuring more instances of your application are available.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 6: Roll Back to a Previous Revision
&lt;/h4&gt;

&lt;p&gt;If needed, you can roll back to a specific revision. To revert to revision 1, for example, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout undo deployment/nginx-deployment &lt;span class="nt"&gt;--to-revision&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command rolls the Deployment back to the specified revision, ideal for cases where a previous configuration is preferable.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 7: Confirm the Image Version for All Pods
&lt;/h4&gt;

&lt;p&gt;Finally, verify that all Pods are back on the revision 1 image (&lt;code&gt;nginx:1.23.0&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe deployment nginx-deployment | &lt;span class="nb"&gt;grep &lt;/span&gt;Image
kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands help confirm that each replica in the &lt;code&gt;nginx-deployment&lt;/code&gt; Deployment is using the specified image version.&lt;/p&gt;




&lt;p&gt;These steps wrap up the complete process of deploying, updating, scaling, and managing a Deployment in Kubernetes. &lt;/p&gt;

&lt;p&gt;If you are still with me at this point, thank you for reading.&lt;/p&gt;

&lt;p&gt;I appreciate it.&lt;/p&gt;




&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;p&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt; &lt;br&gt;
Day 8 &lt;a href="https://www.youtube.com/watch?v=oe2zjRb51F0&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=9" rel="noopener noreferrer"&gt;video&lt;/a&gt; &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>CKA Full Course 2024: Day 7/40 Pod in Kubernetes Explained</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Thu, 24 Oct 2024 08:00:09 +0000</pubDate>
      <link>https://dev.to/lloydrivers/cka-full-course-2024-day-740-pod-in-kubernetes-explained-5b83</link>
      <guid>https://dev.to/lloydrivers/cka-full-course-2024-day-740-pod-in-kubernetes-explained-5b83</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0psuz60eii1de2cgyf5q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0psuz60eii1de2cgyf5q.png" alt="Image description" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the core of Kubernetes is the &lt;strong&gt;Pod&lt;/strong&gt;. Think of a Pod as a wrapper for one or more containers, where these containers share the same network and storage. Pods represent the smallest, most basic deployable objects in Kubernetes, and they typically run a single instance of an application. However, a Pod can contain multiple tightly coupled containers.&lt;/p&gt;
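
As an example of the multi-container case, here is a sketch of a Pod running nginx alongside a hypothetical helper container; both containers share the Pod's network namespace, so the helper could reach nginx on localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: nginx
    image: nginx:latest
  - name: sidecar              # illustrative helper container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```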

&lt;h3&gt;
  
  
  &lt;strong&gt;Two Ways to Create Pods: Imperative vs Declarative&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In Kubernetes, there are two main approaches to creating and managing resources: &lt;strong&gt;Imperative&lt;/strong&gt; and &lt;strong&gt;Declarative&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Imperative&lt;/strong&gt;: This method allows you to directly instruct the Kubernetes cluster to perform an action immediately using the &lt;code&gt;kubectl&lt;/code&gt; command. It’s quick but may not be suitable for managing large clusters, as it does not retain a record of your infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Create an nginx Pod with an imperative command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl run nginx-pod &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Declarative&lt;/strong&gt;: This method is more sustainable for large-scale environments. You define your desired state in a YAML file, and Kubernetes works to ensure that the cluster matches that state. This approach is beneficial for version control and team collaboration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Create the same nginx Pod using a YAML file.&lt;/p&gt;

&lt;p&gt;First, create a YAML configuration file like this:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# The names in this configuration are case-sensitive. &lt;/span&gt;
&lt;span class="c1"&gt;# Be sure to use the exact casing as shown.&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Then, apply it to the cluster with:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Creating and Managing Pods with YAML&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For larger teams or more complex applications, we use &lt;strong&gt;declarative YAML files&lt;/strong&gt;. YAML allows us to define the desired state of our Pods and other Kubernetes resources in a clean, human-readable format. Let's break down the example YAML file mentioned earlier:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;apiVersion&lt;/strong&gt;: Specifies the API version (in this case, &lt;code&gt;v1&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kind&lt;/strong&gt;: Defines the type of resource (here, a &lt;code&gt;Pod&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;metadata&lt;/strong&gt;: Contains information about the object, such as the name of the Pod.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spec&lt;/strong&gt;: Describes the desired state of the object, including the containers, images, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To create a Pod from this YAML file, save it as &lt;code&gt;nginx-pod.yaml&lt;/code&gt; and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; mypod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view the status of the Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you need to delete the Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; mypod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to explore the Pod resource’s documented fields (note that &lt;code&gt;kubectl explain&lt;/code&gt; shows the API schema, while &lt;code&gt;kubectl describe pod &amp;lt;name&amp;gt;&lt;/code&gt; shows a specific Pod’s details):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl explain pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;YAML Tutorial for Beginners&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you’re new to &lt;a href="https://yaml.org/spec/1.2.2/" rel="noopener noreferrer"&gt;YAML&lt;/a&gt;, don’t worry—it’s a simple, human-readable format used to define objects in Kubernetes. Here’s a breakdown of what you need to know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Indentation matters&lt;/strong&gt;: YAML relies on spaces (not tabs) to structure data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key-Value pairs&lt;/strong&gt;: Similar to JSON but more readable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lists&lt;/strong&gt;: Defined using a hyphen (&lt;code&gt;-&lt;/code&gt;) followed by the value.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;list_of_fruits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;apple&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;banana&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;orange&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Kubernetes, YAML allows you to define objects like Pods, Services, and more.&lt;/p&gt;
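
Nested structures follow the same rules: indentation (spaces only) expresses a mapping inside a mapping, and list items can themselves be mappings—exactly the shape Kubernetes manifests use. A small illustrative example:

```yaml
server:
  host: localhost
  ports:
    - 8080
    - 8443
containers:
  - name: app
    image: nginx
```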




&lt;h3&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In today's post, we covered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What Pods are in Kubernetes and why they are important.&lt;/li&gt;
&lt;li&gt;The difference between imperative and declarative methods of managing Pods.&lt;/li&gt;
&lt;li&gt;How to create and manage Pods using both imperative commands and declarative YAML files.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Top 3 kubectl Commands from Today's Post&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;kubectl run nginx-pod --image=nginx:latest&lt;/code&gt; – Create a Pod imperatively.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl apply -f nginx-pod.yaml&lt;/code&gt; – Apply a YAML configuration to create a Pod.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl get pods&lt;/code&gt; – View all running Pods in the cluster.&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;p&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt; &lt;br&gt;
Day 6 &lt;a href="https://www.youtube.com/watch?v=_f9ql2Y5Xcc&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=8" rel="noopener noreferrer"&gt;video&lt;/a&gt; &lt;/p&gt;

</description>
    </item>
    <item>
      <title>CKA Full Course 2024: Day 6/40 Multi-Node Cluster Setup: Step by Step</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Tue, 22 Oct 2024 19:54:56 +0000</pubDate>
      <link>https://dev.to/lloydrivers/cka-full-course-2024-day-540-multi-node-cluster-setup-step-by-step-422</link>
      <guid>https://dev.to/lloydrivers/cka-full-course-2024-day-540-multi-node-cluster-setup-step-by-step-422</guid>
      <description>&lt;p&gt;Setting up a Kubernetes cluster is a foundational step for managing containerized applications at scale. In this post, we’ll guide you through installing &lt;code&gt;kubectl&lt;/code&gt;, creating a Kubernetes cluster using &lt;code&gt;kind&lt;/code&gt;, and configuring a multi-node setup with a step-by-step approach. We’ll also highlight five essential &lt;code&gt;kubectl&lt;/code&gt; commands that will give you a solid start in managing your Kubernetes cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You’ll Learn:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Installing &lt;code&gt;kubectl&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Installing and configuring &lt;code&gt;kind&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Creating and managing a Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Understanding and using key &lt;code&gt;kubectl&lt;/code&gt; commands&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s dive in!&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 1: Installing &lt;code&gt;kubectl&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; is the command-line tool that allows you to interact with your Kubernetes cluster. You’ll need this to manage and configure the cluster once it’s up and running.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to Install &lt;code&gt;kubectl&lt;/code&gt;:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;On macOS:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   brew &lt;span class="nb"&gt;install &lt;/span&gt;kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more installation options and in-depth instructions, refer to the official &lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;Kubernetes documentation&lt;/a&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 2: Installing &lt;code&gt;kind&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;kind&lt;/code&gt; is a tool for running local Kubernetes clusters using Docker. It’s great for development, testing, and learning Kubernetes concepts before moving to a production environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to Install &lt;code&gt;kind&lt;/code&gt;:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;On macOS &amp;amp; Linux:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   brew &lt;span class="nb"&gt;install &lt;/span&gt;kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;On Windows (using Chocolatey):&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   choco &lt;span class="nb"&gt;install &lt;/span&gt;kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more detailed installation instructions, refer to the &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installation" rel="noopener noreferrer"&gt;official kind documentation&lt;/a&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 3: Creating a Kubernetes Cluster
&lt;/h3&gt;

&lt;p&gt;With both &lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;kind&lt;/code&gt; installed, you can now create your Kubernetes cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Basic Cluster Creation:
&lt;/h4&gt;

&lt;p&gt;To create a basic single-node cluster, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Understanding the Cluster Creation Process:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;kind&lt;/code&gt; will bootstrap a Kubernetes cluster using a pre-built node image. These images are hosted at &lt;code&gt;kindest/node&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;By default, &lt;code&gt;kind&lt;/code&gt; uses the default node image bundled with your installed &lt;code&gt;kind&lt;/code&gt; release. If you want a different Kubernetes version, specify its image using the &lt;code&gt;--image&lt;/code&gt; flag like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kindest/node:v1.22.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the &lt;a href="https://github.com/kubernetes-sigs/kind/releases" rel="noopener noreferrer"&gt;kind release notes&lt;/a&gt; to find images for specific Kubernetes versions. You can also build your own custom images if needed (this is more advanced, but worth exploring).&lt;/p&gt;

&lt;h4&gt;
  
  
  Multi-Node Cluster Configuration:
&lt;/h4&gt;

&lt;p&gt;To create a multi-node cluster, you’ll need a configuration YAML file. Here’s a sample configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this configuration to a file, for example &lt;code&gt;kind-config.yaml&lt;/code&gt;, and then create your multi-node cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kind-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will create a cluster with one control-plane node and two worker nodes.&lt;/p&gt;
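If you prefer to keep everything in the terminal, the same config can be written and sanity-checked from the shell before running the create command (the filename `kind-config.yaml` is just the example used above):

```shell
# Write the three-node config shown above to a file
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF

# Sanity check: one control-plane plus two workers = three node entries
grep -c '^- role:' kind-config.yaml
```

From there, `kind create cluster --config=kind-config.yaml` picks the file up exactly as before.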

&lt;h4&gt;
  
  
  Check Your Cluster Status:
&lt;/h4&gt;

&lt;p&gt;Once your cluster is created, you can check its status with the following &lt;code&gt;kubectl&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl cluster-info &lt;span class="nt"&gt;--context&lt;/span&gt; kind-&amp;lt;cluster-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;cluster-name&amp;gt;&lt;/code&gt; with the name of your cluster. The default cluster name is &lt;code&gt;kind&lt;/code&gt;, so the full context becomes &lt;code&gt;kind-kind&lt;/code&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 4: Key &lt;code&gt;kubectl&lt;/code&gt; Commands
&lt;/h3&gt;

&lt;p&gt;Here are some essential &lt;code&gt;kubectl&lt;/code&gt; commands to help you manage your cluster effectively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Check the Nodes:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command lists all nodes (control-plane and worker nodes) in your cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;List All Pods:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get pods &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows the status of all pods running in all namespaces in your cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Get Detailed Information About a Pod:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl describe pod &amp;lt;pod-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;pod-name&amp;gt;&lt;/code&gt; with the name of your pod to get detailed information about it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a Resource from a YAML File:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &amp;lt;your-app.yaml&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use this to deploy an application from a YAML configuration file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Delete a Resource:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl delete &amp;lt;resource-type&amp;gt; &amp;lt;resource-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command deletes a specific resource, like a pod or deployment.&lt;/p&gt;

&lt;p&gt;For more commands, check out the official &lt;a href="https://kubernetes.io/docs/reference/kubectl/quick-reference/" rel="noopener noreferrer"&gt;Kubernetes Cheat Sheet&lt;/a&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  Changing the Cluster Context
&lt;/h3&gt;

&lt;p&gt;To ensure that you are interacting with the correct cluster, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config use-context kind-&amp;lt;cluster-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;cluster-name&amp;gt;&lt;/code&gt; with the name of your cluster (the default is &lt;code&gt;kind&lt;/code&gt;, which gives the context &lt;code&gt;kind-kind&lt;/code&gt;).&lt;/p&gt;
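The `kind-` prefix is easy to trip over, so here is a tiny shell sketch of how the context name is derived (the `dev` cluster name is hypothetical):

```shell
# kind prefixes every kubeconfig context it writes with "kind-"
cluster_name="kind"            # default name from a plain `kind create cluster`
echo "kind-${cluster_name}"    # prints: kind-kind

cluster_name="dev"             # e.g. created with `kind create cluster --name dev`
echo "kind-${cluster_name}"    # prints: kind-dev
```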




&lt;h3&gt;
  
  
  Coming Soon: A Peek at K9s
&lt;/h3&gt;

&lt;p&gt;While &lt;code&gt;kubectl&lt;/code&gt; gives you control over your Kubernetes clusters, there's an amazing tool that takes cluster management to the next level — &lt;strong&gt;K9s&lt;/strong&gt;. We won’t unpack it fully here, but here's a teaser: imagine navigating and managing your cluster with a slick, terminal-based UI. Stay tuned for more details, and we’ll dive deeper into the awesomeness of K9s in an upcoming post!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cexceap34o1ptz6zwke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cexceap34o1ptz6zwke.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this guide, we’ve walked through the process of setting up a multi-node Kubernetes cluster using &lt;code&gt;kind&lt;/code&gt;, configuring &lt;code&gt;kubectl&lt;/code&gt;, and managing your cluster with essential commands. By following these steps, you’re well on your way to understanding and managing Kubernetes clusters effectively.&lt;/p&gt;




&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;p&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt; &lt;br&gt;
Day 6 &lt;a href="https://www.youtube.com/watch?v=RORhczcOrWs" rel="noopener noreferrer"&gt;video&lt;/a&gt; &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>containers</category>
      <category>beginners</category>
    </item>
    <item>
      <title>CKA Full Course 2024: Day 5/40 Kubernetes Architecture</title>
      <dc:creator>Lloyd Rivers</dc:creator>
      <pubDate>Tue, 22 Oct 2024 06:58:15 +0000</pubDate>
      <link>https://dev.to/lloydrivers/cka-full-course-2024-day-540-kubernetes-architecture-3pob</link>
      <guid>https://dev.to/lloydrivers/cka-full-course-2024-day-540-kubernetes-architecture-3pob</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgfbo9uzo42vvs51ukky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgfbo9uzo42vvs51ukky.png" alt="Image description" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apologies if you saw this blog earlier and it was just a random drawing! I accidentally hit publish before writing anything 😅. I'm still figuring out a smooth workflow between Canva, dev.to, and Eraser.io—so bear with me while I get things right.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/architecture/#kube-apiserver" rel="noopener noreferrer"&gt;Architecture&lt;/a&gt; - Master Node and Worker Node Components
&lt;/h3&gt;

&lt;p&gt;In today’s post, I’m going to walk through the basic architecture of Kubernetes, focusing on the two main components: the &lt;strong&gt;Master Node&lt;/strong&gt; and the &lt;strong&gt;Worker Node&lt;/strong&gt;. &lt;/p&gt;

&lt;h4&gt;
  
  
  Master Node
&lt;/h4&gt;

&lt;p&gt;The Master Node is where the control magic happens. It’s responsible for managing the cluster and coordinating everything between the nodes. Here are the key components of the Master Node:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Server&lt;/strong&gt;: This is the entry point for all the administrative tasks. Think of it as the main communication hub between the users, nodes, and even the external components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduler&lt;/strong&gt;: As the name suggests, it’s responsible for scheduling your applications (pods) to run on the Worker Nodes based on available resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Controller Manager&lt;/strong&gt;: Responsible for monitoring and maintaining the desired state of the cluster, ensuring that everything is operating smoothly and as expected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;etcd&lt;/strong&gt;: An open-source, distributed key-value store that holds and replicates all the critical data for the cluster, such as configurations, state information, and metadata essential for maintaining the system's desired state.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Worker Node
&lt;/h4&gt;

&lt;p&gt;On the other side, you have the Worker Nodes. These are where your containers (applications) actually run. Here's a breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pods&lt;/strong&gt;: A Pod is the smallest deployable unit in Kubernetes. Each Pod encapsulates one or more containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kubelet&lt;/strong&gt;: It ensures the containers in the Pods are running and reports back to the Master Node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container Runtime&lt;/strong&gt;: This is the software that actually runs the containers, such as containerd or CRI-O. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kube-proxy&lt;/strong&gt;: It manages the network rules that allow the Pods to communicate with each other and with the outside world.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
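To make the Pod idea concrete, here is a minimal Pod manifest — one container wrapped in the smallest deployable unit (the name and image here are just illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.27      # the single container this Pod wraps
```

Applying it with `kubectl apply -f pod.yaml` hands it to the Scheduler, which picks a Worker Node, whose kubelet then starts the container.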

&lt;h4&gt;
  
  
  Putting It All Together
&lt;/h4&gt;

&lt;p&gt;The communication between the Master and Worker Nodes is key to keeping the system running smoothly. The API Server communicates with the kubelet on each Worker Node to make sure the containers are running as expected, while the Scheduler decides which Pods go where based on resources.&lt;/p&gt;




&lt;p&gt;I hope this gives you a clear picture of how Kubernetes architecture is organized. If you're learning Kubernetes like me, drawing these diagrams and breaking things down really helps solidify the concepts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tags and Mentions
&lt;/h3&gt;

&lt;p&gt;&lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt; &lt;br&gt;
Day 5 &lt;a href="https://www.youtube.com/watch?v=SGGkUCctL4I&amp;amp;t=7s" rel="noopener noreferrer"&gt;video&lt;/a&gt; &lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>javascript</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
