<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 7h3-3mp7y-m4n</title>
    <description>The latest articles on DEV Community by 7h3-3mp7y-m4n (@7h33mp7ym4n).</description>
    <link>https://dev.to/7h33mp7ym4n</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1175917%2Ff7550e79-3e9b-4475-805a-c2758c3facd7.png</url>
      <title>DEV Community: 7h3-3mp7y-m4n</title>
      <link>https://dev.to/7h33mp7ym4n</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/7h33mp7ym4n"/>
    <language>en</language>
    <item>
      <title>Set Sail with Kubernetes: The Definitive Guide for New Captains</title>
      <dc:creator>7h3-3mp7y-m4n</dc:creator>
      <pubDate>Fri, 23 Aug 2024 13:53:41 +0000</pubDate>
      <link>https://dev.to/7h33mp7ym4n/set-sail-with-kubernetes-the-definitive-guide-for-new-captains-319a</link>
      <guid>https://dev.to/7h33mp7ym4n/set-sail-with-kubernetes-the-definitive-guide-for-new-captains-319a</guid>
      <description>&lt;p&gt;So, I’ve read a bunch of beginner guides on Kubernetes, and honestly? Most of them either lack depth or are just step-by-step walkthroughs that can leave you stuck. But a few were truly great, and they inspired me to create the best Kubernetes guide ever one that’ll have you running your cluster and creating objects with ease, no matter your background. I hope I succeed on this journey.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Kubernetes?
&lt;/h2&gt;

&lt;p&gt;Yep, I know, this is how every other blog starts. But hey, it’s always good to know more about our beloved Kubernetes, right?&lt;br&gt;
Kubernetes ("K8s" if you’re in a hurry) is an open-source system that automates container deployment, scaling, and management tasks. Originally developed by Google, it’s now maintained by the awesome folks at the &lt;strong&gt;&lt;a href="https://www.cncf.io" rel="noopener noreferrer"&gt;Cloud Native Computing Foundation (CNCF)&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fun Fact:&lt;/strong&gt; The Kubernetes logo represents a ship’s wheel, symbolizing that you’re the captain of your cluster, steering those containers with the help of an orchestrator.&lt;/p&gt;
&lt;h2&gt;
  
  
  Kubernetes Features: Why It's the Best Thing Since Sliced Bread
&lt;/h2&gt;

&lt;p&gt;Just like I said, Kubernetes has a fantastic community contributing to its continuous improvement. Here are some of the cool features that make Kubernetes the go-to platform for container management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Rollouts, Scaling, and Rollbacks:&lt;/strong&gt; Imagine having a personal assistant who always ensures you have the right number of containers running, even when one goes down. That’s Kubernetes for you!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Discovery, Load Balancing, and Network Ingress:&lt;/strong&gt; Worried about network issues? Kubernetes has you covered. It’s like having a GPS for your containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stateless and Stateful Applications:&lt;/strong&gt; Whether your app needs to remember everything or nothing at all, Kubernetes has built-in objects to manage both.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Storage Management:&lt;/strong&gt; Persistent storage? No problem. Kubernetes abstracts it all for you, no matter where it’s stored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Declarative State:&lt;/strong&gt; Kubernetes lets you describe the desired state of your cluster, and it automatically makes it so, like magic 🪄 but better.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Works Across Environments:&lt;/strong&gt; Whether you’re in the cloud, on the edge, or just tinkering on your laptop, Kubernetes is there for you.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Highly Extensible:&lt;/strong&gt; Think of Kubernetes as a LEGO set; you can build anything you want with custom object types, controllers, and operators. (My favorite is &lt;a href="https://knative.dev/docs/" rel="noopener noreferrer"&gt;Knative&lt;/a&gt;🤫)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Getting Started: Let’s Roll To Our Ship 🚢 ☸️
&lt;/h2&gt;

&lt;p&gt;Kubernetes can run in various environments, thanks to the range of distributions it offers. Creating a cluster using the official distribution can be complex, so most people start with a packaged solution like &lt;a href="https://minikube.sigs.k8s.io/docs/start/?arch=%2Fmacos%2Fx86-64%2Fstable%2Fbinary+download" rel="noopener noreferrer"&gt;Minikube&lt;/a&gt; (my personal favorite ❤️), &lt;a href="https://microk8s.io/tutorials" rel="noopener noreferrer"&gt;MicroK8s&lt;/a&gt;, &lt;a href="https://docs.k3s.io/installation" rel="noopener noreferrer"&gt;K3s,&lt;/a&gt; or &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installation" rel="noopener noreferrer"&gt;Kind&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When I first started learning Kubernetes, I used Docker Desktop’s Kubernetes extension—it was the easy way out 🤫. But if you want to dive deep, installing Kubernetes via Minikube is great practice. You can also try &lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="noopener noreferrer"&gt;kubeadm&lt;/a&gt;, since it comes up often in the CKA exam if you’re preparing for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pro Tip:&lt;/strong&gt; If you’re lazy like me, Homebrew is your best friend.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After Homebrew does its thing 🍺, type your welcoming command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And just like that, you’ve got a working Kubernetes cluster! If you get stuck, you can also refer to the docs &lt;a href="https://minikube.sigs.k8s.io/docs/start/?arch=%2Fmacos%2Fx86-64%2Fstable%2Fbinary+download" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The welcome screen should look similar to this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0et0b3gbkdij9l7nbvu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0et0b3gbkdij9l7nbvu.png" alt="MiniKube start" width="800" height="464"&gt;&lt;/a&gt;&lt;br&gt;
Yay! My fellow captains, we just started our ship (Kubernetes cluster).&lt;/p&gt;
&lt;h2&gt;
  
  
  Basic Kubernetes Terms: Let's Get Familiar With the Tools
&lt;/h2&gt;

&lt;p&gt;Now that we’ve got Kubernetes running, let’s talk about some basic terms. What if you don’t know them yet? If you’re eager to learn more about the architecture, I’ve got a great article for you: &lt;a href="https://dev.to/7h33mp7ym4n/kubernetes-a-poetic-quest-through-container-realms-5cn9"&gt;check it out&lt;/a&gt; to deepen your understanding. I hope you’ve read it, but even if not, I’ll assume you have enough knowledge to follow this walkthrough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmk0uk7ywm4muzxux88s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmk0uk7ywm4muzxux88s.png" alt="diagram of Node" width="800" height="477"&gt;&lt;/a&gt;&lt;br&gt;
Nodes are the machines, physical or virtual, that make up your Kubernetes cluster. They run the containers you create. Let’s see our node in action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll see something like this✨&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxz7vbvizg7fbkmuu6xd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxz7vbvizg7fbkmuu6xd.png" alt="preview of node" width="800" height="81"&gt;&lt;/a&gt;&lt;br&gt;
For more details, we can use&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Kubectl get node -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In kubectl, &lt;code&gt;get&lt;/code&gt; is the command used to retrieve and display information about Kubernetes resources. When you use &lt;code&gt;kubectl get&lt;/code&gt;, you are asking Kubernetes to fetch and show you details about specific resources.&lt;/p&gt;
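
&lt;p&gt;Here are a few common shapes of the command, just as a reference (the resource names are examples):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods              # pods in the current namespace
kubectl get pods -A           # pods across all namespaces
kubectl get deploy,svc        # several resource types at once
kubectl get pod mypod -o yaml # the full manifest of a single object
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;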

&lt;p&gt;&lt;strong&gt;Namespace&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Kubernetes, we have a helpful concept called namespace. Think of it as a folder for your Kubernetes objects, providing a layer of isolation to keep everything organized and prevent mixing things up like a salad. 🌱&lt;/p&gt;

&lt;p&gt;Namespaces are especially useful in environments with many users or teams, allowing each to have their own isolated resources without interfering with others.&lt;/p&gt;

&lt;p&gt;You might already know this, but if not, here's a quick tip: there are already existing namespaces in your cluster. You can list them with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ns

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;ns&lt;/code&gt; is just shorthand for &lt;code&gt;namespace&lt;/code&gt;, a little shortcut to save you some typing! And now we can see all the existing namespaces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4kt1wnqre66xiiwrw3z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4kt1wnqre66xiiwrw3z.png" alt="Fetching Namespace" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Quite a bunch of namespaces, right? Let’s create our own namespace for this guide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhzg1kgi1d8129tr2dpy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhzg1kgi1d8129tr2dpy.png" alt="namespace check" width="800" height="500"&gt;&lt;/a&gt;&lt;br&gt;
The &lt;code&gt;create&lt;/code&gt; command in &lt;code&gt;kubectl&lt;/code&gt; is used to create various objects within your Kubernetes cluster.&lt;/p&gt;
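
&lt;p&gt;If you’d rather type it than read it off the screenshot, the command we just ran looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace mycookbook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;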
&lt;h2&gt;
  
  
  Cooking with Kubernetes:
&lt;/h2&gt;

&lt;p&gt;Now, all our work will be neatly tucked away in our namespace called &lt;code&gt;mycookbook&lt;/code&gt;. Neat, right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pods: The Building Blocks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pods are the smallest and most basic objects in Kubernetes. Instead of running containers directly on nodes, Kubernetes runs them inside pods. Think of a pod as a tortilla wrapping around your containers, sounds delicious, doesn’t it?&lt;/p&gt;

&lt;p&gt;Let’s create our first tortilla, I mean pod, haha 😂&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl -n mycookbook run mypod --image=nginx --port=8080&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;So we’re making our first pod running nginx, exposing container port 8080. We can also watch it while it’s getting created if we’re fast enough.&lt;/p&gt;
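
&lt;p&gt;Just for reference, the same pod can be written declaratively. A minimal manifest equivalent to the &lt;code&gt;run&lt;/code&gt; command above would look roughly like this (you could save it as, say, &lt;code&gt;mypod.yml&lt;/code&gt; and &lt;code&gt;kubectl apply -f&lt;/code&gt; it):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mycookbook
spec:
  containers:
  - name: mypod
    image: nginx
    ports:
    - containerPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;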

&lt;p&gt;&lt;code&gt;kubectl -n mycookbook get pods&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tmxachozvklym8mivv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tmxachozvklym8mivv4.png" alt="Creation of our pod" width="800" height="76"&gt;&lt;/a&gt;&lt;br&gt;
How about we read its whole journal? We can use the &lt;code&gt;describe&lt;/code&gt; command, which is very helpful on days when you are debugging. So let’s type&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl -n mycookbook describe pods/mypod&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7zfgbhftcxlmndekcgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7zfgbhftcxlmndekcgo.png" alt="Mypod inside" width="800" height="500"&gt;&lt;/a&gt;&lt;br&gt;
Let's verify our setup by hopping into our browser and visiting the given port. But wait, why do we see nothing when we visit our desired port? 🤔&lt;/p&gt;

&lt;p&gt;Don’t worry, this is expected! By default, a Pod in Kubernetes is only accessible within the Kubernetes cluster itself. This means that external access is not available out of the box.&lt;/p&gt;

&lt;p&gt;So, does this mean we can never access our first Pod?&lt;/p&gt;

&lt;p&gt;Absolutely not! You can still access your Pod externally, but you’ll need to use a Kubernetes Service to expose it to the outside world. A Kubernetes Service acts as an entry point, allowing external traffic to reach your Pods. So, while you can’t reach the Pod directly from outside, a Service will get you there.&lt;/p&gt;
&lt;h2&gt;
  
  
  Making Pizza on Kubernetes: Pods, Deployments, and Services
&lt;/h2&gt;

&lt;p&gt;Oh God, I'm hungry now, I'm craving something like pizza 🍕, how about we make a pizza on Kubernetes instead of in our kitchen?😋&lt;/p&gt;

&lt;p&gt;I’m making pizza out of nginx, and my recipe will have 5 nginx pods listening on port 80, and this time we will eat it together 🤗&lt;/p&gt;

&lt;p&gt;So let’s make the dough (Deployment) for our pizzas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployments: Dough ?&lt;/strong&gt;&lt;br&gt;
The Deployment is your recipe. It tells Kubernetes what kind of pizza (Pods) you want, how many you want to bake, and what ingredients (containers) to use. For example, you might want 5 Margherita pizzas (Pods) with the same ingredients (containers). Kubernetes will ensure that you have exactly 5 of these pizzas ready, all made from the same recipe. So let’s deploy 5 Nginx pods and call it pizza (to keep it simple). 🍕&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n mycookbook create deployment pizza --image=nginx --replicas=3 --dry-run=client -o yaml &amp;gt; pizza.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, we are making pizza from scratch, and we should use YAML instead of those direct commands. Now when we &lt;code&gt;cat pizza.yml&lt;/code&gt;, we can see our pizza dough, like this:&lt;/p&gt;
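
&lt;p&gt;If you can’t run the command right now, the generated file looks roughly like this (trimmed of the empty &lt;code&gt;creationTimestamp&lt;/code&gt;, &lt;code&gt;strategy&lt;/code&gt;, &lt;code&gt;resources&lt;/code&gt;, and &lt;code&gt;status&lt;/code&gt; fields kubectl emits; your kubectl version may differ slightly):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pizza
  name: pizza
  namespace: mycookbook
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pizza
  template:
    metadata:
      labels:
        app: pizza
    spec:
      containers:
      - image: nginx
        name: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;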

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevlhmo6icasn946ctpj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevlhmo6icasn946ctpj2.png" alt="Deploying our pizza" width="800" height="500"&gt;&lt;/a&gt;&lt;br&gt;
Oops, we only got 3 replicas, but we wanted 5! No worries, we can scale it up ⬆️. But before that, we should apply it using the &lt;code&gt;apply&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pizza.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can see our deployment being created; we can verify it with &lt;code&gt;kubectl -n mycookbook get deploy&lt;/code&gt; and see it rolling 😊. But we got dough for only 3 pizzas, so what should we do? Compromise? Nah, we can scale it up ⬆️ like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n mycookbook scale deploy/pizza --replicas=5

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clarification: The &lt;code&gt;scale&lt;/code&gt; command allows you to adjust the number of replicas (instances) of a deployment. In this case, it increases the number of Nginx pods from 3 to 5.&lt;/p&gt;

&lt;p&gt;Or, you can use the &lt;code&gt;edit&lt;/code&gt; command to update the number of replicas directly.&lt;/p&gt;

&lt;p&gt;What? What is &lt;code&gt;edit&lt;/code&gt;? Can we also get creative, like we do while making pizza? The answer is yes! 🙌&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;edit&lt;/code&gt; command in Kubernetes is indeed a powerful tool! It allows you to manually modify the configuration of your objects, like deployments. Here's how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Power and Caution: Just like wielding a sharp knife in the kitchen, &lt;code&gt;edit&lt;/code&gt; gives you great control but requires careful handling. If not used properly, changes might cause unexpected issues with your deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How It Works: When you run the &lt;code&gt;edit&lt;/code&gt; command, it opens the configuration of your deployment in your default text editor (like Vim or Nano). From there, you can make changes directly, such as updating the number of replicas in the &lt;code&gt;replicas&lt;/code&gt; field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Safe Editing: Although powerful, it's wise to proceed with caution. If you make a mistake, it could affect how your deployment behaves. A safer approach is to use the &lt;code&gt;edit&lt;/code&gt; command to open the configuration, then save and apply changes carefully.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll dive into using the &lt;code&gt;edit&lt;/code&gt; command later, and I promise you'll get the hang of it! 🤗&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Services: Plating and Serving 🍽️&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our pods (pizzas) are running, but how do we access them? That’s where Kubernetes Services come in. They expose your pods to the network, letting you access them from outside the cluster.&lt;/p&gt;

&lt;p&gt;One of the best resources for learning more about Kubernetes is its comprehensive &lt;a href="https://kubernetes.io/docs/home/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. Contributed to by experts from around the world, it simplifies complex concepts and enhances your understanding. So let’s plate our pizzas ;) with the help of the Kubernetes Service docs, which you can find &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Reading through the docs, we can do this by adding this YAML to our pizza.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Service
metadata:
  name: pizza
  namespace: mycookbook
  labels:
    app: pizza
spec:
  selector:
    app: pizza
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30080  # Optional: Specify a NodePort or let Kubernetes assign one

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can copy this. There is also an easier way to do it, but we are head chefs right now, so let’s do everything from scratch 🤗. Let’s apply it: I have saved this file as &lt;code&gt;pizzasvc.yml&lt;/code&gt;, but you can name it anything, or you can add it to our previous deployment file and &lt;code&gt;apply&lt;/code&gt; that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflsd7li2m7k45coot6hh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflsd7li2m7k45coot6hh.png" alt="Service check" width="800" height="65"&gt;&lt;/a&gt;&lt;br&gt;
Let’s also verify it with &lt;code&gt;kubectl -n mycookbook get svc/pizza&lt;/code&gt;. We can see that it’s ready to be served 😋.&lt;/p&gt;

&lt;p&gt;But wait, did we use the &lt;code&gt;edit&lt;/code&gt; command as I promised? No, right? So let’s use it, because I forgot to name our delicious pizzas Margherita.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n mycookbook edit deploy/pizza

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Woah, what just opened? The heart of our pizza ❤️. Let’s find the place where we can name our container Margherita.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01gw5w82ppa7e8vp3ej5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01gw5w82ppa7e8vp3ej5.png" alt="Changing name of container" width="800" height="500"&gt;&lt;/a&gt;&lt;br&gt;
We can verify the new container name with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n mycookbook get deployment pizza -o jsonpath='{.spec.template.spec.containers[*].name}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now we have each container named Margherita, yummy 😋.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reaching the endpoint: Time to eat 🤤
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcnexh4uv704ndcm0h8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcnexh4uv704ndcm0h8a.png" alt="Pizza on ship with k8s" width="800" height="793"&gt;&lt;/a&gt;&lt;br&gt;
Now we can check out your nginx pizza by visiting the assigned port, following the pattern &lt;code&gt;http://&amp;lt;node-ip&amp;gt;:&amp;lt;node-port&amp;gt;&lt;/code&gt;. For me the NodePort is 30080, as I defined; you can check it with &lt;code&gt;kubectl -n mycookbook get svc&lt;/code&gt;, and for the node IP we can use &lt;code&gt;kubectl get node -o wide&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We can use &lt;code&gt;curl&lt;/code&gt;, but with Minikube it’s handy to use &lt;code&gt;minikube service pizza -n mycookbook&lt;/code&gt;, which opens the service URL for you.&lt;/p&gt;

&lt;p&gt;There was another easy way to skip the Service YAML part: &lt;code&gt;kubectl -n mycookbook expose deploy/pizza --type=NodePort --port=80&lt;/code&gt; does the same thing. But I wanted you guys to feel like chefs and learn it by heart ❤️.&lt;/p&gt;

&lt;p&gt;So there you have it, folks! 🍕&lt;br&gt;
If you followed along, you could have your very own pizza or whatever you were making cooked up and ready to serve. We all have a little chef inside us, and instead of a kitchen, we're using Kubernetes to cook our creations! 😉&lt;/p&gt;

&lt;p&gt;But after cooking, what comes next? Dishes, right?&lt;/p&gt;

&lt;p&gt;In the Kubernetes world, "doing the dishes" means cleaning up our resources to keep things tidy and efficient. Let's go ahead and do that!&lt;/p&gt;
&lt;h2&gt;
  
  
  Cleaning Up: Doing the Dishes
&lt;/h2&gt;

&lt;p&gt;To clean up everything we created:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Delete the Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kubectl -n mycookbook delete deployment pizza

&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Delete the Service:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kubectl -n mycookbook delete service pizza

&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;/ol&gt;

&lt;h2&gt;
  
  
  Wrapping up: Farewell 😔
&lt;/h2&gt;

&lt;p&gt;Yes, I know there are more advanced topics like RBAC, Ingress, Volumes, and many more. But those are for another time, as they dive deeper into the wonderful world of Kubernetes.&lt;/p&gt;

&lt;p&gt;For now, I hope you've gotten a good overview of Kubernetes. I truly hope you learned something valuable today and enjoyed our little cooking adventure in the world of K8s.&lt;/p&gt;

&lt;p&gt;As a parting gift, I want to share something cool with you. Did you know that Kubernetes has a beautiful dashboard? It's like a control panel where you can visualize everything that's happening in your cluster.&lt;/p&gt;

&lt;p&gt;Wanna check it out? Just type&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube dashboard

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Isn't that pretty? 😉 Enjoy exploring, and see you next time when we dive into the more advanced topics! ☸️&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>learning</category>
    </item>
    <item>
      <title>Kubernetes: A Poetic Quest Through Container Realms</title>
      <dc:creator>7h3-3mp7y-m4n</dc:creator>
      <pubDate>Fri, 02 Aug 2024 22:10:54 +0000</pubDate>
      <link>https://dev.to/7h33mp7ym4n/kubernetes-a-poetic-quest-through-container-realms-5cn9</link>
      <guid>https://dev.to/7h33mp7ym4n/kubernetes-a-poetic-quest-through-container-realms-5cn9</guid>
      <description>&lt;p&gt;Containers are getting more and more popular these days, and everyone is talking about them. The most popular topic in the container era is none other than Kubernetes. Kubernetes is an open-source container orchestrator that automates container deployment, scaling, and administration tasks.&lt;/p&gt;

&lt;p&gt;Kubernetes is a distributed system. It horizontally scales containers across multiple physical hosts termed Nodes. This produces fault-tolerant deployments that adapt to conditions such as Node resource pressure, instability, and elevated external traffic levels. If one Node suffers an outage, Kubernetes can reschedule your containers onto neighboring healthy Nodes.&lt;/p&gt;

&lt;p&gt;It’s a wonderful tool written in Go, capable of doing most things: from scaling pods to managing security, from enforcing network policies to keeping pods alive, there is so much it can do.&lt;/p&gt;

&lt;p&gt;A Houdini tool that can spin up so much magic in the world of containers makes us wonder how it works internally. Does the architecture resemble a rocket engine? Do I have to be a genius to understand it? Well, the Kubernetes architecture doesn’t look like a rocket engine, and you don’t have to be a genius to understand it. I’ve got you covered, and we need to understand what Kubernetes architecture is for our certification exams.&lt;/p&gt;

&lt;h2&gt;
  
  
  A heartwarming Architecture diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr73nxpas6d6ne51ejnd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr73nxpas6d6ne51ejnd5.png" alt="An lovely architecture of kubernetes" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Main Components of Kubernetes Architecture
&lt;/h2&gt;

&lt;p&gt;One of the best things about Kubernetes is the way it lowers the management overhead of running tons of containers. Kubernetes achieves this by pooling multiple compute nodes into one giant entity called a Cluster. When we deploy a workload to our Kubernetes cluster, it automatically starts our containers on one or more nodes based on the requirements. Here are the key elements of a Kubernetes cluster:&lt;/p&gt;

&lt;h3&gt;
  
  
  Workloads
&lt;/h3&gt;

&lt;p&gt;K8s has multiple layers of abstraction that define our application. These workload objects help us take full control over its management. Some of them are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pod&lt;/strong&gt;: A Pod is the fundamental compute unit of Kubernetes. A Pod can be a single container or a group of containers that share the same specification and resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment&lt;/strong&gt;: A Deployment is a resource object that defines the desired state for your application. It encapsulates the instructions for creating and managing a group of identical Pods, which collectively form your application's backend. These instructions include details such as container images, resource requirements, environment variables, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service&lt;/strong&gt;: A Service is a portal through which we expose Pods to the network. We use Services to permit access to Pods, either within your cluster via automatic service discovery, or externally through an Ingress.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Job&lt;/strong&gt;: A Job in Kubernetes is a way to execute short-lived, non-replicated tasks or batch jobs reliably within our cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kubernetes also offers other workload types, like DaemonSets, StatefulSets, and more.&lt;/p&gt;
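
&lt;p&gt;To make the Deployment idea concrete, a minimal, purely illustrative manifest might look like this; every name in it is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2          # desired number of identical Pods
  selector:
    matchLabels:
      app: myapp
  template:            # the Pod recipe
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx   # container image
        env:           # environment variables
        - name: APP_MODE
          value: demo
        resources:     # resource requirements
          requests:
            cpu: 100m
            memory: 128Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;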

&lt;h3&gt;
  
  
  Control Plane
&lt;/h3&gt;

&lt;p&gt;In our Kubernetes (K8s) cluster, the control plane functions as "The Mastermind." It serves as the central management interface, overseeing various aspects of the cluster’s operations. The control plane stores the cluster’s state, continuously monitors the health of nodes, and takes the actions necessary to maintain optimal performance.&lt;/p&gt;

&lt;p&gt;What’s fascinating is that actions within the control plane can be initiated either manually or automatically. This duality provides administrators with flexibility in managing the cluster, allowing for both hands-on intervention and automated responses to changes in the cluster environment.&lt;/p&gt;

&lt;p&gt;The control plane is a foundational component that ensures the smooth functioning of our Kubernetes ecosystem, embodying the essence of control and coordination in the realm of distributed systems.&lt;/p&gt;

&lt;p&gt;To explain further, the control plane is made up of different parts, each providing the tools needed to control the cluster, though they don’t directly start or run the containers where your applications live.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Server&lt;/strong&gt;: The API Server is the control plane component that exposes the Kubernetes API. We use this API whenever we run commands with &lt;code&gt;kubectl&lt;/code&gt;. If the API Server goes down, we lose the ability to manage our cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Controller Manager&lt;/strong&gt;: As the name suggests, it’s responsible for monitoring and controlling our K8s cluster. It’s like a loop that monitors the cluster and performs actions when needed. For example, when we make a deployment, we set replicas, port access, and other details. The Controller Manager keeps an eye on the deployment and manages the cluster to ensure that our Pods work seamlessly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduler&lt;/strong&gt;: The Scheduler is like a project manager whose task is to place newly created Pods on the desired Nodes in our cluster. We can customize the scheduler to specify which Node a certain Pod should be placed on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Etcd&lt;/strong&gt;: Etcd is like a data center for K8s. It’s a distributed key-value store that holds every API object, including sensitive data stored in our Secrets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud Controller Manager&lt;/strong&gt;: The Cloud Controller Manager integrates Kubernetes with your cloud provider’s platform. It facilitates interactions between your cluster and its outside environment. This component is involved whenever Kubernetes objects change your cloud account, such as provisioning a load balancer, adding a block storage volume, or creating a virtual machine to act as a Node.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
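
&lt;p&gt;On most clusters you can see these control plane components running as Pods in the &lt;code&gt;kube-system&lt;/code&gt; namespace. Exact names vary by distribution, so treat this output as illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n kube-system
# NAME                             READY   STATUS    RESTARTS
# etcd-master                      1/1     Running   0
# kube-apiserver-master            1/1     Running   0
# kube-controller-manager-master   1/1     Running   0
# kube-scheduler-master            1/1     Running   0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;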

&lt;h3&gt;
  
  
  Nodes
&lt;/h3&gt;

&lt;p&gt;Nodes are the physical or virtual machines that host the Pods in your Kubernetes cluster. While you can technically run a cluster with just one Node, production environments typically use multiple Nodes to allow for horizontal scaling and high availability.&lt;/p&gt;

&lt;p&gt;Nodes join the cluster using a token issued by the control plane. After a Node is admitted, the control plane begins scheduling new Pods to it. Each Node runs various software components necessary to start containers and maintain communication with the control plane.&lt;br&gt;
Why didn’t the Node join the cluster party? Because it couldn’t find its token and was left out in the cold!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubelet&lt;/strong&gt;: Kubelet is the software running on each Node that acts as the control plane’s helper. It regularly checks in with the control plane to report the status of the Node’s workloads. When the control plane wants to schedule a new Pod on the Node, it contacts Kubelet. Kubelet is also in charge of running the Pod containers. It pulls the necessary images for new Pods and starts the containers. Once they’re running, Kubelet keeps an eye on them to make sure they stay healthy.&lt;br&gt;
Why did the Kubelet get a promotion? Because it was great at container management and never let the Pods crash the party!🎉&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kube Proxy&lt;/strong&gt;: The Kube Proxy component helps Nodes in your cluster communicate with each other. It sets up and maintains networking rules so that Pods exposed by Services can connect. If Kube Proxy fails, the Pods on that Node won't be reachable over the network.&lt;/p&gt;

&lt;p&gt;Why did the Kube Proxy get grounded? Because it kept breaking up the connections!😔&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container Runtime&lt;/strong&gt;: To run a container, we need a container runtime to start our beloved containers. containerd is the most popular option, but alternatives such as CRI-O and Docker Engine can be used instead.&lt;br&gt;
Why did the container feel alone? Because there was no container runtime to lift its mood!🥰&lt;/p&gt;

&lt;h2&gt;
  
  
  Customizing Kubernetes
&lt;/h2&gt;

&lt;p&gt;You heard it right: the architecture doesn’t stop here. Kubernetes offers many extension points we can add to our beloved cluster, like CRDs, admission webhooks, Helm charts, plugins, and more.&lt;/p&gt;
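
&lt;p&gt;As a tiny taste of one of those extension points, here is a sketch of a CustomResourceDefinition (CRD); the group and kind are invented purely for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: ships.example.com   # must be plural.group
spec:
  group: example.com
  names:
    kind: Ship
    plural: ships
    singular: ship
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;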

&lt;p&gt;I hope we’ve learned a lot about our lovely K8s. Some terms might sound new, but stay tuned on this journey to learn more about them!&lt;br&gt;
So the main thing that strikes my mind ...&lt;br&gt;
Why did the Kubernetes cluster start writing poetry?🤔&lt;br&gt;
Because it wanted to orchestrate its own verse of nodes and pods!😂&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>architecture</category>
      <category>learning</category>
    </item>
    <item>
<title>Which is good for making scrapers in Golang?</title>
      <dc:creator>7h3-3mp7y-m4n</dc:creator>
      <pubDate>Mon, 03 Jun 2024 13:38:42 +0000</pubDate>
      <link>https://dev.to/7h33mp7ym4n/which-is-good-for-making-scrappers-in-golang-4118</link>
      <guid>https://dev.to/7h33mp7ym4n/which-is-good-for-making-scrappers-in-golang-4118</guid>
<description>&lt;p&gt;I'm thinking of making a web scraper again, and I heard there is a new library named Rod. Should I give it a try, or should I stick with the OG Colly?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>go</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Understanding Kubernetes: Its Significance and Benefits</title>
      <dc:creator>7h3-3mp7y-m4n</dc:creator>
      <pubDate>Wed, 03 Jan 2024 16:53:01 +0000</pubDate>
      <link>https://dev.to/7h33mp7ym4n/understanding-kubernetes-its-significance-and-benefits-5390</link>
      <guid>https://dev.to/7h33mp7ym4n/understanding-kubernetes-its-significance-and-benefits-5390</guid>
      <description>&lt;h2&gt;
  
  
  What is Kubernetes?
&lt;/h2&gt;

&lt;p&gt;Kubernetes is on everyone's mind these days: whenever we hear about containers or microservices, Kubernetes always shines in our brain. But what is Kubernetes? And why do we need it?&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes ☸️
&lt;/h2&gt;

&lt;p&gt;Apart from being a giant ship's wheel, Kubernetes is a wonderful orchestration framework. It is generally used to manage containerized applications across physical machines, virtual machines, and hybrid environments.&lt;/p&gt;

&lt;p&gt;According to the Kubernetes website, Kubernetes is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“A portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;What does it actually do?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Can it help me cook lobster? Can I use it as the steering wheel for my car? Can it help me with my math homework? No, but we can do pretty amazing things with it, so let's roll back time and understand it from the Stone Age.&lt;/p&gt;

&lt;h2&gt;
  
  
  Going back in time
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w8Hnl2_S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4i8zp7klr13qtpee9c4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w8Hnl2_S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4i8zp7klr13qtpee9c4.gif" alt="Stone age man carving his computer" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Traditional deployment era:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l4EhmxaV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jv4v40ftpufg3zjlqsfl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l4EhmxaV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jv4v40ftpufg3zjlqsfl.png" alt="Traditional deployment image" width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Early on, organizations deployed applications on physical servers without the ability to clearly define resource boundaries for individual applications. This lack of resource delineation led to challenges in resource allocation. For instance, when multiple applications share a physical server, one application could monopolize most resources, adversely affecting the performance of other applications. While one solution was to run each application on a separate physical server, this approach proved inefficient and costly due to underutilized resources and the maintenance overhead associated with numerous physical servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Virtualized deployment era:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CpHi6TtB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63nm6bocl1jvq5w9vpnl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CpHi6TtB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63nm6bocl1jvq5w9vpnl.jpg" alt="Virtualized deployment" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a remedy, virtualization emerged as a solution. It enables the concurrent operation of multiple Virtual Machines (VMs) on a single CPU of a physical server. Virtualization ensures the segregation of applications within individual VMs, enhancing security by restricting access to one application's information by others.&lt;/p&gt;

&lt;p&gt;This approach optimizes resource utilization on a physical server, facilitating improved scalability. It streamlines the addition or updating of applications, reduces hardware expenses, and more. Virtualization enables the presentation of a pool of physical resources as a cluster of disposable virtual machines.&lt;/p&gt;

&lt;p&gt;Each VM functions as a complete entity, hosting all components, including its independent operating system, layered atop the virtualized hardware.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container deployment era:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IetlPNVx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/25pbs6dpvu28tqv47ot0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IetlPNVx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/25pbs6dpvu28tqv47ot0.jpg" alt="Container deployment" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Containers are like Virtual Machines (VMs), but they share the same Operating System (OS) among applications, making them lighter. Each container has its own space for files, CPU, memory, and processes, similar to a VM. The cool part is that containers can easily move around different clouds and operating systems because they're not tied to the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TMueL5fr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iebd1llndeh5r5a0414n.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TMueL5fr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iebd1llndeh5r5a0414n.jpeg" alt="conatiner toystory meme" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why are containers becoming so popular?
&lt;/h2&gt;

&lt;p&gt;These days containers have become so popular because they provide extra benefits, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Easy App Creation and Deployment:&lt;/strong&gt; Making container images is simpler and quicker compared to using Virtual Machine (VM) images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous Development and Deployment:&lt;/strong&gt; This lets you reliably and frequently build and deploy container images. If something goes wrong, you can quickly roll back to a previous image.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separation of Dev and Ops:&lt;/strong&gt; You create application container images before deploying them, keeping applications separate from infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability:&lt;/strong&gt; Provides info not just about the operating system but also about how the application is doing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistent Environments:&lt;/strong&gt; Your app runs the same whether it's on your computer, in testing, or in production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portability Across Platforms:&lt;/strong&gt; Works on various systems like Ubuntu, RHEL, CoreOS, and in the cloud or on your server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App-Focused Management:&lt;/strong&gt; Shifts the focus from running an operating system on virtual hardware to running an app on an OS using logical resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Micro-Services:&lt;/strong&gt; This breaks apps into smaller parts that can be easily managed and deployed, moving away from one big complicated setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Isolation:&lt;/strong&gt; Make sure your app performs predictably by keeping its resources separate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What problems does it solve ?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containers are like neat packages for your apps, making them easy to run. When your apps are running for real, you want to make sure these containers are looked after and that your apps stay up without any breaks. If one container stops, another should take over, right? Imagine if a system could take care of all this for you. Much simpler, right?&lt;/p&gt;

&lt;p&gt;That's where Kubernetes steps in as the superhero! It gives you a toolkit to handle your apps' wild adventures. Kubernetes takes charge of scaling, rescues your app in case it falters, and even throws in some cool deployment tricks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things Kubernetes provides you
&lt;/h2&gt;

&lt;p&gt;Now here are some Kubernetes special powers 🪄&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service Discovery and Load Balancing:&lt;/strong&gt; Kubernetes helps your apps connect and balance the workload. It can give each app a unique name or IP address and manage traffic so everything stays stable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage orchestration:&lt;/strong&gt; You can tell Kubernetes where to store your stuff – it could be on your computer, in the cloud, or somewhere else. It takes care of the details so you don't have to.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy Updates and Rollbacks:&lt;/strong&gt; Describe how you want your apps to be, and Kubernetes makes it happen. It can gradually switch things up, like bringing in new containers or getting rid of old ones, all without causing chaos.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart Resource Use:&lt;/strong&gt; Tell Kubernetes what your apps need in terms of power (CPU) and memory. It then cleverly fits them into the available space on your computer or server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-Healing Magic:&lt;/strong&gt; If an app messes up, Kubernetes jumps in. It restarts, replaces, or removes problem-causing apps and doesn't let them cause trouble until they're good to go.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret Keeper:&lt;/strong&gt; Got secret info like passwords or special keys? Kubernetes can keep them safe and update them without making a fuss.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch Jobs Too:&lt;/strong&gt; Not just for regular apps – Kubernetes can handle batch jobs and tasks for you, even replacing ones that misbehave.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling Made Easy:&lt;/strong&gt; Need more power or less? Just tell Kubernetes, and it scales your apps up or down with a snap – no tech hassle.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture behind Kubernetes?
&lt;/h2&gt;

&lt;p&gt;So here come the building blocks of Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9K8ttMtk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jz5ab2es2wcue2aqaajg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9K8ttMtk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jz5ab2es2wcue2aqaajg.png" alt="k8s architecture meme" width="800" height="1213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Haha, got you! But that really is the architecture behind Kubernetes. Let me break it down into an easy diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eof3cNJq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhxmpzx4iyn6dkzorbrm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eof3cNJq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhxmpzx4iyn6dkzorbrm.png" alt="k8s architecture" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let me demonstrate what each term does.&lt;br&gt;
&lt;strong&gt;But before that, I’ll tell you what Pods are.&lt;/strong&gt;&lt;br&gt;
A pod is the smallest thing you can deploy in Kubernetes. It's like a tiny package that holds one or more containers, which are like software boxes. These containers work together, sharing space and talking to each other. So, a pod is just a team of containers that hang out together and help run your apps in Kubernetes. It's the basic building block for getting stuff done in Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Control Plane
&lt;/h2&gt;

&lt;p&gt;The control plane is responsible for container orchestration and maintaining the desired state of the cluster. It acts like the captain of your containers. It has the following components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kube-apiserver&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Think of the kube-api server as the big boss in a Kubernetes team. It's like the main office where everyone, including regular folks and important team players, goes to communicate. When you use a tool like Kubectl to handle the team's tasks, you chat with this big boss through a special language (HTTP REST APIs).&lt;/p&gt;

&lt;p&gt;Now, inside the team, there are some fancy departments like the scheduler and the controller manager. They chat with the big boss using the same English (HTTP REST APIs) that outsiders use, while the big boss has a secret language (gRPC) reserved for talking to its record keeper, etcd.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;etcd&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Kubernetes, etcd is like the brain – a smart database that helps the team find each other (service discovery) and stores important info. It's the go-to source for keeping everything in sync.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;How does it work?&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;In plain terms, when you use Kubectl to check details about a Kubernetes object, you're essentially fetching that info from etcd. Likewise, when you deploy something like a pod, etcd keeps a record of it. Etcd is the behind-the-scenes storage that holds all the key details about what's happening in your Kubernetes setup.&lt;/p&gt;
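
&lt;p&gt;For instance, a read like the one below goes through the API server, which fetches the object's record from etcd and returns it (the pod name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod my-first-pod -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;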

&lt;p&gt;&lt;strong&gt;kube-scheduler&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Think of the Kube-scheduler in Kubernetes as a sort of matchmaker for pods and worker nodes.&lt;/p&gt;

&lt;p&gt;When you create a pod, you tell the scheduler what the pod needs, like how much CPU, memory, and other things. The scheduler's main job is to take this info and find the perfect spot (worker node) for the pod. It's like a super-smart assistant, making sure each pod gets the right home in the cluster that fits its needs.&lt;/p&gt;
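
&lt;p&gt;Those "needs" are expressed as resource requests in the pod spec, and the scheduler uses them when picking a node. This fragment is purely illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:
        cpu: "250m"     # a quarter of a CPU core
        memory: "64Mi"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;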

&lt;p&gt;&lt;strong&gt;Kube-controller-manager&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Think of the Kube controller manager in Kubernetes as the boss overseeing all the controllers.&lt;/p&gt;

&lt;p&gt;Now, controllers are like managers for different things in Kubernetes, such as pods, namespaces, jobs, and replicasets. They make sure everything is running smoothly. Even the Kube scheduler, which is like the team's scheduler, is managed by this boss – the Kube controller manager. So, it's the one in charge of keeping all the different parts of the Kubernetes team working together seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud-controller-manager&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When Kubernetes is deployed in cloud environments, the cloud controller manager acts as a bridge between Cloud Platform APIs and the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;This way the core Kubernetes components can work independently, allowing cloud providers to integrate with Kubernetes using plugins.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Worker Node Components
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Kubelet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Kubelet is like the worker on each team member's computer in the Kubernetes squad. It's not a container itself, instead, it's like a dedicated helper running in the background, managed by systemd.&lt;/p&gt;

&lt;p&gt;What it does is pretty important. It registers each team member's computer with the main hub (API server) and pays close attention to the team's instructions (podSpec). This special set of instructions says what containers should be in the team, their limits (like CPU and memory), and other important details like environment variables, storage spaces, and labels. So, the Kubelet is the go-to guy making sure everything runs smoothly on each team member's computer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kube proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Think of Kube-proxy as the guardian on each team member's computer in Kubernetes. It's like a quiet helper running in the background, making sure that when you ask for a bunch of team members (pods) using just one team name (Service), everything works smoothly. It handles load balancing and service discovery, ensuring messages reach the right team members. It's the traffic manager for the team.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;So how does this guardian work?&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Kube proxy talks to the API server to get the details about the Service (ClusterIP) and respective pod IPs &amp;amp; ports (endpoints). It also monitors for changes in service and endpoints.&lt;/p&gt;
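
&lt;p&gt;You can peek at the same Service and endpoint information that Kube proxy watches; the service name and IPs below are made up for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service my-service
# NAME         TYPE        CLUSTER-IP   PORT(S)
# my-service   ClusterIP   10.96.0.42   80/TCP

kubectl get endpoints my-service
# NAME         ENDPOINTS
# my-service   10.244.1.5:80,10.244.2.7:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;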

&lt;p&gt;&lt;strong&gt;Container Runtime&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The container runtime in Kubernetes is like the engine in every car of the group. It pulls images of the cars (container images) from the parking area (container registries), gets the cars running, assigns parking spots, and takes care of their well-being on the road (container lifecycle). Essentially, it makes sure everything runs smoothly for the containers on each car in the group.&lt;/p&gt;




&lt;p&gt;Keep an eye out for upcoming concepts that will be introduced soon along with my learning journey 😊.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>architecture</category>
      <category>beginners</category>
    </item>
    <item>
      <title>A Thorough Exploration of Kubernetes DaemonSets: An In-Depth Examination</title>
      <dc:creator>7h3-3mp7y-m4n</dc:creator>
      <pubDate>Thu, 05 Oct 2023 15:57:19 +0000</pubDate>
      <link>https://dev.to/7h33mp7ym4n/a-thorough-exploration-of-kubernetes-daemonsets-an-in-depth-examination-l1a</link>
      <guid>https://dev.to/7h33mp7ym4n/a-thorough-exploration-of-kubernetes-daemonsets-an-in-depth-examination-l1a</guid>
<description>&lt;p&gt;When I was learning about Kubernetes Deployments, I found that the pods were running on worker nodes but not giving me the output I expected. Later I figured out that Deployments are primarily focused on managing the desired number of replicas of my application. After diving further, I found out about DaemonSets.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a DaemonSet In Kubernetes?
&lt;/h2&gt;

&lt;p&gt;A DaemonSet ensures that a single pod runs on each worker node, which means we can't scale its pods within a node. And if a DaemonSet pod somehow gets deleted, the DaemonSet controller will create it again. When nodes join the cluster, Pods are provisioned onto them, and when nodes leave the cluster, the associated Pods are automatically removed through garbage collection. Deleting a DaemonSet triggers the cleanup of the Pods it had previously deployed.&lt;/p&gt;

&lt;p&gt;Let's understand DaemonSets with an example. Suppose there are 500 worker nodes and you deploy a DaemonSet: the DaemonSet controller will run one pod per worker node by default, for a total of 500 pods. That would be very expensive in the real world. However, using nodeSelector, nodeAffinity, taints, and tolerations, you can restrict the DaemonSet to run on specific nodes.&lt;/p&gt;

&lt;p&gt;As an example, say you have a specific number of worker nodes dedicated to platform tools (Ingress, Nginx, logging, etc.) and want to run DaemonSets related to platform tools only on those nodes. In this case, you can use a nodeSelector to run the DaemonSet pods only on the worker nodes dedicated to platform tooling.&lt;/p&gt;
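
&lt;p&gt;For that to work, the label has to exist on the nodes first. A sketch of how you might apply and verify it (the node name and label are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label nodes worker-node-1 node-type=platform-tools
kubectl get nodes -l node-type=platform-tools
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;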

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---JtWfoGt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ea7f0pgl6uz8f3pjjsvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---JtWfoGt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ea7f0pgl6uz8f3pjjsvb.png" alt="Daemon Pod" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Architecture with DaemonSet
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uI0ijX8C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uyvaq6g6pmxsc5s1aws8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uI0ijX8C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uyvaq6g6pmxsc5s1aws8.png" alt="K8s Architecture" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes cluster consists of one or more master nodes (control plane) responsible for managing the cluster and one or more worker nodes where applications run. Each worker node includes the Kube-proxy component, which runs as a daemon process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-life Use Cases of Kubernetes DaemonSet
&lt;/h2&gt;

&lt;p&gt;There are multiple use cases for DaemonSets in a Kubernetes cluster. Some of them are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster Log Collection: Deploying a log collector like Fluentd, Logstash, or Fluent Bit on every node allows you to centralize Kubernetes logging data, making it easier to monitor and troubleshoot issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Centralized logging helps in aggregating logs from various containers and pods, allowing for easier analysis and alerting.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster Monitoring: Deploying monitoring agents like Grafana and Prometheus Node Exporter on every node collects node-level metrics such as CPU usage, memory usage, and disk space.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Prometheus can scrape these metrics and store them for later analysis and alerting, providing insights into the health of cluster nodes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security and Compliance: Running tools like kube-bench on every node helps ensure that nodes comply with security benchmarks like CIS benchmarks. This enhances the security posture of the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deploying security agents, intrusion detection systems, or vulnerability scanners on specific nodes with sensitive data helps identify and mitigate security threats.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storage Provisioning: Running a storage plugin or system on every node can provide shared storage resources to the entire cluster. This allows pods to access and persist data in a scalable and reliable manner.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Storage solutions like Ceph, GlusterFS, or distributed file systems can be used to provide storage to Kubernetes workloads.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network Management: Deploying a network plugin or firewall on every node ensures consistent network policy enforcement across the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Network plugins like Calico, Flannel, or Cilium manage network connectivity between pods and enforce network policies such as network segmentation and security rules.&lt;/p&gt;

&lt;h2&gt;
  
  
  DaemonSet Example
&lt;/h2&gt;

&lt;p&gt;As of Kubernetes version 1.28, we can't deploy a DaemonSet with an imperative CLI command, but we can do it with a YAML config file using &lt;code&gt;kubectl apply -f filename.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let us assume that we are about to deploy Nginx on multiple nodes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector: # Specify node label selectors
        node-type: worker # target nodes labeled with "node-type: worker"
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This DaemonSet ensures that one pod with an Nginx container runs on each node labeled "node-type: worker". The Nginx container listens on port 80 and has resource requests and limits defined.&lt;/p&gt;
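
&lt;p&gt;After applying the manifest, you can check that a pod landed on each labeled node. The node name, filename, and output below are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label nodes worker-node-1 node-type=worker   # label a node first
kubectl apply -f nginx-daemonset.yaml
kubectl get daemonset nginx-daemonset
# NAME              DESIRED   CURRENT   READY   NODE SELECTOR
# nginx-daemonset   1         1         1       node-type=worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;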

&lt;h2&gt;
  
  
  Implement Taints and Tolerations in a DaemonSet
&lt;/h2&gt;

&lt;p&gt;Taints and tolerations are Kubernetes features that let you ensure pods are not placed on inappropriate nodes. We taint the nodes and add tolerations to the pod schema. The CLI command goes like &lt;code&gt;kubectl taint nodes node1 key1=value1:&amp;lt;Effect&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;There are three different kinds of effects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;NoSchedule&lt;/code&gt;: The scheduler will only place pods that tolerate the taint onto the tainted nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;PreferNoSchedule&lt;/code&gt;: The scheduler will try to avoid placing pods that don’t tolerate the taint onto the tainted nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;NoExecute&lt;/code&gt;: Kubernetes will evict already-running pods from the tainted nodes if they don’t tolerate the taint.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Over here I'm using &lt;code&gt;NoSchedule&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl taint nodes -l node-role=worker my-taint-key=my-taint-value:NoSchedule
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After adding the taint to the nodes, you can apply the DaemonSet YAML, but make sure it includes a matching toleration, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        node-role: worker # Node label key and value
      tolerations:
      - key: my-taint-key
        operator: Equal
        value: my-taint-value
        effect: NoSchedule
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
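
&lt;p&gt;To double-check which taints ended up on a node, or to undo the taint later, these commands can help (the node name is a placeholder; note that a trailing &lt;code&gt;-&lt;/code&gt; removes a taint):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Inspect the taints currently set on a node
kubectl describe node worker-node-1 | grep Taints

# Remove the taint again (the trailing "-" means "untaint")
kubectl taint nodes worker-node-1 my-taint-key=my-taint-value:NoSchedule-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;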



&lt;h2&gt;
  
  
  DaemonSet Node Affinity
&lt;/h2&gt;

&lt;p&gt;We can take even finer-grained control using &lt;code&gt;nodeAffinity&lt;/code&gt;. The DaemonSet controller then creates and manages pods only on the nodes matched by the affinity rules. Node affinity supports two kinds of rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/code&gt;: The pod must be scheduled on nodes that satisfy the specified rules; if no node matches, the pod remains unscheduled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/code&gt;: The scheduler prefers nodes that satisfy the rules, but the pod can be scheduled elsewhere if necessary.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rules can be written as simple label selectors or as more advanced &lt;code&gt;matchExpressions&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To use nodeAffinity in a DaemonSet, you can specify the affinity field in the pod template's spec. Here's an example of how to set up node affinity for a DaemonSet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        node-type: worker
      affinity: # Add node affinity rules here
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-type # Customize this based on your node labels
                operator: In
                values:
                - worker # Customize this based on your node labels
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
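
&lt;p&gt;If you would rather express a soft preference than a hard requirement, the affinity block can be swapped for a weighted &lt;code&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/code&gt; rule. A minimal sketch (the label key and value are illustrative, and soft preferences matter less for DaemonSets, which already run one pod per eligible node):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 50 # 1-100; nodes matching higher-weight terms are favored
            preference:
              matchExpressions:
              - key: node-type
                operator: In
                values:
                - worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;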



&lt;h2&gt;
  
  
  Rolling Update, Rollback, and Deleting DaemonSet
&lt;/h2&gt;

&lt;p&gt;Let us now learn how we can update, roll back, and delete a DaemonSet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rolling Update
&lt;/h2&gt;

&lt;p&gt;Rolling Updates refer to a method of deploying updates or changes to applications while ensuring that the service remains available and stable throughout the update process. Rolling updates are a key feature in maintaining high availability and minimizing service disruption. We can trigger one simply by editing the DaemonSet's YAML.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;Here's how rolling updates work:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Parallel Deployment: Instead of stopping all instances of the old version and starting all instances of the new version simultaneously, rolling updates deploy the new version incrementally, one or a few instances at a time.&lt;/li&gt;
&lt;li&gt;Gradual Transition: During a rolling update, a few new instances of the application are deployed, and traffic is gradually shifted from the old instances to the new ones. This allows the system to gracefully transition without causing a complete outage.&lt;/li&gt;
&lt;li&gt;Monitoring and Validation: The update process is continuously monitored for issues. Health checks and readiness probes are used to ensure that the new instances are healthy and ready to serve traffic before the old instances are taken down.&lt;/li&gt;
&lt;li&gt;Scaling and Load Balancing: Scaling and load balancing mechanisms may be employed to ensure that the system can handle increased traffic during the update, especially if the new version introduces changes that affect the application's resource utilization.&lt;/li&gt;
&lt;li&gt;Rollback: If issues are detected during the update, the process can be rolled back to the previous version, minimizing downtime and potential impact on users.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Strategy For Update
&lt;/h2&gt;

&lt;p&gt;Two parameters determine the pace of the rollout: &lt;code&gt;maxUnavailable&lt;/code&gt; and &lt;code&gt;maxSurge&lt;/code&gt;.&lt;br&gt;
They can be specified as an absolute number of pods or as a percentage of the desired pod count.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;There will always be at least &lt;code&gt;replicas - maxUnavailable&lt;/code&gt; pods available.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There will never be more than &lt;code&gt;replicas + maxSurge&lt;/code&gt; pods in total.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There will therefore be up to &lt;code&gt;maxUnavailable + maxSurge&lt;/code&gt; pods being updated at once.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
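
&lt;p&gt;In a DaemonSet these knobs live under &lt;code&gt;spec.updateStrategy&lt;/code&gt;; a minimal sketch (the values are examples to adjust for your cluster):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1 # update one node's pod at a time (a percentage like "10%" also works)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;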
&lt;h2&gt;
  
  
  Rollback
&lt;/h2&gt;

&lt;p&gt;Oh crap, we just updated the servers and now the new version is performing worse. What should we do to get things back to the way they were?&lt;/p&gt;

&lt;p&gt;The Rollback is always a savior when we run into error-based updates. Here is how you can do it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl rollout undo daemonset &amp;lt;daemonset-name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can also do it by editing the YAML. But what if the desired state we want is sitting several revisions back? When we run &lt;code&gt;kubectl rollout undo daemonset&lt;/code&gt; it takes us to the previous revision, but if we run it again it just cycles between the last and the current state. It is very frustrating to get stuck in a loop like this. The diagram below illustrates it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0OvolRYP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orvt3e68oogzqtu93wbh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0OvolRYP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orvt3e68oogzqtu93wbh.png" alt="Update node" width="800" height="710"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, to go back to our healthy V1 state we need to debug it, and what better way than describing it? Here is the command: &lt;code&gt;kubectl describe daemonset &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It is often recommended to apply changes by editing the YAML file for the DaemonSet, because many imperative CLI commands do not work with DaemonSets, so generally it's best to edit the config file.&lt;/p&gt;
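
&lt;p&gt;One way out of the undo loop is to pick an exact revision instead of toggling: &lt;code&gt;kubectl rollout&lt;/code&gt; keeps a numbered history for DaemonSets, sketched here with a placeholder DaemonSet name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List the recorded revisions of the DaemonSet
kubectl rollout history daemonset nginx-daemonset

# Inspect what a specific revision contained
kubectl rollout history daemonset nginx-daemonset --revision=1

# Jump straight back to that revision instead of cycling with plain "undo"
kubectl rollout undo daemonset nginx-daemonset --to-revision=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;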
&lt;h2&gt;
  
  
  Deletion
&lt;/h2&gt;

&lt;p&gt;Well, for deleting a DaemonSet we can simply use the CLI command &lt;code&gt;kubectl delete daemonset &amp;lt;daemonset-name&amp;gt;&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  DaemonSet Pod Priority
&lt;/h2&gt;

&lt;p&gt;We can assign a higher-priority &lt;code&gt;PriorityClass&lt;/code&gt; to the DaemonSet in case you are running critical system components as a DaemonSet. This ensures that the DaemonSet pods are not preempted by lower-priority, less critical pods.&lt;/p&gt;

&lt;p&gt;Here is the YAML File to understand it better.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000 # Set the priority value (adjust as needed)

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      priorityClassName: high-priority # Assigned the PriorityClass here
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Resolving Issues with DaemonSets
&lt;/h2&gt;

&lt;p&gt;A DaemonSet is considered unhealthy when one of its pods is not running on a node. The generic reasons are a pod status of &lt;code&gt;CrashLoopBackOff&lt;/code&gt;, a pod stuck in &lt;code&gt;Pending&lt;/code&gt;, or an error state. We can fix this by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The pod might be running out of resources. We can lower the requested CPU and memory of the DaemonSet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We can move some pods off the affected nodes to free up resources, using taints and tolerations to prevent pods from running on certain nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the node selector in the DaemonSet spec to ensure it matches your node labels. Incorrect node selectors can lead to pods not being scheduled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use debugging tools like &lt;code&gt;kubectl exec&lt;/code&gt;, &lt;code&gt;kubectl describe&lt;/code&gt;, and &lt;code&gt;kubectl get events&lt;/code&gt; to gather more information about the issue.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
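
&lt;p&gt;A typical debugging pass over an unhealthy DaemonSet might look like this (the DaemonSet name and label are from the earlier example; &lt;code&gt;&amp;lt;pod-name&amp;gt;&lt;/code&gt; is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Overall status: desired vs. ready pod counts, plus recent events
kubectl describe daemonset nginx-daemonset

# Find the broken pod and inspect it
kubectl get pods -l app=nginx -o wide
kubectl describe pod &amp;lt;pod-name&amp;gt;
kubectl logs &amp;lt;pod-name&amp;gt;

# Cluster-wide events, newest last
kubectl get events --sort-by=.metadata.creationTimestamp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;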

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
