<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Laxman Patel</title>
    <description>The latest articles on DEV Community by Laxman Patel (@imlucky883).</description>
    <link>https://dev.to/imlucky883</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2825652%2F71529946-7b45-49e2-83ee-ec2aacffed5f.png</url>
      <title>DEV Community: Laxman Patel</title>
      <link>https://dev.to/imlucky883</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/imlucky883"/>
    <language>en</language>
    <item>
      <title>CNI Demystified: The Backbone of Kubernetes Networking</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Wed, 01 Oct 2025 06:04:30 +0000</pubDate>
      <link>https://dev.to/imlucky883/cni-demystified-the-backbone-of-kubernetes-networking-1mm9</link>
      <guid>https://dev.to/imlucky883/cni-demystified-the-backbone-of-kubernetes-networking-1mm9</guid>
      <description>&lt;p&gt;People can get puzzled when they need to choose one of the available &lt;a href="https://github.com/containernetworking/cni" rel="noopener noreferrer"&gt;networking solutions for Kubernetes.&lt;/a&gt; As you can see there are a lot of solutions.&lt;/p&gt;

&lt;p&gt;Most of the mentioned solutions are &lt;code&gt;container network interface (CNI)&lt;/code&gt; plug-ins. These are the cornerstones of Kubernetes networking, and it is essential to understand them to make an informed decision about which networking solution to choose. It is also useful to know some details about the internals of your preferred networking solution. This way, you will be able to choose the Kubernetes networking features you need, analyze networking performance, security, and reliability, and troubleshoot low-level issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Some basics behind CNI&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When Kubernetes starts up your pod (a logical group of containers), one of the things it does is create an “infra container” – a container whose network namespace (among other namespaces) is shared by all the containers in the pod.&lt;/p&gt;

&lt;p&gt;This means that any networking elements you create in that infra container will be available to all the containers in the pod. This also means that as containers come and go within that pod, the networking stays stable.&lt;/p&gt;

&lt;p&gt;If you have a running Kubernetes cluster (with some pods running), you can perform a &lt;code&gt;docker ps&lt;/code&gt; and see containers running the &lt;a href="http://gcr.io/google_containers/pause-amd64" rel="noopener noreferrer"&gt;&lt;code&gt;gcr.io/google_containers/pause-amd64&lt;/code&gt;&lt;/a&gt; image with a command that looks like &lt;code&gt;/pause&lt;/code&gt;. In theory, this container is lightweight enough that it “shouldn’t really die” and should remain available to all containers within that pod.&lt;/p&gt;

&lt;p&gt;As Kubernetes creates this infra container, it will also invoke the executable specified in the &lt;code&gt;/etc/cni/net.d/*conf&lt;/code&gt; files.&lt;/p&gt;
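&lt;p&gt;To make the hand-off concrete, here is a rough sketch of the mechanism: the runtime sets parameters such as &lt;code&gt;CNI_COMMAND&lt;/code&gt; and &lt;code&gt;CNI_IFNAME&lt;/code&gt; as environment variables and pipes the network configuration to the plugin’s stdin. The stub below is purely illustrative – a real plugin is a separate executable, not a shell function:&lt;/p&gt;

```shell
# Illustrative stub: a CNI "plugin" that just echoes the parameters the
# runtime would pass it via environment variables. Real plugins are separate
# executables, invoked with the JSON network config on stdin.
stub_cni() {
  echo "command=$CNI_COMMAND ifname=$CNI_IFNAME"
}

CNI_COMMAND=ADD
CNI_IFNAME=eth0
stub_cni   # command=ADD ifname=eth0
```

In the real invocation, the runtime additionally pipes the JSON config found under &lt;code&gt;/etc/cni/net.d/&lt;/code&gt; to the plugin’s stdin.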

&lt;p&gt;If you want even more detail, you can check out the &lt;a href="https://github.com/containernetworking/cni/blob/master/SPEC.md" rel="noopener noreferrer"&gt;CNI specification itself.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The official documentation &lt;a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model" rel="noopener noreferrer"&gt;outlines a number of requirements&lt;/a&gt; that any CNI plugin implementation should support. Rephrased slightly, a CNI plugin must provide at least the following two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Connectivity&lt;/strong&gt; - making sure that a Pod gets its default &lt;code&gt;eth0&lt;/code&gt; interface with IP reachable from the root network namespace of the hosting Node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reachability&lt;/strong&gt; - making sure that Pods on different Nodes can reach each other directly (without NAT).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Configuring a CNI plug-in&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While it’s not required, you probably want a Kubernetes environment set up for yourself where you can experiment with deploying the plugins. In my case, I used two EC2 instances on AWS as a master and a worker node, both of type &lt;code&gt;t2.medium&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;master[ip-10-0-0-210] ; worker[ip-10-0-10-51]&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Using kubeadm to deploy Kubernetes&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We will be using kubeadm to configure and run the Kubernetes components on our EC2 instances. You can follow the script from the GitHub repo to set up the cluster. Once done, check which subnets are allocated from the pod network range to the master and worker nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe node ip-10-0-0-210 | &lt;span class="nb"&gt;grep &lt;/span&gt;PodCIDR
PodCIDR:                     10.244.0.0/24

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe node ip-10-0-10-51| &lt;span class="nb"&gt;grep &lt;/span&gt;PodCIDR
PodCIDR:                     10.244.1.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see from the output, the whole pod network range (&lt;code&gt;10.244.0.0/20&lt;/code&gt;) has been divided into smaller subnets, and each node received its own subnet. This means that the master node can use any of the &lt;code&gt;10.244.0.0&lt;/code&gt;–&lt;code&gt;10.244.0.255&lt;/code&gt; IPs for its Pods, and the worker node uses the &lt;code&gt;10.244.1.0&lt;/code&gt;–&lt;code&gt;10.244.1.255&lt;/code&gt; IPs.&lt;br&gt;
&lt;/p&gt;
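&lt;p&gt;As a quick sanity check of this split (plain arithmetic, nothing cluster-specific), a /20 range contains 2^(24-20) = 16 per-node /24 subnets, each with 254 usable addresses:&lt;/p&gt;

```shell
# Number of /24 node subnets that fit in the /20 pod network range
echo $(( 2 ** (24 - 20) ))       # 16
# Usable addresses per /24 subnet (minus network and broadcast addresses)
echo $(( 2 ** (32 - 24) - 2 ))   # 254
```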

&lt;p&gt;At this point, both nodes are still in the &lt;code&gt;NotReady&lt;/code&gt; state, because no working CNI plugin is in place yet:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME            STATUS   ROLES           AGE   VERSION
ip-10-0-0-210   NotReady    control-plane   34m   v1.31.5
ip-10-0-10-51   NotReady    &amp;lt;none&amp;gt;          20m   v1.31.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1 : Creation of the plugin configuration file
&lt;/h3&gt;

&lt;p&gt;The first thing you should do is create the plug-in configuration. Save the following as &lt;code&gt;/etc/cni/net.d/10-bash-cni-plugin.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"cniVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"0.4.0"&lt;/span&gt;, &lt;span class="c"&gt;#  version of the CNI specification&lt;/span&gt;
        &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"mynet"&lt;/span&gt;,       &lt;span class="c"&gt;#  name of the plugin&lt;/span&gt;
        &lt;span class="s2"&gt;"type"&lt;/span&gt;: &lt;span class="s2"&gt;"bash-cni"&lt;/span&gt;,    &lt;span class="c"&gt;#  plugin written in Bash&lt;/span&gt;
        &lt;span class="s2"&gt;"network"&lt;/span&gt;: &lt;span class="s2"&gt;"10.244.0.0/20"&lt;/span&gt;,
        &lt;span class="s2"&gt;"subnet"&lt;/span&gt;: &lt;span class="s2"&gt;"10.244.0.0/24"&lt;/span&gt; &lt;span class="c"&gt;# node-cidr-range&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This must be done on both the master and worker nodes. Don’t forget to use &lt;code&gt;10.244.0.0/24&lt;/code&gt; as the subnet for the master and &lt;code&gt;10.244.1.0/24&lt;/code&gt; for the worker. Also note that the &lt;code&gt;#&lt;/code&gt; comments above are annotations only – JSON does not support comments, so they must not appear in the actual file. It is also very important that you put the file into the &lt;code&gt;/etc/cni/net.d/&lt;/code&gt; folder.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;kubelet uses &lt;code&gt;/etc/cni/net.d/&lt;/code&gt; to discover CNI plug-ins.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The first three parameters in the configuration (&lt;code&gt;cniVersion&lt;/code&gt;, &lt;code&gt;name&lt;/code&gt;, and &lt;code&gt;type&lt;/code&gt;) are mandatory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cniVersion&lt;/code&gt; is used to determine the CNI version used by the plugin&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;name&lt;/code&gt; is just the network name&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;type&lt;/code&gt; refers to the file name of the CNI plug-in executable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;network&lt;/code&gt; and &lt;code&gt;subnet&lt;/code&gt; parameters are custom parameters; they are not part of the CNI specification, and later we will see exactly how the &lt;code&gt;bash-cni&lt;/code&gt; network plug-in uses them.&lt;/p&gt;
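&lt;p&gt;To sketch how such custom parameters reach the plugin: the whole configuration is delivered on the plugin’s stdin, so a shell plugin can extract fields from the JSON itself. The snippet below is a simplified illustration – a variable stands in for stdin, and the &lt;code&gt;sed&lt;/code&gt; pattern is far cruder than a real JSON parser:&lt;/p&gt;

```shell
# Simplified sketch: pulling the custom "subnet" field out of the network
# config JSON. The config normally arrives on the plugin's stdin; a variable
# stands in for it here.
config='{"cniVersion":"0.4.0","name":"mynet","type":"bash-cni","network":"10.244.0.0/20","subnet":"10.244.0.0/24"}'

subnet=$(echo "$config" | sed 's/.*"subnet": *"\([^"]*\)".*/\1/')
echo "$subnet"   # 10.244.0.0/24
```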

&lt;h3&gt;
  
  
  Step 2 : Preparation of a network bridge on both master and worker VMs
&lt;/h3&gt;

&lt;p&gt;A network bridge is a device that aggregates network packets from multiple network interfaces. A bridge is analogous to a network switch.&lt;/p&gt;

&lt;p&gt;The bridge can also have its own MAC and IP addresses, so each container sees the bridge as another device plugged into the same network. We reserve the &lt;code&gt;10.244.0.1&lt;/code&gt; IP address for the bridge on the master VM and &lt;code&gt;10.244.1.1&lt;/code&gt; for the bridge on the worker VM. The following commands can be used to create and configure the bridge with the &lt;code&gt;cni0&lt;/code&gt; name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;brctl addbr cni0
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;cni0 up
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add &amp;lt;bridge-ip&amp;gt;/24 dev cni0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands create the bridge, bring it up, and assign an IP address to it. The last command also implicitly creates a route, so that all traffic destined for the pod CIDR range local to the current node is redirected to the &lt;code&gt;cni0&lt;/code&gt; network interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 : Creation of the plugin binary
&lt;/h3&gt;

&lt;p&gt;The plug-in binary must be placed in the &lt;code&gt;/opt/cni/bin/&lt;/code&gt; folder, and its name must exactly match the &lt;code&gt;type&lt;/code&gt; parameter in the plug-in configuration (&lt;code&gt;bash-cni&lt;/code&gt;). Its contents can be found in the &lt;a href="https://github.com/Imlucky883/cni-plugin/blob/main/bash-cni" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;. After you put the plug-in in the correct folder, don’t forget to make it executable by running &lt;code&gt;sudo chmod +x bash-cni&lt;/code&gt;. This should be done on both master and worker VMs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;/opt/cni/bin&lt;/code&gt; folder stores all the CNI plug-ins.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The CNI plugin binary supports five main commands:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ADD&lt;/strong&gt;: This command is invoked by the container runtime to create a new network interface for a container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DEL&lt;/strong&gt;: This command is used to delete a network interface when a container is removed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CHECK&lt;/strong&gt;: This command checks whether a given configuration is valid and operational.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VERSION&lt;/strong&gt;: This command retrieves the version of the CNI plugin being used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GC (Garbage Collection)&lt;/strong&gt;: This command, added in newer versions of the CNI specification, is used to clean up unused resources or stale configurations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
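&lt;p&gt;Structurally, a plugin’s entry point is a dispatch on the &lt;code&gt;CNI_COMMAND&lt;/code&gt; environment variable. The skeleton below is illustrative only – the handlers are stubs, whereas the real &lt;code&gt;bash-cni&lt;/code&gt; handlers perform the actual network setup:&lt;/p&gt;

```shell
# Illustrative skeleton: dispatching on $CNI_COMMAND, as a CNI plugin does.
# All handlers are stubs; a real plugin would create/delete interfaces here.
cni_dispatch() {
  case "$CNI_COMMAND" in
    ADD)     echo "stub: create the pod network interface" ;;
    DEL)     echo "stub: delete the pod network interface" ;;
    CHECK)   echo "stub: verify the configuration" ;;
    VERSION) echo '{"cniVersion":"0.4.0"}' ;;
    *)       echo "unknown CNI command: $CNI_COMMAND"; return 1 ;;
  esac
}

CNI_COMMAND=VERSION
cni_dispatch   # {"cniVersion":"0.4.0"}
```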

&lt;h3&gt;
  
  
  Step 4 : Testing the plugin
&lt;/h3&gt;

&lt;p&gt;Now, if you execute the &lt;code&gt;kubectl get node&lt;/code&gt; command, you should see both nodes go into the “Ready” state. So, let’s deploy an application and see how it works. But before we can do this, we should “untaint” the master node. By default, the scheduler will not place any pods on the master node, because it is “tainted.” Since we want to test cross-node container communication, we need to deploy pods on the master as well as on the worker. The taint can be removed using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl taint nodes ip-10-0-0-210 node-role.kubernetes.io/control-plane:NoSchedule-
ip-10-0-0-210 untainted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; : Make sure you install &lt;code&gt;nmap&lt;/code&gt; on both of your nodes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Next, let’s use this test &lt;a href="https://github.com/Imlucky883/cni-plugin/blob/main/manifests/master-deploy.yaml" rel="noopener noreferrer"&gt;deployment&lt;/a&gt; to validate the CNI plug-in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/Imlucky883/cni-plugin/blob/main/manifests/master-deploy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we are deploying four simple pods: two on the master and two on the worker. (Pay attention to how the &lt;code&gt;nodeName&lt;/code&gt; property tells each pod where it should be deployed.) On the master we run two NGINX pods, while on the worker we run two BusyBox pods with a &lt;code&gt;sleep&lt;/code&gt; command. Now, let’s run &lt;code&gt;kubectl get pod&lt;/code&gt; to make sure that all pods are healthy and get their IP addresses. When I checked, the pods were stuck in the Pending state, and after describing them I found that the error was with the CNI plugin.&lt;/p&gt;

&lt;p&gt;Checking the plugin’s log at &lt;code&gt;/var/log/bash-cni.log&lt;/code&gt;, I saw the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Error: any valid address is expected rather than &lt;span class="s2"&gt;"(10.244.0.1)"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
CNI &lt;span class="nb"&gt;command&lt;/span&gt;: DEL
stdin: &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"cniVersion"&lt;/span&gt;:&lt;span class="s2"&gt;"0.4.0"&lt;/span&gt;,&lt;span class="s2"&gt;"name"&lt;/span&gt;:&lt;span class="s2"&gt;"mynet"&lt;/span&gt;,&lt;span class="s2"&gt;"network"&lt;/span&gt;:&lt;span class="s2"&gt;"10.244.0.0/20"&lt;/span&gt;,&lt;span class="s2"&gt;"subnet"&lt;/span&gt;:&lt;span class="s2"&gt;"10.244.0.0/24"&lt;/span&gt;,&lt;span class="s2"&gt;"type"&lt;/span&gt;:&lt;span class="s2"&gt;"bash-cni"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
CNI &lt;span class="nb"&gt;command&lt;/span&gt;: ADD
stdin: &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"cniVersion"&lt;/span&gt;:&lt;span class="s2"&gt;"0.4.0"&lt;/span&gt;,&lt;span class="s2"&gt;"name"&lt;/span&gt;:&lt;span class="s2"&gt;"mynet"&lt;/span&gt;,&lt;span class="s2"&gt;"network"&lt;/span&gt;:&lt;span class="s2"&gt;"10.244.0.0/20"&lt;/span&gt;,&lt;span class="s2"&gt;"subnet"&lt;/span&gt;:&lt;span class="s2"&gt;"10.244.0.0/24"&lt;/span&gt;,&lt;span class="s2"&gt;"type"&lt;/span&gt;:&lt;span class="s2"&gt;"bash-cni"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
Allocated container IP: 10.244.0.61
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After searching online for over an hour, I found that the way &lt;code&gt;nmap&lt;/code&gt; prints IP addresses can include extra characters like parentheses. I added &lt;code&gt;tr -d '()'&lt;/code&gt; to the plugin script, which fixed the issue, and IPs were then allocated to the Pods. Below is the line I changed in &lt;code&gt;/opt/cni/bin/bash-cni&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;all_ips&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;nmap &lt;span class="nt"&gt;-sL&lt;/span&gt; &lt;span class="nv"&gt;$subnet&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Nmap scan report"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $NF}'&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'()'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
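&lt;p&gt;To see why the &lt;code&gt;tr -d '()'&lt;/code&gt; step matters, here is a standalone reproduction with a made-up sample line of nmap output (when a hostname resolves, nmap prints the IP in parentheses as the last field):&lt;/p&gt;

```shell
# Reproduction of the parsing problem: nmap prints "host (ip)" when a name
# is available, so the last awk field still carries the parentheses.
line="Nmap scan report for ip-10-244-0-61.ec2.internal (10.244.0.61)"

echo "$line" | awk '{print $NF}'                # (10.244.0.61)  - broken
echo "$line" | awk '{print $NF}' | tr -d '()'   # 10.244.0.61    - fixed
```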



&lt;p&gt;Now, when I checked the Pod status, all the pods had successfully been allocated an IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE   READINESS GATES
busybox-deployment-787d986855-dxq7z   1/1     Running   0          22s   10.244.1.2   ip-10-0-10-51   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
busybox-deployment-787d986855-jxtx9   1/1     Running   0          22s   10.244.1.3   ip-10-0-10-51   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nginx-deployment-6b874d4659-bmnk9     1/1     Running   0          30m   10.244.0.3   ip-10-0-0-210   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nginx-deployment-6b874d4659-pkn2n     1/1     Running   0          30m   10.244.0.2   ip-10-0-0-210   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the objective is to test pod-to-pod, pod-to-node, and pod-to-external connectivity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; busybox-deployment-787d986855-dxq7z &lt;span class="nt"&gt;--&lt;/span&gt; sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/ ping 10.0.10.51 &lt;span class="c"&gt;# can ping own host&lt;/span&gt;
PING 10.0.10.51 &lt;span class="o"&gt;(&lt;/span&gt;10.0.10.51&lt;span class="o"&gt;)&lt;/span&gt;: 56 data bytes
64 bytes from 10.0.10.51: &lt;span class="nb"&gt;seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;64 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.070 ms
64 bytes from 10.0.10.51: &lt;span class="nb"&gt;seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;64 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.071 ms

&lt;span class="nv"&gt;$ &lt;/span&gt;ping 10.0.0.210 &lt;span class="c"&gt;# can’t ping different host &lt;/span&gt;
PING 10.0.0.210 &lt;span class="o"&gt;(&lt;/span&gt;10.0.0.210&lt;span class="o"&gt;)&lt;/span&gt;: 56 data bytes

&lt;span class="nv"&gt;$ &lt;/span&gt;ping 10.244.0.2 &lt;span class="c"&gt;# can ping a pod on the same host&lt;/span&gt;
64 bytes from 10.244.0.2: &lt;span class="nb"&gt;seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;64 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.092 ms
64 bytes from 10.244.0.2: &lt;span class="nb"&gt;seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;64 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.055 ms

&lt;span class="nv"&gt;$ &lt;/span&gt;ping 10.244.1.3 &lt;span class="c"&gt;# can’t ping a pod on a different host&lt;/span&gt;
PING 10.244.1.3 &lt;span class="o"&gt;(&lt;/span&gt;10.244.1.3&lt;span class="o"&gt;)&lt;/span&gt;: 56 data bytes

&lt;span class="nv"&gt;$ &lt;/span&gt;ping 108.177.121.113 &lt;span class="c"&gt;# can’t ping any external address&lt;/span&gt;
PING 108.177.121.113 &lt;span class="o"&gt;(&lt;/span&gt;108.177.121.113&lt;span class="o"&gt;)&lt;/span&gt;: 56 data bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the only things that actually work are pod-to-pod communication on the same host and pod-to-host communication.&lt;/p&gt;

&lt;h3&gt;
  
  
  Case 1 : Can’t ping external address
&lt;/h3&gt;

&lt;p&gt;The fact that our containers can’t reach the Internet should come as no surprise. They are located in a private address range (&lt;code&gt;10.244.0.0/20&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31k16usfgtkw2hcde17l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31k16usfgtkw2hcde17l.png" alt="1" width="374" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to fix this, we should set up network address translation (NAT) on the host VM. NAT is a mechanism that replaces the source IP address in an outgoing packet with the IP address of the host VM. The host keeps track of the translation (via connection tracking), so when a response arrives at the host VM, the original address is restored and the packet is forwarded to the container network interface. You can easily set up NAT using the following two commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-A&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-s&lt;/span&gt; 10.244.0.0/24 &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; cni0 &lt;span class="nt"&gt;-j&lt;/span&gt; MASQUERADE &lt;span class="c"&gt;# on master&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-A&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-s&lt;/span&gt; 10.244.1.0/24 &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; cni0 &lt;span class="nt"&gt;-j&lt;/span&gt; MASQUERADE &lt;span class="c"&gt;# on worker&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that here we are NATing only packets whose source IP belongs to the local pod subnet and which are not headed for the &lt;code&gt;cni0&lt;/code&gt; bridge. The &lt;code&gt;! -o cni0&lt;/code&gt; condition in the iptables rule ensures that NAT is &lt;strong&gt;not applied&lt;/strong&gt; to traffic that stays within the cluster (i.e., traffic sent to other pods via the &lt;code&gt;cni0&lt;/code&gt; bridge). Only traffic leaving the cluster for external destinations is NATed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Case 2 : Can’t ping pod on the other host
&lt;/h3&gt;

&lt;p&gt;Pods on different hosts can’t talk to each other. If you think about it, this makes perfect sense: when we send a request from the &lt;code&gt;10.244.0.4&lt;/code&gt; Pod to the &lt;code&gt;10.244.1.3&lt;/code&gt; Pod, we never specified that the request should be routed through the &lt;code&gt;10.0.10.51&lt;/code&gt; host. Usually, in such cases, we can rely on the &lt;code&gt;ip route&lt;/code&gt; command to set up additional routes. If we carried out this experiment on bare-metal servers directly connected to each other, we could do something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntry9pxfpozmd6p41kmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntry9pxfpozmd6p41kmh.png" alt="2" width="800" height="529"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ip route add 10.244.1.0/24 via 10.0.10.51 dev enX0 &lt;span class="c"&gt;# run on master &lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;ip route add 10.244.0.0/24 via 10.0.0.210 dev enX0 &lt;span class="c"&gt;# run on worker&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>kubernetes</category>
      <category>cni</category>
      <category>devops</category>
    </item>
    <item>
      <title>PriorityClass in Kubernetes: The VIP Pass for Your Pods</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Wed, 01 Oct 2025 05:47:09 +0000</pubDate>
      <link>https://dev.to/imlucky883/priorityclass-in-kubernetes-the-vip-pass-for-your-pods-4knl</link>
      <guid>https://dev.to/imlucky883/priorityclass-in-kubernetes-the-vip-pass-for-your-pods-4knl</guid>
      <description>&lt;p&gt;Imagine you're at an airport, and there's a long queue at security. But suddenly, a VIP with a priority pass walks straight through a special lane, skipping the entire line. Kubernetes has a similar system for pods—it’s called &lt;strong&gt;PriorityClass&lt;/strong&gt;! 🚀&lt;/p&gt;

&lt;p&gt;In this article, we’ll break down what &lt;strong&gt;PriorityClass&lt;/strong&gt; is, why it matters, and when you should use it to keep your Kubernetes workloads running smoothly.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is PriorityClass?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;PriorityClass&lt;/strong&gt; is a Kubernetes feature that assigns priority levels to different pods. It helps the scheduler decide which pods should be scheduled first &lt;strong&gt;and&lt;/strong&gt; which ones should be evicted when resources are limited.&lt;/p&gt;

&lt;p&gt;Think of it as a VIP pass for workloads—&lt;strong&gt;critical pods get scheduled first&lt;/strong&gt;, while less important ones may have to wait or even be evicted if resources run out.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How it Works&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each &lt;strong&gt;PriorityClass&lt;/strong&gt; has a numeric value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;higher the value&lt;/strong&gt;, the more important the pod is.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the cluster runs out of resources, lower-priority pods &lt;strong&gt;may get evicted&lt;/strong&gt; to make room for higher-priority ones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also set &lt;strong&gt;preemption policies&lt;/strong&gt; to decide whether a pod should evict lower-priority ones or not.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why is PriorityClass Helpful?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Kubernetes works best when resources are efficiently managed. PriorityClass helps by:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Ensuring critical workloads always run&lt;/strong&gt; (e.g., system monitoring, security agents)&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Preventing resource starvation&lt;/strong&gt; for high-importance services&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Handling overload situations&lt;/strong&gt; by gracefully evicting low-priority pods&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Improving cluster resilience&lt;/strong&gt; by making sure essential services don’t get disrupted&lt;/p&gt;

&lt;p&gt;Without PriorityClass, all pods are treated equally, which can lead to important services getting stuck in a pending state while non-essential ones consume resources.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;When Should You Use PriorityClass?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here are some real-world scenarios where &lt;strong&gt;PriorityClass&lt;/strong&gt; is a lifesaver:&lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Critical System Services :&lt;/strong&gt; Monitoring, logging, and security services should &lt;strong&gt;always&lt;/strong&gt; be running. Assigning them a &lt;strong&gt;high-priority class&lt;/strong&gt; ensures they don’t get evicted when the cluster is under pressure.&lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Multi-Tenant Clusters :&lt;/strong&gt; If multiple teams share a Kubernetes cluster, you might want to &lt;strong&gt;prioritize production workloads&lt;/strong&gt; over development or testing environments.&lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Failover &amp;amp; Disaster Recovery :&lt;/strong&gt; During &lt;strong&gt;failovers&lt;/strong&gt;, critical services need to come back online ASAP. Using PriorityClass ensures your key services recover before anything else.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;How to Implement PriorityClass in Kubernetes ?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The workflow below best explains how PriorityClass works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcrlobzcyb6khztx5w4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcrlobzcyb6khztx5w4n.png" alt="DC" width="800" height="734"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scheduling.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PriorityClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;high-priority&lt;/span&gt;
&lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1000000&lt;/span&gt;
&lt;span class="na"&gt;preemptionPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PreemptLowerPriority&lt;/span&gt;
&lt;span class="na"&gt;globalDefault&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;High&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;priority&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;class&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;critical&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;services"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;value: 1000000&lt;/code&gt; → Higher value means higher priority.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;preemptionPolicy: PreemptLowerPriority&lt;/code&gt; → This pod can evict lower-priority pods if resources are tight.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;globalDefault: false&lt;/code&gt; → This is not the default priority (you must explicitly assign it to pods).&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
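&lt;p&gt;You can list the priority classes in the cluster; the built-in &lt;code&gt;system-cluster-critical&lt;/code&gt; and &lt;code&gt;system-node-critical&lt;/code&gt; classes should appear alongside any you create:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get priorityclass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;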

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;low-priority-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apache&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apache2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Assigning PriorityClass to a Pod&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;high-priority-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;priorityClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;high-priority&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, whenever &lt;strong&gt;high-priority-pod&lt;/strong&gt; is deployed, Kubernetes will give it priority over lower-priority pods.&lt;/p&gt;




&lt;h2&gt;
  
  
  Important Things to Note
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;To mark a Pod as critical, set that Pod's &lt;code&gt;priorityClassName&lt;/code&gt; to &lt;code&gt;system-cluster-critical&lt;/code&gt; or &lt;code&gt;system-node-critical&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;system-node-critical&lt;/code&gt; is the highest available priority, even higher than &lt;code&gt;system-cluster-critical&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All control-plane components are marked with &lt;code&gt;system-node-critical&lt;/code&gt;, as you can see below:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;nodeName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind-control-plane&lt;/span&gt;
  &lt;span class="s"&gt;preemptionPolicy&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PreemptLowerPriority&lt;/span&gt;
  &lt;span class="s"&gt;priority&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2000001000&lt;/span&gt;
  &lt;span class="na"&gt;priorityClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;system-node-critical&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;
&lt;p&gt;The &lt;code&gt;preemptionPolicy&lt;/code&gt; field in a &lt;strong&gt;PriorityClass&lt;/strong&gt; resource can take two possible values:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;PreemptLowerPriority&lt;/code&gt; (default): allows preemption, meaning that if a pod using this priority class is scheduled but there aren't enough resources, it can evict (preempt) lower-priority pods to make space.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Never&lt;/code&gt;: disables preemption, meaning that if there aren't enough resources, the pod will remain in a pending state instead of evicting lower-priority pods.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;A static pod marked as critical can't be evicted. However, non-static pods marked as critical are always rescheduled.&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;
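&lt;p&gt;As a sketch, a non-preempting variant of the class shown earlier would differ only in its &lt;code&gt;preemptionPolicy&lt;/code&gt; (the name here is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 1000000
preemptionPolicy: Never
globalDefault: false
description: "High priority, but waits for capacity instead of evicting other pods"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;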

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>DaemonSets in Kubernetes - The Silent Guardians of Your Cluster 🛡️</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Wed, 01 Oct 2025 05:34:28 +0000</pubDate>
      <link>https://dev.to/imlucky883/daemonsets-in-kubernetes-the-silent-guardians-of-your-cluster-11j2</link>
      <guid>https://dev.to/imlucky883/daemonsets-in-kubernetes-the-silent-guardians-of-your-cluster-11j2</guid>
      <description>&lt;p&gt;Kubernetes is a bustling ecosystem of pods, services, and deployments. But what about those tasks that need to run on &lt;strong&gt;every node&lt;/strong&gt; in your cluster? &lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;DaemonSets&lt;/strong&gt;—the unsung heroes of Kubernetes. In this article, we’ll explore what DaemonSets are, why they’re essential, and how to use them effectively.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem: Node-Level Tasks in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Imagine you need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Run a logging agent on every node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy a monitoring tool like Prometheus Node Exporter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure a security agent is always present on all nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using a regular Deployment or Pod won’t cut it because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You can’t guarantee a pod will run on &lt;strong&gt;every node&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling manually is tedious and error-prone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;New nodes won’t automatically get the required pods.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where &lt;strong&gt;DaemonSets&lt;/strong&gt; come to the rescue.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ What Are DaemonSets?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;DaemonSet&lt;/strong&gt; is a Kubernetes controller that ensures a copy of a pod runs on &lt;strong&gt;every node&lt;/strong&gt; (or a subset of nodes) in your cluster. If a new node is added, the DaemonSet automatically schedules a pod on it. If a node is removed, the pod is garbage-collected.&lt;/p&gt;

&lt;p&gt;Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Node-Level Coverage&lt;/strong&gt;: Runs a pod on every node (or specific nodes using labels).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic Scaling&lt;/strong&gt;: Scales with your cluster—no manual intervention needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Self-Healing&lt;/strong&gt;: If a pod is deleted, the DaemonSet recreates it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Efficiency&lt;/strong&gt;: Ensures only one pod runs per node (unless overridden).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🎯 Why Are DaemonSets Needed?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Node-Specific Tasks&lt;/strong&gt;: Perfect for logging, monitoring, and security agents.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cluster-Wide Consistency&lt;/strong&gt;: Ensures every node has the required software.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic Scaling&lt;/strong&gt;: Handles node additions and removals seamlessly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Optimization&lt;/strong&gt;: Avoids over-provisioning by running only one pod per node.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  🛠️ How to Use DaemonSets
&lt;/h2&gt;

&lt;p&gt;Let’s create a DaemonSet to deploy a logging agent on every node in your cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DaemonSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logging-agent&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logging-agent&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logging-agent&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logging-agent&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logging-agent&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent/fluentd:latest&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;50m"&lt;/span&gt;
      &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-role.kubernetes.io/master&lt;/span&gt;
        &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;selector&lt;/strong&gt;: Matches the pods managed by this DaemonSet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;template&lt;/strong&gt;: Defines the pod specification.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;tolerations&lt;/strong&gt;: Allows the DaemonSet to run on control-plane (formerly master) nodes as well (optional).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apply the DaemonSet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; daemonset.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, a &lt;code&gt;logging-agent&lt;/code&gt; pod will run on every node in your cluster. If you add or remove nodes, the DaemonSet will handle it automatically.&lt;/p&gt;
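&lt;p&gt;You can verify the rollout; the DaemonSet status should report one desired and ready pod per schedulable node (the label selector matches the manifest above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get daemonset logging-agent -n kube-system
kubectl get pods -n kube-system -l name=logging-agent -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;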




&lt;h2&gt;
  
  
  🧩 Advanced Use Cases
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Node-Specific Workloads&lt;/strong&gt;: Use node labels to run DaemonSets on specific nodes.&lt;br&gt;&lt;br&gt;
Example: Run a GPU monitoring tool only on GPU-enabled nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Taints and Tolerations&lt;/strong&gt;: Control which nodes the DaemonSet can run on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rolling Updates&lt;/strong&gt;: Update DaemonSet pods in a controlled manner using &lt;code&gt;updateStrategy&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
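&lt;p&gt;For the first use case, a &lt;code&gt;nodeSelector&lt;/code&gt; in the pod template restricts the DaemonSet to matching nodes. The &lt;code&gt;gpu: "true"&lt;/code&gt; label below is purely illustrative and assumes you have labeled your GPU nodes accordingly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;spec:
  template:
    spec:
      nodeSelector:
        gpu: "true"   # hypothetical label, e.g. applied with: kubectl label node NODE_NAME gpu=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;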




&lt;h2&gt;
  
  
  🎯 Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DaemonSets&lt;/strong&gt; ensure a pod runs on every node in your cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They’re perfect for node-specific tasks like logging, monitoring, and security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They scale automatically with your cluster and handle node changes seamlessly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use them to maintain consistency and efficiency across your nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, the next time you need to run a pod on every node, think &lt;strong&gt;DaemonSets&lt;/strong&gt;—your silent guardians in the Kubernetes world. 🛡️&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>daemonsets</category>
    </item>
    <item>
      <title>LimitRange and Resource Quotas: Taming Kubernetes Resource Chaos 🚀</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Wed, 01 Oct 2025 05:26:59 +0000</pubDate>
      <link>https://dev.to/imlucky883/limitrange-and-resource-quotas-taming-kubernetes-resource-chaos-o84</link>
      <guid>https://dev.to/imlucky883/limitrange-and-resource-quotas-taming-kubernetes-resource-chaos-o84</guid>
      <description>&lt;p&gt;Kubernetes is a powerful orchestration tool, but with great power comes great responsibility. Imagine a scenario where one greedy pod consumes all the CPU and memory, leaving other pods starving. Or a namespace where resources are over-allocated, causing cluster-wide performance issues. Sounds like a nightmare, right? Enter &lt;strong&gt;LimitRange&lt;/strong&gt; and &lt;strong&gt;ResourceQuota&lt;/strong&gt;—Kubernetes' dynamic duo for resource management. In this article, we’ll dive into what they are, why they’re essential, and how to use them effectively.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem: Resource Anarchy in Kubernetes
&lt;/h2&gt;

&lt;p&gt;In a Kubernetes cluster, multiple teams and applications share the same resources. Without proper governance, chaos ensues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Hogging&lt;/strong&gt;: A single pod can monopolize CPU and memory, starving others.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Over-Provisioning&lt;/strong&gt;: Developers might request excessive resources "just to be safe," leading to wasted capacity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Under-Provisioning&lt;/strong&gt;: Pods might not get enough resources, causing crashes or poor performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Namespace Sprawl&lt;/strong&gt;: One namespace could consume all available resources, leaving nothing for others.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where &lt;strong&gt;LimitRange&lt;/strong&gt; and &lt;strong&gt;ResourceQuota&lt;/strong&gt; come to the rescue.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ What Are LimitRange and ResourceQuota?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;LimitRange&lt;/strong&gt;: Setting Boundaries for Pods and Containers
&lt;/h3&gt;

&lt;p&gt;Think of &lt;strong&gt;LimitRange&lt;/strong&gt; as a bouncer at a club. It enforces rules on how much CPU, memory, and storage each pod or container can request or use. It ensures that no single pod or container goes overboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Sets &lt;strong&gt;minimum&lt;/strong&gt; and &lt;strong&gt;maximum&lt;/strong&gt; resource limits for CPU and memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Defines &lt;strong&gt;default requests&lt;/strong&gt; and &lt;strong&gt;limits&lt;/strong&gt; if not specified by the user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prevents resource starvation by ensuring fair usage.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;ResourceQuota&lt;/strong&gt;: Governing Namespace-Level Resources
&lt;/h3&gt;

&lt;p&gt;While &lt;strong&gt;LimitRange&lt;/strong&gt; focuses on individual pods, &lt;strong&gt;ResourceQuota&lt;/strong&gt; operates at the namespace level. It’s like a budget manager for your Kubernetes namespace, ensuring that no single namespace consumes all the cluster resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Limits the total amount of CPU, memory, and storage a namespace can use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Controls the number of objects (pods, services, secrets, etc.) in a namespace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prevents resource exhaustion across the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why Are They Needed?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fair Resource Allocation&lt;/strong&gt;: Ensures that all applications get their fair share of resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Optimization&lt;/strong&gt;: Prevents over-provisioning, saving cloud costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stability and Performance&lt;/strong&gt;: Avoids resource contention, ensuring smooth cluster operations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-Tenancy&lt;/strong&gt;: Enables safe sharing of clusters across teams or projects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance&lt;/strong&gt;: Helps meet organizational policies and SLAs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  🛠️ How to Use LimitRange and ResourceQuota
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;LimitRange in Action&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let’s create a &lt;code&gt;LimitRange&lt;/code&gt; to enforce resource constraints in a namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LimitRange&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;resource-limits&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-namespace&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Container&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;512Mi"&lt;/span&gt;
    &lt;span class="na"&gt;defaultRequest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;256Mi"&lt;/span&gt;
    &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1Gi"&lt;/span&gt;
    &lt;span class="na"&gt;min&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;50m"&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;default&lt;/strong&gt;: The limit applied to a container that doesn't specify its own resource limits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;defaultRequest&lt;/strong&gt;: The request applied to a container that doesn't specify its own resource requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;max&lt;/strong&gt; and &lt;strong&gt;min&lt;/strong&gt;: The upper and lower bounds allowed for any single container's resource usage.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apply the &lt;code&gt;LimitRange&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; limitrange.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, any pod in the &lt;code&gt;my-namespace&lt;/code&gt; namespace will be bound by these limits.&lt;/p&gt;
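&lt;p&gt;You can inspect the effective constraints, including the defaults that will be injected into pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe limitrange resource-limits -n my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;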




&lt;h3&gt;
  
  
  2. &lt;strong&gt;ResourceQuota in Action&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Next, let’s create a &lt;code&gt;ResourceQuota&lt;/code&gt; to limit the total resources in a namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ResourceQuota&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;namespace-quota&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-namespace&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2"&lt;/span&gt;
    &lt;span class="na"&gt;requests.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2Gi"&lt;/span&gt;
    &lt;span class="na"&gt;limits.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4"&lt;/span&gt;
    &lt;span class="na"&gt;limits.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4Gi"&lt;/span&gt;
    &lt;span class="na"&gt;pods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10"&lt;/span&gt;
    &lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5"&lt;/span&gt;
    &lt;span class="na"&gt;secrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;requests.cpu/memory&lt;/strong&gt;: Total requested CPU and memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;limits.cpu/memory&lt;/strong&gt;: Total limit for CPU and memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;pods/services/secrets&lt;/strong&gt;: Limits the number of these objects in the namespace.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apply the &lt;code&gt;ResourceQuota&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; resourcequota.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, the &lt;code&gt;my-namespace&lt;/code&gt; namespace is capped at the specified resource limits.&lt;/p&gt;
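&lt;p&gt;To see current usage against the quota at any time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe resourcequota namespace-quota -n my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;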




&lt;h2&gt;
  
  
  🧩 Combining LimitRange and ResourceQuota
&lt;/h2&gt;

&lt;p&gt;When used together, &lt;strong&gt;LimitRange&lt;/strong&gt; and &lt;strong&gt;ResourceQuota&lt;/strong&gt; provide a robust resource management framework:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;LimitRange&lt;/strong&gt; ensures individual pods don’t exceed resource bounds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ResourceQuota&lt;/strong&gt; ensures the namespace as a whole stays within its budget.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, if a namespace has a &lt;code&gt;ResourceQuota&lt;/code&gt; of 2 CPU and a &lt;code&gt;LimitRange&lt;/code&gt; with a max CPU of 1 per pod, you can’t create more than 2 pods with the maximum CPU limit.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 Real-World Use Case: Multi-Tenant Clusters
&lt;/h2&gt;

&lt;p&gt;Imagine a SaaS platform where each customer gets a dedicated namespace. Without &lt;strong&gt;LimitRange&lt;/strong&gt; and &lt;strong&gt;ResourceQuota&lt;/strong&gt;, one customer’s misconfigured application could consume all cluster resources, affecting others. By enforcing resource limits and quotas, you ensure fair usage and isolate performance issues.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎯 Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;LimitRange&lt;/strong&gt; enforces resource limits at the pod/container level.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ResourceQuota&lt;/strong&gt; governs resource usage at the namespace level.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Together, they ensure fair resource allocation, cost optimization, and cluster stability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use them to tame resource chaos and build reliable, multi-tenant Kubernetes clusters.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>namespace</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes Probes: The Secret to Self-Healing Applications 🚑</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Wed, 01 Oct 2025 05:18:31 +0000</pubDate>
      <link>https://dev.to/imlucky883/kubernetes-probes-the-secret-to-self-healing-applications-2hnn</link>
      <guid>https://dev.to/imlucky883/kubernetes-probes-the-secret-to-self-healing-applications-2hnn</guid>
      <description>&lt;p&gt;Imagine deploying an application in Kubernetes, only to find that some pods randomly stop working, others become unresponsive, and traffic still gets routed to a dead service. Nightmare, right? 😱 This is where &lt;strong&gt;Kubernetes Probes&lt;/strong&gt; come to the rescue!&lt;/p&gt;

&lt;p&gt;In this article, we'll explore why probes are essential, what happens if you don’t use them, and how you can leverage different types of probes—&lt;strong&gt;liveness, readiness, and startup probes&lt;/strong&gt;—to make your applications more &lt;strong&gt;resilient and self-healing&lt;/strong&gt;. We’ll also cover different ways to implement probes using &lt;strong&gt;HTTP, gRPC, TCP, and exec checks&lt;/strong&gt;. By the end, you'll be a probe expert! 💡&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 Why Do We Need Probes in Kubernetes?
&lt;/h2&gt;

&lt;p&gt;Kubernetes runs applications as &lt;strong&gt;containers&lt;/strong&gt;, but unlike traditional applications, containers don’t always behave predictably. They might:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Crash unexpectedly (due to memory leaks, panics, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Become unresponsive while still running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Take too long to start due to heavy initialization tasks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without probes, Kubernetes &lt;strong&gt;has no idea&lt;/strong&gt; whether your application is healthy or not! As a result, it may keep sending traffic to a broken pod or fail to restart a crashed application.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Happens If We Don’t Use Probes?
&lt;/h3&gt;

&lt;p&gt;🚨 &lt;strong&gt;Scenario 1: A Dead Application Still Gets Traffic&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A web server crashes but remains in a "running" state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes still routes traffic to it, causing users to get errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Customers get frustrated, and you lose business. 😭&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🚨 &lt;strong&gt;Scenario 2: A Slow-Starting App Gets Killed Prematurely&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Your application takes 60 seconds to initialize.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes thinks it’s dead and restarts it over and over.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your app &lt;strong&gt;never becomes available&lt;/strong&gt;, even though it was working fine! 🙃&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🚨 &lt;strong&gt;Scenario 3: A Pod Crashes but Never Gets Restarted&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Your backend service crashes due to a memory leak.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes doesn’t detect it and never restarts it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The entire system slowly fails because of one bad pod. 🔥&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛠️ Types of Kubernetes Probes
&lt;/h2&gt;

&lt;p&gt;Kubernetes provides three types of probes to prevent these issues:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Probe Type&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Failure Action&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Liveness Probe&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Kubelet &lt;strong&gt;restarts the container&lt;/strong&gt; if the probe fails.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Readiness Probe&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Kubelet &lt;strong&gt;removes the pod from Service endpoints&lt;/strong&gt; (no traffic is sent to it); the container is not restarted.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Startup Probe&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;If the probe fails, the kubelet &lt;strong&gt;kills the container and restarts it&lt;/strong&gt; per the pod's &lt;code&gt;restartPolicy&lt;/code&gt;; liveness and readiness probes are paused until the startup probe succeeds.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🕵️‍♂️ Exploring Different Probe Methods
&lt;/h2&gt;

&lt;p&gt;Kubernetes offers multiple ways to check container health:&lt;/p&gt;

&lt;h3&gt;
  
  
  1️⃣ &lt;strong&gt;HTTP Probe&lt;/strong&gt; - Ideal for Web Applications 🌍
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Use Case:&lt;/strong&gt; Check if a web server is responding.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/healthz&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5 ------------&amp;gt; the wait period before kubelet starts checking&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10 -----------------&amp;gt; kubelet checks again after 10sec period&lt;/span&gt;
  &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;3 ---------------&amp;gt; kubelet restarts container after 3 consecutive failure&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Kubernetes sends an HTTP request to &lt;code&gt;/healthz&lt;/code&gt;. If the response is &lt;strong&gt;200 OK&lt;/strong&gt;, the pod is healthy. If not, Kubernetes restarts it.&lt;/p&gt;




&lt;h3&gt;
  
  
  2️⃣ &lt;strong&gt;gRPC Probe&lt;/strong&gt; - Perfect for Microservices 🔗
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Use Case:&lt;/strong&gt; Check if a gRPC service is alive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;50051&lt;/span&gt;
  &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Kubernetes makes a &lt;strong&gt;gRPC health check&lt;/strong&gt; request. If the service responds, it's considered healthy. If not, it's restarted.&lt;/p&gt;




&lt;h3&gt;
  
  
  3️⃣ &lt;strong&gt;TCP Probe&lt;/strong&gt; - Great for Databases and TCP-Based Services 📡
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Use Case:&lt;/strong&gt; Check if a database (e.g., PostgreSQL) is accepting connections.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tcpSocket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
  &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Kubernetes tries to establish a &lt;strong&gt;TCP connection&lt;/strong&gt;. If successful, the pod is healthy. If it fails, Kubernetes restarts the container.&lt;/p&gt;




&lt;h3&gt;
  
  
  4️⃣ &lt;strong&gt;Exec Probe&lt;/strong&gt; - Custom Commands for Complex Applications 🛠️
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Use Case:&lt;/strong&gt; Run a script inside the container to verify health.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cat&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/tmp/healthy&lt;/span&gt;
  &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Kubernetes &lt;strong&gt;executes a command&lt;/strong&gt; inside the container. If the command runs successfully (exit code 0), the pod is healthy. Otherwise, Kubernetes restarts it.&lt;/p&gt;

&lt;p&gt;💡&lt;br&gt;
The kubelet communicates with the &lt;strong&gt;container runtime&lt;/strong&gt; (e.g., containerd, CRI-O) to run &lt;strong&gt;exec probe commands&lt;/strong&gt; inside containers; HTTP, TCP, and gRPC checks are made by the kubelet itself. If a liveness or startup probe fails, the kubelet asks the runtime to restart the container; if a readiness probe fails, the pod is simply removed from Service endpoints.&lt;/p&gt;




&lt;h2&gt;
  
  
  🏆 Best Practices for Using Probes
&lt;/h2&gt;

&lt;p&gt;✅ &lt;strong&gt;Use Readiness Probes for External Dependencies&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If your service depends on a database, only mark it as ready &lt;strong&gt;after&lt;/strong&gt; it establishes a DB connection.&lt;/li&gt;
&lt;/ul&gt;
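
&lt;p&gt;The examples earlier in this article all show liveness probes; a readiness probe uses the same fields, only the failure action differs. A minimal sketch (the &lt;code&gt;/ready&lt;/code&gt; path and port are placeholders for your own app):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;readinessProbe:
  httpGet:
    path: /ready        # should return 200 only once dependencies (e.g., the DB) are reachable
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;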

&lt;p&gt;✅ &lt;strong&gt;Use Startup Probes for Slow-Starting Apps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If your application initializes slowly, a startup probe prevents it from getting restarted too early.&lt;/li&gt;
&lt;/ul&gt;
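
&lt;p&gt;For example, an app that can take up to 60 seconds to initialize might be covered like this (a sketch, assuming it serves &lt;code&gt;/healthz&lt;/code&gt; on port 8080):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 5
  failureThreshold: 12    # 12 x 5s = up to 60s allowed for startup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;While the startup probe has not yet succeeded, liveness and readiness checks are held back, so a slow start doesn’t trigger premature restarts.&lt;/p&gt;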

&lt;p&gt;✅ &lt;strong&gt;Set Reasonable Probe Timeouts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tune &lt;code&gt;timeoutSeconds&lt;/code&gt;, &lt;code&gt;periodSeconds&lt;/code&gt;, and &lt;code&gt;failureThreshold&lt;/code&gt; for your app; checking too frequently or timing out too aggressively can cause unnecessary restarts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ &lt;strong&gt;Always Test Your Probes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy your application and ensure the probes behave as expected.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🎯 Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes &lt;strong&gt;Probes&lt;/strong&gt; are a powerful feature that makes your applications more &lt;strong&gt;resilient, self-healing, and production-ready&lt;/strong&gt;. Without them, you risk serving broken services to users, premature restarts, or complete application failure.&lt;/p&gt;

&lt;p&gt;By using &lt;strong&gt;liveness, readiness, and startup probes&lt;/strong&gt;, combined with &lt;strong&gt;HTTP, gRPC, TCP, or exec methods&lt;/strong&gt;, you can ensure your Kubernetes workloads stay healthy and responsive. 🚀&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>probes</category>
      <category>devops</category>
    </item>
    <item>
<title>RBAC in Kubernetes: Understanding Roles and RoleBindings 🔐</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Wed, 01 Oct 2025 05:02:08 +0000</pubDate>
      <link>https://dev.to/imlucky883/rbac-in-kubernetes-understanding-roles-and-rolebindings-4nj3</link>
      <guid>https://dev.to/imlucky883/rbac-in-kubernetes-understanding-roles-and-rolebindings-4nj3</guid>
      <description>&lt;p&gt;Kubernetes is a powerful platform for managing containerized applications, but with great power comes the need for &lt;strong&gt;granular access control&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;Role-Based Access Control (RBAC)&lt;/strong&gt;—a security mechanism that allows you to define &lt;strong&gt;who can do what&lt;/strong&gt; in your cluster. In this article, we’ll dive into &lt;strong&gt;Roles&lt;/strong&gt;, &lt;strong&gt;RoleBindings&lt;/strong&gt;, &lt;strong&gt;ClusterRoles&lt;/strong&gt;, and &lt;strong&gt;ClusterRoleBindings&lt;/strong&gt;, explore their different combinations, and explain why &lt;strong&gt;ClusterRole with RoleBinding&lt;/strong&gt; is possible but not the other way around.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔒 The Problem: Managing Access in Kubernetes
&lt;/h2&gt;

&lt;p&gt;In a Kubernetes cluster, you might have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Developers&lt;/strong&gt; who need access to specific namespaces.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Admins&lt;/strong&gt; who require cluster-wide privileges.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CI/CD tools&lt;/strong&gt; that need limited permissions to deploy applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without proper access control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Users or services might have &lt;strong&gt;more permissions than they need&lt;/strong&gt;, leading to security risks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It’s difficult to enforce the &lt;strong&gt;principle of least privilege&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Auditing and compliance become challenging.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where &lt;strong&gt;RBAC&lt;/strong&gt; comes into play.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ What Are Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Role&lt;/strong&gt;: Namespace-Specific Permissions
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Role&lt;/strong&gt; defines a set of permissions (e.g., &lt;code&gt;get&lt;/code&gt;, &lt;code&gt;list&lt;/code&gt;, &lt;code&gt;create&lt;/code&gt;) for resources within a &lt;strong&gt;specific namespace&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Example Role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod-reader&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pods"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;watch"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. &lt;strong&gt;RoleBinding&lt;/strong&gt;: Granting Roles to Users or Groups
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;RoleBinding&lt;/strong&gt; binds a Role to a user, group, or service account within a &lt;strong&gt;specific namespace&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Example RoleBinding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod-reader-binding&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;User&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alice&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod-reader&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. &lt;strong&gt;ClusterRole&lt;/strong&gt;: Cluster-Wide Permissions
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;ClusterRole&lt;/strong&gt; defines permissions for resources across the &lt;strong&gt;entire cluster&lt;/strong&gt; (including non-namespaced resources like Nodes or PersistentVolumes).&lt;/p&gt;

&lt;p&gt;Example ClusterRole:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-admin&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. &lt;strong&gt;ClusterRoleBinding&lt;/strong&gt;: Granting ClusterRoles to Users or Groups
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;ClusterRoleBinding&lt;/strong&gt; binds a ClusterRole to a user, group, or service account across the &lt;strong&gt;entire cluster&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Example ClusterRoleBinding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-admin-binding&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;User&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-admin&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧩 Different Combinations of Roles and Bindings
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Role Type&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Binding Type&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Scope&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Role&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;RoleBinding&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Namespace-specific&lt;/td&gt;
&lt;td&gt;Granting access to resources within a specific namespace (e.g., dev team).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ClusterRole&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;RoleBinding&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Namespace-specific&lt;/td&gt;
&lt;td&gt;Granting cluster-wide permissions but limiting them to a specific namespace.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ClusterRole&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;ClusterRoleBinding&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cluster-wide&lt;/td&gt;
&lt;td&gt;Granting cluster-wide permissions (e.g., cluster admins).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Role&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;ClusterRoleBinding&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Not Allowed&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Roles are namespace-specific and cannot be bound cluster-wide.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🚀 Why ClusterRole with RoleBinding is Possible (But Not the Other Way Around)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;ClusterRole with RoleBinding&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;ClusterRole&lt;/strong&gt; can be referenced by a &lt;strong&gt;RoleBinding&lt;/strong&gt; within a specific namespace. This allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Define cluster-wide permissions (e.g., for a set of resources).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply those permissions only to a specific namespace.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: A ClusterRole for managing Pods across the cluster can be bound to a RoleBinding in the &lt;code&gt;default&lt;/code&gt; namespace, granting Pod management permissions only in that namespace.&lt;/p&gt;
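
&lt;p&gt;A sketch of that combination (the &lt;code&gt;pod-manager&lt;/code&gt; ClusterRole and the user name are illustrative): the &lt;code&gt;roleRef&lt;/code&gt; points at a ClusterRole, but because the binding is a namespaced RoleBinding, the permissions apply only in &lt;code&gt;default&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default            # scope of the granted permissions
  name: pod-manager-binding
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole             # a ClusterRole is allowed in a RoleBinding's roleRef
  name: pod-manager
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;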

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Role with ClusterRoleBinding is Not Allowed&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Role&lt;/strong&gt; is &lt;strong&gt;namespace-scoped&lt;/strong&gt;, meaning it only applies to resources within a specific namespace. Referencing it from a &lt;strong&gt;ClusterRoleBinding&lt;/strong&gt; (which is cluster-wide) would create ambiguity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Which namespace should the Role apply to?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How should the Role’s permissions be enforced across the cluster?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why Kubernetes does not allow this combination.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎯 When to Use Each Combination
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Role + RoleBinding&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Use for &lt;strong&gt;namespace-specific access&lt;/strong&gt; (e.g., granting a developer access to the &lt;code&gt;dev&lt;/code&gt; namespace).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ClusterRole + RoleBinding&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Use for &lt;strong&gt;namespace-specific access with cluster-wide permissions&lt;/strong&gt; (e.g., granting a CI/CD tool access to Deployments in the &lt;code&gt;staging&lt;/code&gt; namespace).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ClusterRole + ClusterRoleBinding&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Use for &lt;strong&gt;cluster-wide access&lt;/strong&gt; (e.g., granting cluster admin privileges).&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  📚 Best Practices
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Principle of Least Privilege&lt;/strong&gt;: Grant only the permissions users or services need.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Namespaces&lt;/strong&gt;: Isolate resources and permissions using namespaces.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Audit Regularly&lt;/strong&gt;: Review Roles, ClusterRoles, and their bindings periodically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Avoid Wildcards&lt;/strong&gt;: Be specific with permissions (e.g., avoid &lt;code&gt;verbs: ["*"]&lt;/code&gt; unless absolutely necessary).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>security</category>
      <category>rbac</category>
      <category>authorization</category>
    </item>
    <item>
      <title>ConfigMaps and Secrets: Managing Configuration and Sensitive Data in Kubernetes 🔐</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Wed, 01 Oct 2025 03:22:00 +0000</pubDate>
      <link>https://dev.to/imlucky883/configmaps-and-secrets-managing-configuration-and-sensitive-data-in-kubernetes-25am</link>
      <guid>https://dev.to/imlucky883/configmaps-and-secrets-managing-configuration-and-sensitive-data-in-kubernetes-25am</guid>
      <description>&lt;p&gt;Kubernetes is all about running applications at scale, but how do you manage configuration data and sensitive information like passwords or API keys? &lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;ConfigMaps&lt;/strong&gt; and &lt;strong&gt;Secrets&lt;/strong&gt;—two powerful tools that help you decouple configuration and sensitive data from your application code. In this article, we’ll explore what ConfigMaps and Secrets are, how they work, when to use each, and how to use them effectively in your Kubernetes clusters.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔧 The Problem: Configuration and Sensitive Data in Kubernetes
&lt;/h2&gt;

&lt;p&gt;When deploying applications, you often need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure application settings&lt;/strong&gt; like environment variables, configuration files, or command-line arguments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manage sensitive data&lt;/strong&gt; like passwords, API keys, or TLS certificates.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hardcoding these values into your application or container images is a bad idea because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It makes your application &lt;strong&gt;inflexible&lt;/strong&gt; and environment-specific.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It exposes sensitive data, creating &lt;strong&gt;security risks&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It requires rebuilding and redeploying your application for every configuration change.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where &lt;strong&gt;ConfigMaps&lt;/strong&gt; and &lt;strong&gt;Secrets&lt;/strong&gt; come into play.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ What Are ConfigMaps and Secrets?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;ConfigMaps&lt;/strong&gt;: Managing Non-Sensitive Configuration Data
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;ConfigMap&lt;/strong&gt; is a Kubernetes object that stores &lt;strong&gt;non-sensitive configuration data&lt;/strong&gt; in key-value pairs. It allows you to decouple configuration from your application code, making your application more portable and easier to manage.&lt;/p&gt;

&lt;p&gt;Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Stores configuration data as &lt;strong&gt;key-value pairs&lt;/strong&gt; or &lt;strong&gt;files&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can be injected into pods as &lt;strong&gt;environment variables&lt;/strong&gt; or &lt;strong&gt;configuration files&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ideal for non-sensitive data like environment settings, URLs, or feature flags.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Secrets&lt;/strong&gt;: Managing Sensitive Data
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Secret&lt;/strong&gt; is a Kubernetes object designed to store &lt;strong&gt;sensitive information&lt;/strong&gt; like passwords, API keys, or TLS certificates. Secrets are similar to ConfigMaps but are specifically designed for sensitive data.&lt;/p&gt;

&lt;p&gt;Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Stores sensitive data as &lt;strong&gt;key-value pairs&lt;/strong&gt; or &lt;strong&gt;files&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data is &lt;strong&gt;base64-encoded&lt;/strong&gt; (not encrypted by default).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can be injected into pods as &lt;strong&gt;environment variables&lt;/strong&gt; or &lt;strong&gt;mounted files&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ideal for sensitive data like database credentials, OAuth tokens, or TLS certificates.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🎯 When to Use ConfigMaps vs. Secrets
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;ConfigMap&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Secret&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Environment Variables&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Non-sensitive data (e.g., app settings)&lt;/td&gt;
&lt;td&gt;Sensitive data (e.g., database passwords)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Configuration Files&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Non-sensitive config files&lt;/td&gt;
&lt;td&gt;Sensitive config files (e.g., TLS certs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Command-Line Arguments&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Non-sensitive arguments&lt;/td&gt;
&lt;td&gt;Sensitive arguments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Encoding&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Plain text&lt;/td&gt;
&lt;td&gt;Base64-encoded&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Plain text; offers no protection&lt;/td&gt;
&lt;td&gt;Base64-encoded only; enable encryption at rest and restrict access via RBAC for real protection&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🛠️ How to Use ConfigMaps and Secrets
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Creating a ConfigMap&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let’s create a ConfigMap to store non-sensitive configuration data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-config&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;APP_COLOR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;blue&lt;/span&gt;
  &lt;span class="na"&gt;APP_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;
  &lt;span class="na"&gt;config.json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;{&lt;/span&gt;
      &lt;span class="s"&gt;"logLevel": "debug",&lt;/span&gt;
      &lt;span class="s"&gt;"timeout": "30s"&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;data&lt;/strong&gt;: Stores key-value pairs or files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;APP_COLOR&lt;/strong&gt; and &lt;strong&gt;APP_ENV&lt;/strong&gt;: Simple key-value pairs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;config.json&lt;/strong&gt;: A configuration file stored as a multi-line string.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apply the ConfigMap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;kubectl apply -f configmap.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. &lt;strong&gt;Creating a Secret&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now, let’s create a Secret to store sensitive data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-secret&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;DB_USERNAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dXNlcm5hbWU=&lt;/span&gt;  &lt;span class="c1"&gt;# base64-encoded "username"&lt;/span&gt;
  &lt;span class="na"&gt;DB_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cGFzc3dvcmQ=&lt;/span&gt;  &lt;span class="c1"&gt;# base64-encoded "password"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;data&lt;/strong&gt;: Stores base64-encoded key-value pairs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;type: Opaque&lt;/strong&gt;: The default type for generic secrets.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
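&lt;p&gt;Note that the encoded values are plain base64, not encryption — anyone with read access to the Secret can decode them. A quick Python check shows the round trip:&lt;/p&gt;

```python
import base64

# Encode plaintext the same way you would before pasting it into a Secret manifest.
def to_secret_value(plaintext: str) -> str:
    return base64.b64encode(plaintext.encode("utf-8")).decode("ascii")

print(to_secret_value("username"))  # dXNlcm5hbWU=
print(to_secret_value("password"))  # cGFzc3dvcmQ=

# Decoding recovers the original, e.g. when inspecting `kubectl get secret -o yaml`.
print(base64.b64decode("dXNlcm5hbWU=").decode("utf-8"))  # username
```

&lt;p&gt;(Tip: the &lt;code&gt;stringData&lt;/code&gt; field lets you write plaintext values in the manifest and have Kubernetes do the base64 encoding for you.)&lt;/p&gt;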

&lt;p&gt;Apply the Secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;kubectl apply -f secret.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧩 Injecting ConfigMaps and Secrets into Pods
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:latest&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;env"&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="c1"&gt;# Method 1: Injecting ConfigMap and Secret data as environment variables [2]&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;APP_COLOR&lt;/span&gt;
      &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-config&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;APP_COLOR&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;APP_ENV&lt;/span&gt;
      &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-config&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;APP_ENV&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_USERNAME&lt;/span&gt;
      &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-secret&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_USERNAME&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_PASSWORD&lt;/span&gt;
      &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-secret&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_PASSWORD&lt;/span&gt;
    &lt;span class="c1"&gt;# Method 2: Mounting ConfigMap and Secret as Volumes [2]&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config-volume&lt;/span&gt;
      &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/config&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-volume&lt;/span&gt;
      &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/secrets&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config-volume&lt;/span&gt;
    &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-config&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-volume&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-secret&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
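&lt;p&gt;From the application’s point of view, both methods are ordinary inputs: environment variables and files. A minimal Python sketch of what &lt;code&gt;app-container&lt;/code&gt; could do — the function and key names are illustrative, and a temporary directory stands in for the &lt;code&gt;/etc/config&lt;/code&gt; mount:&lt;/p&gt;

```python
import json
import os
import tempfile
from pathlib import Path

def load_settings(config_dir: str = "/etc/config") -> dict:
    """Merge env-injected values with the mounted config.json file."""
    settings = {
        "APP_COLOR": os.environ.get("APP_COLOR", "unset"),
        "APP_ENV": os.environ.get("APP_ENV", "unset"),
    }
    config_file = Path(config_dir) / "config.json"
    if config_file.exists():
        settings.update(json.loads(config_file.read_text()))
    return settings

# Simulate the injected env var and the mounted volume locally:
os.environ["APP_COLOR"] = "blue"
with tempfile.TemporaryDirectory() as mount:
    (Path(mount) / "config.json").write_text('{"logLevel": "debug", "timeout": "30s"}')
    print(load_settings(mount))
```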






&lt;h2&gt;
  
  
  🚀 Real-World Use Cases
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;ConfigMaps&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Storing environment-specific configurations (e.g., dev, staging, prod).&lt;/li&gt;
&lt;li&gt;Managing feature flags or application settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Secrets&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Storing database credentials or API keys. &lt;/li&gt;
&lt;li&gt;Managing TLS certificates for HTTPS.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📚 Best Practices
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use ConfigMaps for Non-Sensitive Data&lt;/strong&gt;: Avoid storing sensitive information in ConfigMaps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Secrets for Sensitive Data&lt;/strong&gt;: Always use Secrets for sensitive information, and consider enabling encryption at rest for added security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Avoid Hardcoding&lt;/strong&gt;: Never hardcode configuration or sensitive data in your application code or container images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Namespaces&lt;/strong&gt;: Isolate ConfigMaps and Secrets using namespaces for better organization and security.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
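&lt;p&gt;On clusters you administer yourself, the “encryption at rest” in practice 2 means handing the API server an &lt;code&gt;EncryptionConfiguration&lt;/code&gt; file via &lt;code&gt;--encryption-provider-config&lt;/code&gt;. A hedged sketch — the key material is a placeholder you must generate yourself:&lt;/p&gt;

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback so not-yet-encrypted Secrets stay readable
```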




&lt;h2&gt;
  
  
  🔑 Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ConfigMaps&lt;/strong&gt; are for non-sensitive configuration data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secrets&lt;/strong&gt; are for sensitive information like passwords and API keys.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Both can be injected into pods as environment variables or mounted files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use them to decouple configuration and sensitive data from your application code.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By leveraging ConfigMaps and Secrets, you can build more &lt;strong&gt;portable, secure, and maintainable&lt;/strong&gt; applications in Kubernetes. 🚀&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>security</category>
    </item>
    <item>
      <title>The Offset Reset Dilemma: Avoiding Surprise Replays in Kafka</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Thu, 18 Sep 2025 06:43:23 +0000</pubDate>
      <link>https://dev.to/imlucky883/the-offset-reset-dilemma-avoiding-surprise-replays-in-kafka-5hc6</link>
      <guid>https://dev.to/imlucky883/the-offset-reset-dilemma-avoiding-surprise-replays-in-kafka-5hc6</guid>
      <description>&lt;p&gt;A consumer needs an &lt;strong&gt;offset&lt;/strong&gt; (a bookmark) to know where to start reading from a partition. Normally:&lt;/p&gt;

&lt;p&gt;Kafka stores the last committed offset in the internal &lt;code&gt;__consumer_offsets&lt;/code&gt; topic. When you restart a consumer, it resumes from that committed offset.&lt;/p&gt;

&lt;p&gt;But… what if there is no valid offset for a partition?&lt;br&gt;
That’s where &lt;code&gt;auto.offset.reset&lt;/code&gt; kicks in.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚨 When Does “no valid offset” Happen?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;New consumer group (first time this group subscribes to a topic → no committed offsets exist yet).&lt;/li&gt;
&lt;li&gt;Offsets got deleted (Kafka has a retention policy for committed offsets — e.g., &lt;code&gt;offsets.retention.minutes&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Offset is invalid (maybe pointing to data that was deleted due to log retention).&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ⚙️ auto.offset.reset Options
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. earliest
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzja7clcp4fqeyayhesj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzja7clcp4fqeyayhesj.png" alt="earliest" width="351" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start reading from the beginning of the log (smallest available offset).&lt;/li&gt;
&lt;li&gt;Consumer will replay all historical data.&lt;/li&gt;
&lt;li&gt;Good for batch jobs, data pipelines, or when you really want everything (e.g., reindexing a search database).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. latest
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pkr49f6v9nwivh5dh0x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pkr49f6v9nwivh5dh0x.png" alt="laest" width="351" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start reading from the end of the log (largest offset).&lt;/li&gt;
&lt;li&gt;Consumer ignores past data → only gets new messages arriving after it joined.&lt;/li&gt;
&lt;li&gt;Good for real-time dashboards or monitoring, where you don’t care about history.&lt;/li&gt;
&lt;/ul&gt;
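&lt;p&gt;Both policies boil down to one decision: where does a consumer start when its committed offset is missing or invalid? A simplified model of that decision (not client-library code):&lt;/p&gt;

```python
from typing import Optional

def starting_offset(committed: Optional[int], log_start: int, log_end: int,
                    auto_offset_reset: str = "latest") -> int:
    """Decide where a consumer begins reading one partition."""
    # A committed offset only counts if it still points at retained data.
    if committed is not None and log_start <= committed <= log_end:
        return committed
    if auto_offset_reset == "earliest":
        return log_start   # replay everything still on disk
    if auto_offset_reset == "latest":
        return log_end     # only messages produced from now on
    raise ValueError("no valid offset and auto.offset.reset=none")

# New consumer group; the log holds offsets 0..999, next write is 1000:
print(starting_offset(None, 0, 1000, "earliest"))  # 0
print(starting_offset(None, 0, 1000, "latest"))    # 1000
print(starting_offset(500, 0, 1000))               # 500 (committed offset wins)
```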




&lt;h2&gt;
  
  
  📌 Why is this Important?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;If you forget this setting, you can accidentally replay millions of messages when you didn’t intend to.&lt;/li&gt;
&lt;li&gt;Conversely, you might miss data if you start from latest in a system that needs history.&lt;/li&gt;
&lt;/ul&gt;
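&lt;p&gt;In client code this is a single configuration entry. A sketch of the relevant settings — the broker address, group id, and topic name are illustrative; with the kafka-python library the dict maps onto &lt;code&gt;KafkaConsumer&lt;/code&gt; keyword arguments:&lt;/p&gt;

```python
# Illustrative consumer settings; auto_offset_reset is the one that matters here.
consumer_config = {
    "bootstrap_servers": "localhost:9092",  # assumed local broker
    "group_id": "billing-service",          # hypothetical consumer group
    "auto_offset_reset": "earliest",        # "earliest", "latest", or "none"
    "enable_auto_commit": False,            # commit explicitly after processing
}

# With kafka-python this would become:
#   KafkaConsumer("orders", **consumer_config)
print(consumer_config["auto_offset_reset"])  # earliest
```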

</description>
      <category>kafka</category>
      <category>opensource</category>
      <category>programming</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>How Data is Stored in Kafka: JSON vs Avro Explained</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Wed, 17 Sep 2025 07:13:40 +0000</pubDate>
      <link>https://dev.to/imlucky883/how-data-is-stored-in-kafka-json-vs-avro-explained-4gi3</link>
      <guid>https://dev.to/imlucky883/how-data-is-stored-in-kafka-json-vs-avro-explained-4gi3</guid>
      <description>&lt;p&gt;If you’re new to Kafka, you’ve probably asked yourself:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“When a producer sends a record, what does Kafka actually store on disk? Is it JSON? &lt;a href="https://avro.apache.org/" rel="noopener noreferrer"&gt;Avro&lt;/a&gt;? Hex? Bytes??”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I recently went down this rabbit hole myself — so here’s a breakdown of what really happens behind the scenes.&lt;/p&gt;

&lt;h2&gt;
  
  
  First Principles: Kafka Stores &lt;strong&gt;Bytes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s clear the air first: &lt;/p&gt;

&lt;p&gt;Kafka doesn’t know or care about JSON, Avro, Protobuf, POJOs, or unicorns 🦄. It only deals in &lt;strong&gt;byte arrays&lt;/strong&gt; (&lt;code&gt;byte[]&lt;/code&gt;).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Producer sends: &lt;code&gt;byte[] key&lt;/code&gt;, &lt;code&gt;byte[] value&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Broker writes to log: &lt;code&gt;byte[]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Consumer reads: &lt;code&gt;byte[]&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything else (JSON, Avro, Protobuf, String) is just a &lt;strong&gt;serialization format&lt;/strong&gt; layered on top.&lt;/p&gt;




&lt;h2&gt;
  
  
  Serialization vs Encoding
&lt;/h2&gt;

&lt;p&gt;Two words that get thrown around a lot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Serialization&lt;/strong&gt; = turning an in-memory object (like a Java POJO) into a storable/transmittable format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encoding&lt;/strong&gt; = mapping characters into bytes (e.g., UTF-8 for text).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 JSON uses &lt;em&gt;both&lt;/em&gt;. Avro is pure &lt;em&gt;serialization&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Is Data Actually Stored?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85tux96ua74h7pmnqcwf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85tux96ua74h7pmnqcwf.png" alt="avro_json" width="800" height="961"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Case 1&lt;/strong&gt;: &lt;strong&gt;JSON in Kafka&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Imagine a &lt;strong&gt;Java POJO&lt;/strong&gt;: &lt;code&gt;User{name="Lucky", age=30}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Serialize to JSON string
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"Lucky"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"age"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Encode string in UTF-8 → bytes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;[0x7B, 0x22, 0x6E, 0x61, ...]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;(&lt;strong&gt;In hex:&lt;/strong&gt; 7B 22 6E 61 6D 65 22 3A 22 4C …)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Producer sends the above byte[] to Kafka.&lt;/li&gt;
&lt;li&gt;Kafka stores raw bytes in its log&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you &lt;code&gt;xxd&lt;/code&gt; the Kafka log file, you’ll see that hex dump.&lt;/p&gt;
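&lt;p&gt;The whole JSON chain is easy to reproduce in a few lines of Python:&lt;/p&gt;

```python
import json

user = {"name": "Lucky", "age": 30}

# Step 1: serialize the object to JSON text.
text = json.dumps(user, separators=(",", ":"))  # '{"name":"Lucky","age":30}'

# Step 2: encode the text to UTF-8 bytes — this is what the producer hands to Kafka.
payload = text.encode("utf-8")

print(payload.hex(" ").upper())  # starts with: 7B 22 6E 61 6D 65 22 3A 22 4C
```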

&lt;p&gt;👉️ &lt;strong&gt;Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;JSON → text → UTF-8 encoding → bytes → Kafka.&lt;br&gt;
It’s human-readable but bulky and &lt;em&gt;slower to parse&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Case 2: Avro in Kafka&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Same POJO: &lt;code&gt;User{name="Lucky", age=30}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Serialize directly to Avro binary format&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Field values are encoded as compact bytes following the Avro schema.&lt;/li&gt;
&lt;li&gt;Example output (simplified):
&lt;code&gt;[0x0A, 0x4C, 0x75, 0x63, 0x6B, 0x79, 0x3C]&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Producer sends byte[] to Kafka&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Kafka stores raw bytes in its log&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;No intermediate “string” or UTF-8 step.&lt;/p&gt;
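&lt;p&gt;For intuition, here is a from-scratch sketch of how those bytes come about. Real producers use the Avro library plus a schema (often via a schema registry); this only shows the zigzag-varint rules for one string field and one int field:&lt;/p&gt;

```python
def zigzag_varint(n: int) -> bytes:
    """Avro maps ints through zigzag coding, then writes base-128 varints."""
    z = (n << 1) ^ (n >> 63)  # zigzag: small magnitudes -> small codes
    out = bytearray()
    while True:
        byte = z & 0x7F
        z >>= 7
        if z:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def avro_user(name: str, age: int) -> bytes:
    # string = varint length + UTF-8 bytes; int = zigzag varint
    name_bytes = name.encode("utf-8")
    return zigzag_varint(len(name_bytes)) + name_bytes + zigzag_varint(age)

print(avro_user("Lucky", 30).hex(" "))  # 0a 4c 75 63 6b 79 3c
```

&lt;p&gt;Note the length prefix: zigzag(5) = 10 = &lt;code&gt;0x0A&lt;/code&gt;, and zigzag(30) = 60 = &lt;code&gt;0x3C&lt;/code&gt; — exactly the bytes in the example above.&lt;/p&gt;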

&lt;p&gt;👉️ &lt;strong&gt;Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Avro → compact binary → bytes → Kafka.&lt;br&gt;
It’s efficient, schema-driven, and &lt;em&gt;faster to deserialize&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  JSON vs Avro: Side-by-Side
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;JSON&lt;/th&gt;
&lt;th&gt;Avro&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Serialization&lt;/td&gt;
&lt;td&gt;POJO → JSON text&lt;/td&gt;
&lt;td&gt;POJO → Avro binary&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Encoding&lt;/td&gt;
&lt;td&gt;UTF-8 needed (text → bytes)&lt;/td&gt;
&lt;td&gt;Not needed (already binary)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Size&lt;/td&gt;
&lt;td&gt;Larger (e.g., &lt;code&gt;{"name":"Lucky"}&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Smaller, compact&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Readability&lt;/td&gt;
&lt;td&gt;Human-readable&lt;/td&gt;
&lt;td&gt;Not human-readable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Schema enforcement&lt;/td&gt;
&lt;td&gt;Optional&lt;/td&gt;
&lt;td&gt;Strict (via Avro schema)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;When people ask “Does Kafka store JSON or Avro?”, the real answer is:&lt;/p&gt;

&lt;p&gt;👉 Neither. Kafka stores raw bytes.&lt;br&gt;
👉 JSON/Avro/Protobuf are just contracts between producer and consumer.&lt;/p&gt;

&lt;p&gt;So next time you see a Kafka hex dump like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;7B 22 6E 61 6D 65 22 3A 22 4C 75 63 6B 79 22 7D
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;…you’ll know: that’s just your data, encoded and serialized, resting in Kafka’s logs waiting for the next consumer.&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>tutorial</category>
      <category>learning</category>
      <category>pubsub</category>
    </item>
    <item>
      <title>How I Built a Python CLI for Blazing-Fast Crypto Exchange Price Notifications</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Wed, 13 Aug 2025 14:07:02 +0000</pubDate>
      <link>https://dev.to/imlucky883/how-i-built-a-python-cli-for-blazing-fast-crypto-exchange-price-notifications-20dp</link>
      <guid>https://dev.to/imlucky883/how-i-built-a-python-cli-for-blazing-fast-crypto-exchange-price-notifications-20dp</guid>
      <description>&lt;p&gt;We’ve all been there — staring at a price chart, switching tabs, doing other work… and boom — the market moves.&lt;br&gt;
You come back, and it’s either a missed profit or an unnecessary loss.&lt;/p&gt;

&lt;p&gt;That was me, multiple times, while trading on Delta Exchange. &lt;a href="https://www.delta.exchange/" rel="noopener noreferrer"&gt;Delta Exchange&lt;/a&gt; offers a great trading experience but lacks &lt;strong&gt;real-time custom alerts for specific price points&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I knew I couldn’t sit and watch the chart all day.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi7l6bmozv727n8uzej7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi7l6bmozv727n8uzej7.png" alt="Output" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I wanted a fast, resource-efficient, and reliable way to track prices and &lt;em&gt;trigger alerts&lt;/em&gt; without constantly refreshing charts or relying on third-party apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tech Stack Used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python&lt;/strong&gt; – Core programming language.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebSocket&lt;/strong&gt; – For real-time price streaming.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JSON&lt;/strong&gt; – To store and manage alert configurations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plyer&lt;/strong&gt; – For cross-platform desktop notifications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Backend Logic
&lt;/h2&gt;

&lt;p&gt;Here’s how the tool works under the hood:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to Delta Exchange WebSocket feed to get real-time price data.&lt;/li&gt;
&lt;li&gt;Compare latest price with stored alert conditions from a local JSON file.&lt;/li&gt;
&lt;li&gt;Trigger an alert when the price crosses the target threshold.&lt;/li&gt;
&lt;li&gt;Mark the alert as “triggered” so it’s not repeatedly fired.&lt;/li&gt;
&lt;li&gt;Send notifications via:

&lt;ul&gt;
&lt;li&gt;Desktop notification using Plyer.&lt;/li&gt;
&lt;li&gt;Email notification for remote alerts.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
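&lt;p&gt;The core of those steps is a small pure function (names are illustrative, not the repo’s exact code); the “mark as triggered” flag is what stops an alert from firing on every tick:&lt;/p&gt;

```python
def check_alerts(price: float, alerts: list) -> list:
    """Return alerts that just fired, marking them so they don't re-fire."""
    fired = []
    for alert in alerts:
        if alert["triggered"]:
            continue  # already delivered once
        crossed = (price >= alert["target"]) if alert["direction"] == "above" \
                  else (price <= alert["target"])
        if crossed:
            alert["triggered"] = True
            fired.append(alert)
    return fired

alerts = [
    {"symbol": "BTCUSD", "target": 60000.0, "direction": "above", "triggered": False},
    {"symbol": "ETHUSD", "target": 2000.0, "direction": "below", "triggered": False},
]
print([a["symbol"] for a in check_alerts(61000.0, alerts)])  # ['BTCUSD']
print([a["symbol"] for a in check_alerts(61500.0, alerts)])  # [] (no re-fire)
```

&lt;p&gt;A fired alert then fans out to the notifiers — e.g. Plyer’s &lt;code&gt;notification.notify(title=..., message=...)&lt;/code&gt; for the desktop and SMTP for email.&lt;/p&gt;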

&lt;h2&gt;
  
  
  Why WebSocket Instead of REST API?
&lt;/h2&gt;

&lt;p&gt;Initially, I considered using the &lt;strong&gt;REST API&lt;/strong&gt; for fetching prices, but there were a few drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;REST is pull-based – I’d have to make repeated requests to get the latest price.&lt;/li&gt;
&lt;li&gt;This leads to API rate limits and unnecessary network usage.&lt;/li&gt;
&lt;li&gt;Even with short intervals, REST polling introduces latency in updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On the other hand, WebSocket:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pushes data in real-time as soon as the price changes.&lt;/li&gt;
&lt;li&gt;Reduces server load and network calls.&lt;/li&gt;
&lt;li&gt;Ensures alerts trigger instantly without delays.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a price alert system, real-time updates are critical — so &lt;strong&gt;WebSocket&lt;/strong&gt; was the obvious choice.&lt;/p&gt;
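&lt;p&gt;A condensed sketch of the push-based side, assuming the &lt;code&gt;websockets&lt;/code&gt; package — the endpoint URL, subscription payload, and ticker field names here are assumptions; check the exchange’s WebSocket docs for the real schema:&lt;/p&gt;

```python
import json

WS_URL = "wss://socket.delta.exchange"  # assumed endpoint

def extract_price(raw: str):
    """Pull the last traded price out of a ticker message (field names assumed)."""
    msg = json.loads(raw)
    price = msg.get("close") or msg.get("price")
    return float(price) if price is not None else None

async def stream_prices(symbol: str = "BTCUSD") -> None:
    import websockets  # pip install websockets
    async with websockets.connect(WS_URL) as ws:
        # Subscribe once; after this the server pushes every update.
        await ws.send(json.dumps({"type": "subscribe", "payload": {
            "channels": [{"name": "ticker", "symbols": [symbol]}]}}))
        async for raw in ws:  # no polling loop, no rate limits to dodge
            price = extract_price(raw)
            if price is not None:
                print(symbol, price)
```

&lt;p&gt;Run it with &lt;code&gt;asyncio.run(stream_prices())&lt;/code&gt;.&lt;/p&gt;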

&lt;h2&gt;
  
  
  Sample Output
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7l48dlrxbd3z6jwgxwl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7l48dlrxbd3z6jwgxwl.png" alt="output_Image" width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Future Developments
&lt;/h2&gt;

&lt;p&gt;While the CLI works perfectly for my needs, I plan to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containerize the application with Docker for easy deployment.&lt;/li&gt;
&lt;li&gt;Create a web-based UI for broader accessibility and a better user experience.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you trade on Delta Exchange (or any platform without great alerting), I highly recommend building something like this. It’s not just about automation — it’s about &lt;strong&gt;peace of mind&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Check out my project repo on &lt;a href="https://github.com/Imlucky883/deltalert" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🚀 How I Hosted a Website Locally Using NGINX + Cloudflare Tunnel + Custom Domain</title>
      <dc:creator>Laxman Patel</dc:creator>
      <pubDate>Thu, 31 Jul 2025 13:16:53 +0000</pubDate>
      <link>https://dev.to/imlucky883/how-i-hosted-a-website-locally-using-nginx-cloudflare-tunnel-custom-domain-4ffg</link>
      <guid>https://dev.to/imlucky883/how-i-hosted-a-website-locally-using-nginx-cloudflare-tunnel-custom-domain-4ffg</guid>
      <description>&lt;p&gt;Two days ago, I bought a &lt;code&gt;.cfd&lt;/code&gt; domain for ₹89 on impulse. I didn’t have a hosting plan. No VPS. No static IP. No plan, really.&lt;/p&gt;

&lt;p&gt;But I &lt;em&gt;did&lt;/em&gt; have one goal:&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;"Can I host a secure, public-facing website straight from my local machine... without exposing my IP or touching router configs?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Turns out, yes. Yes, I can. Here's how — and more importantly, &lt;em&gt;why&lt;/em&gt; I did it this way.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 The Problem Space
&lt;/h2&gt;

&lt;p&gt;I wanted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To run a simple NGINX-powered static site&lt;/li&gt;
&lt;li&gt;No monthly hosting bills&lt;/li&gt;
&lt;li&gt;HTTPS (because insecure URLs scream &lt;em&gt;"hobby project"&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;And ideally, a solution that "just works" — no duct tape&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But I didn’t have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A static IP&lt;/li&gt;
&lt;li&gt;Control over my ISP’s router for port forwarding&lt;/li&gt;
&lt;li&gt;Budget for VPS hosting&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧪 My First Thought: Ngrok?
&lt;/h2&gt;

&lt;p&gt;Of course, I thought of &lt;strong&gt;Ngrok&lt;/strong&gt;. It's the OG local tunnel tool. Dead simple. Plug and play.&lt;/p&gt;

&lt;p&gt;But there were issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free version rotates URLs&lt;/strong&gt; — not great for a custom domain&lt;/li&gt;
&lt;li&gt;Their &lt;strong&gt;custom domain support is behind a paid plan&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Limited concurrent tunnels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I needed more control. I wanted my domain to point to my machine — directly and reliably.&lt;/p&gt;




&lt;h2&gt;
  
  
  ☁️ Enter: Cloudflare Tunnel
&lt;/h2&gt;

&lt;p&gt;I was already using &lt;strong&gt;Cloudflare&lt;/strong&gt; for DNS, so I dug deeper and found &lt;a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/" rel="noopener noreferrer"&gt;&lt;code&gt;cloudflared&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This tool lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expose your local server to the internet &lt;strong&gt;securely&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Use your &lt;strong&gt;own domain name&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Terminate HTTPS at the edge (no messing with certs)&lt;/li&gt;
&lt;li&gt;Skip firewall config, port forwarding, and NAT struggles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And it’s &lt;strong&gt;100% free&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ The Stack I Used
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What&lt;/th&gt;
&lt;th&gt;Tool Used&lt;/th&gt;
&lt;th&gt;Why I Picked It&lt;/th&gt;
&lt;th&gt;Alternatives Considered&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Domain&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;.cfd&lt;/code&gt; from Hostinger&lt;/td&gt;
&lt;td&gt;Cheap (₹89) + DNS manageable&lt;/td&gt;
&lt;td&gt;Namecheap&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Web Server&lt;/td&gt;
&lt;td&gt;NGINX on Ubuntu&lt;/td&gt;
&lt;td&gt;Lightweight, battle-tested&lt;/td&gt;
&lt;td&gt;Apache, Caddy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tunnel&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;cloudflared&lt;/code&gt; (FIPS)&lt;/td&gt;
&lt;td&gt;Secure, free, native domain support&lt;/td&gt;
&lt;td&gt;Ngrok, LocalTunnel&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DNS Provider&lt;/td&gt;
&lt;td&gt;Cloudflare&lt;/td&gt;
&lt;td&gt;Fast, robust, full control&lt;/td&gt;
&lt;td&gt;GoDaddy, Namecheap&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🔧 How I Did It (The Short Version)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Bought the domain: &lt;code&gt;goobara.cfd&lt;/code&gt; (because why not?)&lt;/li&gt;
&lt;li&gt;Pointed its &lt;strong&gt;NS records&lt;/strong&gt; to Cloudflare&lt;/li&gt;
&lt;li&gt;Waited ~2 hours for DNS propagation (it &lt;em&gt;will&lt;/em&gt; test your patience)&lt;/li&gt;
&lt;li&gt;Installed &lt;code&gt;cloudflared&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Created and configured a tunnel
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   cloudflared tunnel create goobara
   cloudflared tunnel route dns goobara goobara.cfd
   cloudflared tunnel run goobara
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
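&lt;p&gt;Behind those three commands sits a small &lt;code&gt;~/.cloudflared/config.yml&lt;/code&gt; that tells the tunnel where to send traffic (the credentials path below is a placeholder — &lt;code&gt;tunnel create&lt;/code&gt; prints the real one):&lt;/p&gt;

```yaml
tunnel: goobara                       # tunnel name or its UUID
credentials-file: /home/me/.cloudflared/<tunnel-id>.json   # placeholder
ingress:
  - hostname: goobara.cfd
    service: http://localhost:80      # hand requests to the local NGINX
  - service: http_status:404          # catch-all for unmatched hostnames
```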



&lt;p&gt;6. Set up NGINX to serve the site on localhost:80&lt;br&gt;
7. Boom — &lt;a href="https://goobara.cfd" rel="noopener noreferrer"&gt;goobara.cfd&lt;/a&gt; was live!&lt;/p&gt;
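&lt;p&gt;For step 6, a minimal NGINX server block is enough (the web root path is illustrative):&lt;/p&gt;

```
server {
    listen 80;
    server_name goobara.cfd;

    root /var/www/goobara;   # illustrative path to the static site
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```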

&lt;h2&gt;
  
  
  🧠 A Few "Aha!" Moments
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;curl -H "Host: goobara.cfd" http://localhost&lt;/code&gt; became my best friend.&lt;/li&gt;
&lt;li&gt;My site didn’t work initially — turns out, I hadn’t removed Hostinger's default DNS records (rookie mistake).&lt;/li&gt;
&lt;li&gt;Even though &lt;code&gt;cloudflared&lt;/code&gt; showed success, DNS propagation + caching can cause confusion. Always verify with: &lt;code&gt;dig goobara.cfd +short&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💡 Why This Setup Rocks
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No need for public IP&lt;/li&gt;
&lt;li&gt;No Certbot / Let's Encrypt headaches&lt;/li&gt;
&lt;li&gt;Minimal attack surface (everything is proxied through Cloudflare)&lt;/li&gt;
&lt;li&gt;Zero cost hosting for hobby projects, dashboards, or even staging apps&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cloudflare</category>
      <category>nginx</category>
      <category>selfhosted</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
