<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rajesh Deshpande</title>
    <description>The latest articles on DEV Community by Rajesh Deshpande (@rajeshdeshpande02).</description>
    <link>https://dev.to/rajeshdeshpande02</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F251689%2F2318923c-2f8d-481b-8c14-aee93a0d52f2.jpeg</url>
      <title>DEV Community: Rajesh Deshpande</title>
      <link>https://dev.to/rajeshdeshpande02</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rajeshdeshpande02"/>
    <language>en</language>
    <item>
      <title>Kubernetes Myth #12: K8s Service Account Pulls Images</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Thu, 13 Mar 2025 04:21:27 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-12-k8s-service-account-pulls-images-hj7</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-12-k8s-service-account-pulls-images-hj7</guid>
      <description>&lt;p&gt;🔍 &lt;strong&gt;Myth&lt;/strong&gt;: "A Kubernetes &lt;strong&gt;ServiceAccount&lt;/strong&gt; is responsible for pulling container images."&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Reality&lt;/strong&gt;: The &lt;strong&gt;Container Runtime&lt;/strong&gt; on the worker node—not the ServiceAccount—pulls container images from the registry!&lt;/p&gt;

&lt;p&gt;This misconception often leads to confusion when working with private registries and authentication. Let’s dive into the real mechanism.&lt;/p&gt;

&lt;p&gt;🚀 &lt;strong&gt;Deep Dive: How Kubernetes Pulls Images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When a Pod is scheduled, Kubernetes ensures that the required container images are available on the node. But Kubelet itself does not pull images—it delegates this task to the container runtime (e.g., containerd or CRI-O) via the Container Runtime Interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Image Pulling Process&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Pod Creation &amp;amp; Scheduling:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A user applies a Pod spec with an image (nginx:latest).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Kubernetes scheduler assigns the Pod to a node.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Kubelet Contacts the Container Runtime:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kubelet detects that a new Pod needs to run on its node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It checks if the required image is already present.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If not, Kubelet instructs the container runtime to pull the image via the Container Runtime Interface (CRI).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Container Runtime Handles Authentication &amp;amp; Pulling:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kubelet resolves any imagePullSecrets referenced by the Pod (or its ServiceAccount) and passes the registry credentials to the runtime over CRI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The container runtime (e.g., containerd, CRI-O) uses those credentials to authenticate with the container registry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It fetches the image from the specified container registry (Docker Hub, ECR, GCR, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once downloaded, the image is stored locally on the node.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Container is Started:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The container runtime unpacks the image and starts the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubelet monitors the container's lifecycle.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🛑 &lt;strong&gt;ServiceAccount is NOT involved in this process!&lt;/strong&gt; It does not fetch images or interact with the container registry.&lt;/p&gt;

&lt;p&gt;🔑 &lt;strong&gt;What is a Kubernetes ServiceAccount Actually For?&lt;/strong&gt;&lt;br&gt;
A ServiceAccount in Kubernetes is used for Pod-to-API-Server authentication, not for pulling images. Here’s what it does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It provides a secure identity for Pods to interact with the Kubernetes API.&lt;/li&gt;
&lt;li&gt;It is automatically mounted into Pods to authenticate API requests.&lt;/li&gt;
&lt;li&gt;It can be used to manage RBAC permissions for Pods.&lt;/li&gt;
&lt;li&gt;It has NOTHING to do with pulling images from a registry!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔥 &lt;strong&gt;Why Do People Think ServiceAccounts Pull Images?&lt;/strong&gt;&lt;br&gt;
This myth exists because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ImagePullSecrets can be linked to a ServiceAccount&lt;/strong&gt;, making it seem like the ServiceAccount is responsible for authentication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ServiceAccount tokens are automatically mounted into Pods&lt;/strong&gt;, leading to confusion about their purpose.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Both ServiceAccounts and imagePullSecrets handle authentication&lt;/strong&gt;, but for different things (API access vs. registry authentication).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ &lt;strong&gt;How to Pull Images Correctly in Kubernetes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your images are public, Kubernetes pulls them without any authentication. But for private registries, you must provide credentials explicitly using imagePullSecrets:&lt;/p&gt;
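
&lt;p&gt;You can create that credential as a docker-registry Secret with kubectl. A minimal sketch: the secret name and registry host match the examples below, while the username and password are placeholders you must supply:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret docker-registry my-registry-secret \
  --docker-server=my-private-registry.com \
  --docker-username=&amp;lt;username&amp;gt; \
  --docker-password=&amp;lt;password&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;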

&lt;ul&gt;
&lt;li&gt;Option 1: Define imagePullSecrets in the Pod
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-private-registry.com/my-app:v1
  imagePullSecrets:
    - name: my-registry-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Option 2: Attach imagePullSecrets to a ServiceAccount
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
imagePullSecrets:
  - name: my-registry-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then reference the ServiceAccount in your Pod:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-sa
  containers:
    - name: app
      image: my-private-registry.com/my-app:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Even though we attach imagePullSecrets to a ServiceAccount, it's still the Container Runtime that pulls the image—not the ServiceAccount!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;🎯 &lt;strong&gt;Key Takeaways – Myth Busted!&lt;/strong&gt;&lt;br&gt;
🚫 Kubernetes ServiceAccounts DO NOT pull container images.&lt;br&gt;
✅ Kubelet asks the container runtime to pull images via CRI.&lt;br&gt;
✅ Use imagePullSecrets for private registries.&lt;br&gt;
✅ ServiceAccounts handle API authentication, not image pulling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next time someone says, “ServiceAccount pulls images”, you’ll know the truth!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>platformengineering</category>
      <category>k8smyths</category>
    </item>
    <item>
      <title>Kubernetes Myth #11: CPU Requests Guarantee Reserved CPU</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 13:57:02 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-11-cpu-requests-guarantee-reserved-cpu-3be6</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-11-cpu-requests-guarantee-reserved-cpu-3be6</guid>
      <description>&lt;p&gt;❌ Myth: If I set cpu: 500m in my pod’s requests, Kubernetes reserves 0.5 CPU exclusively for my pod.&lt;/p&gt;

&lt;p&gt;✅ Reality: Kubernetes does not reserve CPU like it does for memory. CPU is a compressible resource, meaning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Your pod gets priority access to 500m CPU but can use more if available.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Other pods can borrow unused CPU from your pod if it's not using the full 500m.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the node is under high load, the kernel’s CFS scheduler divides CPU time in proportion to requests; hard throttling only kicks in when you set a CPU limit.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📌 Example:&lt;/p&gt;

&lt;p&gt;🔹 You set a cpu: 100m request, but your pod only needs 50m most of the time.&lt;br&gt;
🔹 Another pod on the same node borrows the remaining 50m CPU.&lt;br&gt;
🔹 Later, when your pod suddenly needs its full 100m, CFS gives it back its fair share, but anything beyond the request depends on spare capacity on the node.&lt;/p&gt;

&lt;p&gt;💡 What to Do?&lt;br&gt;
✅ Set realistic requests—don’t go too high or too low.&lt;br&gt;
✅ Use CPU limits if you want to cap a pod’s max CPU usage.&lt;br&gt;
✅ Monitor throttling using 'kubectl top pods' and Prometheus metrics.&lt;/p&gt;
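
&lt;p&gt;As a sketch (the pod and image names here are illustrative), requests and limits sit side by side in the container spec:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:v1
      resources:
        requests:
          cpu: 100m   # scheduler accounts for this; priority share under contention
        limits:
          cpu: 500m   # hard cap; usage beyond this is throttled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;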

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxu38frr7xa3aa33rrtw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxu38frr7xa3aa33rrtw.jpg" alt="Image description" width="800" height="819"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>platformengineering</category>
      <category>devops</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Kubernetes Myth #10: Kube-Proxy Assigns IP Addresses to Pods</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 13:54:47 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-10-kube-proxy-assigns-ip-addresses-to-pods-3l58</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-10-kube-proxy-assigns-ip-addresses-to-pods-3l58</guid>
      <description>&lt;p&gt;Many believe that Kube-Proxy is responsible for assigning IP addresses to Pods. But is that really the case?&lt;/p&gt;

&lt;p&gt;❌ Myth: Kube-Proxy assigns IP addresses to Pods.&lt;/p&gt;

&lt;p&gt;✅ Reality: The CNI (Container Network Interface) plugin handles Pod IP address allocation!&lt;/p&gt;

&lt;p&gt;🔍 Breaking it Down&lt;/p&gt;

&lt;p&gt;When a Pod is created, it needs a unique IP address to communicate within the cluster. However:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kube-Proxy is responsible for service-level networking and load balancing across Pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CNI Plugins (like Flannel, Calico, Cilium) handle Pod networking, including assigning IPs to Pods and setting up routes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚙️ How Pod Networking Actually Works&lt;/p&gt;

&lt;p&gt;1️⃣ When a Pod is scheduled, Kubelet requests an IP for it.&lt;br&gt;
2️⃣ The installed CNI plugin assigns an IP and configures network routes.&lt;br&gt;
3️⃣ The Pod uses this IP to communicate within the cluster.&lt;br&gt;
4️⃣ Kube-Proxy does not interfere with this process—it only manages routing for Kubernetes Services (e.g., ClusterIP, NodePort, LoadBalancer).&lt;/p&gt;

&lt;p&gt;📌 Why Does This Matter?&lt;/p&gt;

&lt;p&gt;If a Pod isn’t getting an IP, the issue is likely with the CNI plugin, not Kube-Proxy.&lt;/p&gt;

&lt;p&gt;Misconfiguring the CNI plugin can lead to network failures in your cluster.&lt;/p&gt;

&lt;p&gt;Understanding the difference helps in debugging Kubernetes networking issues more efficiently.&lt;/p&gt;
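
&lt;p&gt;Two quick checks when debugging (standard kubectl; the CNI config directory is the default location on most distributions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# See which IP each Pod actually received
kubectl get pods -o wide

# On a node: inspect the installed CNI configuration
ls /etc/cni/net.d/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;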

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnozzde5z1zlwjehbiqlj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnozzde5z1zlwjehbiqlj.jpg" alt="Image description" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>platformengineering</category>
      <category>devops</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Kubernetes Myth #09: ClusterIP Services Always Use Round-Robin Load Balancing</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 13:52:45 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-09-clusterip-services-always-use-round-robin-load-balancing-139o</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-09-clusterip-services-always-use-round-robin-load-balancing-139o</guid>
      <description>&lt;p&gt;Myth: Kubernetes ClusterIP Services always distribute traffic using a round-robin algorithm.&lt;/p&gt;

&lt;p&gt;Reality: The default iptables mode of kube-proxy does not use round-robin. Instead, it uses random probability-based selection for load balancing.&lt;/p&gt;

&lt;p&gt;🔍 How It Actually Works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;kube-proxy sets up NAT rules using iptables.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When a request hits a ClusterIP, iptables randomly selects one of the backend pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The selection is not true round-robin but statistically distributed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔥 Want True Round-Robin?&lt;/p&gt;

&lt;p&gt;✅ Use kube-proxy in IPVS mode:&lt;/p&gt;

&lt;p&gt;Supports round-robin (rr), least connections (lc), and other scheduling methods.&lt;/p&gt;

&lt;p&gt;Enable it with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-proxy --proxy-mode=ipvs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;✅ Use an external load balancer (e.g., NGINX, HAProxy) if strict round-robin is needed.&lt;/p&gt;
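
&lt;p&gt;On kubeadm-based clusters the same switch usually lives in the kube-proxy ConfigMap rather than on the command line; a sketch using the KubeProxyConfiguration API:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; use "lc" for least connections
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;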

&lt;p&gt;🚀 Kubernetes is full of surprises—what other myths have you encountered? Drop them in the comments!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpq8x9nw2uoxwx3dk7xc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpq8x9nw2uoxwx3dk7xc.jpg" alt="Image description" width="511" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>platformengineering</category>
      <category>devops</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Kubernetes Myth #08: Kubelet runs only on worker nodes!</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 13:50:35 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-08-kubelet-runs-only-on-worker-nodes-3og9</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-08-kubelet-runs-only-on-worker-nodes-3og9</guid>
      <description>&lt;p&gt;🧐 If Kubelet is only for worker nodes, then who runs the control plane components like the API server, scheduler, and controller-manager?&lt;/p&gt;

&lt;p&gt;Reality: ❌ Kubelet runs on both worker and control plane nodes—but their roles differ!&lt;/p&gt;

&lt;p&gt;✅ On Worker Nodes → Kubelet manages application pods, ensuring they run as per the scheduler’s instructions.&lt;br&gt;
✅ On Control Plane Nodes → Kubelet manages static pods (like API server, scheduler, and controller-manager) by reading manifest files from /etc/kubernetes/manifests/.&lt;/p&gt;

&lt;p&gt;🔹 The real twist? Control plane components are NOT scheduled like regular pods! Instead, Kubelet runs them directly as static pods from those manifest files, bypassing the API server and scheduler entirely (the API server only sees read-only "mirror" Pods).&lt;/p&gt;

&lt;p&gt;💡 Understanding this helps in debugging control plane failures and troubleshooting cluster issues. Next time someone says, "Kubelet is only for worker nodes," you’ll know how to bust that myth!&lt;/p&gt;

&lt;p&gt;Have you ever had to debug Kubelet behavior on a control plane node? Share your experience in the comments! 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko5op26w5ssfc2uha2wu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko5op26w5ssfc2uha2wu.jpg" alt="Image description" width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>platformengineering</category>
      <category>devops</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Kubernetes Myth #07: K8s Uses Limits and Requests for Scheduling</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 13:42:37 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-07-k8s-uses-limits-and-requests-for-scheduling-24g3</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-07-k8s-uses-limits-and-requests-for-scheduling-24g3</guid>
      <description>&lt;p&gt;💡 Reality: The Kubernetes scheduler only considers resource requests when making scheduling decisions. Limits do NOT impact scheduling, nor does actual CPU/memory usage!&lt;/p&gt;

&lt;p&gt;🔍 How It Actually Works:&lt;/p&gt;

&lt;p&gt;✅ Requests define scheduling – The scheduler ensures a node has enough reserved capacity before placing a pod.&lt;br&gt;
❌ Limits do NOT impact scheduling – They only affect runtime behavior (CPU throttling, OOMKills).&lt;br&gt;
❌ Actual resource usage is ignored – Even if a node has free CPU/memory, if requests are already "reserved," a new pod won’t be scheduled there.&lt;/p&gt;

&lt;p&gt;🛠️ Why Does Kubernetes Work This Way?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ensures predictable resource allocation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prevents nodes from getting overloaded later&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Avoids constant pod rescheduling, ensuring stability&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔥 Myth busted! Kubernetes scheduling is based on requests, not limits or actual usage. Misunderstanding this can lead to inefficient resource planning and unexpected scheduling issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrxddr17zqk2tumpbkoq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrxddr17zqk2tumpbkoq.jpg" alt="Image description" width="800" height="957"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>platformengineering</category>
      <category>cloudnative</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes Myth #06: Kubernetes Pods Always Need a Service Account</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 13:23:37 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-06-3mb7</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-06-3mb7</guid>
      <description>&lt;p&gt;🛑 Myth: "Every pod in Kubernetes must need a service account to function."&lt;/p&gt;

&lt;p&gt;✅ Reality: A service account is only needed when a pod needs to communicate with the Kubernetes API server or requires an identity for authentication.&lt;/p&gt;

&lt;p&gt;But, Kubernetes assigns one to every pod by default.&lt;/p&gt;

&lt;p&gt;🔍 Why does Kubernetes do this?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Some workloads need to interact with the API server (e.g., retrieving secrets, managing resources, scaling applications).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes enforces a secure-by-default approach, ensuring every pod has an identity—even if it never uses it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It follows RBAC (Role-Based Access Control) best practices, restricting what workloads can do in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ But what if my pod doesn’t need API access?&lt;/p&gt;

&lt;p&gt;Even if your pod doesn’t interact with the Kubernetes API, it still gets a default service account. You can’t remove it, but you can strip its power to improve security.&lt;/p&gt;

&lt;p&gt;You can disable the token mount at two levels: the Pod level and the ServiceAccount (SA) level.&lt;/p&gt;
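
&lt;p&gt;Both use the same field, automountServiceAccountToken: false. A sketch of each (the names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pod level: disables the token mount for this pod only
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  automountServiceAccountToken: false
  containers:
    - name: app
      image: my-app:v1
---
# ServiceAccount level: applies to every pod using this SA
apiVersion: v1
kind: ServiceAccount
metadata:
  name: restricted-sa
automountServiceAccountToken: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;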

&lt;p&gt;🔍Which One to Use?&lt;/p&gt;

&lt;p&gt;Use pod-level: When you only want to restrict specific pods.&lt;/p&gt;

&lt;p&gt;Use SA-level: When you want to enforce the restriction namespace-wide for all pods using that SA.&lt;/p&gt;

&lt;p&gt;Tip: If both are set, the pod-level setting takes precedence.&lt;/p&gt;

&lt;p&gt;💡 Bottom Line: You can’t remove the service account itself, but you can make it powerless by removing its token. &lt;br&gt;
This is a simple yet effective way to reduce unnecessary attack surfaces in your cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcjo11j131fyk1e52dih5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcjo11j131fyk1e52dih5.jpg" alt="Image description" width="800" height="954"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>platformengineering</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes Myth #05: ClusterIP is Only for Internal Communication</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 13:22:54 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-05-2npn</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-05-2npn</guid>
      <description>&lt;p&gt;🛑 The Myth:&lt;br&gt;
"A ClusterIP service in Kubernetes is only for internal communication."&lt;/p&gt;

&lt;p&gt;✅ The Reality:&lt;br&gt;
Yes, a pure ClusterIP service is internal. But… even NodePort and LoadBalancer services rely on ClusterIP!&lt;/p&gt;

&lt;p&gt;💡 How It Actually Works:&lt;br&gt;
1️⃣ Every Kubernetes service (NodePort, LoadBalancer) has a ClusterIP behind the scenes.&lt;br&gt;
2️⃣ External traffic first hits the NodePort (on a node) or a LoadBalancer (via a cloud provider).&lt;br&gt;
3️⃣ Kubernetes routes that traffic through ClusterIP to distribute requests across pods.&lt;/p&gt;

&lt;p&gt;🔍 Breakdown of How Services Work:&lt;br&gt;
🔹 ClusterIP: Internal communication only.&lt;br&gt;
🔹 NodePort: Exposes a node’s port externally, but still forwards traffic through ClusterIP.&lt;br&gt;
🔹 LoadBalancer: Cloud-managed external access, but traffic ultimately flows via ClusterIP.&lt;/p&gt;

&lt;p&gt;📌 Bottom Line: ClusterIP isn’t just for internal traffic—it’s the core of Kubernetes networking, even for external services.&lt;/p&gt;

&lt;p&gt;💬 Have you encountered this myth before? Let’s discuss in the comments! 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ce16bvr83m7qg1k29iu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ce16bvr83m7qg1k29iu.jpg" alt="Image description" width="800" height="1151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpoef42cmcq65vumrei6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpoef42cmcq65vumrei6.jpg" alt="Image description" width="800" height="1108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fduic10lnwezyz736jdok.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fduic10lnwezyz736jdok.jpg" alt="Image description" width="800" height="1088"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>clusterip</category>
      <category>k8smyths</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Kubernetes Myth #04: K8s Can Work Without a CNI Plugin</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 13:22:38 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-04-o80</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-04-o80</guid>
      <description>&lt;p&gt;Think you can run Kubernetes without a CNI plugin? 🤔 Think again!&lt;/p&gt;

&lt;p&gt;🔮 Myth: Kubernetes can function without a CNI (Container Network Interface) plugin.&lt;/p&gt;

&lt;p&gt;✅ Reality: Kubernetes needs a CNI (Container Network Interface) plugin to function properly. Without one:&lt;br&gt;
❌ Pods can’t communicate across nodes.&lt;br&gt;
❌ kubectl exec and kubectl logs may stop working.&lt;br&gt;
❌ Pods might get stuck in ContainerCreating.&lt;/p&gt;

&lt;p&gt;Think of it like setting up a phone system without any network provider—sure, you have the hardware, but no way to actually connect!&lt;/p&gt;

&lt;p&gt;A CNI (Container Network Interface) plugin is essential for:&lt;br&gt;
✅ Pod-to-pod communication across nodes&lt;br&gt;
✅ IP address management for pods&lt;br&gt;
✅ The pod network that service discovery &amp;amp; load balancing depend on&lt;/p&gt;

&lt;p&gt;Unless you rely on host networking (which isn’t scalable), a CNI plugin like Calico, Flannel, or Cilium is a must!&lt;/p&gt;

&lt;p&gt;💡 Moral of the story? Don’t overlook networking—without a CNI, your Kubernetes cluster is just a bunch of isolated containers.&lt;/p&gt;

&lt;p&gt;Which CNI do you prefer? Drop your thoughts below! ⬇️&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faon5hcj79zhbtd1288pn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faon5hcj79zhbtd1288pn.jpg" alt="Image description" width="800" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cni</category>
      <category>calico</category>
      <category>k8smyths</category>
    </item>
    <item>
      <title>Kubernetes Myth #03: Kube-Proxy is a Must-Have</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 13:21:31 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-03-kube-proxy-is-a-must-have-ccm</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-03-kube-proxy-is-a-must-have-ccm</guid>
      <description>&lt;p&gt;You’ve been lied to. Kube-Proxy is NOT mandatory in Kubernetes.&lt;/p&gt;

&lt;p&gt;🚫 Myth: Kubernetes needs Kube-Proxy to function.&lt;/p&gt;

&lt;p&gt;✅ Reality: Kubernetes can run just fine without Kube-Proxy if a CNI plugin like Cilium takes over its job using eBPF-based networking.&lt;/p&gt;

&lt;p&gt;The Shocking Truth&lt;/p&gt;

&lt;p&gt;Kube-Proxy traditionally manages service-to-pod communication using iptables/ipvs, but that comes with overhead. What if you could eliminate it entirely?&lt;/p&gt;

&lt;p&gt;That’s where Cilium steps in. Instead of routing packets through layers of iptables, Cilium hooks directly into the kernel using eBPF—handling traffic dynamically, efficiently, and at lightning speed.&lt;/p&gt;

&lt;p&gt;Why Should You Care?&lt;/p&gt;

&lt;p&gt;⚡ Lower Latency – No more iptables slowdowns.&lt;br&gt;
📈 Better Scalability – Handles massive clusters with ease.&lt;br&gt;
🔒 Stronger Security – Enables fine-grained network policies &amp;amp; deep observability.&lt;/p&gt;

&lt;p&gt;So next time someone tells you Kube-Proxy is mandatory, challenge them. Kubernetes can be even more powerful without it.&lt;/p&gt;
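
&lt;p&gt;With Cilium, kube-proxy replacement is enabled at install time via Helm (on recent Cilium releases; the API server host and port below are placeholders for your cluster):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=&amp;lt;API_SERVER_IP&amp;gt; \
  --set k8sServicePort=6443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;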

&lt;p&gt;Have you tried Kubernetes without Kube-Proxy? Drop your thoughts below! ⬇️&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8w6c92zroo87k6xefs1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8w6c92zroo87k6xefs1.jpg" alt="Image description" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>kubeproxy</category>
      <category>k8smyths</category>
      <category>platformengineering</category>
    </item>
    <item>
      <title>Kubernetes Myth #02: All Pods Are Created Using the API Server and Scheduler</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 12:59:57 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-02-all-pods-are-created-using-the-api-server-and-scheduler-799</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-02-all-pods-are-created-using-the-api-server-and-scheduler-799</guid>
      <description>&lt;p&gt;❌ Myth: Every Pod in Kubernetes is created through the API server and scheduled by the Kubernetes scheduler.&lt;/p&gt;

&lt;p&gt;✅ Reality: Not all Pods follow this path! There’s a special type of Pod that completely bypasses the API server and scheduler—Static Pods.&lt;/p&gt;

&lt;p&gt;Here’s how they break the rules:&lt;br&gt;
🔹 No API Server Involvement – Static Pods are launched directly by the kubelet from YAML files stored in "/etc/kubernetes/manifests/".&lt;br&gt;
🔹 No Scheduler Needed – The kubelet binds them to a specific node instead of waiting for the Kubernetes scheduler to assign them.&lt;br&gt;
🔹 Mirror Pods in the API Server – They appear in kubectl get pods, but deleting them via kubectl delete pod won’t work—you have to remove their YAML file!&lt;br&gt;
🔹 Critical for Kubernetes Itself – The API server, scheduler, and controller manager in self-managed clusters are all Static Pods running outside the normal scheduling flow.&lt;/p&gt;
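
&lt;p&gt;Trying this yourself is as simple as dropping a manifest into the kubelet’s static pod directory; a minimal sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Save as /etc/kubernetes/manifests/static-nginx.yaml
# The kubelet on this node starts it directly; no scheduler involved.
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;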

&lt;p&gt;💡 Fun Fact: Since Static Pods start before the CNI plugin is ready, they initially have no network! Only after the CNI kicks in do they get an IP.&lt;/p&gt;

&lt;p&gt;Would you ever use a Static Pod outside of the control plane? Let’s discuss below!&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64swkot7rfstxsh3n7vd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64swkot7rfstxsh3n7vd.jpg" alt="Image description" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>k8smyths</category>
      <category>platformengineering</category>
    </item>
    <item>
      <title>Kubernetes Myth #01: You Need 3 Control Plane Nodes!</title>
      <dc:creator>Rajesh Deshpande</dc:creator>
      <pubDate>Wed, 12 Mar 2025 12:47:40 +0000</pubDate>
      <link>https://dev.to/rajeshdeshpande02/kubernetes-myth-01-1epj</link>
      <guid>https://dev.to/rajeshdeshpande02/kubernetes-myth-01-1epj</guid>
      <description>&lt;p&gt;We've all heard this:&lt;br&gt;
&lt;strong&gt;"Kubernetes requires at least 3 control plane nodes for high availability!"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But here’s the reality—that’s not always true.&lt;/p&gt;

&lt;p&gt;Take AWS EKS, for example. It runs with only 2 API server nodes, not 3. However, it still ensures high availability.&lt;/p&gt;

&lt;p&gt;So, what’s going on? 🤔&lt;/p&gt;

&lt;p&gt;🔹 The real 3-node requirement comes from etcd, not Kubernetes itself. etcd, which stores the cluster state, follows Raft consensus and needs a quorum to function. That’s why AWS EKS runs 3 etcd nodes—to tolerate failures while maintaining consistency.&lt;/p&gt;

&lt;p&gt;🔹 But the Kubernetes API Server, Scheduler, and Controller Manager don’t require quorum. That’s why AWS EKS can run just 2 API server nodes while keeping the control plane available.&lt;/p&gt;

&lt;p&gt;💡 Takeaway?&lt;br&gt;
Not every Kubernetes cluster follows the same HA model. Don’t blindly apply rules—understand why they exist.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkqk486hfx8k68b8qol6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkqk486hfx8k68b8qol6.jpg" alt="AWS EKS Control Plan with 2 API Server node" width="800" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k8smyths</category>
      <category>cloudnative</category>
      <category>controlplane</category>
    </item>
  </channel>
</rss>
