🛠 Setting the Stage: A Kind Cluster
Kubernetes is full of magic, but one of its most fascinating components is kube-proxy. It’s the silent operator that ensures traffic hitting a Service gets distributed across the right Pods. Under the hood, kube-proxy leverages Linux iptables to make this happen. Let’s peel back the layers and see it in action.
For this demo, I spun up a 3-node Kind cluster. On top of it, I deployed a simple nginx Deployment exposed via a ClusterIP Service.
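A minimal way to reproduce that setup might look like this (the config file name and cluster name are my own illustrative choices, not from the original post):

```shell
# Write a 3-node Kind config: 1 control plane + 2 workers.
# The file name kind-config.yaml is an assumption.
cat <<'EOF' > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF

# Cluster name "kube-proxy-demo" is illustrative.
kind create cluster --name kube-proxy-demo --config kind-config.yaml
```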
Here’s the deployment and service in action:
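A sketch of how that Deployment and Service could be created (the names match the ones used in this post; two replicas line up with the 50/50 iptables split we'll see later):

```shell
# Create a 2-replica nginx Deployment and expose it as a ClusterIP Service.
kubectl create deployment nginx-deployment --image=nginx --replicas=2
kubectl expose deployment nginx-deployment --port=80 --target-port=80

# Note the ClusterIP assigned to the Service; we'll look for it in iptables.
kubectl get svc nginx-deployment
```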
📜 Peeking into iptables
Now comes the fun part. I logged into one of the nodes where a Pod is running and listed the NAT rules in the KUBE-SERVICES chain:
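One way to do this with a Kind cluster is to exec into the node's container (the node name below is an assumption based on my illustrative cluster name; `docker ps` lists the real ones):

```shell
# Kind nodes are Docker containers, so we can run iptables inside one.
# The chain names and IPs you see will be specific to your cluster.
docker exec kube-proxy-demo-worker \
  iptables -t nat -L KUBE-SERVICES -n | grep nginx-deployment
```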

Notice the entry for our nginx-deployment Service: the destination IP here is the ClusterIP of the Service. This is kube-proxy’s starting point for redirecting traffic.
🔀 Diving into the Service Chain
Every Service gets its own chain. For nginx, that’s KUBE-SVC-WRNOD73BKRQH4VVX. Let’s inspect it:
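To inspect it, list that chain in the nat table (again from inside a node; the hash suffix is cluster-specific, so copy yours from the KUBE-SERVICES output):

```shell
# List the per-Service chain. Expect one KUBE-SEP-* jump rule per endpoint.
docker exec kube-proxy-demo-worker \
  iptables -t nat -L KUBE-SVC-WRNOD73BKRQH4VVX -n
```

With two replicas you should see two KUBE-SEP-* rules, the first carrying a `statistic mode random probability 0.5` match.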

And here’s the magic:

- When traffic hits the ClusterIP, kube-proxy rewrites the destination to one of the Pod IPs backing the Deployment.
- The rules show a probability ratio, in this case 50/50: half the traffic goes to one Pod, and the other half to the second.

This is how kube-proxy achieves load balancing using nothing more than iptables.
So, what did we just see?

- ClusterIP → Pod IP translation via iptables.
- Masquerading ensures the source IP is rewritten correctly.
- Probability rules distribute traffic evenly across endpoints.
🌐 How DNS Works in the Cluster
So far, we’ve seen how kube-proxy handles traffic routing and load balancing. But how does your application even know where to send requests? That’s where CoreDNS comes in.
CoreDNS acts as the nameserver inside Kubernetes, resolving Service names into their corresponding ClusterIPs. Let’s walk through it step by step.
🔍 Inspecting the kube-dns Service
In the kube-system namespace, you’ll find the kube-dns Service. This is essentially the front door to CoreDNS:
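A quick look at it (in most clusters the ClusterIP is the .10 address of the Service CIDR, e.g. 10.96.0.10, though that's cluster-dependent):

```shell
# The kube-dns Service fronts the CoreDNS Pods.
kubectl get svc kube-dns -n kube-system

# Its endpoints are the CoreDNS Pod IPs.
kubectl get endpoints kube-dns -n kube-system
```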
📄 The resolv.conf File
Inside Pods, the resolv.conf file contains the nameserver details and DNS search domains. This is how Kubernetes ensures that when you query something like nginx-deployment.default.svc.cluster.local, it knows how to resolve it.
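You can inspect it from any running Pod; here I exec through the Deployment, which picks one of its Pods:

```shell
# Print the DNS config Kubernetes injected into the Pod.
kubectl exec deploy/nginx-deployment -- cat /etc/resolv.conf
```

Typically you'll see the kube-dns ClusterIP as the nameserver, search domains like `default.svc.cluster.local svc.cluster.local cluster.local`, and `options ndots:5`, which is what lets short names like `nginx-deployment` resolve.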
🧪 Testing with nslookup
Let’s put it to the test. Logging into a node and running an nslookup shows the DNS resolution in action:
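A hedged way to reproduce this yourself, using a throwaway busybox Pod instead of a node shell (the Pod name and image tag are my assumptions):

```shell
# Spin up a temporary Pod, run the lookup, and clean up afterwards.
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup nginx-deployment.default.svc.cluster.local
```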
And it works exactly as expected — the Service name resolves to the ClusterIP, which kube-proxy then maps to the Pod IPs.
🎯 Wrapping It All Up
Between kube-proxy and CoreDNS, Kubernetes ensures that:
- Traffic hitting a Service is load balanced across Pods.
- Service names are resolved seamlessly into ClusterIPs.
- Applications don’t need to worry about IP addresses; they just use DNS names.

These two components are the backbone of Kubernetes networking. Without them, Services wouldn’t be discoverable or scalable.
🔥 And that’s the no-bluff walkthrough of kube-proxy and CoreDNS — two vital pieces of the Kubernetes puzzle. Next time you deploy an app, you’ll know exactly how the traffic finds its way to the right Pod.
That’s what kube-proxy does. Isn’t it really cool?