DEV Community

Manish Pillai
How Kubernetes Maps Service IP to Pods

When you first learn Kubernetes, you hear:

“Pods talk to each other using Services.”

It sounds simple, but there's a catch: the Service IP doesn't actually exist on any network interface.

The Problem: Pods are Ephemeral

Pods are temporary. If you delete a Pod and it is recreated, it gets a new IP address.

  • Frontend calls Backend via Pod IP: 10.244.1.5
  • Backend restarts: The old IP no longer exists.
  • Result: Connection failure. ❌

The Solution: Services

Kubernetes provides Services to act as a stable entry point. They offer:

  1. Stable IP (ClusterIP)
  2. DNS name
  3. Load balancing

Instead of calling a specific Pod, the frontend calls the Service by its stable name, backend-service.
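As a sketch, a minimal Service definition might look like this (the names, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend        # matches Pods labeled app=backend
  ports:
    - port: 80          # stable port on the Service IP
      targetPort: 8080  # port the Pods actually listen on
```

Any Pod in the cluster can now reach the backend at http://backend-service:80, no matter which Pod IPs are currently alive.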

The Catch: The "Ghost" IP

A Service IP (e.g., 10.96.0.10) is:

  • Not assigned to any Pod.
  • Not owned by any Node.
  • Not visible in ifconfig or ip addr (at least in the default iptables mode; IPVS mode binds Service IPs to a dummy interface, but traffic still never terminates there).

How it Works: Data vs. Routing

1. Finding the Pods

When you define a Service with a selector, Kubernetes creates an EndpointSlice. This is essentially a list of the actual Pod IPs behind that Service.

Note: This is just data stored in the API Server. It doesn't route anything yet.
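For illustration, the generated EndpointSlice is roughly shaped like this (the object name and Pod IPs are hypothetical; real slices carry extra fields such as readiness conditions):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: backend-service-abc12          # auto-generated name
  labels:
    kubernetes.io/service-name: backend-service
addressType: IPv4
ports:
  - port: 8080                         # the Pods' targetPort
endpoints:
  - addresses: ["10.244.1.5"]
  - addresses: ["10.244.2.8"]
```

When Pods come and go, the control plane rewrites this list; nothing else about the Service changes.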

2. kube-proxy Steps In

kube-proxy runs on every node as a DaemonSet. Its job is to watch the API Server for changes to Services and EndpointSlices and translate them into local networking rules.

3. Creating the Rules (iptables)

kube-proxy uses iptables (a front end to the Linux kernel's netfilter framework) to intercept packets. It sets up a chain of rules:

  1. Match: "Is this packet going to Service IP 10.96.0.10?"
  2. Select: "Pick one of the Pod IPs from the list" (roughly at random, so load spreads evenly).
  3. Rewrite (DNAT): Change the destination IP from the Service IP to the chosen Pod IP.
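A heavily simplified excerpt of what those rules look like (chain suffixes, IPs, and ports are illustrative; on a real node you can inspect the actual rules with `sudo iptables -t nat -L -n`):

```
# 1. Match: traffic to the Service IP jumps to a per-Service chain
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp --dport 80 -j KUBE-SVC-ABC123

# 2. Select: pick a backend, ~50% probability each across two Pods
-A KUBE-SVC-ABC123 -m statistic --mode random --probability 0.5 -j KUBE-SEP-POD1
-A KUBE-SVC-ABC123 -j KUBE-SEP-POD2

# 3. Rewrite: DNAT the destination to the chosen Pod IP and port
-A KUBE-SEP-POD1 -p tcp -j DNAT --to-destination 10.244.1.5:8080
-A KUBE-SEP-POD2 -p tcp -j DNAT --to-destination 10.244.2.8:8080
```

Note that kube-proxy only writes these rules; it never sees the packets themselves.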

The Request Flow

When a request is sent:

  1. Client/Pod sends a packet to the Service IP.
  2. The Linux kernel checks the iptables rules (installed by kube-proxy).
  3. DNAT happens: the destination is rewritten to a Pod IP.
  4. The packet is routed to the actual Pod.
  5. Connection tracking (conntrack) remembers the translation, so reply packets are rewritten back to the Service IP automatically and the client never sees the Pod IP.
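For illustration, after the DNAT the kernel's connection-tracking table holds an entry pairing the original destination (the Service IP) with the real one (the Pod IP). The addresses and ports below are hypothetical; the shape mirrors `conntrack -L` output:

```
tcp  6 117 ESTABLISHED src=10.244.3.7 dst=10.96.0.10 sport=51234 dport=80 \
                       src=10.244.1.5 dst=10.244.3.7 sport=8080 dport=51234
```

The first tuple is the packet as the client sent it; the second is the reply direction, which the kernel uses to undo the rewrite on the way back.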

kube-proxy Modes

While the concept remains the same, the efficiency of how rules are matched varies:

| Mode | Technology | Performance | Scaling |
| --- | --- | --- | --- |
| iptables | Linux rule list | Standard | Slower as you add thousands of Services |
| IPVS | Linux load balancer | High | Very fast; uses hash tables for lookups |
| userspace | kube-proxy process | Low | Slow (legacy; no longer used) |

Key Takeaway

The name kube-proxy is confusing because, in its modern iptables and IPVS modes, it is not actually a proxy. It doesn't sit in the middle of your traffic.

  • kube-proxy runs on each node and programs the networking rules that enable Service discovery and load balancing.
  • The Linux Kernel is the Data Plane that does the actual heavy lifting of routing packets (using iptables or IPVS).

Remember: kube-proxy sets things up; the kernel does the real work.
