If you've ever looked at a Kubernetes cluster and wondered why Pods and Services get the IPs they do, you’ve probably bumped into the term CIDR.
It sounds intimidating, but it’s actually just a clever way to keep the "address book" of your cluster organized. Let’s break it down like humans.
First off: What is CIDR?
Think of CIDR as a way to define a territory of IP addresses without having to list every single one.
Example: 10.244.0.0/16
Instead of saying, "I have 65,536 addresses from 10.244.0.0 to 10.244.255.255," we just use that one little string.
The number after the slash tells you how much of the address is "locked" (the network) and how much is "free real estate" (the hosts).
- /16 = A massive range (enough room for a whole cluster's worth of Pods).
- /24 = A smaller chunk (perfect for a single node).
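If you want to poke at these ranges yourself, Python's standard `ipaddress` module does the math for you:

```python
# Expand a CIDR string like 10.244.0.0/16 and see what it covers.
import ipaddress

cluster = ipaddress.ip_network("10.244.0.0/16")

print(cluster.num_addresses)      # 65536 addresses in a /16
print(cluster.network_address)    # 10.244.0.0   (first address)
print(cluster.broadcast_address)  # 10.244.255.255 (last address)

node_chunk = ipaddress.ip_network("10.244.1.0/24")
print(node_chunk.num_addresses)   # 256 addresses in a /24
```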
Why this matters in Kubernetes
When you spin up a cluster, it gets a big "bucket" of IPs (a /16). But a cluster has multiple nodes. To keep things from getting messy, Kubernetes splits that big bucket into smaller bowls (/24) for each node.
The Hierarchy:
- Cluster Level: 10.244.0.0/16
- Node 1: 10.244.1.0/24
- Node 2: 10.244.2.0/24
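That split is easy to see in code. Here's a sketch of carving the cluster /16 into per-node /24 slices (the variable names are just illustrative):

```python
# Carve the cluster-level /16 into the /24 "bowls" handed to each node.
import ipaddress

cluster = ipaddress.ip_network("10.244.0.0/16")
node_slices = list(cluster.subnets(new_prefix=24))

print(len(node_slices))  # 256 possible /24 node ranges
print(node_slices[1])    # 10.244.1.0/24 -> Node 1
print(node_slices[2])    # 10.244.2.0/24 -> Node 2
```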
The Two Types of CIDR
A healthy cluster usually manages two separate "buckets":
- Pod CIDR: For your actual containers.
- Service CIDR: For the stable entry points (Services).
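One rule worth remembering when you configure a cluster: the two buckets must never overlap. A quick sanity check, using common default ranges as an example:

```python
# Verify that the Pod and Service ranges don't collide.
# (10.244.0.0/16 and 10.96.0.0/12 are common defaults, used here as examples.)
import ipaddress

pod_cidr = ipaddress.ip_network("10.244.0.0/16")
svc_cidr = ipaddress.ip_network("10.96.0.0/12")

print(pod_cidr.overlaps(svc_cidr))  # False -> safe to use together
```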
How Pod IPs are assigned
When a node joins your cluster, the kube-controller-manager says, "Welcome! Here is your personal slice of the IP pie."
- The Split: The controller assigns a small range (like a /24) to that specific node.
- The Creation: You deploy a Pod.
- The CNI (Calico/Flannel): The CNI plugin looks at the node's assigned range and grabs a free IP.
- Result: 10.244.1.5
Because every node has its own unique range, Pod IPs never overlap. No collisions.
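Here's a toy model of that flow. It is nothing like a real CNI plugin's implementation, but it shows why per-node slices make collisions impossible:

```python
# Toy per-node IPAM: each node owns its own /24 and hands out the
# next free host IP from it. (Illustrative sketch, not a real CNI.)
import ipaddress

class NodeIPAM:
    def __init__(self, cidr: str):
        # hosts() yields the usable addresses in the node's slice, in order
        self.free_ips = ipaddress.ip_network(cidr).hosts()

    def allocate(self) -> str:
        # Grab the next free IP from this node's range only
        return str(next(self.free_ips))

node1 = NodeIPAM("10.244.1.0/24")
node2 = NodeIPAM("10.244.2.0/24")

print(node1.allocate())  # 10.244.1.1
print(node2.allocate())  # 10.244.2.1 -> different slice, no collision possible
```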
How Service IPs are assigned
Service IPs are a bit different. They don't care about nodes because Services are virtual.
- The Request: You create a Service.
- The API Server: The kube-apiserver looks at the global Service CIDR bucket.
- The Assignment: It picks an IP (e.g., 10.96.0.10) and stamps it on the Service.
You won't find this IP on any actual hardware interface. It lives purely in the cluster's iptables/IPVS.
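A minimal sketch of that idea, assuming the common default Service CIDR of 10.96.0.0/12 (a real allocator also tracks which IPs are already taken and skips them):

```python
# Toy Service IP allocation: one flat, cluster-wide pool.
import ipaddress

service_cidr = ipaddress.ip_network("10.96.0.0/12")  # a common default
pool = service_cidr.hosts()

print(next(pool))  # 10.96.0.1 -> typically reserved for the "kubernetes" Service
print(next(pool))  # 10.96.0.2 -> the next Service gets this one
```

Notice there's no concept of "which node" anywhere in this sketch; the pool is global, which is exactly why the API Server can manage it on its own.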
The "Why": Why different components?
This used to confuse me. Why does the Controller Manager handle Pods, but the API Server handles Services?
- Pod CIDRs are dynamic: Nodes come and go. We need a "Controller" to constantly watch the cluster and hand out ranges to new nodes.
- Service CIDRs are static: It’s just a flat pool of IPs. The API Server can just grab the next available one from the list while it's processing your YAML.
Summary
- Pod IP: Assigned by CNI from the Node’s specific slice.
- Service IP: Assigned by the API Server from the Cluster’s global pool.