Episode 6: The Harbour Gates – Services and How Traffic Flows In 🚦
The Pod IP Address Problem Is Going to Make You CRY 😭
OK. Your Deployment is running. Three beautiful Pods. You're feeling good. You feel like a Kubernetes PROFESSIONAL.
Then someone asks: "What's the URL of your application?"
You check the Pod IPs.
kubectl get pods -o wide
# NAME IP
# web-app-abc123 10.244.1.5
# web-app-def456 10.244.2.8
# web-app-ghi789 10.244.3.2
"It's... one of these three?" you say. "Pick one. 10.244.1.5 probably?"
WRONG. For three reasons:
- Which one? There are THREE Pods. Traffic should go to ALL of them. Not just one. Load balancing! Hello!
- These IPs change. Every time a Pod dies and gets recreated – new IP. The IP you just gave someone? Gone in five minutes.
- External traffic can't reach Pod IPs. These are internal cluster IPs. They don't exist outside the cluster.
You're standing at a harbour where the quays keep MOVING and the addresses keep CHANGING and you're trying to tell ships where to dock by GUESSING.
This is where Services come in. And they are BEAUTIFUL. 😍
The SIPOC of a Service 🗂️
| Element | Question | Answer |
|---|---|---|
| Supplier | Who defines the Service? | You, your platform team, Helm charts |
| Input | What goes in? | A label selector pointing to target Pods |
| Process | What does the Service do? | Maintains a stable IP/DNS name, load-balances traffic across matching Pods |
| Output | What comes out? | A single stable endpoint that routes to healthy Pods |
| Consumer | Who uses it? | Other Pods, Ingress controllers, external users, your monitoring stack |
What a Service Actually Is 🎯
A Service gives your Pods a stable identity. A fixed address. A name that doesn't change even when every single Pod behind it gets replaced.
Think of the harbour gate. The physical gate doesn't move even if the ships behind it change every day. Ships coming IN always go through Gate 7. Gate 7 figures out which berth to send them to. Gate 7 is always there. Always at the same location. The berths? Those change. The gate? Never.
External traffic
      |
      v
🚦 Service: "web-app"
   Stable IP: 10.96.45.12
   Stable DNS: web-app.default.svc.cluster.local
      |
      |--- load balances to ---> 📦 Pod: 10.244.1.5 (web-app-abc123)
      |                          📦 Pod: 10.244.2.8 (web-app-def456)
      |                          📦 Pod: 10.244.3.2 (web-app-ghi789)
# Pod abc123 dies. New pod web-app-jkl012 with IP 10.244.1.9 appears.
# The SERVICE IP? Still 10.96.45.12. Nothing changed from the outside. 🎩
How Services Find Pods: Label Selectors 🏷️
A Service doesn't care about specific Pod names or IPs. It uses labels. Any Pod with the matching labels gets included automatically.
# Your Deployment gave every Pod this label:
labels:
  app: web-app

# Your Service finds Pods using that label:
selector:
  app: web-app
# Labels are how everything connects in Kubernetes.
# See the labels on your Pods:
kubectl get pods --show-labels
# NAME LABELS
# web-app-abc123 app=web-app,pod-template-hash=7d9f8b4c9f
# web-app-def456 app=web-app,pod-template-hash=7d9f8b4c9f
# See which Pods a Service is routing to (the Endpoints):
kubectl get endpoints web-app
# NAME ENDPOINTS AGE
# web-app 10.244.1.5:80,10.244.2.8:80,10.244.3.2:80 5m
When a Pod dies, its IP disappears from the Endpoints list. When a new Pod appears with the matching label, its IP gets ADDED. The Service always points to exactly the healthy Pods. 💪
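Conceptually, the selector is just a subset match on labels, and the endpoint list is recomputed from whatever Pods currently exist. A toy sketch in Python – purely illustrative, NOT the real controller code, and the Pod data below is made up:

```python
def selector_matches(selector, labels):
    """A Pod matches when every key/value in the selector appears in its labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def compute_endpoints(selector, pods, port=80):
    """Rebuild the endpoint list from the Pods that currently exist and match."""
    return [f"{pod['ip']}:{port}" for pod in pods if selector_matches(selector, pod["labels"])]

pods = [
    {"name": "web-app-abc123", "ip": "10.244.1.5", "labels": {"app": "web-app"}},
    {"name": "web-app-def456", "ip": "10.244.2.8", "labels": {"app": "web-app"}},
    {"name": "other-thing",    "ip": "10.244.9.9", "labels": {"app": "other"}},
]

print(compute_endpoints({"app": "web-app"}, pods))
# ['10.244.1.5:80', '10.244.2.8:80']

# A Pod dies; its replacement appears with a NEW IP -> endpoints follow automatically:
pods[0] = {"name": "web-app-jkl012", "ip": "10.244.1.9", "labels": {"app": "web-app"}}
print(compute_endpoints({"app": "web-app"}, pods))
# ['10.244.1.9:80', '10.244.2.8:80']
```

Note that extra labels on a Pod (like `pod-template-hash`) don't matter – the selector only requires that ITS keys match.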
The Three Types of Service: Gate Options 🛂
Type 1: ClusterIP – The Internal Gate 🔒
Default type. Only accessible inside the cluster. Other Pods can reach it. External traffic cannot.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: ClusterIP     # Default. Internal only.
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80        # Port on the Service (what callers use)
      targetPort: 80  # Port on the Pod (what your app listens on)
kubectl apply -f service-clusterip.yaml
kubectl get service web-app
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# web-app ClusterIP 10.96.45.12 <none> 80/TCP 5s
# Test it from INSIDE the cluster:
kubectl run test-pod --image=busybox --rm -it --restart=Never -- wget -O- http://web-app
# Downloads the nginx page! The Service DNS works!
DNS for services inside the cluster follows this pattern:
<service-name>.<namespace>.svc.cluster.local
So web-app in the production namespace is reachable at:
web-app.production.svc.cluster.local
From the SAME namespace? Just web-app works. No need for the full address. Kubernetes is helpful like that. 🤗
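One more mapping trick the spec above doesn't show: port and targetPort don't have to be the same number, and targetPort can even reference a container port by NAME. A sketch – the port name `http` and the container port 8080 are assumptions, and the name must match one declared in your Pod template:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80          # Callers still use port 80 on the Service...
      targetPort: http  # ...forwarded to the container port NAMED "http"
                        # (e.g. declared in the Pod template as
                        #  ports: [{name: http, containerPort: 8080}])
```

This way the app can move to a different container port and the Service never needs editing – only the Pod template changes.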
Type 2: NodePort – The Gate With a Door on the Outside 🚪
Exposes the Service on a specific port on every node in the cluster. External traffic can reach it through <NodeIP>:<NodePort>.
apiVersion: v1
kind: Service
metadata:
  name: web-app-nodeport
spec:
  type: NodePort
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80          # ClusterIP port (internal)
      targetPort: 80    # Pod port (your app)
      nodePort: 30080   # External port on every node (30000-32767 range)
kubectl apply -f service-nodeport.yaml
kubectl get service web-app-nodeport
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
# web-app-nodeport NodePort 10.96.45.13 <none> 80:30080/TCP
# Access it via any node's IP on port 30080:
curl http://<any-node-ip>:30080
# With minikube:
minikube service web-app-nodeport --url
# http://127.0.0.1:30080
NodePort works but it's not elegant. It's like telling everyone "come to Rotterdam Harbour, Gate 30080." Possible? Yes. Professional? Questionable. 😬
NodePort is mostly useful for development and testing. For production external access, you want LoadBalancer or Ingress (Episode 7).
Type 3: LoadBalancer – The VIP Entrance 👑
Provisions an actual cloud load balancer (Azure Load Balancer, AWS ELB, Google Cloud Load Balancing) with a public IP. The professional way to expose services externally.
apiVersion: v1
kind: Service
metadata:
  name: web-app-lb
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80        # Public-facing port
      targetPort: 80  # Pod port
kubectl apply -f service-loadbalancer.yaml
# Watch for the external IP to be assigned (takes 30-90 seconds in cloud)
kubectl get service web-app-lb --watch
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
# web-app-lb LoadBalancer 10.96.45.14 <pending> 80:31234/TCP
# web-app-lb LoadBalancer 10.96.45.14 20.123.45.67 80:31234/TCP <- Got it!
# Now anyone can reach your app at:
curl http://20.123.45.67
# 🎉 Your app is live on the public internet!
One LoadBalancer Service = one cloud load balancer = money. If you have 20 services that need external access, that's 20 load balancers. That's an expensive harbour gate. 💸
This is why Ingress exists (Episode 7) – one load balancer to rule them all.
ExternalName: The Forwarding Address 📬
A fourth, special service type. Not for routing to Pods – for routing to something outside the cluster. Like a redirect.
apiVersion: v1
kind: Service
metadata:
  name: external-database
  namespace: production
spec:
  type: ExternalName
  externalName: my-database.azure.com   # The actual external DNS name
Now Pods inside the cluster can call external-database and get routed to my-database.azure.com. Clean DNS aliasing (under the hood it's just a CNAME record – no proxying, no port remapping). Useful when migrating services out of the cluster or referencing managed cloud services. 🎯
Headless Services: No Gate, Just a Directory 📇
Sometimes you don't WANT load balancing. You want to talk to specific Pods directly – for stateful applications like databases where you need to know which replica you're talking to.
apiVersion: v1
kind: Service
metadata:
  name: database-headless
spec:
  clusterIP: None   # "None" = headless! No virtual IP assigned.
  selector:
    app: database
  ports:
    - port: 5432
# With a headless service, DNS returns the IPs of ALL matching Pods directly:
kubectl run test --image=busybox --rm -it --restart=Never -- nslookup database-headless
# Name: database-headless.default.svc.cluster.local
# Address: 10.244.1.8 <- pod 1 IP
# Address: 10.244.2.4 <- pod 2 IP
# Address: 10.244.3.1 <- pod 3 IP
# (All Pod IPs returned – caller picks which one to use)
Headless Services are used heavily with StatefulSets (Episode 14) where each replica needs a stable individual DNS name. 🎯
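A quick preview of why that matters. If a StatefulSet (assumed here to be named database, in the default namespace) uses this headless Service as its governing Service, each replica gets its own stable per-Pod DNS name following the pattern `<pod>.<service>.<namespace>.svc.cluster.local`:

```
# Individual, stable, per-Pod DNS names (StatefulSet + headless Service):
#   database-0.database-headless.default.svc.cluster.local
#   database-1.database-headless.default.svc.cluster.local
#   database-2.database-headless.default.svc.cluster.local
# Perfect for "replica 0 is the primary, talk to IT specifically" setups.
```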
Port Forwarding: The Service Bypass for Debugging 🔧
In development, you often want to reach a Service or Pod directly from your laptop without going through an Ingress or NodePort. Port forwarding to the rescue:
# Forward your laptop's port 8080 to the Service's port 80
kubectl port-forward service/web-app 8080:80
# Now in another terminal:
curl http://localhost:8080 # Talks directly to your Service!
# Or forward directly to a specific Pod:
kubectl port-forward pod/web-app-abc123 8080:80
# Ctrl+C to stop the tunnel
This is your development debugging shortcut. Not for production. The Harbourmaster frowns upon tunnels that bypass the official gates. 😤
The Full Picture: Traffic Flow Through the Cluster 🗺️
Internet
   |
   v
☁️ Cloud Load Balancer (if LoadBalancer type)
   |
   v
🚦 Service (kube-proxy maintains iptables rules on every node)
   |
   |--- 33% ---> 📦 Pod: web-app-abc123 (Node 1)
   |--- 33% ---> 📦 Pod: web-app-def456 (Node 2)
   |--- 33% ---> 📦 Pod: web-app-ghi789 (Node 3)
kube-proxy is what makes Services work at the network level. It runs on every node and maintains iptables (or ipvs) rules that redirect traffic from the Service's virtual IP to the actual Pod IPs. When Pods change, kube-proxy updates the rules. All behind the scenes. You never touch it directly.
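The real mechanism is kernel-level NAT rules, but the EFFECT is easy to sketch in Python – purely illustrative, not how kube-proxy is implemented, with made-up IPs matching the diagram above:

```python
import random

# The Service's virtual IP maps to the current set of real Pod endpoints.
# kube-proxy keeps this mapping fresh as Pods come and go.
service_table = {
    "10.96.45.12": ["10.244.1.5:80", "10.244.2.8:80", "10.244.3.2:80"],
}

def route(virtual_ip):
    """Rewrite a connection aimed at the Service's VIP to one real Pod endpoint."""
    backends = service_table[virtual_ip]
    # Each new connection picks a backend; over many connections the
    # traffic spreads roughly evenly across the Pods.
    return random.choice(backends)

# Every connection to the STABLE virtual IP lands on one of the EPHEMERAL Pods:
for _ in range(3):
    print(route("10.96.45.12"))
```

When a Pod dies, kube-proxy simply drops its entry from the backend list – callers keep using the same virtual IP and never notice.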
# See all Services in your cluster
kubectl get services --all-namespaces
# Get detailed info about a Service
kubectl describe service web-app
# Pay attention to:
# - Selector: what labels it matches
# - Endpoints: which Pod IPs are currently routed to
# - Port: the mapping
# See the Endpoints object separately
kubectl get endpoints web-app
The Harbourmaster's Log – Entry 6 📜
Added Services to the harbour today. Gave every gate a permanent number.
Before Services: "I dunno, try 10.244.1.5, or maybe 10.244.2.8, one of those should work, unless they've been recreated since breakfast."
After Services: "Hit port 80 on web-app. It'll get there. Always."
Someone asked me how the Service knows which Pods are healthy. I explained label selectors and Endpoints. They asked why we don't just use IP addresses directly. I explained that Pod IPs change constantly.
They looked horrified. I said: "That's why we're having this conversation."
Three developers deleted their bookmark for http://10.244.1.5 and replaced it with http://web-app.production.svc.cluster.local. Progress. 🎩
Your Mission, Should You Choose to Accept It 🎯
- Create a Deployment with 3 replicas of `nginx:latest`
- Create a ClusterIP Service for it
- Prove it works by exec-ing into a test Pod and curling the Service DNS name
- Watch the Endpoints update in real time as you kill and replace Pods:

# Terminal 1: Watch Endpoints change
kubectl get endpoints web-app --watch
# Terminal 2: Kill a Pod
kubectl delete pod web-app-abc123
# See the IP disappear from Endpoints, then a new one appear!

- Bonus: Create a NodePort Service and access your application from outside the cluster (use `minikube service` or the node IP directly)
Next Time on "Welcome to Container Harbour" 🎬
In Episode 7, we tackle the Customs Office – the Ingress controller. Because having one LoadBalancer per Service is expensive, and having NodePorts everywhere is embarrassing. Ingress gives you ONE entrance point with intelligent routing: this path goes here, that domain goes there, TLS terminates here. One gate. Infinite destinations. 🌐
P.S. – When you call web-app.default.svc.cluster.local, that DNS name is resolved by **CoreDNS** – a small DNS server running inside your cluster as a Deployment. *Yes, there's a Kubernetes Deployment managing the DNS that makes your Services work. Kubernetes all the way down.* 🚢
🎯 Key Takeaways:
- Pod IPs are ephemeral. Services are stable. Always use Services for communication.
- Services find Pods via label selectors – no hardcoded IPs, ever.
- ClusterIP = internal only. Default. For Pod-to-Pod communication.
- NodePort = exposes on a port on every node. Works. Not pretty.
- LoadBalancer = gets a real public IP from your cloud provider. Costs money per service.
- ExternalName = DNS alias to an external resource. For cloud-managed services.
- Headless = no virtual IP, DNS returns Pod IPs directly. For StatefulSets and databases.
- `kubectl port-forward` = your development debugging tunnel. Not for production. Ever.
- kube-proxy does the actual network magic, invisibly, on every node. 💪