Moving beyond NodePort and LoadBalancers to smarter traffic routing.
Today’s exploration into Kubernetes networking took me a step deeper than the standard Service types. After getting comfortable with ClusterIP, NodePort, and LoadBalancer, I ran into a logical question: What happens when I have 50 microservices? Do I really want to pay for 50 Cloud Load Balancers or open 50 random ports on my nodes?
The answer, thankfully, is no. That is where Ingress and Ingress Controllers come in. Here is what I learned today about how they work and why they are essential for production clusters.
The Problem: Why Standard Services Aren't Enough
Before understanding Ingress, I had to understand the limitations of the other methods:
NodePort: It opens a specific port on every Node in the cluster. It's messy to manage, has security implications, and you are limited to a specific port range (30000-32767). It’s fine for testing, but bad for production.
LoadBalancer: This creates a distinct external IP address (usually a cloud load balancer from AWS, GCP, or Azure) for each service. If you have 20 microservices, that’s 20 separate bills for 20 load balancers. It’s expensive and inefficient.
The Solution: Ingress
I learned that Ingress is essentially a smart router for your cluster. Instead of exposing every service directly to the internet, you expose one entry point, and that entry point decides where the traffic goes based on rules you define.
Think of it like an office building:
NodePort is like giving everyone their own key to a side door.
LoadBalancer is like building a separate main entrance for every single department.
Ingress is having one main reception desk. You walk in, tell the receptionist who you are looking for ("I need the Billing Department"), and they direct you to the right room.
In technical terms, Ingress allows you to do Path-Based Routing or Host-Based Routing.
example.com/api -> Routes to the Backend Service
example.com/shop -> Routes to the Frontend Service
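Those two routing rules can be written as a single Ingress resource. Here is a minimal sketch (the service names `backend-service` and `frontend-service`, the ports, and the `nginx` class are my own assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx        # which Ingress Controller should enforce these rules
  rules:
  - host: example.com            # host-based routing: rules apply to this domain
    http:
      paths:
      - path: /api               # path-based routing: example.com/api
        pathType: Prefix
        backend:
          service:
            name: backend-service   # assumed ClusterIP Service name
            port:
              number: 80
      - path: /shop              # path-based routing: example.com/shop
        pathType: Prefix
        backend:
          service:
            name: frontend-service  # assumed ClusterIP Service name
            port:
              number: 80
```

Note that the backends are plain ClusterIP Services: the Ingress is the only thing exposed to the outside world.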
The Missing Piece: The Ingress Controller
Here was the "aha!" moment for me today: Ingress by itself does nothing.
If you create an Ingress resource (the YAML file), it’s just a piece of paper with rules on it. It’s a configuration request. For those rules to actually work, you need an implementation. This is called the Ingress Controller.
The Ingress Controller is the actual software (a Pod) running in your cluster that reads your Ingress rules and processes the traffic.
Ingress = The Rules (The Map)
Ingress Controller = The Enforcer (The Traffic Cop)
The most popular controller is the NGINX Ingress Controller, but there are others like Traefik, HAProxy, and Istio (via its ingress gateway).
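The link between the rules and the enforcer is the IngressClass. Each controller registers a class, and your Ingress resources point at it via `spec.ingressClassName`. As a sketch, this is roughly what the ingress-nginx controller's class looks like (installing the controller itself typically creates this for you):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx                      # the name Ingress resources reference
spec:
  controller: k8s.io/ingress-nginx # identifies which controller software handles this class
```

This is how a cluster can even run multiple controllers side by side: each Ingress picks its traffic cop by class name.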
Why They Are Needed (Summary)
Cost Efficiency: You only pay for one Cloud Load Balancer (which sits in front of the Ingress Controller) regardless of how many services you have inside.
Clean URLs: You can route traffic based on domains (app.com, api.app.com) or paths (/app, /login) rather than raw node addresses and ports like 192.168.1.5:32044.
SSL/TLS Termination: You can manage your security certificates in one place (the Ingress) rather than configuring SSL on every single microservice application.
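TLS termination is just one extra block on the Ingress spec. A hedged sketch, assuming a TLS Secret named `example-com-tls` (containing the certificate and key) already exists in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress-tls
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-com-tls    # assumed kubernetes.io/tls Secret with cert + key
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service # assumed ClusterIP Service name
            port:
              number: 80
```

The controller terminates HTTPS at the edge and forwards plain HTTP to the Pods, so none of the microservices need to know about certificates at all.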
Learning about Ingress feels like graduating from "making things work" to "making things scalable." It separates the routing logic from the application logic and saves massive amounts of cloud resources.
LinkedIn: https://www.linkedin.com/in/dasari-jayanth-b32ab9367/