Yash Londhe for RubixKube


Kubernetes Ingress Controllers: Routing Traffic Made Simple

Imagine you run an online store hosted on Kubernetes. Your store has multiple services: one for products, another for payments, and another for user accounts. How do you ensure that when a customer visits yourstore.com/products, their request reaches the correct backend service? This is where Ingress Controllers come into play.

Kubernetes makes deploying applications easy, but handling external traffic is tricky. Services inside a Kubernetes cluster do not have public IPs by default, so routing customer requests correctly requires additional configuration. Ingress is the solution that helps manage this traffic efficiently, making routing simple and scalable.

In this blog, we’ll explore how Ingress and Ingress Controllers work, why they matter, and how to set up the Nginx Ingress Controller in a Kubernetes cluster.

What is an Ingress in Kubernetes?

In simple terms, an Ingress is like the receptionist of a large office. When a visitor arrives, the receptionist directs them to the correct department. Similarly, Ingress in Kubernetes ensures that incoming requests reach the right service inside the cluster.

Ingress is a Kubernetes resource that manages HTTP/HTTPS traffic to services running inside a cluster. It provides features like:

  • Host-based routing: Directing requests based on the domain (e.g., shop.com vs. blog.com).
  • Path-based routing: Sending traffic to different services based on URL paths (e.g., /products to a product service and /cart to a cart service).
  • TLS termination: Handling SSL certificates to secure communication.

Without Ingress, you’d have to expose every service using a separate LoadBalancer or NodePort, which is inefficient and costly. Ingress simplifies this by consolidating routing into a single resource.
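For example, a single Ingress can fan traffic out by hostname. The manifest below is a minimal illustrative sketch; shop.com, blog.com, shop-service, and blog-service are placeholder names, not resources created later in this post:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-routing-example
spec:
  rules:
  - host: shop.com              # requests for shop.com go to the shop backend
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-service
            port:
              number: 80
  - host: blog.com              # requests for blog.com go to the blog backend
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog-service
            port:
              number: 80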

What is an Ingress Controller?

If Ingress is the receptionist, the Ingress Controller is the manager that ensures visitors get the right service. It’s the component that actually enforces the routing rules defined in the Ingress resource.

Ingress Controllers work by:

  • Watching for Ingress resources in the cluster.
  • Configuring underlying proxies (like Nginx) to route traffic accordingly.
  • Handling SSL termination, load balancing, and request filtering.

There are several popular Ingress Controllers, each suited for different needs:

  • Nginx Ingress Controller (Most commonly used, good for general traffic management)
  • Traefik (Lightweight and dynamic routing, great for microservices)
  • HAProxy Ingress (High performance, optimized for large-scale workloads)
  • AWS ALB Ingress Controller (Best for AWS environments)

The choice depends on your infrastructure and specific requirements.

How Does It Work? A Real-World Example

Let’s say you’re running an online bookstore with two services: book-service and author-service.

You want:

  • bookstore.com/books to go to book-service
  • bookstore.com/authors to go to author-service

Here’s how an Ingress Controller handles this:

  1. A customer types bookstore.com/books in their browser.
  2. The request reaches the Ingress Controller (e.g., Nginx).
  3. The Ingress Controller checks the Ingress rules.
  4. It routes the request to book-service inside the Kubernetes cluster.
  5. The response is sent back to the customer.

This routing ensures that customers seamlessly access different services without needing multiple public IP addresses.
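Expressed as an Ingress, that routing table is just two path rules under one host. Here’s a minimal sketch, assuming book-service and author-service exist as ClusterIP Services listening on port 80:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bookstore-ingress
spec:
  rules:
  - host: bookstore.com
    http:
      paths:
      - path: /books            # bookstore.com/books -> book-service
        pathType: Prefix
        backend:
          service:
            name: book-service
            port:
              number: 80
      - path: /authors          # bookstore.com/authors -> author-service
        pathType: Prefix
        backend:
          service:
            name: author-service
            port:
              number: 80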

Setting Up an Nginx Ingress Controller

Let’s walk through deploying an Nginx Ingress Controller step by step.

Step 1: Prerequisites

You need:

  • A running Kubernetes cluster (Minikube, GKE, EKS, etc.).
  • kubectl installed and configured.

Step 2: Install the Nginx Ingress Controller

Run the following command to install the Nginx Ingress Controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

Verify the installation:

kubectl get pods -n ingress-nginx

If the controller is running, you’re good to go.
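The controller pod can take a minute or two to become ready, depending on your cluster. One way to wait for it, using the label that the official ingress-nginx manifests put on the controller pod, is:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s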

Step 3: Deploy a Sample Application

We’ll create a simple Hello World service.

Save the following YAML as hello-world.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: hashicorp/http-echo
        args:
        - "-text=Hello, Kubernetes!"
        ports:
        - containerPort: 5678  # hashicorp/http-echo listens on port 5678 by default
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app: hello-world
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678  # match the http-echo container port

Apply it:

kubectl apply -f hello-world.yaml
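Before creating the Ingress, it’s worth checking that the Deployment and Service came up and that the Service actually has endpoints (the names below match the manifest above):

kubectl get pods -l app=hello-world
kubectl get svc hello-world-service
kubectl get endpoints hello-world-service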

Step 4: Create an Ingress Resource

Now, define an Ingress resource to route traffic.

Save this as ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx  # use the Nginx Ingress Controller installed above
  rules:
  - host: bookstore.com
    http:
      paths:
      - path: /books
        pathType: Prefix
        backend:
          service:
            name: hello-world-service
            port:
              number: 80

Apply it:

kubectl apply -f ingress.yaml
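You can confirm that the Ingress was created and picked up by the controller; the ADDRESS column may stay empty for a short while until the controller syncs:

kubectl get ingress example-ingress
kubectl describe ingress example-ingress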

Step 5: Test the Setup

  1. Find the external IP of the Ingress Controller:

kubectl get svc -n ingress-nginx

  2. Edit /etc/hosts to map bookstore.com to that external IP.
  3. Open http://bookstore.com/books in a browser. You should see “Hello, Kubernetes!”.
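If editing /etc/hosts is inconvenient (for example, when testing from a CI machine), you can exercise the same route with curl by setting the Host header yourself; replace <EXTERNAL-IP> with the address from step 1:

curl -H "Host: bookstore.com" http://<EXTERNAL-IP>/books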

Advanced Features and Best Practices

Advanced Features

  • HTTPS/SSL Termination: Use Let’s Encrypt with cert-manager to auto-generate free SSL certificates.
  • Rate Limiting: Protect your API from abuse by adding limits (e.g., 100 requests/minute per user).
  • Canary Deployments: Route 5% of traffic to a new app version to test it before a full rollout (see the annotation sketch below).
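As a concrete taste of the canary feature, the Nginx Ingress Controller drives canaries through annotations on a second Ingress that points at the new version. The sketch below assumes a hypothetical hello-world-service-v2 Service for the new release and shifts roughly 5% of matching traffic to it:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"       # mark this Ingress as a canary
    nginx.ingress.kubernetes.io/canary-weight: "5"   # send ~5% of traffic to it
spec:
  ingressClassName: nginx
  rules:
  - host: bookstore.com
    http:
      paths:
      - path: /books
        pathType: Prefix
        backend:
          service:
            name: hello-world-service-v2   # hypothetical v2 Service
            port:
              number: 80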

Best Practices

  • Use Namespaces: Keep Ingress resources organized.
  • Monitor Traffic: Use tools like Prometheus & Grafana for insights.
  • Secure Ingress: Enforce authentication and HTTPS wherever possible.

Conclusion

Ingress Controllers make routing traffic in Kubernetes easy, cost-effective, and scalable. The Nginx Ingress Controller is one of the most popular choices due to its simplicity and powerful features.

Now that you understand the basics, try deploying your own Ingress Controller and experiment with different configurations.

Next steps:

  • Explore cert-manager for automated TLS certificates.
  • Try Traefik for a more lightweight option.
