DEV Community

AttractivePenguin


Kubernetes ingress-nginx EOL: Your March 2026 Migration Playbook


The clock has run out. If you're running the community-maintained kubernetes/ingress-nginx controller in production, your cluster is now on borrowed time.

In March 2026, the kubernetes/ingress-nginx controller reached end-of-life. No more bug fixes. No more security patches. This isn't a drill—it's a reality that affects thousands of Kubernetes clusters worldwide.

In this guide, I'll walk you through exactly what this means, why it matters, and most importantly—how to migrate to a secure alternative without breaking your production workloads.


A Brief History: Why This Matters

The ingress-nginx project started as a Kubernetes community project and became the go-to solution for HTTP/HTTPS routing in Kubernetes clusters. For years, it was the recommended approach in the official Kubernetes documentation.

But the writing has been on the wall for a while:

  • Maintenance burden: The community struggled to keep up with Kubernetes' rapid release cycle
  • Security concerns: Unpatched vulnerabilities in ingress controllers are particularly dangerous since they handle external traffic
  • Cloud-native evolution: Cloud providers developed their own managed ingress solutions
  • Gateway API adoption: The Kubernetes networking community is pushing toward the more flexible Gateway API

The EOL announcement isn't surprising—it's the natural conclusion of a project that's served us well but needs to give way to more maintainable solutions.
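Since Gateway API is where the community is headed, it's worth seeing its shape early. Here is a hedged sketch of the Gateway API counterpart of a basic Ingress rule; the gateway name and hostname are placeholders:

```yaml
# HTTPRoute is the Gateway API counterpart of an Ingress rule
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
spec:
  parentRefs:
  - name: example-gateway        # a Gateway you have deployed (placeholder)
  hostnames:
  - "myapp.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: my-app-service
      port: 80
```

Routing rules attach to a shared Gateway rather than to a controller-specific annotation set, which is what makes the model more portable.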


The Situation: What Actually Changed

The ingress-nginx controller has been the de facto standard for ingress in Kubernetes for years. It's been downloaded millions of times, used by countless organizations, and recommended in tutorial after tutorial.

Here's what's ending:

  • New features — Gone
  • Bug fixes — Gone
  • Security patches — Gone (this is the big one)

What's NOT ending:

  • Your existing clusters won't magically stop working tomorrow
  • Managed Kubernetes services may offer extended support until November 2026 for critical patches
  • You have time to plan, but not time to waste

The risk? Running unpatched ingress controllers in production is a security liability. Vulnerabilities in ingress controllers can expose your entire cluster to attack.

Real-World Migration Example

Let me walk through an actual migration I helped a team with last month. They had:

  • 47 ingress resources across 12 namespaces
  • Multiple custom annotations for rate limiting, CORS, and redirects
  • About 200 services behind the ingress
  • 50GB of daily ingress traffic

The timeline:

  • Week 1: Audit and planning
  • Week 2: Staging environment migration
  • Week 3: Production parallel run (50% traffic)
  • Week 4: Full cutover and cleanup

The biggest challenge wasn't technical—it was inventorying all the custom annotations they were using. Some were so old nobody remembered why they existed.

Key learnings:

  1. Document EVERY annotation before you start
  2. Test with a non-critical service first
  3. Keep the old ingress running for 2 weeks post-migration
  4. Monitor both old and new ingress for the first 48 hours
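Key learning 1 can be scripted. Here is a sketch that inventories every ingress-nginx annotation in an exported manifest using only grep/sort/uniq; the sample file below stands in for what `kubectl get ingress --all-namespaces -o yaml` would give you on a real cluster:

```shell
# Sample export standing in for: kubectl get ingress --all-namespaces -o yaml
cat > all-ingress.yaml <<'EOF'
apiVersion: v1
kind: List
items:
- metadata:
    name: app-a
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "50m"
      nginx.ingress.kubernetes.io/enable-cors: "true"
- metadata:
    name: app-b
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "50m"
EOF

# List each distinct ingress-nginx annotation key with its usage count,
# most-used first: this is your migration checklist
grep -o 'nginx\.ingress\.kubernetes\.io/[a-z-]*' all-ingress.yaml \
  | sort | uniq -c | sort -rn
```

Every key this prints needs an equivalent (or a conscious decision to drop it) in the new controller before you cut over.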

Understanding Your Migration Options

You have four main paths forward. Let me break down each one:

Option 1: Migrate to AWS ALB/Nginx Plus

If you're on AWS, Application Load Balancer (ALB) with AWS Load Balancer Controller is a natural fit.

Pros:

  • Managed by AWS—no maintenance burden
  • Native AWS integration (IAM, WAF, CloudWatch)
  • Excellent performance at scale

Cons:

  • AWS-specific (not portable)
  • Cost can scale with traffic
  • Some nginx features require paid NGINX Plus

Migration complexity: Medium

# Install AWS Load Balancer Controller
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller \
  -n kube-system \
  eks/aws-load-balancer-controller \
  --set clusterName=your-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

# Convert your ingress to ALB; spec.ingressClassName replaces the
# deprecated kubernetes.io/ingress.class annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80

Option 2: Migrate to GKE Ingress (Google Cloud)

For Google Cloud users, GKE Ingress provides a managed solution.

Pros:

  • Fully managed by Google
  • Global external IP with anycast
  • Built-in Google Cloud Armor integration

Cons:

  • Google Cloud-specific
  • Requires GKE for full feature set

Migration complexity: Low (if already on GKE)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: gce
spec:
  defaultBackend:
    service:
      name: my-app-service
      port:
        number: 80

Option 3: Migrate to Azure AGIC

For Azure AKS users, the Application Gateway Ingress Controller (AGIC) is the recommended path.

Pros:

  • Azure-native solution
  • Web Application Firewall (WAF) built-in
  • Good for microservices

Cons:

  • Azure-specific
  • Requires Application Gateway v2 SKU

Migration complexity: Medium

# Enable AGIC via Helm
helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
helm install ingress-azure \
  -n kube-system \
  application-gateway-kubernetes-ingress/ingress-azure \
  --set appgw.subscriptionId=<subscription-id> \
  --set appgw.resourceGroup=<rg-name> \
  --set appgw.name=<appgw-name> \
  --set rbac.enabled=true

Option 4: Migrate to Traefik, Kong, or Cilium Mesh

For cloud-agnostic solutions, these alternatives work anywhere Kubernetes runs.

Traefik (Recommended for Most)

# Install Traefik via Helm
helm repo add traefik https://traefik.github.io/charts
helm install traefik traefik/traefik \
  -n ingress \
  --create-namespace

# Your existing ingress just needs its ingress class switched
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: traefik
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80

Pros:

  • Works on any Kubernetes cluster
  • Active community, frequent updates
  • Built-in Let’s Encrypt support
  • Rich middleware (rate limiting, circuit breakers)

Cons:

  • Self-hosted (you manage the pod)
  • Slightly different configuration model

Kong Ingress Controller

# Install Kong
helm repo add kong https://charts.konghq.com
helm install kong kong/ingress -n kong --create-namespace

Pros:

  • Enterprise-grade plugins
  • Excellent for API gateways
  • Service mesh integration

Cons:

  • More resource-intensive
  • Learning curve for Kong-specific features
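To give a feel for Kong's plugin model, here is a sketch of its rate-limiting plugin as a KongPlugin resource; the resource name and limits are illustrative:

```yaml
# Limit clients to 60 requests/minute, counted locally per Kong node
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit
plugin: rate-limiting
config:
  minute: 60
  policy: local
```

You attach it to an Ingress with the konghq.com/plugins: rate-limit annotation; this annotation-plus-CRD split is the pattern for all of Kong's plugins.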

Cilium Ingress (eBPF-powered)

# Install Cilium with its ingress controller enabled
cilium install --version 1.16.0 --set ingressController.enabled=true

# Use Cilium's built-in ingress via its ingress class
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: cilium

Pros:

  • eBPF-powered (extremely fast)
  • Built-in network policies
  • No sidecar required

Cons:

  • Requires specific kernel versions
  • Steeper learning curve

Migration Strategy: Step by Step

Here's a battle-tested approach that minimizes downtime:

Phase 1: Audit (Before You Touch Anything)

# List all ingress resources
kubectl get ingress --all-namespaces -o wide

# Export current configuration
kubectl get ingress -n <namespace> -o yaml > ingress-backup.yaml

# Check your current ingress-nginx version
kubectl get pods -n ingress-nginx -o jsonpath='{.items[0].spec.containers[0].image}'

Phase 2: Deploy Parallel Ingress

Deploy your new ingress controller alongside the existing one. Use a different namespace and ingress class.

# Example: Running both ingress-nginx and Traefik
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-v2
spec:
  ingressClassName: traefik  # Different class from the old controller
  rules:
  - host: myapp-staging.example.com  # Test domain first
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80

Phase 3: Test Thoroughly

  1. Unit tests: Verify your application's routing works
  2. Integration tests: Test with staging traffic
  3. Load testing: Ensure new ingress handles traffic volume
  4. TLS verification: Check certificates work correctly
  5. Webhooks/API access: Ensure all endpoints route properly
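The routing checks above can be scripted. Here is a sketch that requests the same paths through both controllers and flags status-code differences; the IPs, hostname, and paths are placeholders, and nothing is probed unless you set RUN_PARITY=1:

```shell
OLD_IP="${OLD_IP:-203.0.113.10}"   # placeholder: old ingress-nginx LB IP
NEW_IP="${NEW_IP:-203.0.113.20}"   # placeholder: new controller LB IP
HOST="${HOST:-myapp.example.com}"

# HTTP status for a path via a given ingress IP (Host-header routing)
status_for() {
  curl -s -o /dev/null --max-time 5 -w '%{http_code}' \
    -H "Host: $HOST" "http://$1$2"
}

# Compare old vs new controller for one path and print OK or DIFF
check_parity() {
  old=$(status_for "$OLD_IP" "$1")
  new=$(status_for "$NEW_IP" "$1")
  if [ "$old" = "$new" ]; then
    echo "OK   $1 ($old)"
  else
    echo "DIFF $1 old=$old new=$new"
  fi
}

# Set RUN_PARITY=1 to actually probe your endpoints
if [ "${RUN_PARITY:-0}" = "1" ]; then
  for path in / /healthz /api/v1/status; do
    check_parity "$path"
  done
fi
```

Run it against a list of every host/path pair from your audit, not just the happy-path routes.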

Phase 4: Cut Over

# Option A: Blue-green cutover
# Switch DNS to new ingress IP/endpoint

# Option B: Gradual rollout
# Use weighted DNS or service mesh canary routing

Phase 5: Clean Up

# Remove old ingress-nginx resources
kubectl delete namespace ingress-nginx

# Remove old ingress resources
kubectl delete ingress <old-ingress-name> -n <namespace>

Common Pitfalls and How to Avoid Them

Pitfall 1: Forgetting Custom Annotations

ingress-nginx has many custom annotations that other controllers may not support.

Solution: Check the documentation for equivalent annotations in your new controller.

# ingress-nginx
annotations:
  nginx.ingress.kubernetes.io/proxy-body-size: "50m"
  nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"

# Traefik has no one-to-one equivalents for these: routing options are
# router annotations, while limits and timeouts move into Middleware CRDs,
# referenced as <namespace>-<middleware-name>@kubernetescrd
annotations:
  traefik.ingress.kubernetes.io/router.entrypoints: web
  traefik.ingress.kubernetes.io/router.middlewares: staging-stripprefix@kubernetescrd
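For the proxy-body-size case specifically, Traefik's closest equivalent is a buffering Middleware. A sketch, with an illustrative middleware name and namespace:

```yaml
# Cap request bodies at roughly the same size as proxy-body-size: "50m"
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: limit-body
  namespace: default
spec:
  buffering:
    maxRequestBodyBytes: 52428800   # 50 MiB
```

You would then reference it from the Ingress with traefik.ingress.kubernetes.io/router.middlewares: default-limit-body@kubernetescrd.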

Pitfall 2: SSL/TLS Certificate Management

Different controllers handle certificates differently.

Solution: Use cert-manager consistently across migrations:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-tls
spec:
  secretName: myapp-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - myapp.example.com

Pitfall 3: Missing Health Checks

New ingress controllers may have different default health check configurations.

Solution: Explicitly configure health checks:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
  # Most controllers derive backend health from your pods' readinessProbes,
  # so make sure those are defined; check your new controller's docs for
  # any explicit health-check annotations it supports
spec:
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    app: my-app

Pitfall 4: Not Testing WebSocket/gRPC

WebSocket and gRPC traffic often fails silently if not configured correctly.

Solution: Explicitly configure for your protocol:

# gRPC example for Traefik
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-grpc-service
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  tls:
  - hosts:
    - grpc.example.com
  rules:
  - host: grpc.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-grpc-service
            port:
              number: 50051

Rollback Strategy

Always have a rollback plan:

# Keep old ingress-nginx pods running but scaled to 0
kubectl scale deployment ingress-nginx-controller -n ingress-nginx --replicas=0

# If issues arise, scale back up immediately
kubectl scale deployment ingress-nginx-controller -n ingress-nginx --replicas=1

For critical production systems:

  1. Run parallel for 24-48 hours
  2. Monitor error rates, latency, and traffic patterns
  3. Have an on-call engineer ready to rollback
  4. Document the exact rollback command (test it!)
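Point 4 is worth making concrete. A sketch of the rollback wrapped in a function so it can be rehearsed ahead of time and run under pressure without typos; the deployment and namespace names assume a standard ingress-nginx install:

```shell
# Scale the old ingress-nginx controller back up and wait until it is ready.
# Usage: rollback_to_nginx [replica-count]   (defaults to 2)
rollback_to_nginx() {
  replicas="${1:-2}"
  kubectl scale deployment ingress-nginx-controller \
    -n ingress-nginx --replicas="$replicas"
  kubectl rollout status deployment/ingress-nginx-controller \
    -n ingress-nginx --timeout=120s
}
```

Rehearse it in staging so the rollout-status wait and replica count are known-good before you need them at 3 a.m.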

Decision Matrix: Which Path Should You Choose?

  • On AWS EKS: AWS ALB
  • On GKE: GKE Ingress
  • On AKS: Azure AGIC
  • Multi-cloud / on-prem: Traefik or Kong
  • Need max performance: Cilium Ingress
  • Already using a service mesh: Istio / Gateway API

What If You Can't Migrate Right Now?

If you're stuck and can't migrate immediately:

  1. Isolate the ingress-nginx pod with network policies
  2. Enable pod security policies to limit blast radius
  3. Monitor for CVEs in ingress-nginx actively
  4. Set a hard migration deadline (ideally within 30 days)
  5. Inform your security team about the risk
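For step 1, here is a sketch of an egress-restricting NetworkPolicy that limits the blast radius of a compromised controller pod; the served namespace label is illustrative:

```yaml
# Only let the unpatched controller reach the namespaces it actually serves
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-ingress-nginx-egress
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
  - Egress
  egress:
  # Traffic to the backend namespace(s)
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: web
  # DNS lookups
  - ports:
    - protocol: UDP
      port: 53
```

This does not fix the underlying vulnerability; it only narrows what an attacker can reach through it.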

Conclusion

The ingress-nginx EOL is a significant event, but it's manageable with the right approach. The key actions:

  1. Audit your current ingress resources
  2. Choose your target controller based on your infrastructure
  3. Test thoroughly in a non-production environment
  4. Migrate with zero-downtime strategy
  5. Clean up old resources after verification

Your clusters won't stop working tomorrow, but running unpatched ingress controllers in production is a risk not worth taking. The migration paths above are well-tested—pick the one that fits your infrastructure and get moving.


FAQ

Q: How long do I have?
A: Managed services may offer support until November 2026, but don't wait. Start migration planning immediately.

Q: Will my services still work after the EOL?
A: Yes, they'll continue running, but you'll receive no security patches for critical vulnerabilities.

Q: Can I use the ingress-nginx fork?
A: There are community forks, but they don't have the same security maintenance guarantees. Not recommended for production.

Q: What's the easiest migration path?
A: If you're on a major cloud provider, use their native ingress (ALB/GKE/AGIC). If you're on-prem or multi-cloud, Traefik is the easiest drop-in replacement.

Q: Will I need to change my application code?
A: Generally no—only ingress resource configuration changes. Your backend services remain unchanged.

Q: How long does migration take?
A: For simple setups, 2-4 hours. For complex multi-team environments, 1-2 weeks including testing.

Q: My ingress uses custom templates. Will those work?
A: Custom templates are ingress-nginx specific. You'll need to rewrite these as middleware or alternative configurations in your new controller.

Q: Can I run both controllers temporarily?
A: Yes, this is the recommended approach. Each controller needs its own IngressClass and preferably different ingress class names.
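Concretely, each controller gets its own IngressClass resource. A sketch for Traefik, using its documented controller string:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik
spec:
  controller: traefik.io/ingress-controller
```

Ingresses then pick a controller via spec.ingressClassName, so both controllers can coexist without fighting over the same resources.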

Q: What about Web Application Firewall (WAF)?
A: Options vary by platform:

  • AWS: Use WAFv2 with ALB
  • Azure: AGIC has built-in WAF
  • GKE: Use Cloud Armor
  • Self-hosted: Consider ModSecurity or cloud WAF

Q: How do I handle sticky sessions/session affinity?
A: Most controllers support sticky sessions. For ingress-nginx, you likely used nginx.ingress.kubernetes.io/affinity-mode or nginx.ingress.kubernetes.io/affinity. Check your new controller's equivalent.
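As a sketch of the mapping for Traefik (the cookie name is illustrative):

```yaml
# ingress-nginx: cookie-based affinity on the Ingress
annotations:
  nginx.ingress.kubernetes.io/affinity: "cookie"
  nginx.ingress.kubernetes.io/session-cookie-name: "route"

# Traefik: sticky cookie configured on the backing Service
annotations:
  traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
  traefik.ingress.kubernetes.io/service.sticky.cookie.name: "route"
```

Note the scope difference: ingress-nginx configures affinity on the Ingress, while Traefik configures it on the Service, so the annotation moves to a different resource during migration.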

Q: What about path-based routing differences?
A: ingress-nginx is lenient with path matching. Other controllers may be stricter about pathType. You may need to adjust paths from ImplementationSpecific to Prefix or Exact.


Troubleshooting Common Issues

Issue: 404 Errors After Migration

Symptoms: All routes return 404 after switching to new ingress.

Diagnosis:

# Check if IngressClass is correct
kubectl get ingress <name> -o jsonpath='{.spec.ingressClassName}'

# Verify the controller is watching the namespace
kubectl get pods -n <ingress-namespace>

Solution: Ensure the ingress class name matches your new controller's configuration.

Issue: TLS Certificates Not Working

Symptoms: HTTPS routes fail with certificate errors.

Diagnosis:

# Check cert-manager certificates
kubectl get certificates -n <namespace>

# Check secret exists
kubectl get secret <tls-secret> -n <namespace>

Solution: If using cert-manager, ensure ClusterIssuer is accessible. Some cloud controllers manage their own certificates.

Issue: Pods Can't Reach External Services

Symptoms: Backend services can't make outbound requests through ingress.

Diagnosis:

# Check pod network policies
kubectl get networkpolicies -n <namespace>

# Verify egress configuration
kubectl describe pod <pod-name> -n <namespace>

Solution: New ingress controllers may have different network requirements. Check whether your NetworkPolicies and any controller-level egress allow-lists account for the new controller's namespace and pod labels.
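One common case: an app namespace that only allowed traffic from the old ingress-nginx namespace will silently drop traffic from the new controller. A sketch of the updated allow rule, with illustrative namespace names and labels:

```yaml
# Allow the new controller's namespace to reach the app pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-new-ingress
  namespace: web                       # illustrative app namespace
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress   # new controller's namespace
```

Keep the old controller's allow rule in place until cutover is complete, then remove it.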

Issue: Slow Response Times

Symptoms: Latency increased after migration.

Diagnosis:

# Check resource limits
kubectl top pods -n <ingress-namespace>

# Check for connection pooling
kubectl describe ingress <name>

Solution: New controllers may have different default connection pooling. Adjust backend configuration or add connection pool annotations.

Issue: Custom NGINX Config Not Working

Symptoms: Custom nginx.conf snippets no longer apply.

Solution: You'll need to convert custom nginx directives to your new controller's middleware or ConfigMap approach. Each controller has its own method:

# Traefik - use Middleware CRD (apiVersion traefik.io/v1alpha1 on current
# releases; older installs use traefik.containo.us/v1alpha1)
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: strip-prefix
spec:
  stripPrefix:
    prefixes:
    - /api/v1

# Kong - use KongPlugin (here, the request-termination plugin)
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: block-legacy
plugin: request-termination
config:
  status_code: 410
  message: "This endpoint has been retired"

Have questions or run into issues? Drop a comment below—I'll help you debug.
