In March 2026, the community-maintained kubernetes/ingress-nginx controller will reach End-of-Life (EOL). While your existing clusters will not break overnight and the core Kubernetes Ingress API remains fully supported, the controller's repository will become read-only. This means no new features and, critically, no future CVE patches. Engineering teams must plan their migrations now to avoid compounding security risks and compliance violations.
It’s common to leave functioning infrastructure untouched, but running deprecated software in your cluster’s critical path introduces unnecessary risk. With a public and immovable deadline, engineering teams have a clear window to plan a structured migration rather than reacting hastily to a future vulnerability.
What’s Actually Happening with ingress-nginx March 2026 Retirement
Let’s be crystal clear on terminology, because confusion here is costing teams weeks.
The Kubernetes Ingress API resource remains GA, feature-frozen, and fully supported. Nothing changes there. What is retiring is the community-maintained ingress-nginx controller (kubernetes/ingress-nginx).
From the official November 11, 2025 Kubernetes blog post by Tabitha Sable (Security Response Committee):
“Best-effort maintenance will continue until March 2026. Afterward, there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered. Existing deployments of Ingress NGINX will continue to function and installation artifacts will remain available.”
The January 29, 2026 joint Steering + SRC statement by Kat Cosgrove made the urgency unmistakable:
“In March 2026, Kubernetes will retire Ingress NGINX, a piece of critical infrastructure for about half of cloud native environments… Half of you will be affected. You have two months left to prepare… We cannot overstate the severity of this situation or the importance of beginning migration… immediately.”
Post-March, the repositories move to the kubernetes-retired organization and become read-only. That’s the end of the road.
Note for managed Kubernetes users: Services like Azure AKS Application Routing or certain GKE ingress configurations have vendor-extended support until November 2026 for critical patches. Always check your cloud provider’s SLA before assuming you’re immediately exposed.
Why ingress-nginx EOL Is a Much Bigger Deal Than Most Teams Realize
- Security & Compliance: The CVE-2025-1974 vulnerability in March 2025 (an unauthenticated RCE via exposed admission webhooks) demonstrated the potential blast radius of controller flaws. After March 2026, the absence of patches means any newly discovered vulnerabilities will remain unmitigated.
- Audit Blockers: "EOL software in the L7 data path" triggers automatic findings in SOC 2, PCI-DSS, ISO 27001, and HIPAA. Compliance teams are already flagging this, blocking production promotions, and delaying customer deployments.
- Talent Attrition: The strongest SREs and platform engineers explicitly ask in interviews: “What’s your long-term ingress strategy?” No one wants to own an abandoned controller in 2027 while peers ship Gateway API features.
- Compounding Operational Debt: Every new service adds another Ingress object tied to dead code. Migration effort scales linearly with ingress count, but exponentially with annotation sprawl.
How to Check if You’re Affected
Run these commands today. Any positive result means you’re affected—and you need to start planning your replacement project now.
Standard controller & Helm detection:
# Standard controller detection
kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx
# Helm releases
helm list -A --filter ingress-nginx
# Any deployment method
kubectl get deployments,daemonsets --all-namespaces -l app.kubernetes.io/name=ingress-nginx
Checking for the IngressNightmare signature (CVE-2025-1974):
Thousands of clusters still have this lingering signature. Check admission webhook TLS certs for Issuer Organization = “nil1”, Subject Organization = “nil2”, or DNS names containing ingress-nginx-controller-admission or nginx.
One-liner inside the cluster:
kubectl get secret -n ingress-nginx ingress-nginx-admission -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout | grep -E "Issuer:|Subject:|DNS:"
External scanners (Censys, Shodan, etc.) still surface matches with:
tls.certificate.parsed.issuer.organization:"nil1" AND tls.certificate.parsed.subject.organization:"nil2" AND tls.certificate.parsed.names:nginx
Also inspect the webhook config:
kubectl get validatingwebhookconfigurations ingress-nginx-admission -o yaml
If you see any of the above, you now have a documented production risk with a hard deadline.
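The matching logic behind the cert checks above can be sketched in Python. This is an illustrative helper, not official tooling; it takes the issuer/subject organizations and SANs you would read out of the `openssl x509 -text` output from the one-liner earlier and reports whether the known admission-webhook signature is present:

```python
# Sketch: flag the ingress-nginx admission-cert signature described above.
# Field values mirror what `openssl x509 -text` reports; this is a
# hypothetical helper for illustration, not an official scanner.

def has_ingress_nginx_signature(issuer_org, subject_org, dns_names):
    """Return True if the cert matches the known admission-webhook signature."""
    # The well-known placeholder organizations from the default admission cert.
    if issuer_org == "nil1" or subject_org == "nil2":
        return True
    # Fall back to the SAN heuristic: controller admission DNS names.
    return any(
        "ingress-nginx-controller-admission" in name or "nginx" in name
        for name in dns_names
    )

# Example: values extracted with the kubectl/openssl one-liner above.
print(has_ingress_nginx_signature(
    "nil1", "nil2", ["ingress-nginx-controller-admission.ingress-nginx.svc"]
))
```

External scanners apply essentially the same predicate against certs they have already indexed, which is why the signature lingers even after a controller is removed.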
Migration Options Ranked by Realism for Production Teams (February 2026)
The ecosystem has shifted heavily toward the Gateway API. Here is a breakdown of the most viable targets based on your infrastructure and long-term strategy:
| Option | Migration Effort (1-5) | Annotation Compatibility | Gateway API Readiness | Commercial Support | Zero-Downtime Feasibility | Best For |
|---|---|---|---|---|---|---|
| Gateway API (any implementation) | 4 | None (full translation) | Native v1.2+ | Varies | High (dual-run) | Future-proof teams |
| Traefik (nginx-ingress provider) | 1-2 | ~80% of common nginx.* | Full v1.4 | OSS + Enterprise | Very high | Brownfield with heavy annotations |
| F5 / NGINX Inc Controller | 2-3 | High (official NGINX) | Growing | Strong (F5) | High | NGINX veterans |
| Cilium (eBPF Gateway) | 3 | Low | Native | OSS + Enterprise | High | Existing Cilium users |
| Envoy Gateway | 3-4 | Low | Native | OSS | High | Modern Envoy stacks |
| HAProxy Ingress | 3 | Medium | Partial | OSS + Enterprise | High | HAProxy preference |
| Kong | 3 | Medium | Native | OSS + Enterprise | High | API-gateway heavy workloads |
Special callout for brownfield teams: Traefik’s experimental `nginx-ingress` provider stands out as the brownfield hero. It reuses ~80% of your existing `nginx.ingress.kubernetes.io/*` annotations out of the box, letting you keep most of your Ingress objects unchanged while immediately getting full Gateway API underneath. In every migration I’ve seen with 50–300 ingresses, this cut the effort almost in half.
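When sizing that remaining ~20%, it helps to split your annotations into "auto-translated" and "needs manual work" buckets. A minimal sketch, assuming a hypothetical supported set — check your target controller's documentation for the real coverage list:

```python
# Sketch: estimate how much annotation work a migration target saves.
# SUPPORTED is a hypothetical example set, NOT Traefik's actual coverage --
# consult the target controller's docs for the authoritative list.

SUPPORTED = {
    "nginx.ingress.kubernetes.io/rewrite-target",
    "nginx.ingress.kubernetes.io/ssl-redirect",
    "nginx.ingress.kubernetes.io/proxy-body-size",
}

def audit(ingresses):
    """Split nginx.* annotations across all Ingresses into supported/manual sets."""
    supported, manual = set(), set()
    for annotations in ingresses:
        for key in annotations:
            if not key.startswith("nginx.ingress.kubernetes.io/"):
                continue
            (supported if key in SUPPORTED else manual).add(key)
    return supported, manual

# Illustrative inventory, shaped like .metadata.annotations from each Ingress.
inventory = [
    {"nginx.ingress.kubernetes.io/rewrite-target": "/",
     "nginx.ingress.kubernetes.io/auth-url": "https://auth.example.com"},
    {"nginx.ingress.kubernetes.io/ssl-redirect": "true"},
]
ok, todo = audit(inventory)
print(f"{len(ok)} auto-translated, {len(todo)} need manual work")
```

Feeding this the real annotation dump from your inventory step gives you a defensible effort estimate instead of a guess.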
As of February 2026, Gateway API v1.2+ is battle-tested. 2026 is actually the perfect time to migrate—the ecosystem has stabilized, the documentation is excellent, and the talent that already knows Gateway API is still rare and highly sought after.
Practical Migration Playbook (Zero-Fluff, Production-Tested)
- Inventory (Day 1): Audit your current state.
kubectl get ingress -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.metadata.annotations}{"\n"}{end}'
- Convert automatically: Utilize translation tools to kickstart the process.
ingress2gateway print --providers=nginx --input-file=ingresses.yaml
- Deploy side-by-side: Deploy the new controller using a different `ingressClassName`.
- Validate rigorously: Reuse your existing test suite, utilize traffic shadowing/mirroring, and use `kubeconform` plus custom Rego policies for annotation translation.
- Cutover safely (with a rollback path): Use blue/green deployments via Gateway HTTPRoute weights. Isolate the blast radius by updating namespace-by-namespace. Define explicit rollback triggers (e.g., 5xx error rate > 1% or latency > 200ms) and pre-document the exact `kubectl apply` commands needed to instantly revert.
- Post-Cutover Observation: Maintain a 72-hour monitoring window to catch delayed edge cases like renewed cert-manager failures or long-lived connection drops.
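The cutover step above reduces to a simple control loop: scrape metrics, check the rollback triggers, and either ramp the new controller's HTTPRoute weight or drop it to zero. A sketch with the playbook's own thresholds (the metric names and step size are illustrative assumptions):

```python
# Sketch: the rollback gate from the playbook above, evaluated against
# metrics you would scrape from your monitoring stack during cutover.
# Thresholds are the ones named in the playbook; metric names and the
# 10-point ramp step are illustrative choices.

def should_roll_back(error_rate_5xx, p99_latency_ms):
    """Trip the rollback if either playbook threshold is breached."""
    return error_rate_5xx > 0.01 or p99_latency_ms > 200

def next_canary_weight(current, healthy, step=10):
    """Ramp the new controller's HTTPRoute weight; drop to 0 on failure."""
    return min(current + step, 100) if healthy else 0

# One iteration: metrics look healthy, so the canary weight advances.
weight = 10
healthy = not should_roll_back(error_rate_5xx=0.002, p99_latency_ms=145)
weight = next_canary_weight(weight, healthy)
print(weight)  # 20
```

In practice you would apply each new weight as an HTTPRoute `backendRefs` update (the pre-documented `kubectl apply`), which is what makes the revert instantaneous.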
Estimated timelines (add a 20–30% buffer for true zero-downtime):
- Small (<50 ingresses): 1–2 engineer-weeks
- Medium (50–200 ingresses): 3–6 engineer-weeks
- Large (200+ ingresses, multi-cluster): 8–16 engineer-weeks
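Applying the buffer is trivial arithmetic, but writing it down keeps everyone honest when the estimate lands in a roadmap:

```python
# Sketch: apply the 20-30% zero-downtime buffer to the base estimates above.
BASE_WEEKS = {"small": (1, 2), "medium": (3, 6), "large": (8, 16)}

def buffered(size, buffer=0.3):
    """Return (low, high) engineer-week estimates with the buffer applied."""
    lo, hi = BASE_WEEKS[size]
    return (round(lo * (1 + buffer), 1), round(hi * (1 + buffer), 1))

print(buffered("medium"))  # (3.9, 7.8)
```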
Common Pitfalls & Real War Stories from the Field
- Rewrite-target annotation hell: One team with 47 clusters spent 120 engineer-hours writing custom Traefik middleware to handle their old rewrite-target annotations before they could cut over.
- Default backend surprises: Three services 404’d on cutover because fallback behavior differs across controllers.
- cert-manager webhook cert rotation: The infamous `nil1`/`nil2` certs refused to rotate cleanly on one migration until we pinned the new controller’s policy.
- Legacy vs new `ingressClassName`: Old `kubernetes.io/ingress.class` annotations are silently ignored by modern controllers.
- Dual-controller TLS cert conflicts: Sporadic 503s occurred during cutover until we pinned the new controller’s admission secret rotation policy.
- Canary / auth / rate-limit sprawl: Teams that tried to migrate heavily customized canary, auth, and rate-limit annotations in "weekend heroics" regretted it heavily.
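The legacy-annotation pitfall above is cheap to scan for before cutover. A sketch over parsed Ingress metadata, shaped like the items from `kubectl get ingress -A -o json` (the sample data is illustrative):

```python
# Sketch: find Ingresses still relying on the deprecated
# kubernetes.io/ingress.class annotation, which modern controllers silently
# ignore (the pitfall above). Input mirrors `kubectl get ingress -A -o json`
# items; the data here is illustrative.

LEGACY = "kubernetes.io/ingress.class"

def find_legacy(items):
    """List namespace/name of Ingresses with only the legacy class annotation."""
    return [
        f"{i['metadata']['namespace']}/{i['metadata']['name']}"
        for i in items
        if LEGACY in i["metadata"].get("annotations", {})
        and not i.get("spec", {}).get("ingressClassName")
    ]

items = [
    {"metadata": {"namespace": "shop", "name": "web",
                  "annotations": {LEGACY: "nginx"}}, "spec": {}},
    {"metadata": {"namespace": "api", "name": "gw", "annotations": {}},
     "spec": {"ingressClassName": "traefik"}},
]
print(find_legacy(items))  # ['shop/web']
```

Anything this surfaces needs an explicit `spec.ingressClassName` before the new controller can pick it up.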
30-Day Action Checklist to Beat the March 2026 Deadline
Copy-paste this into your runbook—it’s the exact checklist I use with every client:
- Run audit commands and screenshot results today.
- Build a central inventory of every Ingress + annotation count.
- Spin up a PoC with your chosen replacement controller this week.
- Run `ingress2gateway` on a non-prod namespace.
- Complete staging dual-controller cutover by Day 15.
- Document every custom annotation and its replacement.
- Update runbooks, golden images, and CI pipelines.
- Brief security & compliance with official Kubernetes quotes.
- Schedule a full dry-run cutover weekend.
- Reach out for help if blocked.
Final Word
The March 2026 deadline is firm. Teams that allocate time in their roadmaps now will transition smoothly to modern, fully supported networking architectures. Proactive planning ensures your team controls the timeline, rather than being forced into a rushed emergency swap later. By auditing early, utilizing tools like ingress2gateway, and performing an incremental dual-stack cutover, you can modernize your cluster routing safely and systematically.
About the Author: Nino Skopac
- GitHub: @NinoSkopac
- LinkedIn: Nino Skopac
- X (Twitter): @NinoSkopac