Migrating Off Ingress-NGINX Before the March Deadline: What the Guides Don't Tell You
Ingress-NGINX goes end of maintenance in March 2026.
I know. You've seen the announcements. You've bookmarked three migration guides. You have a ticket in the backlog.
Here's the thing: most of those guides will get you 80% of the way there and leave you staring at a cluster that's half-migrated on a Friday afternoon. This post is about the other 20%.
I've migrated three AKS clusters to Gateway API over the past few weeks. This is what I actually ran into — not what the documentation said I'd run into.
Why the March Deadline Actually Matters
End of maintenance isn't just a deprecation warning. It means:
- No more security patches for ingress-nginx
- No new Kubernetes compatibility releases — Kubernetes 1.32+ support is officially not coming
- Community PRs will stop being reviewed
If you're on AKS, EKS, or GKE running recent Kubernetes versions, you'll eventually hit a compatibility wall. The longer you wait, the harder the migration becomes because the gap between your current Ingress setup and Gateway API grows with every annotation-dependent feature you add.
The time to migrate is before you're forced to.
Step 0: Before You Touch Anything, Run the Audit
This is the step nobody writes about. And it's where the real work is.
```bash
# Find all Ingress objects carrying ingress-nginx-specific annotations
# (null-safe: skips Ingresses with no annotations at all)
kubectl get ingress --all-namespaces -o json | \
  jq -r '.items[]
    | . as $ing
    | (.metadata.annotations // {}) | keys[]
    | select(startswith("nginx.ingress.kubernetes.io/"))
    | "\($ing.metadata.namespace)/\($ing.metadata.name): \(.)"'
```
On the first cluster I migrated, this returned 47 lines. Most were the usual suspects (ssl-redirect, proxy-body-size) — but six were annotations I didn't immediately recognize. One was a custom auth snippet. One was a Lua snippet for rate limiting.
Those six took longer to handle than the other 41 combined.
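It also helps to see which annotations you use most, so you can sequence the work. Here's a minimal sketch in Python that counts them; the function name is mine, and it expects the parsed output of `kubectl get ingress -A -o json`:

```python
import json
import sys
from collections import Counter

PREFIX = "nginx.ingress.kubernetes.io/"

def count_nginx_annotations(items):
    """Count occurrences of each ingress-nginx annotation across Ingress objects."""
    counts = Counter()
    for item in items:
        # annotations may be absent or null on some Ingress objects
        annotations = item.get("metadata", {}).get("annotations") or {}
        counts.update(k for k in annotations if k.startswith(PREFIX))
    return counts

# Usage sketch:
#   kubectl get ingress -A -o json > ingresses.json
#   data = json.load(open("ingresses.json"))
#   for key, n in count_nginx_annotations(data["items"]).most_common():
#       print(f"{n:4d}  {key}")
```

Sorting rarest-first is the trick: the one-off annotations at the bottom of the list are usually the hard cases.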
The annotations to watch for:
| Ingress annotation | What it does | Gateway API equivalent |
|---|---|---|
| `nginx.ingress.kubernetes.io/rewrite-target` | Path rewriting | `URLRewrite` filter in HTTPRoute |
| `nginx.ingress.kubernetes.io/canary: "true"` + weight | Traffic splitting | `backendRefs` with `weight` |
| `nginx.ingress.kubernetes.io/auth-url` | External auth | External auth filter (implementation-specific) |
| `nginx.ingress.kubernetes.io/configuration-snippet` | Raw nginx config injection | Implementation-specific, often no equivalent |
| `nginx.ingress.kubernetes.io/server-snippet` | Server-level raw config | Not supported in Gateway API |
| `nginx.ingress.kubernetes.io/proxy-read-timeout` | Backend timeout | HTTPRoute `timeouts.backendRequest` |
The last two — configuration-snippet and server-snippet — are the ones that should make you pause. If you have those, you're using nginx-specific functionality that Gateway API doesn't model. You'll need to find a different approach.
Step 1: Install the Gateway API CRDs
Gateway API isn't bundled with Kubernetes. You install it separately.
```bash
# Install the standard channel (HTTPRoute, Gateway, GatewayClass are all GA here)
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml

# Verify
kubectl get crd | grep gateway.networking.k8s.io
```
There are two channels: standard-install.yaml and experimental-install.yaml. The experimental channel adds resources like BackendLBPolicy and BackendTLSPolicy. (GRPCRoute was experimental until v1.1; it's GA and ships in the standard channel now.)
For a basic Ingress migration, standard is enough, and since v1.1 it includes HTTPRoute timeouts. If you need session persistence or backend TLS policies, you'll want experimental.
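As a concrete example, the job proxy-read-timeout used to do can be expressed directly on an HTTPRoute rule. A sketch, reusing the service name from the examples in this post:

```yaml
# Fragment of an HTTPRoute's spec.rules
rules:
  - timeouts:
      request: 60s         # total budget for the whole request
      backendRequest: 30s  # per-attempt timeout to the backend
    backendRefs:
      - name: api-service
        port: 8080
```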
Step 2: Choose Your Implementation
This is the decision that trips people up. With Ingress-NGINX, there was basically one serious option. Gateway API has half a dozen implementations and they don't all support the same features.
The main contenders for AKS/EKS/GKE clusters:
Cilium Gateway API — if you're already running Cilium as your CNI, this is the obvious choice. It's deeply integrated, the performance is excellent, and conformance coverage is strong for core and most standard features. If you're not on Cilium, adding it just for Gateway API is probably not the right move.
NGINX Gateway Fabric — from the same team that built Ingress-NGINX. If your migration anxiety is high, this is the safest path: the mental model is familiar, and they've explicitly designed for Ingress-NGINX migration. It's not as feature-complete as some alternatives yet, but it's improving fast.
Envoy Gateway — the CNCF project backed by Tetrate, Cisco, and others. It's built on Envoy, which powers Istio and Contour. Strong conformance, active development. If you're not already invested in an ecosystem, this is a solid pick.
Istio (as a Gateway API implementation) — if you're running Istio anyway, use it. If you're not, don't add Istio just for Gateway API.
The conformance tests matter. Before you commit to an implementation, check the conformance report for the specific version you're installing:
```bash
# Look for the conformance report in the release — e.g.:
# https://github.com/cilium/cilium/blob/v1.17.0/conformance-report.yaml
```
Some implementations will silently skip HTTPRoute filters they don't support. Not an error. Just ignored. Test your routes with actual traffic before you decommission Ingress-NGINX.
Step 3: Create Your GatewayClass and Gateway
The GatewayClass is the cluster-wide declaration of which controller is handling your gateways. The Gateway is the instance — it's roughly equivalent to the Ingress controller's Service.
```yaml
# GatewayClass — cluster-scoped
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: cilium
spec:
  controllerName: io.cilium/gateway-controller
---
# Gateway — namespace-scoped; put it wherever you manage shared infra
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-gateway
  namespace: infra
spec:
  gatewayClassName: cilium
  listeners:
    - name: http
      protocol: HTTP
      port: 80
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-tls
            namespace: infra
```
Notice where the TLS certificate is referenced: at the Gateway, not at the route. This is a meaningful design difference from Ingress.
With Ingress, each Ingress object could reference its own TLS secret in its own namespace. With Gateway API, TLS termination happens at the listener, and the cert is in the Gateway's namespace. If your applications are in different namespaces, you need a ReferenceGrant to allow the Gateway to reference secrets across namespaces — or you consolidate TLS management.
```yaml
# Allow prod-gateway in the infra namespace to reference certs in app-ns
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-gateway-tls
  namespace: app-ns   # lives in the namespace that owns the Secret
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: Gateway
      namespace: infra
  to:
    - group: ""
      kind: Secret
```
Step 4: Migrate Your Ingress Objects to HTTPRoute
This is the bulk of the work. For a simple Ingress, the translation is mechanical:
Before (Ingress):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
```
After (HTTPRoute):
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
  namespace: production
spec:
  parentRefs:
    - name: prod-gateway
      namespace: infra
      sectionName: https
  hostnames:
    - api.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v1
      backendRefs:
        - name: api-service
          port: 8080
```
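For the simple cases, this translation is mechanical enough to script. A minimal sketch (the function name is mine, and it handles only single-host, Prefix-path Ingresses like the one above; anything annotation-driven still needs manual review):

```python
def ingress_to_httproute(ingress, gateway_name="prod-gateway", gateway_ns="infra"):
    """Translate a simple single-rule Ingress dict into an HTTPRoute dict.

    Covers host + Prefix-path rules only. Run the annotation audit first;
    annotation-dependent Ingresses need a human.
    """
    rule = ingress["spec"]["rules"][0]
    routes = []
    for p in rule["http"]["paths"]:
        svc = p["backend"]["service"]
        routes.append({
            "matches": [{"path": {"type": "PathPrefix", "value": p["path"]}}],
            "backendRefs": [{"name": svc["name"], "port": svc["port"]["number"]}],
        })
    return {
        "apiVersion": "gateway.networking.k8s.io/v1",
        "kind": "HTTPRoute",
        "metadata": {
            "name": ingress["metadata"]["name"].replace("-ingress", "-route"),
            "namespace": ingress["metadata"]["namespace"],
        },
        "spec": {
            "parentRefs": [{"name": gateway_name, "namespace": gateway_ns,
                            "sectionName": "https"}],
            "hostnames": [rule["host"]],
            "rules": routes,
        },
    }
```

Feed it each Ingress from `kubectl get ingress -A -o json`, dump the result as YAML, and review the diff before applying anything.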
The ssl-redirect annotation disappears because redirects become explicit routing instead of controller config. To force HTTP → HTTPS, attach a catch-all HTTPRoute to the Gateway's HTTP listener (parentRefs with sectionName: http) whose only rule is a redirect:
```yaml
# Rules for an HTTPRoute bound to the Gateway's http listener
rules:
  - filters:
      - type: RequestRedirect
        requestRedirect:
          scheme: https
          statusCode: 301
```
Step 5: Traffic Splitting (If You Have Canary Deployments)
This was the most pleasant surprise. Gateway API's traffic splitting is cleaner than Ingress-NGINX's canary annotations.
Before (Ingress canary):
```yaml
# Two separate Ingress objects; the canary one carries these annotations
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
```
After (HTTPRoute with weights):
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route-weighted
  namespace: production
spec:
  parentRefs:
    - name: prod-gateway
      namespace: infra
      sectionName: https
  hostnames:
    - api.example.com
  rules:
    - backendRefs:
        - name: api-service-stable
          port: 8080
          weight: 80
        - name: api-service-canary
          port: 8080
          weight: 20
```
One resource instead of two. The weight is explicit. The intent is obvious.
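If you were using canary-by-header rather than weights, that maps to a header match on the same resource. A sketch; the header name here is my own choice, not a Gateway API convention:

```yaml
# Fragment of an HTTPRoute's spec.rules
rules:
  - matches:
      - headers:
          - name: X-Canary     # hypothetical header; pick your own
            value: "always"
    backendRefs:
      - name: api-service-canary
        port: 8080
  - backendRefs:               # everyone else hits stable
      - name: api-service-stable
        port: 8080
```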
The catch: if your CI/CD pipeline was generating canary Ingress objects dynamically (common with Argo Rollouts or Flagger), you'll need to update your rollout configuration. Argo Rollouts supports Gateway API through its traffic-router plugin; Flagger has support too. But it's a pipeline change, not just a cluster change — plan for it.
What I'd Do Differently
Run the annotation audit on day 1, not day 3. I wasted a full day migrating straightforward Ingresses before I hit the hard ones. Knowing upfront what you're dealing with changes the sequencing entirely.
Set up the new Gateway alongside Ingress-NGINX, not instead of it. Run both in parallel for at least a week. Use your load balancer to shift traffic gradually — 10% to Gateway API, watch it, then 50%, then 100%. Don't do a cutover.
Test Gateway API conformance for your specific filters before writing 200 HTTPRoutes. Write one route that exercises your hardest annotation translation. Confirm it works end-to-end in your chosen implementation. Then scale.
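For example, if rewrite-target is your hardest annotation, the proving route might look like this. A sketch, reusing the Gateway and service names from earlier in this post:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: rewrite-probe
  namespace: production
spec:
  parentRefs:
    - name: prod-gateway
      namespace: infra
      sectionName: https
  hostnames:
    - api.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /legacy
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /v1   # /legacy/foo becomes /v1/foo upstream
      backendRefs:
        - name: api-service
          port: 8080
```

Send real requests through it and check what the backend receives before writing the other 199 routes.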
Check Kubernetes version compatibility for your chosen implementation. Some Gateway API controllers have minimum Kubernetes version requirements that your cluster might not meet yet. Cilium Gateway API requires Cilium v1.16+ and Kubernetes 1.26+.
The Part That Actually Takes Time
I said the migration isn't hard. That's true for simple cases. What takes time is the inventory work — finding all the Ingress objects, categorizing them by complexity, deciding what to do with the ones using raw nginx config snippets.
On one cluster, two services were using configuration-snippet to inject custom Lua for rate limiting. Those needed a different solution entirely — we moved them behind an API gateway that handles rate limiting natively, and simplified the routing config.
That's not a Gateway API problem. That's an architecture decision that was always deferred. The deadline just surfaced it.
If you start now, you have time to handle the hard cases properly. If you start in mid-March, you're making rushed decisions under pressure.
Quick Reference: Annotation Translation Cheat Sheet
| Ingress annotation | Gateway API approach |
|---|---|
| `ssl-redirect` | HTTPS listener on Gateway + HTTP redirect rule |
| `rewrite-target` | `URLRewrite` filter in HTTPRoute |
| `canary` + `canary-weight` | `backendRefs` with `weight` |
| `proxy-body-size` | No direct equivalent; implementation-specific config |
| `proxy-read-timeout` | HTTPRoute `timeouts.backendRequest` |
| `auth-url` | External auth filter (implementation-specific) |
| `configuration-snippet` | No equivalent — redesign required |
| `server-snippet` | No equivalent — redesign required |
| `use-regex` | `RegularExpression` path match type in HTTPRoute |
| `affinity` (session) | `SessionPersistence` in BackendLBPolicy (experimental) |
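As a worked example of the use-regex row, here's a sketch of a RegularExpression path match. Note that regex path support varies by implementation, so check the conformance report first:

```yaml
# Fragment of an HTTPRoute's spec.rules
rules:
  - matches:
      - path:
          type: RegularExpression
          value: "/v[0-9]+/users/.*"
    backendRefs:
      - name: api-service
        port: 8080
```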
Where to Start
- Run the annotation audit above — know your hard cases before you start
- Read the conformance reports for your target implementation
- Install Gateway API CRDs alongside your existing Ingress-NGINX setup
- Migrate one simple service, verify it works, then expand
The Gateway API documentation is genuinely good. The migration guide covers the happy path well. This post is what sits next to it for when you hit the edges.
If you're in the middle of this and hit something weird — reach out. I've probably seen it.