Kubernetes Just Retired the Ingress Everyone Thought Was “The Default”

The community-backed Ingress-NGINX is officially on the way out. Here’s why it happened, why nobody saw it coming, and what devs should actually do next.

If you’ve ever set up a Kubernetes cluster at work, at home, or during a questionable late-night “lab session,” you’ve probably deployed something called the NGINX ingress controller. And like most of us, you assumed it was the default, the standard, the one true path in life.

Well… surprise.
Kubernetes just announced the retirement of the community-maintained Ingress-NGINX, the one referenced in half the tutorials on the internet and the one many teams don’t even realize they’re running.

Cue the classic YAML-burning-in-the-background meme.

The funniest (and by funniest I mean mildly horrifying) part?
There are two NGINX ingress controllers:

  • Ingress-NGINX → maintained by Kubernetes community → now deprecated
  • NGINX Ingress Controller (by F5/NGINX Inc) → fully supported → still alive and kicking

Most engineers only discover this difference the day they see the deprecation notice… or when someone in Slack asks, “Wait, which one are we actually using?”

And the reason for all this? Not drama. Not politics. Just the harsh, familiar reality of open source: only one or two maintainers were keeping the community controller alive, and eventually, the load became impossible.

TL;DR: The old ingress is going away, the future points toward Gateway API, and you probably need to check your cluster before your next coffee.

Wait… which NGINX even died?

Here’s the plot twist nobody asked for: when people say “the NGINX ingress controller,” they might be talking about completely different things. Same name, same vibes, totally different ancestry. It’s like Kubernetes secretly shipped two bosses with identical character models but entirely different abilities and then forgot to mention it in the tutorial.

So let’s clear this up once and for all.

There are two NGINX ingress controllers:

1. Ingress-NGINX (community)

  • Maintained by the Kubernetes community
  • Lives under the Kubernetes org
  • The one most tutorials reference
  • This is the one getting deprecated

2. NGINX Ingress Controller (by NGINX Inc / F5)

  • Vendor-backed
  • Production-grade
  • Better docs, better support
  • This one is totally fine

The problem? A scary number of teams still can’t tell which one they installed. I’ve seen clusters where the documentation said “official ingress controller,” and after five minutes of kubectl spelunking, it turned out to be the community one installed three DevOps engineers ago.

If you’re unsure which version you’re running, there’s a simple sniff test:

kubectl get pods -n ingress-nginx --show-labels

If you spot labels or an IngressClass controller referencing things like:

k8s.io/ingress-nginx
app.kubernetes.io/name=ingress-nginx

Congrats (or… uh, my condolences): you’re on the community edition.

If instead you see references like:

app.kubernetes.io/name=nginx-ingress
nginx.org/...

Then you’re using the vendor-backed controller: the one not being retired.
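
Another quick check, assuming your controller registered an IngressClass (most installs do), is to look at the CONTROLLER column:

kubectl get ingressclass

# Typical output (values may vary by install):
#   the community project reports  k8s.io/ingress-nginx
#   the F5 controller reports a    nginx.org/... controller name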

It’s honestly wild how long this naming confusion has haunted Kubernetes. But hey, this is the same ecosystem where CRDs multiply like Pokémon, so maybe it checks out.

Why Kubernetes is retiring Ingress-NGINX

Let’s get this out of the way: Kubernetes didn’t kill Ingress-NGINX because it was bad. Or insecure. Or slow.
They retired it for the most painfully relatable reason in tech:

There were basically no maintainers left.

I’m not exaggerating. According to the deprecation notice, the project was running on one or two active developers for a controller that routes traffic for thousands of companies. Imagine your entire production network hanging from the free time of two humans who also have day jobs. That’s not DevOps. That’s emotional roulette.

This is the quiet truth about open source that nobody likes admitting:
People love using OSS. They do not love maintaining it.

Everyone says “I want to contribute!” until they realize contribution means:

  • Reading 40-comment GitHub issues
  • Debugging someone else’s YAML
  • Triaging bug reports written like ransom notes
  • Staying awake to patch CVEs
  • Keeping up with every single Kubernetes API change

Maintaining an ingress controller is not cute. It’s unpaid infrastructure babysitting at scale.

CNCF projects have been feeling this squeeze for years, but Ingress-NGINX is the first mainstream Kubernetes component to publicly tap out. And honestly? It took guts for the maintainers to say “we can’t sustainably keep this alive.”

Because they could’ve done the usual open-source thing: disappear quietly, let the issues rot, and leave the community to figure it out.
But instead they chose honesty, probably while staring at a PR backlog that looked like a Soulsborne boss health bar.

And that’s why the Kubernetes team is guiding everyone toward Gateway API, which has more contributors, better design, and doesn’t depend on two exhausted saints doing weekend heroics.

What “deprecated” actually means for your cluster

When Kubernetes says something is deprecated, it’s easy to shrug and go, “Cool, I’ll deal with that… later.”
But this isn’t one of those soft deprecations where the thing keeps working for five more years because nobody wants to touch it.

This one is real.
And it has teeth.

Here’s the translation in plain dev-speak:

After March 2026:

  • No new versions
  • No security patches
  • No CVE fixes
  • No compatibility updates for newer Kubernetes releases
  • No one reviewing PRs when the internet catches fire

If you keep using Ingress-NGINX after that date, you’re basically running your production traffic through an abandoned GitHub repo. It’ll work right up until the day a new CVE drops and your CISO appears behind you like a summon in Elden Ring.

Sure, you could fork the project.
People say that a lot: “We’ll just maintain it ourselves.”

Okay.
So you’re going to:

  • Track Kubernetes API changes
  • Fix emerging bugs
  • Keep up with NGINX upstream changes
  • Patch vulnerabilities
  • Test across every weird edge case your cluster has accumulated over the years

All while your PM asks why “the ingress thingy” is taking so long.

The reality is simple: if you adopt the fork, you become the maintainer.
Which is fun for approximately zero teams on earth.

So yes, deprecation actually matters here. It’s a countdown.
And the responsible move is to pick your next ingress strategy before the timer hits zero.
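
If you want a quick sense of the blast radius first, a rough inventory helps. A minimal sketch, assuming your Ingress objects set spec.ingressClassName (older ones may use the kubernetes.io/ingress.class annotation instead, which this won’t catch):

# List every Ingress in the cluster with its class,
# so you can see how much routing still depends on ingress-nginx
kubectl get ingress --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.spec.ingressClassName'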

Your actual migration options

Okay: the community controller is retiring, the clock is ticking, and nobody wants to become the accidental maintainer of a legacy NGINX fork. So what now?
Good news: you’ve got three solid paths, and none of them involve sacrificing your weekends.

Option A: Gateway API (the actual future)

If Ingress was Kubernetes’ “starter home,” Gateway API is the grown-up apartment with real plumbing.
It fixes almost every pain point we’ve had with Ingress for years:

  • Multiple listeners (HTTP, HTTPS, TCP, UDP)
  • Cleaner separation of responsibilities
  • CRDs that don’t feel like patchwork
  • Proper support for complex routing patterns
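
To make that concrete, here’s a minimal sketch of the two core objects: a Gateway that owns the listener, and an HTTPRoute that binds traffic to a Service. The gatewayClassName “nginx” and the Service name “web” are placeholders for whatever your controller and app actually provide:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: nginx        # placeholder: class installed by your controller
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
    - name: example-gateway      # attach this route to the Gateway above
  hostnames:
    - "app.example.com"          # placeholder hostname
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web              # placeholder: your existing Service
          port: 80

That split is the “cleaner separation of responsibilities” bullet in practice: a platform team owns the Gateway, app teams own their own HTTPRoutes.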

Using Gateway API for the first time feels like switching from your old membrane keyboard to a mechanical one: you suddenly realize how janky the old setup was.

The ecosystem momentum is real, too.
Kubernetes SIG Network is investing here.
Vendors are building controllers around it.
The docs actually make sense (a rare Kubernetes phenomenon).

If you’re starting something new: go Gateway API first.

Option B: NGINX Ingress Controller (by NGINX Inc / F5)

This is the version that isn’t going away.

If you love NGINX features, or your entire company already speaks fluent nginx.conf, this is the closest drop-in replacement.
Vendor-backed means:

  • Real support
  • Tested modules
  • No disappearing maintainers
  • More stable releases

Yes, switching will require config tweaks, but it’s like rebinding keys in a game: annoying for a minute, then muscle memory takes over.
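
The biggest tweak is usually annotations: the two controllers use different prefixes for the same ideas. A hedged sketch of the kind of diff to expect (the Service name “web” is a placeholder, and you should verify the exact nginx.org/... forms against F5’s docs):

# Community Ingress-NGINX: annotations live under nginx.ingress.kubernetes.io/
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /

# F5 NGINX Ingress Controller: equivalents live under nginx.org/
metadata:
  annotations:
    nginx.org/rewrites: "serviceName=web rewrite=/"   # assumption: Service named "web"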

Option C: Traefik, Kong, HAProxy, and other battle-tested alternatives

If you’re not married to NGINX, the universe of ingress controllers is actually pretty good:

  • Traefik → super fast, CRD-friendly, DevOps favorite
  • Kong Ingress → API-first, powerful plugins
  • HAProxy → rock-solid performance, great for high traffic

These aren’t “side quests”; they’re fully supported main-story characters.

Migration tip:

Spin up a staging cluster and test your routing rules before switching.
Each controller handles rewrites, timeouts, and annotations differently.
Treat it like swapping engines in a car: don’t test it for the first time on the highway.
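
For the routing-rules check, even a crude diff of status codes between old and new controllers catches a lot. A minimal sketch, assuming you can hit both load balancers directly (the IPs, hostname, and paths are all placeholders):

OLD_IP=203.0.113.10    # placeholder: current ingress LB
NEW_IP=203.0.113.20    # placeholder: candidate controller LB
HOST=app.example.com   # placeholder hostname

for path in / /api/health /login; do
  # --resolve pins the hostname to each LB so both see the same Host header
  old=$(curl -s -o /dev/null -w '%{http_code}' --resolve "$HOST:80:$OLD_IP" "http://$HOST$path")
  new=$(curl -s -o /dev/null -w '%{http_code}' --resolve "$HOST:80:$NEW_IP" "http://$HOST$path")
  [ "$old" = "$new" ] && echo "OK   $path ($old)" || echo "DIFF $path old=$old new=$new"
done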

Why two NGINX controllers existed in the first place

If you’re wondering, “How did we even end up with two NGINX controllers?” welcome to one of Kubernetes’ longest-running lore drops. It’s like discovering halfway through a game that the tutorial boss actually had an evil twin.

Back in Kubernetes’ early days, there was no native load balancer.
Zero. Nada. Just vibes and YAML.
So the community came up with a workaround: let vendors build their own ingress controllers. Anyone could show up and make one: NGINX, F5, Traefik, Kong, whoever had the time and a dream.

But Kubernetes wanted an example implementation too. Not a production-grade one, literally just a reference for documentation and teaching. So they built:

Ingress-NGINX (the community version)
→ Lightweight
→ Easy to deploy
→ Intended as a learning example
→ Accidentally became the “default” because the internet copy-pastes everything

Meanwhile, over in the commercial realm, the actual NGINX team (now part of F5) built:

NGINX Ingress Controller (the vendor version)
→ Proper engineering
→ Enterprise support
→ Functionality closer to full NGINX
→ Definitely not intended to be confused with the community project… but naming happens

And that’s how the confusion was born: two projects, same name, same mascot, different universes.

If you’ve ever debugged SSL passthrough between the two, you know the pain. Different annotations, different configs, different behavior, and entire Reddit threads filled with developers realizing they installed the wrong one.

The good news? Once the community version sunsets, this naming chaos finally calms down.

Kubernetes networking is finally growing up

If there’s one thing this whole saga proves, it’s that the open-source world isn’t powered by corporate magic or infinite goodwill; it’s held together by a handful of humans quietly grinding through issues at night. Ingress-NGINX lived a long life for a project that was never meant to become the production ingress, and the fact it survived this long on one or two maintainers is honestly wild.

But there’s something healthy about this change.
Kubernetes networking has been stuck in “early access mode” for years. Between dozens of controllers, CRDs that behave differently, annotation jungles, and vendor plugins duct-taped onto YAML, routing traffic in Kubernetes has always felt more complicated than it needed to be.

Gateway API finally gives the ecosystem a unified direction: something with real design behind it instead of community inertia. Fewer accidental defaults, fewer abandoned components masquerading as standards, fewer “why does this annotation not work in this controller” afternoons.

And if nothing else, this retirement is a reminder to all of us: infrastructure deserves maintainers, not martyrs. We can’t build reliable systems on top of projects hanging by a thread.

So take a moment, check your clusters, plan your migration, and maybe, just maybe, appreciate the maintainers who kept this thing alive long enough for Kubernetes to evolve past it.

Your turn: what’s your ingress story, and which migration path are you taking? Drop it in the comments; dev takes welcome.
