Kene Ojiteli

From Ingress to Gateway API: A Hands-On Walkthrough with NGINX Gateway Fabric

Kubernetes is a container orchestration platform designed to run distributed applications reliably and efficiently. At its core, it schedules workloads (Pods), keeps them running, and gives primitives to scale and heal systems automatically.

But Kubernetes does not give you a complete application architecture for free, especially when it comes to networking.

A Pod is the smallest deployable unit in Kubernetes, and it is intentionally ephemeral. Pods can be recreated at any time, which means their IP addresses change frequently. This design is great for resilience, but terrible if you try to talk to pods directly.

To solve this, Kubernetes introduced Services.

Services: Stable Networking, Limited Scope

A service provides a stable virtual IP and DNS name that routes traffic to a group of Pods. This solves the Pod IP problem cleanly.
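
As a reminder, a Service is essentially a label selector plus a stable port. A minimal sketch (the name, labels, and ports below are illustrative, not taken from this project):

```bash
# Minimal Service: a stable virtual IP and DNS name in front of matching Pods
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo          # route to Pods labeled app=demo
  ports:
    - port: 80         # stable port exposed by the Service
      targetPort: 8080 # port the Pods actually listen on
EOF
```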

However, Services are fundamentally cluster-internal abstractions:

  • ClusterIP works only inside the cluster.
  • NodePort exposes ports, but with poor ergonomics and security concerns.
  • LoadBalancer depends heavily on cloud provider integrations.

Most importantly, Services do not provide expressive HTTP routing, TLS control, or multi-team ownership boundaries.

This is where Ingress came into play.

Ingress: A Necessary Step, but a Compromise

Ingress introduced HTTP concepts, such as hosts and paths, into Kubernetes. With an Ingress controller (NGINX, Traefik, HAProxy), applications could be exposed externally in a structured way.
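
For contrast with what follows, a typical Ingress looks roughly like this (the host, service name, and rewrite annotation are illustrative; annotations like this are exactly the controller-specific glue discussed below):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    # behaviour defined by the NGINX Ingress controller, not by the Ingress API itself
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 80
EOF
```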

Ingress solved real problems, but over time, its limitations became obvious:

  • The API is underspecified and relies on controller-specific annotations.
  • One Ingress object often becomes a shared choke point for many teams.
  • Infrastructure and application concerns are tightly coupled.
  • Advanced use cases (TCP, gRPC, multi-protocol) feel bolted on.

Ingress works, but it does not scale well organizationally. This is the context in which the Gateway API was created.

Gateway API: A Cleaner Separation of Concerns

Gateway API is an evolution of Ingress, not a forced replacement. One of its core ideas is ownership separation: platform teams manage Gateways, while application teams define Routes.

This role separation maps onto three resource types:

  • GatewayClass - provided by the controller implementation (here, NGINX Gateway Fabric); declares who implements Gateways that reference it.
  • Gateway - infrastructure-level entry points managed by platform teams; define where traffic enters the cluster.
  • Routes (HTTPRoute, GRPCRoute, etc.) - owned by application teams; define how traffic is routed.

Instead of one overloaded resource doing everything, responsibilities are explicit. A Gateway does not route traffic by itself. It only defines entry points (listeners). All routing logic lives in Route resources.
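
To make that split concrete, here is a minimal sketch of the two halves (resource and service names are illustrative; the manifests actually used in this project appear later in the walkthrough):

```bash
# Platform team: the entry point. No routing logic lives here.
cat <<'EOF' | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      protocol: HTTP
      port: 80
EOF

# Application team: the routing rules, attached to the Gateway via parentRefs.
cat <<'EOF' | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo-route
spec:
  parentRefs:
    - name: shared-gateway
  rules:
    - backendRefs:
        - name: demo-service
          port: 80
EOF
```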

To understand whether this actually improves things in practice, I built a small project.

Project goal: demonstrate how the Kubernetes Gateway API improves traffic management compared to Ingress by deploying a multi-service application and exposing it externally using NGINX Gateway Fabric.

NGINX Gateway Fabric is an implementation of the Kubernetes Gateway API built and maintained by NGINX. It plays the same role that an Ingress controller plays for Ingress, but for Gateway API.

Physically, NGINX Gateway Fabric runs inside the cluster as:

  • A controller that watches Gateway API resources.
  • One or more NGINX data-plane pods that act as the actual traffic proxy.

The controller translates Gateway API objects into live NGINX configuration and keeps it in sync.

Prerequisites

  • A running Kubernetes cluster (kind, minikube, or managed).
  • kubectl.
  • helm.
  • Basic understanding of Kubernetes YAML.
  • Willingness to debug errors (permission issues, version mismatches).

Step-by-Step Walkthrough

  • I carried out this walkthrough on an EC2 instance to simulate a realistic cloud environment. I launched an instance with sufficient memory and storage, then connected to it remotely using VS Code over SSH.

ec2

  • Before installing any tools, I updated the system’s package index to ensure I was working with the latest available versions.

update-packages
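
On an Ubuntu-based instance (an assumption; adjust for your distribution), that is simply:

```bash
sudo apt-get update -y    # refresh the package index
sudo apt-get upgrade -y   # optionally apply pending upgrades
```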

  • I installed the core tools needed for this walkthrough (example install commands follow the screenshots below):

    • Docker – to run containers.
    • Kind – to run Kubernetes locally inside Docker.
    • kubectl – to interact with the Kubernetes cluster.
    • Helm – to install the NGINX Gateway Fabric controller.

kind

docker

docker-version

helm

kubectl
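
The install commands for these four tools were roughly the following (the download URLs and the kind version are examples; check each project's documentation for the current release):

```bash
# Docker (Ubuntu convenience script)
curl -fsSL https://get.docker.com | sudo sh

# kind (pinned release; version is an example)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind

# kubectl (latest stable)
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl

# Helm (official install script)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```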

  • I created a Kubernetes cluster named gateway-api-demo using Kind and a configuration file. This cluster hosts all the Gateway API resources and workloads used in this walkthrough.

k8s-cluster
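
The cluster was created from a config file. A sketch of what that can look like (the extraPortMappings block is an assumption on my part; it is one common way to make a NodePort inside a kind node reachable from the EC2 host):

```bash
cat <<'EOF' > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # example NodePort inside the kind node
        hostPort: 30080        # port exposed on the EC2 host
        protocol: TCP
EOF

kind create cluster --name gateway-api-demo --config kind-config.yaml
```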

  • I installed the Gateway API CRDs, which introduced new Kubernetes resource types such as GatewayClass, Gateway, and HTTPRoute. These are definitions only; they do not route traffic by themselves. They simply tell Kubernetes what kinds of objects are allowed to exist.

gateway-api-crd
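
Installing the CRDs is a single apply of the upstream release manifest (the version below is an example; as the troubleshooting section shows, it has to match what your controller was built against):

```bash
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml

# Verify the new resource types now exist
kubectl get crd | grep gateway.networking.k8s.io
```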

  • With the CRDs in place, I installed NGINX Gateway Fabric using Helm. This component is the controller that watches Gateway API resources and converts them into live NGINX configuration.

ngf-install

ngf-resources
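
The Helm install looks roughly like this (the chart reference is the one in the NGF docs at the time of writing, and the namespace matches the one used later in this walkthrough; verify both for your NGF version):

```bash
helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric \
  --create-namespace -n ngf-gatewayapi-ns

# Confirm the controller and data-plane resources came up
kubectl get pods,svc -n ngf-gatewayapi-ns
kubectl get gatewayclass
```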

  • As part of its startup process, NGINX Gateway Fabric automatically created a GatewayClass named nginx. This GatewayClass declares that NGINX Gateway Fabric is responsible for implementing any Gateway that references it. Installing Gateway API CRDs only defines the resource types. The GatewayClass is created by the controller (NGINX Gateway Fabric), not by Kubernetes itself.

  • To demonstrate routing behaviour, I deployed three simple Python-based HTTP servers, each representing a different device-specific frontend. All applications were deployed into the same namespace.

deploy-apps
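
Each frontend is a small HTTP server behind its own Deployment. A minimal sketch of one of them (image, command, names, and labels are illustrative stand-ins; the real manifests are in the linked repo):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: desktop-frontend
  labels:
    app: desktop-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: desktop-frontend
  template:
    metadata:
      labels:
        app: desktop-frontend
    spec:
      containers:
        - name: web
          image: python:3.12-slim
          # Illustrative stand-in: serve a directory listing on port 8080
          command: ["python", "-m", "http.server", "8080"]
          ports:
            - containerPort: 8080
EOF
```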

  • At this stage, pods are running, but no external traffic can reach them yet.

  • I then created a Gateway resource. The Gateway defines where traffic enters the cluster by specifying listeners (ports and protocols) and referencing the nginx GatewayClass.

gateway

  • Describing the Gateway shows that NGINX Gateway Fabric accepted the Gateway, the Gateway was successfully programmed, a Service was created and exposed, and no routes are attached yet.

  • This implies that the Gateway is live, listening for traffic, but has no routing rules.

describe-gateway

describe-gateway1
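
The checks behind those screenshots are plain kubectl; the conditions to look for on the Gateway are Accepted and Programmed (the Gateway name is a placeholder):

```bash
kubectl get gateway -n ngf-gatewayapi-ns
kubectl describe gateway <gateway-name> -n ngf-gatewayapi-ns   # look for Accepted=True and Programmed=True
```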

  • The GatewayClass confirms that NGINX Gateway Fabric is the active controller responsible for handling Gateways that reference it.

gatewayclass

  • At this point, the ngf-gatewayapi-ns namespace contains the NGINX Gateway Fabric controller pods, along with the Gateway and its supporting resources.

ngf-gatewayapi-ns-ns

more-details

  • I attempted to access the application via <NodeIP>:<NodePort>. (NodePort is used here for simplicity in a demo environment, to make the Gateway reachable from outside the EC2 instance; in production, this would be replaced by a cloud LoadBalancer or an external traffic manager.) I also updated the EC2 security group to allow inbound traffic on the NodePort.

inbound-rule
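
Concretely, the check looked something like this (the IP and port are placeholders; the NodePort belongs to the Service that NGINX Gateway Fabric created for the Gateway):

```bash
# Find the NodePort on the Gateway's Service
kubectl get svc -n ngf-gatewayapi-ns

# From outside the instance, after opening the port in the security group
curl http://<EC2-public-IP>:<NodePort>/
```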

  • The request failed. This behaviour is expected: the Gateway accepts traffic, but no backend is defined and no route exists to forward traffic to any Service. This is where the HTTPRoute comes in.

access-app

  • The HTTPRoute defines how requests are matched, specifies which backend Service should receive traffic, and attaches itself to a Gateway using parentRefs.

  • I created three services and a single HTTPRoute that forwards traffic to the appropriate backend based on request rules.

  • After applying the HTTPRoute, traffic could flow end to end along a clear path: Client → Gateway → NGINX Gateway Fabric → HTTPRoute → Service → Pod.

HTTPRoute
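
A sketch of that single HTTPRoute, sending each device path to its own backend (the paths, service names, and ports are illustrative; if the Gateway lives in a different namespace, parentRefs also needs a namespace field and the Gateway must allow routes from that namespace):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: device-routes
spec:
  parentRefs:
    - name: shared-gateway          # the Gateway created earlier
  rules:
    - matches:
        - path: { type: PathPrefix, value: /desktop }
      backendRefs:
        - name: desktop-frontend
          port: 80
    - matches:
        - path: { type: PathPrefix, value: /android }
      backendRefs:
        - name: android-frontend
          port: 80
    - matches:
        - path: { type: PathPrefix, value: /iphone }
      backendRefs:
        - name: iphone-frontend
          port: 80
EOF
```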

  • I was able to access the applications successfully as traffic was routed through the gateway and its proxy pods.

desktop-route
android-route
iphone-route
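
Verification was again plain curl against the Gateway's external endpoint (the paths follow the illustrative route above; substitute whatever matches your own HTTPRoute rules):

```bash
curl http://<EC2-public-IP>:<NodePort>/desktop
curl http://<EC2-public-IP>:<NodePort>/android
curl http://<EC2-public-IP>:<NodePort>/iphone
```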

Challenges Encountered and Fixes
Error 1: I encountered a permission denied error while trying to create a Kind cluster. This was caused by insufficient user privileges to interact with the Docker daemon.

cluster-error

Fix: A temporary fix would be adding sudo to the command for creating the cluster (this is not recommended), but I permanently resolved this error by adding my current user to the Docker group, as shown below.

cluster-error-fix
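
In commands, the permanent fix is the standard docker group change (the cluster-creation command reuses the config file name assumed earlier):

```bash
sudo usermod -aG docker "$USER"   # add the current user to the docker group
newgrp docker                     # pick up the new group in this shell (or log out and back in)

kind create cluster --name gateway-api-demo --config kind-config.yaml   # retry without sudo
```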

Error 2: I encountered a CrashLoopBackOff error in the NGINX Gateway Fabric controller pod. The pod failed immediately on startup, and traffic handling never initialised.

crd-version-mismatch-error

Actions taken: I inspected the controller pod logs and observed repeated startup failures referencing BackendTLSPolicy, with errors indicating that the API server could not find the resource kind.

This indicated that the controller was attempting to register an informer for BackendTLSPolicy during startup. I verified that my installed Gateway API CRDs did not include the BackendTLSPolicy definition, even though I was not explicitly creating or using this resource in my manifests.

pod-logs

Fix: The issue was caused by a version mismatch between the Gateway API CRDs and the NGINX Gateway Fabric controller.

The installed controller version expected the BackendTLSPolicy CRD to exist as part of the Gateway API it was built against. Although I did not intend to use BackendTLSPolicy, the controller still attempted to register an informer for it during startup. Since the CRD was missing, the Kubernetes API server rejected the informer registration, causing the controller to crash.

I resolved the issue by upgrading the Gateway API CRDs to a release that includes the BackendTLSPolicy resource, ensuring compatibility with the installed NGINX Gateway Fabric controller. Once the CRD existed in the cluster, the controller started successfully even without any BackendTLSPolicy objects being created.

v2.1.0

v2.3.0
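
The diagnosis and fix boil down to checking whether the BackendTLSPolicy CRD exists and re-applying the Gateway API CRDs at a version the controller expects (the URL and version below are examples; on some releases BackendTLSPolicy ships in the experimental channel, so follow the compatibility table in the NGF docs):

```bash
# Is the BackendTLSPolicy CRD installed at all?
kubectl get crd backendtlspolicies.gateway.networking.k8s.io

# Re-apply the Gateway API CRDs at the version documented for your NGF release
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml

# Restart the controller so it can register its informers again
kubectl rollout restart deployment -n ngf-gatewayapi-ns
```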

Wrapping It All Up
This walkthrough moves from Kubernetes fundamentals to a modern, production-grade traffic management model using Gateway API and NGINX Gateway Fabric.

It shows how:

  • Pods are ephemeral and unreliable entry points.
  • Services provide stable in-cluster access, but don’t solve external routing.
  • Ingress improved the situation, but centralised too much responsibility.
  • Gateway API splits concerns cleanly:
    • GatewayClass defines who implements networking.
    • Gateway defines where traffic enters.
    • HTTPRoute defines how traffic is routed.

Most importantly, this walkthrough shows that Gateway API is not just “Ingress v2”. It is a deliberate redesign that establishes clear ownership boundaries, enhances extensibility, and provides improved operational visibility for Kubernetes networking.

If you are building multi-team platforms, managing multiple routes and protocols, or preparing for service mesh and mTLS-heavy environments, Gateway API is no longer optional knowledge; it is the direction Kubernetes is heading.

What This Project Solves

This project demonstrates how to:

  • Expose applications externally without coupling routing logic to workloads.
  • Safely evolve routing rules without redeploying Gateways.
  • Use a standards-based API instead of vendor-specific annotations.
  • Understand why traffic flows the way it does, not just that it works.

If Ingress ever felt magical or fragile to you, Gateway API replaces that magic with explicit contracts.

All manifests, configurations, and steps used in this walkthrough are available here.

Did this help you understand how Gateway API works? Drop a comment!
