Michael Levan

Kubernetes Ingress: Nginx Ingress Edition

In Kubernetes, some applications need to be public-facing. Perhaps it’s a web app or something else that needs to be externally accessible outside of the private network. In either case, you’re going to need a way to communicate with the application.

With Kubernetes, there are a few options, one of which is an Ingress Controller.

In this blog post, you’ll learn about what Ingress is, where it fits into Kubernetes, and how to get started with it right now.

Prerequisites

To follow along with this blog post, you should have:

  • Minikube installed OR a Kubernetes cluster already running on-prem or in the cloud.

What’s Ingress?

How does an application reach the public internet with, say, a Kubernetes Deployment or a Kubernetes Pod? It does so by utilizing a Kubernetes Service. A Kubernetes Service can have a load balancer attached to it, whether you’re in the cloud or on-prem, so people (clients, customers, yourself, etc.) can reach the application publicly.

The thing is, that means each Kubernetes Service you expose this way gets its own load balancer. That can get costly and time-consuming.

That’s where Ingress Controllers come into play. An Ingress Controller is a specialized load balancer that you can point your Kubernetes Service to via the Service name and the path that you want the Service to be reached on.

For example, below is a snippet from a manifest:

spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80

Notice a few things:

  • There is a path key/value mapping that shows what path you’ll be able to reach the application on via the load balancer. It could look something like http://load_balancer_name/testpath
  • There is a service key/value mapping that points to the Kubernetes Service name and the port that the Service is running on. In this case, the name of the Service is test.

What this essentially says is that you have a Kubernetes Service that you created called test. The test service is set up in Nginx Ingress and the way that you reach the Service is by going to the /testpath path.
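
For example, once the Ingress resource is applied, reaching the test Service from the command line would look roughly like this (the hostname below is just a placeholder; use the address reported by kubectl get ingress):

# Placeholder hostname for the Ingress Controller's load balancer
curl http://load_balancer_name/testpath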

Why Is It Needed?

In the previous section, you learned about what Ingress is. Now let’s talk about why it’s needed.

Here are a few things to keep in mind.

  • Cloud load balancers are expensive
  • If you use an on-prem load balancer solution, chances are you have to set up BGP so you aren’t relying on ARP announcements for failover
  • Who wants to manage multiple cloud load balancers?
  • Who wants to manage BGP?

For many teams, it would be much easier to have multiple Kubernetes Services without each of them having a load balancer and instead, having one load balancer that every service is pointing to. It’s very similar to what we do with applications that aren’t containerized. For example, think about an Nginx Reverse Proxy. Within a reverse proxy, you have multiple server names, paths, ports, etc. and people can reach the applications via one address (the reverse proxy).

Another factor is that cloud load balancers can get expensive. Per the AWS docs: $0.0225 per Application Load Balancer-hour (or partial hour). Although that’s a small number, it works out to roughly $16 per month per load balancer (before any usage-based LCU charges), which adds up if an application runs for years and you have multiple applications running, each with its own load balancer.

Getting Started With Nginx Ingress

As with all Kubernetes resources, there’s a Kind/Spec for Ingress and an API. The API comes from a named API group, networking.k8s.io:

KIND:     IngressClass                                                               
VERSION:  networking.k8s.io/v1
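
If you want to poke at that API yourself, kubectl can list the resource kinds in the networking.k8s.io group and show the documented fields for the Ingress spec (output will vary slightly by cluster version):

# List the resource kinds that live in the networking.k8s.io API group
kubectl api-resources --api-group=networking.k8s.io

# Show the documented fields for the Ingress kind and its rules
kubectl explain ingress
kubectl explain ingress.spec.rules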

Let’s jump in and figure out not only how to install Nginx Ingress, but how to start using it.

Installing an Ingress Controller

For installing an Ingress Controller, it all depends on what platform you’re using. The installation methods are broadly the same, in that you use a Helm chart to install Nginx Ingress, but some options and flags may vary based on which Kubernetes cloud or on-prem service you’re using.

For the purposes of this blog post, you can use the Minikube installation if you have Minikube running on your local machine. You can find the instructions here.
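
If you’re following along with Minikube, enabling the bundled NGINX Ingress Controller typically looks like this (recent Minikube versions deploy the controller into the ingress-nginx namespace):

# Enable the NGINX Ingress Controller addon in Minikube
minikube addons enable ingress

# Confirm the controller pod is running
kubectl get pods -n ingress-nginx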


If you want to install Nginx Ingress somewhere else, like AKS or EKS, take a look at the installation guides here and go through them step by step.
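
As a rough sketch of the Helm route for a cloud cluster (the release name and namespace below are just examples; check the official guide for any cloud-specific flags):

# Add the ingress-nginx chart repository and install the controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace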


Running an Ingress Controller

Now that Nginx Ingress is installed, let’s look at how to get an application up and running to use Nginx Ingress.

First things first, you’ll need a front-facing application. To get one going, you can use a stateless version of Nginx via the nginx:latest Docker image. The below Kubernetes Manifest will create:

  • An Nginx Deployment spec
  • A Service spec
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginxdeployment
  replicas: 2
  template:
    metadata:
      labels:
        app: nginxdeployment
    spec:
      containers:
      - name: nginxdeployment
        image: nginx:latest
        imagePullPolicy: IfNotPresent # pull nginx:latest only if it isn't already cached locally
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
spec:
  selector:
    app: nginxdeployment
  ports:
    - protocol: TCP
      port: 80

Save the above Kubernetes Manifest in a location of your choosing (your Desktop for example) and run the following command:

kubectl create -f deployment.yaml

If you run kubectl get pods, you should see Nginx pods running.
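
To double-check that both the Deployment and the Service came up, you can filter by the app: nginxdeployment label from the manifest above:

# Pods created by the Deployment
kubectl get pods -l app=nginxdeployment

# The Service fronting those pods
kubectl get svc nginxservice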

Next, you’ll need the Kubernetes Manifest to create an Ingress Controller, which you’ll find below. Notice a few things:

  • The Kubernetes Service name is the same name as the Service you created in the previous Kubernetes Manifest. That’s because the Ingress Controller has to point to the Kubernetes Service
  • The host is pointing to localhost with the assumption that you’re using Minikube.
  • The backend port is 80, which matches the port the nginxservice Service exposes
  • The ingressClassName is nginx, which is the IngressClass the Minikube ingress addon installs
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginxservice-a
  annotations:
    # Rewrite the matched path to / so the request reaches the default Nginx welcome page
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx # the IngressClass created by the Minikube ingress addon
  rules:
  - host: localhost
    http:
      paths:
      - path: /nginxappa
        pathType: Prefix
        backend:
          service:
            name: nginxservice
            port:
              number: 80

Save the Ingress Manifest as ingress.yaml and run the following command:

kubectl create -f ingress.yaml

Once complete, open up a web browser and go to http://localhost/nginxappa (depending on your Minikube driver, you may first need to run minikube tunnel so that localhost reaches the Ingress Controller). You’ll see the default Nginx welcome page, which means the application is now reachable through the Ingress.
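
You can also verify the Ingress resource and hit the path from the command line (the ADDRESS column may take a minute to populate):

# Confirm the Ingress was created and picked up by the controller
kubectl get ingress ingress-nginxservice-a

# Request the app through the Ingress path
curl http://localhost/nginxappa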


Congrats! You have officially set up your first Ingress Controller.

Other Ingress Controllers

Although Nginx Ingress is one of the most popular Ingress Controllers, there are several others, ranging from open-source projects to large organizations building their own implementations. The Kubernetes documentation maintains a full list of Ingress Controllers: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

As you can see, there are a lot of options. The best way to figure out which one to go with for your organization is ultimately to understand what the underlying infrastructure is. For example, if your organization runs all of its network equipment via F5 and you have F5 enterprise support, using the F5 ingress controller may not be that bad of an idea. If your organization is deep in the Kong ecosystem, the Kong ingress controller would make sense.

At the end of the day, they all pretty much do the same thing at their core. It’s just a matter of if there’s one that has some different feature that you need or if it’s coming from a company/product that you’re already heavily invested in.
