Ishan Khare

Deploy a golang API on Kubernetes with Nginx ingress

This blog post is a follow-up to my previous post, where we talked about deploying a Python app behind an nginx reverse proxy using Docker.

So let's line up all the things we're going to touch on in this post:

  1. Ingress
  2. Services
  3. Deployments

Now that we have set the tone, let's briefly go over what each of the above is and what role it plays.
We will then code each one and implement the deployment in Kubernetes.

What is Ingress?

The Kubernetes documentation defines Ingress as:

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

This diagram from the docs provides a bit more detail, so I'll just leave it here:

    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]

Although IMO the above diagram is a bit too simplified – we'll see the arrangement in more detail as we implement it below.

Another way to look at Ingress is as a Layer 7 load balancer. Since a Layer 7 LB is application-aware, it can decide where to send traffic based on the content of the request, such as the HTTP host and path.

In contrast, a simple load balancer like Google Cloud's load balancer or AWS' ELB is a Layer 4 LB. You would use it to expose a single app or service to the outside world; it balances the load based
on destination IP address, protocol and port.


What are services?

Without going into too much detail about what Services are, the k8s docs describe a Service as:

An abstract way to expose an application running on a set of Pods as a network service.
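
Concretely, a Service selects a set of Pods by label and tracks their addresses as endpoints. A quick illustrative check (the Service name here is hypothetical; substitute one from your own cluster):

# list the Pod IP:port pairs a Service currently routes to
$ kubectl get endpoints my-service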


And Deployments?

Deployments are merely high-level abstractions over Pods (which are themselves abstractions over your containers). Well, that's only half correct.

In fact, ReplicaSets are a direct abstraction over your Pods, and Deployments are an abstraction over ReplicaSets.
Again from the docs:

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
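
You can see this chain of ownership directly once a Deployment is running in a cluster; as an illustrative check (resource names will differ in your cluster):

# the Deployment owns a ReplicaSet, which in turn owns the Pods
$ kubectl get deployments,replicasets,pods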


So let's dive right in

Let's first see what our Go code is going to look like:

package main

import (
    "fmt"
    "net/http"

    "github.com/gin-gonic/gin"
)

func greet(c *gin.Context) {
    name := c.Param("name")
    c.String(http.StatusOK, fmt.Sprintf("Hello %s!", name))
}

func stranger(c *gin.Context) {
    c.String(http.StatusOK, "Hello stranger!")
}

func main() {
    router := gin.Default()

    router.GET("/hello/:name", greet)
    router.GET("/hello/", stranger)
    router.GET("/", func(c *gin.Context) {
        c.String(http.StatusOK, "")
    })
    router.Run(":8000")
}

This is a very trivial API with three endpoints. The root URL also doubles as a health check and hence must return a 200 OK HTTP status.
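
To sanity check the API locally before containerizing it, you can run it and curl the endpoints (assuming the code above is saved as main.go in a module with gin available):

$ go run main.go
$ curl http://localhost:8000/             # health check, empty 200 OK body
$ curl http://localhost:8000/hello/       # -> Hello stranger!
$ curl http://localhost:8000/hello/ishan  # -> Hello ishan!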

Once we have this Go code built into a container – which, BTW, is not the subject of this post so I'm skipping it – we can concentrate on how to deploy
this service on Kubernetes.

I'm going to deploy this to a k8s cluster on Google Cloud, but the code should work seamlessly on any other k8s deployment as well.

Also, I've created a container with the above code and made it publicly available on Docker Hub. The container can be found here – https://hub.docker.com/repository/docker/ishankhare07/ingress-test

You can pull down the image using docker pull ishankhare07/ingress-test:0.0.3


The cluster

So let's get down to business. I'm assuming you have your cluster running and the kubectl command configured to connect to it. We'll be setting up:

  • an ingress resource
  • an ingress controller

While there are many plug-and-play ingress controllers available today, built on top of the Ingress API, the simplest and most straightforward one is nginx-ingress.
The official docs detail the exact installation steps for setting it up, but these steps vary slightly depending on the cloud provider, k8s version, etc.

Hence the best and easiest way to actually set up nginx-ingress is through Helm.

WTF is helm?

Helm is a package manager for k8s. Think of it as apt-get or yum, but for Kubernetes. Just tell Helm the package – called a chart in Helm terminology – that you want installed, and Helm will do it for you. The good thing about Helm is that it's a sort of automation tool on top of Kubernetes: if something requires multiple steps to set up – run a few deployments, expose them at a port through a service, mount a volume from a persistent disk, etc. – Helm will do all of that for you. More on Helm some other day. Right now, go ahead and install Helm on your cluster.

And yes, you read that right – "install on your cluster". You see, Helm (version 2, which is what we're using here) is a two-part thing:

  1. The helm CLI client that is installed on your dev machine.
  2. The tiller controller that is spun up as a pod inside your cluster.

The way it works is that you tell Helm what to install using the helm CLI. The CLI then communicates this to the tiller controller running on your k8s cluster,
which fetches and spins up the required pods etc. in the cluster for you. In a way, we are using Kubernetes-provided primitives to automate and extend Kubernetes itself.
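
For reference, on a Helm 2 setup this typically looks something like the following (a sketch, assuming a cluster with RBAC enabled – adjust the role binding to your own security requirements):

# create a service account for tiller and grant it cluster-admin
$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller

# install tiller into the cluster using that service account
$ helm init --service-account tiller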

Once you have Helm installed and set up, we can proceed with deploying our application.

Deploying our Go backend

We want our Go backend to be available on port 8000, because that is the port we listen on in our code.
So let's write a greeter.yaml file with the following content:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: go-greeter
  labels:
    app: greeter
spec:
  replicas: 3
  selector:
    matchLabels:
      app: greeter
  template:
    metadata:
      labels:
        app: greeter
    spec:
      containers:
      - name: greeter
        image: ishankhare07/ingress-test:0.0.3
        command: ["/go/bin/ingress-test"]
        ports:
          - containerPort: 8000

---

kind: Service
apiVersion: v1
metadata:
  name: greeter-service
spec:
  selector:
      app: greeter
  type: NodePort
  ports:
    - port: 8000

We define a Deployment with a ReplicaSet of 3 pods. Each pod's port 8000 is only accessible from inside the cluster. Next we define a Service that funnels all the incoming traffic to the 3 replicas of our container in a round-robin fashion.

Let's deploy this to our cluster first:
kubectl apply -f greeter.yaml

Let's check the deployment:

$ kubectl get deploy
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
go-greeter                      3/3     3            3           13d

We also deployed a Service, so let's check that too:

$ kubectl get svc
NAME                            TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE
greeter-service                 NodePort       10.36.7.174   <none>          8000:31269/TCP               13d

So our Deployment is up and running and is exposed inside the cluster through a Service. Next we need to connect it to the outside world through Ingress.
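
Before that, we can quickly sanity check the Service from our machine by port-forwarding to it (an optional check; run the curl in a second terminal):

# forward local port 8000 to the Service inside the cluster
$ kubectl port-forward svc/greeter-service 8000:8000

# in another terminal
$ curl http://localhost:8000/hello/   # -> Hello stranger!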

Installing nginx ingress through helm

Assuming Helm is installed and set up with RBAC roles applied, we can install the nginx-ingress chart with the following command:
helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.publishService.enabled=true

If all goes well, we should end up with the following services:

NAME                           TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)                     AGE
nginx-ingress-controller       LoadBalancer  10.7.248.226  <pending>    80:30890/TCP,443:30258/TCP  1s
nginx-ingress-default-backend  ClusterIP     10.7.245.75   <none>       80/TCP                      1s
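
The controller's load balancer can take a minute or two to get a public IP. You can watch the EXTERNAL-IP field move from <pending> to an actual address with:

$ kubectl get svc nginx-ingress-controller -w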

So we have our ingress controller and the default backend set up and running. Now all that's left is telling this nginx-ingress-controller how and where to redirect incoming traffic.
Let's write a new file ingress.yaml for it:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /hello/
            backend:
              serviceName: greeter-service
              servicePort: 8000

Let's deploy this now:
kubectl apply -f ingress.yaml

This will create the Ingress, and the public IP address of the load balancer will be attached to it, which can be seen by:

$ kubectl get ingress
NAME            HOSTS   ADDRESS         PORTS   AGE
my-ingress   *       XXX.XXX.XXX.XXX   80      11d

Now, if we visit the above IP address at the root URL, we'll hit the default backend and get a 404 – since / is not defined in our ingress rules.

On hitting the first path, /hello/, we get the "Hello stranger!" response. We also have another endpoint in our greeter service, so let's hit that too – /hello/ishan returns the personalized greeting.
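
With curl against the ingress IP, the round trip looks roughly like this (XXX.XXX.XXX.XXX stands in for your ingress address from above):

$ curl http://XXX.XXX.XXX.XXX/hello/
Hello stranger!
$ curl http://XXX.XXX.XXX.XXX/hello/ishan
Hello ishan!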


Now that we are able to direct traffic to our deployment using the ingress, let's deploy a new pod and redirect some traffic to it as well. We'll simply use hashicorp/http-echo
as a second service. Let's write a manifest for this:
apple.yaml

kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
    - name: apple-app
      image: hashicorp/http-echo
      args:
        - "-text=apple"

---

kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  type: NodePort
  ports:
    - port: 5678

Let's deploy this:
kubectl apply -f apple.yaml
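
As a quick check that the pod and its service came up (output omitted here):

$ kubectl get pod apple-app
$ kubectl get svc apple-service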

Now that we have this pod running and exposed through the service, we want to update the ingress file with a rule for it:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /apple
            backend:
              serviceName: apple-service
              servicePort: 5678
          - path: /hello/
            backend:
              serviceName: greeter-service
              servicePort: 8000

Re-configure the ingress by running:

$ kubectl apply -f ingress.yaml
ingress.extensions/my-ingress configured

We can test it by going to the path /apple:
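
With curl (same placeholder ingress address as before), http-echo should reply with the text we configured:

$ curl http://XXX.XXX.XXX.XXX/apple
apple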

And that's it! I'm sure we can all appreciate the loosely coupled nature of Kubernetes and the scalability it provides. In the next blog post we'll look into deploying a gRPC endpoint written in Go on this same k8s cluster, behind the same ingress. We'll also try to connect to it via a Python client.

This post was originally published on my blog at ishankhare.com

Top comments (2)

Ante Gulin

Great blog post. I'm really looking forward to the next post where you're exposing a gRPC service.
When can we expect the next one :D?

Ishan Khare

Thanks. You can expect the next iteration probably in the coming week. I'm currently working on it and will try my best to finish it as soon as possible.