
Chathra Serasinghe

EKS Fargate with Nginx Ingress Controller

Why Fargate?

Fargate removes the need to provision and maintain EC2 instances for your Kubernetes applications. When your pods start, Fargate automatically allocates compute resources to run them on demand.

When to use?

Fargate is a good fit when your workload or traffic patterns are irregular and unpredictable.

Why would I choose the NGINX ingress controller over the Application Load Balancer (ALB) ingress controller?

  • With the NGINX Ingress controller:
    • you can have multiple Ingress objects for multiple environments or namespaces behind the same Network Load Balancer (see the sketch after this list).
  • With the ALB Ingress controller:
    • each Ingress object requires a new load balancer.
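
For example, here is a rough sketch of two Ingress objects in different namespaces that are both picked up by the same nginx controller, and therefore exposed through the same Network Load Balancer. The namespaces, hosts, and service names are made up for illustration:


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: staging               # hypothetical namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: staging.example.com    # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-svc      # hypothetical service
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: production            # hypothetical namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: www.example.com        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-svc      # hypothetical service
                port:
                  number: 80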

Why can't the Nginx ingress controller run on a Fargate-only cluster?

The Nginx ingress controller needs privilege escalation, which Fargate does not allow. You will get the following error when you try to deploy it:

Pod not supported on Fargate: invalid SecurityContext fields: AllowPrivilegeEscalation
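
That error is triggered by the controller container's securityContext in the upstream deployment manifest, which looks roughly like the abbreviated excerpt below; allowPrivilegeEscalation: true is the field Fargate rejects:


# Approximate excerpt from the ingress-nginx controller Deployment's container spec
securityContext:
  capabilities:
    drop:
      - ALL
    add:
      - NET_BIND_SERVICE           # needed to bind ports 80/443 as a non-root user
  runAsUser: 101
  allowPrivilegeEscalation: true   # not allowed on Fargate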

Why?

There are currently a few limitations of EKS Fargate that you should be aware of:

  • There is a maximum of 4 vCPU and 30 GB of memory per pod.
  • Currently there is no support for stateful workloads that require persistent volumes or file systems.
  • You cannot run DaemonSets, privileged pods, or pods that use HostNetwork or HostPort.
  • The only load balancer you can use is an Application Load Balancer.

So how can you still run your workload on Fargate while using the Nginx ingress controller?

You can run the Nginx ingress controller on EKS managed nodes while your workloads run on Fargate nodes.

What is a Fargate Profile?

Before you can schedule pods on Fargate in your cluster, you must define at least one Fargate profile that specifies which pods use Fargate when launched.
The Fargate profile allows an administrator to declare which pods run on Fargate.
This declaration is done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace and optional labels.
If a pod matches multiple Fargate profiles, Amazon EKS picks one of the matches at random. In this case, you can specify which profile a pod should use by adding the following Kubernetes label to the pod specification: eks.amazonaws.com/fargate-profile: <profile_name>.
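
For example, assuming a profile named default, you could pin a pod to it like this (the pod name and image are placeholders):


apiVersion: v1
kind: Pod
metadata:
  name: myapp                                   # hypothetical pod name
  labels:
    eks.amazonaws.com/fargate-profile: default  # use the profile named "default"
spec:
  containers:
  - image: nginx
    name: nginx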

What a Fargate profile should look like (a sample create command follows this list):

  • it can have a maximum of 5 selectors per profile
  • each selector must be associated with exactly one namespace
  • you can also specify labels for a namespace (optional)
  • it must reference a pod execution role
  • you must specify subnet IDs (private subnets only)
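
Putting those requirements together, a profile like the one used later in this demo could be created with the AWS CLI along these lines (the cluster name, account ID, role name, and subnet IDs are placeholders):


# Illustrative only -- substitute your own cluster name, role ARN, and private subnet IDs
aws eks create-fargate-profile \
  --cluster-name <cluster_name> \
  --fargate-profile-name default \
  --pod-execution-role-arn arn:aws:iam::<account_id>:role/<pod_execution_role_name> \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --selectors namespace=default,labels={WorkerType=fargate}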

Demo:

1) Deploy EKS Fargate Cluster with a managed node

You can use this code to launch your EKS Fargate cluster:



https://github.com/chathra222/eks-fargate-example



2) Use the AWS CLI update-kubeconfig command to create or update your kubeconfig for your cluster.



aws eks --region <region-code> update-kubeconfig --name <cluster_name>



Get the nodes. You will notice that there are two Fargate nodes and one managed node.



kubectl get no
NAME                                                      STATUS   ROLES    AGE    VERSION
fargate-ip-172-16-1-180.ap-southeast-1.compute.internal   Ready    <none>   142m   v1.20.7-eks-135321
fargate-ip-172-16-1-81.ap-southeast-1.compute.internal    Ready    <none>   126m   v1.20.7-eks-135321
ip-172-16-1-192.ap-southeast-1.compute.internal           Ready    <none>   10m    v1.20.10-eks-3bcdcd



3) Install Nginx Ingress Controller in EKS Fargate



kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/aws/deploy.yaml



You will notice that the Nginx ingress controller is deployed on the managed node.



 kubectl get po -o wide -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE    IP             NODE                                              NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-7fp69        0/1     Completed   0          136m   172.16.1.28    ip-172-16-1-192.ap-southeast-1.compute.internal   <none>           <none>
ingress-nginx-admission-patch-br5qg         0/1     Completed   1          136m   172.16.1.179   ip-172-16-1-192.ap-southeast-1.compute.internal   <none>           <none>
ingress-nginx-controller-5699dc4f77-x9z4j   1/1     Running     0          15m    172.16.1.196   ip-172-16-1-192.ap-southeast-1.compute.internal   <none>           <none>


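The controller itself is exposed through a Service of type LoadBalancer, which is what provisions the Network Load Balancer. You can look up its hostname with:


kubectl get svc -n ingress-nginx ingress-nginx-controller


The EXTERNAL-IP column shows the NLB hostname that the ingress created below will also report as its address.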

In step 1, I created a Fargate cluster with a Fargate profile called default, whose selector matches the default namespace with the label WorkerType=fargate.


Let's create a pod called test with the label WorkerType=fargate in the default namespace.

test.yaml



apiVersion: v1
kind: Pod
metadata:
  labels:
    WorkerType: fargate
  name: test
spec:
  containers:
  - image: frjaraur/non-root-nginx
    name: test



kubectl apply -f test.yaml

You will notice that it has been scheduled on a Fargate node.



kubectl get po -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP             NODE                                                      NOMINATED NODE   READINESS GATES
test   1/1     Running   0          6m25s   172.16.3.209   fargate-ip-172-16-3-209.ap-southeast-1.compute.internal   <none>           <none>



But if you run a pod that doesn't match the Fargate profile's selectors, it will not run on Fargate nodes.

nomatchpod.yaml



apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nomatchpod
  name: nomatchpod
spec:
  containers:
  - image: nginx
    name: nginx



kubectl apply -f nomatchpod.yaml

You will notice that nomatchpod didn't run on a Fargate node because it didn't match the criteria defined in the Fargate profile.



 kubectl get po -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP             NODE                                                      NOMINATED NODE   READINESS GATES
nomatchpod   1/1     Running   0          23s   172.16.1.41    ip-172-16-1-192.ap-southeast-1.compute.internal           <none>           <none>
test         1/1     Running   0          15m   172.16.3.209   fargate-ip-172-16-3-209.ap-southeast-1.compute.internal   <none>           <none>



Let's expose the test pod as a service called testsvc:



kubectl expose po test --name=testsvc --port=80
service/testsvc exposed



Then create an ingress resource, ingress.yaml:



apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx # make sure to add this
spec:
  rules:
    - http:
        paths:
          - path: /test
            pathType: Prefix
            backend:
              service:
                name: testsvc
                port:
                  number: 80






kubectl apply -f ingress.yaml
ingress.networking.k8s.io/minimal-ingress created




After a short while, the ingress picks up the NLB hostname as its address:

kubectl get ingress
NAME              CLASS    HOSTS   ADDRESS                                                                              PORTS   AGE
minimal-ingress   <none>   *       a4c206981b7d14678bf6be5911d8223a-2122040efbe82462.elb.ap-southeast-1.amazonaws.com   80      39m



You can now access the service using http://<nlb_hostname>/test.
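
For example (the hostname is the ADDRESS value reported by kubectl get ingress):


curl http://<nlb_hostname>/test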


This is how you can use the Nginx Ingress Controller in an EKS Fargate cluster.
