DEV Community

Victor Manuel Pinzon


Serverless on Kubernetes with KNative and EKS

For a long time, Serverless has been presented as the definitive solution for all your scalability problems: you upload your code, the provider does the rest, and you only pay for what you use. The most common serverless offering is AWS Lambda, one of the early drivers of this model.

However, there are cases where we cannot afford to lock into a specific provider, yet we still need to benefit from the serverless model.

When we start a new project, there is always a common question: "Do we use Kubernetes or Serverless?". This post tries to offer a new question: "How about serverless on Kubernetes?"

What's the deal with Serverless?

There is a common misconception that serverless means "no servers", but that couldn't be further from the truth. It simply means that we transfer the operational workload to the provider: the developer focuses only on business logic, while the provider takes care of scaling, routing, and availability.

When we talk about Serverless, we talk about:

  • Scaling on demand.
  • Automatic scale-to-zero when there is no demand.
  • A developer-centered experience.
  • Resource optimization.

There is always a catch, though. To transfer the operational workload to a provider, you must lock in to a specific technology, and lock-in does not always have a happy ending.

Why implement Serverless on Kubernetes?

Kubernetes has become the de facto standard for deploying data-intensive applications. Not just for containers, but for the whole ecosystem around them: observability, security, networking, policies, and governance.

K8s (for short) is a strategic investment. Everything ends up built around it: clusters, CI/CD pipelines, security controls, operating models, and even the dev/ops teams themselves.

K8s does not offer serverless natively; it was not built for this type of workload. That does not mean, however, that we cannot tweak our cluster to work as a serverless platform without locking in to a specific provider, and this is where Knative comes in.

Knative: Serverless on Kubernetes

Knative is an abstraction layer that turns your k8s cluster into a real serverless platform.

Knative introduces the following features into your cluster:

  • Real Scale-to-zero.
  • Smart routing (canary, blue/green).
  • Automatic versioning.
  • Event Driven architecture capabilities.
  • An experience based on "deploy and execute".
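As a sketch of the smart-routing feature, the Knative Service spec below splits traffic 90/10 between two revisions for a canary rollout (the service name, image, and revision names are hypothetical, not taken from the POC):

```yaml
# Hypothetical canary rollout: 90% of traffic to the stable revision,
# 10% to the new one, which is also reachable at a dedicated "canary" URL.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-v2          # name of the new revision
    spec:
      containers:
        - image: ghcr.io/example/hello:v2
  traffic:
    - revisionName: hello-v1
      percent: 90
    - revisionName: hello-v2
      percent: 10
      tag: canary             # exposes a tagged URL for direct testing
```

Shifting the `percent` values over time gives you a progressive (canary) rollout; setting them to 0/100 in one step gives you blue/green.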

Knative Pillars:

There are three main pillars on Knative. They are in charge of turning your k8s cluster into a serverless platform.

  1. Knative Serving: The component in charge of loading and executing your serverless applications on Kubernetes. This layer abstracts the native resources (Deployments, Services, Ingress) and lets the developer focus only on their application. It provides automatic scaling (including scale-to-zero), versioning, and endpoint exposure.
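A minimal Knative Service manifest illustrates the point: there is no Deployment, Service, or Ingress to write. The name, image, and autoscaling bounds below are assumptions for illustration:

```yaml
# A complete Knative Serving application: Knative generates the
# Deployment, Service, routes, and revisions from this one resource.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        # Scale-to-zero is the default; these annotations bound the autoscaler.
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "5"
    spec:
      containers:
        - image: ghcr.io/example/hello:latest
          ports:
            - containerPort: 8080
```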

  2. Knative Eventing: Brings event-driven capabilities to the table, allowing services to be triggered without coupling. Its main features are decoupling between producers and consumers and support for multiple event sources (HTTP, Kafka, RabbitMQ, etc.).

  3. Knative Functions: Enables FaaS (Functions as a Service) on K8s, with a workflow centered on the developer.
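The Eventing pillar's decoupling can be sketched with a Trigger: producers publish CloudEvents to a Broker, and the Trigger delivers matching events to a subscriber without either side knowing about the other (the event type and service name here are hypothetical):

```yaml
# Hypothetical Trigger: route "order created" events from the default
# Broker to the "hello" Knative Service. Producer and consumer never
# reference each other directly.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: hello-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created   # only events of this type are delivered
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: hello
```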

Proof Of Concept

I developed a Proof of Concept (POC) to show the benefits of running serverless functions on your K8s cluster. I used an Amazon EKS cluster and deployed the Knative layer on top of it.

The POC deploys a minimal API without defining Deployments, Services, or Ingress; all of these native Kubernetes objects are managed by Knative. The POC includes:

  • Automatic scaling based on HTTP traffic, including scale-to-zero.
  • Endpoint exposure to the internet using Kourier as the network controller.
  • An end-to-end flow, from creating the cluster to deploying your application.
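The Knative-on-cluster part of that flow can be sketched with the standard install commands from the Knative release artifacts (the version tag below is an assumption; pin whatever release you target, and `service.yaml` stands in for your own Knative Service manifest):

```shell
# 1. Install Knative Serving CRDs and core components
#    (knative-v1.15.0 is an assumed version, not the POC's exact one).
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.15.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.15.0/serving-core.yaml

# 2. Install Kourier and set it as Knative's networking layer.
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.15.0/kourier.yaml
kubectl patch configmap/config-network -n knative-serving \
  --type merge -p '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'

# 3. Deploy your service and check its URL and readiness.
kubectl apply -f service.yaml
kubectl get ksvc
```

Once deployed, leaving the service idle drives its pod count to zero; the first request after that triggers a cold start and scales it back up.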

Source code: Serverless with Knative

EKS

One of the most important things about this POC is that it does not use Minikube or Kind, but a real-life Amazon EKS cluster, which means:

  • The POC was built under real-life conditions, on a highly available environment with scalable nodes.
  • EKS takes responsibility for node orchestration, IAM integration, VPCs, and load balancers.
  • We use eksctl to create and configure the cluster as IaC.
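The IaC approach can be sketched with an eksctl `ClusterConfig`; the cluster name, region, and node sizes below are illustrative assumptions, not the POC's exact values:

```yaml
# Hypothetical eksctl config: a small managed node group for the POC.
# Applied with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: knative-poc
  region: us-east-1
managedNodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 2
    maxSize: 4
```

Keeping this file in the repo means the cluster itself is reproducible: the same command recreates the same environment, which is what makes the "end-to-end flow" of the POC repeatable.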

Conclusion

Knative brings the serverless model to Kubernetes, offering an abstraction layer that simplifies deployment, automatic scaling, and routing for cloud-native applications. It lets developers focus only on their application, while the platform takes care of auto-scaling, scale-to-zero, and routing. These features make Knative a building block for organizations looking for flexibility, portability, and efficiency without sacrificing their infrastructure.
