
Michael Levan


KEDA On Azure Kubernetes Service (AKS)

As Managed Kubernetes Service offerings like AKS, EKS, and GKE continue to grow, you're going to see more and more available addons and third-party services. These addon services can be anything from Service Mesh solutions to security solutions.

The most recent addon that AKS has added is Kubernetes Event-Driven Autoscaling (KEDA).

In this blog post, you'll learn what KEDA is, see an example of how to use it, and learn how to install it both with and without the addon.

What Is KEDA?

In the world of performance and resource optimization, we typically see two focus points in the Kubernetes landscape:

  1. Pod optimization
  2. Cluster optimization

Pod optimization is ensuring that the containers inside of the Pods have enough resources (memory and CPU) available to run the workloads as performantly as possible. This sometimes means adding more resources, or taking resources away if the containers no longer need them. As more load hits a Pod, more CPU and memory will be needed. Once that load goes away, the Pods no longer need the extra CPU and memory.
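In practice, Pod optimization comes down to the requests and limits set on each container. Below is a minimal sketch; the container name, image, and values are hypothetical, not from a real workload:

```yaml
# Hypothetical container spec: requests are what the scheduler
# guarantees the container, limits are the ceiling it can burst to.
containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
```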

Cluster optimization is about ensuring that the Worker Nodes have enough memory and CPU available to give to the Pods when they ask for it. This means adding more Worker Nodes automatically when needed, and removing them when the extra CPU and memory are no longer needed. This is the job of cluster autoscalers like Karpenter and Cluster Autoscaler.

The third piece of the resource scaling and optimization puzzle, which we don't see as often as the first two, is scaling based on a particular event. Think about it like serverless. Serverless allows you to run code based on a particular trigger from an event that occurs.

Kubernetes has a similar approach with KEDA.

KEDA scales workloads up and down based on particular events that occur. It works alongside other scalers like the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA).

💡 HPA and VPA are not the same type of autoscaling as KEDA. HPA and VPA are all about scaling when a Pod is under load and needs more resources. KEDA scales based on an event that occurs.

Keep in mind that KEDA is primarily used for event-driven applications that are containerized.

KEDA Example

Within KEDA, the primary object/kind you’ll see used for Kubernetes is the ScaledObject.

In the example below, KEDA scales a workload automatically based on metrics that come from Prometheus. The trigger fires based on a Prometheus query with a threshold of 50. Much like with HPA, you can set the minimum and maximum number of Pods that can run.

KEDA also has a “scale to zero” option which allows the workload to drop to 0 Pods if the metric value that’s defined is below the threshold.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: prometheustest
spec:
  scaleTargetRef:
    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    name: promtest
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://localhost:9090
        metricName: http_request_total
        query: envoy_cluster_upstream_rq{appId="300", cluster_name="aksenvironment01", container="test", response_code="200"}
        threshold: "50"
  idleReplicaCount: 0
  minReplicaCount: 2
  maxReplicaCount: 6
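Under the hood, KEDA hands the trigger metric to an HPA that targets an average value, so the replica count works out to roughly ceil(metricValue / threshold), clamped between the min and max counts. A rough sketch of that math in shell, using a made-up metric reading of 120 against the threshold of 50 above:

```shell
# Hypothetical current value of the Prometheus query
metric=120
threshold=50
min_replicas=2
max_replicas=6

# Ceiling division: ceil(metric / threshold)
desired=$(( (metric + threshold - 1) / threshold ))

# Clamp between minReplicaCount and maxReplicaCount
if (( desired < min_replicas )); then desired=$min_replicas; fi
if (( desired > max_replicas )); then desired=$max_replicas; fi

echo "$desired"   # prints 3
```

With a metric of 120 and a threshold of 50, KEDA would ask for 3 replicas; at 400 it would cap out at the max of 6.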

You can also use KEDA for triggers that aren’t Kubernetes objects at all.

For example, below is a trigger for Azure Blob Storage. The event source has nothing to do with Kubernetes, but you can still manage the scaling via Kubernetes (hence Kubernetes becoming the underlying platform of choice).

triggers:
- type: azure-blob
  metadata:
    blobContainerName: functions-blob
    blobCount: '5'
    activationBlobCount: '50'
    connectionFromEnv: STORAGE_CONNECTIONSTRING_ENV_NAME
    accountName: storage-account-name
    blobPrefix: myprefix
    blobDelimiter: /example
    cloud: Private
    endpointSuffix: blob.core.airgap.example # Required when cloud=Private
    recursive: false
    globPattern: glob-pattern
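As with the Prometheus example, a trigger like this lives inside a ScaledObject that points at the workload to scale. A minimal sketch, where the Deployment name blob-processor and the replica counts are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: blob-scaler
spec:
  scaleTargetRef:
    # Hypothetical Deployment that processes the blobs
    name: blob-processor
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: azure-blob
      metadata:
        blobContainerName: functions-blob
        blobCount: '5'
        connectionFromEnv: STORAGE_CONNECTIONSTRING_ENV_NAME
        accountName: storage-account-name
```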

Implementing KEDA Without The Addon

KEDA isn't a solution that you can only install on AKS, nor is it only available via an addon. You can also install it via Helm or Kubernetes Manifests. Although this blog post is focused on the KEDA addon, it’s good to know that you can use it outside of AKS and without the addon.

Below is the configuration to deploy via Helm.

First, add the KEDA repo.

helm repo add kedacore https://kedacore.github.io/charts

Next, ensure that the chart repo is updated.

helm repo update

Lastly, install KEDA in its own Namespace.

helm install keda kedacore/keda --namespace keda --create-namespace

Implementing KEDA With The AKS Addon

As mentioned at the opening of this blog post, many Managed Kubernetes Service providers are creating ways to use third-party tools and addons directly within their environments instead of making you go out and install them some other way.

For the KEDA AKS addon, there are three options:

  1. Enabling it within the Azure Portal
  2. Creating a new cluster with KEDA enabled in the commands
  3. Updating an existing cluster to enable KEDA in the commands

Azure Portal

To use the GUI, go to your existing AKS cluster and under Settings, choose the Application scaling option.

Click the + Create button.


You’ll now see the “Scale with KEDA” screen and you can set up KEDA from here.


New Cluster

If you decide that you want to create a new cluster instead of using an existing one, you can do that as well.

As you’re creating the cluster, all you have to do is use the --enable-keda flag to enable KEDA.

Below is an example.

az aks create --resource-group your_rg_name \
--name your_new_cluster_name \
--enable-keda

Existing Cluster

Like the az aks create command, there’s an update subcommand that’s very similar. All you have to do is use the same --enable-keda flag.

az aks update --resource-group your_rg_name \
--name your_existing_cluster_name \
--enable-keda
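Either way, you can check that the addon is active by querying the cluster's workload autoscaler profile (assuming the same resource group and cluster names as above; this requires a live cluster, so it's shown as a sketch):

```shell
# Returns "true" when the KEDA addon is enabled on the cluster
az aks show --resource-group your_rg_name \
--name your_existing_cluster_name \
--query "workloadAutoScalerProfile.keda.enabled"
```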

Closing Thoughts

When you think about your resource optimization, performance optimization, and overall scaling strategy, KEDA is a great fit if you need to perform event-driven scaling via particular triggers.
