<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arthur Ávila</title>
    <description>The latest articles on DEV Community by Arthur Ávila (@arthuravila26).</description>
    <link>https://dev.to/arthuravila26</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F341899%2Fd6ba7491-09b8-45cf-b6c9-b8b244a4364e.jpeg</url>
      <title>DEV Community: Arthur Ávila</title>
      <link>https://dev.to/arthuravila26</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arthuravila26"/>
    <language>en</language>
    <item>
      <title>Azure Functions running on Kubernetes using Keda</title>
      <dc:creator>Arthur Ávila</dc:creator>
      <pubDate>Wed, 17 Feb 2021 23:53:19 +0000</pubDate>
      <link>https://dev.to/arthuravila26/azure-functions-running-on-kubernetes-using-keda-2pi5</link>
      <guid>https://dev.to/arthuravila26/azure-functions-running-on-kubernetes-using-keda-2pi5</guid>
      <description>&lt;p&gt;Have you ever heard about Keda? &lt;/p&gt;

&lt;p&gt;No? Soooo, just grab a coffee or whatever you like and follow me to the good stuff!&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Keda?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://keda.sh" rel="noopener noreferrer"&gt;Keda&lt;/a&gt; is a Kubernetes-based Event-Driven Autoscaler. It was developed by Microsoft and Red Hat and is now a Cloud Native Computing Foundation (CNCF) sandbox project.&lt;br&gt;
With Keda you can scale any container in Kubernetes based on the number of events.&lt;br&gt;
This means you can build an event-driven application that scales up when something arrives and scales back down when there is nothing there, which lowers the cost of your application when running on a cloud provider such as AWS, Azure, or GCP.&lt;br&gt;
You can use Keda to scale based on a queue in RabbitMQ, Azure ServiceBus, or Kafka, on CPU usage, on a cron schedule, on MongoDB queries, and &lt;a href="https://keda.sh/docs/2.1/scalers/" rel="noopener noreferrer"&gt;many more&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Awesome, isn't it?&lt;/em&gt; 🤓&lt;/p&gt;
&lt;h2&gt;
  
  
  Great, but how does it work?
&lt;/h2&gt;

&lt;p&gt;I will drop my sample here on &lt;a href="https://github.com/arthuravila26/python-function-servicebus-keda" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;br&gt;
In this tutorial, I will show you how simple it is to create an Azure Function using Python ❤️ and deploy it on AKS. I will also show a pipeline that deploys the function to your AKS automatically. &lt;/p&gt;

&lt;p&gt;Alright, talk is cheap, so let's code!&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To start our Python function we need a couple of things.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First of all, we need an &lt;a href="https://azure.microsoft.com/en-us/free/search/?&amp;amp;ef_id=Cj0KCQiA962BBhCzARIsAIpWEL0yJq5fIWttHFgLd9uGDa60_uvpeIwIKkM0Yp7tPV2X5MO-vgYe1IkaAmDjEALw_wcB:G:s&amp;amp;OCID=AID2100014_SEM_Cj0KCQiA962BBhCzARIsAIpWEL0yJq5fIWttHFgLd9uGDa60_uvpeIwIKkM0Yp7tPV2X5MO-vgYe1IkaAmDjEALw_wcB:G:s&amp;amp;dclid=CjgKEAiA962BBhDLtsGQrbzDjhgSJAAz72xRYG7Mk8H3qy1-MUwv68CQOOMrp4__0iXetkmGBVFayPD_BwE" rel="noopener noreferrer"&gt;Azure Subscription&lt;/a&gt; to create our AKS and the Azure ServiceBus. The free trial is just great for creating them.&lt;/li&gt;
&lt;li&gt;We are going to use the &lt;a href="https://github.com/Azure/azure-functions-core-tools" rel="noopener noreferrer"&gt;Azure Functions Core Tools&lt;/a&gt; to create, start, and run our functions.&lt;/li&gt;
&lt;li&gt;Installing &lt;a href="https://docs.docker.com/get-docker/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; and having a &lt;a href="https://hub.docker.com" rel="noopener noreferrer"&gt;DockerHub&lt;/a&gt; account is essential.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noopener noreferrer"&gt;Kubectl&lt;/a&gt; to watch our beautiful babies getting up.&lt;/li&gt;
&lt;li&gt;A repository on GitHub, GitLab, Azure DevOps Repos, etc.&lt;/li&gt;
&lt;li&gt;Optionally, access to Azure DevOps. If you want to deploy using the Azure Pipelines example, you need to log in to Azure DevOps and run the pipeline, but if you have another CI/CD pipeline or want to deploy in a different way, that's fair enough too.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Let's start!!
&lt;/h2&gt;

&lt;p&gt;First of all, we must create the project on Azure DevOps Repos, GitHub, or GitLab. You just have to configure Azure DevOps to connect to your repository so it can run the pipeline when you want to.&lt;br&gt;
After creating it, clone the project to your machine.&lt;/p&gt;

&lt;p&gt;From here I will assume that you already have an &lt;a href="https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="noopener noreferrer"&gt;AKS&lt;/a&gt; cluster created with &lt;a href="https://keda.sh/docs/2.1/deploy/" rel="noopener noreferrer"&gt;Keda 2.1&lt;/a&gt; installed, a &lt;a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-quickstart-cli" rel="noopener noreferrer"&gt;ServiceBus&lt;/a&gt; with a queue and connection string.&lt;/p&gt;
&lt;h4&gt;
  
  
  1. Starting a function
&lt;/h4&gt;

&lt;p&gt;With the Azure Functions Core Tools it's pretty easy to start a function project. We just need to run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func init . --docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's all folks! Thanks for reading...&lt;br&gt;
Just kidding  🤣&lt;/p&gt;

&lt;p&gt;After running this command, you will see something like this in your terminal&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Select a number for worker runtime:
1. dotnet
2. node
3. python
4. powershell
5. custom
Choose option:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Right here you will choose option 3. After that, our project will be scaffolded with a Dockerfile prepared to run a Python function. But it's not done yet. We still need to create our function.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Creating a function
&lt;/h4&gt;

&lt;p&gt;In step 1 we just initialized our project. Now we are going to create our function by running the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func new
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now we will choose which template we need for this function. For this tutorial, you must choose option 11.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Select a number for template:
1. Azure Blob Storage trigger
2. Azure Cosmos DB trigger
3. Durable Functions activity
4. Durable Functions HTTP starter
5. Durable Functions orchestrator
6. Azure Event Grid trigger
7. Azure Event Hub trigger
8. HTTP trigger
9. Azure Queue Storage trigger
10. RabbitMQ trigger
11. Azure Service Bus Queue trigger
12. Azure Service Bus Topic trigger
13. Timer trigger
Choose option: 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, we must choose a name for our function. You can choose the name you want.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Azure Service Bus Queue trigger
Function name: [ServiceBusQueueTrigger] 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After creating our function, we must change the file &lt;code&gt;local.settings.json&lt;/code&gt; and include the ServiceBus connection string, like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "&amp;lt;service-bus-connection&amp;gt;"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
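&lt;p&gt;A side note: scripts running outside the Functions host don't read &lt;code&gt;local.settings.json&lt;/code&gt; automatically. A minimal sketch of a helper that loads its values into the environment (the helper name is mine, not part of the sample):&lt;/p&gt;

```python
import json
import os


def load_local_settings(path="local.settings.json"):
    """Copy the "Values" section of an Azure Functions local.settings.json
    into os.environ so scripts outside the Functions host can see them."""
    with open(path) as f:
        settings = json.load(f)
    values = settings.get("Values", {})
    for key, value in values.items():
        # Don't clobber variables that are already set in the real environment
        os.environ.setdefault(key, value)
    return values
```

&lt;p&gt;With this in place, a local test script can call &lt;code&gt;load_local_settings()&lt;/code&gt; instead of requiring manual exports.&lt;/p&gt;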



&lt;p&gt;And in the &lt;code&gt;function.json&lt;/code&gt; inside the folder that was created for our function, we need to include the queue name as well&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "msg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "&amp;lt;queue-name&amp;gt;",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
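&lt;p&gt;For reference, the &lt;code&gt;__init__.py&lt;/code&gt; that &lt;code&gt;func new&lt;/code&gt; generates next to that &lt;code&gt;function.json&lt;/code&gt; looks roughly like this (it needs the &lt;code&gt;azure-functions&lt;/code&gt; package, and your real handler will of course do more than log the message):&lt;/p&gt;

```python
import logging

import azure.functions as func


def main(msg: func.ServiceBusMessage):
    # The "msg" parameter name matches the binding name declared in function.json
    logging.info("Python ServiceBus queue trigger processed message: %s",
                 msg.get_body().decode("utf-8"))
```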



&lt;p&gt;And for now, our function is ready!! 🎉🎊&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Testing it locally
&lt;/h4&gt;

&lt;p&gt;To run our function, just run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and it will start. To test it, I created a Python script that is in my GitHub sample linked above, but it's simple to do and I'll show you.&lt;br&gt;
We need to export &lt;em&gt;AzureWebJobsStorage&lt;/em&gt; and &lt;em&gt;QUEUE_NAME&lt;/em&gt; in our OS like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AzureWebJobsStorage='&amp;lt;service-bus-connection&amp;gt;'
export QUEUE_NAME=&amp;lt;queue-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With those exported, you can create or use the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import sys
import time
from logger import logger
from azure.servicebus import ServiceBusClient, ServiceBusMessage

connection_string = os.environ['AzureWebJobsStorage']
queue_name = os.environ['QUEUE_NAME']

queue = ServiceBusClient.from_connection_string(conn_str=connection_string, queue_name=queue_name)


def send_a_list_of_messages(sender):
    messages = [ServiceBusMessage("Message in list") for _ in range(100)]
    sender.send_messages(messages)
    logger.info("Sent a list of 100 messages")


with queue:
    sender = queue.get_queue_sender(queue_name=queue_name)
    with sender:
        send_a_list_of_messages(sender)

logger.info("Done sending messages")
logger.info("-----------------------")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script will send 100 events to your ServiceBus queue, and your function running locally will consume all of them.&lt;/p&gt;

&lt;p&gt;Alright, so far we have created our function and tested it! Now we need to deploy it to our AKS. As I wrote above, I will use an azure-pipelines.yml to deploy.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Manifests files and deploy pipeline
&lt;/h4&gt;

&lt;p&gt;In this project I created a folder called &lt;code&gt;manifests&lt;/code&gt; containing 2 files: the &lt;code&gt;deployment.yml&lt;/code&gt; that we are going to use to configure our pod on AKS, and the &lt;code&gt;scaledobject.yml&lt;/code&gt;, the configuration file that Keda uses to understand when to scale the application. Let's see what is in &lt;code&gt;deployment.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion : apps/v1
kind: Deployment
metadata:
  name: &amp;lt;pod-name&amp;gt;
  namespace: &amp;lt;namespace&amp;gt;
  labels:
    app: &amp;lt;pod-name&amp;gt;
spec:
  selector:
    matchLabels:
      app: &amp;lt;pod-name&amp;gt;
  template:
    metadata:
      labels:
        app: &amp;lt;pod-name&amp;gt;
    spec:
      containers:
        - image: arthuravila/keda-container
          name: keda-container
          ports:
          - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "250m"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a simple deployment file that I used to configure my pod. For the container I'm using my image from DockerHub. Feel free to use it or create your own; it's up to you.&lt;/p&gt;

&lt;p&gt;Now, let's see what the &lt;code&gt;scaledobject.yml&lt;/code&gt; looks like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: &amp;lt;pod-name&amp;gt;
  namespace: &amp;lt;namespace&amp;gt;
spec:
  scaleTargetRef:
    name: &amp;lt;pod-name&amp;gt;
  minReplicaCount: 0
  maxReplicaCount: 10
  pollingInterval: 1
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: &amp;lt;queue-name&amp;gt;
      messageCount: '1'
      connectionFromEnv: AzureWebJobsStorage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file is for Keda version 2.x; for version 1.x it may be different. Have a look at the &lt;a href="https://keda.sh/docs/2.1/migration/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;br&gt;
Explaining this file: &lt;code&gt;scaleTargetRef&lt;/code&gt; points at the deployment to scale, and below it we set the replica bounds. I set &lt;code&gt;minReplicaCount&lt;/code&gt; to 0 because when there is no event in the queue I assume it's not necessary to have a pod running, and a maximum of 10 replicas.&lt;br&gt;
In the triggers, I use &lt;code&gt;azure-servicebus&lt;/code&gt;. This changes when you want to scale from another service, but as this tutorial is about ServiceBus, we must use it like that.&lt;br&gt;
In the metadata we must give the name of the queue in &lt;code&gt;queueName&lt;/code&gt;. I set &lt;code&gt;messageCount&lt;/code&gt; to 1 so that when an event arrives, a pod gets up to consume it. And finally I set &lt;code&gt;connectionFromEnv&lt;/code&gt; to &lt;code&gt;AzureWebJobsStorage&lt;/code&gt;; this variable holds the connection string that we set in the file &lt;code&gt;local.settings.json&lt;/code&gt;.&lt;/p&gt;
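&lt;p&gt;To make the numbers concrete: with &lt;code&gt;messageCount: '1'&lt;/code&gt; and &lt;code&gt;maxReplicaCount: 10&lt;/code&gt;, a burst of 100 messages drives the deployment to 10 replicas. Here is a rough sketch of the target-based calculation (my own simplification of what Keda and the Kubernetes HPA do together, not Keda's actual code):&lt;/p&gt;

```python
import math


def desired_replicas(queue_length, message_count=1, min_replicas=0, max_replicas=10):
    """Aim for one replica per `message_count` pending messages,
    clamped between the configured minimum and maximum."""
    if queue_length == 0:
        # Scale to zero when the queue is empty
        return min_replicas
    return min(max_replicas, max(1, math.ceil(queue_length / message_count)))
```

&lt;p&gt;So an empty queue yields 0 replicas, a single message wakes one pod up, and the burst of 100 from our test script saturates at the 10-replica cap.&lt;/p&gt;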

&lt;p&gt;Cool, now let's talk about the pipeline. The pipeline is actually pretty simple.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trigger:
- main

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'DockerHub'
  imageRepository: 'arthuravila/keda-container'
  dockerfilePath: '**/Dockerfile'
  tag: '$(Build.BuildId)'
  imagePullSecret: 'keda-container'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build an image
      inputs:
        command: build
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - task: Docker@2
      displayName: Push an image to container registry
      inputs:
        command: push
        repository: $(imageRepository)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - upload: manifests
      artifact: manifests

- stage: DeployNONPROD
  displayName: Deploy NONPROD
  dependsOn: 
  - Build
  condition: and(succeeded(),  eq(variables['Build.SourceBranch'], 'refs/heads/main'))

  jobs:
  - deployment: Deploy
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: '&amp;lt;AKS environment&amp;gt;'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)

          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: deploy
              manifests: |
                $(Pipeline.Workspace)/manifests/deployment.yml
                $(Pipeline.Workspace)/manifests/scaledobject.yml
              imagePullSecrets: |
                $(imagePullSecret)
              containers: |
                $(containerRegistry)/$(imageRepository):$(tag)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pipeline triggers on merges to the main branch, builds an image with the Dockerfile created by the Azure Functions Core Tools, pushes this image to our Docker registry, and deploys to AKS using the &lt;code&gt;deployment.yml&lt;/code&gt; and &lt;code&gt;scaledobject.yml&lt;/code&gt; files.&lt;br&gt;
But we are not done yet; we must configure our Azure DevOps to run our pipeline.&lt;/p&gt;
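&lt;p&gt;If you'd rather not use Azure DevOps at all, the pipeline's work boils down to a few commands you can run by hand (a manual sketch, assuming Docker is logged in and kubectl points at your AKS cluster):&lt;/p&gt;

```shell
# Build and push the image produced by "func init . --docker"
# (replace arthuravila/keda-container with your own DockerHub repository)
docker build -t arthuravila/keda-container:latest .
docker push arthuravila/keda-container:latest

# Apply the same manifests the pipeline deploys; the namespace comes
# from the metadata inside each file. If your image is private, also
# create an imagePullSecret with "kubectl create secret docker-registry".
kubectl apply -f manifests/deployment.yml
kubectl apply -f manifests/scaledobject.yml
```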
&lt;h3&gt;
  
  
  5. Configuring Azure DevOps and Deploying to AKS
&lt;/h3&gt;

&lt;p&gt;To run our pipeline and deploy to our AKS, we must configure a couple of things in Azure DevOps.&lt;br&gt;
First of all, we start by configuring &lt;code&gt;Service connections&lt;/code&gt;. Here we configure our AKS connection, our repository connection (so merges to the main branch trigger the pipeline), and our Docker registry connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdanywl8sf0nlc19tt6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdanywl8sf0nlc19tt6k.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After configuring these connections, we must configure our environments. As you may have noticed, the &lt;code&gt;Deploy&lt;/code&gt; stage of the pipeline references an &lt;code&gt;environment&lt;/code&gt;. We create this environment under Pipelines and use it to deploy to the AKS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lcomfxo9lbi073ouehz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lcomfxo9lbi073ouehz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first time we run a pipeline in Azure DevOps, we must trigger it manually.&lt;br&gt;
To do it, we just go to &lt;em&gt;Pipelines&lt;/em&gt; -&amp;gt; &lt;em&gt;New pipeline&lt;/em&gt;, choose the repository where the project is, select the &lt;code&gt;azure-pipelines.yml&lt;/code&gt; file, and run the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8rzrqf3ysohsxbkwikq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8rzrqf3ysohsxbkwikq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are almost done, can you believe that?&lt;/p&gt;

&lt;p&gt;After your pipeline runs and deploys, you should see something like this&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpf3u6r2sh5dod5itzlf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpf3u6r2sh5dod5itzlf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
It means that your function has been deployed successfully!! 🎊🥳&lt;/p&gt;

&lt;p&gt;Alright, alright... Let's see it running, mate!&lt;/p&gt;

&lt;p&gt;If you run the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n &amp;lt;your-name-space&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see something like this&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsy67s9rn8jk0tseqpaz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsy67s9rn8jk0tseqpaz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
It means that your pod is scaled down because there are no events in your queue, so it's not necessary to stay up consuming resources.&lt;/p&gt;

&lt;p&gt;To watch how beautiful your serverless function looks getting up with Keda, you can run the test script shown above again.&lt;br&gt;
After running it, use the command again to see your pods&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n &amp;lt;your-name-space&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you will see all the pods getting up like this&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4c9acrv8r8ctxrdvme9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4c9acrv8r8ctxrdvme9w.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pretty nice, huh?&lt;br&gt;
This is just one example of many others you can try out using Keda. You can check more examples in Keda's GitHub &lt;a href="https://github.com/kedacore/samples" rel="noopener noreferrer"&gt;samples repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Well, I'll finish here... This was huge!&lt;br&gt;
If you have any questions or feedback, feel free to comment or contact me on &lt;a href="https://www.linkedin.com/in/arthur-%C3%A1vila-502bb889/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. This is the first article I have ever written, so any feedback is welcome!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>keda</category>
      <category>python</category>
      <category>azure</category>
    </item>
  </channel>
</rss>
