Linda Nichols


The Curious Developer's Guide to Portable Azure Functions

This article is part of #ServerlessSeptember. You'll find other helpful articles, detailed tutorials, and videos in this all-things-Serverless content collection. New articles from community members and cloud advocates are published every week from Monday to Thursday through September.   

Find out more about how Microsoft Azure enables your Serverless functions at https://docs.microsoft.com/azure/azure-functions/.   


Azure Functions allow you to execute small snippets of code, in the cloud, without concern for cloud infrastructure. These functions are triggered by several different types of event sources, making them the building blocks of an event-driven or "serverless" architecture. They're easy to write, deploy, and connect to other cloud services to create powerful applications.

Azure Functions are also open source!

But did you know they're also... portable?


The Function App runtime can run in a container. And containers are managed by Kubernetes. And Kubernetes can run just about anywhere.

Even outside of Azure.                         

Azure Functions, in Kubernetes, running outside of Azure?                         


What about the event-driven nature of "serverless" applications with Azure Functions? When Function Apps are fully managed on Azure, container instances are added (or removed) based on the number of incoming trigger events. This makes scaling to support message load nearly seamless. The Azure Functions runtime can run anywhere, but what about the scale controller?

The Horizontal Pod Autoscaler (HPA) in Kubernetes provides autoscaling based on CPU usage or other custom application metrics. To replicate the event-based scaling that we're used to with Azure Functions, the HPA needs a little help from an open-source project called KEDA.

KEDA, or Kubernetes-based Event Driven Autoscaler, does exactly as described. It extends (but doesn't duplicate) the functionality of the Horizontal Pod Autoscaler. It supports event triggers from a wide variety of sources, both inside and outside the major cloud providers. KEDA's scalers read metric values from the event source and feed them to the HPA as custom metrics, so scaling is driven by the event load.
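To make that concrete, here's a rough sketch of the ScaledObject resource KEDA works with. The func CLI generates one for you in step 8, the exact fields depend on your KEDA version, and the names below are placeholders:

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: queue-function-app
spec:
  scaleTargetRef:
    deploymentName: queue-function-app   # the Deployment that KEDA scales
  pollingInterval: 30     # how often KEDA checks the event source, in seconds
  cooldownPeriod: 300     # how long to wait before scaling back to zero, in seconds
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: myqueue                 # the storage queue to watch
        connection: AzureWebJobsStorage    # app setting that holds the connection string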


So, let's do it. Let's make portable Azure Functions.                 



The assumption is that you already have these things:

  • Azure Functions Core Tools v2 installed

  • An Azure Subscription (this is for the storage queue, not for Azure Functions)

  • A Kubernetes cluster. It can be AKS, GKE, EKS, OpenShift, Kubernetes on-prem, whatever and wherever.

  • kubectl with current-context set to your Kubernetes cluster (see the commands just after this list).
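If you're not sure which cluster kubectl is pointed at, these standard kubectl commands show and switch the current context (the context name is whatever your cluster setup created):

kubectl config current-context                        # show the cluster kubectl is currently pointed at
kubectl config use-context <your-cluster-context>     # switch to your target cluster if needed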

And now we're ready to package up our function apps and take them on the road.



1. Make a directory for the Function App that will hold your Azure Functions

mkdir functions-everywhere
cd functions-everywhere

2. Initialize the Functions directory

func init . --docker

The --docker flag creates a Dockerfile that builds a container image from a base image matching the chosen --worker-runtime.

Choose your runtime and language.
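For reference, the generated Dockerfile for the Node.js worker runtime (the runtime this demo assumes) looks roughly like this; yours may differ slightly depending on your Core Tools version:

FROM mcr.microsoft.com/azure-functions/node:2.0

# The runtime expects the function app content at this path
ENV AzureWebJobsScriptRoot=/home/site/wwwroot

# Copy the function app and install its npm dependencies
COPY . /home/site/wwwroot
RUN cd /home/site/wwwroot && npm install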

3. Create a new Azure Function and define the trigger type

func new

Use the Azure Queue Storage Trigger

You can rename the function, or just leave the default for this demo.
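If you picked JavaScript, the generated queue-triggered function (index.js) is just a few lines; it looks roughly like this sketch, logging each queue message it receives:

module.exports = async function (context, myQueueItem) {
    // myQueueItem is the dequeued message; the binding is defined in function.json
    context.log('JavaScript queue trigger function processed work item', myQueueItem);
};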

4. Create an Azure storage account and queue
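If you don't already have a storage account and queue, the Azure CLI can create them. The resource group, location, account, and queue names below are placeholders to replace with your own:

az group create --name <resource-group> --location <location>

az storage account create --name <storage-name> --resource-group <resource-group> --sku Standard_LRS

az storage queue create --name <your-queue-name> --account-name <storage-name>
# (depending on your az CLI version you may need to add --account-key or --connection-string)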

5. Update your function with the storage account names

Get the connection string for your new storage account:

az storage account show-connection-string --name <storage-name> --query connectionString

Edit the local.settings.json file in your function app, which contains the connection string settings used for local debugging. Replace the AzureWebJobsStorage value with the connection string:

local.settings.json

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=yourstorageaccount;AccountKey=shhhh==="
  }
}

Now, open the function.json file and set the connection setting value to AzureWebJobsStorage. This tells the function to pull the connection string from the AzureWebJobsStorage key we set above.

function.json

{
  "bindings": [
    {
      "name": "myQueueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "<your-queue-name>",
      "connection": "AzureWebJobsStorage"
    }
  ]
}

6. Enable the extension bundle for the storage queue binding

Ensure that host.json contains the extensions bundle to allow Azure Storage Queues binding support.

host.json

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[1.*, 2.0.0)"
  }
}

7. Install KEDA in your cluster

func kubernetes install --namespace keda

Confirm that KEDA is installed:

kubectl get customresourcedefinition

NAME                        AGE
scaledobjects.keda.k8s.io   2h

8. Deploy your Function App to Kubernetes

Note: This assumes that you have a Docker account and you've already used docker login to sign in through the CLI.

func kubernetes deploy --name <function-app-name-lowercase> --registry <your-docker-registry>

This command builds the Docker container, pushes it to the specified registry, generates a YAML file, and deploys it all to your Kubernetes cluster.

If you'd like to save a copy of the YAML deployment file, use the --dry-run flag:

func kubernetes deploy --name <function-app-name-lowercase> --registry <your-docker-registry> --dry-run > func-deployment.yml

9. See your function scaling as messages are added

To add a message to your storage queue, go to your Azure Storage account in the Azure Portal and open the Storage Explorer. Select your storage queue and add a new message.
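If you'd rather stay in the terminal, the Azure CLI can drop a message on the queue too (same placeholder names as before):

az storage message put --queue-name <your-queue-name> --content "hello, portable functions" --account-name <storage-name>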

You should initially see the deployment with 0 replicas (and no pods), since the function scales to zero when there are no messages to process.

kubectl get deploy

Note: By default, the polling interval on the ScaledObject resource is set to 30 seconds and the cooldown period to 300 seconds.
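If you want to tweak those defaults, you can inspect or edit the generated ScaledObject directly; the resource name below assumes the func CLI named it after your function app:

kubectl get scaledobject
kubectl edit scaledobject <function-app-name-lowercase>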

Watch the pods as KEDA scales the deployment up to process the messages:

kubectl get pods -w

After all messages are consumed by the function app, and the cooldown period has elapsed, the last pod should scale back down to 0.                    


Congrats! You are now using portable Azure Functions.



More Resources

Latest comments (2)

Gwyneth Peña-Siguenza

My mind is blown!

Troy Witthoeft

Loved this article! Thank you. We have a large library of Azure Functions that is experiencing exponential growth. My position has always been to lean into the advantages of your cloud provider, and now the existence of a KEDA portability model makes it easy to dismiss any of the "cloud vendor lock-in" boogeyman arguments from other architects and sales teams. With KEDA I can confidently explain that our library of functions is portable to Kubernetes. And while I'm not interested in managing Kubernetes anytime soon, the option for portability now enables our continued bold growth into Azure Functions. Thanks.