A lot of technical terms in the title of the article, right? Let's dive into it!
Why Azure Function App?
That is hardly a serious question! Azure Function App is a fantastic serverless offering that allows you to execute your code within a secure environment.
What if you need to perform a maintenance task on a schedule? Or execute a PowerShell script within your Azure subscription to perform a specific operation?
Azure Function App can do it for you!
There are challenges, of course. This article explains how to overcome them and have an automated pipeline that creates infrastructure and then builds and deploys Azure Function to the correct place!
Why private VNET?
That is the weirdest part: some companies prohibit the use of any Azure service with a publicly accessible interface!
There will be more on that later in this article.
Why Flex Consumption?
Microsoft provides several plans for Azure Function Apps, and they are all reasonable.
If you have heavy traffic and a lot of executions, your choice is the Premium Plan: about $200 per month...
But what if you need to execute your function only a few times per day? The classic Consumption plan would be cheap enough, but it does not support VNET integration.
So, with all the options on the table, we don't really have a choice. Only Flex Consumption costs about $3 a month (with fewer than 1000 executions, remember?), and a Flex Consumption Plan-based app can be integrated into a VNET.
2024.12 UPD: unfortunately, it is not $3 but about $30 per month. Here is my follow-up post about it: Flex Consumption is not cheap (when in a private VNET)
Some technical internals of Azure Function App
To explain the challenge, I need to surface several technical details. Azure Function App is not exactly a separate service.
A typical Azure Function App relies on other Azure services. Some of them are optional, and some are mandatory.
Having an Azure Storage Account is a must: the Azure Function App uses the Storage Account to store the function code. However, a Storage Account is a publicly accessible service by default!
The Function App must be able to access the Storage Account's public endpoint (the green circle on the next schema). But that means absolutely everyone (who has the permissions, of course) can access it too!
If we need to comply with the company's policy, we must turn off that green public interface of the Storage Account.
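In Bicep, turning off the public interface is a single property on the storage account resource. Here is a minimal sketch (the account name and SKU are placeholders, not taken from the project):

```bicep
// Hypothetical storage account with its public endpoint turned off.
resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'st0rag3acc0unt'              // placeholder name
  location: resourceGroup().location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
  properties: {
    publicNetworkAccess: 'Disabled'   // no traffic from the public internet
    allowBlobPublicAccess: false      // no anonymous blob access either
  }
}
```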
But how does the Azure Function App reach the Storage Account for the function code?
We should use a private VNET! That means a lot of things:
- we should create Storage Account private endpoints for both Blob and File services;
- we should integrate Azure Function App with one of the subnets in private VNET;
- we should create a private DNS zone and DNS records that allow the Azure Function App to find the private IP addresses of the Storage Account services;
- we should provide Azure Function App with the correct DEPLOYMENT_STORAGE_CONNECTION_STRING;
- we should allow Azure Function App identity to access the Storage Account.
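The blob-side plumbing from the list above can be sketched in Bicep roughly like this (names are placeholders, and the `vnet`, `endpointSubnet`, and `storage` resources are assumed to be defined elsewhere; the file service needs the same trio with "file" in place of "blob"):

```bicep
// Private DNS zone that resolves blob names to private IPs inside the VNET.
resource blobDnsZone 'Microsoft.Network/privateDnsZones@2020-06-01' = {
  name: 'privatelink.blob.core.windows.net'
  location: 'global'
}

// Link the zone to the VNET so the Function App's subnet can use it.
resource blobDnsLink 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2020-06-01' = {
  parent: blobDnsZone
  name: 'blob-vnet-link'
  location: 'global'
  properties: {
    virtualNetwork: { id: vnet.id }    // 'vnet' is assumed to exist elsewhere
    registrationEnabled: false
  }
}

// Private endpoint for the blob service of the storage account.
resource blobPrivateEndpoint 'Microsoft.Network/privateEndpoints@2023-04-01' = {
  name: 'pe-blob'
  location: resourceGroup().location
  properties: {
    subnet: { id: endpointSubnet.id }  // 'endpointSubnet' assumed defined
    privateLinkServiceConnections: [
      {
        name: 'blob-connection'
        properties: {
          privateLinkServiceId: storage.id  // 'storage' assumed defined
          groupIds: [ 'blob' ]
        }
      }
    ]
  }
}
```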
You can find more details in the official Microsoft document How to use a secured storage account with Azure Functions
GitLab pipeline
That is a lot of technical work for every Azure Function App deployment... So, it must be automated!
Here is my GitLab project that deploys an Azure Function App based on the Flex Consumption plan: azure-function-app
The pipeline first creates the infrastructure according to all the steps mentioned above. Then, the pipeline deploys the code of the Azure Function to the infrastructure created in the first step.
The same pipeline supports two environments: DEV and PROD. The plan is that all changes pushed to the dev or dev_bicep branches go to the DEV environment, while any changes made to the main branch are then sent automatically to both the DEV and PROD environments.
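That branch-to-environment mapping can be expressed with ordinary GitLab CI rules. A hedged sketch (the job names and deploy script are placeholders, not the project's actual pipeline definition):

```yaml
# Hypothetical deploy jobs illustrating the branch → environment mapping.
deploy-dev:
  stage: deploy
  script: ./deploy.sh dev            # placeholder deploy script
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev"'
    - if: '$CI_COMMIT_BRANCH == "dev_bicep"'
    - if: '$CI_COMMIT_BRANCH == "main"'

deploy-prod:
  stage: deploy
  script: ./deploy.sh prod
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```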
You can find more details in the pipeline definition and the project's README.md file.
BTW, this pipeline does not use secrets. The authentication is based on OIDC (OpenID Connect). I have written an article that provides a detailed explanation of this method: GitLab, Azure, OpenTofu, and NO secrets!.
Why was this article written?
I tried to automate the Azure Function App (Flex Consumption Plan) deployment behind firewalls using Terraform, and I failed.
As of November 2024, Terraform providers don't support subnet integration with the Azure Function App based on the Flex Consumption Plan.
You can find my failing Terraform-based deployment in the dev branch of the azure-function-app project.
I really prefer to use Terraform, and as soon as it supports Flex Consumption network integration, I will try to revive this project.
So, Terraform fails. But you can set up that Azure Function App configuration manually via the Portal, which implies that either an ARM template or Bicep should work properly. I took Microsoft's official Bicep template, modified it to support network integration, and here it is, up and running.
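For orientation, the Flex-Consumption-specific pieces of the site resource look roughly like this. This is a sketch based on the public resource schema, not a verbatim excerpt from the template; the `flexPlan`, `appSubnet`, and `storage` references, the app name, and the scale numbers are placeholders:

```bicep
resource functionApp 'Microsoft.Web/sites@2023-12-01' = {
  name: 'my-flex-function'               // placeholder name
  location: resourceGroup().location
  kind: 'functionapp,linux'
  identity: { type: 'SystemAssigned' }
  properties: {
    serverFarmId: flexPlan.id            // plan with SKU 'FC1', assumed defined
    virtualNetworkSubnetId: appSubnet.id // the subnet the app integrates with
    functionAppConfig: {
      deployment: {
        storage: {
          type: 'blobContainer'
          // container on the (private) storage account holding the code package
          value: '${storage.properties.primaryEndpoints.blob}app-package'
          authentication: { type: 'SystemAssignedIdentity' }
        }
      }
      runtime: { name: 'powershell', version: '7.4' }
      scaleAndConcurrency: { maximumInstanceCount: 40, instanceMemoryMB: 2048 }
    }
  }
}
```

Note that `virtualNetworkSubnetId` on the site resource is exactly the piece the Terraform provider could not set for Flex Consumption at the time of writing.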
I spent so many hours setting everything up that I decided to share my findings and a working example.
Questions, suggestions...
As you may have noticed, there are numerous areas for improvement. Or I could have made a mistake that I simply overlooked.
Please feel free to ask questions or share your suggestions. In fact, I need your feedback to understand where to dig next!
Comments
I have two questions regarding Azure Function App access to a storage account using a system-assigned managed identity and RBAC:
Public Storage Account: can the function code access it using the managed identity and RBAC assignments alone?
Private VNet and Storage Account: does the function code have to do anything differently when the Storage Account is private?
Essentially, I'm trying to understand the scope of the system-assigned managed identity's access from within the function app's execution environment.
These are pretty good questions!
From the Azure Function code perspective, it does not matter which storage account you use. The access operations will be the same.
Here is a link to the PowerShell code in the repo
To perform a login, just use the "-Identity" flag, which will pick up the managed identity.
So, the first answer is definitely yes.
The second one...
There are complications related to the DNS name of the storage account. Microsoft uses the ".blob.core.windows.net" DNS zone for the blob service of publicly available storage accounts, ".file.core.windows.net" for the file service, and so on.
Your request to st0rag3acc0unt.blob.core.windows.net will be resolved to a publicly accessible IP address via public DNS.
When you make your storage account private, you will have to create a private endpoint for every service in use: blob, file, queue, table...
And then, you will have to create a private DNS zone for every service and add a record for the storage account name to that zone.
Basically, your record in the private DNS zone should resolve the same name, "st0rag3acc0unt.blob.core.windows.net" to the private IP address of the private endpoint.
All of that means your function app code will not have any clue that it's working with a private service. And that is exactly how the function app works in my example.
Therefore, the answer is no, it does not. (However, to make it work smoothly, you will have to put some effort into building that DNS and network infrastructure.)
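In Bicep, that private resolution boils down to an A record inside the privatelink zone. A sketch with placeholder names (the zone and the IP address are assumptions, and the private endpoint's IP would normally be wired in automatically via a DNS zone group rather than hard-coded):

```bicep
// Resolves st0rag3acc0unt.blob.core.windows.net to the endpoint's private IP
// when queried from inside the linked VNET.
resource blobRecord 'Microsoft.Network/privateDnsZones/A@2020-06-01' = {
  parent: blobDnsZone        // zone 'privatelink.blob.core.windows.net', assumed defined
  name: 'st0rag3acc0unt'
  properties: {
    ttl: 3600
    aRecords: [
      { ipv4Address: '10.0.1.5' }   // placeholder: the private endpoint's IP
    ]
  }
}
```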
Hello, we are currently deploying this with Terraform using azapi_resource. We are able to deploy it, but the function app is not working: every function run returns "500 Internal Server Error", and we don't have any good logs in App Insights. I noticed in your blog you never talked about securing the function app itself with a private endpoint, which we are also doing. Any suggestions on our issues? We have the storage account secured, and the user-assigned identity is also set up with RBAC assignments to the storage.
Hi Ron.
It is hard to diagnose based on what you said. I have seen 500 Internal Server Errors many times; most of the time, it happened when I deployed a Function App integrated with a VNET.
BTW. That is the reason I gave up on Terraform and used Bicep instead.
In your case... I believe the Function App does not have connectivity with the storage account. It might be because the Function App is NOT VNET-integrated, or because there are no correct DNS zones to resolve the names from the Function App configuration to the specific Storage Account endpoints.
So, the Function App fails with the 500 error because crucial info about the functions is not available.
BTW. You can check the Storage Account activity log and see if the app can reach it.
Unfortunately, that is all I can assume.