Over the past few years, I have built many applications on Azure Functions, and along the way I have learned how to use it without getting burned.
Recently, I've been doing a lot of hackathons, especially Azure Light-up events, and whenever Azure Functions comes up I try to explain this area. In short, it comes down to understanding the strengths and characteristics of Azure Functions and using them wisely.
Always prioritize the use of Bindings / Triggers
One of the best features of Azure Functions is its rich, extensible set of Bindings and Triggers, which I use constantly.
You don't need to learn the individual SDKs for services such as Azure Storage or Event Hubs to take advantage of them. Recently, I've been using CosmosDBTrigger a lot.
Reading and writing Blobs and Queues is boilerplate: the code is almost always the same, yet there is a fair amount of it, so I use Bindings / Triggers to hide it.
This keeps the focus on the actual business logic.
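As a minimal sketch of this idea (the queue name, blob path, and connection setting name are assumptions), a queue-triggered function with a Blob output binding copies a message into Blob Storage without touching either SDK directly:

```csharp
public class QueueToBlobFunction
{
    [FunctionName("QueueToBlobFunction")]
    public async Task Run(
        // The queue name and connection setting are placeholders
        [QueueTrigger("incoming-items", Connection = "StorageConnection")] string message,
        // {rand-guid} is a binding expression that generates a unique blob name
        [Blob("processed/{rand-guid}.txt", FileAccess.Write, Connection = "StorageConnection")] Stream output,
        ILogger log)
    {
        log.LogInformation("Processing queue message");

        // Only business logic lives here; the runtime handles
        // dequeueing the message and creating the blob
        using var writer = new StreamWriter(output);

        await writer.WriteAsync(message);
    }
}
```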
Keep each Function implementation small
If a single function's implementation is large, it is often difficult to ensure idempotency and handle retries, so I basically have each function do only one thing.
Durable Functions lets you build applications that span multiple Functions without worrying about calls between instances, which makes it easier to keep each Function implementation small.
Azure / azure-functions-durable-extension
Durable Task Framework extension for Azure Functions
By combining single-feature Functions with Queue Triggers and the Cosmos DB Change Feed, you can implement the application you want.
If the responsibilities of each function are not clearly defined, even deployment becomes difficult.
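For example (a sketch; the orchestration and activity names are hypothetical), a Durable Functions orchestrator can chain two small activities without any explicit queueing between them:

```csharp
public class OrderOrchestration
{
    [FunctionName("OrderOrchestrator")]
    public async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Each activity stays small and single-purpose;
        // the framework persists state between the calls
        var order = await context.CallActivityAsync<string>("ValidateOrder", context.GetInput<string>());

        await context.CallActivityAsync("SaveOrder", order);
    }

    [FunctionName("ValidateOrder")]
    public string ValidateOrder([ActivityTrigger] string input, ILogger log)
    {
        // Hypothetical validation logic
        return input.Trim();
    }

    [FunctionName("SaveOrder")]
    public void SaveOrder([ActivityTrigger] string order, ILogger log)
    {
        log.LogInformation("Saving order: {Order}", order);
    }
}
```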
Use the latest patterns in .NET
Using the latest patterns in .NET does not mean chasing the newest C# language features; it means writing code that follows the underlying framework.
Since Azure Functions v3 is based on .NET Core 3.1 / ASP.NET Core 3.1, you can write simple code by following its conventions. In particular, Dependency Injection has become a must, so the first thing I do is discard the static classes and methods that Visual Studio generates when creating a project.
A New Era of Azure Functions Development
Tatsuro Shibamura ・ Sep 17 '20
However, complex DI setups have the opposite effect, so I stick to simple dependency resolution and instance lifetime management without overdoing it.
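A simple registration in a FunctionsStartup class is usually all that is needed (a sketch; the service interface and implementation names are assumed examples):

```csharp
[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Simple lifetime management: a shared HttpClient factory
            // and a singleton service, nothing more elaborate
            builder.Services.AddHttpClient();
            builder.Services.AddSingleton<IMyService, MyService>();
        }
    }
}
```

Constructor-injected dependencies then replace the static fields that the default project template encourages.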
Design to be retryable
Because each function implements a single feature, there is less to think about when handling retries, and idempotency is easier to guarantee.
The Azure SDK has retries built in from the start, so I generally use the built-in retries as-is and add Polly or the Azure Functions Retry Policy as needed.
A quick review of the Azure Functions new feature "Retry Policy"
Tatsuro Shibamura ・ Nov 5 '20
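When the built-in retries are not enough, a small Polly policy around a single call is usually sufficient (a sketch; the downstream call being wrapped is hypothetical):

```csharp
// Retry up to 3 times with exponential backoff (2s, 4s, 8s)
private static readonly IAsyncPolicy RetryPolicy = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

public async Task<string> CallExternalApiAsync(HttpClient httpClient)
{
    // Hypothetical downstream call wrapped in the retry policy
    return await RetryPolicy.ExecuteAsync(
        () => httpClient.GetStringAsync("https://example.com/api/items"));
}
```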
One implementation I try to avoid these days is a single function writing to multiple data stores. In such cases, use Event Grid for Blobs to detect writes in an event-driven way and perform the subsequent writes in separate functions.
Recently, using the Cosmos DB Change Feed, I often have the main function write to Cosmos DB while the SQL DB and Blob writes are implemented as separate functions.
This keeps each function's processing simple and makes retries easier.
The following is an example of using Cosmos DB Change Feed to write from a single data source to SQL DB and Blob Storage.
// POCO written to Cosmos DB (definition assumed, as it was not shown;
// the lowercase "id" property is required by Cosmos DB)
public class EventData
{
    [JsonProperty("id")]
    public string Id { get; set; }
}

// HTTP-triggered writer Function
public class Function1
{
    [FunctionName("Function1")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [CosmosDB(
            databaseName: "TestDB",
            collectionName: "Events",
            ConnectionStringSetting = "CosmosConnectionString")] IAsyncCollector<EventData> collector,
        ILogger log)
    {
        // Process using Function input and write to Cosmos DB
        await collector.AddAsync(new EventData
        {
            Id = Guid.NewGuid().ToString()
        });

        return new OkResult();
    }
}

// SQL DB Writer Function
public class Function2
{
    public Function2(AppDbContext appDbContext)
    {
        _appDbContext = appDbContext;
    }

    private readonly AppDbContext _appDbContext;

    [FixedDelayRetry(-1, "00:00:10")]
    [FunctionName(nameof(Function2))]
    public async Task Run(
        [CosmosDBTrigger(
            databaseName: "TestDB",
            collectionName: "Events",
            ConnectionStringSetting = "CosmosConnectionString",
            LeaseCollectionName = "leases",
            LeaseCollectionPrefix = nameof(Function2))]
        JArray input,
        ILogger log)
    {
        var items = input.ToObject<EventData[]>();

        if (items != null && items.Length > 0)
        {
            // Write the data received from the Change Feed to SQL DB
            await _appDbContext.EventData.AddRangeAsync(items);
            await _appDbContext.SaveChangesAsync();
        }
    }
}

// Blob Writer Function
public class Function3
{
    [FixedDelayRetry(-1, "00:00:10")]
    [FunctionName(nameof(Function3))]
    public async Task Run(
        [CosmosDBTrigger(
            databaseName: "TestDB",
            collectionName: "Events",
            ConnectionStringSetting = "CosmosConnectionString",
            LeaseCollectionName = "leases",
            LeaseCollectionPrefix = nameof(Function3))]
        JArray input,
        [Blob("workitems/sample.json", FileAccess.Write)] CloudAppendBlob appendBlob,
        ILogger log)
    {
        var items = input.ToObject<EventData[]>();

        if (items != null && items.Length > 0)
        {
            log.LogInformation("Documents modified " + items.Length);
            log.LogInformation("First document Id " + items[0].Id);

            // Write the data received from the Change Feed to an Append Blob
            await appendBlob.AppendTextAsync(JsonConvert.SerializeObject(items));
        }
    }
}
Since each Function does only one thing, the implementation is easy to understand, and you will find that error handling and retry handling are simple.
Combining Azure Functions with Cosmos DB Change Feed is an easy way to implement a lambda architecture. Both services are highly scalable, making them an excellent choice.
Write code to support Graceful Shutdown
In the past, before deploying to a running Azure Functions app, I would check in Application Insights that no processing was in flight. Since Graceful Shutdown started working correctly a while ago, however, I implement support for it instead, and I can deploy without worrying about timing.
Best Practices for Graceful shutdown in Azure Functions
Tatsuro Shibamura ・ Feb 4 '21
Some people may think Graceful Shutdown is unnecessary if you use Deployment Slots, but a Deployment Slot in App Service / Azure Functions restarts the non-production slot, waits for warm-up to complete, and then swaps the routing.
Since there is no guarantee that in-flight processing finishes before the swap, supporting Graceful Shutdown is important even when deploying with Deployment Slots.
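In practice this mostly means accepting a CancellationToken in the function signature and passing it to long-running calls (a sketch; the queue name is a placeholder):

```csharp
public class Function4
{
    [FunctionName(nameof(Function4))]
    public async Task Run(
        [QueueTrigger("work-items")] string message,
        ILogger log,
        CancellationToken cancellationToken)
    {
        // The runtime signals this token when the host is shutting down,
        // so in-flight work stops at a safe point and the message is retried later
        await Task.Delay(TimeSpan.FromSeconds(30), cancellationToken);

        log.LogInformation("Completed processing");
    }
}
```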
Separate the Function App project by feature
To be precise, the correct unit for splitting a Function App project is the unit of scaling and deployment, but since you can rarely see that far at the beginning, I start by splitting by feature.
Sometimes I see applications that implement a large number of functions in one Function App, but if the only reason for grouping them is cost, it is worth reconsidering.
Keep in mind that App Service / Azure Functions can only be scaled per App Service Plan.
Set up a CI / CD pipeline early on
It goes without saying that, except for very small applications, you usually end up with more than one Function App. Since deploying them manually from Visual Studio is not practical, automate deployment with GitHub Actions or Azure Pipelines.
- Use GitHub Actions to make code updates in Azure Functions | Microsoft Docs
- Continuously update function app code using Azure DevOps | Microsoft Docs
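A minimal GitHub Actions workflow looks roughly like this (the app name and the publish-profile secret name are placeholders you would replace with your own):

```yaml
name: Deploy Function App

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Setup .NET
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: 3.1.x

      - name: Build
        run: dotnet publish -c Release -o ./output

      - name: Deploy to Azure Functions
        uses: Azure/functions-action@v1
        with:
          app-name: my-function-app
          package: ./output
          publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
```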
If you need a more practical example, please refer to the Azure Functions Boilerplate repository that I created.
shibayan / azure-functions-boilerplate
A boilerplate project for getting started with Azure Functions v4
Deployment itself should not be a problem, since ready-made Actions and Tasks are provided and it is very easy. In recent years I have even built a CI/CD pipeline in the middle of a hackathon.
Hope this helps your Azure Serverless life!