<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Eugen Podaru</title>
    <description>The latest articles on DEV Community by Eugen Podaru (@eugenpodaru).</description>
    <link>https://dev.to/eugenpodaru</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F514938%2F101d20dd-8a3c-4894-8426-333518ccef3c.jpeg</url>
      <title>DEV Community: Eugen Podaru</title>
      <link>https://dev.to/eugenpodaru</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/eugenpodaru"/>
    <language>en</language>
    <item>
      <title>Copying blob containers between storage accounts using Azure CLI</title>
      <dc:creator>Eugen Podaru</dc:creator>
      <pubDate>Sun, 29 Nov 2020 22:43:10 +0000</pubDate>
      <link>https://dev.to/eugenpodaru/copying-blob-containers-between-storage-accounts-using-azure-cli-32a</link>
      <guid>https://dev.to/eugenpodaru/copying-blob-containers-between-storage-accounts-using-azure-cli-32a</guid>
      <description>&lt;h4&gt;
  
  
  TL;DR
&lt;/h4&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
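&lt;p&gt;A hedged sketch of that command, assuming it is the preview &lt;em&gt;az storage copy&lt;/em&gt; command (account names are placeholders):&lt;/p&gt;

```shell
# Copy every blob container from the source account to the destination account.
# Preview in Azure CLI 2.15.x; flag names may differ in later releases.
az storage copy \
    --source-account-name sourceaccount \
    --account-name destinationaccount \
    --recursive
```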



&lt;p&gt;In the past few weeks, we have been migrating our platform from one Azure account to another, and I have been using this as an opportunity to update different parts of our deployment pipelines. While doing this I have become more familiar with &lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli"&gt;Azure CLI&lt;/a&gt; and soon enough I became a fan (not a literal fan, just a fan of Azure CLI).&lt;/p&gt;

&lt;p&gt;Previously we were using &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview"&gt;ARM templates&lt;/a&gt; to deploy our resource groups and the resources in them. This worked fine, but I did not particularly like them. One of the reasons is that they are way too verbose. Even if you want a resource with all the settings set to their defaults, you still need to define a boatload of JSON. Of course, you do not have to write it all from scratch. You can start from a &lt;a href="https://github.com/Azure/azure-quickstart-templates"&gt;quickstart template&lt;/a&gt; and change only what you need, but it still takes too much effort to mentally parse and understand it all. Anyway, it is so much nicer, shorter, and easier to read and write using Azure CLI.&lt;/p&gt;

&lt;p&gt;Migrating the platform also means migrating the data, not just the resources. As it happens, we have data across a wide range of services: &lt;a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview"&gt;Storage Accounts&lt;/a&gt; (in Blobs, Table Storage, File Shares and &lt;a href="https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction"&gt;Data Lake Gen 2&lt;/a&gt;), &lt;a href="https://docs.microsoft.com/en-us/azure/data-lake-store/"&gt;Data Lake Gen 1&lt;/a&gt; and &lt;a href="https://docs.microsoft.com/en-us/azure/azure-sql/"&gt;Azure SQL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A lot of the data migration tasks can be accomplished by using &lt;a href="https://docs.microsoft.com/en-us/azure/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows"&gt;Azure Storage Explorer&lt;/a&gt;, such as copying whole blob containers, tables and file shares between accounts, and copying entire file systems between data lake accounts (both Gen 1 and Gen 2). It uses &lt;a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10"&gt;AzCopy&lt;/a&gt; underneath, which uses &lt;a href="https://docs.microsoft.com/en-us/rest/api/storageservices/put-block-from-url"&gt;server-to-server APIs&lt;/a&gt; to copy data directly between accounts, so it is rather efficient.&lt;/p&gt;

&lt;p&gt;What cannot be done using Azure Storage Explorer is automating any of the above tasks. Also, blob containers, tables and file shares have to be copied one by one, which is not great if you have a lot of them in a particular storage account.&lt;/p&gt;

&lt;p&gt;As you have probably guessed from the title, you can use Azure CLI to copy all blob containers from one account to the other. There is not much to it: it is a single command, currently in preview, that I found while looking for ways to do this:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
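&lt;p&gt;A sketch of the command with connection strings; the connection-string flag names are my assumption, so verify them against the command's help:&lt;/p&gt;

```shell
# Copy all blob containers between accounts, authenticating with connection strings.
# The connection-string flags are an assumption; check "az storage copy --help".
az storage copy \
    --source-connection-string "$SOURCE_CONNECTION_STRING" \
    --connection-string "$DESTINATION_CONNECTION_STRING" \
    --recursive
```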


&lt;p&gt;Keep in mind that it is currently in preview (Azure CLI v2.15.1), so it might change in future releases. It also uses server-to-server APIs to perform the operation. You can use &lt;a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview"&gt;SAS tokens&lt;/a&gt; instead of connection strings:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
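&lt;p&gt;The same copy, sketched with SAS tokens (again, treat the exact flag names as assumptions):&lt;/p&gt;

```shell
# Copy all blob containers, authenticating with SAS tokens instead.
# Flag names are an assumption; check "az storage copy --help".
az storage copy \
    --source-account-name sourceaccount \
    --source-sas "$SOURCE_SAS_TOKEN" \
    --account-name destinationaccount \
    --sas-token "$DESTINATION_SAS_TOKEN" \
    --recursive
```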


&lt;p&gt;You can also use Azure CLI to get the account connection string or to generate a SAS token:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
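&lt;p&gt;For example (account name, resource group and expiry are placeholders):&lt;/p&gt;

```shell
# Get the connection string for a storage account.
az storage account show-connection-string \
    --name mystorageaccount \
    --resource-group my-resource-group

# Generate an account-level SAS token for the blob service,
# valid for read/write/delete/list operations until the given expiry.
az storage account generate-sas \
    --account-name mystorageaccount \
    --services b \
    --resource-types sco \
    --permissions rwdl \
    --expiry 2021-01-01
```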


&lt;p&gt;The nice thing about Azure CLI is that you can include the scripts in your &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started/what-is-azure-pipelines?view=azure-devops"&gt;Azure Pipelines&lt;/a&gt; and run them along with other DevOps tasks, either manually or on a trigger. Below is an example of a pipeline that runs every day to copy all blob containers between two storage accounts. You could use it, for example, to keep the production and acceptance environments in sync:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
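&lt;p&gt;A sketch of such a pipeline; the schedule, variable names and the inline copy command's flags are my assumptions:&lt;/p&gt;

```yaml
# Daily pipeline that copies all blob containers between two storage accounts.
schedules:
  - cron: "0 3 * * *"
    displayName: Daily blob container copy
    branches:
      include:
        - main
    always: true

trigger: none

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    displayName: Copy all blob containers
    inputs:
      azureSubscription: $(Subscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az storage copy \
          --source-connection-string "$(SourceConnectionString)" \
          --connection-string "$(DestinationConnectionString)" \
          --recursive
```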


&lt;p&gt;The pipeline assumes that you have some pipeline variables that define the account names and connection strings. The Subscription variable refers to the &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&amp;amp;tabs=yaml"&gt;Azure service connection&lt;/a&gt; to use for this script.&lt;/p&gt;

&lt;p&gt;Conveniently, you could use Azure CLI to create both the service connection and the pipeline that runs the Azure CLI task. Mind blown 🤯! You would do that using the &lt;a href="https://docs.microsoft.com/en-us/azure/devops/cli/?view=azure-devops"&gt;Azure DevOps CLI&lt;/a&gt; extension. Below are the commands; I leave it up to you to figure out how to use them:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
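&lt;p&gt;A sketch of those commands; names, IDs and the YAML path are placeholders:&lt;/p&gt;

```shell
# Add the Azure DevOps extension to Azure CLI.
az extension add --name azure-devops

# Create the Azure service connection (a service endpoint of type azurerm).
az devops service-endpoint azurerm create \
    --name MySubscription \
    --azure-rm-service-principal-id "$SERVICE_PRINCIPAL_ID" \
    --azure-rm-subscription-id "$SUBSCRIPTION_ID" \
    --azure-rm-subscription-name "My Subscription" \
    --azure-rm-tenant-id "$TENANT_ID"

# Create a pipeline from a YAML definition in the repository.
az pipelines create \
    --name copy-blob-containers \
    --repository MyRepository \
    --branch main \
    --yml-path pipelines/copy-blob-containers.yml
```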


&lt;p&gt;When talking about Azure Storage Explorer, I said that it uses AzCopy underneath. Well, AzCopy can also be used to copy all the blob containers between accounts, and it covers other interesting scenarios as well. But that is a matter for a future post!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S6AAomiN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/62j3bky9sxvrorwslkyx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S6AAomiN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/62j3bky9sxvrorwslkyx.gif" alt="That's all Folks"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
    </item>
    <item>
      <title>Dynamic Http Proxy with Azure Functions</title>
      <dc:creator>Eugen Podaru</dc:creator>
      <pubDate>Sun, 15 Nov 2020 15:04:02 +0000</pubDate>
      <link>https://dev.to/eugenpodaru/dynamic-http-proxy-with-azure-functions-n49</link>
      <guid>https://dev.to/eugenpodaru/dynamic-http-proxy-with-azure-functions-n49</guid>
      <description>&lt;p&gt;In the following article I am going to show you how to build a dynamic http proxy using Azure Functions and truly little code. If you feel like reading code rather than reading an article about code, go straight to &lt;a href="https://github.com/eugenpodaru/azure-functions-proxy" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and have it your way.&lt;/p&gt;

&lt;p&gt;You might be asking yourself what a dynamic http proxy is and why you would need one. You could think of it as a service registry that also proxies the http calls to the services it registers: a dynamic http proxy, DHP or DHTTPP for short.&lt;/p&gt;

&lt;p&gt;One possible use case for such a service is between your frontends and your backends. Regardless of how fragmented or dynamic your backend services landscape is, the DHP makes it possible for you to define a single, predictable API surface for the frontends to consume. The frontends also become simpler, by only having to know about one endpoint. You could accomplish the same using a solution like &lt;a href="https://azure.microsoft.com/en-us/services/api-management/" rel="noopener noreferrer"&gt;API Management&lt;/a&gt; of course, but while for each modified or added backend you need to reconfigure the API Management, the DHP reconfigures itself.&lt;/p&gt;

&lt;p&gt;If you have used Azure Functions before, you might have stumbled upon &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-proxies" rel="noopener noreferrer"&gt;Azure Functions Proxies&lt;/a&gt;. They are somewhat like API Management and can be used to achieve the same goal of a single API surface for all your services. But just like API Management, they are not dynamic: you would have to add every additional route and potentially redeploy the function app. I will not be using them for the DHP.&lt;/p&gt;

&lt;p&gt;The idea is simple, every discoverable service registers itself with the DHP. After registering, the service will be accessible through the proxy.&lt;/p&gt;

&lt;p&gt;Let us start with the service registration. When registering itself, a service needs to provide a name, a version and the host where it can be found. I guess a name and a host would have sufficed, but it is nice to have API versioning out of the box, and it adds little extra code to the final solution. The following code listing shows what the service registration function looks like:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
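&lt;p&gt;A rough sketch of what such a registration function might look like; everything here, from names to bindings, is my assumption, not the exact code from the repository:&lt;/p&gt;

```csharp
public static class RegisterFunction
{
    [FunctionName("Register")]
    public static async Task&lt;IActionResult&gt; Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "register")] HttpRequest request,
        [Table("services")] IAsyncCollector&lt;ServiceEntry&gt; services)
    {
        // Read the ServiceEntry payload from the body of the POST request.
        var json = await new StreamReader(request.Body).ReadToEndAsync();
        var entry = JsonConvert.DeserializeObject&lt;ServiceEntry&gt;(json);

        // Stamp the registration time before persisting.
        entry.RegisteredAt = DateTime.UtcNow;

        // Save the entry to the "services" table in Azure Table Storage.
        await services.AddAsync(entry);

        return new OkResult();
    }
}
```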


&lt;p&gt;The function is as simple as it gets. The DHP accepts service registrations at &lt;em&gt;/api/register&lt;/em&gt;, and the payload needs to be a ServiceEntry delivered in the body of the POST request. Once it gets the ServiceEntry, it simply saves it to a table in &lt;a href="https://azure.microsoft.com/en-us/services/storage/tables/" rel="noopener noreferrer"&gt;Azure Table Storage&lt;/a&gt;. I will leave service deregistration (delete) and service queries (get) up to you, since they are not mandatory for the DHP.&lt;/p&gt;

&lt;p&gt;You might have noticed that before saving to the table, we set the RegisteredAt property of the ServiceEntry to the current datetime in UTC. This can be useful in multiple ways. If your services register only once, this will of course only tell you when that happened. However, if you make your services re-register once every few minutes, then suddenly this tells you which services are up, and which are down. Nice and simple!&lt;/p&gt;

&lt;p&gt;In the following code listing you can see the ServiceEntry class:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
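&lt;p&gt;A minimal sketch of the ServiceEntry, based on the properties described in the article (the exact shape is my assumption):&lt;/p&gt;

```csharp
public class ServiceEntry : TableEntity
{
    // The service name doubles as the partition key.
    public string Name
    {
        get =&gt; PartitionKey;
        set =&gt; PartitionKey = value;
    }

    // The service version doubles as the row key.
    public string Version
    {
        get =&gt; RowKey;
        set =&gt; RowKey = value;
    }

    // The host where the service can be reached.
    public string Host { get; set; }

    // Set to the current UTC datetime on every registration.
    public DateTime RegisteredAt { get; set; }
}
```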


&lt;p&gt;As you can see, I am using the service name and version as partition and row keys, which feels very natural, doesn’t it? Without the service version we would have had to make up one of the keys, which would have been a waste.&lt;/p&gt;

&lt;p&gt;This is all for service registration; let us see what the proxy function looks like. It is just one function, shown in the following code listing:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
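&lt;p&gt;A hedged sketch of the proxy function; the route pattern, binding expressions and IProxyService shape are my assumptions:&lt;/p&gt;

```csharp
public class ProxyFunction
{
    private readonly IProxyService proxyService;

    public ProxyFunction(IProxyService proxyService) =&gt; this.proxyService = proxyService;

    [FunctionName("Proxy")]
    public Task&lt;HttpResponseMessage&gt; Run(
        [HttpTrigger(AuthorizationLevel.Anonymous,
            "get", "post", "put", "patch", "delete", "head", "options",
            Route = "{name}/{version}/{*path}")] HttpRequestMessage request,
        // Binding expressions map the route's name/version to partition/row keys.
        [Table("services", "{name}", "{version}")] ServiceEntry service,
        string path)
    {
        // Point the request at the registered host, keeping the remaining path.
        request.RequestUri = new Uri(new Uri(service.Host), path);
        request.Headers.Host = null; // let the client set the proper Host header

        return this.proxyService.SendAsync(request);
    }
}
```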


&lt;p&gt;It is a simple HTTP triggered function, that accepts all HTTP verbs. Using Azure Functions &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-expressions-patterns" rel="noopener noreferrer"&gt;binding expressions&lt;/a&gt;, it maps the partition and row keys of the &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-table-input?tabs=csharp" rel="noopener noreferrer"&gt;table input binding&lt;/a&gt; to the name and version variables extracted from the route. This way, the runtime does the job of retrieving the matched ServiceEntry for us. Next, it builds the target endpoint using the service host and the rest of the path extracted from the route, adjusts the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/system.net.http.httprequestmessage" rel="noopener noreferrer"&gt;HttpRequestMessage&lt;/a&gt; and sends it using the injected IProxyService. You can see the implementation of the IProxyService in the next code listing:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
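&lt;p&gt;A sketch of that implementation (the interface and class names are my assumptions):&lt;/p&gt;

```csharp
public class ProxyService : IProxyService
{
    private readonly HttpClient client;

    public ProxyService(HttpClient client) =&gt; this.client = client;

    public Task&lt;HttpResponseMessage&gt; SendAsync(HttpRequestMessage request)
        // Return as soon as the response headers are read; the body is streamed.
        =&gt; this.client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
}
```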


&lt;p&gt;I bet you thought this would be complicated. Here we are just wrapping the good old &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/system.net.http.httpclient" rel="noopener noreferrer"&gt;HttpClient&lt;/a&gt; to make the call to our proxied service. The only thing worth mentioning is the use of &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/system.net.http.httpcompletionoption" rel="noopener noreferrer"&gt;HttpCompletionOption.ResponseHeadersRead&lt;/a&gt;, which means the operation completes as soon as a response is available and the headers are read. Since we are proxying the response, there is no need to wait until all the content is read.&lt;/p&gt;

&lt;p&gt;As promised, little code, yet quite capable!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F62j3bky9sxvrorwslkyx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F62j3bky9sxvrorwslkyx.gif" alt="That's all Folks"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>dotnet</category>
      <category>serverless</category>
      <category>azurefunctions</category>
    </item>
    <item>
      <title>Azure Functions Startup Trigger</title>
      <dc:creator>Eugen Podaru</dc:creator>
      <pubDate>Sun, 15 Nov 2020 14:30:30 +0000</pubDate>
      <link>https://dev.to/eugenpodaru/azure-functions-startup-trigger-10gf</link>
      <guid>https://dev.to/eugenpodaru/azure-functions-startup-trigger-10gf</guid>
      <description>&lt;p&gt;Ever wanted to run some heavier initialization when your function app starts, but could not do it in the Startup class? Well, in this article I will walk you through creating a simple Azure Functions trigger that fires only once, exactly when you need it …🥁… when the runtime starts.&lt;/p&gt;

&lt;p&gt;If you'd rather see the code and skip the explanation, you can go straight to &lt;a href="https://github.com/eugenpodaru/azure-functions-extensions-startup" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and check it out. Or if you prefer to just use it, and trust me that it works, you can go to &lt;a href="https://www.nuget.org/packages/Devlight.Azure.Functions.Extensions.Startup" rel="noopener noreferrer"&gt;NuGet&lt;/a&gt; and take it from there.&lt;/p&gt;

&lt;p&gt;I went through the trouble of creating a proper library and putting it on NuGet, because I am using it in one of my work projects, and I believe it is not such an uncommon scenario. Even if you do not have a use case for this, you might still want to go through the article, since it is a great starting point for writing your own custom triggers and bindings for Azure Functions. It has all the pieces that you need to put together for such an endeavor, with little extra code.&lt;/p&gt;

&lt;p&gt;As for my use case, one of the solutions I am working on has a backend plane made of a mix of web APIs and function apps. In order to send messages between them, we use a thin wrapper on top of the Azure Service Bus. We have a single topic (one per tenant) in the Azure Service Bus namespace where all the publishers put their messages, and whoever is interested in a particular message type, can subscribe to it. Publishing messages is easy, since the topic is created, and its name is known in advance. Consuming the messages is a bit more complicated, since different services are interested in different types of messages, and each type of message has its own handler.&lt;/p&gt;

&lt;p&gt;In the web APIs we solved this by having some helper code run on startup that goes through all the relevant assemblies and gathers the message type handlers that the service defines, and then creates the corresponding subscriptions on the topic.&lt;/p&gt;

&lt;p&gt;In the function apps we could have had a function per message type. However, this was not acceptable for multiple reasons. First, the Azure Service Bus binding does not support managed identity yet. Second, the binding does not create subscriptions if they do not exist, which means that for every new message type we would have had to create all the required subscriptions in advance. And last, why take a different approach when the solution for the web APIs works so well? The only problem was that we could not run the helper code in the Startup class, since &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection" rel="noopener noreferrer"&gt;the Startup class is meant only for setup and registration&lt;/a&gt; of services for DI.&lt;/p&gt;

&lt;p&gt;I figured that I could run the helper code in a function that is triggered at startup. Even without knowing how bindings and triggers work, I knew it was possible, since the Timer trigger has the RunOnStartup setting, which does exactly that.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does it work?
&lt;/h3&gt;

&lt;p&gt;Let's start with the trigger, which is the attribute that marks the function that should run at startup. You can see the code in the following listing:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
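&lt;p&gt;A minimal sketch of such a marker attribute (the class name is taken from the article; the rest is my assumption):&lt;/p&gt;

```csharp
// The trigger is just a marker attribute applied to a function parameter.
[Binding]
[AttributeUsage(AttributeTargets.Parameter)]
public sealed class StartupTriggerAttribute : Attribute
{
}
```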


&lt;p&gt;Nothing to see here, move along!&lt;/p&gt;

&lt;p&gt;Since the trigger is to be used on a function parameter, let's look at the parameter type next. A trigger can support multiple parameter types (see the &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue-trigger?tabs=csharp#usage" rel="noopener noreferrer"&gt;QueueTrigger&lt;/a&gt; for example), but that would be overkill for this particular case:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
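&lt;p&gt;A sketch of the parameter type; the article mentions name and version, so that is what I include here (the exact property set is my guess):&lt;/p&gt;

```csharp
public class StartupInfo
{
    // Properties of the function app that may be useful in the triggered
    // function; the exact set is an assumption.
    public string Name { get; set; }
    public string Version { get; set; }
}
```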


&lt;p&gt;I added some common-sense properties of the function app to the StartupInfo that can be useful in the triggered function. In my case, I use the name and the version, for example, to namespace the subscriptions, so that multiple services can subscribe to the same message types.&lt;/p&gt;

&lt;p&gt;And with that we are done with the visible parts of the trigger. Next, let's see how to register the trigger with the runtime.&lt;/p&gt;

&lt;p&gt;When the runtime starts, it scans all the linked assemblies for the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.hosting.webjobsstartupattribute?view=azure-dotnet" rel="noopener noreferrer"&gt;WebJobsStartupAttribute&lt;/a&gt; assembly attribute which points to a class that implements the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.hosting.iwebjobsstartup?view=azure-dotnet" rel="noopener noreferrer"&gt;IWebJobsStartup&lt;/a&gt; interface. Once it gathers all these implementations, it calls Configure on each.&lt;/p&gt;

&lt;p&gt;This is exactly how the Startup class is discovered as well, since &lt;a href="https://github.com/Azure/azure-functions-dotnet-extensions/blob/main/src/Extensions/DependencyInjection/FunctionsStartupAttribute.cs" rel="noopener noreferrer"&gt;FunctionsStartupAttribute&lt;/a&gt; actually inherits from WebJobsStartupAttribute and &lt;a href="https://github.com/Azure/azure-functions-dotnet-extensions/blob/main/src/Extensions/DependencyInjection/FunctionsStartup.cs" rel="noopener noreferrer"&gt;FunctionsStartup&lt;/a&gt; is an abstract class that implements IWebJobsStartup.&lt;/p&gt;

&lt;p&gt;Here is the relevant code for how I register my extension:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
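&lt;p&gt;A sketch of that registration (class names beyond the attribute are my assumptions):&lt;/p&gt;

```csharp
[assembly: WebJobsStartup(typeof(StartupTriggerStartup))]

public class StartupTriggerStartup : IWebJobsStartup
{
    // Called by the runtime after it discovers this class via the
    // WebJobsStartup assembly attribute.
    public void Configure(IWebJobsBuilder builder)
        =&gt; builder.AddExtension&lt;StartupTriggerExtensionConfigProvider&gt;();
}
```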


&lt;p&gt;The only thing that I do in the Configure implementation is call the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.webjobsbuilderextensions.addextension?view=azure-dotnet" rel="noopener noreferrer"&gt;AddExtension&lt;/a&gt; extension method of the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.iwebjobsbuilder?view=azure-dotnet" rel="noopener noreferrer"&gt;IWebJobsBuilder&lt;/a&gt;. AddExtension expects an implementation of the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.host.config.iextensionconfigprovider?view=azure-dotnet" rel="noopener noreferrer"&gt;IExtensionConfigProvider&lt;/a&gt; interface, which you can see in the following listing:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
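&lt;p&gt;A hedged sketch of the config provider; the StartupOptions type is a hypothetical placeholder for whatever options the Startup class registers:&lt;/p&gt;

```csharp
public class StartupTriggerExtensionConfigProvider : IExtensionConfigProvider
{
    private readonly IOptions&lt;StartupOptions&gt; options;

    // DI is already available here; the options are registered in Startup.
    public StartupTriggerExtensionConfigProvider(IOptions&lt;StartupOptions&gt; options)
        =&gt; this.options = options;

    public void Initialize(ExtensionConfigContext context)
        =&gt; context
            .AddBindingRule&lt;StartupTriggerAttribute&gt;()
            .BindToTrigger(new StartupTriggerBindingProvider());
}
```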


&lt;p&gt;In the IExtensionConfigProvider I can already make use of DI, so in the implementation above I inject some IOptions. These should be registered in the Startup class.&lt;/p&gt;

&lt;p&gt;The really important thing that happens here is registering the StartupTriggerAttribute as a binding, by calling &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.host.config.extensionconfigcontext.addbindingrule?view=azure-dotnet" rel="noopener noreferrer"&gt;AddBindingRule&lt;/a&gt;, and then connecting the binding to an implementation of &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.host.triggers.itriggerbindingprovider?view=azure-dotnet" rel="noopener noreferrer"&gt;ITriggerBindingProvider&lt;/a&gt;, by calling the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.host.config.fluentbindingrule-1.bindtotrigger?view=azure-dotnet" rel="noopener noreferrer"&gt;BindToTrigger&lt;/a&gt; method. If you were writing a custom input or output binding, you would call BindToInput or BindToOutput instead.&lt;/p&gt;

&lt;p&gt;The ITriggerBindingProvider implementation is where I create the actual binding instance. The TryCreateAsync method of the provider is called for all the parameters of the function, but it returns a binding only for the ones that have the binding attribute applied and have a type supported by the binding. Since I only have to deal with parameters of type StartupInfo, the code is really simple:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
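&lt;p&gt;A sketch of the provider, checking for the attribute and the parameter type as described above (names are my assumptions):&lt;/p&gt;

```csharp
public class StartupTriggerBindingProvider : ITriggerBindingProvider
{
    public Task&lt;ITriggerBinding&gt; TryCreateAsync(TriggerBindingProviderContext context)
    {
        var parameter = context.Parameter;

        // Only bind parameters that carry the attribute and are of type StartupInfo.
        var attribute = parameter.GetCustomAttribute&lt;StartupTriggerAttribute&gt;(false);
        if (attribute is null || parameter.ParameterType != typeof(StartupInfo))
        {
            return Task.FromResult&lt;ITriggerBinding&gt;(null);
        }

        return Task.FromResult&lt;ITriggerBinding&gt;(new StartupTriggerBinding());
    }
}
```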


&lt;p&gt;The binding is an implementation of &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.host.triggers.itriggerbinding?view=azure-dotnet" rel="noopener noreferrer"&gt;ITriggerBinding&lt;/a&gt; and it is where the heavy lifting is done. CreateListenerAsync is called by the runtime to create a listener for the events that trigger the function. The BindingDataContract is how the binding exposes binding parameters that can be used in the &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-expressions-patterns" rel="noopener noreferrer"&gt;binding expressions&lt;/a&gt; of other parameter attributes. BindAsync is called by the runtime when it needs to bind to a parameter for a function invocation; the input parameter value might not be of the type expected by the trigger parameter, so this is where any conversion should happen. Finally, this method wraps the parameter in an &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.host.triggers.itriggerdata?view=azure-dotnet" rel="noopener noreferrer"&gt;ITriggerData&lt;/a&gt; to be used by the runtime for the function invocation. The following is the binding implementation:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
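&lt;p&gt;A rough sketch of the binding; StartupInfoValueProvider stands for a small, hypothetical IValueProvider wrapper, and the binding data contract shape is my assumption:&lt;/p&gt;

```csharp
public class StartupTriggerBinding : ITriggerBinding
{
    public Type TriggerValueType =&gt; typeof(StartupInfo);

    // Binding parameters exposed to binding expressions of other attributes.
    public IReadOnlyDictionary&lt;string, Type&gt; BindingDataContract { get; } =
        new Dictionary&lt;string, Type&gt; { ["StartupTrigger"] = typeof(StartupInfo) };

    public Task&lt;ITriggerData&gt; BindAsync(object value, ValueBindingContext context)
    {
        // Convert the incoming value if needed, then wrap it in an ITriggerData.
        var info = value as StartupInfo ?? new StartupInfo();
        var bindingData = new Dictionary&lt;string, object&gt; { ["StartupTrigger"] = info };
        // StartupInfoValueProvider is a small IValueProvider wrapper, omitted here.
        return Task.FromResult&lt;ITriggerData&gt;(
            new TriggerData(new StartupInfoValueProvider(info), bindingData));
    }

    // Called by the runtime to create the listener for the trigger's event source.
    public Task&lt;IListener&gt; CreateListenerAsync(ListenerFactoryContext context)
        =&gt; Task.FromResult&lt;IListener&gt;(new StartupListener(context.Executor));

    public ParameterDescriptor ToParameterDescriptor() =&gt; new ParameterDescriptor();
}
```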


&lt;p&gt;And finally we get to the listener, which implements the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.host.listeners.ilistener?view=azure-dotnet" rel="noopener noreferrer"&gt;IListener&lt;/a&gt; interface and is where listening to the trigger's event source happens. In my case, the only event I am interested in is the one raised when StartAsync is called, so I immediately invoke the function; there is no need to wait for any other event. One thing to notice is that the listener uses the &lt;a href="https://docs.microsoft.com/en-us/azure/app-service/webjobs-sdk-how-to#singleton-attribute" rel="noopener noreferrer"&gt;singleton attribute&lt;/a&gt;, which ensures that only one instance of the listener runs, regardless of the number of function app instances:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
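&lt;p&gt;A hedged sketch of such a listener (names and the exact StartupInfo passed to the function are my assumptions):&lt;/p&gt;

```csharp
// The Singleton attribute ensures a single listener instance runs,
// regardless of how many function app instances there are.
[Singleton(Mode = SingletonMode.Listener)]
public class StartupListener : IListener
{
    private readonly ITriggeredFunctionExecutor executor;

    public StartupListener(ITriggeredFunctionExecutor executor) =&gt; this.executor = executor;

    // The runtime has started: invoke the function immediately, exactly once.
    public Task StartAsync(CancellationToken cancellationToken)
        =&gt; this.executor.TryExecuteAsync(
            new TriggeredFunctionData { TriggerValue = new StartupInfo() },
            cancellationToken);

    public Task StopAsync(CancellationToken cancellationToken) =&gt; Task.CompletedTask;

    public void Cancel() { }

    public void Dispose() { }
}
```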


&lt;p&gt;It is worth mentioning that the StartAsync method of the listener is called when the runtime starts. But when does the runtime start? It turns out that it starts when the function app starts, restarts, scales out, or wakes up after being idle. So keep that in mind and make sure it fits your use case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F62j3bky9sxvrorwslkyx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F62j3bky9sxvrorwslkyx.gif" alt="That's all Folks"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>dotnet</category>
      <category>serverless</category>
      <category>azurefunctions</category>
    </item>
  </channel>
</rss>
