Dylan Morley
Azure Logic Apps Standard - Part 2 Build Pipelines and Provisioning

In Part 1, we looked at the requirements and the solution design - now let's think about how to build a hello world application to prove out some deployment pipelines using Azure DevOps.

I believe getting your foundations into good shape before building upon them really pays dividends in a new software project - you'll go a bit slower at first whilst you identify and fix issues, but you'll move much faster as the delivery progresses.

You achieve this by building the necessary automation and testing your build and deploy process until you're happy with a few different scenarios. Invest time and get this right at the beginning. Don't focus on the application itself - a walking skeleton / hello-world type application is enough - and instead focus on the deployment process.

What we'll aim to have working as part of this is:

  • A simple, HTTP triggered logic application workflow that returns 'Hello World'
  • A build pipeline that produces an artifact, ready for deployment
  • A provisioning pipeline that uses the artifact, creates the infrastructure and deploys the logic application into it
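As a sketch, the hello-world workflow could be a single workflow.json with a Request trigger and a Response action. This is illustrative - the folder and trigger names are my own choices, but the shape follows the Logic Apps workflow definition schema:

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {}
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "kind": "Http",
        "runAfter": {},
        "inputs": {
          "statusCode": 200,
          "body": "Hello World"
        }
      }
    },
    "contentVersion": "1.0.0.0",
    "outputs": {}
  },
  "kind": "Stateless"
}
```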

Workflow Extension Model

Logic App Standard is built as an extension on top of the functions runtime. This means the logic app runtime and all of the built-in connector binaries need to be made available to your application, and you have a couple of choices for how to distribute them.

Bundled runtime

In this mode, you provide values for the application settings AzureFunctionsJobHost__extensionBundle__id and AzureFunctionsJobHost__extensionBundle__version. All the required binaries are acquired by the functions host using these settings - this is essentially a managed process. This results in smaller build-time artifacts, but you cannot use custom connectors - only the built-in connectors included in that version of the bundle are available.
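For example, in a local.settings.json these settings might look like the following - the bundle id shown is the workflows extension bundle, and the version range is illustrative:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureFunctionsJobHost__extensionBundle__id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "AzureFunctionsJobHost__extensionBundle__version": "[1.*, 2.0.0)"
  }
}
```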

In this mode, you could have a simple deployment story - it's just the JSON files that make up the workflows that would need to be deployed.

Package reference mode

In this mode, we don't provide the application settings and instead treat the logic app runtime as just another nuget package. This is available as package Microsoft.Azure.Workflows.WebJobs.Extension, which contains the logic app capabilities and the built-in connectors. We provide a package reference for the version of the runtime we want to consume, and any other packages we want in our solution. This is a more traditional model - we package up everything needed to run the application and deploy it.
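As a sketch, the csproj for package reference mode might look like this - the package name is the real runtime package, while the target framework, version and workflow file names are illustrative:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <AzureFunctionsVersion>v3</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
    <!-- The logic app runtime and built-in connectors, consumed as a package -->
    <PackageReference Include="Microsoft.Azure.Workflows.WebJobs.Extension" Version="1.*" />
  </ItemGroup>
  <ItemGroup>
    <!-- Ensure workflow and host files are included in the published output -->
    <None Update="HelloWorld/workflow.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="host.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>
</Project>
```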

If you want to use custom connectors, you must use package reference mode - and as that's a requirement for us, this choice is made for us.

Built in and Custom Connectors

Why built in connectors?

Connectors are either built-in or managed, but what's the difference? When you choose a managed connector, you're going out of process and making a call into a Microsoft-provided connector. This runs on its own infrastructure, and may come with certain limitations such as a maximum size or number of calls per minute. When working in the consumption model, managed API connections can be a good choice and enable powerful integrations with little effort, but the limitations may impact your ability to perform particularly large workloads or achieve the performance profile you're aiming for.

With single tenant logic apps, we can provision our own plans to run our applications, and built-in connectors run in-process on that infrastructure. Therefore, there are no enforced rate limits - the only limit is the infrastructure itself. Keeping the call in-process also provides performance benefits not previously available to us.

While there are a number of built-in connectors (and more on the way), when there's something not currently available, Logic Apps has an extensibility model that allows you to design your own functionality and make it available via a nuget package. ASOS open sourced the Asos Cosmos Connector, which shows how you might go about building a connector.

The output of that is a nuget package, which we'll need to include in our deployments - that's easy enough, as it's just a csproj package reference. However, in order to get a local design time experience, the package also needs to be installed locally in your logic app runtime directory.

Custom Connectors - local experience

When working with a custom connector locally, you'll be using the VS Code IDE to view the workflows at design time. Any custom connectors you create need to be available to the IDE so it understands how to display them to you, and can show the relevant UI for particular actions based on the connector specification. The current version of the extension bundle is installed on your local machine in your user profile, in a sub-directory azure-functions-core-tools - this contains all the built-in connector code, which is distributed as DLLs alongside the extension code.

Therefore, if we want our custom connectors to also be available, we need to register the connector so it's picked up by the extension. If you've packaged up your connector as a nuget package, you can use this to make it available on the local file system so that VS Code can display the relevant information.

Here, we demonstrate how we can follow a convention-based pattern to distribute and register our custom connectors using a PowerShell script. The script expects the libraries to follow a certain pattern and to be available in a nuget feed. This allows us to extract the package locally and install the contents in the extensions directory. It's available as a gist to demonstrate.
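As a cross-platform sketch of the same convention-based idea (this is not the actual gist - the function name, directory layout and conventions here are my own assumptions), the extract-and-install step might look like:

```python
import shutil
import zipfile
from pathlib import Path

def install_connector(nupkg_path, extension_lib_dir):
    """Extract a connector .nupkg (which is just a zip archive) and copy
    its assemblies into the local extension directory, so the VS Code
    designer can load the custom connector. Layout is illustrative."""
    nupkg_path = Path(nupkg_path)
    # Stage the package contents next to the .nupkg, e.g. ./My.Connector.1.0.0/
    staging = nupkg_path.with_suffix("")
    with zipfile.ZipFile(nupkg_path) as pkg:
        pkg.extractall(staging)

    target = Path(extension_lib_dir)
    target.mkdir(parents=True, exist_ok=True)

    # By convention, connector assemblies live under lib/<tfm>/ in the package
    copied = []
    for dll in sorted(staging.glob("lib/**/*.dll")):
        shutil.copy2(dll, target / dll.name)
        copied.append(dll.name)
    return copied
```

In practice you'd first download the .nupkg from your feed (e.g. with nuget.exe or the feed's REST API) and point extension_lib_dir at the azure-functions-core-tools extension directory in your user profile.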

At runtime, the application also needs access to the package details in order to execute and display the workflow details in the portal - this is a simple process, achieved by packaging all binaries and deploying via a zip process.

Other local considerations

When working as part of a team, the barrier for entry should be as low as possible when someone goes to work on a repository. It should be simple to get up and running - everything needed to get the application running should be understood. A new engineer should be able to clone, build, test and run with minimal effort - how simple this is can be a key indicator of the health of the solution.

This goes for any local settings files as well - it should be easy to get local settings populated, and this might include sensitive values such as connection strings. Including a script with your project that an engineer can run to fully populate their local settings file will save you time and make for a more secure solution, so we'll demonstrate how you can do this.
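A minimal sketch of such a script, assuming a checked-in template file and a placeholder convention like @secret(name) (both of which are assumptions for this sketch, not from the original post). The resolver could fetch secrets from Key Vault, for example by shelling out to the az CLI:

```python
import json

# Placeholder convention for this sketch - an assumption, pick your own
SECRET_PREFIX = "@secret("

def populate_local_settings(template_path, output_path, resolve_secret):
    """Copy a checked-in settings template to local.settings.json,
    replacing '@secret(name)' placeholders via the supplied resolver
    (which could call 'az keyvault secret show', for example)."""
    with open(template_path) as f:
        settings = json.load(f)

    values = settings.get("Values", {})
    for key, value in values.items():
        if isinstance(value, str) and value.startswith(SECRET_PREFIX) and value.endswith(")"):
            secret_name = value[len(SECRET_PREFIX):-1]
            values[key] = resolve_secret(secret_name)

    with open(output_path, "w") as f:
        json.dump(settings, f, indent=2)
```

The template (with placeholders, no secrets) is safe to commit, and each engineer runs the script once to produce a fully populated local.settings.json.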

Build Pipelines

Since we're building a project that consumes nuget packages, we just need to produce a zip file that we'll deploy to the logic app we provision. The order is:

1) Get Sources
2) Perform versioning
3) Build projects
4) Run all non-integration tests
5) Publish test results
6) Perform Packaging
7) Publish Artifact
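The steps above might be sketched as an Azure DevOps YAML pipeline like the following - the tasks are standard Azure DevOps tasks, but the project paths and artifact name are illustrative:

```yaml
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: DotNetCoreCLI@2
    displayName: Build projects
    inputs:
      command: build
      projects: 'src/**/*.csproj'

  - task: DotNetCoreCLI@2
    displayName: Run non-integration tests
    inputs:
      command: test
      projects: 'tests/**/*.UnitTests.csproj'
      publishTestResults: true

  - task: DotNetCoreCLI@2
    displayName: Package the logic app
    inputs:
      command: publish
      publishWebProjects: false
      projects: 'src/LogicApp/LogicApp.csproj'
      arguments: '--output $(Build.ArtifactStagingDirectory)/publish'
      zipAfterPublish: true

  - task: PublishBuildArtifacts@1
    displayName: Publish artifact
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)/publish'
      artifactName: 'logic-app'
```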

We want a reusable set of pipelines that will let us build any component we create, following a 'build once, deploy multiple times' approach.

We now have an artifact that contains everything we need to deploy and test our logic app.

What to provision?

Based on the architecture requirements from Part 1, I see the provisioning requirements in three parts:

1) The network - vnet, subnets, public IPs and NAT. Both the logic application and bastion depend on this, and it should be provisioned independently from both of them
2) The application and its dependencies - this is the main deployment and contains everything needed for a working application. It can be created and destroyed independently from the network
3) The Bastion and VM for support purposes

In this way, we can think about the provisioning in smaller chunks - this also allows us to destroy parts of the infrastructure independently from each other - e.g. I may choose to delete my infrastructure at night or at weekends in my dev environment.

Both the application and the bastion VMs depend on the network, but we can disconnect and remove them from the vnet independently, whilst leaving the network in place. This can be helpful, as we may incur costs from our application deployment if it depends on compute and other PaaS services, but the network is generally low cost and dependent on the traffic generated.

Provisioning logic apps using Terraform

There are a number of tools you could choose from, but the PaaS nature of the architecture fits nicely for Terraform. We'll follow a modules and blueprints approach for our solution, which will mean our integration applications can easily reuse what we build.

At the initial time of design, Terraform didn't support Logic Apps Standard, so an ARM template approach was the only viable solution. Since then, I've created a resource for this in Terraform so you can follow the native approach - documentation is at azurerm_logic_app_standard

```hcl
resource "azurerm_logic_app_standard" "example" {
  name                       = "test-azure-logic-app-standard"
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  app_service_plan_id        = azurerm_app_service_plan.example.id
  storage_account_name       = azurerm_storage_account.example.name
  storage_account_access_key = azurerm_storage_account.example.primary_access_key

  app_settings = {
    "CONFIG_SETTING" = "example"
  }
}
```

Our provisioning pipeline should:

  • Take the build artifact for the workflow
  • Run the Terraform stage to create the infrastructure
  • Zip Deploy the application into the newly provisioned infrastructure

The Terraform is just about provisioning the empty shell of the logic app - it doesn't contain any of the workflows themselves, which we'll provide by deploying into the newly provisioned application.

Provisioning considerations

As we want to secure our applications using VNET and service endpoints, there are a few settings and other considerations to ensure the application is in a working state and correctly associated with the network subnets.
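As a sketch of the kind of settings involved - the resource and subnet names here are illustrative, and assume a subnet delegated to Microsoft.Web - the vnet integration and related app settings might look like:

```hcl
# Connect the logic app to the integration subnet (illustrative names)
resource "azurerm_app_service_virtual_network_swift_connection" "example" {
  app_service_id = azurerm_logic_app_standard.example.id
  subnet_id      = azurerm_subnet.integration.id
}

# App settings commonly needed when the app is vnet-integrated:
#   WEBSITE_CONTENTOVERVNET - fetch the content share over the vnet
#   WEBSITE_VNET_ROUTE_ALL  - route all outbound traffic through the vnet
```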


We've provisioned our application and now we're ready to deploy - this is the simple bit, as it's no different from an App Service or Azure Functions type deployment: we simply take the zip artifact we produced and use a zip deploy task to upload it to Azure.
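Because Logic App Standard sits on the App Service platform, one way to sketch this step is with the az CLI's zip deploy command - the service connection and variable names here are illustrative:

```yaml
- task: AzureCLI@2
  displayName: Zip deploy workflows
  inputs:
    azureSubscription: 'my-service-connection'   # assumed service connection name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az webapp deployment source config-zip \
        --resource-group "$(resourceGroup)" \
        --name "$(logicAppName)" \
        --src "$(Pipeline.Workspace)/logic-app/LogicApp.zip"
```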

Demo project

A demo project that shows how this will all fit together will be available in github soon.

Top comments (1)

Ge Gao

Hi Dylan, looking forward to the part 3. I'm wondering if you have found out a way to disable public network access via terraform logic app standard for the inbound traffic. Any idea?