Daniel Jonathan
Running Azure Logic Apps Standard on Azure Container Apps

Should you use Logic Apps Standard on ACA instead of n8n?

n8n is popular for workflow automation — Docker-native, visual editor, hundreds of integrations. But if you're already in Azure, it means running and paying for another self-hosted service on top of your existing infrastructure.

Logic Apps Standard on ACA is a cost-effective alternative if your workflows stay within the built-in connector set: Azure Blob, Queue, Service Bus, Event Hubs, HTTP, OpenAI, AI Search. No extra services, no OAuth setup. Durable run history, GitOps-friendly JSON definitions, and event-driven triggers — all included at container economics instead of an always-on App Service plan.

Hard limits — know them before you start:

  • No managed connectors. Gallery connectors (O365, SharePoint, SQL, etc.) require an App Service MSI endpoint that ACA doesn't provide.
  • No XSLT maps. The Transform XML action uses NetFxWorker.exe — a Windows-only .NET Framework binary that won't run on Linux. Liquid/JSON transforms work fine (they run in-process).
  • Rebuild to deploy. Workflows are baked into the image. Any change = Docker build + push + ACA update.
  • Visual designer needs local Docker. Design and test locally (Part 2), then deploy.
  • Cold starts. Scale-to-zero means latency after idle — matters for synchronous HTTP workflows.

If any of those are blockers, use App Service Standard instead. If they're not — keep reading.


What we're building

Six workflows deployed as a single Docker container on ACA:

Workflow  Trigger              Purpose
wf1       HTTP GET             Stateful HTTP request/response
wf2       Azure Blob Storage   Fires on blob upload, reads metadata
wf3       Azure Queue Storage  Processes queue messages
wf4       Azure Service Bus    Processes messages from wf4queue
wf5       Azure Service Bus    Receives SB message, calls external HTTP endpoint
wf6       HTTP POST            JSON-to-JSON transform via Liquid map

The Docker image

No official pre-built image exists for Logic Apps Standard — you build your own with the Functions Core Tools and your workflow files baked in:

FROM mcr.microsoft.com/dotnet/sdk:8.0

ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /home/site/wwwroot

RUN apt-get update && \
    apt-get install -y curl gnupg unzip coreutils && \
    curl -fsSL https://deb.nodesource.com/setup_18.x | bash - && \
    apt-get install -y nodejs && \
    npm install -g azure-functions-core-tools@4 --unsafe-perm=true && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

COPY . .

ENV FUNCTIONS_WORKER_RUNTIME="node"
ENV FUNCTIONS_WORKER_RUNTIME_VERSION="~4"
ENV AzureWebJobsFeatureFlags="EnableMultiLanguageWorker"
ENV AzureWebJobsSecretStorageType="Files"
ENV APP_KIND="workflowapp"

EXPOSE 7074
ENTRYPOINT ["func", "start", "--verbose", "--port", "7074"]

Workflow JSON files are baked in by the COPY . . step. The runtime reads and executes them directly — no compilation step.
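For reference, a minimal stateful HTTP workflow in this format looks roughly like the sketch below (modeled on wf1; the action name and response body are illustrative):

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": { "method": "GET" }
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "kind": "Http",
        "inputs": { "statusCode": 200, "body": { "message": "hello from wf1" } },
        "runAfter": {}
      }
    },
    "outputs": {}
  },
  "kind": "Stateful"
}
```

The "kind": "Stateful" at the top level is what gives you durable run history per action.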


Project structure

LABasicDemo/
├── host.json                  # Extension bundle declaration
├── connections.json           # Service provider connections
├── Dockerfile
├── Artifacts/Maps/            # Liquid maps (wf6)
├── wf1/workflow.json ... wf6/workflow.json

connections.json maps each connection name (e.g. servicebus) to a serviceProviderId and a connection string via @appsetting(...). The runtime resolves these at startup — no ARM roundtrip.
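As a sketch, two such entries might look like this (connection and app-setting names match the env var list further down; the serviceProvider IDs are the standard built-in ones):

```json
{
  "serviceProviderConnections": {
    "servicebus": {
      "displayName": "servicebus",
      "parameterValues": {
        "connectionString": "@appsetting('servicebus_connectionString')"
      },
      "serviceProvider": { "id": "/serviceProviders/serviceBus" }
    },
    "AzureBlob": {
      "displayName": "AzureBlob",
      "parameterValues": {
        "connectionString": "@appsetting('AzureBlob_connectionString')"
      },
      "serviceProvider": { "id": "/serviceProviders/AzureBlob" }
    }
  },
  "managedApiConnections": {}
}
```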


Bicep infrastructure

The sections below highlight the non-obvious parts. ACR, Log Analytics, and ACA Environment are standard boilerplate.

Service Bus — Basic SKU is enough

Basic SKU covers queues. Standard is only needed for topics or managed API connections — which don't work in ACA anyway.

The critical env vars — stability fixes

The Logic Apps runtime generates a 15-character LAIdentifier hash to namespace all Azure Table Storage tables for run history. By default the hash is derived from the host ID — if that changes on restart, run history appears lost.

Three env vars pin the identity across pod restarts:

{ name: 'AzureFunctionsWebHost__hostid', value: appName }
{ name: 'WEBSITE_HOSTNAME',              value: '${appName}.${acaEnv.properties.defaultDomain}' }
{ name: 'WEBSITE_CONTENTSHARE',          value: contentShareName }

Without AzureFunctionsWebHost__hostid, every restart generates a new host ID, a new LAIdentifier, new storage tables — and prior run history is effectively orphaned.

Azure Files mount — critical path

Mount the Azure Files share at /home/site/wwwroot/.azure-webjobs-hosts, not at /home/site/wwwroot itself. Mounting at the root hides every workflow file baked into the image behind the (initially empty) share.

volumeMounts: [{ volumeName: 'content-share', mountPath: '/home/site/wwwroot/.azure-webjobs-hosts' }]

This directory holds blob trigger checkpoints and distributed locks — persisting it prevents replaying already-processed blobs after a restart.

Full env var list

env: [
  { name: 'AzureWebJobsStorage',                      secretRef: 'storage-connection-string' }
  { name: 'WORKFLOWS_STORAGE_CONNECTION_STRING',      secretRef: 'storage-connection-string' }
  { name: 'AzureBlob_connectionString',               secretRef: 'storage-connection-string' }
  { name: 'azurequeues_connectionString',              secretRef: 'storage-connection-string' }
  { name: 'servicebus_connectionString',              secretRef: 'servicebus-connection-string' }
  { name: 'FUNCTIONS_WORKER_RUNTIME',                 value: 'node' }
  { name: 'FUNCTIONS_WORKER_RUNTIME_VERSION',         value: '~4' }
  { name: 'AzureWebJobsFeatureFlags',                 value: 'EnableMultiLanguageWorker' }
  { name: 'APP_KIND',                                 value: 'workflowapp' }
  { name: 'WEBSITE_SITE_NAME',                        value: appName }
  { name: 'APPINSIGHTS_INSTRUMENTATIONKEY',           value: appInsightsKey }
  { name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING', secretRef: 'storage-connection-string' }
  { name: 'WEBSITE_CONTENTSHARE',                     value: contentShareName }
  { name: 'WEBSITE_HOSTNAME',                         value: '${appName}.${acaEnv.properties.defaultDomain}' }
  { name: 'AzureFunctionsWebHost__hostid',            value: appName }
  { name: 'WEBSITE_RESOURCE_GROUP',                   value: resourceGroup().name }
  { name: 'WEBSITE_OWNER_NAME',                       value: '${subscription().subscriptionId}+${resourceGroup().name}-WestEuropewebspace' }
]

Deployment

deploy.sh — provision infrastructure

az deployment group create \
  --resource-group LogicAppHubRG \
  --template-file infra/main.bicep \
  --parameters infra/main.bicepparam \
  --output table

build-push.sh — build and deploy the image

ACA caches the image digest at revision creation time — deploying with :latest may leave the container on a stale image. Always pin the exact digest:

az acr build --registry labasicdemoacr --image logicapp-basicdemo:latest \
  --file ../LABasicDemo/Dockerfile ../LABasicDemo

DIGEST=$(az acr repository show-manifests --name labasicdemoacr \
  --repository logicapp-basicdemo --orderby time_desc --query "[0].digest" -o tsv)

az containerapp update --name la-basicdemo --resource-group LogicAppHubRG \
  --image "labasicdemoacr.azurecr.io/logicapp-basicdemo@$DIGEST"

az acr build runs the Docker build in the cloud — no local Docker daemon needed.

What lands in Azure

[Screenshot: Azure resource group after deployment]

The Service Bus namespace and storage account live in a separate shared resource group.


Notable workflows

wf5 — Service Bus → HTTP action

wf5 originally used a managed API connection for Service Bus. It was redesigned to use the service provider connector (connection string auth) + a built-in HTTP action after managed connections proved unworkable. The service provider trigger polls wf5queue; on receipt it fires a GET to an external endpoint.
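A sketch of what that trigger/action pair looks like in wf5's workflow.json (operation and parameter names follow the Service Bus built-in connector from memory and may differ slightly; the target URI is a placeholder):

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "When_messages_are_available_in_a_queue": {
        "type": "ServiceProvider",
        "inputs": {
          "parameters": { "queueName": "wf5queue", "isSessionsEnabled": false },
          "serviceProviderConfiguration": {
            "connectionName": "servicebus",
            "operationId": "receiveQueueMessages",
            "serviceProviderId": "/serviceProviders/serviceBus"
          }
        }
      }
    },
    "actions": {
      "Call_external_endpoint": {
        "type": "Http",
        "inputs": { "method": "GET", "uri": "https://example.com/notify" },
        "runAfter": {}
      }
    },
    "outputs": {}
  },
  "kind": "Stateful"
}
```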

wf6 — Liquid JSON transform

Liquid maps work in Linux containers — processed in-process with no external binary. Map stored in Artifacts/Maps/PersonToContact.liquid:

{
  "fullName": "{{content.firstName}} {{content.lastName}}",
  "email": "{{content.email}}"
}

Action in workflow.json:

{
  "type": "Liquid",
  "kind": "JsonToJson",
  "inputs": {
    "content": "@triggerBody()",
    "map": { "source": "LogicApp", "name": "PersonToContact.liquid" }
  }
}

Test: POST {"firstName":"John","lastName":"Doe","email":"john@example.com"} → {"fullName":"John Doe","email":"john@example.com"}


Agentic flows work out of the box

One capability worth calling out explicitly: AI agent workflows are fully supported in ACA containers. Azure OpenAI and Azure AI Search are both built-in service provider connectors — they authenticate via API key in app settings, no ARM token required. This means you can build agentic patterns (LLM calls, RAG pipelines, tool-use loops) directly in Logic Apps Standard and deploy them to ACA with no additional setup.

This is a meaningful advantage over n8n, which relies on community nodes for OpenAI integration. Logic Apps gives you native stateful orchestration, durable run history per step, and retry policies — all built into the agent workflow without extra infrastructure.
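As an illustrative sketch, wiring up the Azure OpenAI connector is just another connections.json entry plus two app settings. The parameter names below are assumptions; verify them against the built-in connector's documentation before use:

```json
{
  "serviceProviderConnections": {
    "openai": {
      "displayName": "openai",
      "parameterValues": {
        "openAIEndpoint": "@appsetting('OPENAI_ENDPOINT')",
        "openAIKey": "@appsetting('OPENAI_KEY')"
      },
      "serviceProvider": { "id": "/serviceProviders/openai" }
    }
  }
}
```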


The connector boundary

Logic Apps connectors come in two families:

Service provider connectors (built-in) — authenticate via connection strings, no ARM roundtrip. Work in containers:
Azure Blob, Azure Queue, Azure Service Bus, Azure Event Hubs, HTTP/HTTPS, Azure OpenAI, Azure AI Search.

Managed API connections (Microsoft.Web/connections) — the 400+ gallery connectors. Require an ARM token acquired via the App Service MSI endpoint (IDENTITY_ENDPOINT + IDENTITY_HEADER). App Service injects this automatically; ACA does not.

We tried two approaches: service principal via WORKFLOWAPP_AAD_CLIENTID / TENANTID / CLIENTSECRET, and user-assigned managed identity via AZURE_CLIENT_ID. Neither worked — the WORKFLOWAPP_AAD_* variables are only active in the Hybrid Deployment Model (Arc-enabled AKS + ACA Logic Apps extension), not the custom image approach.

XSLT maps also don't work: the Transform XML action delegates to NetFxWorker.exe — a Windows PE32 binary that the Linux kernel refuses to execute.

Feature                      ACA (Linux)
Service provider connectors  ✅
Liquid / JSON transforms     ✅
Managed API connections      ❌ No MSI endpoint
XSLT maps                    ❌ Windows-only binary

Verifying stability across restarts

The key test: trigger each workflow, stop and restart the container, then check that the same run IDs are still visible in history.

The easiest way to inspect run history is the Logic Apps Run History View Tool VS Code extension — connect it to the deployed ACA endpoint and browse runs per workflow directly in the editor, with full input/output per action visible.

Before the AzureFunctionsWebHost__hostid fix, run history was orphaned on every restart. After the fix it survives indefinitely.


Cost comparison vs n8n

                                n8n (self-hosted)        Logic Apps Standard on ACA
Compute                         Fixed VM/container cost  Serverless, scale to 0
State storage                   SQLite / Postgres        Azure Table Storage (~pennies)
Built-in connectors             400+ community nodes     Service providers + HTTP
Managed connectors (O365 etc.)  ❌                       ❌ App Service only
XSLT maps                       ❌                       ❌ Windows only
Liquid transforms               ❌                       ✅
Run history                     Basic                    Full input/output per action
Visual designer                 ✅                       ✅ VS Code (local)
GitOps / IaC                    Manual                   Native JSON + Bicep

Sweet spot: Azure-native event-driven pipelines — blob, queue, Service Bus, outbound HTTP — where you want durable run history and GitOps deployment without an always-on App Service plan.


What's next

For local development — running the same container with Docker, Azurite, and the Logic Apps VS Code extension — see:

Logic Apps Local Dev Tools — Visual Walkthrough

That post covers the full design → test → deploy loop without repeating what's here.
