Martin Oehlert

Running Azure Functions in Docker: Why and How

Azure Functions Beyond the Basics
Continues from Azure Functions for .NET Developers (Parts 1-9)

  • Part 1: Running Azure Functions in Docker: Why and How (you are here)

When zip-deploy stops fitting

Your Azure Function needs to generate PDF invoices, so you add Puppeteer to your project. Zip-deploy works fine on your machine, but the Consumption plan doesn't have the Chromium dependencies installed. The function throws a cryptic error about missing shared libraries, and you're stuck choosing between a workaround that limits your architecture and a deployment model that gives you full control over the OS.

Most Azure Functions never hit this wall. Zip-deploy and run-from-package handle the majority of workloads well: your code and dependencies get packaged, uploaded, and run on Microsoft's managed infrastructure. You don't think about the OS, the runtime image, or what's installed underneath. That's the point, and it's a good default.

Containerizing a Function adds real operational cost. You own the base image, the patching cycle, the registry, and the build pipeline. If zip-deploy already works, containerizing your Function adds overhead with no payoff.

But there are specific problems where Docker earns that overhead back.

Native dependencies are the most common trigger. FFmpeg for media processing, Puppeteer or Playwright for headless browser work, libgdiplus for image manipulation: these require OS-level packages that the default Azure Functions host doesn't include. A custom Docker image lets you install exactly what the function needs.
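As a sketch of what that looks like, here's a hypothetical runtime stage that layers FFmpeg on top of the Functions base image (the image tag matches the one used later in this post; swap the package list for whatever your workload actually needs):

```dockerfile
# Hypothetical runtime stage with a native dependency baked in.
FROM --platform=linux/amd64 mcr.microsoft.com/azure-functions/dotnet-isolated:4-dotnet-isolated10.0
RUN apt-get update \
    && apt-get install -y --no-install-recommends ffmpeg \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
```

The apt-get cleanup on the same RUN line keeps the package index out of the final image layer.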

Reproducible builds across environments matter when your team needs the same OS, the same SDK version, and the same native tooling from local dev through staging to production. A Dockerfile pins all of it in version control.

Running Functions alongside other containers is the third case. If you're already deploying to Azure Container Apps or AKS, packaging your Function as a container lets it sit next to your APIs, workers, and sidecars in the same orchestration layer. One deployment model, one scaling configuration, one set of infrastructure to manage.

If one of those three problems is yours, the container tax is worth paying.

The Dockerfile: multi-stage build for .NET 10

Start with the complete Dockerfile, then walk through what each stage does.

FROM --platform=linux/amd64 mcr.microsoft.com/dotnet/sdk:10.0 AS build
WORKDIR /src

COPY *.csproj .
RUN dotnet restore

COPY . .
RUN dotnet publish -c Release -o /app/publish

FROM --platform=linux/amd64 mcr.microsoft.com/azure-functions/dotnet-isolated:4-dotnet-isolated10.0 AS runtime
WORKDIR /home/site/wwwroot
COPY --from=build /app/publish .

Twelve lines. That's the whole thing.

The --platform=linux/amd64 flag on both FROM lines pins the image architecture. The Azure Functions base images only ship for linux/amd64, so without this flag, builds on Apple Silicon pull the wrong manifest and fail. Pinning the platform makes the Dockerfile work identically on Intel and ARM machines.

The build stage uses the .NET 10 SDK image to compile your project. The COPY *.csproj then dotnet restore pattern caches NuGet packages in a Docker layer, so subsequent builds skip the restore unless your dependencies change. The dotnet publish step compiles your code and produces a deployment-ready output in /app/publish.

The runtime stage switches to the Azure Functions base image. This image ships with the Functions host process, the dotnet-isolated worker runtime, and the three environment variables your app needs:

  • AzureWebJobsScriptRoot=/home/site/wwwroot
  • FUNCTIONS_WORKER_RUNTIME=dotnet-isolated
  • AzureFunctionsJobHost__Logging__Console__IsEnabled=true

You don't need to set any of these yourself. The base image handles it. Your only job is to place the published output at /home/site/wwwroot, which is why WORKDIR must point there. Get that path wrong and the Functions host starts but finds zero functions.

The final COPY --from=build pulls the compiled output from the build stage into the runtime image, keeping the SDK and all intermediate build artifacts out of your production container.

What you should know before building

Pin your SDK version in global.json. The base image 4-dotnet-isolated10.0 bundles a specific .NET 10 runtime. If your local SDK rolls ahead of what the image ships, subtle mismatches at runtime can show up. Pinning keeps builds deterministic across laptops, CI, and the image:

{
  "sdk": {
    "version": "10.0.201",
    "rollForward": "latestPatch"
  }
}

Package version floor for .NET 10. The isolated worker packages below 2.x don't target .NET 10. You need at minimum:

  • Microsoft.Azure.Functions.Worker 2.50.0 or later
  • Microsoft.Azure.Functions.Worker.Sdk 2.0.5 or later

If you're upgrading an existing project from .NET 8, bumping just the TargetFramework without updating these two packages is the most common failure mode.
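For orientation, a project file that clears both floors might look like this (a sketch using the minimum versions named above, not pinned recommendations; check NuGet for the latest):

```xml
<!-- Sketch: relevant csproj entries for a .NET 10 isolated worker project. -->
<PropertyGroup>
  <TargetFramework>net10.0</TargetFramework>
  <OutputType>Exe</OutputType>
</PropertyGroup>
<ItemGroup>
  <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="2.50.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="2.0.5" />
</ItemGroup>
```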

.NET 10 doesn't run on the Linux Consumption plan. This is a hard platform constraint, not a preview gap. If your current app runs on Linux Consumption and you want .NET 10, you need to migrate to the Flex Consumption plan first. Premium, ACA, and AKS (covered later) all support .NET 10 without this restriction.

The Functions host runs on .NET 8 internally. Even in the dotnet-isolated10.0 image, the host process itself targets .NET 8. Your worker process runs on .NET 10. This is expected behavior for the isolated model, not a bug: the two processes communicate over gRPC, so the runtime versions are independent.

These images are linux/amd64 only. If you're on Apple Silicon, Docker Desktop runs them under Rosetta or QEMU emulation. Builds work fine. Performance is noticeably slower than native ARM execution, so keep local integration test suites short.

No slim variant exists for .NET 10 yet. The base image is Ubuntu-based (the .NET 10 container images moved from Debian to Ubuntu), and the full image weighs roughly 1.5 GB. A Mariner-based or distroless option may come later, but as of April 2026, this is what ships.

You own base image updates. Microsoft publishes monthly security patches to the base images, but unlike managed Functions deployments, custom containers do not auto-update. You pull the latest tag, rebuild, and redeploy. Set a calendar reminder or wire it into your CI pipeline. The official docs are explicit about this: maintaining your container is your responsibility.
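The monthly refresh can be as simple as a three-command script in CI (registry name and tagging scheme here are placeholders; adapt to your pipeline):

```shell
# Hypothetical monthly patch cycle: pull the latest base image,
# rebuild, and push a dated tag so rollback stays easy.
docker pull mcr.microsoft.com/azure-functions/dotnet-isolated:4-dotnet-isolated10.0
docker build -t myregistry.azurecr.io/my-functions:$(date +%Y%m%d) .
docker push myregistry.azurecr.io/my-functions:$(date +%Y%m%d)
```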

Local development with Docker Compose and Azurite

Your function app needs storage. Timer triggers use it for lease management, queue triggers read from it directly, and durable functions store orchestration state there. In production that's an Azure Storage account. Locally, you need Azurite, Microsoft's storage emulator, running alongside your function container.

Here's the full Docker Compose file:

services:
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite
    command: >-
      azurite
      --blobHost 0.0.0.0
      --queueHost 0.0.0.0
      --tableHost 0.0.0.0
      --loose
      --skipApiVersionCheck
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"
    volumes:
      - azurite-data:/data
    healthcheck:
      test: nc -z 127.0.0.1 10000
      interval: 3s
      retries: 5
      start_period: 5s

  functions:
    build: .
    ports:
      - "8080:80"
    environment:
      - AzureWebJobsStorage=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;QueueEndpoint=http://azurite:10001/devstoreaccount1;TableEndpoint=http://azurite:10002/devstoreaccount1
    depends_on:
      azurite:
        condition: service_healthy

volumes:
  azurite-data:

The --blobHost 0.0.0.0 flags (and their queue/table equivalents) tell Azurite to listen on all network interfaces, not just localhost. Without them, your function container can't reach Azurite across the Docker network. --loose relaxes strict API validation. --skipApiVersionCheck prevents version mismatch errors when the Functions runtime targets a newer Storage API than Azurite supports.

The named volume azurite-data keeps your storage data intact between docker compose down and docker compose up. Queue messages, blob uploads, table entities: all survive restarts. Drop the volume only when you want a clean slate (docker compose down -v).

The health check deserves attention. Without it, Docker starts both containers simultaneously. Your function app boots in seconds, tries to connect to Azurite, and fails because Azurite hasn't finished initializing. The nc -z 127.0.0.1 10000 check confirms Azurite is actually accepting connections before the function container starts.

Now for the part that will cost you an hour if you don't know about it.

Your first instinct for the storage connection string will be UseDevelopmentStorage=true. That's what every Azure Functions tutorial uses, and it works fine when Azurite runs on your host machine. Inside Docker, it breaks. The shorthand expands to endpoints pointing at 127.0.0.1, which inside the function container means "myself," not "the Azurite container next door."

The fix is the explicit connection string you see in the Compose file above. The critical difference: every endpoint URL uses azurite as the hostname (the Compose service name) instead of 127.0.0.1. Docker's internal DNS resolves azurite to the correct container IP automatically. The account name and key are Azurite's well-known development credentials, the same ones UseDevelopmentStorage=true uses under the hood.

One practical tip: that connection string is long and ugly. Don't try to split it across multiple lines in your Compose file or inject it from a .env file with line breaks. YAML will quietly mangle it. Keep it on a single line, or use a .env file with the entire value on one line and reference it with ${AzureWebJobsStorage} in your Compose file.
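If you go the .env route, the file is nothing more than the key and the full value on a single physical line (this is the same connection string from the Compose file above, shown as a sketch of the .env layout):

```ini
# .env — entire value on one line, no quotes, no line breaks
AzureWebJobsStorage=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;QueueEndpoint=http://azurite:10001/devstoreaccount1;TableEndpoint=http://azurite:10002/devstoreaccount1
```

Then the Compose file references it with `- AzureWebJobsStorage=${AzureWebJobsStorage}` under the service's environment block, and Docker Compose substitutes the value at startup.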

Run docker compose up --build and you should see Azurite report all three services listening, followed by your function app discovering its triggers. If the function container restarts in a loop, check the connection string first. Nine times out of ten, that's the problem.

Debugging in containers: VS Code and Rider

Add a debug stage to your Dockerfile that installs the .NET debugger:

FROM build AS debug
RUN dotnet tool install --tool-path /tools dotnet-dump
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl unzip procps \
    && curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l /vsdbg \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

ENV DOTNET_USE_POLLING_FILE_WATCHER=1
ENTRYPOINT ["dotnet", "run", "--project", "/src/HttpTriggerDemo"]

The DOTNET_USE_POLLING_FILE_WATCHER environment variable matters because file-change notifications (inotify) don't propagate reliably through Docker volume mounts, especially bind mounts from macOS and Windows hosts. Without it, file change detection silently fails.

VS Code with pipeTransport

Point your launch.json at the container using pipeTransport instead of opening a debug port:

{
  "name": "Attach to Docker",
  "type": "coreclr",
  "request": "attach",
  "processId": "${command:pickRemoteProcess}",
  "pipeTransport": {
    "pipeProgram": "docker",
    "pipeArgs": ["exec", "-i", "my-functions-debug"],
    "debuggerPath": "/vsdbg/vsdbg",
    "pipeCwd": "${workspaceFolder}"
  },
  "sourceFileMap": {
    "/src": "${workspaceFolder}"
  }
}

pipeTransport sends debug commands through docker exec, so you never expose a debug port. The sourceFileMap entry maps the container's /src path back to your workspace so breakpoints resolve correctly. Start the container, hit F5 in VS Code, pick the dotnet process, and you're attached.

Rider

Rider handles most of this automatically. Open Run > Attach to Process, select the Docker tab, and pick your container. Rider installs its own debug agent on first attach. If you use Docker Compose, Rider also supports a native Docker Compose run configuration that builds, starts, and attaches in one step.

Docker Compose debug profile

Separate your debug configuration using a Compose profile so it doesn't interfere with production builds:

services:
  functions-debug:
    build:
      context: .
      target: debug
    volumes:
      - ./src:/src
    environment:
      - DOTNET_USE_POLLING_FILE_WATCHER=1
      - AzureWebJobsStorage=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;QueueEndpoint=http://azurite:10001/devstoreaccount1;TableEndpoint=http://azurite:10002/devstoreaccount1
    depends_on:
      - azurite
    profiles: [debug]

Run it with docker compose --profile debug up. The target: debug directive tells Compose to stop at your debug stage, which includes the SDK and vsdbg but skips the production publish step.

Hot reload: set expectations

Using dotnet watch to wrap func start inside the container works, but every code change triggers a full restart. Expect 4-6 second cycles. That's usable for occasional debugging sessions, not for rapid iteration.

The pragmatic split: run func start on your host machine for day-to-day development. Keep Azurite and any dependencies (Service Bus emulator, CosmosDB emulator) in Docker. Reserve full-container debugging for integration testing or reproducing environment-specific issues. You get fast inner-loop feedback without giving up the production-parity benefits of containerized dependencies.
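In practice that split is two commands (assuming you have Azure Functions Core Tools installed on the host; service names match the Compose file above):

```shell
# Containerized dependencies only, detached; then the Functions host natively.
docker compose up -d azurite
func start
```

Note that when the host runs outside Docker, UseDevelopmentStorage=true works again in local.settings.json, because Azurite's ports are published to localhost.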

Where to deploy: ACA vs Premium vs AKS

You have a containerized Function. Now you need somewhere to run it. Three options exist, and each makes a different trade-off between operational control and managed convenience.

Azure Container Apps (ACA)

ACA is the recommended default for containerized Azure Functions. The platform reads your Function triggers and configures KEDA scaling rules automatically, so you never write ScaledObject YAML yourself.

Deploy with the Azure CLI:

az containerapp create \
  --name my-functions \
  --resource-group my-rg \
  --environment my-env \
  --image myregistry.azurecr.io/my-functions:latest \
  --registry-server myregistry.azurecr.io \
  --ingress external --target-port 80 \
  --min-replicas 0 \
  --max-replicas 30

Set --min-replicas 0 and your app scales to zero when idle, meaning zero compute cost during quiet periods.

Pricing follows the Container Apps model. On the Consumption plan, you pay per vCPU-second and GiB-second, with a monthly free grant of 180,000 vCPU-seconds and 360,000 GiB-seconds per subscription. For a Function that processes a few thousand events per day and idles overnight, you could land under $5/month. Dedicated workload profiles are available if you need guaranteed compute or GPU access, billed per instance rather than per resource consumed.

Cold start is the main gotcha. When your app scales from zero, the platform needs to pull the container image, provision resources, and start the Functions host. For a typical .NET isolated Function, teams commonly report 5-15 seconds on the first request after an idle period (Microsoft doesn't publish official cold start numbers). You can eliminate this by setting --min-replicas 1, but that means you pay for at least one instance around the clock. Keeping your container image small (pin to a specific tag, avoid unnecessary layers) helps reduce cold start time.

What ACA does not support: deployment slots, Functions access keys via the portal, and Functions proxies. If you rely on staging slots for zero-downtime swaps, you'll need to use ACA's built-in blue-green deployment with traffic splitting instead.

Azure Functions Premium Plan

The Premium plan (Elastic Premium, SKUs starting with EP) is the original way to run custom containers in Azure Functions. It predates ACA and still has one killer feature: always-ready instances with prewarmed buffers.

az functionapp plan create \
  --resource-group my-rg \
  --name my-premium-plan \
  --location eastus \
  --sku EP1 \
  --is-linux

az functionapp create \
  --resource-group my-rg \
  --plan my-premium-plan \
  --name my-functions \
  --deployment-container-image-name myregistry.azurecr.io/my-functions:latest

Three SKU sizes are available:

  • EP1: 1 vCPU, 3.5 GB memory
  • EP2: 2 vCPUs, 7 GB memory
  • EP3: 4 vCPUs, 14 GB memory

The billing model is the critical difference from ACA. Premium plan charges per core-second and memory across all allocated instances, with no execution charge. At least one instance must always be running. An EP1 instance running 24/7 costs roughly $155-175/month (varies by region). You cannot scale to zero. That always-on instance is the price you pay for eliminating cold starts entirely.

Where the Premium plan shines is latency-sensitive HTTP traffic. When load spikes, prewarmed instances are already initialized and waiting. No container pull, no cold start. For Functions that must respond in under 200ms consistently, this matters.

Watch out for the SKU naming confusion. EP1 is Elastic Premium (dynamic scaling). P1V2 is a Dedicated App Service plan (no dynamic scaling). Pick the wrong one and you'll pay more for less flexibility.

Maximum scale-out is up to 100 instances. The default maximumElasticWorkerCount in ARM templates is 20, so you may need to raise that limit explicitly.
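In an ARM template, the relevant property sits on the plan resource. A sketch of just the pieces that matter (resource names are placeholders; verify the apiVersion against current Microsoft.Web schema docs):

```json
{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2023-12-01",
  "name": "my-premium-plan",
  "location": "eastus",
  "sku": { "name": "EP1", "tier": "ElasticPremium" },
  "properties": {
    "maximumElasticWorkerCount": 100
  }
}
```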

AKS with KEDA

If your team already operates a Kubernetes cluster, running Functions there avoids introducing a new compute platform. You install KEDA as an AKS add-on, deploy your Function container as a standard Kubernetes deployment, and KEDA handles scaling based on event triggers.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-functions-scaler
spec:
  scaleTargetRef:
    name: my-functions
  minReplicaCount: 0
  maxReplicaCount: 50
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders
        messageCount: "5"
      authenticationRef:
        name: servicebus-auth

KEDA supports these Azure Functions triggers directly: Azure Storage Queues, Azure Service Bus, Azure Event Hubs / IoT Hubs, Apache Kafka, and RabbitMQ. HTTP triggers work, but KEDA does not manage them directly; you configure HTTP scaling through the Horizontal Pod Autoscaler or Container Apps' HTTP scaler instead.
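If you do serve HTTP from the cluster, the usual pattern is to put the HTTP-triggered functions in their own Deployment (a KEDA ScaledObject and an HPA must not target the same workload) and scale that Deployment on a resource metric. A hedged sketch, with names assumed to match a separate `my-functions-http` Deployment:

```yaml
# Hypothetical HPA for an HTTP-only Functions deployment;
# KEDA handles the queue-triggered deployment separately.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-functions-http
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-functions-http
  minReplicas: 1
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```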

This is the only option that is community-supported, not Microsoft-supported. The docs are explicit: "Best-effort support is provided by contributors and from the community." If something breaks at 2am, you're opening a GitHub issue, not filing a support ticket. You also own the full Kubernetes stack: node pools, networking, RBAC, upgrades, monitoring.

Cost depends entirely on your cluster. If you're already paying for AKS nodes, adding a Function container is effectively free at the compute layer. If you'd be spinning up a new cluster just for Functions, the minimum AKS cost (one node with a Standard_D2s_v3 VM) starts around $70/month before you've deployed anything. KEDA itself is free and runs as a lightweight deployment in your cluster.

Cold start on AKS matches whatever your cluster can provision. With KEDA's scale-to-zero, a cold start involves scheduling a pod, pulling the image (if not cached), and starting the container. On a warm cluster with cached images, that's 3-10 seconds. On a cluster that needs to scale up a node, it could be 2-4 minutes.

Trade-offs at a glance

  • ACA: scale-to-zero, cold starts of roughly 5-15 seconds, automatic KEDA configuration, Microsoft-supported
  • Premium plan: no cold starts thanks to prewarmed instances, always at least one paid instance (roughly $155+/month for EP1), Microsoft-supported
  • AKS with KEDA: scale-to-zero, cold start depends on cluster state, community-supported, you operate the full Kubernetes stack

The decision tree is short. If you don't already run Kubernetes, don't start now for a single Function app. If your Function handles latency-sensitive HTTP requests and cold starts are unacceptable, use the Premium plan and accept the always-on cost. For everything else, ACA with the Consumption plan gives you scale-to-zero, automatic KEDA configuration, and the lowest operational overhead.

When Docker adds value

The deployment choice assumes the container was worth building in the first place. Every custom container you ship is infrastructure you now own: a registry to manage, a base image to patch monthly, a CI pipeline stage that didn't exist before. Zip-deploy skips all of that. Microsoft patches the managed host, and you never think about it.

That trade-off only flips when the managed host can't do what your function requires. Puppeteer needs Chromium installed at the OS level. Your compliance team mandates identical images from laptop to production. Your platform team already runs everything on AKS and adding a second deployment model would create more problems than it solves. Those are real constraints, not preferences.

The setup cost is lower than it looks. Twelve lines of Dockerfile, a Compose file with Azurite, and one container image that deploys to ACA, Premium, or AKS without changes. The ongoing cost is the part that matters: monthly base image pulls, rebuild-and-redeploy cycles, and one more thing to monitor.

Is your function broken without OS-level control, or would zip-deploy work fine if you tried it first?
