Martin Oehlert
Understanding the Isolated Worker Model

Azure Functions for .NET Developers: Series


The problem the isolated model solves

You add a NuGet reference to Newtonsoft.Json 13.0. Your code compiles. Your unit tests pass. You deploy to Azure Functions, and at runtime, your function silently uses version 12.0.3.

No error. No warning. Just the host's copy of the assembly winning the load, because your function code and the Azure Functions runtime shared a single process.

This was the in-process model, and for years it was the only way to build .NET Azure Functions. Your function assemblies loaded directly into the same CLR instance as the Functions host. The host pinned its own versions of core packages: Newtonsoft.Json, Microsoft.Extensions.DependencyInjection, ASP.NET Core libraries, and dozens of others. If your code depended on a different version, the host's version won at load time. You had no way to override it.

The version conflict problem went beyond JSON serialization. The host determined which .NET runtime your code ran on. When .NET 7 shipped, you could not target it until the Functions team updated the host. When .NET 8 arrived, the same waiting game. Your application's target framework was not your decision; it was the host's.

Startup control was equally limited. The in-process model offered FunctionsStartup as an extension point for dependency injection:

[assembly: FunctionsStartup(typeof(MyStartup))]

public class MyStartup : FunctionsStartup
{
    public override void Configure(IFunctionsHostBuilder builder)
    {
        builder.Services.AddSingleton<IOrderService, OrderService>();
    }
}

This gave you a DI container, but nothing else. No middleware pipeline. No request/response interception. No control over serialization settings, logging providers, or configuration sources beyond what the host exposed. If you wanted to add authentication middleware, or swap the JSON serializer for System.Text.Json, or wire up OpenTelemetry tracing, you were working against the grain.

FunctionsStartup was a workaround bolted onto a hosting model that was never designed for extensibility. The isolated worker model replaced it entirely.

Two processes, one function app

The isolated model splits your function app into two separate OS processes. The Azure Functions host (func.exe) handles triggers, bindings, and routing. Your code runs in a separate worker process (dotnet.exe) with its own CLR, its own assembly loader, and its own dependency graph.

A single environment variable controls this split. Setting FUNCTIONS_WORKER_RUNTIME to dotnet-isolated tells the host to spawn a worker process instead of loading your assemblies directly.
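For local development, that switch typically lives in local.settings.json, whose Values section is mapped into environment variables when the host starts. A minimal sketch (the storage value is illustrative):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
  }
}
```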

Azure Functions Host (func.exe)
  ├── Trigger listeners (HTTP, Service Bus, Event Hub...)
  ├── Binding infrastructure
  ├── host.json configuration
  └── gRPC server
         ↕ gRPC / Protobuf over HTTP/2
Worker Process (dotnet.exe / your app)
  ├── Program.cs bootstrap
  ├── Your DI container
  ├── Your middleware pipeline
  └── Your function code

The two processes communicate over gRPC using Protocol Buffers serialized over HTTP/2. When a trigger fires (an HTTP request arrives, a Service Bus message lands), the host serializes the trigger data into a Protobuf message and sends it to the worker. The worker executes your function, serializes the result, and sends it back.

These C4 diagrams show the two-process architecture in more detail:

C4 container diagram showing the isolated worker architecture

C4 dynamic diagram showing the request flow between host and worker

Your Program.cs sets up the gRPC client, DI container, and middleware pipeline in one place:

var builder = FunctionsApplication.CreateBuilder(args);

builder.ConfigureFunctionsWebApplication();

builder.UseMiddleware<ExceptionHandlingMiddleware>();
builder.UseMiddleware<CorrelationIdMiddleware>();

builder.Services
    .AddApplicationInsightsTelemetryWorkerService()
    .ConfigureFunctionsApplicationInsights()
    .AddSingleton<IOrderService, OrderService>();

builder.Build().Run();

ConfigureFunctionsWebApplication() registers the gRPC client that talks back to the host, enables ASP.NET Core integration for HTTP triggers, and gives you the middleware pipeline shown above. If you do not need HTTP trigger support, ConfigureFunctionsWorkerDefaults() does the same setup without the ASP.NET Core integration.

Because each process loads its own assemblies independently, the version conflict problem disappears. Your worker targets .NET 10 and references Newtonsoft.Json 13.0.3? That is what runs. The host still uses whatever versions it needs internally, and the two never collide.

The trade-off is that every function invocation crosses a process boundary. The host serializes trigger data, sends it over gRPC, and the worker deserializes it. On the same machine, the latency cost is negligible for most workloads. Where you will notice it is cold starts: the runtime now needs to spin up two processes instead of one. For high-throughput, latency-sensitive functions that fire thousands of times per second, measure the overhead in your specific scenario. For the vast majority of production workloads (processing orders, handling webhooks, running scheduled cleanup jobs), the isolation is worth far more than the milliseconds it costs.

What you gain

The isolated worker model removes real constraints that made the in-process model frustrating in production.

No more assembly conflicts

Your worker runs in its own process with its own dependency graph. The host loads whatever versions it needs; your application loads whatever versions you reference. Two processes, two sets of assemblies, zero conflicts. The Newtonsoft.Json problem from the opening of this article cannot happen in the isolated model.

Full startup control via Program.cs

var builder = FunctionsApplication.CreateBuilder(args);

builder.ConfigureFunctionsWebApplication();

builder.Services
    .AddSingleton<IOrderService, OrderService>()
    .AddApplicationInsightsTelemetryWorkerService()
    .ConfigureFunctionsApplicationInsights();

builder.Build().Run();

This replaces the [assembly: FunctionsStartup(typeof(MyStartup))] attribute and the Startup class you had to wire up in the in-process model. The whole application now bootstraps through the .NET Generic Host, the same pattern you already know from ASP.NET Core and worker services.

FunctionsApplication.CreateBuilder(args) sets up the host builder with Functions-specific defaults. ConfigureFunctionsWebApplication() enables ASP.NET Core integration so your HTTP triggers can work with HttpRequest and HttpResponse directly instead of the SDK's custom types.

The Services block is standard dependency injection. AddSingleton<IOrderService, OrderService>() registers your own service. AddApplicationInsightsTelemetryWorkerService() and ConfigureFunctionsApplicationInsights() wire up telemetry for the worker process (both are needed: the first adds the base SDK, the second configures Functions-specific log filtering).

builder.Build().Run() starts the worker and connects it to the host over gRPC. If you have written a .NET 8 web API, this code should look familiar, because it is the same hosting model.

Middleware pipeline

The in-process model had no middleware. If you needed cross-cutting behavior (logging correlation IDs, catching unhandled exceptions, validating tokens) you were stuck wiring it through WebJobs SDK extension points or scattering try/catch blocks across every function.

The isolated model gives you an ASP.NET Core-style pipeline around every function invocation:

builder.UseMiddleware<CorrelationIdMiddleware>();
builder.UseMiddleware<ExceptionHandlingMiddleware>();

Each middleware runs in order before the function executes, then unwinds in reverse order after. You build one by implementing IFunctionsWorkerMiddleware:

public class CorrelationIdMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        // Runs before the function
        var correlationId = Guid.NewGuid().ToString();

        await next(context);

        // Runs after the function
    }
}

FunctionContext gives you access to the invocation metadata, bindings, and the IServiceProvider. The next delegate calls either the next middleware in the chain or the function itself. Everything before await next(context) runs on the way in; everything after runs on the way out.

In production, you would typically read an incoming correlation ID from a header or message property, fall back to generating one if it is missing, then stash it in context.Items so the function and downstream services can pick it up. Exception-handling middleware wraps the next call in a try/catch, logs the failure with structured context, and returns a consistent error response instead of letting the host surface a generic 500.
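A sketch of that production shape, assuming the ASP.NET Core integration package (for context.GetHttpContext()) and a hypothetical x-correlation-id header name:

```csharp
using System.Linq;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Middleware;
using Microsoft.Extensions.Logging;

public class CorrelationIdMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        // Prefer an incoming header (HTTP triggers only); fall back to a new ID.
        var httpContext = context.GetHttpContext();
        var correlationId =
            httpContext?.Request.Headers["x-correlation-id"].FirstOrDefault()
            ?? Guid.NewGuid().ToString();

        // Stash it where the function and later middleware can find it.
        context.Items["CorrelationId"] = correlationId;

        // Attach it to every log entry written during this invocation.
        var logger = context.GetLogger<CorrelationIdMiddleware>();
        using (logger.BeginScope(new Dictionary<string, object>
        {
            ["CorrelationId"] = correlationId
        }))
        {
            await next(context);
        }
    }
}
```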

.NET version flexibility

The worker process runs whatever .NET version you target, independently of the host. The host stays on its own runtime; your code stays on yours.

Today, the isolated model supports .NET 8, .NET 9, .NET 10, and even .NET Framework 4.8 (for teams that cannot migrate legacy libraries). The in-process model was capped at .NET 8 with v4 of the Functions runtime and will never support .NET 9 or later. When .NET 12 ships, you will update your TargetFramework, redeploy, and move on. No waiting for the Azure Functions team to update the host.

Your release cycle and the host's release cycle are decoupled. You upgrade on your schedule.


What changes from in-process

Most of the migration is mechanical. The conceptual model shifts, but the actual code changes follow a predictable pattern. Once you have seen each one, you can work through a real codebase systematically.

The function attribute

// In-process
[FunctionName("ProcessOrder")]
public IActionResult Run([HttpTrigger] HttpRequest req, ILogger log)

// Isolated (with ASP.NET Core integration)
[Function("ProcessOrder")]
public IActionResult Run([HttpTrigger] HttpRequest req)

[FunctionName] comes from the WebJobs SDK (Microsoft.Azure.WebJobs). [Function] comes from the Functions Worker SDK (Microsoft.Azure.Functions.Worker). The attribute names differ by one word, which makes a global search-and-replace dangerous: you need to update the package reference and the attribute name together, or you will reference an attribute that does not exist in your new dependencies.

HTTP types

The isolated model gives you two ways to handle HTTP triggers, and the difference matters:

| Option | Types used | When to choose |
| --- | --- | --- |
| ASP.NET Core integration | HttpRequest / IActionResult | New projects, or migrating from in-process |
| Built-in model | HttpRequestData / HttpResponseData | Legacy compatibility only |

ASP.NET Core integration is the path to take. It means your HTTP functions look exactly like ASP.NET Core controller actions, and all the routing, model binding, and result types you already know apply. It requires two things: the Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore package, and ConfigureFunctionsWebApplication() in Program.cs instead of ConfigureFunctionsWorkerDefaults(). If you find tutorials using HttpRequestData, they predate the ASP.NET Core integration and are using the older built-in types. You can use either, but the ASP.NET Core path removes an entire class of "why does this work differently than my API controllers?" questions.
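With that integration in place, an HTTP function reads like a controller action. A minimal sketch (the route and auth level are illustrative):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;

public class OrderHttpFunction
{
    [Function("GetOrder")]
    public IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "orders/{id}")] HttpRequest req,
        string id)
    {
        // Familiar ASP.NET Core result types, no HttpResponseData plumbing.
        return new OkObjectResult(new { OrderId = id });
    }
}
```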

Output bindings

In-process used out parameters for output bindings. Isolated uses return values with attributes:

// In-process: out parameters
public IActionResult Run(..., out string outputMessage)

// Isolated: return value with output binding attribute
[Function("ProcessOrder")]
[QueueOutput("orders-processed")]
public string Run([QueueTrigger("orders-pending")] string message)
{
    var processedMessage = message.Trim(); // stand-in for your real processing
    return processedMessage;
}

The out parameter approach was a side effect of how the WebJobs SDK wired up bindings. In isolated, bindings are attributes on the return type, which makes the data flow explicit: what the function returns is what gets written to the binding.

When you need multiple outputs (for example, writing to a queue and returning an HTTP response), you define a dedicated return type:

public record MultiOutputResult(
    [property: QueueOutput("dead-letter")] string? DeadLetterMessage,
    IActionResult Response
);

Each property carries its own binding attribute. The runtime inspects the returned record and routes each value to the appropriate destination. This is more verbose than out parameters for simple cases, but it makes multi-output functions far easier to read: every output is explicit in the return type definition rather than scattered across a function signature.
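A function returning that record might look like this (the queue name and validation are placeholders; a null output property is simply not written to its binding):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;

public class ProcessOrderFunction
{
    [Function("ProcessOrder")]
    public MultiOutputResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        var payloadPresent = req.ContentLength > 0; // placeholder validation

        // A null DeadLetterMessage means nothing is written to the queue.
        return payloadPresent
            ? new MultiOutputResult(DeadLetterMessage: null,
                                    Response: new OkResult())
            : new MultiOutputResult(DeadLetterMessage: "empty payload",
                                    Response: new BadRequestResult());
    }
}
```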

Dependency injection

// In-process: FunctionsStartup + IFunctionsHostBuilder
[assembly: FunctionsStartup(typeof(Startup))]
public class Startup : FunctionsStartup
{
    public override void Configure(IFunctionsHostBuilder builder)
    {
        builder.Services.AddSingleton<IMyService, MyService>();
    }
}

// Isolated: Program.cs (standard Generic Host)
var builder = FunctionsApplication.CreateBuilder(args);
builder.Services.AddSingleton<IMyService, MyService>();
builder.Build().Run();

FunctionsStartup was a Functions-specific extension point built on top of the Generic Host. In isolated, there is no extension point: your function app is a Generic Host application. Program.cs is the entry point, and the Services property is a standard IServiceCollection. Delete Startup.cs, delete the Microsoft.Azure.Functions.Extensions package reference, and move your service registrations into Program.cs. There is nothing Functions-specific about how DI works after that.

ILogger injection

// In-process: ILogger passed as function parameter
public IActionResult Run(..., ILogger log)

// Isolated: inject ILogger<T> via constructor
public class OrderFunction(ILogger<OrderFunction> logger)
{
    [Function("ProcessOrder")]
    public IActionResult Run([HttpTrigger] HttpRequest req)
    {
        logger.LogInformation("Processing order");
        // ... handle the order ...
        return new OkResult();
    }
}

In the in-process model, the runtime injected ILogger directly into the function method as a parameter. That was a WebJobs SDK feature with no equivalent in the isolated model. In isolated, your function class is an ordinary class that the DI container constructs. You inject ILogger<T> through the constructor, exactly as you would in any .NET service. The generic type parameter means your logs are automatically scoped to the class name in Application Insights.

Every function that currently takes ILogger log as a method parameter needs to become an instance class with a constructor. That is one of the more time-consuming parts of migration for large codebases.

Package references and project type

<!-- In-process -->
<OutputType>Library</OutputType>  <!-- implicit, often omitted -->
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.x" />

<!-- Isolated: a real executable -->
<OutputType>Exe</OutputType>
<FrameworkReference Include="Microsoft.AspNetCore.App" />
<PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.21.0" />
<PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.17.2" />
<PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore" Version="1.2.1" />
<PackageReference Include="Microsoft.ApplicationInsights.WorkerService" Version="2.22.0" />
<PackageReference Include="Microsoft.Azure.Functions.Worker.ApplicationInsights" Version="1.2.0" />

<OutputType>Exe</OutputType> is not optional. The isolated worker is a process that starts, connects to the host over gRPC, and runs until it is shut down. It is not a library that gets loaded into another process. The old Microsoft.NET.Sdk.Functions meta-package is replaced by separate packages: the core worker, the build SDK (which handles source generation for bindings), the HTTP ASP.NET Core extension, and two Application Insights packages.

Your binding extensions also change package namespace. Every Microsoft.Azure.WebJobs.Extensions.* package becomes a Microsoft.Azure.Functions.Worker.Extensions.* equivalent. The NuGet package names differ; the binding attributes inside them often keep the same names, which reduces the code changes needed.

Static classes become instance classes

// In-process: static class (common pattern)
public static class OrderFunction
{
    [FunctionName("ProcessOrder")]
    public static IActionResult Run([HttpTrigger] HttpRequest req, ILogger log)
    { ... }
}

// Isolated: instance class required for constructor injection
public class OrderFunction(ILogger<OrderFunction> logger)
{
    [Function("ProcessOrder")]
    public IActionResult Run([HttpTrigger] HttpRequest req)
    { ... }
}

Static function classes were idiomatic in in-process because the runtime called your method directly by reflection and supplied everything through parameters. Constructor injection is impossible on a static class, so isolated requires instance classes. The compiler will not warn you immediately: a static function class still compiles, but the runtime fails at invocation time because a static class has no instance constructor for the DI container to call. Make the class non-static and add a constructor for your dependencies.

JSON serialization

In-process defaulted to Newtonsoft.Json for binding serialization. Isolated defaults to System.Text.Json. This is the change most likely to produce silent runtime bugs rather than compilation errors.

[JsonProperty("field_name")] does not exist in System.Text.Json. The equivalent is [JsonPropertyName("field_name")]. CamelCase naming defaults differ between the two libraries. Null handling, reference loop handling, and enum serialization all differ. If your functions receive JSON payloads, serialize objects to queues, or return JSON from HTTP triggers, test each one end-to-end after migration. A mismatch between what your function now serializes and what downstream consumers expect will not show up at compile time.
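The attribute swap in a concrete shape (the Order type is illustrative):

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

// Without [JsonPropertyName], System.Text.Json emits "OrderId" (PascalCase)
// by default -- a silent contract change if downstream consumers expect
// the snake_case name that [JsonProperty] used to produce.
var json = JsonSerializer.Serialize(new Order { OrderId = "42" });
// json is {"order_id":"42"} because of the attribute below

public class Order
{
    // Newtonsoft.Json spelling:   [JsonProperty("order_id")]
    // System.Text.Json spelling:
    [JsonPropertyName("order_id")]
    public string OrderId { get; set; } = "";
}
```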

Imperative bindings

IBinder, the in-process mechanism for creating bindings at runtime (for example, writing to a blob whose path you only know after reading a message), has no equivalent in isolated. The recommended replacement is injecting the Azure SDK client directly: BlobServiceClient, QueueClient, ServiceBusClient. This is cleaner code in either model: SDK clients are testable, type-safe, and do not require the Functions binding infrastructure to work.
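A sketch of the replacement pattern, assuming a BlobServiceClient registered in Program.cs (for example via Microsoft.Extensions.Azure's AddAzureClients); the container and queue names are illustrative:

```csharp
using Azure.Storage.Blobs;
using Microsoft.Azure.Functions.Worker;

public class ArchiveOrderFunction(BlobServiceClient blobService)
{
    [Function("ArchiveOrder")]
    public async Task Run([QueueTrigger("orders-archive")] string message)
    {
        // The blob path is only known at runtime -- the scenario that
        // previously required IBinder.
        var blobName = $"orders/{DateTime.UtcNow:yyyy/MM/dd}/{Guid.NewGuid()}.json";

        var container = blobService.GetBlobContainerClient("archive");
        await container.UploadBlobAsync(blobName, BinaryData.FromString(message));
    }
}
```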


The .NET 10 requirement

.NET 10 supports only the isolated model. There is no in-process support for .NET 10, and none is planned. If you are starting a new project today and targeting .NET 10, you are already using isolated by necessity; this article just explains the architecture behind it.

The support matrix makes the direction clear:

| .NET version | In-process | Isolated |
| --- | --- | --- |
| .NET Framework 4.8 | No | Yes |
| .NET 8 (LTS) | Yes (final version) | Yes |
| .NET 9 (STS) | No | Yes |
| .NET 10 (LTS) | No | Yes |

.NET 8 is the last version the in-process model will ever support. If your function app runs on .NET 8 today, you are already at the ceiling. Staying on in-process means staying on .NET 8 until November 2026, then losing support entirely. Migrating to isolated means you can move to .NET 10 now, get the latest runtime improvements, and upgrade again when .NET 12 arrives without any coordination with the Azure Functions team.


The November 2026 deadline

In-process model support ends on November 10, 2026. That date is not arbitrary: it aligns with the end-of-life date for .NET 8 LTS. After that point, in-process function apps receive no security patches, no bug fixes, and no platform updates. The deadline is firm.

Start your inventory now. This PowerShell script identifies every in-process function app in your subscription:

$FunctionApps = Get-AzFunctionApp

foreach ($App in $FunctionApps) {
    if ($App.Runtime -eq 'dotnet') {
        Write-Output "$($App.Name) - in-process, needs migration"
    }
}

A Runtime value of dotnet means in-process. dotnet-isolated means already migrated. Run this across every subscription where you might have deployed function apps, including non-production environments where older versions sometimes linger.

The migration itself is mechanical for most functions: update packages, add Program.cs, fix compilation errors, update attributes. The problem is not the mechanical work; it is the edge cases that surface during testing. A function that uses IBinder, a binding attribute with changed properties, a JSON payload that now serializes differently: each one is a small investigation. In a codebase with dozens of functions, those investigations add up.

The recommended order of attack:

  1. Run the script above and produce a full inventory with function counts per app.
  2. Start with the simplest apps: HTTP triggers, no output bindings, no IBinder.
  3. Update packages and Program.cs first, then fix compilation errors function by function.
  4. Test locally with func start before touching Azure.
  5. Deploy to a staging slot and run your smoke tests before swapping to production.

The staging slot step matters more here than in typical deployments. When you swap, the FUNCTIONS_WORKER_RUNTIME app setting switches from dotnet to dotnet-isolated. If your code and the app setting get out of sync even briefly, the app enters an error state. Slots let you validate the new configuration in production infrastructure before making it live.

Waiting until Q3 2026 to start leaves no room for the edge cases. Start with your least critical app now, learn the migration pattern in a low-stakes context, and work forward from there.


One more thing: Flex Consumption is isolated-only

The Flex Consumption plan is Microsoft's newest Azure Functions hosting option. It scales each function independently rather than scaling the whole app, supports always-ready instances that eliminate cold starts for your busiest functions, and bills at a finer granularity than the standard Consumption plan. If any of that sounds appealing, the isolated worker model is a prerequisite.

In-process cannot run on Flex Consumption at all. The two are architecturally incompatible: Flex Consumption requires the worker process model to manage per-function scaling, and in-process has no worker process to manage.

If you are evaluating hosting options for a new function app, that decision is already made for you: Flex Consumption is isolated-only, and isolated is where all future platform investment is going. Starting on in-process today means either migrating before you can move to Flex Consumption, or accepting a hosting model that cannot take advantage of the newest platform capabilities.


Configuration: two surfaces, not one

After migration, one category of bug appears repeatedly: a developer configures something in Program.cs and it has no effect on trigger behavior. The root cause is always the same: the isolated model has two separate configuration surfaces, one for the host process and one for the worker process.

Host configuration governs triggers, bindings, and scaling. Two sources feed it:

  • host.json: extension settings, retry policies, connection concurrency, scale behavior.
  • Environment variables and Azure App Service application settings: connection strings that the host uses to connect to Service Bus, Storage, Event Hub, and other binding sources.

Worker configuration governs your application code. Two sources feed it:

  • appsettings.json: loaded automatically by FunctionsApplication.CreateBuilder(), accessible through IConfiguration.
  • Anything you wire up in Program.cs: additional configuration providers, secrets, feature flags.

The critical rule: connection strings for bindings go in environment variables (or App Service application settings in production), not in appsettings.json. The host process that initializes the Service Bus trigger listener or the Storage queue poller reads from environment variables. It cannot read your worker's appsettings.json. A connection string that lives only in appsettings.json will work fine for any code in your worker that reads it directly (for example, an Azure SDK client you construct manually) but will cause the binding itself to fail at startup with a cryptic "missing connection string" error.
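To make the split concrete: the Connection property on a trigger attribute names an app setting / environment variable, which the host resolves when it builds the listener. A sketch (queue and setting names are illustrative):

```csharp
using Microsoft.Azure.Functions.Worker;

public class OrderHandler
{
    // "ServiceBusConnection" is resolved by the HOST from environment
    // variables (or App Service application settings). Putting the value
    // only in appsettings.json breaks the trigger at startup.
    [Function("HandleOrder")]
    public void Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string message)
    {
        // process the message
    }
}
```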

Locally, local.settings.json maps its Values section into environment variables when the Functions host starts, which is why everything works in local development even when you have not thought carefully about this split. In Azure, you configure application settings in the portal or via deployment scripts, and they become environment variables for both processes.


Migration gotchas worth knowing in advance

The following issues are not obvious from the migration documentation and tend to surface late, when you are integrating and testing rather than making mechanical code changes.

  • ILogger as a method parameter is gone. The runtime no longer injects it. Every function that currently takes ILogger log as a parameter needs to become an instance class with a constructor that accepts ILogger<T>. In large codebases, this touches many files.

  • JSON serialization changes silently. Moving from Newtonsoft.Json to System.Text.Json changes how your bindings serialize and deserialize data. [JsonProperty] becomes [JsonPropertyName]. Null values, camelCase defaults, and enum handling all differ. A function that processes messages from a queue may silently start deserializing them incorrectly if the attribute names change. Test every binding that touches JSON.

  • Application Insights log filtering moves from host.json to Program.cs. Log levels configured under logging.logLevel in host.json no longer apply to code running in the worker process. To filter worker logs, call ConfigureFunctionsApplicationInsights() in Program.cs and configure the log level there. Without this, you may find your worker logs missing from Application Insights, or flooded with debug output you expected to filter out.

  • IAsyncCollector<T> has no direct equivalent. If your functions write multiple messages to a queue or table using IAsyncCollector<T>, replace it with an array property on a dedicated return type. IAsyncCollector<string> becomes string[] on a record that your function returns.

  • Blob binding attribute properties changed. [Blob("container/path", FileAccess.Write)] does not exist in the isolated extension. The equivalent is [BlobOutput("container/path")]. The FileAccess enum property was removed; the direction is now expressed by whether you use [BlobInput] or [BlobOutput]. This is a compilation error, which means you will catch it, but expect to update every blob binding attribute.

  • Custom configuration in Program.cs is invisible to the host. If you read a connection string from appsettings.json in Program.cs and wire it up to a service, that configuration does not flow to the binding runtime. Trigger connections must come from environment variables. This is a duplicate of the two-surfaces rule above, but it is worth restating because it manifests as a confusing runtime error rather than a compilation failure.

  • FUNCTIONS_WORKER_RUNTIME and the deployed code must change together. If the app setting in Azure says dotnet but you deploy isolated code (or the reverse), the function app enters an error state on startup. Use deployment slots to change the app setting and deploy the code atomically, then validate in staging before swapping to production.
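The IAsyncCollector replacement from the list above can be sketched like this (queue names are illustrative):

```csharp
using Microsoft.Azure.Functions.Worker;

// In-process shape, for comparison:
//   public static async Task Run(..., IAsyncCollector<string> output)
//   { await output.AddAsync("a"); await output.AddAsync("b"); }

public record FanOutResult(
    [property: QueueOutput("notifications")] string[] Notifications
);

public class FanOutFunction
{
    [Function("FanOut")]
    public FanOutResult Run([QueueTrigger("orders")] string order)
    {
        // Each array element becomes one message on the output queue.
        return new FanOutResult(new[] { $"email:{order}", $"sms:{order}" });
    }
}
```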


Where to go from here

The isolated model is the foundation everything else in this series sits on. The HTTP trigger patterns in Part 2 and the timer, queue, and blob triggers in Part 3 all assumed isolated; now you know why those patterns look the way they do. The local development setup in Part 4 works as it does because the worker process can be debugged independently of the host. The architecture is not incidental; it shapes every practical detail.

If you are migrating an existing in-process app, the Microsoft migration guide walks through the steps with tooling support, including a migration tool that handles some of the mechanical changes automatically. Use it as a checklist, but read the sections on JSON serialization and Application Insights filtering before you declare the migration done: those two areas produce the most post-migration bugs.

If you are starting a new project, start on isolated and .NET 10. The in-process model has no future, and building on it today means doing this migration under deadline pressure later.

Which part of the migration gave you the most trouble, or are you starting fresh with isolated from day one?

