Martin Oehlert
Beyond HTTP: Timer, Queue, and Blob Triggers

Azure Functions for .NET Developers: Series


Every .NET team has that one Windows Service nobody wants to touch: the nightly cleanup job, the queue processor, the file watcher duct-taped to a scheduled task. Azure Functions replaces all three with a few lines of C# and a NuGet package.

In Part 2, you built an HTTP-triggered function, the classic request/response pattern. That's the front door. This article opens the rest of the building: the scheduled jobs, the message processors, and the file handlers that run without anyone clicking a button.

Three trigger types beyond HTTP:

  • Timer triggers: scheduled jobs on a cron expression. Think nightly cleanups, periodic syncs, report generation.
  • Queue triggers: asynchronous message processing. Decouple your services and handle work at your own pace.
  • Blob triggers: react to files being created or updated in Azure Storage. Image processing, data imports, log analysis.

All examples use .NET 10 with the isolated worker model and C# 14, same as Part 2. Each trigger gets a real-world scenario, working code, and guidance on when to pick it over the alternatives.

Prerequisites

If you followed Part 2, you already have .NET 10 SDK, Azure Functions Core Tools v4, and VS Code installed. That setup still applies. Head back to that post if you need to get your environment ready.

One tool that becomes essential this time: Azurite. For HTTP triggers it was optional, but timer, queue, and blob triggers all rely on Azure Storage for checkpoint and lease management. Without Azurite running locally, none of these triggers will start. Make sure it is installed and running before you continue.

One thing from Part 2's Program.cs you won't need here: ConfigureFunctionsWebApplication(). That call (and the FrameworkReference to Microsoft.AspNetCore.App) is specific to HTTP triggers with ASP.NET Core integration. For a project with only timer, queue, and blob triggers, your entry point can be as minimal as:

var builder = FunctionsApplication.CreateBuilder(args);
builder.Build().Run();

You will also need three new NuGet packages, one per trigger type. Each trigger section below shows its specific package, and the Putting It All Together section at the end has the complete .csproj reference with all packages in one place.

Timer Triggers: Scheduled Jobs

What They Are

If you've ever set up a cron job on Linux or a scheduled task on Windows, timer triggers will feel instantly familiar. They're the serverless equivalent: code that runs on a schedule, without you managing the scheduler infrastructure. Think cleanup jobs that purge stale records, nightly report generation, periodic health checks against downstream APIs, cache warming, or syncing data between systems on a regular cadence.

CRON Expression Syntax

Azure Functions uses NCrontab format, and here's the first thing that trips people up: it's a six-field expression, not the standard five-field cron you probably know from Linux. The extra field at the beginning represents seconds.

The format is: {second} {minute} {hour} {day} {month} {day-of-week}

Here are the schedules you'll reach for most often:

Schedule                  Expression
Every 5 minutes           0 */5 * * * *
Every hour                0 0 * * * *
Daily at 2 AM UTC         0 0 2 * * *
Weekdays at 9 AM          0 0 9 * * 1-5
First of every month      0 0 0 1 * *

All times are UTC by default. If you need a different timezone, set the WEBSITE_TIME_ZONE app setting to a valid timezone identifier (like Eastern Standard Time on Windows or America/New_York on Linux). Don't skip this if your business logic is timezone-sensitive: "daily at 2 AM" means very different things in UTC versus your local time.
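
As a rough sketch, the setting is just an app setting, shown here in local.settings.json form for illustration (set it in App Settings for a deployed app; support varies by plan and OS, so double-check it against your hosting setup):

{
  "Values": {
    "WEBSITE_TIME_ZONE": "America/New_York"
  }
}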

And seriously, remember the six fields. If you paste a five-field expression from Stack Overflow, you'll get a cryptic startup error. The leading 0 for seconds is easy to forget.

Code Sample

Add the timer extension to your project:

<PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Timer" Version="4.3.1" />

Here's a timer function that runs a daily cleanup job at 2 AM UTC:

using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace TriggerDemo;

public class CleanupFunction(ILogger<CleanupFunction> logger)
{
    [Function(nameof(CleanupFunction))]
    public Task Run(
        [TimerTrigger("0 0 2 * * *")] TimerInfo timer)
    {
        logger.LogInformation("Running daily cleanup at {Time}", DateTime.UtcNow);

        if (timer.IsPastDue)
        {
            logger.LogWarning("Timer is running late — a scheduled run was missed");
        }

        logger.LogInformation(
            "Next occurrence: {Next}",
            timer.ScheduleStatus?.Next);

        // Cleanup logic here
        return Task.CompletedTask;
    }
}

The class uses a primary constructor (CleanupFunction(ILogger<CleanupFunction> logger)) to inject the logger directly. No boilerplate field assignments needed. This is the same dependency injection pattern we used with HTTP triggers.

The [Function(nameof(CleanupFunction))] attribute registers the function with the host. Using nameof() instead of a hardcoded string keeps things refactor-safe: rename the class and the function name follows automatically.

The [TimerTrigger("0 0 2 * * *")] attribute is where you specify the CRON expression. This one fires once daily at 2 AM UTC.

The TimerInfo parameter is what the runtime hands you. It exposes two properties you'll actually use: IsPastDue tells you if this execution was delayed (the host was down or restarting when the trigger should have fired), and ScheduleStatus gives you Last, Next, and LastUpdated timestamps so you can log or act on schedule metadata.

Externalizing the Schedule

Hardcoding a CRON expression works for demos, but in real projects you'll want different schedules per environment. Your dev cleanup doesn't need to run at the same cadence as production. Use the %AppSettingName% syntax to pull the schedule from configuration:

[TimerTrigger("%CLEANUP_SCHEDULE%")]

Then define CLEANUP_SCHEDULE in your local.settings.json for local development and in App Settings for each deployed environment. Now you can run every 30 seconds in dev and every 24 hours in production without changing a line of code.
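
For local development, that might look like this in local.settings.json (the every-30-seconds value below is just an example):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "CLEANUP_SCHEDULE": "*/30 * * * * *"
  }
}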

Key Behaviors and Gotchas

Singleton execution. Only one instance of a timer function runs at a time, even when your function app is scaled out to multiple instances. The runtime uses blob leases to coordinate this. You don't need to build your own distributed locking.

UseMonitor. Defaults to true for schedules with intervals of one minute or longer. When enabled, the runtime persists schedule status to blob storage so it can detect missed executions across restarts. That's how IsPastDue works. For sub-minute schedules, it's disabled automatically.

RunOnStartup. Setting this to true fires the function immediately whenever the host starts. Sounds convenient for testing, but it's a trap in production. Every deployment, every scale-out event, every platform restart triggers your function. Leave it off.

No automatic retry by default. If your timer function throws an exception, the runtime won't retry it. It simply waits for the next scheduled occurrence. If your cleanup job fails at 2 AM, it won't run again until 2 AM tomorrow. For critical work, add a retry policy to the method: [FixedDelayRetry(5, "00:00:10")] retries up to five times with a ten-second delay between attempts. One caveat: timer retries don't survive instance failures. If the host crashes mid-retry sequence, the count is lost and won't resume on a new instance.
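
Applied to the cleanup function above, the retry policy is just another attribute on the method (a sketch; tune the count and delay to your own tolerance for repeated failures):

[Function(nameof(CleanupFunction))]
[FixedDelayRetry(5, "00:00:10")]
public Task Run(
    [TimerTrigger("0 0 2 * * *")] TimerInfo timer)
{
    // Cleanup logic here; any unhandled exception triggers up to five in-process retries
    return Task.CompletedTask;
}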

Testing Timer Functions Locally

You don't want to wait until 2 AM to test your function. During local development, you can trigger any timer function on demand by POSTing to the admin endpoint:

curl -X POST http://localhost:7071/admin/functions/CleanupFunction \
  -H "Content-Type: application/json" \
  -d '{"input": null}'

This returns a 202 Accepted and fires the function immediately, regardless of the CRON schedule. It's a lifesaver, especially for schedules like "first of every month" where waiting for the natural trigger isn't exactly practical. The same admin endpoint works for any function type, but you'll use it most with timers.

Queue Triggers: Async Message Processing

Not everything needs to happen right now. When a user places an order, they don't need to wait while you generate an invoice, send a confirmation email, and update your analytics dashboard. Queue triggers let you drop a message onto an Azure Storage Queue and process it asynchronously, decoupling the thing that creates work from the thing that does work.

This is the backbone of reliable distributed systems. Producers and consumers scale independently, failures don't cascade, and traffic spikes get absorbed by the queue instead of hammering your downstream services.

Common use cases: order processing, sending emails, image or document processing, fan-out patterns where one event kicks off multiple downstream tasks.

Code Sample: Order Processor

Add the queue storage extension to your project:

<PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues" Version="5.5.3" />

Here's a queue-triggered function that deserializes and processes an order message:

using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace TriggerDemo;

public record OrderMessage(string OrderId, string CustomerId, decimal Amount);

public class OrderProcessor(ILogger<OrderProcessor> logger)
{
    [Function(nameof(OrderProcessor))]
    public async Task Run(
        [QueueTrigger("orders", Connection = "AzureWebJobsStorage")]
        OrderMessage order)
    {
        logger.LogInformation(
            "Processing order {OrderId} for {CustomerId}: {Amount:C}",
            order.OrderId, order.CustomerId, order.Amount);

        // Process order: await downstream calls, database writes, and so on
        await Task.CompletedTask; // placeholder so the async method compiles without warnings
    }
}

The OrderMessage record defines the shape of your queue message. Records are perfect here: they're immutable, concise, and the positional syntax gives you a clean one-liner. The runtime deserializes the JSON message body straight into this type automatically.

QueueTrigger("orders") tells the runtime to watch the queue named orders. Every time a message lands there, your function fires.

The Connection parameter isn't the connection string itself; it's the name of the setting that holds the connection string. Locally, that's a key in local.settings.json. In Azure, it's an app setting or a Key Vault reference.

The runtime handles auto-deserialization from JSON to your POCO or record. But you're not limited to custom types; you can also bind to string (raw message content), byte[], BinaryData, or QueueMessage if you need access to metadata like dequeue count or insertion time.
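
For example, binding to QueueMessage instead of a custom type exposes that metadata directly. A minimal sketch, with placeholder names:

using Azure.Storage.Queues.Models;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class OrderInspector(ILogger<OrderInspector> logger)
{
    [Function(nameof(OrderInspector))]
    public void Run(
        [QueueTrigger("orders", Connection = "AzureWebJobsStorage")]
        QueueMessage message)
    {
        // DequeueCount and InsertedOn come straight from the storage SDK message
        logger.LogInformation(
            "Message {Id}: dequeued {Count} time(s), inserted {Inserted}",
            message.MessageId, message.DequeueCount, message.InsertedOn);
    }
}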

Queue Output Binding

Often you'll want to chain processing steps: read from one queue, do some work, and drop a message onto another queue. Output bindings handle this:

[Function(nameof(OrderProcessor))]
[QueueOutput("notifications", Connection = "AzureWebJobsStorage")]
public string Run(
    [QueueTrigger("orders", Connection = "AzureWebJobsStorage")]
    OrderMessage order)
{
    logger.LogInformation("Processing order {OrderId}", order.OrderId);
    return $"Order {order.OrderId} processed for {order.CustomerId}";
}

The pattern here is simple: the function's return value becomes the output message. The QueueOutput attribute on the method tells the runtime where to send it. Whatever string you return gets written to the notifications queue. Each function does one job and passes work to the next stage.

You'll notice this version is synchronous: public string Run(...) instead of async Task Run(...). When the only job is to transform input and return an output message, there's no async work involved. If you need to do async work before emitting the output message, change the return type to Task<string> and await normally inside.
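
If the processing does involve async work, a sketch of that variant could look like this (SendInvoiceAsync is a hypothetical helper standing in for your real async call):

[Function(nameof(OrderProcessor))]
[QueueOutput("notifications", Connection = "AzureWebJobsStorage")]
public async Task<string> Run(
    [QueueTrigger("orders", Connection = "AzureWebJobsStorage")]
    OrderMessage order)
{
    await SendInvoiceAsync(order); // hypothetical async step
    return $"Order {order.OrderId} processed for {order.CustomerId}";
}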

Poison Queue Handling

What happens when processing fails? The runtime retries. And retries again. By default, if a message fails 5 times (the maxDequeueCount), it gets moved to a special queue named {queue-name}-poison (in our case, orders-poison).

You can tune this in host.json:

{
  "extensions": {
    "queues": {
      "maxDequeueCount": 3
    }
  }
}

Poison queues are your safety net, but only if you're watching them. A message sitting in a poison queue means something broke and data isn't being processed. Set up alerts on the poison queue message count. Azure Monitor can do this, or you can write a simple timer trigger (see what we did there?) that checks the count and pings your team in Slack or Teams. Don't let poison messages pile up silently.
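
A minimal sketch of that watchdog timer, assuming the alert is just a warning log for now (swap in your Slack or Teams webhook call where indicated):

using Azure.Storage.Queues;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class PoisonQueueMonitor(ILogger<PoisonQueueMonitor> logger)
{
    [Function(nameof(PoisonQueueMonitor))]
    public async Task Run([TimerTrigger("0 */15 * * * *")] TimerInfo timer)
    {
        var client = new QueueClient(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"),
            "orders-poison");

        if (!await client.ExistsAsync())
        {
            return; // no poison queue yet means no failed messages yet
        }

        var properties = await client.GetPropertiesAsync();
        if (properties.Value.ApproximateMessagesCount > 0)
        {
            logger.LogWarning(
                "{Count} message(s) sitting in orders-poison, someone should take a look",
                properties.Value.ApproximateMessagesCount);
            // Hypothetical: post to your team's Slack or Teams webhook here
        }
    }
}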

Scaling and Concurrency

The Functions runtime continuously polls the queue and scales out based on queue length. More messages means more instances; you don't configure this, it just happens.

What you can configure in host.json:

  • batchSize (default 16): how many messages each instance grabs at once
  • newBatchThreshold (default batchSize / 2 on Consumption plan, scaled by vCPU count on App Service and Premium plans): when to fetch the next batch
  • visibilityTimeout: how long a message stays invisible to other consumers while being processed

One critical thing to understand: there are no ordering guarantees. Messages may be processed out of order, and with at-least-once delivery semantics, the same message could be processed more than once. This means your processing logic must be idempotent, meaning safe to run twice with the same input without creating duplicate side effects. Use an idempotency key (like the OrderId above) and check whether you've already processed a message before doing the work. Skip this, and you'll eventually process the same order twice on a busy day.
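
Here's a sketch of what that check could look like. IProcessedOrderStore is a hypothetical abstraction (register an implementation with builder.Services in Program.cs); in practice it might be backed by table storage, a database, or a cache with a sensible expiry:

using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public interface IProcessedOrderStore
{
    Task<bool> HasProcessedAsync(string orderId);
    Task MarkProcessedAsync(string orderId);
}

public class IdempotentOrderProcessor(
    IProcessedOrderStore store,
    ILogger<IdempotentOrderProcessor> logger)
{
    [Function(nameof(IdempotentOrderProcessor))]
    public async Task Run(
        [QueueTrigger("orders", Connection = "AzureWebJobsStorage")]
        OrderMessage order)
    {
        // OrderId doubles as the idempotency key
        if (await store.HasProcessedAsync(order.OrderId))
        {
            logger.LogInformation("Order {OrderId} already processed, skipping", order.OrderId);
            return;
        }

        // ... do the real work ...

        await store.MarkProcessedAsync(order.OrderId);
    }
}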

Testing Locally

Start Azurite to get a local storage emulator running (you should already have it from the timer trigger section). Your function will connect using the UseDevelopmentStorage=true connection string.

To put test messages on the queue, you've got a few options:

  • Azure Storage Explorer: right-click the queue and add a message manually. Great for quick one-offs.
  • Azure CLI: az storage message put --queue-name orders --content '{"OrderId":"123","CustomerId":"C1","Amount":49.99}' with the --connection-string flag pointing at Azurite.
  • An HTTP trigger: create a simple function that accepts an order via HTTP and writes it to the queue. This is genuinely useful beyond testing; it's often how messages end up on queues in production too.

Whichever approach you choose, watch the terminal output from func start; you'll see your function pick up the message within seconds.
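
If you go the HTTP-trigger route, a minimal sketch could look like the following. It uses the multi-output pattern so the function can both enqueue the order and return an HTTP response, and it assumes the HTTP extension from Part 2 is still installed:

using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class OrderSubmitter
{
    public class Output
    {
        [QueueOutput("orders", Connection = "AzureWebJobsStorage")]
        public OrderMessage? Order { get; set; }

        public HttpResponseData? Response { get; set; }
    }

    [Function(nameof(OrderSubmitter))]
    public async Task<Output> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        // Deserialize the request body into the same shape the queue trigger expects
        var order = await req.ReadFromJsonAsync<OrderMessage>();

        var response = req.CreateResponse(
            order is null ? HttpStatusCode.BadRequest : HttpStatusCode.Accepted);

        return new Output { Order = order, Response = response };
    }
}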

Blob Triggers: File Processing

Files show up and need processing. A customer uploads an invoice PDF, a partner drops a data feed into storage, a mobile app pushes photos for moderation. You could poll for new files on a timer, but that's wasteful and slow. Blob triggers let your function react to files being created or updated in Azure Blob Storage. The moment a blob lands, your code runs.

This is the pattern behind image processing pipelines (generate thumbnails on upload), file format conversion (CSV lands, gets parsed into database rows), log ingestion (application dumps a log file, your function indexes it), data imports, and virus scanning workflows. Anywhere you'd previously have a Windows Service watching a folder or a scheduled job sweeping a directory, blob triggers are the serverless replacement.

Blob Trigger vs Input Binding

This distinction confuses people coming from a pure HTTP background, but it matters. Azure provides two blob-related attributes, and they solve different problems:

                     Blob Trigger                     Blob Input Binding
Purpose              Starts function execution        Reads data during execution
When it fires        New or modified blob detected    After another trigger fires
Attribute            [BlobTrigger]                    [BlobInput]
Typical use case     React to file uploads            Read a known config or template file

A blob trigger is the event source; it causes your function to run. A blob input binding is a convenience for reading a blob inside a function that was triggered by something else. For example, you might have a queue trigger that processes orders, and that function needs to read a pricing template stored as a blob. The queue message triggers execution; the blob input binding reads the template. Two different roles, two different attributes.

Don't use a blob trigger when you already know which blob you need. And don't use an input binding when you need to react to new files arriving. That's the trigger's job.

Code Sample: Image Processor

Add the blob storage extension to your project:

<PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs" Version="6.8.0" />

Here's a function that watches an uploads container and generates a thumbnail every time an image arrives:

using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace TriggerDemo;

public class ImageProcessor(ILogger<ImageProcessor> logger)
{
    [Function(nameof(ImageProcessor))]
    [BlobOutput("thumbnails/{name}", Connection = "AzureWebJobsStorage")]
    public byte[] Run(
        [BlobTrigger("uploads/{name}", Connection = "AzureWebJobsStorage")]
        byte[] imageData,
        string name)
    {
        logger.LogInformation("Processing uploaded image: {Name} ({Size} bytes)",
            name, imageData.Length);

        // Generate thumbnail...
        byte[] thumbnailBytes = GenerateThumbnail(imageData);
        return thumbnailBytes;
    }

    private static byte[] GenerateThumbnail(byte[] imageData)
    {
        // Thumbnail generation logic
        return imageData; // placeholder
    }
}

The [BlobTrigger("uploads/{name}")] attribute is doing two things at once. First, it tells the runtime to watch the uploads container for new or modified blobs. Second, the {name} path pattern is a binding expression; it captures the blob's filename and makes it available as a method parameter. When someone uploads uploads/profile-photo.jpg, the name parameter receives "profile-photo.jpg". This is how you know which file triggered your function.

The Connection parameter works the same way it does with queue triggers; it's the name of the app setting that holds your storage connection string, not the connection string itself. AzureWebJobsStorage points to Azurite during local development.

The method parameter byte[] imageData receives the entire blob content as a byte array. This is the simplest binding type and works well when your files fit comfortably in memory. But you're not limited to byte[]; the runtime supports several binding types depending on your needs:

  • Stream: best for large files where you want to process data without loading everything into memory at once
  • BinaryData: a modern Azure SDK type that wraps binary content with convenient conversion methods
  • string: for text-based blobs like CSV, JSON, or log files
  • byte[]: when you need the raw bytes (image processing, binary formats)
  • BlobClient: gives you the full Azure Storage SDK client, useful when you need blob metadata, properties, or want to copy/move the blob rather than just read it
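
For the large-file case, a Stream-based sketch might look like this, reading a text blob line by line instead of pulling it all into memory (container and function names are placeholders):

[Function("LargeFileProcessor")]
public async Task Run(
    [BlobTrigger("uploads/{name}", Connection = "AzureWebJobsStorage")]
    Stream blobStream,
    string name)
{
    using var reader = new StreamReader(blobStream);

    string? line;
    while ((line = await reader.ReadLineAsync()) is not null)
    {
        // Process each line without buffering the whole blob
    }
}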

The [BlobOutput("thumbnails/{name}")] attribute on the method tells the runtime to write the return value to a different container. Notice the {name} pattern appears here too, reusing the same captured filename, so uploads/profile-photo.jpg produces thumbnails/profile-photo.jpg. The function's return type matches the output: return a byte[], and that's what gets written. This input-container-to-output-container pattern is the standard approach for blob processing, and the separation is deliberate (more on why in the gotchas section).

Path Patterns and Filtering

The path pattern in [BlobTrigger] is more powerful than a simple container name. You can use it to filter which blobs trigger your function and to extract structured metadata from your blob paths.

Filter by file extension to react only to specific file types:

[BlobTrigger("uploads/{name}.csv", Connection = "AzureWebJobsStorage")]

This function only fires for CSV files. Upload a .csv and the function runs; upload a .png and nothing happens. The name parameter receives just the filename without the extension. Upload report-2026.csv and name is "report-2026".

Extract path segments when your storage uses a meaningful directory structure:

[BlobTrigger("uploads/{category}/{filename}.{ext}", Connection = "AzureWebJobsStorage")]
public void Run(byte[] data, string category, string filename, string ext)
Enter fullscreen mode Exit fullscreen mode

Now each segment of the path binds to its own parameter. A blob at uploads/invoices/INV-2026-001.pdf gives you category = "invoices", filename = "INV-2026-001", and ext = "pdf". You can route processing logic based on the category, log the extension, or construct output paths dynamically.

Design your container directory structure with these patterns in mind. A flat dump of files into a single container works, but a structured layout like uploads/{department}/{year}/{filename} gives you filtering and metadata for free.

Key Gotchas

Infinite loops will ruin your day. If your function triggers on blobs in the uploads container and writes output back to the same uploads container, the output blob triggers the function again. Which writes another blob. Which triggers again. This loop runs until you notice it, your storage bill spikes, or you hit a concurrency limit. Always use separate containers for input and output. The code sample above reads from uploads and writes to thumbnails. That separation is intentional and non-negotiable. If you remember one thing from this section, make it this.

Latency is worse than you'd expect. The default blob trigger uses a polling mechanism that scans for changes. On the Consumption plan, this can take up to 10 minutes to detect a new blob. That's not a typo. Minutes, not seconds. If near-instant detection matters for your scenario (and it usually does), use the Event Grid-based blob trigger instead. It subscribes to Azure Event Grid notifications and fires within seconds of blob creation. For anything going to production, Event Grid is the recommended approach. The polling-based trigger is fine for development and low-urgency workloads, but don't ship it to production assuming instant response times. Locally with Azurite, detection typically takes under 60 seconds.

Blob receipts track what's been processed. The runtime stores receipts in the azure-webjobs-hosts container to avoid reprocessing the same blob. If processing fails 5 times, the blob's receipt is marked as poisoned and the runtime stops retrying. Unlike queue triggers, there's no separate "poison container"; the receipt just gets flagged. You'll need to monitor your logs to catch these failures, because nothing moves visibly. If you need to reprocess a failed blob, delete its receipt from azure-webjobs-hosts and the runtime will pick it up again.

Cold starts compound the latency problem. On the Consumption plan, if your function app has been idle, it may need to cold-start before it can even begin polling for blobs. Combined with the polling interval, you could be looking at significant delays. Two mitigations: use a timer trigger as a keep-alive (a function that runs every few minutes just to keep the host warm), or switch to an App Service Plan with Always On enabled. The Flex Consumption plan also helps here with its pre-warmed instance pool.
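
The keep-alive function itself is about as small as a function gets (a sketch; every five minutes is an arbitrary choice):

public class KeepAlive(ILogger<KeepAlive> logger)
{
    [Function(nameof(KeepAlive))]
    public void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        // Doing nothing useful on purpose: the point is to keep the host warm
        logger.LogDebug("Keep-alive ping at {Time}", DateTime.UtcNow);
    }
}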

Choosing the Right Trigger

You've now seen three trigger types in action: timer, queue, and blob. Each one solves a fundamentally different problem, and picking the right one usually comes down to a single question: what event starts the work?

If the answer is "a clock" (something needs to happen at a specific time or on a recurring interval), that's a timer trigger. If the answer is "a message" (another service is telling you there's work to do), that's a queue trigger. If the answer is "a file showed up" (someone uploaded something, or an external system dropped a CSV into storage), that's a blob trigger.

In practice, the choice is usually obvious once you frame it that way. But here's a quick reference for common scenarios:

Scenario                             Trigger   Why
Run at 2 AM daily                    Timer     Schedule-based, no external event needed
Process uploaded files               Blob      Reacts to blob storage events
Background task after user action    Queue     Decouple request from processing
Process orders asynchronously        Queue     Reliable message processing with retries
Health check every 5 minutes         Timer     Periodic polling
ETL pipeline from file drops         Blob      File arrival triggers processing

The same decision, expressed as a flowchart:

flowchart TD
    A([What starts the work?]) --> B{Clock or schedule?}
    B -->|Yes| Timer[⏰ Timer Trigger]
    B -->|No| C{Message from another service?}
    C -->|Yes| Q{Need ordering or pub/sub?}
    Q -->|No| Queue[📨 Queue Trigger\nStorage Queue]
    Q -->|Yes| SB[🚌 Service Bus Trigger]
    C -->|No| D{File arrived in storage?}
    D -->|Yes| Blob[📁 Blob Trigger]
    D -->|No| HTTP([HTTP Trigger\nor other event])

Timer for time, queue for commands, blob for files. If you're unsure, ask yourself whether the trigger is driven by the clock, by another service's intent, or by data arriving in storage. The answer almost always maps cleanly to one of the three.

And in many real-world systems, you'll combine them. A timer trigger runs nightly to check for missing invoices, drops a message per missing invoice onto a queue, and the queue trigger processes each one. Or a blob trigger picks up an uploaded CSV, validates it, and enqueues individual rows for downstream processing. These triggers compose naturally. That's by design.
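
Here's a sketch of that second combination, a blob trigger fanning CSV rows out to a queue. The container and queue names are placeholders, and it leans on the queue output binding's support for writing one message per element of a returned array:

public class CsvFanOut(ILogger<CsvFanOut> logger)
{
    [Function(nameof(CsvFanOut))]
    [QueueOutput("csv-rows", Connection = "AzureWebJobsStorage")]
    public string[] Run(
        [BlobTrigger("imports/{name}.csv", Connection = "AzureWebJobsStorage")]
        string csvContent,
        string name)
    {
        // Each data row becomes its own queue message for downstream processing
        var rows = csvContent
            .Split('\n', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries)
            .Skip(1) // drop the header row
            .ToArray();

        logger.LogInformation("Fanned out {Count} rows from {Name}.csv", rows.Length, name);
        return rows;
    }
}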

When to Consider Service Bus Over Storage Queues

Everything in this article uses Azure Storage Queues; they're simple, cheap, and included with any storage account you're already using. For most use cases, they're the right starting point.

But Storage Queues have limits, and you'll hit them eventually. Here's when to look at Azure Service Bus instead:

  • You need guaranteed ordering. Storage Queues offer no ordering guarantees at all. Service Bus sessions give you FIFO processing for messages that share a session ID, which is essential for scenarios where event order matters, like processing state changes for the same entity.
  • You need pub/sub. Storage Queues are point-to-point: one message, one consumer. If the same event needs to reach multiple subscribers (say, an order placed event triggers inventory, billing, and notifications), Service Bus topics and subscriptions let multiple consumers each get their own copy.
  • Your messages exceed 64 KB. That's the Storage Queue message size limit. Service Bus supports messages up to 256 KB on the Standard tier and 100 MB on Premium. If you're passing around documents or large payloads, Storage Queues won't cut it.
  • You need dead-letter queues with inspection. Storage Queues have the poison queue mechanism we covered earlier, but Service Bus dead-letter queues are richer; you can peek messages, inspect failure reasons, and resubmit without custom tooling.
  • You need duplicate detection or transactions. Service Bus can automatically detect and discard duplicate messages within a configurable window, and it supports transactions that span multiple queues or topics. Storage Queues have neither.

The rule of thumb: start with Storage Queues. Upgrade to Service Bus when you hit a specific limitation. Don't pay the complexity and cost premium upfront "just in case." Storage Queues handle the vast majority of message processing scenarios, and migrating later is not painful. Your function code barely changes. You swap QueueTrigger for ServiceBusTrigger, update the connection string, and adjust the NuGet package. The processing logic inside stays the same.
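
To make that concrete, here's roughly what the swap looks like. This sketch assumes the Microsoft.Azure.Functions.Worker.Extensions.ServiceBus package is installed and a ServiceBusConnection app setting points at your namespace:

public class OrderProcessorServiceBus(ILogger<OrderProcessorServiceBus> logger)
{
    [Function(nameof(OrderProcessorServiceBus))]
    public void Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")]
        OrderMessage order)
    {
        // Identical processing logic; only the trigger attribute and connection changed
        logger.LogInformation("Processing order {OrderId}", order.OrderId);
    }
}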

If you're starting a new project and genuinely aren't sure, go with Storage Queues. You'll know when you've outgrown them. It'll be the moment you need ordering, pub/sub, or messages bigger than 64 KB. Until then, keep it simple.

Testing All Triggers Locally with Azurite

You've seen the individual trigger sections mention Azurite in passing. Let's talk about why it's required and how to set up a smooth local development workflow for all three trigger types at once.

Timer, queue, and blob triggers all depend on Azure Storage behind the scenes, even if your function doesn't explicitly use queues or blobs. Timer triggers store checkpoint data in blob storage so the runtime knows when the last execution happened (that's how IsPastDue works). Queue triggers obviously need a queue service. Blob triggers use blob storage for both the trigger source and internal bookkeeping through blob receipts. Without a storage account, none of these triggers can even start.

Azurite is Microsoft's official local storage emulator. It gives you blob, queue, and table services running on your machine. No Azure subscription needed, no network latency, no cost. You connect to it with a well-known connection string, and your functions behave exactly as they would against real Azure Storage.

Configuration

Your local.settings.json needs exactly two settings to make everything work:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
  }
}

UseDevelopmentStorage=true is a shorthand connection string that the Azure SDK recognizes. It tells the runtime to connect to Azurite's default endpoints on localhost: port 10000 for blobs, 10001 for queues, and 10002 for tables. No keys, no account names, no URLs to configure.

Starting Azurite

Open a terminal and run:

npx azurite --silent --location /tmp/azurite

The --silent flag suppresses the per-request logging that clutters your terminal (you've got enough output from func start). The --location flag tells Azurite where to persist data on disk. Use /tmp/azurite for throwaway test data, or a project-local folder if you want data to survive reboots. Azurite will create the folder if it doesn't exist.

Leave this running in its own terminal window. Then, in a second terminal, start your function app:

func start

The host will connect to Azurite, create its internal containers (azure-webjobs-hosts for checkpoints, etc.), and register all three trigger types. If Azurite isn't running, you'll see connection-refused errors and the host will fail to start, so always start Azurite first.

A Practical Testing Workflow

Once both Azurite and the function host are running, here's how to exercise all three triggers in a single session:

Timer triggers are the easiest. POST to the admin endpoint:

curl -X POST http://localhost:7071/admin/functions/CleanupFunction \
  -H "Content-Type: application/json" \
  -d '{"input": null}'

You'll get a 202 Accepted and see the function execute immediately in the terminal output. No waiting for the CRON schedule.

Queue triggers need a message on the queue. Use Azure Storage Explorer, the Azure CLI, or a quick curl to push a JSON message:

az storage message put \
  --queue-name orders \
  --content '{"OrderId":"ORD-001","CustomerId":"C42","Amount":129.99}' \
  --connection-string "UseDevelopmentStorage=true"

Your OrderProcessor function will pick it up within seconds.

Blob triggers fire when you upload a file to the watched container. Drop a test file in using the CLI:

az storage blob upload \
  --container-name uploads \
  --name test-image.png \
  --file ./test-image.png \
  --connection-string "UseDevelopmentStorage=true"

Detection can take up to 60 seconds locally given the polling-based mechanism. If your function doesn't fire immediately, give it a minute before assuming something is wrong. For production, you'd use Event Grid for near-instant detection, but locally the default polling works fine.

The admin endpoint (POST http://localhost:7071/admin/functions/{FunctionName} with {"input": null}) works for all function types, not just timers. It's your universal "just run this thing right now" button during development.

Putting It All Together

If you've been following along, you already have the individual NuGet packages installed. Here's the complete .csproj <ItemGroup> with everything you need for HTTP, timer, queue, and blob triggers in a single project:

<ItemGroup>
  <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="2.51.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="2.0.7" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore" Version="2.1.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Timer" Version="4.3.1" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues" Version="5.5.3" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs" Version="6.8.0" />
</ItemGroup>

The first three packages are the foundation: the worker runtime, the SDK tooling, and the HTTP extension you set up in Part 2. The last three are one extension per trigger type. That's the pattern: each trigger type ships as its own NuGet package. You only install what you use, and you can add triggers incrementally as your project grows. Starting with just timers? Leave out the queue and blob packages. Adding blob processing next sprint? Drop in the one-liner and you're ready.

The extensions share the Microsoft.Azure.Functions.Worker.Extensions.* naming convention, which makes them easy to discover. If you find yourself searching for a new trigger type down the road (Service Bus, Cosmos DB, Event Grid), it'll follow the same Extensions.{ServiceName} pattern.

What's Next

Three more trigger types are now in your toolkit. Timer triggers use six-field CRON expressions and run as singletons even when scaled out; if the host was down when a run was due, IsPastDue tells you so. Queue triggers give you at-least-once delivery with automatic retries and a poison queue for messages that keep failing. Blob triggers fire when files arrive, no hand-rolled polling loop required (reach for the Event Grid variant when detection latency matters); separate input and output containers are non-negotiable.

A few principles that cut across all three:

  • Keep functions focused. One function, one job. Chain them with output bindings rather than cramming everything into a single method.
  • Make processing idempotent. Messages can be delivered more than once, timer functions can fire late, blob triggers can re-detect files. Design for "safe to run twice" from day one.
  • Use separate containers for blob I/O. Writing output to the same container you trigger on creates an infinite loop. Always read from one container and write to another.
  • Test locally with Azurite. Every trigger type in this article works without an Azure subscription. There's no excuse for deploying untested functions.
  • Use [LoggerMessage] for production logging. The logger.LogInformation(...) calls in this article are intentionally straightforward, but they box value types (decimal, int, DateTime) on every invocation, even when that log level is disabled. Projects with recommended analyzer settings will flag this as CA1873. The fix is the [LoggerMessage] source generation attribute, which creates strongly-typed log methods at compile time and eliminates the boxing:
[LoggerMessage(Level = LogLevel.Information, Message = "Processing order {OrderId}: {Amount:C}")]
private partial void LogOrderProcessing(string orderId, decimal amount);

This is a performance and code quality concern, not a correctness issue; the inline calls in this article work fine for getting started. One prerequisite if you do adopt it: the containing class must be declared partial so the source generator can emit the method body.

All the code samples from this article are available in the sample repository on GitHub.

Next in the series: Part 4 (Local Development Setup: Tools, Debugging, and Hot Reload) will take your local workflow to the next level. We'll cover debugging with breakpoints, hot reload for faster iteration, and the tooling that makes Azure Functions development feel as smooth as working on a regular ASP.NET Core app.


Which trigger type do you use most in your projects, and is there one from this article you're planning to try for the first time?
