Adding Queue and Cron Handlers to Your Cloudflare Worker (Part 2)

In Part 1, I explained why Queues are the right solution for long-running background jobs. Now let me show you exactly how to set it up.

This is the copy-paste-ready guide I wish I had when I started.


What We're Building

By the end of this post, your Worker will support all three invocation types:

  1. HTTP (fetch) - your API and UI
  2. Cron (scheduled) - automatic periodic jobs
  3. Queue (queue) - on-demand background processing

All in one deployment, using Cloudflare's official Worker runtime.

Let's go step by step.


Step 1: Add Queue Configuration

First, open your wrangler.toml (or wrangler.jsonc) and add a queue binding:

# wrangler.toml

name = "my-worker"
main = "src/index.ts"

[[queues.producers]]
queue = "background-jobs"
binding = "JOB_QUEUE"

[[queues.consumers]]
queue = "background-jobs"
max_batch_size = 10
max_batch_timeout = 30

What this does:

  • producers - lets your Worker send messages to the queue
  • consumers - lets your Worker receive messages from the queue
  • binding - the name you'll use in your code (like env.JOB_QUEUE)
  • max_batch_size - the most messages delivered to your queue handler in a single batch
  • max_batch_timeout - how many seconds to wait for a batch to fill before delivering it anyway

Important: The queue name (background-jobs) connects producers and consumers. Use the same name for both.
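If your project uses wrangler.jsonc instead, the equivalent configuration looks roughly like this (same keys, JSON shape):

// wrangler.jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  "queues": {
    "producers": [
      { "queue": "background-jobs", "binding": "JOB_QUEUE" }
    ],
    "consumers": [
      { "queue": "background-jobs", "max_batch_size": 10, "max_batch_timeout": 30 }
    ]
  }
}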

Now create the queue and deploy:

wrangler queues create background-jobs
wrangler deploy

The queue only needs to be created once; from then on, the bindings in your config hook it up on every deploy. (Wrangler will complain at deploy time if the queue your config references doesn't exist yet.)


Step 2: Add Cron Schedule

While you're in wrangler.toml, add a cron trigger:

# wrangler.toml (continued)

[triggers]
crons = ["0 */6 * * *"]  # Every 6 hours

This uses standard cron syntax:

  • */10 * * * * - every 10 minutes
  • 0 2 * * * - daily at 2am UTC
  • 0 */6 * * * - every 6 hours

You can add multiple schedules:

[triggers]
crons = [
  "0 2 * * *",      # Daily refresh
  "*/30 * * * *"    # Health check every 30 min
]
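With multiple schedules, the scheduled handler can tell which one fired by checking event.cron, which contains the matching cron expression as a string. A small sketch (the two helpers are hypothetical):

// src/index.ts (scheduled handler excerpt)
async scheduled(event, env, ctx) {
  switch (event.cron) {
    case '0 2 * * *':
      await runDailyRefresh(env);   // hypothetical daily job
      break;
    case '*/30 * * * *':
      await runHealthCheck(env);    // hypothetical health check
      break;
  }
}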

Step 3: Add the Handler Functions

Now the fun part. Your Worker needs to export an object with all three handlers:

// src/index.ts

export default {
  // 1️⃣ HTTP Handler
  async fetch(request, env, ctx) {
    const url = new URL(request.url);

    if (url.pathname === '/api/trigger-job') {
      // Enqueue a background job
      await env.JOB_QUEUE.send({
        type: 'heavy-processing',
        timestamp: Date.now(),
        requestedBy: 'admin'
      });

      return new Response('Job queued successfully!', { status: 202 });
    }

    return new Response('Hello World!');
  },

  // 2️⃣ Cron Handler
  async scheduled(event, env, ctx) {
    console.log('[cron] Running scheduled job');

    // Option A: do the work directly (fine for quick tasks)
    await doPeriodicMaintenance(env);

    // Option B: or hand the work off to the queue handler instead.
    // Pick whichever fits; as written, only Option A runs.
    // await env.JOB_QUEUE.send({
    //   type: 'scheduled-job',
    //   scheduledTime: event.scheduledTime
    // });

    console.log('[cron] Job complete');
  },

  // 3️⃣ Queue Handler
  async queue(batch, env, ctx) {
    console.log(`[queue] Processing ${batch.messages.length} messages`);

    for (const message of batch.messages) {
      try {
        const { type } = message.body;

        console.log(`[queue] Starting ${type} job ${message.id}`);
        await processHeavyJob(env, message.body);

        // Mark as successfully processed
        message.ack();

      } catch (error) {
        console.error(`[queue] Job failed:`, error);

        // Return to queue for retry
        message.retry();
      }
    }
  }
};

Key points:

  • Each handler gets env (bindings) and ctx (execution context)
  • message.ack() tells Cloudflare "this message is done"
  • message.retry() puts it back in the queue for another attempt
  • You can enqueue from any handler (HTTP, cron, or even queue)
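The handlers above are written as plain JavaScript in a .ts file. If you want type checking, here's a minimal typed sketch, assuming @cloudflare/workers-types is installed and added to your tsconfig's types array:

// src/index.ts (typed sketch)
interface Env {
  JOB_QUEUE: Queue;  // the producer binding from wrangler.toml
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    await env.JOB_QUEUE.send({ type: 'work' });
    return new Response('Queued!', { status: 202 });
  },

  async queue(batch, env, ctx): Promise<void> {
    for (const message of batch.messages) {
      // message.body is unknown until you narrow it yourself
      message.ack();
    }
  },
} satisfies ExportedHandler<Env>;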

Step 4: Implement Your Job Logic

Here's a clean pattern I use to keep job logic separate:

// src/jobs/processor.ts

export async function processHeavyJob(env, payload) {
  console.log('Starting heavy job...', payload);

  // This can run for minutes with unlimited CPU time!
  await fetchExternalAPIs();
  await processLargeDataset();
  await writeResultsToStorage(env);

  console.log('Job complete!');
}

Then import it in your main Worker:

// src/index.ts
import { processHeavyJob } from './jobs/processor';

export default {
  async queue(batch, env, ctx) {
    for (const message of batch.messages) {
      await processHeavyJob(env, message.body);
      message.ack();
    }
  }
};

This keeps your handlers clean and your job logic testable.
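Because processHeavyJob takes env as an ordinary argument, you can unit test it without spinning up a Worker at all. A minimal sketch using vitest (assuming the placeholder helpers above have been filled in with real logic):

// test/processor.test.ts
import { describe, it, expect } from 'vitest';
import { processHeavyJob } from '../src/jobs/processor';

describe('processHeavyJob', () => {
  it('runs to completion with a stubbed env', async () => {
    const env = {} as any; // stub only the bindings your job actually touches
    await expect(processHeavyJob(env, { type: 'test' })).resolves.toBeUndefined();
  });
});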


Step 5: Trigger It From Your Admin UI

Now when an admin clicks "Run Job" in your UI:

// Inside your fetch handler

if (url.pathname === '/admin/run-job' && request.method === 'POST') {
  // Validate admin auth first!

  await env.JOB_QUEUE.send({
    type: 'admin-triggered',
    userId: 'admin-123',
    priority: 'high'
  });

  return new Response(JSON.stringify({
    success: true,
    message: 'Job queued'
  }), {
    status: 202,
    headers: { 'Content-Type': 'application/json' }
  });
}

The request returns almost immediately (the send is just a quick write to the queue), and the heavy work runs in the background, free of the HTTP request's time limits.
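On the UI side, the button handler is just an ordinary POST. A browser-side sketch (the endpoint matches the handler above; adminToken stands in for however your app handles credentials):

// admin-ui.ts (browser-side sketch)
declare const adminToken: string; // hypothetical: supplied by your auth layer

async function runJob(): Promise<void> {
  const res = await fetch('/admin/run-job', {
    method: 'POST',
    headers: { Authorization: `Bearer ${adminToken}` },
  });

  const data = await res.json();
  console.log(data.message); // "Job queued"
}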


Understanding What You Just Built

You now have three ways to trigger work:

Trigger         Handler        Use Case                  CPU Limit
HTTP Request    fetch()        User actions, API calls   10-50ms*
Cron Schedule   scheduled()    Periodic maintenance      30s
Queue Message   queue()        Heavy background jobs     Unlimited

*50ms on paid plans, 30s on Business+

The magic: queue handlers get far more generous CPU time than HTTP handlers, so you can process for minutes without hitting limits.


Common Patterns I Use

Pattern 1: Immediate + Scheduled

Use cron for regular updates, but let admins trigger on-demand:

export default {
  async fetch(request, env, ctx) {
    if (request.url.endsWith('/refresh-now')) {
      await env.JOB_QUEUE.send({ source: 'manual' });
      return new Response('Queued');
    }
    // Always return a Response; falling through returns undefined and throws
    return new Response('Not found', { status: 404 });
  },

  async scheduled(event, env, ctx) {
    // Same job, runs automatically every 6 hours
    await env.JOB_QUEUE.send({ source: 'cron' });
  },

  async queue(batch, env, ctx) {
    // Processes both manual and automatic triggers
    for (const msg of batch.messages) {
      await doTheWork(env, msg.body);
      msg.ack();
    }
  }
};

Pattern 2: Progressive Work

Break huge jobs into chunks:

async queue(batch, env, ctx) {
  for (const message of batch.messages) {
    const { items, cursor } = message.body;

    // Process this batch
    await processBatch(items);

    // If more work remains, fetch and enqueue the next chunk
    if (cursor) {
      const next = await fetchNextBatch(cursor);
      await env.JOB_QUEUE.send({
        items: next.items,
        cursor: next.cursor  // null/undefined once the last page is reached
      });
    }

    message.ack();
  }
}

This lets you process unlimited amounts of data without hitting timeouts.
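To start the chain, enqueue the first chunk from wherever the job is triggered. A sketch, using the same hypothetical fetchNextBatch helper (a null cursor means "start from the beginning"):

// e.g. inside fetch() or scheduled()
const first = await fetchNextBatch(null);
await env.JOB_QUEUE.send({ items: first.items, cursor: first.cursor });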

Pattern 3: Dead Letter Queue

Track failed jobs:

async queue(batch, env, ctx) {
  for (const message of batch.messages) {
    try {
      await processJob(message.body);
      message.ack();

    } catch (error) {
      if (message.attempts >= 3) {
        // Save to storage for manual review.
        // String(error), because raw Error objects JSON.stringify to "{}".
        await env.FAILED_JOBS.put(
          message.id,
          JSON.stringify({ error: String(error), body: message.body })
        );
        message.ack(); // Don't retry again
      } else {
        message.retry(); // Try again
      }
    }
  }
}
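Queues can also do this for you natively: set max_retries and a dead_letter_queue on the consumer, and Cloudflare moves exhausted messages there automatically. A sketch (failed-jobs-dlq is a made-up name, and the dead letter queue has to be created like any other queue):

[[queues.consumers]]
queue = "background-jobs"
max_retries = 3
dead_letter_queue = "failed-jobs-dlq"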

Should You Keep Everything in One Worker?

You can put all three handlers in one Worker. They don't compete for resources at runtime—each invocation is isolated.

But I usually split them once my job logic gets heavy:

Worker 1: API & UI

  • Handles fetch() only
  • Stays small and fast
  • Just enqueues messages

Worker 2: Job Runner

  • Handles queue() and scheduled()
  • Can import heavy dependencies
  • Focuses on background work

You configure this with two separate wrangler.toml files (one per Worker directory): the API Worker gets the producer binding, and the job Worker gets the consumer and the cron triggers.
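As a rough sketch (names are made up; each file lives in its own project directory):

# api-worker/wrangler.toml
name = "api-worker"
main = "src/index.ts"

[[queues.producers]]
queue = "background-jobs"
binding = "JOB_QUEUE"

# job-worker/wrangler.toml
name = "job-worker"
main = "src/index.ts"

[[queues.consumers]]
queue = "background-jobs"

[triggers]
crons = ["0 */6 * * *"]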

I'll cover this architecture in Part 3 if there's interest!


Testing Your Handlers Locally

Cloudflare's dev server supports all three handlers:

wrangler dev

To test queues locally:

# Terminal 1: Run your worker
wrangler dev

# Terminal 2: Send a test message
wrangler queues producer send background-jobs '{"test": true}'
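If the producer send command isn't available in your wrangler version, a dev-only endpoint that enqueues a message works just as well (sketch; add it inside your fetch handler and remove it before shipping):

// dev only: enqueue a test message via HTTP
if (url.pathname === '/__test-queue') {
  await env.JOB_QUEUE.send({ test: true });
  return new Response('Test message queued');
}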

To test cron locally:

Cron triggers don't fire on their own in dev mode, but you can simulate them with a test endpoint:

// Add a test endpoint
if (url.pathname === '/__test-cron') {
  await this.scheduled({ scheduledTime: Date.now() }, env, ctx);
  return new Response('Cron simulated');
}
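Wrangler also has built-in support for this: start the dev server with the --test-scheduled flag and it exposes a /__scheduled route you can hit to fire the handler, with a cron query parameter to pick which schedule to simulate:

wrangler dev --test-scheduled

# In another terminal:
curl "http://localhost:8787/__scheduled?cron=0+*/6+*+*+*"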

Wrapping Up

You now have a Worker that can:

  • ✅ Serve HTTP traffic quickly
  • ✅ Run scheduled maintenance jobs
  • ✅ Process long-running background work on-demand

The complete code is less than 100 lines, and it gives you the full power of Cloudflare's infrastructure.

Next steps:

  • Add monitoring (I use Sentry)
  • Set up alerts for failed jobs
  • Consider splitting into multiple Workers as you scale

What would you like to see in Part 3? Two-Worker architecture? Error handling patterns? Let me know in the comments!


Quick Reference

// Full minimal example
export default {
  async fetch(request, env, ctx) {
    await env.JOB_QUEUE.send({ type: 'work' });
    return new Response('Queued!');
  },

  async scheduled(event, env, ctx) {
    await env.JOB_QUEUE.send({ type: 'cron' });
  },

  async queue(batch, env, ctx) {
    for (const msg of batch.messages) {
      await doWork(msg.body);
      msg.ack();
    }
  }
};

# wrangler.toml
[[queues.producers]]
queue = "my-jobs"
binding = "JOB_QUEUE"

[[queues.consumers]]
queue = "my-jobs"

[triggers]
crons = ["0 */6 * * *"]