In Part 1, I explained why Queues solve the long-running job problem. In Part 2, I showed you how to set up all three handler types.
Now let's talk about the part that tripped me up the most: how do you actually test this stuff locally?
I spent way too long trying to figure out if I needed to deploy to production just to test a queue message. Turns out, Wrangler has everything built-in—once you know how it works.
The Challenge: Three Different Handler Types
Your Worker now has three ways to get invoked:
- HTTP requests → `fetch()` handler
- Cron schedules → `scheduled()` handler
- Queue messages → `queue()` handler
Each one needs to be tested differently because Cloudflare doesn't "magically fire" cron or queue events in dev mode. You have to manually trigger them.
Here's how I do it.
Step 1: Start Your Dev Server
First, get your Worker running locally:
wrangler dev
This starts:
- ✅ Your HTTP server (usually on `localhost:8787`)
- ✅ Your queue consumer (waiting for messages)
- ✅ Your scheduled handler (waiting for triggers)
Important: Wrangler creates a real local queue in memory. You don't need any external services or database. It's all built-in.
You should see something like:
⎔ Starting local server...
[wrangler:inf] Ready on http://localhost:8787
Step 2: Testing HTTP Handlers (Easy)
This is the straightforward one. Just hit your endpoints:
curl http://localhost:8787/api/status
Or open it in your browser:
http://localhost:8787/admin
Your fetch() handler runs exactly like it would in production. Nothing special needed.
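For reference, here's the kind of minimal fetch() handler that curl command would exercise (a sketch — the /api/status route is just the example path from above, not from a real project):

```javascript
// Minimal fetch() handler — the routes are illustrative placeholders
const worker = {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);

    if (url.pathname === '/api/status') {
      // Respond with a small JSON payload, like a health/status endpoint would
      return new Response(JSON.stringify({ ok: true }), {
        headers: { 'Content-Type': 'application/json' },
      });
    }

    return new Response('Not found', { status: 404 });
  },
};
```

In a real project this object is the Worker's default export; the point is just that nothing about local dev changes how it behaves.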
Step 3: Testing Queue Handlers (The Tricky One)
Here's where it gets interesting. Your queue consumer is running, but it won't receive messages automatically. You have to send them manually.
Wrangler provides a CLI command for this:
wrangler queues send domain-jobs '{"type": "test-job", "data": "hello"}'
Replace domain-jobs with whatever your queue name is (from your wrangler.jsonc).
What happens:
- The message goes into the local in-memory queue
- Wrangler immediately delivers it to your `queue()` handler
- You see the logs in your terminal
You should see output like:
[queue] Processing 1 messages
[queue] Starting job abc-123
[queue] Job complete!
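A queue() handler that produces logs like that could look something like this (a sketch — processJob is a stand-in for whatever your real job logic is):

```javascript
// Sketch of a queue() consumer — processJob is a placeholder for real work
async function processJob(body) {
  // ... do the actual job here (fetch data, write to storage, etc.)
}

const worker = {
  async queue(batch, env, ctx) {
    console.log(`[queue] Processing ${batch.messages.length} messages`);

    for (const message of batch.messages) {
      console.log(`[queue] Starting job ${message.id}`);
      await processJob(message.body);
      message.ack(); // Mark the message as successfully handled
      console.log('[queue] Job complete!');
    }
  },
};
```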
Testing Batch Delivery
Want to test how your Worker handles multiple messages? Send several in quick succession:
wrangler queues send domain-jobs '{"id": 1}'
wrangler queues send domain-jobs '{"id": 2}'
wrangler queues send domain-jobs '{"id": 3}'
Depending on your max_batch_size and timing, Wrangler might deliver these as one batch:
[queue] Processing 3 messages
This lets you test your batching logic locally before deploying.
Step 4: Testing Scheduled Handlers (Cron)
Cron jobs don't fire automatically in dev mode (thank goodness—imagine getting pinged every minute while coding).
You have two ways to trigger them:
Option A: Press a Key
While wrangler dev is running, press the s key in your terminal.
This sends a scheduled event to your Worker. You'll see:
Trigger schedule event (s to trigger)
[scheduled] Running scheduled job...
Option B: HTTP Endpoint
Wrangler exposes a special endpoint:
curl "http://localhost:8787/__scheduled?cron=*+*+*+*+*"
This manually fires your scheduled() handler.
Pro tip: I add a test-only endpoint in development:
export default {
async fetch(request, env, ctx) {
const url = new URL(request.url);
// Only in development
if (url.pathname === '/__test-cron' && env.ENVIRONMENT === 'dev') {
const event = { scheduledTime: Date.now(), cron: '* * * * *' };
await this.scheduled(event, env, ctx);
return new Response('Cron simulated');
}
// ... rest of your handlers
},
async scheduled(event, env, ctx) {
console.log('[scheduled] Running at', new Date(event.scheduledTime));
// Your job logic here
}
};
Then I can just hit http://localhost:8787/__test-cron in my browser during development.
The Complete Local Testing Workflow
Here's my typical dev loop:
Terminal 1: Run the Worker
wrangler dev
Leave this running. Watch the logs.
Terminal 2: Send Test Commands
# Test HTTP
curl http://localhost:8787/api/health
# Test queue
wrangler queues send my-jobs '{"test": true}'
# Test multiple messages
for i in {1..5}; do
wrangler queues send my-jobs "{\"id\": $i}"
done
Terminal 1: Press Keys to Trigger Events
- Press `s` to fire a scheduled event
- Press `c` to clear the console
- Press `x` to exit
This gives me full control over when things execute, which is perfect for debugging.
Understanding the Local Queue
When I first started, I wondered: "Is this a real queue, or just a mock?"
It's real. Here's what you get locally:
| Feature | Local Queue | Production Queue |
|---|---|---|
| Batch delivery | ✅ Yes | ✅ Yes |
| `message.ack()` | ✅ Yes | ✅ Yes |
| `message.retry()` | ✅ Yes | ✅ Yes |
| Automatic retries | ✅ Yes | ✅ Yes |
| Dead letter queue | ❌ No | ✅ Yes |
| Persistence | ❌ No (in-memory) | ✅ Yes (durable) |
| Delays/scheduling | ⚠️ Simplified | ✅ Full featured |
The key difference: Local queues are in-memory only. When you stop Wrangler, the queue disappears.
But for development, this is perfect. You get full queue semantics without needing any infrastructure.
Queue Configuration Deep Dive
Let me show you the correct wrangler.jsonc format, because I got this wrong the first time:
{
"name": "my-worker",
"main": "src/index.ts",
"queues": {
// Producer: lets you SEND messages
"producers": [
{
"queue": "background-jobs",
"binding": "JOB_QUEUE" // Used in env.JOB_QUEUE.send()
}
],
// Consumer: lets you RECEIVE messages
"consumers": [
{
"queue": "background-jobs", // Must match producer queue name
"max_batch_size": 10, // Max messages per batch
"max_batch_timeout": 30, // Max seconds to wait for full batch
"max_retries": 10, // Retry failed messages up to 10x
"dead_letter_queue": "background-jobs-dlq" // Where failed messages go
}
]
}
}
Key points:
- `producers` need a binding (that's how you reference the queue in code)
- `consumers` do NOT have a binding (you just implement the handler)
- The queue name connects producers to consumers
- You can have multiple producers sending to the same queue
- But each queue can have only one consumer Worker — queues fan in, not out
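With the producer binding above, sending a message from your fetch() handler looks like this (a sketch — JOB_QUEUE matches the binding name in the config, but the /api/jobs route is made up):

```javascript
// Enqueue a job over HTTP using the JOB_QUEUE producer binding
const worker = {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);

    if (url.pathname === '/api/jobs' && request.method === 'POST') {
      const payload = await request.json();

      // send() serializes the message and puts it on the queue;
      // the queue() consumer picks it up asynchronously
      await env.JOB_QUEUE.send({ type: 'background-job', data: payload });

      // 202 Accepted: the work is queued, not done
      return new Response('Job queued', { status: 202 });
    }

    return new Response('Not found', { status: 404 });
  },
};
```

The request returns immediately — that's the whole point of pushing long-running work onto the queue.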
Dead Letter Queue Setup
If you configure a dead_letter_queue, you need to also consume it:
{
"queues": {
"producers": [
{ "queue": "main-jobs", "binding": "MAIN_QUEUE" }
],
"consumers": [
{
"queue": "main-jobs",
"max_retries": 3,
"dead_letter_queue": "failed-jobs"
},
{
"queue": "failed-jobs", // Consume the DLQ
"max_batch_size": 1 // Process failures carefully
}
]
}
}
Then handle DLQ messages differently:
export default {
async queue(batch, env, ctx) {
for (const message of batch.messages) {
// Check if this is from the DLQ
if (message.queue === 'failed-jobs') {
await logFailureForManualReview(message);
message.ack(); // Don't retry again
} else {
// Normal processing
await processJob(message.body);
message.ack();
}
}
}
};
Debugging Tips I Wish I Knew Earlier
1. Use Structured Logging
Don't just console.log('queue handler'). Include context:
async queue(batch, env, ctx) {
console.log(`[queue] Received ${batch.messages.length} messages`);
for (const message of batch.messages) {
console.log(`[queue] Processing message ${message.id}`, {
attempt: message.attempts,
timestamp: message.timestamp,
body: message.body
});
try {
await processJob(message.body);
message.ack();
console.log(`[queue] ✓ Message ${message.id} complete`);
} catch (error) {
console.error(`[queue] ✗ Message ${message.id} failed:`, error);
message.retry();
}
}
}
This makes it so much easier to see what's happening when you have multiple messages in flight.
2. Test Retry Logic Explicitly
Force a failure to see if retries work:
# Send a message that will fail
wrangler queues send my-jobs '{"shouldFail": true}'
async queue(batch, env, ctx) {
for (const message of batch.messages) {
if (message.body.shouldFail) {
console.log(`[queue] Simulating failure (attempt ${message.attempts})`);
message.retry();
continue;
}
await processJob(message.body);
message.ack();
}
}
Watch your logs to see the retry attempts increment.
3. Simulate Production Conditions
Test with realistic batch sizes:
# Send 10 messages quickly
for i in {1..10}; do
wrangler queues send my-jobs "{\"id\": $i}" &
done
wait
This helps you catch race conditions or batching issues before deploying.
Testing Scheduled Events Realistically
Here's a pattern I use to test different cron schedules:
async scheduled(event, env, ctx) {
const hour = new Date(event.scheduledTime).getUTCHours();
// Different behavior based on time
if (hour % 6 === 0) {
console.log('[scheduled] Running full refresh');
await env.JOB_QUEUE.send({ type: 'full-refresh' });
} else {
console.log('[scheduled] Running incremental update');
await env.JOB_QUEUE.send({ type: 'incremental' });
}
}
Then test different times:
# Simulate 6am UTC
curl "http://localhost:8787/__scheduled?scheduledTime=2024-01-01T06:00:00Z"
# Simulate 3pm UTC
curl "http://localhost:8787/__scheduled?scheduledTime=2024-01-01T15:00:00Z"
This lets me verify the logic works correctly at different hours without waiting for cron to actually fire.
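To verify that branching without firing the handler at all, you can also pull the decision into a pure function (a sketch — chooseJobType is a name I made up):

```javascript
// Pure decision function: testable with no Worker machinery at all
function chooseJobType(scheduledTime) {
  const hour = new Date(scheduledTime).getUTCHours();
  // Every 6 hours (00:00, 06:00, 12:00, 18:00 UTC) do a full refresh
  return hour % 6 === 0 ? 'full-refresh' : 'incremental';
}

// The scheduled() handler then just acts on the decision:
// await env.JOB_QUEUE.send({ type: chooseJobType(event.scheduledTime) });
```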
Common Gotchas I Ran Into
Gotcha 1: Queue Messages Aren't Persisted Locally
If you send messages and then restart wrangler dev, they're gone. The local queue is in-memory only.
Solution: Use a script to set up test data:
#!/bin/bash
# setup-test-queue.sh
echo "Sending test messages..."
wrangler queues send my-jobs '{"id": 1, "type": "test"}'
wrangler queues send my-jobs '{"id": 2, "type": "test"}'
wrangler queues send my-jobs '{"id": 3, "type": "test"}'
echo "Done!"
Run this after starting dev mode.
Gotcha 2: Bindings Don't Auto-Update
If you change your wrangler.jsonc bindings, you need to restart wrangler dev.
Just saving the file isn't enough. Stop and restart the dev server.
Gotcha 3: Environment Variables
Make sure you have your env vars set locally:
# .dev.vars file
DATABASE_URL=http://localhost:5432
API_KEY=test-key-123
Wrangler loads these automatically in dev mode.
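Those values show up on the env object, the same way production secrets would (a sketch — the variable names just match the .dev.vars example above):

```javascript
// Sketch: reading .dev.vars values off the env object
const worker = {
  async fetch(request, env, ctx) {
    // In dev these come from .dev.vars; in production from
    // `wrangler secret put` or a "vars" entry in wrangler.jsonc
    if (!env.API_KEY) {
      return new Response('Missing API_KEY', { status: 500 });
    }
    return new Response(`Config loaded (db: ${env.DATABASE_URL})`);
  },
};
```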
My Complete Testing Checklist
Before deploying, I test:
- [ ] HTTP endpoints respond correctly
- [ ] Queue messages get processed
- [ ] Batch processing handles multiple messages
- [ ] Failed messages retry correctly
- [ ] Scheduled events trigger the right jobs
- [ ] All three handlers can access environment bindings
- [ ] Error handling works as expected
- [ ] Logs are clear and informative
This usually takes 10-15 minutes and catches 95% of issues before they hit production.
What About Integration Tests?
For unit tests, you can mock the queue:
// tests/queue.test.ts
import { expect, test, vi } from 'vitest';
import worker from '../src/index';
test('processes messages correctly', async () => {
const mockEnv = {
JOB_QUEUE: {
send: vi.fn()
}
};
const batch = {
messages: [
{
id: 'test-1',
body: { type: 'test' },
ack: vi.fn(),
retry: vi.fn()
}
]
};
await worker.queue(batch, mockEnv, {});
expect(batch.messages[0].ack).toHaveBeenCalled();
});
But honestly? I prefer testing with real Wrangler because it catches configuration issues that mocks don't.
Wrapping Up
Testing Workers locally is easier than I expected once I understood the tools:
- `wrangler dev` gives you a real local queue
- `wrangler queues send` lets you trigger queue messages
- Press `s` to trigger scheduled events
- Everything runs in-memory, nothing to configure
The local dev experience is actually really good. I rarely need to deploy to staging anymore just to test something.
Next time: I'll show you how to set up monitoring and alerts so you know when things break in production. Because they will. 😅
Questions? Hit me in the comments! What's your local testing workflow like?