Ahmed Rakan

Building a Poor Man's Queue with Cloudflare Workers: From Zero to Production

How to build a robust, scalable queue system without breaking the bank


For one of my solutions, I needed Cloudflare Queues. But since they're only available on the paid plan, that could prevent a lot of users from even trying my solution. So what did I do? Go for Cloudflare's enterprise queues?


What's a Queue Anyway?

Imagine you're running a pizza shop and orders are coming in faster than you can make pizzas. What do you do?

One option is working on all orders simultaneously, but this depends on how many pizza makers you have and their skill level. Even the most skilled team will eventually get overwhelmed.

Instead, the logical approach is simple: write orders down and work through them one by one. That's exactly what a queue does in software.

Rather than juggling everything at once, we process orders using either:

  • FIFO (First-In-First-Out) - handle the oldest orders first
  • LIFO (Last-In-First-Out) - handle the newest orders first

FIFO is the standard approach in most distributed queue designs.

A message queue is straightforward: it stores tasks (messages) that need processing later. Think of it as a digital to-do list that multiple workers can pull from.

[New Order] → [Queue: Order1, Order2, Order3] → [Pizza Chef] → [Happy Customer]
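
In code, the core idea fits in a few lines. Here's a toy FIFO queue in TypeScript, just to make the concept concrete (this is an illustration, not the production version we build below):

```typescript
// A queue in its simplest form: a FIFO list of pending jobs.
class SimpleQueue<T> {
  private items: T[] = [];

  enqueue(item: T): void {
    this.items.push(item); // new orders go to the back
  }

  dequeue(): T | undefined {
    return this.items.shift(); // oldest order comes off the front (FIFO)
  }

  get size(): number {
    return this.items.length;
  }
}

const orders = new SimpleQueue<string>();
orders.enqueue("Order1");
orders.enqueue("Order2");
console.log(orders.dequeue()); // "Order1": first in, first out
```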


Why Do We Need Queues?

Problem: Your web app gets 1000 signup emails to send, but your email service can only handle 10 per second.

Without Queue: Your website freezes for 100 seconds while sending emails. Users hate you.

With Queue: You instantly add all 1000 emails to a queue, respond to the user in 50ms, then process emails in the background. Users love you.
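
Here's a sketch of that "respond first, work later" pattern in a Worker. The `publishToQueue` helper is a hypothetical stand-in for the queue client we build later in this post:

```typescript
interface Env {}

// Hypothetical stand-in for the queue client built later in this post.
declare function publishToQueue(env: Env, job: unknown): Promise<void>;

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { emails } = (await request.json()) as { emails: string[] };

    // Each publish is a fast queue write, not an actual email send.
    for (const to of emails) {
      await publishToQueue(env, { type: "signup-email", to });
    }

    // The user gets a response in milliseconds;
    // the sending happens in the background.
    return Response.json({ queued: emails.length });
  },
};
```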

The Enterprise Queue Dilemma

Traditional queue solutions are powerful but expensive:

  • Amazon SQS: $0.40 per million messages (adds up fast)
  • Redis: Requires dedicated servers ($50-500/month)
  • RabbitMQ: Complex setup, maintenance nightmares
  • Apache Kafka: Overkill for most apps, needs a DevOps team to manage and maintain

For small projects or startups, these costs can kill your budget before you even validate your idea.

Enter Cloudflare Workers: The Game Changer

Cloudflare Workers run your code on their global edge network. Here's why they're perfect for a "poor man's queue": cheap and powerful.

The Magic Combination:

  • Workers: Serverless functions that run globally (~$0.50/million requests)
  • Durable Objects: A special type of Worker that combines compute with persistent storage and built-in coordination, enabling distributed, stateful applications (~$0.15/million requests)
  • R2 Storage: Cheap persistent storage (~$0.015/GB/month)
  • Cron Triggers: Scheduled tasks (free!)
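
In a Worker, these four pieces show up as bindings on the env object. Here's a minimal sketch of how they wire together (the binding names QUEUE_DO and QUEUE_BUCKET are my own, declared in wrangler.toml):

```typescript
// The bindings our queue needs, as the Worker sees them.
interface Env {
  QUEUE_DO: DurableObjectNamespace; // Durable Object: in-memory queue + coordination
  QUEUE_BUCKET: R2Bucket;           // R2: durable persistence for jobs
}

export default {
  // Handles publishes from your app (the ~$0.50/million requests path).
  async fetch(request: Request, env: Env): Promise<Response> {
    const stub = env.QUEUE_DO.get(env.QUEUE_DO.idFromName("default"));
    return stub.fetch(request); // forward to the queue's Durable Object
  },

  // Runs on the cron trigger (free) to drive background processing.
  async scheduled(_ctrl: ScheduledController, env: Env): Promise<void> {
    const stub = env.QUEUE_DO.get(env.QUEUE_DO.idFromName("default"));
    await stub.fetch("https://queue/process", { method: "POST" });
  },
};
```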

Why This Works:

  • Global Edge: Your queue runs in 200+ cities worldwide
  • Auto-scaling: Handles 0 to millions of requests automatically
  • No Servers: Zero infrastructure management
  • Cheap: Often 10-100x cheaper than traditional solutions, and sustainable at $0 until you hit real traffic

The Journey: Building Our Queue System

Let's walk through building a production-ready queue system step by step.

  1. Publisher: Acts as our low-latency REST client, sending events to the queue.
  2. Enqueue: The message/job is placed into the Durable Object’s in-memory queue.
  3. Persist: The message/job is safely persisted to R2 storage for durability.
  4. Poll Trigger: At each configurable polling interval, the queue begins consuming jobs.
  5. Batch Retrieval: Jobs are retrieved in controlled batches to avoid overwhelming the system.
  6. Processing: Jobs are processed with built-in retry logic and exponential backoff.
  7. Push Updates: Results and statistics are pushed downstream or stored for reporting.
  8. Acknowledge: Successfully processed jobs are acknowledged and removed from the queue.
  9. Continuous Polling: The cron trigger keeps firing on its schedule, polling for any messages that remain.
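
Step 1 is nothing more than an authenticated HTTP call to the Worker. A minimal publisher client could look like this (the endpoint path and auth header are illustrative, not a fixed API):

```typescript
// Minimal publisher: step 1 of the pipeline.
class QueueClient {
  constructor(private baseUrl: string, private apiKey: string) {}

  async publish(topic: string, payload: unknown): Promise<void> {
    const res = await fetch(`${this.baseUrl}/queues/${topic}/messages`, {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${this.apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(payload),
    });
    if (!res.ok) throw new Error(`publish failed: ${res.status}`);
  }
}

// Usage: enqueue an email job and move on.
const client = new QueueClient("https://queue.example.workers.dev", "secret");
await client.publish("emails", { to: "user@example.com", type: "signup" });
```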

Durable Objects give us limited memory, so we use a smart buffer: incoming messages pool in memory as they arrive, and when the buffer limit is reached we flush them to R2 as one shard of batched messages.

This gives us the high throughput and low latency we're aiming for, on both enqueue and dequeue.
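
Here's a minimal sketch of that buffer inside the Durable Object (the buffer limit and the R2 key scheme are illustrative choices, not fixed parts of the project):

```typescript
interface Env {
  QUEUE_BUCKET: R2Bucket; // R2 binding for persistence
}

// Sketch of the smart buffer inside the queue's Durable Object.
export class QueueDO {
  private buffer: unknown[] = [];
  private static readonly BUFFER_LIMIT = 1000; // illustrative limit

  constructor(private state: DurableObjectState, private env: Env) {}

  async fetch(request: Request): Promise<Response> {
    const job = await request.json();
    this.buffer.push(job); // enqueue in memory: the fast path

    // When the buffer fills up, flush it to R2 as one shard of messages.
    if (this.buffer.length >= QueueDO.BUFFER_LIMIT) {
      await this.flush();
    }
    return Response.json({ queued: true });
  }

  private async flush(): Promise<void> {
    if (this.buffer.length === 0) return;
    const shardKey = `shards/${Date.now()}-${crypto.randomUUID()}.json`;
    await this.env.QUEUE_BUCKET.put(shardKey, JSON.stringify(this.buffer));
    this.buffer = []; // memory reclaimed; jobs are now durable in R2
  }
}
```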

The Complete Architecture

Here's our final system:

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   Your App      │───▶│  Cloudflare      │───▶│   Durable       │
│                 │    │  Worker          │    │   Object        │
│ client.publish()│    │  (API Gateway)   │    │   (Queue)       │
└─────────────────┘    └──────────────────┘    └─────────────────┘
                                                        │
                                                        ▼
                       ┌──────────────────┐    ┌─────────────────┐
                       │   Cron Trigger   │───▶│   R2 Storage    │
                       │  (Every Minute)  │    │  (Persistence)  │
                       │                  │    │                 │
                       │ processJobs()    │    │ • Job Data      │
                       └──────────────────┘    │ • Metadata      │
                                               │ • Dead Letters  │
                                               └─────────────────┘


Component Breakdown:

Cloudflare Worker (API Gateway)

  • Handles authentication
  • Validates request sizes
  • Routes to appropriate services
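
A sketch of those three responsibilities in one handler (the header scheme, size limit, and API_KEY secret binding are assumptions, not fixed parts of the project):

```typescript
const MAX_BODY_BYTES = 128 * 1024; // illustrative size limit

interface Env {
  API_KEY: string;                  // secret binding (assumption)
  QUEUE_DO: DurableObjectNamespace; // the queue's Durable Object
}

async function handlePublish(request: Request, env: Env): Promise<Response> {
  // 1. Authentication (shared-secret header, illustrative scheme)
  if (request.headers.get("Authorization") !== `Bearer ${env.API_KEY}`) {
    return new Response("Unauthorized", { status: 401 });
  }

  // 2. Request-size validation
  const size = Number(request.headers.get("Content-Length") ?? 0);
  if (size > MAX_BODY_BYTES) {
    return new Response("Payload too large", { status: 413 });
  }

  // 3. Route to the queue's Durable Object
  const stub = env.QUEUE_DO.get(env.QUEUE_DO.idFromName("default"));
  return stub.fetch(request);
}
```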

Durable Object (Queue Manager)

  • Stores jobs in memory for speed
  • Manages retry logic
  • Handles memory limits intelligently

R2 Storage (Persistence Layer)

  • Durable storage for all jobs
  • Dead letter queue for failed jobs
  • Large payload storage
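
Dead-lettering here is just writing the failed job under a separate key prefix in R2. A tiny sketch (the dlq/ prefix is my own convention):

```typescript
interface Env {
  QUEUE_BUCKET: R2Bucket;
}

// Move a permanently failed job into a dead-letter prefix in R2.
async function deadLetter(env: Env, job: { id: string }, error: string): Promise<void> {
  await env.QUEUE_BUCKET.put(
    `dlq/${job.id}.json`,
    JSON.stringify({ job, error, failedAt: new Date().toISOString() })
  );
}
```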

Cron Triggers (Job Processor)

  • Polls for jobs every minute
  • Processes jobs with timeout protection
  • Handles failures gracefully
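
Pulling the consumer side together, here's a sketch of what the scheduled processing loop can look like (the batch size, timeout, attempt limit, and helper names are all illustrative):

```typescript
interface Job { id: string; attempts: number; retryAt?: number }
interface Env { QUEUE_BUCKET: R2Bucket }

const BATCH_SIZE = 50;     // illustrative
const MAX_ATTEMPTS = 5;    // illustrative
const TIMEOUT_MS = 10_000; // illustrative per-job timeout

// Stand-ins for your own worker logic and queue bookkeeping.
declare function handleJob(job: Job): Promise<void>;
declare function acknowledge(env: Env, job: Job): Promise<void>;
declare function requeue(env: Env, job: Job): Promise<void>;
declare function deadLetter(env: Env, job: Job, error: string): Promise<void>;

async function processJobs(env: Env, jobs: Job[]): Promise<void> {
  for (const job of jobs.slice(0, BATCH_SIZE)) {
    try {
      // Timeout protection: abandon jobs that hang.
      await Promise.race([
        handleJob(job),
        new Promise((_, reject) =>
          setTimeout(() => reject(new Error("timeout")), TIMEOUT_MS)
        ),
      ]);
      await acknowledge(env, job); // success: remove from the queue
    } catch (err) {
      job.attempts += 1;
      if (job.attempts >= MAX_ATTEMPTS) {
        await deadLetter(env, job, String(err)); // give up: dead-letter it
      } else {
        // Exponential backoff: retry after 2^attempts seconds.
        job.retryAt = Date.now() + 2 ** job.attempts * 1000;
        await requeue(env, job);
      }
    }
  }
}
```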

Real-World Performance

Important metrics:

  • Publishing: 15-50ms globally (this is what we're optimizing for)
  • Throughput: 1000+ jobs/minute easily (the free Workers tier is limited to a burst of about 1,000 requests/minute)

Cost Breakdown (1M jobs/month):

  • Workers: ~$0.50 (1M requests)
  • Durable Objects: ~$0.15 (1M requests)
  • R2 Storage: ~$0.02 (assuming 1GB total)
  • Total: ~$0.67/month

Compare that to running your own Redis or RabbitMQ server at $50-500/month for the same volume!


Getting Started: 5-Minute Setup

Want to try it out? The full code is on GitHub:

https://github.com/ARAldhafeeri/cfw-poor-man-queue


The Bottom Line

Building a queue system used to require deep infrastructure knowledge, significant costs, and ongoing maintenance. With Cloudflare Workers, you can have a production-ready, globally distributed queue system running in minutes for pennies.

This "poor man's queue" isn't actually poor—it's smart. It leverages modern serverless architecture to deliver enterprise-grade reliability at startup-friendly prices. Most importantly the performance is very rich even comparable with other serverless queues.

Key Takeaways:

  1. Queues solve real problems: Don't block users waiting for background tasks
  2. Cloudflare Workers are powerful: Global edge + low cost = winning combination
  3. KISS Principle works: Simple systems are easier to build, debug, and maintain
  4. Handle limitations gracefully: Every platform has constraints, design around them
  5. Start small, scale up: Begin with this, migrate to enterprise solutions when you need them
