DEV Community

Matthew Gladding

Posted on • Originally published at gladlabs.io

The Solo Developer's Background Job Dilemma


Solo developers often face a paradox: a robust application needs background tasks (sending emails, processing images), but a full-featured job queue introduces complexity that outweighs the problem it solves. Tools like Celery or Sidekiq demand infrastructure, monitoring, and maintenance--overkill for small projects that may never scale beyond a single user. The result is a painful trade-off: either build something too complex from day one, or delay critical features entirely. As Why Your PostgreSQL App Will Crumble Before Your First User (And How to Stop It) argues, premature optimization for scale is a common pitfall that sinks apps before they even launch.

Why LISTEN/NOTIFY Fits the Bill


PostgreSQL's built-in LISTEN/NOTIFY mechanism solves this neatly. It's a zero-cost feature that ships with every Postgres installation and requires no external dependencies. Unlike message brokers (RabbitMQ, Redis), it rides on the database you already run. This avoids:

  • Adding new infrastructure to manage
  • Learning a new system's idiosyncrasies
  • Worrying about queue persistence or retries

The key advantage is simplicity: you're using a feature already available in your database. The trade-off is equally clear: notifications are fire-and-forget, so this isn't for production systems that need guaranteed delivery, retries, or high throughput. But for a solo developer building a personal project or MVP, it's exactly the right tool. Per the PostgreSQL documentation, NOTIFY delivers a notification event to every client that has previously executed LISTEN on the same channel--precisely what we need for a lightweight queue.

Setting Up the Queue: A Step-by-Step Guide

No new services or configurations are needed. Start with a simple jobs table to track pending work (optional but recommended for debugging):

CREATE TABLE jobs (
    id SERIAL PRIMARY KEY,
    type TEXT NOT NULL,
    payload JSONB NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

The queue itself is a database channel. Create a channel name (e.g., background_jobs) that's unique to your app. There's no setup--just use it directly.
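Channel names follow PostgreSQL's ordinary identifier rules: letters, digits, and underscores, not starting with a digit, with unquoted names limited to 63 bytes. A small sanity check catches typos before they become a silent LISTEN/NOTIFY mismatch. valid_channel_name below is a hypothetical helper (not part of psycopg2), and it assumes you keep channel names lowercase so identifier case-folding never bites:

```python
import re

# PostgreSQL folds unquoted identifiers to lowercase and truncates them at
# 63 bytes. This hypothetical guard rejects anything that wouldn't round-trip
# cleanly as an unquoted channel name (e.g. "background jobs" with a space).
_IDENTIFIER_RE = re.compile(r"^[a-z_][a-z0-9_]*$")

def valid_channel_name(name: str) -> bool:
    """Return True if `name` is safe to use unquoted in LISTEN/NOTIFY."""
    return bool(_IDENTIFIER_RE.match(name)) and len(name.encode("utf-8")) <= 63

print(valid_channel_name("background_jobs"))   # True
print(valid_channel_name("background jobs"))   # False: spaces need quoting
```

Calling this once at startup, before the first LISTEN, is enough; the name never changes afterward.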

Handling Jobs: Code Examples and Best Practices

Producer: Adding Work to the Queue


This script inserts a job row and notifies the background_jobs channel. A single connection is fine here; use a connection pool if several producers run at once.

import psycopg2
from psycopg2.extras import Json

# Connect to your database (use environment variables for secrets!)
conn = psycopg2.connect(
    dbname="your_db",
    user="user",
    password="pass",
    host="localhost",
)
cur = conn.cursor()

# Insert the job and capture its id
cur.execute(
    "INSERT INTO jobs (type, payload) VALUES (%s, %s) RETURNING id",
    ("send_email", Json({"to": "user@example.com", "subject": "Welcome!"})),
)
job_id = cur.fetchone()[0]
conn.commit()

# Notify the channel with the job id as the payload so the worker knows
# which row to process. Use pg_notify() rather than NOTIFY so psycopg2
# can safely parameterize the payload.
cur.execute("SELECT pg_notify(%s, %s)", ("background_jobs", str(job_id)))
conn.commit()
cur.close()
conn.close()

Worker: Processing Jobs

This worker listens on the channel and processes jobs as notifications arrive. It uses psycopg2's conn.poll() and the conn.notifies queue to receive them--no extra library needed.

import psycopg2
import psycopg2.extensions
import time

def process_job(cur, job_id):
    """Fetch the job row and do the actual work (e.g., send email)."""
    cur.execute(
        "SELECT type, payload FROM jobs WHERE id = %s",
        (job_id,),
    )
    row = cur.fetchone()
    if not row:
        print(f"job {job_id} not found -- maybe already processed")
        return
    job_type, payload = row
    print(f"Processing job {job_id}: {job_type} -- {payload}")
    # Real implementation would include error handling and logging

def start_worker():
    conn = psycopg2.connect(
        dbname="your_db",
        user="user",
        password="pass",
        host="localhost",
    )
    # Run in autocommit mode: notifications are only delivered while the
    # session is not inside a transaction
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()

    # Subscribe to the channel
    cur.execute("LISTEN background_jobs")

    print("Worker started. Waiting for jobs...")
    while True:
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            # notify.payload is the job id as a string
            try:
                job_id = int(notify.payload)
            except ValueError:
                print(f"unexpected payload: {notify.payload!r}")
                continue
            process_job(cur, job_id)

        time.sleep(0.1)  # simple polling; select.select([conn], [], [], 5) blocks more efficiently

if __name__ == "__main__":
    start_worker()
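The process_job above is a stub that only prints. Once you have more than one job type, a small dispatch table keeps it from growing into a chain of ifs. A sketch with hypothetical handlers (send_email and log_analytics are illustrative placeholders, not real implementations):

```python
# Hypothetical dispatch table mapping job types to handler functions.
# Handlers receive the JSONB payload already decoded into a Python dict.

def send_email(payload: dict) -> str:
    # Placeholder: a real handler would call your mail provider here.
    return f"emailed {payload['to']}"

def log_analytics(payload: dict) -> str:
    return f"logged event {payload['event']}"

HANDLERS = {
    "send_email": send_email,
    "log_analytics": log_analytics,
}

def dispatch(job_type: str, payload: dict) -> str:
    handler = HANDLERS.get(job_type)
    if handler is None:
        # Unknown types are reported and skipped rather than crashing the loop.
        return f"no handler for {job_type!r}"
    return handler(payload)

print(dispatch("send_email", {"to": "user@example.com"}))
# -> emailed user@example.com
```

Inside process_job you would call dispatch(job_type, payload) instead of printing; adding a job type then means adding one function and one dict entry.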

Why this works: the producer writes the real job payload into the jobs table and sends only the row id through NOTIFY. The worker uses the id to fetch the actual work, which keeps notify payloads tiny (Postgres caps them at 8000 bytes) and gives you durable job state that survives a worker crash. psycopg2's poll() and notifies handle the LISTEN/NOTIFY protocol natively, and the jobs table doubles as a debug log (SELECT * FROM jobs WHERE id = 42).
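If you ever send more than a bare row id through NOTIFY, it's worth guarding the 8000-byte cap on the application side, so an oversized payload fails loudly in your code rather than in the database. A minimal sketch (check_notify_payload is a hypothetical helper, not a psycopg2 API):

```python
# NOTIFY payloads must be shorter than 8000 bytes in a default Postgres
# build. This hypothetical guard measures the UTF-8 encoded size before
# the payload is handed to pg_notify().
MAX_NOTIFY_BYTES = 7999

def check_notify_payload(payload: str) -> str:
    size = len(payload.encode("utf-8"))
    if size > MAX_NOTIFY_BYTES:
        raise ValueError(f"notify payload is {size} bytes; limit is {MAX_NOTIFY_BYTES}")
    return payload

check_notify_payload("42")             # fine: a job id is a few bytes
# check_notify_payload("x" * 10_000)   # would raise ValueError
```

With the id-only pattern above you'll never hit the limit, but the guard is cheap insurance if the payload format ever changes.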

Critical Best Practices for Solo Developers

  1. No Retries or Dead-Letters: If a job fails, the notification is gone (though the row stays in the jobs table for manual inspection). That's unacceptable for critical tasks (e.g., payment processing) but fine for non-critical ones (e.g., analytics logging).
  2. Connection Limits: Each listening worker holds a dedicated database connection open. For a solo project handling under 100 jobs a day this is negligible; if you grow to hundreds of concurrent jobs, switch to a proper queue.
  3. Avoid Long-Running Jobs: The worker loop must process jobs quickly. If a job takes >1 second, it blocks the notification loop. Offload heavy work to a separate process if needed.
  4. Use JSONB for Payloads: PostgreSQL's JSONB type allows efficient storage and querying of job data (e.g., WHERE payload->>'to' = 'user@example.com').
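Point 3 can be handled with the standard library alone: hand slow jobs to a process pool so the notification loop gets back to conn.poll() immediately. A sketch, where resize_image is a stand-in for your real heavy task:

```python
from concurrent.futures import ProcessPoolExecutor

def resize_image(job_id: int) -> str:
    # Stand-in for real CPU-heavy work (image resizing, PDF rendering, ...).
    return f"resized image for job {job_id}"

# Created once at worker startup. The notification loop calls
# executor.submit(...) instead of running the job inline, so a slow job
# never blocks delivery of the next notification.
executor = ProcessPoolExecutor(max_workers=2)

def handle_notification(job_id: int) -> None:
    future = executor.submit(resize_image, job_id)
    # Optionally attach a callback to record completion.
    future.add_done_callback(lambda f: print(f.result()))
```

The same shape works with ThreadPoolExecutor if your jobs are I/O-bound (e.g., calling a mail API) rather than CPU-bound.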

When to Avoid This Approach

This pattern fails when:

  • You need guaranteed delivery (e.g., email retries after failure).
  • Jobs require complex scheduling (e.g., "run at 2 PM tomorrow").
  • Your application sustains more than about 100 jobs per minute.
  • You need distributed workers (multiple servers processing the same queue).

In these cases, switch to a dedicated queue like RQ or Celery (for Python) or Asynq (for Go). But for 90% of solo projects--where the app is a personal tool, a small SaaS, or an MVP--LISTEN/NOTIFY is sufficient.

The Solo Developer's Takeaway

Stop over-engineering. If your background task is "send a welcome email on signup," you don't need a distributed queue. Use PostgreSQL's built-in LISTEN/NOTIFY instead. It's:

  • Free (no extra cost)
  • Simple (one table, two commands)
  • Reliable for your scale (no new failure points)

Start with the code examples above. Add the jobs table for visibility. Test with 10 jobs. If your app grows beyond that, migrate to a proper queue. But until then, you've just avoided adding 300 lines of config and 3 new dependencies.

The goal isn't to build a system that could handle millions of users--it's to ship a working feature today. That's what LISTEN/NOTIFY delivers for solo developers.
