TL;DR:
- PostgreSQL's LISTEN/NOTIFY system can power real-time features without Redis or Kafka
- This tutorial walks through a full auction system: live bids, countdowns, item broadcasts, and discontinuation events
- A single trigger function fans out to every connected browser via Socket.io — Postgres does the heavy lifting
- 55.6% of developers already use PostgreSQL (Source: Stack Overflow Developer Survey, 2025) — you probably already have it in your stack
You're watching a live auction. The price just changed — $1,240, then $1,300, then $1,450 in eight seconds. Behind the scenes, no message broker is running. No Redis cluster. No Kafka topic. Just Postgres.
I read Adam's brilliant post about building real-time chat with just PostgreSQL and pg_notify and had one thought: can I take this further? Can I build a full auction system — live bids, countdown timers, live item broadcasts, auction discontinuation — all without a single extra service? Turns out you can. This is how I did it.
The Architecture in One Diagram
The whole system collapses into a single data flow. A browser fires a bid, Postgres validates and stores it, a trigger fires a notification, and Node.js fans that notification out to every connected client. Five hops. Zero extra services.
```
Browser (bid placed)
  ↓ socket.emit('place_bid')
Node.js / Socket.io
  ↓ INSERT INTO bids
PostgreSQL
  ↓ TRIGGER fires → pg_notify('auction_update', payload)
Node.js (LISTEN)
  ↓ io.emit('update', data)
All Browsers (live price update, flash animation)
```
Node.js acts as a bridge in two directions. It accepts socket events from browsers and writes to Postgres. It also holds a persistent LISTEN connection that receives notifications and broadcasts them to every connected socket. Postgres handles the pub/sub. Node handles the fan-out. The browser renders the result.
Why PostgreSQL?
PostgreSQL is now the most widely used database in the world, with 55.6% of all developers — and 58.2% of professional developers — reporting they use it (Source: Stack Overflow Developer Survey, 2025). That's the largest single-year jump in the survey's history. It has held the #1 spot as both the most admired and most desired database for four consecutive years.
The operational argument is simple. You already have Postgres in your stack. Adding Redis or Kafka for a side project or MVP means a second service to provision, secure, monitor, and pay for. For many workloads, that overhead isn't justified.
Supabase's entire Realtime product — handling over 250,000 concurrent users (Source: Supabase Realtime Benchmarks, supabase.com/docs/guides/realtime/benchmarks) — is architecturally built on pg_notify and WebSockets (Source: Supabase Realtime Architecture docs, supabase.com/docs/guides/realtime/architecture). If it's good enough for Supabase's scale, it's more than good enough for your auction app.
The Database Layer
The schema is the engine. Get this right, and the Node.js code becomes almost trivial. Two tables, two triggers, and a single notification channel are all you need.
```sql
-- init.sql
CREATE TABLE items (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  description TEXT,
  image_url TEXT,
  start_price NUMERIC(10,2) NOT NULL DEFAULT 0,
  current_bid NUMERIC(10,2) NOT NULL DEFAULT 0,
  ends_at TIMESTAMPTZ NOT NULL,
  status TEXT NOT NULL DEFAULT 'active'
    CHECK (status IN ('active', 'closed', 'discontinued')),
  winner_id INTEGER,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE bids (
  id SERIAL PRIMARY KEY,
  item_id INTEGER NOT NULL REFERENCES items(id),
  user_id INTEGER NOT NULL,
  username TEXT NOT NULL,
  amount NUMERIC(10,2) NOT NULL,
  placed_at TIMESTAMPTZ DEFAULT NOW()
);

-- Trigger: fires on every new bid
CREATE OR REPLACE FUNCTION handle_new_bid()
RETURNS TRIGGER AS $$
BEGIN
  -- Update the item's current bid price
  UPDATE items
  SET current_bid = NEW.amount
  WHERE id = NEW.item_id;

  -- Notify all listeners — send only the delta, not the full row
  PERFORM pg_notify(
    'auction_update',
    json_build_object(
      'type', 'NEW_BID',
      'item_id', NEW.item_id,
      'amount', NEW.amount,
      'username', NEW.username,
      'bid_id', NEW.id
    )::text
  );

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER on_bid_placed
AFTER INSERT ON bids
FOR EACH ROW EXECUTE FUNCTION handle_new_bid();

-- Trigger: fires on item INSERT (new listing) or UPDATE (status change)
CREATE OR REPLACE FUNCTION handle_item_change()
RETURNS TRIGGER AS $$
BEGIN
  -- New item listed
  IF (TG_OP = 'INSERT') THEN
    PERFORM pg_notify(
      'auction_update',
      json_build_object(
        'type', 'NEW_ITEM',
        'item_id', NEW.id,
        'title', NEW.title,
        'start_price', NEW.start_price,
        'ends_at', NEW.ends_at
      )::text
    );

  -- Auction closed (timer expired)
  ELSIF (TG_OP = 'UPDATE' AND NEW.status = 'closed' AND OLD.status = 'active') THEN
    PERFORM pg_notify(
      'auction_update',
      json_build_object(
        'type', 'AUCTION_CLOSED',
        'item_id', NEW.id,
        'winner_id', NEW.winner_id,
        'final_bid', NEW.current_bid
      )::text
    );

  -- Auction manually discontinued (checking OLD.status ensures repeated
  -- updates to an already-discontinued row don't re-fire the notification)
  ELSIF (TG_OP = 'UPDATE' AND NEW.status = 'discontinued' AND OLD.status = 'active') THEN
    PERFORM pg_notify(
      'auction_update',
      json_build_object(
        'type', 'AUCTION_DISCONTINUED',
        'item_id', NEW.id,
        'title', NEW.title
      )::text
    );
  END IF;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER on_item_change
AFTER INSERT OR UPDATE ON items
FOR EACH ROW EXECUTE FUNCTION handle_item_change();
```
pg_notify takes two arguments: a channel name and a text payload. Every LISTEN-ing connection subscribed to that channel receives the payload instantly after the transaction commits.
That last part matters. PostgreSQL only delivers notifications on transaction commit (Source: PostgreSQL official docs, postgresql.org/docs/current/sql-notify.html). A bid that fails validation and rolls back never fires a notification. No phantom updates, no stale prices. The atomicity is a feature, not a limitation.
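You can watch this commit-gating behavior in any pair of psql sessions. The sketch below assumes a second session has already run `LISTEN auction_update`; a rolled-back notification simply never arrives:

```sql
-- Session B has already subscribed: LISTEN auction_update;

-- Session A: this notification is queued, then discarded on rollback.
BEGIN;
SELECT pg_notify('auction_update', '{"type":"NEW_BID","item_id":1}');
ROLLBACK;  -- Session B receives nothing

-- Session A: this one is delivered, but only after COMMIT.
BEGIN;
SELECT pg_notify('auction_update', '{"type":"NEW_BID","item_id":1}');
COMMIT;    -- Session B receives the payload now
```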
Payload size callout: PostgreSQL's NOTIFY has an 8,000-byte payload limit (Source: PostgreSQL official docs). Rule of thumb: send the ID and the delta, not the full row. A JSON object with four or five fields is almost never going to hit that ceiling.
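If you want to sanity-check a payload from the Node side before trusting it to fit, measure the serialized size in UTF-8 bytes. This helper is illustrative and not part of the server code below:

```javascript
// Hypothetical helper: measure a notification payload in bytes.
// pg_notify payloads are limited to 8,000 bytes by default.
const NOTIFY_LIMIT = 8000;

function payloadBytes(payload) {
  // Buffer.byteLength counts UTF-8 bytes, matching the server-side limit
  return Buffer.byteLength(JSON.stringify(payload), 'utf8');
}

function fitsInNotify(payload) {
  return payloadBytes(payload) <= NOTIFY_LIMIT;
}

// A typical delta payload is tiny:
const delta = { type: 'NEW_BID', item_id: 42, amount: '1450.00', username: 'alice', bid_id: 981 };
console.log(payloadBytes(delta)); // well under 8,000
```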

The Node.js Server
The server is the glue. It's about 120 lines of code, and most of that is the Socket.io event handlers. The real-time machinery itself is surprisingly compact.
```javascript
// server.js
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');
const { Client, Pool } = require('pg');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

// Pool for regular queries (bids, inserts, selects)
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Dedicated client for LISTEN — must NOT go through a pool.
// A pooled connection can be swapped out; LISTEN requires a persistent session.
const pgClient = new Client({ connectionString: process.env.DATABASE_URL });

app.use(express.static('public'));

// ─── 1. Start the persistent LISTEN connection ─────────────────────────────
async function startListener() {
  await pgClient.connect();
  await pgClient.query('LISTEN auction_update');

  // ─── 2. Fan-out: every pg_notify fires this handler ──────────────────────
  pgClient.on('notification', (msg) => {
    try {
      const payload = JSON.parse(msg.payload);
      io.emit('update', payload); // broadcast to ALL connected sockets
    } catch (err) {
      console.error('Bad notification payload:', err);
    }
  });

  console.log('Listening on channel: auction_update');
}

// ─── 3. Auction Master Loop ───────────────────────────────────────────────
// Closes expired auctions every second.
// Good enough for an MVP. In production, replace with a pg_cron job
// so the closer runs inside the DB and survives Node restarts.
setInterval(async () => {
  try {
    await pool.query(`
      UPDATE items
      SET
        status = 'closed',
        winner_id = (
          SELECT user_id FROM bids
          WHERE item_id = items.id
          ORDER BY amount DESC
          LIMIT 1
        )
      WHERE status = 'active'
        AND ends_at < NOW()
    `);
    // The on_item_change trigger fires and sends AUCTION_CLOSED automatically
  } catch (err) {
    console.error('Auction closer error:', err);
  }
}, 1000);

// ─── Socket.io event handlers ─────────────────────────────────────────────
io.on('connection', async (socket) => {
  console.log('Client connected:', socket.id);

  // Send current state on connect
  const { rows } = await pool.query(
    "SELECT * FROM items WHERE status = 'active' ORDER BY created_at DESC"
  );
  socket.emit('init_items', rows);

  // ─── 4. Place a bid ────────────────────────────────────────────────────
  socket.on('place_bid', async ({ item_id, user_id, username, amount }) => {
    try {
      // Validate: bid must exceed current price
      const { rows: [item] } = await pool.query(
        'SELECT current_bid, status FROM items WHERE id = $1',
        [item_id]
      );
      if (!item || item.status !== 'active') {
        return socket.emit('bid_error', { message: 'Auction is not active.' });
      }
      if (parseFloat(amount) <= parseFloat(item.current_bid)) {
        return socket.emit('bid_error', { message: 'Bid must exceed current price.' });
      }

      // Insert — the trigger does the rest
      await pool.query(
        'INSERT INTO bids (item_id, user_id, username, amount) VALUES ($1, $2, $3, $4)',
        [item_id, user_id, username, amount]
      );
    } catch (err) {
      console.error('place_bid error:', err);
      socket.emit('bid_error', { message: 'Server error placing bid.' });
    }
  });

  // ─── 5. Add a new auction item ─────────────────────────────────────────
  // Node just writes to the DB. pg_notify('auction_update', NEW_ITEM) is
  // Postgres's job. Node doesn't manually broadcast anything here.
  socket.on('add_item', async ({ title, description, image_url, start_price, duration_seconds }) => {
    try {
      const ends_at = new Date(Date.now() + duration_seconds * 1000);
      await pool.query(
        `INSERT INTO items (title, description, image_url, start_price, current_bid, ends_at)
         VALUES ($1, $2, $3, $4, $4, $5)`,
        [title, description, image_url, start_price, ends_at]
      );
      // Trigger fires automatically — no io.emit here
    } catch (err) {
      console.error('add_item error:', err);
    }
  });

  // ─── 6. Discontinue an auction ─────────────────────────────────────────
  socket.on('discontinue_item', async ({ item_id }) => {
    try {
      await pool.query(
        "UPDATE items SET status = 'discontinued' WHERE id = $1 AND status = 'active'",
        [item_id]
      );
      // Trigger fires automatically — no io.emit here
    } catch (err) {
      console.error('discontinue_item error:', err);
    }
  });

  socket.on('disconnect', () => {
    console.log('Client disconnected:', socket.id);
  });
});

startListener().then(() => {
  server.listen(3000, () => console.log('Server running on http://localhost:3000'));
});
```
Notice the add_item and discontinue_item handlers. They're just DB writes. There's no io.emit call in either one. The notification to all users is entirely Postgres's job. Node stays thin.
Socket.io reaches roughly 9.7 million weekly npm downloads (Source: Snyk, 2025), making it the dominant WebSocket library in the Node ecosystem. The io.emit('update', payload) call in the notification handler is all it takes to fan out to every connected browser simultaneously.
Three Events, Three Triggers
The entire real-time behavior of this system flows through three notification types, all on the same channel. Here's how they map to Postgres triggers.
| Event | Trigger | Channel | Who listens |
|---|---|---|---|
| Bid placed | `on_bid_placed` (bids INSERT) | `auction_update` | All browsers |
| New item listed | `on_item_change` (items INSERT) | `auction_update` | All browsers |
| Auction discontinued | `on_item_change` (items UPDATE status→discontinued) | `auction_update` | All browsers |
Walk through a concrete scenario. Alice fills out the "Add Item" form and submits. Node.js runs a single INSERT INTO items. PostgreSQL's handle_item_change trigger fires immediately after and calls pg_notify('auction_update', '{"type":"NEW_ITEM",...}'). The dedicated LISTEN client in Node receives that notification and calls io.emit('update', {type: 'NEW_ITEM', ...}). Every browser renders a new auction card in real time — with a gold glow animation for three seconds.
That's it. Alice's browser doesn't get any special treatment. Every client — including Alice's — learns about the new item the same way: through Postgres.
The Browser Side
The client code is straightforward Socket.io work. What makes it feel polished is the layered handling of each event type.
On connect, the server sends init_items — an array of all active auctions. The browser renders every card immediately. From that point on, everything is driven by update events.
When a NEW_BID event arrives, the relevant card's price element flashes amber, then fades to white over 600ms. An AUCTION_CLOSED event triggers a winner modal with a confetti burst and disables the bid input. NEW_ITEM appends a new card with a 3-second gold border glow. AUCTION_DISCONTINUED re-renders the card with a red "Discontinued" badge and fires a toast notification to everyone in the room.
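That dispatch boils down to a switch on payload.type. Here's a minimal, framework-free sketch of the routing logic; the action names are illustrative (real code would touch the DOM), and the `socket` wiring at the bottom assumes the standard Socket.io browser client:

```javascript
// Sketch: route each pg_notify-driven 'update' event to a UI action.
// The returned action descriptors are hypothetical stand-ins for DOM work.
function routeUpdate(payload) {
  switch (payload.type) {
    case 'NEW_BID':
      return { action: 'flash_price', item_id: payload.item_id, amount: payload.amount };
    case 'NEW_ITEM':
      return { action: 'append_card', item_id: payload.item_id, glow_ms: 3000 };
    case 'AUCTION_CLOSED':
      return { action: 'show_winner_modal', item_id: payload.item_id, winner_id: payload.winner_id };
    case 'AUCTION_DISCONTINUED':
      return { action: 'mark_discontinued', item_id: payload.item_id };
    default:
      return { action: 'ignore' }; // unknown types are safely dropped
  }
}

// In the browser:
// socket.on('update', (payload) => applyToDom(routeUpdate(payload)));
```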
Countdown timers run client-side with setInterval. Each timer transitions through three urgency states: slate text when plenty of time remains, amber text when under two minutes, and a red pulse animation when under thirty seconds. The visual cues are intentional — urgency drives bidding.
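The urgency thresholds reduce to a small pure function. The class names below are hypothetical; only the two-minute and thirty-second cutoffs come from the description above:

```javascript
// Sketch: classify remaining milliseconds into the three urgency states.
function urgencyState(msRemaining) {
  if (msRemaining <= 30 * 1000) return 'red-pulse';  // under thirty seconds
  if (msRemaining <= 2 * 60 * 1000) return 'amber';  // under two minutes
  return 'slate';                                    // plenty of time
}

// A per-card timer would apply the class every second, e.g.:
// setInterval(() => {
//   const ms = new Date(item.ends_at) - Date.now();
//   card.querySelector('.timer').className = 'timer ' + urgencyState(ms);
// }, 1000);
```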
Every pg_notify event also appends to an activity feed sidebar. This gives the room a sense of shared presence: you can see who just outbid whom, which items just opened, and which ones closed, all in a live scrolling log.
The Honest Part — When This Doesn't Scale
This architecture has real limits. Here they are, plainly.
One connection per listener. Every Node.js process that calls LISTEN holds a dedicated, persistent Postgres connection. Pool that and it breaks — PgBouncer's transaction-mode pooling is incompatible with LISTEN/NOTIFY and requires session pooling instead (Source: brandur.org, 2024). At 200–500 concurrent Node processes, connection pressure becomes a genuine problem. The "notifier pattern" described by Brandur reduces this to one connection per process (Source: brandur.org/notifier, 2024), which helps, but it's not magic. Beyond roughly 1,000 concurrent listeners, this pattern stops being suitable (Source: pedroalonso.net, 2024).
Lock contention under high write volume. recall.ai reported that NOTIFY-induced lock contention caused commit delays reaching 1,015ms with 400+ queued concurrent transactions (Source: recall.ai blog, March 2025). Under a high write concurrency scenario — think flash sale with thousands of simultaneous bidders — this is a real concern, not a theoretical one.
No message persistence. If the Node.js listener is offline when a notification fires, the message is gone forever. For an auction system this is acceptable — the database is always the source of truth, and a reconnecting client can query current state. For guaranteed delivery semantics, you need a message queue.
Closing thought: for an auction app with hundreds of concurrent bidders, this works great. For Twitter-scale real-time? That's where Redis earns its keep.
pg_notify vs Redis Pub/Sub — Quick Comparison
| Dimension | pg_notify | Redis Pub/Sub |
|---|---|---|
| Latency | ~1–5ms | ~0.1–1ms |
| Max practical listeners | ~200–500 | Tens of thousands |
| Message persistence | None | None (by default) |
| Transaction atomicity | Yes | No |
| Operational complexity | Zero | High |
| Cost | $0 | Additional service |
Running It Yourself
Five commands and you have a working auction room.
```bash
# 1. Clone and install
git clone https://github.com/your-username/pg-auction.git
cd pg-auction && npm install

# 2. Create the database
createdb auction_db

# 3. Run the schema and triggers
psql -d auction_db -f init.sql

# 4. Set your connection string and start
DATABASE_URL=postgres://localhost/auction_db node server.js

# 5. Open two browser windows
open http://localhost:3000
```
Open two browser windows side by side. Place a bid in one. Watch it update in the other in under 5ms. Add an item in one window and watch the new card appear in both. That's the whole system working.
What I'd Do Differently in Production
A few changes would make this production-worthy.
- Replace the `setInterval` auction closer with a `pg_cron` job. The cron job runs inside the database, survives Node restarts, and scales independently of your application tier.
- Use a connection pool configured for session pooling, not transaction pooling. PgBouncer's transaction mode destroys `LISTEN` sessions.
- Add a `bid_history` table as an append-only audit log. This gives you replay capability and protects against disputes.
- Put the Node server behind a load balancer with sticky sessions. Socket.io's default in-memory store doesn't share state across processes — sticky sessions or the `socket.io-redis` adapter are required for multi-instance deployments.
- Implement row-level locking on the `items` table during bid validation. A `SELECT ... FOR UPDATE` in the bid handler prevents race conditions when two users submit the same highest bid within milliseconds of each other.
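That last point can be sketched directly in SQL. The outline below shows the shape of the locking transaction, with parameter placeholders as node-postgres would send them:

```sql
-- Sketch: serialize concurrent bids on the same item with a row lock.
-- $1 = item_id, $2 = user_id, $3 = username, $4 = amount
BEGIN;

-- Lock the item row; a second bidder on the same item blocks here
-- until this transaction commits or rolls back.
SELECT current_bid, status
FROM items
WHERE id = $1
FOR UPDATE;

-- Application code checks: status = 'active' AND $4 > current_bid.
-- If the check fails, ROLLBACK and emit a bid_error to the client.

INSERT INTO bids (item_id, user_id, username, amount)
VALUES ($1, $2, $3, $4);

COMMIT;  -- the NEW_BID notification is delivered only now
```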
Final Thoughts
The point isn't "never use Redis." Redis is genuinely excellent at what it does, and at scale, it's the right choice. The point is that Postgres is dramatically more powerful than most developers give it credit for. pg_notify is a production-grade pub/sub system that ships for free with the database you almost certainly already have running.
This whole system — live bids, item broadcasts, countdown timers, discontinuation events — runs with zero additional infrastructure. One database. One application server. Done.
This project was directly inspired by Adam's article No Redis, No Kafka — Just Postgres, I Built Chat. If you haven't read it, go read it first — it's the foundation this auction system is built on.
Check out the full source code on GitHub. If this was useful, share it with someone who's about to spin up a Redis instance they don't need.
FAQ
What is PostgreSQL LISTEN/NOTIFY?
LISTEN and NOTIFY are built-in PostgreSQL commands for asynchronous pub/sub messaging. A client issues LISTEN channel_name to subscribe. Any session can then call NOTIFY channel_name, 'payload' or pg_notify() to broadcast a message. Notifications are delivered to all subscribers only when the sending transaction commits (Source: PostgreSQL official docs, postgresql.org/docs/current/sql-notify.html).
Can PostgreSQL replace Redis for pub/sub?
For low-to-medium concurrency workloads — under roughly 200–500 concurrent listeners — yes, pg_notify is a practical Redis Pub/Sub alternative. It adds transaction atomicity that Redis lacks. However, Redis handles tens of thousands of concurrent subscribers and sub-millisecond latency that pg_notify can't match. Choose based on your actual scale (Source: brandur.org, 2024; pedroalonso.net, 2024).
What is the payload limit for pg_notify?
PostgreSQL's NOTIFY payload has a hard limit of 8,000 bytes (Source: PostgreSQL official docs). The total notification queue is capped at 8 GB. Best practice is to send only the row ID and key delta fields in the payload, then let the client fetch full data if needed. Embedding image URLs or large text blobs in the payload will cause errors.
How does Socket.io connect to PostgreSQL notifications?
A dedicated pg.Client instance (not a pool) issues LISTEN auction_update. When Postgres fires pg_notify, the client's notification event handler receives the payload. That handler calls io.emit('update', parsedPayload), broadcasting to all connected Socket.io clients simultaneously. Socket.io reaches ~9.7 million weekly npm downloads, making it the dominant WebSocket abstraction in Node.js (Source: Snyk, 2025).
What is the difference between AUCTION_CLOSED and AUCTION_DISCONTINUED?
AUCTION_CLOSED fires when an auction's timer expires naturally. The system sets a winner, records the final price, and broadcasts the result. AUCTION_DISCONTINUED fires when an administrator manually cancels an active auction before it ends. No winner is set. The browser re-renders the card with a red "Discontinued" badge and shows a toast notification to all connected users.

---
title: "No Redis, No Kafka: I Built a Real-Time Auction System with Just PostgreSQL"
description: "How I used PostgreSQL's pg_notify/LISTEN, Node.js, and Socket.io to build a full real-time auction system — live bids, countdown timers, new item broadcasts, and auction discontinuation — with zero additional infrastructure."
---
Live Demo: postgres-live-auction
