I Replaced Redis with PostgreSQL (And It's Faster)

I had a typical web app stack:

  • PostgreSQL for persistent data
  • Redis for caching, pub/sub, and background jobs

Two databases. Two things to manage. Two points of failure.

Then I realized: PostgreSQL can do everything Redis does.

I ripped out Redis entirely. Here's what happened.


The Setup: What I Was Using Redis For

Before the change, Redis handled three things:

1. Caching (70% of usage)

// Cache API responses
await redis.set(`user:${id}`, JSON.stringify(user), 'EX', 3600);

2. Pub/Sub (20% of usage)

// Real-time notifications
redis.publish('notifications', JSON.stringify({ userId, message }));

3. Background Job Queue (10% of usage)

// Using Bull/BullMQ
queue.add('send-email', { to, subject, body });

The pain points:

  • Two databases to backup
  • Redis uses RAM (expensive at scale)
  • Redis persistence is... complicated
  • Network hop between Postgres and Redis

Why I Considered Replacing Redis

Reason #1: Cost

My Redis setup:

  • AWS ElastiCache: $45/month (2GB)
  • Growing to 5GB would cost $110/month

PostgreSQL:

  • Already paying for RDS: $50/month (20GB storage)
  • Adding 5GB of data: $0.50/month

Potential savings: ~$100/month

Reason #2: Operational Complexity

With Redis:

Postgres backup ✅
Redis backup ❓ (RDB? AOF? Both?)
Postgres monitoring ✅
Redis monitoring ❓
Postgres failover ✅
Redis Sentinel/Cluster ❓

Without Redis:

Postgres backup ✅
Postgres monitoring ✅
Postgres failover ✅

One less moving part.

Reason #3: Data Consistency

The classic problem:

// Update database
await db.query('UPDATE users SET name = $1 WHERE id = $2', [name, id]);

// Invalidate cache
await redis.del(`user:${id}`);

// ⚠️ What if Redis is down?
// ⚠️ What if this fails?
// Now cache and DB are out of sync

With everything in Postgres: transactions solve this.
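As a sketch, here's what that looks like with node-postgres (assuming a Pool named pool and the cache table introduced in the next section):

// Sketch: update + cache invalidation in one transaction, so they
// succeed or fail together (assumes the cache table shown below)
const client = await pool.connect();
try {
  await client.query('BEGIN');
  await client.query('UPDATE users SET name = $1 WHERE id = $2', [name, id]);
  await client.query('DELETE FROM cache WHERE key = $1', [`user:${id}`]);
  await client.query('COMMIT');
} catch (err) {
  await client.query('ROLLBACK');
  throw err;
} finally {
  client.release();
}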


PostgreSQL Feature #1: Caching with UNLOGGED Tables

Redis:

await redis.set('session:abc123', JSON.stringify(sessionData), 'EX', 3600);

PostgreSQL:

CREATE UNLOGGED TABLE cache (
  key TEXT PRIMARY KEY,
  value JSONB NOT NULL,
  expires_at TIMESTAMPTZ NOT NULL
);

CREATE INDEX idx_cache_expires ON cache(expires_at);

Insert:

INSERT INTO cache (key, value, expires_at)
VALUES ($1, $2, NOW() + INTERVAL '1 hour')
ON CONFLICT (key) DO UPDATE
  SET value = EXCLUDED.value,
      expires_at = EXCLUDED.expires_at;

Read:

SELECT value FROM cache
WHERE key = $1 AND expires_at > NOW();

Cleanup (run periodically):

DELETE FROM cache WHERE expires_at < NOW();

What is UNLOGGED?

UNLOGGED tables:

  • Skip the Write-Ahead Log (WAL)
  • Much faster writes
  • Don't survive crashes (perfect for cache!)

Performance:

Redis SET: 0.05ms
Postgres UNLOGGED INSERT: 0.08ms

Close enough for caching.
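The cleanup query needs a scheduler. A minimal sketch with a plain timer (assuming a node-postgres Pool named pool; pg_cron or a system cron job works just as well):

// Sketch: evict expired cache rows once a minute
setInterval(async () => {
  try {
    await pool.query('DELETE FROM cache WHERE expires_at < NOW()');
  } catch (err) {
    console.error('cache cleanup failed:', err);
  }
}, 60 * 1000);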


PostgreSQL Feature #2: Pub/Sub with LISTEN/NOTIFY

This is where it gets interesting.

PostgreSQL has native pub/sub that most developers don't know about.

Redis Pub/Sub

// Publisher
redis.publish('notifications', JSON.stringify({ userId: 123, msg: 'Hello' }));

// Subscriber
redis.subscribe('notifications');
redis.on('message', (channel, message) => {
  console.log(message);
});

PostgreSQL Pub/Sub

-- Publisher
NOTIFY notifications, '{"userId": 123, "msg": "Hello"}';

// Subscriber (Node.js with pg)
const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();

await client.query('LISTEN notifications');

client.on('notification', (msg) => {
  const payload = JSON.parse(msg.payload);
  console.log(payload);
});

Performance comparison:

Redis pub/sub latency: 1-2ms
Postgres NOTIFY latency: 2-5ms

Slightly slower, but:

  • No extra infrastructure
  • Can use in transactions
  • Can combine with queries
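That second point is the underrated one: a notification sent inside a transaction is only delivered if the transaction commits. A minimal sketch (assuming a connected pg client and a hypothetical messages table):

// Sketch: NOTIFY is transactional; subscribers only see it on COMMIT
await client.query('BEGIN');
await client.query(
  'INSERT INTO messages (user_id, body) VALUES ($1, $2)', // hypothetical table
  [userId, body]
);
await client.query('SELECT pg_notify($1, $2)', [
  'notifications',
  JSON.stringify({ userId, body }),
]);
await client.query('COMMIT'); // a ROLLBACK would drop the notification too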

Real-World Example: Live Tail

In my log management app, I needed real-time log streaming.

With Redis:

// When new log arrives
await db.query('INSERT INTO logs ...');
await redis.publish('logs:new', JSON.stringify(log));

// Frontend listens
redis.subscribe('logs:new');

Problem: Two operations. What if publish fails?

With PostgreSQL:

CREATE FUNCTION notify_new_log() RETURNS TRIGGER AS $$
BEGIN
  PERFORM pg_notify('logs_new', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_inserted
AFTER INSERT ON logs
FOR EACH ROW EXECUTE FUNCTION notify_new_log();

Now it's atomic. Insert and notify happen together or not at all. (One caveat: NOTIFY payloads are capped at about 8KB by default, so for large rows it's safer to send just an ID and let subscribers fetch the row.)

// Frontend (via SSE)
app.get('/logs/stream', async (req, res) => {
  const client = await pool.connect();

  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
  });

  await client.query('LISTEN logs_new');

  client.on('notification', (msg) => {
    res.write(`data: ${msg.payload}\n\n`);
  });

  // Release the connection when the browser disconnects
  req.on('close', async () => {
    await client.query('UNLISTEN logs_new');
    client.release();
  });
});

Result: Real-time log streaming with zero Redis.


PostgreSQL Feature #3: Job Queues with SKIP LOCKED

Redis (using Bull/BullMQ):

queue.add('send-email', { to, subject, body });

queue.process('send-email', async (job) => {
  await sendEmail(job.data);
});

PostgreSQL:

CREATE TABLE jobs (
  id BIGSERIAL PRIMARY KEY,
  queue TEXT NOT NULL,
  payload JSONB NOT NULL,
  attempts INT DEFAULT 0,
  max_attempts INT DEFAULT 3,
  scheduled_at TIMESTAMPTZ DEFAULT NOW(),
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_jobs_queue ON jobs(queue, scheduled_at) 
WHERE attempts < max_attempts;

Enqueue:

INSERT INTO jobs (queue, payload)
VALUES ('send-email', '{"to": "user@example.com", "subject": "Hi"}');

Worker (dequeue):

WITH next_job AS (
  SELECT id FROM jobs
  WHERE queue = $1
    AND attempts < max_attempts
    AND scheduled_at <= NOW()
  ORDER BY scheduled_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
UPDATE jobs
SET attempts = attempts + 1
FROM next_job
WHERE jobs.id = next_job.id
RETURNING jobs.*;

The magic: FOR UPDATE SKIP LOCKED

This turns PostgreSQL into a non-blocking job queue:

  • Multiple workers can pull jobs concurrently
  • No job is processed twice
  • If a worker crashes, job becomes available again

Performance:

Redis BRPOP: 0.1ms
Postgres SKIP LOCKED: 0.3ms

Negligible difference for most workloads.
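A worker is then just a polling loop around that query. A minimal sketch (assuming a pg Pool named pool, the dequeue SQL above stored in dequeueSql, and a hypothetical sendEmail handler):

// Sketch: worker loop around the SKIP LOCKED dequeue query
async function worker(queueName) {
  for (;;) {
    const { rows } = await pool.query(dequeueSql, [queueName]);
    const job = rows[0];
    if (!job) {
      await new Promise((r) => setTimeout(r, 1000)); // queue empty: back off
      continue;
    }
    try {
      await sendEmail(job.payload);
      await pool.query('DELETE FROM jobs WHERE id = $1', [job.id]); // done
    } catch (err) {
      // attempts was already incremented by the dequeue, so the job
      // will be retried until it reaches max_attempts
      console.error(`job ${job.id} failed (attempt ${job.attempts}):`, err);
    }
  }
}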


PostgreSQL Feature #4: Rate Limiting

Redis (classic rate limiter):

const key = `ratelimit:${userId}`;
const count = await redis.incr(key);
if (count === 1) {
  await redis.expire(key, 60); // 60 seconds
}

if (count > 100) {
  throw new Error('Rate limit exceeded');
}

PostgreSQL:

CREATE TABLE rate_limits (
  user_id INT PRIMARY KEY,
  request_count INT DEFAULT 0,
  window_start TIMESTAMPTZ DEFAULT NOW()
);

-- Check and increment
WITH current AS (
  SELECT 
    request_count,
    CASE 
      WHEN window_start < NOW() - INTERVAL '1 minute'
      THEN 1  -- Reset counter
      ELSE request_count + 1
    END AS new_count
  FROM rate_limits
  WHERE user_id = $1
  FOR UPDATE
)
UPDATE rate_limits
SET 
  request_count = (SELECT new_count FROM current),
  window_start = CASE
    WHEN window_start < NOW() - INTERVAL '1 minute'
    THEN NOW()
    ELSE window_start
  END
WHERE user_id = $1
RETURNING request_count;

Or, even simpler, with a rolling request log:

CREATE TABLE api_requests (
  user_id INT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Check rate limit
SELECT COUNT(*) FROM api_requests
WHERE user_id = $1
  AND created_at > NOW() - INTERVAL '1 minute';

-- If under limit, insert
INSERT INTO api_requests (user_id) VALUES ($1);

-- Cleanup old requests periodically
DELETE FROM api_requests WHERE created_at < NOW() - INTERVAL '5 minutes';
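As a sketch, this log-table variant wraps neatly into Express-style middleware (assuming a pg Pool named pool, a req.userId set by auth, and a hypothetical limit of 100 requests per minute):

// Sketch: rate-limit middleware backed by the api_requests table
async function rateLimit(req, res, next) {
  const { rows } = await pool.query(
    `SELECT COUNT(*) AS n FROM api_requests
     WHERE user_id = $1 AND created_at > NOW() - INTERVAL '1 minute'`,
    [req.userId]
  );
  if (Number(rows[0].n) >= 100) {
    return res.status(429).send('Rate limit exceeded');
  }
  await pool.query('INSERT INTO api_requests (user_id) VALUES ($1)', [req.userId]);
  next();
}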

When Postgres is better:

  • Need to rate limit based on complex logic (not just counts)
  • Want rate limit data in same transaction as business logic

When Redis is better:

  • Need sub-millisecond rate limiting
  • Extremely high throughput (millions of requests/sec)

PostgreSQL Feature #5: Sessions with JSONB

Redis:

await redis.set(`session:${sessionId}`, JSON.stringify(sessionData), 'EX', 86400);

PostgreSQL:

CREATE TABLE sessions (
  id TEXT PRIMARY KEY,
  data JSONB NOT NULL,
  expires_at TIMESTAMPTZ NOT NULL
);

CREATE INDEX idx_sessions_expires ON sessions(expires_at);

-- Insert/Update
INSERT INTO sessions (id, data, expires_at)
VALUES ($1, $2, NOW() + INTERVAL '24 hours')
ON CONFLICT (id) DO UPDATE
  SET data = EXCLUDED.data,
      expires_at = EXCLUDED.expires_at;

-- Read
SELECT data FROM sessions
WHERE id = $1 AND expires_at > NOW();

Bonus: JSONB Operators

You can query inside the session:

-- Find all sessions for a specific user
SELECT * FROM sessions
WHERE data->>'userId' = '123';

-- Find sessions with specific role
SELECT * FROM sessions
WHERE data->'user'->>'role' = 'admin';

You can't do this with Redis!
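One caveat: those lookups will scan the table unless the expression is indexed. A one-time migration sketch (index name hypothetical):

// Sketch: expression index so lookups by data->>'userId' use an index scan
await pool.query(
  `CREATE INDEX IF NOT EXISTS idx_sessions_user_id
   ON sessions ((data->>'userId'))`
);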


Real-World Benchmarks

I ran benchmarks on my production dataset:

Test Setup

  • Hardware: AWS RDS db.t3.medium (2 vCPU, 4GB RAM)
  • Dataset: 1 million cache entries, 10k sessions
  • Tool: pgbench (custom scripts)

Results

Operation     Redis     PostgreSQL    Difference
Cache SET     0.05ms    0.08ms        +60% slower
Cache GET     0.04ms    0.06ms        +50% slower
Pub/Sub       1.2ms     3.1ms         +158% slower
Queue push    0.08ms    0.15ms        +87% slower
Queue pop     0.12ms    0.31ms        +158% slower

PostgreSQL is slower... but:

  • All operations still under 1ms
  • Eliminates network hop to Redis
  • Reduces infrastructure complexity

Combined Operations (The Real Win)

Scenario: Insert data + invalidate cache + notify subscribers

With Redis:

await db.query('INSERT INTO posts ...');       // 2ms
await redis.del('posts:latest');                // 1ms (network hop)
await redis.publish('posts:new', data);         // 1ms (network hop)
// Total: ~4ms

With PostgreSQL:

BEGIN;
INSERT INTO posts ...;                          -- 2ms
DELETE FROM cache WHERE key = 'posts:latest';  -- 0.1ms (same connection)
NOTIFY posts_new, '...';                        -- 0.1ms (same connection)
COMMIT;
-- Total: ~2.2ms

PostgreSQL is faster when operations are combined.


When to Keep Redis

Don't replace Redis if:

1. You Need Extreme Performance

Redis: 100,000+ ops/sec (single instance)
Postgres: 10,000-50,000 ops/sec

If you're doing millions of cache reads/sec, keep Redis.

2. You're Using Redis-Specific Data Structures

Redis has:

  • Sorted sets (leaderboards)
  • HyperLogLog (unique count estimates)
  • Geospatial indexes
  • Streams (advanced pub/sub)

Postgres equivalents exist but are clunkier:

-- Leaderboard in Postgres (slower)
SELECT user_id, score
FROM leaderboard
ORDER BY score DESC
LIMIT 10;

-- vs Redis
ZREVRANGE leaderboard 0 9 WITHSCORES

3. You Have a Separate Caching Layer Requirement

If your architecture mandates a separate cache tier (e.g., microservices), keep Redis.


Migration Strategy

Don't rip out Redis overnight. Here's how I did it:

Phase 1: Side-by-Side (Week 1)

// Write to both
await redis.set(key, value);
await pg.query('INSERT INTO cache ...');

// Read from Redis (still primary)
let data = await redis.get(key);

Monitor: Compare hit rates, latency.
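A sketch of that dual-write wrapper (redis, pgCache, and the metrics client are hypothetical names):

// Sketch: Phase 1 dual-write; Redis stays the source of truth
async function cacheSet(key, value, ttlSeconds) {
  await redis.set(key, JSON.stringify(value), 'EX', ttlSeconds);
  try {
    await pgCache.set(key, value, ttlSeconds); // best-effort during trial
  } catch (err) {
    metrics.increment('pg_cache.write_error'); // watch this during Phase 1
  }
}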

Phase 2: Read from Postgres (Week 2)

// Try Postgres first
const result = await pg.query('SELECT value FROM cache WHERE key = $1', [key]);
let data = result.rows[0]?.value;

// Fallback to Redis
if (!data) {
  data = await redis.get(key);
}

Monitor: Error rates, performance.

Phase 3: Write to Postgres Only (Week 3)

// Only write to Postgres
await pg.query('INSERT INTO cache ...');

Monitor: Everything still works?

Phase 4: Remove Redis (Week 4)

# Turn off Redis
# Watch for errors
# Nothing breaks? Success!

Code Examples: Complete Implementation

Cache Module (PostgreSQL)

// cache.js
class PostgresCache {
  constructor(pool) {
    this.pool = pool;
  }

  async get(key) {
    const result = await this.pool.query(
      'SELECT value FROM cache WHERE key = $1 AND expires_at > NOW()',
      [key]
    );
    return result.rows[0]?.value;
  }

  async set(key, value, ttlSeconds = 3600) {
    // parameterize the TTL rather than interpolating it into the SQL string
    await this.pool.query(
      `INSERT INTO cache (key, value, expires_at)
       VALUES ($1, $2, NOW() + make_interval(secs => $3))
       ON CONFLICT (key) DO UPDATE
         SET value = EXCLUDED.value,
             expires_at = EXCLUDED.expires_at`,
      [key, value, ttlSeconds]
    );
  }

  async delete(key) {
    await this.pool.query('DELETE FROM cache WHERE key = $1', [key]);
  }

  async cleanup() {
    await this.pool.query('DELETE FROM cache WHERE expires_at < NOW()');
  }
}

module.exports = PostgresCache;
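Usage sketch (assuming a pg Pool named pool):

const cache = new PostgresCache(pool);

await cache.set('user:123', { name: 'Ada' }, 600); // 10-minute TTL
const user = await cache.get('user:123');          // undefined once expired

// run the eviction periodically
setInterval(() => cache.cleanup().catch(console.error), 60 * 1000);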

Pub/Sub Module

// pubsub.js
class PostgresPubSub {
  constructor(pool) {
    this.pool = pool;
    this.listeners = new Map();
  }

  async publish(channel, message) {
    const payload = JSON.stringify(message);
    await this.pool.query('SELECT pg_notify($1, $2)', [channel, payload]);
  }

  async subscribe(channel, callback) {
    const client = await this.pool.connect();

    // LISTEN can't take bind parameters, so only pass trusted channel names
    await client.query(`LISTEN ${channel}`);

    client.on('notification', (msg) => {
      if (msg.channel === channel) {
        callback(JSON.parse(msg.payload));
      }
    });

    this.listeners.set(channel, client);
  }

  async unsubscribe(channel) {
    const client = this.listeners.get(channel);
    if (client) {
      await client.query(`UNLISTEN ${channel}`);
      client.release();
      this.listeners.delete(channel);
    }
  }
}

module.exports = PostgresPubSub;
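Usage sketch:

const pubsub = new PostgresPubSub(pool);

await pubsub.subscribe('notifications', (msg) => {
  console.log('received:', msg);
});

await pubsub.publish('notifications', { userId: 123, msg: 'Hello' });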

Job Queue Module

// queue.js
class PostgresQueue {
  constructor(pool) {
    this.pool = pool;
  }

  async enqueue(queue, payload, scheduledAt = new Date()) {
    await this.pool.query(
      'INSERT INTO jobs (queue, payload, scheduled_at) VALUES ($1, $2, $3)',
      [queue, payload, scheduledAt]
    );
  }

  async dequeue(queue) {
    const result = await this.pool.query(
      `WITH next_job AS (
        SELECT id FROM jobs
        WHERE queue = $1
          AND attempts < max_attempts
          AND scheduled_at <= NOW()
        ORDER BY scheduled_at
        LIMIT 1
        FOR UPDATE SKIP LOCKED
      )
      UPDATE jobs
      SET attempts = attempts + 1
      FROM next_job
      WHERE jobs.id = next_job.id
      RETURNING jobs.*`,
      [queue]
    );

    return result.rows[0];
  }

  async complete(jobId) {
    await this.pool.query('DELETE FROM jobs WHERE id = $1', [jobId]);
  }

  async fail(jobId, error) {
    await this.pool.query(
      `UPDATE jobs
       SET attempts = max_attempts,
           payload = payload || jsonb_build_object('error', $2)
       WHERE id = $1`,
      [jobId, error.message]
    );
  }
}

module.exports = PostgresQueue;
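Usage sketch (sendEmail is a hypothetical handler; a real worker would wrap this in the polling loop shown earlier):

const queue = new PostgresQueue(pool);

await queue.enqueue('send-email', { to: 'user@example.com', subject: 'Hi' });

const job = await queue.dequeue('send-email'); // undefined when queue is empty
if (job) {
  try {
    await sendEmail(job.payload);
    await queue.complete(job.id);
  } catch (err) {
    await queue.fail(job.id, err); // gives up immediately by maxing attempts
  }
}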

Performance Tuning Tips

1. Use Connection Pooling

const { Pool } = require('pg');

const pool = new Pool({
  max: 20,  // Max connections
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
});

2. Add Proper Indexes

-- NOW() isn't immutable, so it can't appear in an index predicate, and
-- cache(key) is already covered by its PRIMARY KEY. Index pending jobs instead:
CREATE INDEX CONCURRENTLY idx_jobs_pending ON jobs(queue, scheduled_at)
  WHERE attempts < max_attempts;

3. Tune PostgreSQL Config

# postgresql.conf
shared_buffers = 2GB           # 25% of RAM
effective_cache_size = 6GB     # 75% of RAM
work_mem = 50MB                # For complex queries
maintenance_work_mem = 512MB   # For VACUUM

4. Regular Maintenance

-- Run daily
VACUUM ANALYZE cache;
VACUUM ANALYZE jobs;

-- Or enable autovacuum (recommended)
ALTER TABLE cache SET (autovacuum_vacuum_scale_factor = 0.1);

The Results: 3 Months Later

What I saved:

  • ✅ $100/month (no more ElastiCache)
  • ✅ 50% reduction in backup complexity
  • ✅ One less service to monitor
  • ✅ Simpler deployment (one less dependency)

What I lost:

  • ❌ ~0.5ms latency on cache operations
  • ❌ Redis's exotic data structures (didn't need them)

Would I do it again? Yes, for this use case.

Would I recommend it universally? No.


Decision Matrix

Replace Redis with Postgres if:

  • ✅ You're using Redis for simple caching/sessions
  • ✅ Cache hit rate is < 95% (lots of writes)
  • ✅ You want transactional consistency
  • ✅ You're okay with 0.1-1ms slower operations
  • ✅ You're a small team with limited ops resources

Keep Redis if:

  • ❌ You need 100k+ ops/second
  • ❌ You use Redis data structures (sorted sets, etc.)
  • ❌ You have dedicated ops team
  • ❌ Sub-millisecond latency is critical
  • ❌ You're doing geo-replication


TL;DR

I replaced Redis with PostgreSQL for:

  1. Caching → UNLOGGED tables
  2. Pub/Sub → LISTEN/NOTIFY
  3. Job queues → SKIP LOCKED
  4. Sessions → JSONB tables

Results:

  • Saved $100/month
  • Reduced operational complexity
  • Slightly slower (0.1-1ms) but acceptable
  • Transactional consistency guaranteed

When to do this:

  • Small to medium apps
  • Simple caching needs
  • Want to reduce moving parts

When NOT to do this:

  • High-performance requirements (100k+ ops/sec)
  • Using Redis-specific features
  • Have dedicated ops team

Have you replaced Redis with Postgres (or vice versa)? What was your experience? Drop your benchmarks in the comments! 👇

P.S. - Want a follow-up on "PostgreSQL Hidden Features" or "When Redis is Actually Better"? Let me know!

Top comments (38)

PEACEBINFLOW

This is a really solid write-up, and what I appreciate most is that you’re not doing the usual “Postgres good / Redis bad” bait — you’re showing where the boundary actually is.

The key insight for me isn’t that Postgres can replace Redis, it’s that once you collapse the network hop and regain transactional locality, the system changes shape. A lot of Redis usage in small–mid systems isn’t about raw speed, it’s about coordination — cache invalidation, job ownership, consistency between writes and side effects. And those problems are fundamentally easier when everything lives inside the same transactional envelope.

Your examples around LISTEN/NOTIFY + triggers and SKIP LOCKED are especially on point. That’s not “Postgres pretending to be Redis” — that’s Postgres doing what it’s actually very good at: coordinating state transitions safely under concurrency. The fact that it’s a few hundred microseconds slower per op just doesn’t matter when you remove retries, out-of-sync caches, and failure modes.

I also like that you explicitly call out where Redis still wins. Too many posts stop at “it worked for me,” but your decision matrix makes it clear this is an architectural trade, not a dogma. High-throughput, specialized data structures, or hard latency SLAs? Redis earns its keep. Otherwise, it’s often just another moving part we inherited by default.

If there’s a hidden lesson here, it’s this:
most systems don’t fail because one component is slow — they fail because coordination spans too many systems. Reducing surface area beats shaving microseconds almost every time.

Curious to see that follow-up on Postgres “boring power features.” There’s a lot of quiet capability there that only shows up once you start treating the database as part of the system, not just storage.

Polliog

This is exactly the kind of insight I was hoping to spark with this article. You've crystallized something I was trying to express but didn't quite articulate as clearly: the win isn't raw speed, it's transactional locality.

Your point about coordination vs. performance is on point. In most small-to-mid systems, the real pain isn't that cache reads take 0.05ms instead of 0.04ms, it's the mental overhead and failure modes that come from managing state across two systems. The cache invalidation bugs, the "what if Redis is down" logic, the retry mechanisms, the monitoring of two separate failure domains. That cognitive load adds up.

What I've found is that once you remove that split-brain architecture, the entire system becomes more reasonable. You can actually reason about what state exists at any given moment because it's all in one transactional boundary. The ACID guarantees aren't just academic, they eliminate entire classes of bugs.

Re: the "boring power features", absolutely planning that follow-up. I'm thinking of covering things like LATERAL joins (underrated!), recursive CTEs for hierarchical data, advisory locks for distributed coordination, and some of the lesser-known JSONB operators that can replace entire microservices. Postgres has this incredible depth that only reveals itself when you stop treating it as "just a SQL database."

Thanks for such a thoughtful response. This is exactly the kind of technical discussion I was hoping for.

PEACEBINFLOW

Appreciate this — and I'm glad you called out the cognitive load part, because that's the silent killer nobody budgets for.

One thing I’ve seen teams underestimate is how often “Redis as a separate tier” turns into policy-by-convention. People start using it for anything that feels stateful or time-based (rate limits, queues, sessions, dedupe, feature flags), and now you’ve got two systems both acting like they’re “the truth,” just with different failure behavior. That’s where the weird bugs live.

Also +1 on “reasoning about state.” Once you move coordination into Postgres, you don’t just simplify infra — you simplify thinking. Being able to say “this write, this invalidate, this notify” all happened together (or didn’t happen at all) is an underrated kind of speed.

For the follow-up post you’re planning, I’d love to see one section that’s basically: “Postgres can do it, but here’s the fine print.” Stuff like:

LISTEN/NOTIFY delivery semantics (great for signals, not a guaranteed message bus — what patterns did you use to avoid missed events?)

queue-table bloat & vacuum/autovacuum tuning (SKIP LOCKED is elite, but the table still tells the truth over time)

advisory locks vs row locks (when you’d pick each, and how you avoid turning locks into accidental global contention)

UNLOGGED cache recovery behavior (what “perfect for cache” actually means operationally after a crash)

Because that’s the real credibility boost: not “Postgres replaces Redis,” but “here’s exactly where Postgres wins, and here’s how to avoid the foot-guns when you lean into it.”

If you drop that “boring power features” follow-up, I’ll 100% be in the comments again — especially if you get into LATERAL + JSONB + indexes combos. That’s where Postgres stops being a database and starts being a Swiss army knife.

Manish Deshpande

Beautifully explained, my friend. I'm totally in line with your thoughts. Reducing inter-component coordination and limiting the transaction envelope are the key factors for a unified architecture to be in place.

Tylus J Dawkins

Thanks Chat GPT. Dead internet theory going strong.

Art light

This is a really solid breakdown—clear, practical, and grounded in real numbers, which I appreciate a lot. I like how you didn’t oversell Postgres but showed where it genuinely simplifies things and where Redis still wins. The transactional consistency angle is especially convincing, and it makes me rethink how often Redis is added by default. I’d be very interested to see a follow-up on more “hidden” Postgres features or edge cases where this approach starts to bend.
😎

Ben Sinclair

We just did something very similar at my work. We had a very agnostic system which used different back-ends for everything, and Redis and Postgres were working side-by-side. I think part of the reason was some legacy concern about pricing (this was all on Azure). Anyway, it was over-engineered and took a lot more brainpower to debug, and there were no longer any cost considerations between one and the other, so we moved everything to Postgres. Much better.

Polliog

Exactly! Reducing the 'moving parts' in a stack is underrated. It’s not just about the monthly bill; it’s about making the system easier to reason about. Moving everything to Postgres is a huge relief for maintenance and monitoring. Thanks for sharing your experience.

Harry

Thank you for this fantastic well written, well organized, solutions providing and focused article! I appreciate the detailed query examples. Thank you for doing the comparisons, including cost. This was a refreshing down to earth piece that allowed me to learn about what is possible, what I need to do, and how I can effectively start addressing several of the issues looming in the horizon of my one man web service project. I appreciate the loving detail and care you put into this article. Thank you, @polliog!

I would like to see "PostgreSQL Hidden Features"

Polliog

Thank you so much for the kind words! I'm really glad the article resonated with you, especially as a fellow solo developer working on web services. Writing detailed query examples and cost comparisons was important to me because I know how frustrating it can be to read "just use X instead of Y" without any concrete guidance on how to actually make the switch. When you're managing everything yourself, you need practical, actionable information - not just theory. I'm definitely planning to write the "PostgreSQL Hidden Features" follow-up! There's so much quiet power in Postgres that doesn't get enough attention, things like LATERAL joins, recursive CTEs, full-text search, and some really clever ways to use window functions and custom aggregates. If there are specific features you'd like me to cover, let me know! Thanks again for taking the time to share such thoughtful feedback. It means a lot!

Harry

Thank you, @polliog! What an unexpected and welcome response. I am curious to find out about the "PostgreSQL Hidden Features" and what is possible. (FYI. I work with MySQL, and have not used PostgreSQL.) Again, I really enjoyed your post! Thank you. :)

José David Ureña Torres

Great article. Those benchmarks are really helpful to understand the pros and cons of using Postgres and Redis for this use case.

Mudaser Ali

Thank you so much for writing this detailed blog!

Doug Wilson

I've been meaning to explore this. Really helpful examples. Thank you!

Sophia Devy

This post provides an insightful look into the decision to replace Redis with PostgreSQL for various common use cases like caching, pub/sub, and job queues. The author highlights the advantages of simplifying the stack by consolidating everything into PostgreSQL, which offers performance close to Redis for most tasks, but with added benefits of transactional consistency, reduced operational complexity, and significant cost savings.

By utilizing PostgreSQL features like UNLOGGED tables for caching, LISTEN/NOTIFY for pub/sub, and SKIP LOCKED for job queues, the author effectively eliminates the need for Redis in their architecture. The tradeoff is a slight increase in latency (0.1-1ms), but this is acceptable for many use cases. The migration also reduces the need for a separate Redis infrastructure, backups, and monitoring, which simplifies deployment and maintenance.

However, the post also makes it clear that this solution is not suitable for everyone. Redis remains the better option for high-performance applications, complex data structures, or cases requiring sub-millisecond latency.

Overall, this post is a great resource for anyone looking to simplify their tech stack without compromising on performance, especially for smaller teams or applications with simple caching and job queue requirements.

Meir Meir

A m a z i n g. Thanks

Menard Maranan

I'll bookmark this one
