DEV Community


I Replaced Redis with PostgreSQL (And It's Faster)

Polliog on January 09, 2026

I had a typical web app stack: PostgreSQL for persistent data; Redis for caching, pub/sub, and background jobs. Two databases. Two things to mana...
PEACEBINFLOW

This is a really solid write-up, and what I appreciate most is that you’re not doing the usual “Postgres good / Redis bad” bait — you’re showing where the boundary actually is.

The key insight for me isn’t that Postgres can replace Redis, it’s that once you collapse the network hop and regain transactional locality, the system changes shape. A lot of Redis usage in small–mid systems isn’t about raw speed, it’s about coordination — cache invalidation, job ownership, consistency between writes and side effects. And those problems are fundamentally easier when everything lives inside the same transactional envelope.

Your examples around LISTEN/NOTIFY + triggers and SKIP LOCKED are especially on point. That’s not “Postgres pretending to be Redis” — that’s Postgres doing what it’s actually very good at: coordinating state transitions safely under concurrency. The fact that it’s a few hundred microseconds slower per op just doesn’t matter when you remove retries, out-of-sync caches, and failure modes.
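For readers who haven't seen the idiom, here is a minimal sketch of the SKIP LOCKED job-claim pattern the comment refers to (the table and column names are illustrative, not taken from the article):

```sql
-- Hypothetical jobs table; names are illustrative.
CREATE TABLE jobs (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL,
    status  text  NOT NULL DEFAULT 'pending'
);

-- Each worker claims one pending job. FOR UPDATE SKIP LOCKED makes
-- concurrent workers skip rows already locked by another transaction,
-- so no two workers ever claim the same job.
UPDATE jobs
SET status = 'running'
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending'
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;
```

The claim and the state transition happen in one statement, which is exactly the "coordinating state transitions safely under concurrency" point above.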

I also like that you explicitly call out where Redis still wins. Too many posts stop at “it worked for me,” but your decision matrix makes it clear this is an architectural trade, not a dogma. High-throughput, specialized data structures, or hard latency SLAs? Redis earns its keep. Otherwise, it’s often just another moving part we inherited by default.

If there’s a hidden lesson here, it’s this:
most systems don’t fail because one component is slow — they fail because coordination spans too many systems. Reducing surface area beats shaving microseconds almost every time.

Curious to see that follow-up on Postgres “boring power features.” There’s a lot of quiet capability there that only shows up once you start treating the database as part of the system, not just storage.

Polliog

This is exactly the kind of insight I was hoping to spark with this article. You've crystallized something I was trying to express but didn't quite articulate as clearly: the win isn't raw speed, it's transactional locality.

Your point about coordination vs. performance is spot on. In most small-to-mid systems, the real pain isn't that cache reads take 0.05ms instead of 0.04ms; it's the mental overhead and failure modes that come from managing state across two systems. The cache invalidation bugs, the "what if Redis is down" logic, the retry mechanisms, the monitoring of two separate failure domains. That cognitive load adds up.

What I've found is that once you remove that split-brain architecture, the entire system becomes more reasonable. You can actually reason about what state exists at any given moment because it's all in one transactional boundary. The ACID guarantees aren't just academic, they eliminate entire classes of bugs.

Re: the "boring power features", absolutely planning that follow-up. I'm thinking of covering things like LATERAL joins (underrated!), recursive CTEs for hierarchical data, advisory locks for distributed coordination, and some of the lesser-known JSONB operators that can replace entire microservices. Postgres has this incredible depth that only reveals itself when you stop treating it as "just a SQL database."
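As one concrete taste of those features, here is a hedged sketch of advisory locks used for cross-server coordination (the lock label is hypothetical):

```sql
-- Advisory locks take a bigint key; hashtext() maps a label to one.
-- pg_try_advisory_lock returns true only for the first session that
-- asks, so a scheduled job runs once even across many app servers.
SELECT pg_try_advisory_lock(hashtext('nightly-report')) AS got_lock;

-- ...if got_lock is true, do the work, then release the lock:
SELECT pg_advisory_unlock(hashtext('nightly-report'));
```

Session-level advisory locks are also released automatically if the connection drops, which is what makes them usable as a lightweight distributed mutex.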

Thanks for such a thoughtful response. This is exactly the kind of technical discussion I was hoping for.

PEACEBINFLOW

Appreciate this — and I’m glad you called out the cognitive load part, because that’s the silent killer nobody budgets for.

One thing I’ve seen teams underestimate is how often “Redis as a separate tier” turns into policy-by-convention. People start using it for anything that feels stateful or time-based (rate limits, queues, sessions, dedupe, feature flags), and now you’ve got two systems both acting like they’re “the truth,” just with different failure behavior. That’s where the weird bugs live.

Also +1 on “reasoning about state.” Once you move coordination into Postgres, you don’t just simplify infra — you simplify thinking. Being able to say “this write, this invalidate, this notify” all happened together (or didn’t happen at all) is an underrated kind of speed.

For the follow-up post you’re planning, I’d love to see one section that’s basically: “Postgres can do it, but here’s the fine print.” Stuff like:

LISTEN/NOTIFY delivery semantics (great for signals, not a guaranteed message bus — what patterns did you use to avoid missed events?)

queue-table bloat & vacuum/autovacuum tuning (SKIP LOCKED is elite, but the table still tells the truth over time)

advisory locks vs row locks (when you’d pick each, and how you avoid turning locks into accidental global contention)

UNLOGGED cache recovery behavior (what “perfect for cache” actually means operationally after a crash)

Because that’s the real credibility boost: not “Postgres replaces Redis,” but “here’s exactly where Postgres wins, and here’s how to avoid the foot-guns when you lean into it.”

If you drop that “boring power features” follow-up, I’ll 100% be in the comments again — especially if you get into LATERAL + JSONB + indexes combos. That’s where Postgres stops being a database and starts being a Swiss army knife.
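The UNLOGGED fine print in particular can be made concrete with a minimal sketch (the schema is illustrative): skipping the WAL is what makes these tables fast, and it is also exactly why Postgres truncates them during crash recovery, which is acceptable for a cache and fatal for anything else.

```sql
-- UNLOGGED skips write-ahead logging: fast writes, but the table is
-- truncated after a crash, so only disposable data belongs here.
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb NOT NULL,
    expires_at timestamptz NOT NULL
);

-- Upsert an entry with a TTL.
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42', '{"name": "Ada"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
    SET value = EXCLUDED.value,
        expires_at = EXCLUDED.expires_at;

-- Reads treat expired rows as misses; a periodic DELETE cleans up.
SELECT value FROM cache
WHERE key = 'user:42' AND expires_at > now();
```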

Manish Deshpande

Beautifully explained, my friend. I'm totally in line with your thoughts. Reducing inter-component coordination and limiting the transaction envelope are the key factors for a unified architecture to fall into place.

Tylus J Dawkins

Thanks, ChatGPT. Dead internet theory going strong.

Art light

This is a really solid breakdown—clear, practical, and grounded in real numbers, which I appreciate a lot. I like how you didn’t oversell Postgres but showed where it genuinely simplifies things and where Redis still wins. The transactional consistency angle is especially convincing, and it makes me rethink how often Redis is added by default. I’d be very interested to see a follow-up on more “hidden” Postgres features or edge cases where this approach starts to bend.
😎

Ben Sinclair

We just did something very similar at my work. We had a very agnostic system which used different back-ends for everything, and Redis and Postgres were working side by side. I think part of the reason was some legacy concern about pricing (this was all on Azure). Anyway, it was over-engineered and took a lot more brainpower to debug, and there were no longer any cost considerations between one and the other, so we moved everything to Postgres. Much better.

Polliog

Exactly! Reducing the 'moving parts' in a stack is underrated. It’s not just about the monthly bill; it’s about making the system easier to reason about. Moving everything to Postgres is a huge relief for maintenance and monitoring. Thanks for sharing your experience!

Harry

Thank you for this fantastic article: well written, well organized, focused, and full of solutions! I appreciate the detailed query examples. Thank you for doing the comparisons, including cost. This was a refreshing, down-to-earth piece that let me learn about what is possible, what I need to do, and how I can effectively start addressing several of the issues looming on the horizon of my one-man web service project. I appreciate the loving detail and care you put into this article. Thank you, @polliog!

I would like to see "PostgreSQL Hidden Features"

Polliog • Edited

Thank you so much for the kind words! I'm really glad the article resonated with you, especially as a fellow solo developer working on web services. Writing detailed query examples and cost comparisons was important to me because I know how frustrating it can be to read "just use X instead of Y" without any concrete guidance on how to actually make the switch. When you're managing everything yourself, you need practical, actionable information - not just theory.

I'm definitely planning to write the "PostgreSQL Hidden Features" follow-up! There's so much quiet power in Postgres that doesn't get enough attention: things like LATERAL joins, recursive CTEs, full-text search, and some really clever ways to use window functions and custom aggregates. If there are specific features you'd like me to cover, let me know!

Thanks again for taking the time to share such thoughtful feedback. It means a lot!

Harry

Thank you, @polliog! What an unexpected and welcome response. I am curious to find out about the "PostgreSQL Hidden Features" and what is possible. (FYI, I work with MySQL and have not used PostgreSQL.) Again, I really enjoyed your post! Thank you. :)

José David Ureña Torres

Great article. Those benchmarks are really helpful to understand the pros and cons of using Postgres and Redis for this use case.

Mudaser Ali

Thank you so much for writing this detailed blog!

Doug Wilson

I've been meaning to explore this. Really helpful examples. Thank you!

Sophia Devy

This post provides an insightful look into the decision to replace Redis with PostgreSQL for various common use cases like caching, pub/sub, and job queues. The author highlights the advantages of simplifying the stack by consolidating everything into PostgreSQL, which offers performance close to Redis for most tasks, but with added benefits of transactional consistency, reduced operational complexity, and significant cost savings.

By utilizing PostgreSQL features like UNLOGGED tables for caching, LISTEN/NOTIFY for pub/sub, and SKIP LOCKED for job queues, the author effectively eliminates the need for Redis in their architecture. The tradeoff is a slight increase in latency (0.1-1ms), but this is acceptable for many use cases. The migration also reduces the need for a separate Redis infrastructure, backups, and monitoring, which simplifies deployment and maintenance.

However, the post also makes it clear that this solution is not suitable for everyone. Redis remains the better option for high-performance applications, complex data structures, or cases requiring sub-millisecond latency.

Overall, this post is a great resource for anyone looking to simplify their tech stack without compromising on performance, especially for smaller teams or applications with simple caching and job queue requirements.

Meir Meir

A m a z i n g. Thanks

Menard Maranan

I'll bookmark this one

时鹏亮

This is a truly fascinating technical blog post. In this era of rampant AI, I am deeply gratified to read such a thoughtfully written article. Thank you for your contribution.

david duymelinck

The one thing where I want to add some push back is the combined operations section.
You can execute multiple Redis operations as a transaction, which means there will be less network latency.
It will be slower, but not as much as shown in the post.

Polliog

True, Redis transactions definitely help with the latency issue. But I’d argue that once you start writing complex Lua scripts or managing MULTI/EXEC blocks to keep things fast, you're adding another layer of logic to maintain outside your primary DB. For me, the 'speed' of Postgres in that scenario is as much about architectural simplicity as it is about milliseconds. Have you found that managing complex Redis transactions becomes a maintenance burden as the app scales?

david duymelinck

I find it a cheap shot to add complexity to Redis transactions to favor the Postgres solution, when the post shows no complex actions.
I could fire back with: what if the Postgres transaction becomes complex?

In each transaction use case composition is your friend to reduce complexity.
Single actions are easier to prepare than all the actions at once.

There is no need to add an extra layer of logic, just a mechanism that can switch between database types. The actions will be the same; it's the queries/commands that will be different.
An example of a mechanism is to collect all the actions, split them by database type, and add them to the appropriate transaction.

So the complexity of the transaction is not really a factor when it comes to architecture or maintenance of the code.

Polliog

You raise a valid point about transaction complexity, and I appreciate you pushing back on this. Let me clarify my position.

You're absolutely right that with proper abstraction, transaction complexity shouldn't be a deciding factor. If you have a well designed data access layer where you compose actions and route them to the appropriate database, then managing Redis + Postgres transactions isn't inherently harder than managing just Postgres transactions.

However, my argument wasn't really about code complexity; it was about architectural complexity and failure modes. Even with perfect abstraction:

(1) Failure coordination: If the Postgres transaction succeeds but the Redis operation fails (network partition, Redis restart, etc.), you now need compensating logic. With everything in Postgres, the transaction either commits or rolls back atomically.

(2) Operational overhead: Two databases means two backup strategies, two monitoring systems, two points of failure, two sets of access controls, etc. This isn't code complexity, it's operational reality.

(3) Consistency guarantees: The moment you split operations across two systems, you lose strict consistency unless you implement two-phase commit or saga patterns, which does add real complexity.
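The contrast in (1) can be sketched in a few lines of SQL (the table and channel names are hypothetical, not from the article):

```sql
BEGIN;
-- The write, the cache invalidation, and the notification either all
-- commit together or none of them happen; there is no partial failure.
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
DELETE FROM cache WHERE key = 'account:1';
NOTIFY account_changed, '1';  -- delivered to listeners only on COMMIT
COMMIT;
```

With a Redis side channel, the NOTIFY and DELETE steps would live outside the transaction and need their own retry and compensation logic.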

You're right that for simple operations, the difference is minimal. But the whole point of the article was to argue that for many use cases, that extra architectural layer isn't earning its keep.

That said, I explicitly call out in the "When to Keep Redis" section that if you have high throughput needs or use Redis specific features, the architectural split is worth it. It's about choosing the right tool for the actual requirements, not dogma.

Does that address your concern, or do you see flaws in this reasoning?

david duymelinck

1) It isn't that hard to commit two transactions, check for an error in either of them, and do a rollback in case of an error.

2) True, it creates operational overhead.

3) Same solution as 1, just for a different reason.

I'm all for simplicity, so when one database is good enough to handle the operations and data volume go for it.
There are many reasons to go for a more specific database, and the choice depends on the application needs and restrictions (budget and time are the two big ones).

It wasn't my intention to question the core of the post. It's the comments that seem to be cutting corners to prove your point.
I do think we are on the same page.

Polliog

Fair enough, I think we are on the same page fundamentally. You're right that I may have oversold the transaction coordination difficulty. With proper error handling and rollback logic, it's definitely manageable, especially for straightforward cases.

My main concern isn't that it's impossible to coordinate transactions across Redis and Postgres; it's that it's an additional thing to get right, test, and maintain. And in my experience, those "simple" dual-transaction patterns tend to accumulate edge cases over time (timeouts, partial failures, retry logic, idempotency considerations). But you're absolutely correct that with disciplined engineering, it's a solved problem.

I appreciate you calling out where I might have cut corners in the comments. The goal wasn't to create a Redis vs Postgres dogma, it was to show that for many use cases, the simpler option works fine and the complexity trade-off isn't worth it.

You nailed it with "the choice depends on the application needs and restrictions." Budget and time are huge factors that often get ignored in these architectural discussions. Sometimes the "right" technical choice is the one that ships this quarter with the team you have.

Thanks for the thoughtful pushback, these kinds of discussions make the post better.

Benjamin Houdu

Wonderful post, happy to see all new options with Postgres!

Don't forget you can actually run Lua scripts on Redis, which is the real trick to improve performance: you remove network round-trips that are often more expensive than the multiple in-memory Redis queries themselves. It also serves as a "soft" transaction mechanism.

So your benchmark is great, but to be fair, you should compare what you define in PL/pgSQL against Lua scripts, and I'm afraid Redis will really outmatch the results in any case, but you pay a lot more for that, in money and complexity.

Polliog

Excellent point. Lua scripts are definitely the way to go for high-performance Redis setups. The only thing I’d add is that even with Lua providing a 'soft' transaction, you still face the 'dual-write' problem if your primary data lives in Postgres.

The real magic of the Postgres approach isn't just the speed of the individual operation, but the fact that the cache update and the database write are wrapped in a single ACID transaction. You're right, though: if we're talking pure throughput, a Lua-optimized Redis will win the benchmark every time. It's all about where you want to spend your 'complexity budget'.

Benjamin Houdu

Agreed, even if I enjoyed using Redis for lots of queries and rankings, I would prefer not adding another database in any system if possible. Postgres just offers so much more in features, also maintainability will always be better with SQL than custom-made key-value store design.

Hammed Oyedele • Edited

The good news is that when using something like Laravel, there are already database drivers for sessions, cache, queues, etc.

In fact, those are the defaults in all starter kits now.

Davor Hrg • Edited

Nice breakdown. And especially thanks for showing what more one can do if already using postgres.

Robert Teminian

Whoa. Two thumbs up.

Varma Sagi

Sessions in the DB are only good for applications with low traffic.

Polliog

Depends on what you mean by low traffic.

Paolo Melchiorre

Interesting article. I shared it around and it got some comments:
lobste.rs/s/weiyij/

Polliog

Thank you^^

Muhammad Salman Aziz

Nice article! I am confused about one thing: how did you decide the percentage split between shared_buffers and the other parameters in the config?