I replaced Redis with PostgreSQL and it was faster (and yes, I was surprised too)

Redis didn’t get worse. I just finally paid attention to what was actually happening.

There’s a specific kind of discomfort you feel as a developer when reality gently taps you on the shoulder and says, hey… you sure about that?
Not a crash. Not an outage. Just a graph that refuses to confirm your beliefs.

This started as one of those moments.

Redis is supposed to be fast. That’s not even debatable; it’s backend law. You add Redis when Postgres can’t keep up. You cache to go faster. You reduce load. You scale responsibly. This stuff is so ingrained that questioning it feels like asking whether indexes are overrated.

And yet, there it was: latency that wouldn’t budge. Requests that felt heavier than they should. Nothing on fire. Nothing obviously broken. Just a quiet mismatch between what should be true and what the system was actually doing.

So I did what most of us do first: I assumed I messed up. Misconfigured Redis. Bad TTLs. Serialization overhead. Some rookie mistake hiding in plain sight. I retraced everything. Then I traced it again. And eventually I hit a thought that felt borderline illegal:

What if Redis… isn’t helping here?

That thought is usually followed by denial. Then a bit of fear. Then curiosity wins. I didn’t rip anything out immediately. I measured. I timed calls. I traced request paths end to end. I looked at Postgres not as “the slow database,” but as the thing already sitting in the critical path doing more work than it gets credit for.

And the result was uncomfortable in the best way.

TL;DR: I removed Redis from a real production path, leaned into modern PostgreSQL features, and the system got faster. Not hypothetically. Not “in benchmarks.” Faster in the only way that matters: real requests, real traffic, real latency. With fewer moving parts and less code to maintain.

This isn’t a Redis hit piece. Redis is great. That’s almost the problem. It’s so good that we stopped asking whether we actually needed it here.

Why Redis was the default belief (and why nobody questioned it)

Redis didn’t enter the system because something was on fire. It showed up earlier than that, at the planning stage, when everything still felt clean and optimistic and future-proof.

Performance came up, as it always does. Not because the app was slow, but because one day it might be slow. And when developers talk about the future, we don’t reach for measurements. We reach for patterns.

“Let’s cache it.”

That sentence carries an entire worldview. It implies responsibility. Experience. Scar tissue. Someone has been burned before, and Redis is how we promise ourselves it won’t happen again.

So Redis got added. Not after a load test. Not after a postmortem. Just as part of the mental starter pack: app server, database, cache. Like adding sunscreen before you even see the sun.

And to be fair, it worked. Things were fast. Pages loaded instantly. Metrics looked healthy. Redis did its job quietly, efficiently, without complaint. Which is exactly why it became invisible. No alerts. No incidents. No reason to look closer.

That’s how tooling turns into dogma.

Over time, the cache logic grew. Invalidation rules became… interpretive. Edge cases appeared. New developers asked why certain things were cached and got the answer every team eventually gives: “performance.” Not which performance. Not measured performance. Just performance in the abstract.

The uncomfortable truth is this: Redis wasn’t added to fix a problem. It was added to prevent an imagined one. And once a tool exists to guard against a hypothetical future, removing it feels reckless even if the present no longer needs it.

No one asked the simplest question. The one that feels almost rude to ask in a mature codebase:

Is this actually making things faster right now?

Because Redis wasn’t wrong. It was obvious. And obvious tools rarely get challenged until the graphs force you to.

The problem nobody wants to admit: network hops

Here’s the part that took me longer than it should have: Redis wasn’t slow.

On its own, Redis was doing exactly what it’s famous for: returning values absurdly fast. Sub-millisecond. Blink-and-you-miss-it fast. If you benchmark Redis in isolation, it wins every time. End of argument.

But production systems don’t live in isolation. They live on networks. And networks add tax.

Every “fast” cached request still looked like this:

App → network → Redis → network → app

Two extra trips. Plus serialization. Plus connection handling. None of these are huge individually, but they stack. Quietly. Reliably. On every request.

Postgres, meanwhile, was already in the critical path. The connection was warm. The query was indexed. One hop. One round trip. Fewer chances for things to wobble.
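To make the hop math concrete, here’s a minimal sketch of the two read paths, assuming redis-py and psycopg 3 with a hypothetical users table (the names are illustrative, not my production code):

```python
import json

import psycopg  # psycopg 3
import redis

r = redis.Redis(host="cache-host", port=6379)
pg = psycopg.connect("dbname=app user=app")  # in practice, a warm pooled connection

def fetch_user(user_id: int) -> dict:
    # Direct path: one hop, an indexed primary-key lookup on a warm connection.
    with pg.cursor() as cur:
        cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
        uid, name, email = cur.fetchone()
        return {"id": uid, "name": name, "email": email}

def get_user_cached(user_id: int) -> dict:
    # Cache-aside path: extra network trips plus (de)serialization,
    # paid on every single request, even on a hit.
    key = f"user:{user_id}"
    raw = r.get(key)                    # trip out to Redis and back
    if raw is not None:
        return json.loads(raw)          # deserialize the hit
    row = fetch_user(user_id)           # miss: we hit Postgres anyway
    r.setex(key, 300, json.dumps(row))  # write back with a 5-minute TTL
    return row
```

On a hit, the cached path still pays the Redis round trip plus a JSON decode. On a miss, it pays both paths and a write-back. None of that shows up when you benchmark Redis alone.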

That’s when it clicked: the “slower” database sometimes wins because it’s the shorter walk.

Once I timed the full round-trip instead of just trusting component speed, the graphs made sense. Redis wasn’t broken. It was just adding a cost I didn’t need to pay.
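The measurement itself doesn’t need fancy tooling. A rough sketch of what “timing the full round-trip” means here, with the callables standing in for whichever two paths you’re comparing:

```python
import statistics
import time

def time_path(fn, n: int = 1000) -> dict:
    # Time n end-to-end calls. Component benchmarks hide the per-request
    # network tax, so measure the whole path and look at the tail.
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds
    samples.sort()
    return {"p50_ms": statistics.median(samples), "p99_ms": samples[int(n * 0.99) - 1]}

# Hypothetical usage, comparing the two paths from the sketch above:
# print(time_path(lambda: get_user_cached(42)))
# print(time_path(lambda: fetch_user(42)))
```

The p99 is where stacked hops tend to show up first, long before the median moves.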

The dangerous realization wasn’t that Redis was slow.
It was that I never measured whether it was actually helping.

Redis is fast, but speed isn’t the same as system performance

This is where I had to unlearn a shortcut in my own thinking.

Redis is fast. That part isn’t a myth. It lives in memory. It’s brutally efficient at what it does. The mistake is assuming that a fast component automatically makes the whole system faster.

It doesn’t.

What matters in real systems isn’t how quickly one box responds; it’s how much work happens between boxes. Every handoff costs something. Serialization. Deserialization. Queueing. CPU churn. Tiny delays that don’t show up in happy-path benchmarks but absolutely show up in tail latency.

Once I stopped asking “which tool is faster?” and started asking “which path is shorter?”, the picture changed.

Postgres wasn’t winning because it suddenly became magical. It was winning because it removed steps. Fewer hops. Fewer translations. Less glue code. Less opportunity for variance.

That’s the quiet trap with “fast” tech. We measure raw speed and forget to measure friction. Redis didn’t slow anything down on purpose; it just introduced more surface area than the workload actually needed.

And that realization sets up the next uncomfortable thought:

If Postgres is already here… and already fast enough… what else have we been underestimating?

Postgres quietly leveled up while we weren’t looking

Part of the reason this surprised me is that my mental model of Postgres was outdated.

In my head, Postgres was still the reliable but boring database. Solid. Predictable. Not the thing you reach for when performance conversations start getting spicy. That reputation sticks, even when the product moves on.

But modern Postgres is not the Postgres a lot of us learned years ago.

JSONB changed how data can be modeled. Partial and covering indexes made common access patterns stupidly fast. Query planners got smarter. Extensions filled in gaps we used to solve with extra services. All while Postgres stayed… quiet. No hype cycles. No “killer feature” blog posts every week.
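Two of those features did most of the heavy lifting for me: partial and covering indexes. Here’s a sketch of what they look like, with made-up schema names, plus the EXPLAIN check that tells you Postgres can answer the query from the index alone:

```python
import psycopg  # psycopg 3

with psycopg.connect("dbname=app user=app") as conn:
    # Partial index: only index the rows the hot path actually reads.
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_orders_open "
        "ON orders (customer_id) WHERE status = 'open'"
    )
    # Covering index: INCLUDE the selected columns so the lookup can be
    # answered by an index-only scan, with no heap fetch at all.
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_users_email "
        "ON users (email) INCLUDE (id, name)"
    )
    # Sanity-check the plan: you want to see "Index Only Scan" here.
    for (line,) in conn.execute(
        "EXPLAIN SELECT id, name FROM users WHERE email = 'a@example.com'"
    ):
        print(line)
```

When the plan says Index Only Scan, the read is often already close to cache-hit cheap, minus the extra hop, which is exactly the case where a cache stops paying for itself.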

Once I actually looked at the queries, most of them weren’t expensive at all. With proper indexes, they were already fast enough that caching them didn’t meaningfully help; it just added indirection.

Postgres didn’t beat Redis by being faster in isolation. It beat it by being good enough in the exact place the system already depended on.

And that’s the part we miss when we default to extra tooling: sometimes the boring tool has been getting better the whole time.

When Redis still makes sense (and when it really doesn’t)

This isn’t the part where I pretend Redis is suddenly useless. It’s not. Redis is still great at the things it was built for, and trying to replace it everywhere with Postgres would be just as lazy as adding it everywhere by default.

Redis shines when you need speed and volatility. Pub/sub. Ephemeral state. Distributed rate limiting. High-churn counters. Situations where losing the data is acceptable and coordination matters more than structure. That’s Redis territory, no question.
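For contrast, here’s the kind of job I’d still hand to Redis without a second thought: a fixed-window rate limiter. This is a sketch with redis-py; the key scheme and limits are illustrative:

```python
import redis

r = redis.Redis(host="cache-host", port=6379)

def allow_request(client_id: str, limit: int = 100, window_s: int = 60) -> bool:
    # A high-churn counter where losing state on restart is acceptable:
    # exactly the volatility-plus-coordination shape Redis was built for.
    key = f"ratelimit:{client_id}"
    count = r.incr(key)          # atomic increment, shared across app servers
    if count == 1:
        r.expire(key, window_s)  # first hit in the window starts the clock
    return count <= limit
```

You could fake this in Postgres with a table and a transaction, but you’d be fighting the tool. That’s the difference: volatile shared state versus cached reads of durable data.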

Where it starts to feel unnecessary is when it’s just sitting in front of a database that’s already doing fine. If your queries are indexed, predictable, and cheap, adding Redis doesn’t remove work; it just moves it and adds another place for things to go weird.

This is where teams get tripped up. Redis feels like a safety blanket. Removing it feels risky, even when the risk is imaginary. So it stays, not because it’s required, but because no one wants to be the person who takes it out.

The healthier question isn’t “should we use Redis?”
It’s “what problem is Redis solving right now?”

If the answer is vague, abstract, or future-tense, that’s your signal. Redis is a tool, not a virtue. And like any tool, it works best when it’s chosen deliberately, not reflexively.

Unlearning the reflex

The hardest part of this whole thing wasn’t the refactor. It was letting go of a belief I’d been carrying around for years.

Caching had become muscle memory. Slow equals add Redis. Scale equals add Redis. Responsible backend equals add Redis. I’d internalized that pattern so deeply that I stopped seeing it as a choice at all.

But this experience forced me to slow down and ask a better question: what is the system actually doing right now? Not what it might do someday. Not what a blog post warned me about. Just what the traffic, the queries, and the graphs were saying.

And they were saying something boring but powerful: fewer moving parts usually win.

Postgres didn’t replace Redis because it’s “better.” It replaced it because it was already there, already fast enough, and already understood. One less service to deploy. One less thing to monitor. One less layer to explain to the next person who opens the codebase.

That doesn’t make Redis bad. It makes assumptions dangerous.

The takeaway for me wasn’t “use Postgres for everything.” It was “measure before you believe.” Because most of the time, we don’t suffer from slow tools; we suffer from inherited decisions we never stopped to question.

And honestly? That lesson is way bigger than Redis.

Conclusion: fewer tools, fewer lies

This whole thing started because I trusted a pattern more than I trusted the data. And that’s a mistake most of us make, especially once we’ve been burned a few times and start building defenses before the threat actually shows up.

Redis didn’t fail me. My assumptions did.

What replacing Redis with Postgres really gave me wasn’t just lower latency; it gave me clarity. Fewer moving parts. Fewer places for behavior to hide. A system that was easier to reason about when something felt off. That kind of simplicity compounds over time in ways raw performance never does.

The uncomfortable truth is that modern backend stacks are bloated not because developers are careless, but because we inherit best practices that slowly turn into unquestioned rules. We keep adding tools to protect ourselves from hypothetical futures, and then wonder why everything feels fragile.

This isn’t an argument for minimalism at all costs. It’s an argument for intent. Use Redis when it earns its place. Use Postgres when it’s already good enough. Measure first. Remove fear-driven architecture whenever you can.

If there’s a takeaway here, it’s this: the fastest system is often the one that lies to you the least.

And if this made you uncomfortable, good. That usually means it’s worth discussing.
