Dnyaneshwar Vitthal Shekade

Posted on • Originally published at dnyan.vercel.app

How Redis Cut My Database Reads from ~26K to Almost Zero

I used to hit Supabase on every single page load—blogs, individual posts, experiences, toolboxes, services, connections, profile info, role visibility, skills… basically my entire personal dashboard depended on direct database queries.

The result?

~26,000 database reads per day.

Slow responses.

Unnecessary load.

And occasional connection warnings.

So I introduced Redis as a read-through cache with a small prewarm script—and everything changed.


What I Cached

I focused on the hottest read-heavy data:

  • Blogs → published list, per-post data, combined blog payload

  • Experiences → active + full history

  • Toolboxes → all, software, hardware

  • Services → active + all

  • Connections → full list

  • Profile info → singleton record

  • Role visibility → sidebar & quick actions

  • Skills → full list + category variants

These were perfect cache candidates because they:

  • Change infrequently

  • Are read constantly

  • Don’t require real-time consistency

How the Caching Works

1) Read-Through Cache Pattern

Each GET endpoint wraps a helper:

getCached(key, fetcher, ttl = 300)

Flow:

Request → Check Redis
        → Cache hit → return instantly
        → Cache miss → fetch from Supabase → store in Redis → return

This ensures:

  • Only the first request touches the DB

  • All other requests are served from Redis in milliseconds
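The helper above can be sketched as follows. This is a minimal, self-contained illustration, not the author's actual implementation: a `Map` with expiry timestamps stands in for a real Redis client (which would expose an ioredis-style `get`/`set` with a TTL), and the `blogs:published` key and `fetchBlogs` fetcher are hypothetical names.

```javascript
// In-memory stand-in for Redis so the sketch runs anywhere.
const store = new Map();

const redis = {
  async get(key) {
    const entry = store.get(key);
    if (!entry || entry.expiresAt < Date.now()) return null;
    return entry.value;
  },
  async set(key, value, ttlSeconds) {
    store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  },
};

// Read-through cache: serve from Redis, fall back to the fetcher on a miss.
async function getCached(key, fetcher, ttl = 300) {
  const cached = await redis.get(key);
  if (cached !== null) return JSON.parse(cached); // cache hit
  const fresh = await fetcher();                  // cache miss → hit the DB
  await redis.set(key, JSON.stringify(fresh), ttl);
  return fresh;
}

// Demo: only the first call runs the fetcher; the second is served from cache.
let dbReads = 0;
const fetchBlogs = async () => { dbReads++; return [{ slug: "hello-redis" }]; };

const a = await getCached("blogs:published", fetchBlogs);
const b = await getCached("blogs:published", fetchBlogs);
console.log(dbReads); // 1
```

Because every GET endpoint funnels through one helper, the cache policy (serialization, TTL, key naming) lives in a single place.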

2) Smart Invalidation on Writes

Whenever data changes via:

  • POST

  • PUT

  • DELETE

I call:

invalidateKeys([...])

This clears only the affected cache prefixes, keeping everything:

  • Fresh

  • Consistent

  • Fast
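Prefix invalidation can be sketched like this. Again an in-memory `Map` stands in for Redis (a real client would SCAN for matching keys and DEL them), and the key names are illustrative, not the author's actual scheme.

```javascript
// Stand-in cache with a few illustrative keys.
const cache = new Map([
  ["blogs:published", "[...]"],
  ["blogs:post:hello-redis", "{...}"],
  ["services:active", "[...]"],
]);

// Delete every cached key that starts with one of the given prefixes,
// leaving unrelated datasets untouched.
async function invalidateKeys(prefixes) {
  for (const key of [...cache.keys()]) {
    if (prefixes.some((p) => key.startsWith(p))) cache.delete(key);
  }
}

// A write to a blog post clears all blog caches but not the services cache.
await invalidateKeys(["blogs:"]);
console.log([...cache.keys()]); // only "services:active" remains
```

The next read after an invalidation is a cache miss, so the read-through helper repopulates the prefix from the database automatically.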

3) Prewarming the Cache

To avoid cold-start latency after deploys, I built:

scripts/prewarm-redis.mjs

It simply calls the public API endpoints—no DB credentials needed.

Run it like:

BASE_URL=http://localhost:3000 node scripts/prewarm-redis.mjs

Now Redis is fully populated before real users arrive.
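A prewarm script in this spirit can be as small as the sketch below. The endpoint paths are hypothetical stand-ins for the real routes, and the demo injects a stub `fetch` so the sketch runs without a live server; the real script would use Node's global `fetch` against `BASE_URL`.

```javascript
// Illustrative list of public GET endpoints whose read-through caches
// we want to populate. Paths are assumptions, not the real routes.
const ENDPOINTS = [
  "/api/blogs",
  "/api/experiences",
  "/api/toolboxes",
  "/api/services",
  "/api/profile",
];

// Hit every endpoint once; each request is a cache miss that fills Redis.
async function prewarm(baseUrl, fetchImpl = fetch) {
  return Promise.all(
    ENDPOINTS.map(async (path) => {
      const res = await fetchImpl(baseUrl + path);
      return { path, ok: res.ok };
    })
  );
}

// Demo with a stub fetch so no server is required.
const stubFetch = async (url) => ({ ok: true, url });
const warmed = await prewarm(process.env.BASE_URL ?? "http://localhost:3000", stubFetch);
console.log(warmed.length); // 5
```

Because the script only speaks HTTP to public endpoints, it needs no database credentials and can run from CI right after a deploy.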


4) Visibility & Health Monitoring

I added a Data tab UI showing:

  • Redis health status

  • Total cached items

  • Cached datasets overview

So if Redis goes down…

I know immediately.
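The data behind such a tab can be gathered with a tiny health check, sketched here against an in-memory stand-in. `ping()` and `dbsize()` mirror the ioredis method names for Redis PING and DBSIZE; the actual dashboard code is not shown in this post.

```javascript
// Stand-in "Redis" exposing just the two calls the health check needs.
const keys = new Map([["blogs:published", "[...]"], ["profile:info", "{...}"]]);
const redis = {
  async ping() { return "PONG"; },   // Redis PING replies PONG when alive
  async dbsize() { return keys.size; }, // Redis DBSIZE counts stored keys
};

// Liveness + total cached items, degrading gracefully if Redis is down.
async function cacheHealth() {
  try {
    const pong = await redis.ping();
    return { healthy: pong === "PONG", totalItems: await redis.dbsize() };
  } catch {
    return { healthy: false, totalItems: 0 };
  }
}

const health = await cacheHealth();
console.log(health); // { healthy: true, totalItems: 2 }
```

Surfacing this in the UI turns a silent cache outage (which would quietly push every read back to the database) into something visible at a glance.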


The Results 📉

Before Redis

  • ~26K DB reads/day

  • Higher latency

  • Risk of hitting connection limits

After Redis

  • Cold start: one DB hit per dataset

  • Warm traffic: almost zero DB reads

  • Latency: single-digit milliseconds

  • Stability: no connection pressure

In short:

The database became the fallback. Redis became the primary read layer.


What Made the Biggest Difference

1) Cache the hottest reads

Lists and singleton data deliver massive ROI when cached.

2) Keep TTL modest

I used 5 minutes to balance:

  • Freshness

  • Performance

3) Always prewarm on deploy

Prewarming removes the cold-start penalty completely.

4) Monitor cache health

Visibility prevents silent performance regressions.

How You Can Try This

1) Configure Redis

REDIS_URL=...
(or host/port/user/password)

2) Start your app

3) Run the prewarm script

4) Open dashboard/blog pages

Then watch:

  • DB metrics drop

  • Redis hit rate rise

  • Latency shrink

Final Thoughts

Adding Redis wasn’t just a performance optimization; it fundamentally changed how my app handles reads at scale.

From:

“Query the database every time”

to:

“Serve instantly from memory, and only hit DB when necessary.”

That single shift reduced 26K daily reads to nearly zero.

And the best part?

It took less than a day to implement.

