Arman Phaugat
Using Redis to Optimize Backend Queries

How I Optimized My /leaderboard API from 200ms to 20ms Using Redis

Performance issues don’t usually scream at you.

They whisper.

My /leaderboard endpoint was one of those whispers.

It worked.
It returned the top 10 players correctly.
But under load… it felt slow.

Around 200ms per request.

Not terrible.

But not great either.

And as someone building backend systems, I knew this would eventually become a bottleneck.

So I decided to fix it.

The Original Approach (The Comfortable One)

The endpoint logic was simple:

Query database

Sort users by score

Return top 10

Classic.

SELECT * FROM users ORDER BY score DESC LIMIT 10;

With proper indexing, it worked fine at small scale.
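For reference, "proper indexing" here means an index on `score` so the database can read the top rows straight off the index instead of sorting the whole table. A minimal sketch (index name is my choice; descending indexes work in Postgres and MySQL 8+):

```sql
-- Hypothetical index: lets ORDER BY score DESC LIMIT 10
-- walk the index instead of sorting all rows.
CREATE INDEX idx_users_score ON users (score DESC);
```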

But leaderboards are:

Frequently accessed

Frequently updated

Competitive, real-time data

And hitting the database every single time didn’t feel right.

I needed something faster.

First Attempt: Let’s Use Redis

Redis felt like the perfect fit.

In-memory.
Fast.
Built for ranking.

But here’s the twist.

Redis wasn’t even working.

When I tried starting it locally, I got:

Error: Address already in use

Port 6379 was already occupied.

I tried restarting services.
Killing processes.
Nothing worked cleanly.

Instead of wasting more time debugging system-level conflicts, I decided to isolate it properly.

The Fix: Dockerizing Redis

I ran:

docker run -d -p 6379:6379 --name redis-server redis

And that solved everything instantly.
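If you'd rather not manage the container by hand, the same setup fits in a compose file (a sketch; the service name is my choice):

```yaml
# docker-compose.yml -- one-service sketch for a local Redis
services:
  redis:
    image: redis
    container_name: redis-server
    ports:
      - "6379:6379"
    restart: unless-stopped
```

Then `docker compose up -d` brings it up, and the container survives reboots.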

Now Redis was:

Isolated

Portable

Cleanly running

Easy to restart

Sometimes the real optimization is fixing your environment first.

Now I could finally move forward.

Enter Sorted Sets (ZSET)

Redis has a powerful data structure called Sorted Sets (ZSET).

Each entry has:

A member (user ID)

A score (points)

Redis automatically keeps them sorted by score.

That meant I no longer needed:

SQL sorting

ORDER BY queries

Heavy DB reads

When a user’s score updates:

await redis.zadd("leaderboard", score, userId);

To fetch top 10:

await redis.zrevrange("leaderboard", 0, 9, "WITHSCORES");
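To make the ordering guarantee concrete without a running server, here's a tiny in-memory model of the two commands above. This is a sketch of the semantics, not Redis itself:

```javascript
// Minimal in-memory model of a Redis sorted set, for illustration only.
// zadd stores or updates a member's score; zrevrangeWithScores returns
// entries ordered by score, highest first -- the same ordering ZREVRANGE gives.
class SortedSet {
  constructor() {
    this.scores = new Map(); // member -> score
  }
  zadd(score, member) {
    this.scores.set(member, score);
  }
  zrevrangeWithScores(start, stop) {
    return [...this.scores.entries()]
      .sort((a, b) => b[1] - a[1]) // descending by score
      .slice(start, stop + 1);     // stop is inclusive, as in Redis
  }
}

const board = new SortedSet();
board.zadd(1500, "alice");
board.zadd(2100, "bob");
board.zadd(1800, "carol");

console.log(board.zrevrangeWithScores(0, 1));
// -> [ [ 'bob', 2100 ], [ 'carol', 1800 ] ]
```

The real Redis structure does this with a skip list, so updates and range reads stay fast even with millions of members.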

Now the ranking logic lived entirely in memory.

Latency improved immediately.

But something still felt off.

The Hidden Bottleneck I Didn’t See Coming

After fetching the top 10 user IDs, I needed user details:

username

avatar

level

metadata

So I wrote:

for (const userId of topUsers) {
  await redis.hgetall(`user:${userId}`);
}

It worked.

But performance wasn’t where I expected it to be.

Then it hit me.

I had recreated the N+1 problem.

Not in the database.

In Redis.

Here’s what was happening:

1 request → fetch leaderboard

10 requests → fetch each user

That's 11 network round trips.
Even if Redis responds in microseconds, network latency doesn’t.

That’s where my missing 100ms was hiding.
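The arithmetic checks out. With illustrative numbers (the ~10ms per round trip is my assumption, not a measurement):

```javascript
// Back-of-the-envelope math with assumed numbers, not measurements:
// even a fast store gets slow when every command pays a network round trip.
const roundTripMs = 10;           // assumed per-call network latency
const calls = 1 + 10;             // 1 ZREVRANGE + 10 HGETALL round trips
console.log(calls * roundTripMs); // 110 -- roughly the missing ~100ms
```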

The Real Fix: Redis Pipelining

Redis supports pipelining.

Instead of:

request → response
request → response
request → response

You send all commands together and receive all responses together.

So I rewrote it:

const pipeline = redis.pipeline();

for (const userId of topUsers) {
  pipeline.hgetall(`user:${userId}`);
}

const users = await pipeline.exec();

Now:

Only 1 network round trip

All commands executed in batch

No more N+1 latency

This was the real breakthrough.
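To see the difference in round-trip counts without a live Redis, here's a toy client that just counts trips. The API shape loosely mirrors the pipeline interface above, but it's a sketch, not a real client:

```javascript
// Toy client that counts network round trips, to show what pipelining changes.
class FakeRedis {
  constructor() {
    this.roundTrips = 0;
    this.store = new Map();
  }
  async hgetall(key) {
    this.roundTrips += 1;             // one command, one round trip
    return this.store.get(key) ?? {};
  }
  pipeline() {
    const queued = [];
    return {
      hgetall: (key) => queued.push(key),
      exec: async () => {
        this.roundTrips += 1;         // whole batch, one round trip
        return queued.map((key) => this.store.get(key) ?? {});
      },
    };
  }
}

async function main() {
  const redis = new FakeRedis();
  const topUsers = ["u1", "u2", "u3"];

  // N+1 style: one round trip per user.
  for (const userId of topUsers) {
    await redis.hgetall(`user:${userId}`);
  }
  console.log(redis.roundTrips); // 3

  // Pipelined: all commands in one round trip.
  const pipe = redis.pipeline();
  for (const userId of topUsers) {
    pipe.hgetall(`user:${userId}`);
  }
  await pipe.exec();
  console.log(redis.roundTrips); // 4 (only 1 more trip for 3 commands)
}

main();
```

With 10 users the gap widens from 10 trips to 1, which is exactly where the latency went.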

The Results

Here’s how it evolved:

| Stage | Latency |
| --- | --- |
| DB sorting | ~200ms |
| Redis (no pipeline) | ~120ms |
| Redis + pipeline | ~20ms |

A full 10x improvement.

And interestingly, the biggest win wasn’t just using Redis.

It was reducing network round trips.

What This Taught Me

1. Infrastructure problems come first

   If Redis isn’t running cleanly, nothing else matters.

2. Data structures matter

   ZSET eliminated repeated sorting completely.

3. N+1 problems aren’t just database issues

   They can happen in any remote system.

4. Network latency is invisible, but expensive

   Even “fast” systems become slow when you talk to them too many times.

5. Docker simplifies backend life

   Instead of fighting your OS, containerize dependencies.

Final Architecture

Now the /leaderboard flow looks like this:

Score update → ZADD

Fetch top 10 → ZREVRANGE

Batch fetch user data → pipeline + exec

Return response

No DB hit.
Fully in-memory.
Minimal network calls.
~20ms response time.

Clean.

Closing Thought

Optimization isn’t about throwing tools at a problem.

It’s about identifying where time is actually being spent.

For me, it wasn’t just the database.

It was:

The environment

The data structure

And the network

And fixing those made all the difference.
