- Connect: https://xam-heisenberg-company.vercel.app/
- GitHub: https://github.com/Subham-Maity
- Twitter: https://twitter.com/TheSubhamMaity
- LinkedIn: https://www.linkedin.com/in/subham-xam
- Insta: https://www.instagram.com/subham_xam
Building a Real-Time Delivery Tracking System with Socket.IO, Redis Pub/Sub, the Redis Streams Adapter, and Kafka
I recently worked on an exciting project for a client: a delivery app similar to Zomato, where users can track their driver's location live on a map.
The complete application used Flutter for the frontend with both NestJS and Golang powering different versions of the backend.
While I developed two separate implementations, this article focuses purely on the core tracking logic that's completely language-independent.
If you're curious about the actual code, everything is available on GitHub: https://github.com/Subham-Maity/RTLS-Scale.
But don't worry about the specific programming languages. I've designed this guide to be accessible to anyone interested in understanding the fundamental architecture of real-time location tracking systems.
Let me walk you through how I built this prototype, how it works, and how to scale it for real-world applications.
Important disclaimer: this is not production-ready code, as a full commercial implementation would require additional business logic, security considerations, battery optimization, and many other factors I won't cover here. I'm also not addressing driver matching algorithms or distance calculations; this article focuses exclusively on the real-time tracking system architecture.
Along the way, I'll share insights from my experience, including practical advice on backend-frontend communication and what I learned about building reliable real-time systems. By the end, you'll understand exactly how that little moving dot on your food delivery app actually works behind the scenes!

How the Prototype Works
Imagine this: you open the prototype in a browser, and there are two buttons, Enter as User or Enter as Driver.
Pretty straightforward, right?
If you pick Driver, the app starts sending your location (latitude and longitude) to the server every few seconds.
If you pick User, you see the driver's location updating live on a map.
To test it, I opened the driver page on my phone and the user page on my laptop. I walked around a bit with my phone, and on my laptop I could see my position moving on the map in real time. It felt satisfying, like "haan, this is working!" But this was just a prototype. In a real app, you'd need proper authentication, middleware, and all that stuff. Here, my focus was on the core logic: how to send the driver's location to the user continuously, without any hiccups.
Server 1: The Basic WebSocket Setup
Let's start with the simplest way I did this, using WebSockets. The code for this is in the repo.
Here's how it works, step-by-step:
- Driver Sends Location: The driver's app connects to the server using WebSockets and sends a `send-location` event with their latitude and longitude every few seconds. Think of it like the driver saying, "Hey server, here's where I am right now!"
- Server Broadcasts It: The server listens for this event and sends the location to all connected clients (like the user's app) using a `receive-location` event. It's like the server shouting, "Everyone, here's the driver's new position!"
- User Updates Map: The user's app listens for `receive-location` events and moves the driver's dot on the map. Simple and quick.
For a small setup, this works like a charm. But then I started thinking: what if there are hundreds or thousands of drivers? Will this still hold up?
Is This Scalable? Not Quite
Here's where I hit a wall:
- Too Many Connections: Every WebSocket connection uses server resources (CPU, memory, etc.). With thousands of drivers and users, one server can't handle it alone. It'll slow down or crash.
- Wasting Data: The server sends every driver's update to all users. So if there are 100 drivers, each user gets 100 updates every few seconds, even though they only care about their own driver. That's a lot of useless data clogging the system.
- Adding More Servers: If I add more servers to share the load, how do I make sure the right updates reach the right users? Without some clever trick, it's a headache. Assuming you're a clever programmer, feel free to drop any tricky solutions in the comments!
Verdict: This is fine for a prototype or a small app with fewer than 100 drivers. But for a big delivery app? No chance; it'll break.
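To put rough numbers on the wasted-data problem, here's a back-of-the-envelope cost model. The function names and the per-tick framing are my own illustration, not code from the repo:

```typescript
// Rough cost model of the naive broadcast approach (illustration only).
// Every driver update is emitted to every connected socket.
function broadcastMessagesPerTick(drivers: number, users: number): number {
  // Each of the `drivers` updates goes to all `drivers + users` sockets.
  return drivers * (drivers + users);
}

// With per-driver rooms, an update only reaches that driver's watchers.
function roomMessagesPerTick(drivers: number, watchersPerDriver: number): number {
  return drivers * watchersPerDriver;
}

console.log(broadcastMessagesPerTick(100, 1000)); // 110000 messages per tick
console.log(roomMessagesPerTick(100, 1));         // 100 messages per tick
```

With 100 drivers and 1,000 users, the naive broadcast pushes 110,000 messages per tick versus 100 with targeted rooms; that gap is exactly what the next two setups close.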
Server 2: Adding Redis Pub/Sub
So, I needed a better way. That's when I brought in Redis Pub/Sub. Redis is a super-fast in-memory store, and its publish/subscribe system is perfect for scaling real-time stuff. Check the code in `2. server (socket + redis pub-sub)/src/websockets/location.gateway.ts`. Here's how I made it work, step-by-step:
- Driver Publishes Location: When the driver sends a `send-location` event, the server doesn't broadcast it directly. Instead, it publishes the location to a Redis channel called `location-updates`. Here's the code:
```typescript
@SubscribeMessage('send-location')
handleLocation(client: Socket, data: { latitude: number; longitude: number }) {
  const locationData = {
    id: client.id,
    latitude: data.latitude,
    longitude: data.longitude,
  };
  this.pubSubService.publish('location-updates', JSON.stringify(locationData));
}
```
- Server Subscribes and Targets Updates: The server subscribes to the `location-updates` channel and sends the update only to specific users, using WebSocket rooms. Each driver has a room (named after their ID), and users join that room to track them. Here's how it's set up in the constructor:
```typescript
constructor(private pubSubService: PubSubService) {
  this.pubSubService.subscribe('location-updates', (message) => {
    const locationData = JSON.parse(message);
    this.server.to(locationData.id).emit('receive-location', locationData);
  });
}
```
And when a user wants to track a driver:
```typescript
@SubscribeMessage('track-driver')
handleTrackDriver(client: Socket, driverId: string) {
  client.join(driverId);
}
```
- Scaling with Multiple Servers: Redis makes this easy. Multiple NestJS servers can subscribe to the same `location-updates` channel. When a driver's location is published, all servers get it and send it to the right room. No mess, no fuss.
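If the PubSubService internals feel abstract, the flow can be sketched with a dependency-free, in-memory stand-in for Redis Pub/Sub. In the real code, `publish` and `subscribe` would go through a Redis client instead of a local Map:

```typescript
// In-memory stand-in for Redis Pub/Sub, just to show the message flow.
type Handler = (message: string) => void;

class InMemoryPubSub {
  private channels = new Map<string, Handler[]>();

  subscribe(channel: string, handler: Handler): void {
    const handlers = this.channels.get(channel) ?? [];
    handlers.push(handler);
    this.channels.set(channel, handlers);
  }

  publish(channel: string, message: string): void {
    for (const handler of this.channels.get(channel) ?? []) {
      handler(message);
    }
  }
}

// Two "server instances" subscribe to the same channel; each would then
// forward the update only to the room named after the driver's id.
const bus = new InMemoryPubSub();
const delivered: string[] = [];
bus.subscribe('location-updates', (msg) => delivered.push(`server-1:${msg}`));
bus.subscribe('location-updates', (msg) => delivered.push(`server-2:${msg}`));

bus.publish(
  'location-updates',
  JSON.stringify({ id: 'driver-42', latitude: 22.57, longitude: 88.36 }),
);
console.log(delivered.length); // 2 -- both instances received the update
```

The key property: every subscribed server sees every published update, and each server only emits to the rooms its own connected clients have joined.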
Why This Is Better
- Targeted Updates: Only users tracking a specific driver get their updates. No more flooding everyone with data they don't need.
- Horizontal Scaling: Add more servers, and Redis handles the coordination. Each server manages its own clients, and the load gets shared.
This is a big step up from the basic setup. But I found something even better, so keep reading!
Server 3: Redis Streams Adapter for the Win
While Redis Pub/Sub was good, I stumbled upon the Redis Streams Adapter for Socket.IO, and it's like Pub/Sub's big brother: more powerful and reliable. The code for this is in:
- `3. server (socket + redis streams adapter)/src/redis/redis.module.ts`
- `3. server (socket + redis streams adapter)/src/redis/redis-io-adapter.ts`
- `3. server (socket + redis streams adapter)/src/websockets/location.gateway.ts`
Hereâs how it works, step-by-step:
- Set Up the Adapter: I created a `RedisIoAdapter` in `3. server (socket + redis streams adapter)/src/redis/redis-io-adapter.ts` to use Redis Streams with Socket.IO:
```typescript
import { INestApplication } from '@nestjs/common';
import { IoAdapter } from '@nestjs/platform-socket.io';
import { ServerOptions } from 'socket.io';
import { createAdapter } from '@socket.io/redis-streams-adapter';
import Redis from 'ioredis';

export class RedisIoAdapter extends IoAdapter {
  private redisClient: Redis;

  constructor(app: INestApplication, redisClient: Redis) {
    super(app);
    this.redisClient = redisClient;
  }

  createIOServer(port: number, options?: ServerOptions): any {
    const server = super.createIOServer(port, options);
    server.adapter(createAdapter(this.redisClient));
    return server;
  }
}
```
- Driver Sends Location: Same as before: the driver sends a `send-location` event, and the server emits it to their room:
```typescript
@SubscribeMessage('send-location')
handleLocation(client: Socket, data: { latitude: number; longitude: number }) {
  const locationData = {
    id: client.id,
    latitude: data.latitude,
    longitude: data.longitude,
  };
  this.server.to(client.id).emit('receive-location', locationData);
}
```
- Users Track Drivers: Users join the driver's room with a `track-driver` event:
```typescript
@SubscribeMessage('track-driver')
handleTrackDriver(client: Socket, driverId: string) {
  client.join(driverId);
}
```
- Magic of Streams: The Redis Streams Adapter handles everything else. It distributes updates across all server instances, ensures no messages are lost, and keeps rooms working seamlessly.
Why This Beats Pub/Sub
Hereâs a quick comparison:
| Feature | Redis Pub/Sub | Redis Streams Adapter |
|---|---|---|
| Reliability | If a server is down, it misses updates. | Stores messages, so servers catch up later. |
| Scalability | Good for medium loads, but struggles with huge volumes. | Uses consumer groups for big scale. |
| Message Order | Order isn't always guaranteed. | Strict order, great for tracking. |
| Ease of Use | You manage pub/sub yourself. | Socket.IO does it all with less code! |
- Reliability: If a server crashes with Pub/Sub, it misses updates. With Streams, messages are saved, so nothing gets lost.
- Scalability: Streams can handle way more drivers and users with consumer groups splitting the work.
- Simplicity: No need to write pub/sub logic; Socket.IO handles it behind the scenes.
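The reliability difference boils down to pub/sub being fire-and-forget while a stream is an append-only log. Here's a tiny in-memory sketch of the catch-up behavior (a simplification: real Redis Streams entries have IDs like `1526919030474-0`, not array indices):

```typescript
// Dependency-free sketch of why a stream beats fire-and-forget pub/sub:
// messages are appended to a log, so a consumer that was down can catch up.
class InMemoryStream {
  private log: string[] = [];

  add(message: string): number {
    this.log.push(message);
    return this.log.length - 1; // entry id, simplified to an array index
  }

  // Read everything after `lastSeenId`, like XREAD with a last-delivered id.
  readAfter(lastSeenId: number): string[] {
    return this.log.slice(lastSeenId + 1);
  }
}

const stream = new InMemoryStream();
stream.add('update-1'); // consumer processes this one...
stream.add('update-2'); // ...then the consumer goes down
stream.add('update-3');

// On restart, the consumer resumes from the last id it processed (0):
const missed = stream.readAfter(0);
console.log(missed); // ['update-2', 'update-3'] -- nothing lost
```

With plain Pub/Sub, `update-2` and `update-3` would simply have vanished while the server was down; with a stream, they're still in the log waiting to be read.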
This is perfect for a large app with lots of users. But what about massive scale? That's where Kafka comes in.
Future-Proofing with Kafka
Now, imagine your app grows huge: thousands of drivers, millions of users, and you want to do fancy things like analytics or logging alongside tracking. That's when Kafka enters the picture. It's a distributed streaming platform built for handling tons of real-time data.
Here's the basic plan:
- Driver sends location via WebSockets (a `send-location` event).
- Server pushes it to a Kafka topic, like `driver-locations`.
- A consumer service reads from the topic and sends updates to users via WebSockets.
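The key Kafka idea for this use case is to key messages by driver ID, so one driver's updates always land on the same partition, in order. Here's a dependency-free toy model of that; a real setup would use a client such as kafkajs, and Kafka's own murmur2 key hashing rather than the simple hash below:

```typescript
// Toy model of a Kafka topic partitioned by driver id (illustration only).
function partitionFor(driverId: string, partitionCount: number): number {
  // Simple string hash; real Kafka clients hash the message key with murmur2.
  let hash = 0;
  for (const ch of driverId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % partitionCount;
}

const partitions: string[][] = [[], [], []]; // a 3-partition "driver-locations" topic

function produce(driverId: string, payload: string): void {
  partitions[partitionFor(driverId, partitions.length)].push(payload);
}

produce('driver-7', 'update-1');
produce('driver-7', 'update-2');

// All of driver-7's updates sit in one partition, in order, so the single
// consumer that owns this partition can forward them without reordering.
const partition = partitions[partitionFor('driver-7', 3)];
console.log(partition); // ['update-1', 'update-2']
```

Each partition is consumed by exactly one consumer in a group, which is how Kafka spreads millions of updates across machines while keeping per-driver ordering intact.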
Kafka is overkill for small apps, but for enterprise-level scale, it's a game-changer. I'll add a Kafka setup to my GitHub repo soon, so keep an eye out!
What to Tell Frontend Devs
As a backend dev, I was scratching my head about what to tell the frontend team. Turns out, it's pretty simple:
- Driver App:
  - Connect to the WebSocket server.
  - Send `send-location` events with latitude and longitude every few seconds.
  - Maybe show the driver's own location on a map, if needed.
- User App:
  - Connect to the WebSocket server.
  - Listen for `receive-location` events and update the map.
  - Send a `track-driver` event with the driver's ID to join their room.
That's it! The frontend devs will love how easy this is: just a few events, and the backend handles the heavy lifting.
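On the user-app side, the only state the frontend really keeps is "where is each marker right now". Here's a dependency-free sketch of that piece; the socket wiring with socket.io-client appears only in a comment, and the names `applyLocation` and `redrawMap` are my own, not from the repo:

```typescript
// Tiny reducer the user app could use to apply `receive-location` payloads.
interface LocationEvent {
  id: string;
  latitude: number;
  longitude: number;
}
type Markers = Record<string, { latitude: number; longitude: number }>;

function applyLocation(markers: Markers, evt: LocationEvent): Markers {
  return {
    ...markers,
    [evt.id]: { latitude: evt.latitude, longitude: evt.longitude },
  };
}

// The real wiring would be roughly:
//   socket.emit('track-driver', driverId);
//   socket.on('receive-location', (evt) => {
//     markers = applyLocation(markers, evt);
//     redrawMap(markers);
//   });

let markers: Markers = {};
markers = applyLocation(markers, { id: 'driver-1', latitude: 22.57, longitude: 88.36 });
console.log(markers['driver-1']); // { latitude: 22.57, longitude: 88.36 }
```

Keeping the map update as a pure function of (current markers, incoming event) makes the handler trivial to test without a socket connection.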
Comparing the Approaches
Let's break it down with a table to see how these methods stack up:
| Approach | Pros | Cons | Best For |
|---|---|---|---|
| Basic WebSockets | Easy to set up, works for small apps. | Not scalable, sends too much data. | Prototypes, small apps. |
| Redis Pub/Sub | Scales better, targets updates. | Misses updates if servers crash. | Medium-sized apps. |
| Redis Streams Adapter | Reliable, scalable, less code. | Slightly tricky to set up. | Large apps with many users. |
| Kafka | Handles huge scale, extra features. | Too much for small apps, needs infra. | Enterprise-level apps. |
So, that's the full story! From a basic prototype to scaling for a real delivery app, this is how you make real-time tracking work. The code's all on GitHub, so go check it out.
Next time you're waiting for your food and watching that driver dot move, you'll know what's happening behind the scenes.
Hope this clears things up. Let me know if you have questions!












