Hey DEV Community! 👋
I'm Geetansh Garg, a backend engineer who loves building real-time, scalable systems. In my previous post, I shared how we built a real-time notification service using FastAPI and Redis Streams.
Today, I want to go deeper into why we moved from Redis Pub/Sub to Redis Streams, and why this shift made our real-time infrastructure much more reliable and production-ready.
❌ Why Redis Pub/Sub Didn't Work for Us
We initially chose Redis Pub/Sub because it was simple to implement and worked well in early testing. But as the system grew, we hit several limitations:
- ❌ Message loss: If a user wasn't connected when the message was published, it was gone forever.
- ❌ No persistence: Pub/Sub doesn't store message history.
- ❌ No retries or acknowledgment: We couldn't confirm message delivery.
- ❌ Hard to scale: Horizontal scaling of consumers was error-prone and lacked coordination.
In short, Pub/Sub is great for fire-and-forget messages, but not suitable when reliability matters.
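To make the fire-and-forget semantics concrete, here's a toy in-memory model (illustrative only — `ToyPubSub` is not a real Redis API). Like Redis `PUBLISH`, it delivers only to subscribers connected at that instant and returns how many received the message; with nobody listening, the message is simply gone:

```python
# A toy in-memory model of Pub/Sub's fire-and-forget semantics.
# (Illustrative only; this is not the real Redis client API.)
class ToyPubSub:
    def __init__(self):
        self.subscribers = {}  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self.subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        # Like Redis PUBLISH: deliver only to subscribers connected
        # *right now* and return how many of them received it.
        listeners = self.subscribers.get(channel, [])
        for callback in listeners:
            callback(message)
        return len(listeners)

bus = ToyPubSub()
print(bus.publish("notifications", "alert #1"))  # 0 -- nobody listening, message lost

received = []
bus.subscribe("notifications", received.append)
print(bus.publish("notifications", "alert #2"))  # 1 -- delivered this time
```

There is no buffer anywhere in this flow, which is exactly the gap Streams fills.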
✅ Why Redis Streams Was a Better Fit
We explored alternatives and landed on Redis Streams, which offered exactly what we needed:
- ✅ Durability: Messages are stored until consumed.
- ✅ Consumer Groups: Support for multiple parallel consumers with coordinated delivery.
- ✅ Acknowledgement: Messages are marked delivered only after explicit acknowledgment.
- ✅ Replayability: Consumers can resume from the last processed ID.
These features helped us build a fault-tolerant, reliable real-time pipeline for notifications, especially for users who might be temporarily offline.
⚙️ Sample Code: Redis Streams in Action
Here's how we use Redis Streams for our notification service:
1. Pushing a notification to the stream:

```python
redis.xadd("notifications_stream", {
    "user_id": "123",
    "message": "You've got a new alert",
    "lang": "en",
    "priority": "high"
})
```
2. Reading from the stream as a consumer:

```python
redis.xreadgroup(
    groupname="notifier_group",
    consumername="worker-1",
    streams={"notifications_stream": ">"}
)
```
3. Acknowledging processed messages:

```python
redis.xack("notifications_stream", "notifier_group", message_id)
```
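One gotcha worth noting: `xreadgroup` fails with a NOGROUP error unless the consumer group already exists (created with XGROUP CREATE). Here's a minimal sketch of an idempotent startup helper; the helper name is mine, and the broad `except Exception` is only to keep the sketch dependency-free — real redis-py code would catch `redis.exceptions.ResponseError`:

```python
def ensure_group(client, stream, group):
    """Create the consumer group once; tolerate 'already exists'."""
    try:
        # mkstream=True also creates the stream itself if missing,
        # so this is safe to run against a fresh Redis instance.
        client.xgroup_create(stream, group, id="0", mkstream=True)
    except Exception as exc:  # real code: redis.exceptions.ResponseError
        # Redis replies BUSYGROUP when the group already exists;
        # anything else is a genuine error and should propagate.
        if "BUSYGROUP" not in str(exc):
            raise

# Usage at startup (client construction is illustrative):
# r = redis.Redis(decode_responses=True)
# ensure_group(r, "notifications_stream", "notifier_group")
```

Running this once on service boot means workers can start reading immediately without a race on group creation.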
We also set up logic to retry unacknowledged messages using XPENDING and XCLAIM.
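For reference, that retry pass boils down to something like the sketch below, assuming redis-py. The function names, the 100-entry batch size, and the 60-second idle threshold are illustrative choices, not our exact production values:

```python
def stale_ids(pending_entries, idle_threshold_ms):
    """Pick pending message IDs whose delivery has been idle too long."""
    return [
        entry["message_id"]
        for entry in pending_entries
        if entry["time_since_delivered"] >= idle_threshold_ms
    ]

def reclaim_stale(client, stream, group, consumer, idle_threshold_ms=60_000):
    # XPENDING lists messages that were delivered to some consumer
    # but never XACKed (e.g., the worker crashed mid-processing).
    pending = client.xpending_range(stream, group, min="-", max="+", count=100)
    ids = stale_ids(pending, idle_threshold_ms)
    if not ids:
        return []
    # XCLAIM transfers ownership of those messages to this consumer
    # so it can process (and finally XACK) them.
    return client.xclaim(stream, group, consumer, idle_threshold_ms, ids)
```

Run periodically, this guarantees that a crashed worker's unacknowledged messages are eventually picked up by a healthy one.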
🧠 Lessons Learned
Redis Streams was a game-changer for our use case, allowing us to scale safely and avoid message loss.
- We added a background task to periodically trim the stream using XTRIM to keep memory usage in check.
- Building resumable consumer logic gave us high availability even during service restarts.
- Implementing a WebSocket layer on top helped us push these events live to users with JWT-secured sessions.
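The trimming step itself is a single call; here's a sketch assuming redis-py (the 10,000-entry cap and the `trim_once` helper name are illustrative):

```python
def trim_once(client, stream, max_len=10_000):
    """Cap the stream's length; returns how many entries were removed."""
    # approximate=True sends XTRIM <stream> MAXLEN ~ <max_len>, which
    # lets Redis trim lazily at macro-node boundaries -- far cheaper
    # than an exact trim, at the cost of keeping slightly more entries.
    return client.xtrim(stream, maxlen=max_len, approximate=True)

# We invoke this from a periodic background task, e.g. an asyncio loop
# started on FastAPI's startup event (scheduling details omitted).
```

Pick `max_len` large enough that offline consumers can still catch up before their messages are trimmed away.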
🛠️ When NOT to Use Redis Streams
Redis Streams isn't the perfect fit for all cases. You might not want to use it when:
- ⚡ You need ultra-low-latency, fire-and-forget messaging (Pub/Sub is simpler here).
- 🔀 Your system has complex message routing or massive scale; in that case, Kafka might be a better tool.
- 📉 You don't need history or delivery guarantees.
🚀 Final Takeaway
Redis Streams gave us exactly what we needed: a scalable, reliable, and production-friendly way to deliver real-time notifications with FastAPI and WebSockets.
If you're building something similar, I highly recommend giving Redis Streams a serious look, especially if you're struggling with Pub/Sub's limitations.
💬 Let's Talk!
Have you worked with Redis Streams or built similar systems?
Would love to hear how you tackled durability, replay, or scale challenges!
Drop a comment or DM; always happy to nerd out on system design 🤓