Polling your database every second isn't real-time. It's an expensive illusion.
I was building a Telegram channel monitoring system and needed instant updates in the browser. My first instinct was the "simple" approach: poll the database every second and check for new messages.
It worked. But it also created problems that got worse as the system grew.
The Polling Problem
Here's what happens when you poll:
Constant load with no payoff. Your database handles queries even when there's nothing new. 10 clients polling every second = 10 queries per second. 1,000 clients = 1,000 queries per second. Most return empty results.
Delays that users notice. A 1-second polling interval means up to 1 second of latency. Users feel this, especially in chat-like interfaces. And if you poll less frequently to reduce load, the delays get worse.
Scaling becomes painful. More clients means linearly more database queries. Your "simple" solution is now your bottleneck.
Code complexity creeps in. You need timers, deduplication logic, error handling for failed polls, backoff strategies (the sketch below shows where this ends up). The "simple" solution isn't simple anymore.
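To make that concrete, here's roughly what the polling loop grows into. A hedged sketch: fetch_messages_since and handle are placeholders for whatever your data layer and downstream update path actually look like.

import asyncio

POLL_INTERVAL = 1.0  # seconds
MAX_BACKOFF = 30.0

async def poll_forever(fetch_messages_since, handle):
    last_seen_id = 0  # deduplication state: the newest id we've delivered
    delay = POLL_INTERVAL
    while True:
        try:
            rows = await fetch_messages_since(last_seen_id)
            delay = POLL_INTERVAL  # healthy poll, reset the backoff
            for row in rows:
                last_seen_id = max(last_seen_id, row["id"])
                await handle(row)
        except Exception:
            delay = min(delay * 2, MAX_BACKOFF)  # failed poll, back off
        await asyncio.sleep(delay)

Timers, deduplication, error handling, backoff: all of it exists just to ask a question the data could have answered by showing up.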
The Solution: Let Data Push Itself
What if, instead of constantly asking "is there new data?", the data just showed up when it existed?
That's the mental shift. Stop pulling. Start pushing.
The pattern I use:
Event Source → Redis Pub/Sub → WebSocket → Browser
Three components, each with a clear job:
- Redis Pub/Sub acts as a message bus. It broadcasts events to all interested subscribers.
- WebSocket maintains a persistent connection between server and browser. No repeated handshakes.
- The database stays out of the delivery path entirely. It stores data. Redis broadcasts it.
How It Works in Practice
Here's the concrete flow from my Telegram monitoring project:
Telegram API → Telethon Handler → Redis Pub/Sub → FastAPI WebSocket → React
When a new message appears in a monitored Telegram channel:
- Telethon (Python Telegram client) receives the event
- Handler publishes the message to a Redis channel
- WebSocket server, subscribed to that Redis channel, receives it
- Server pushes it to all connected browsers
- React updates the UI
Time from Telegram event to browser: 50-100ms.
Database queries for delivery: zero.
The Code
Let me show you the key pieces. This isn't a complete tutorial, but enough to understand the pattern.
Publishing to Redis
When new data arrives, publish it:
import json

from redis.asyncio import Redis

class BroadcastService:
    def __init__(self, redis: Redis):
        self.redis = redis

    async def broadcast(self, channel: str, data: dict):
        # Serialize once and fan out to every subscriber on this channel
        await self.redis.publish(channel, json.dumps(data))
Usage is straightforward:
# In your event handler
await broadcast_service.broadcast(
f"channel:{channel_id}",
{"type": "new_message", "data": message_data}
)
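Upstream of this sits the Telethon handler that calls broadcast. A minimal sketch, where api_id, api_hash, and monitored_channels are placeholders for your own configuration:

from telethon import TelegramClient, events

client = TelegramClient("monitor", api_id, api_hash)

@client.on(events.NewMessage(chats=monitored_channels))
async def on_new_message(event):
    # Persist to the database separately if you need history;
    # the delivery path goes straight through Redis
    await broadcast_service.broadcast(
        f"channel:{event.chat_id}",
        {"type": "new_message", "data": {"id": event.message.id, "text": event.raw_text}},
    )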
WebSocket Manager with Redis Subscription
The WebSocket server subscribes to Redis and forwards messages to connected clients:
from fastapi import WebSocket
from redis.asyncio import Redis

class ConnectionManager:
    def __init__(self, redis: Redis):
        # Assumes the Redis client was created with decode_responses=True,
        # so message["data"] arrives as str (otherwise decode the bytes first)
        self.redis = redis
        self.connections: dict[str, list[WebSocket]] = {}

    async def subscribe(self, websocket: WebSocket, channel: str):
        # Track connection
        if channel not in self.connections:
            self.connections[channel] = []
        self.connections[channel].append(websocket)

        # Subscribe to the Redis channel and forward messages until the client drops
        pubsub = self.redis.pubsub()
        await pubsub.subscribe(channel)
        try:
            async for message in pubsub.listen():
                if message["type"] == "message":
                    await websocket.send_text(message["data"])
        finally:
            await pubsub.unsubscribe(channel)
            self.connections[channel].remove(websocket)
The FastAPI endpoint:
@app.websocket("/ws/{channel_id}")
async def websocket_endpoint(websocket: WebSocket, channel_id: str):
await websocket.accept()
await manager.subscribe(websocket, f"channel:{channel_id}")
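For completeness, here's roughly how the pieces get wired at startup. A sketch with a placeholder URL; decode_responses=True is the assumption the manager above relies on, so Redis hands back str instead of bytes:

from contextlib import asynccontextmanager

from fastapi import FastAPI
from redis.asyncio import Redis

redis = Redis.from_url("redis://localhost:6379", decode_responses=True)
broadcast_service = BroadcastService(redis)
manager = ConnectionManager(redis)

@asynccontextmanager
async def lifespan(app: FastAPI):
    yield
    await redis.aclose()  # release the connection pool on shutdown

app = FastAPI(lifespan=lifespan)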
Frontend: useWebSocket Hook
On the client side, a custom hook handles the connection with automatic reconnection:
import { useEffect, useRef, useState, useCallback } from 'react';
type WebSocketStatus = 'connecting' | 'connected' | 'disconnected';
interface UseWebSocketOptions {
reconnect?: boolean;
reconnectInterval?: number;
maxRetries?: number;
}
export function useWebSocket<T>(
url: string,
options: UseWebSocketOptions = {}
) {
const {
reconnect = true,
reconnectInterval = 3000,
maxRetries = 5,
} = options;
const [data, setData] = useState<T | null>(null);
const [status, setStatus] = useState<WebSocketStatus>('disconnected');
const wsRef = useRef<WebSocket | null>(null);
const retriesRef = useRef(0);
const connect = useCallback(() => {
setStatus('connecting');
const ws = new WebSocket(url);
ws.onopen = () => {
setStatus('connected');
retriesRef.current = 0;
};
ws.onmessage = (event) => {
const parsed = JSON.parse(event.data) as T;
setData(parsed);
};
ws.onclose = () => {
setStatus('disconnected');
if (reconnect && retriesRef.current < maxRetries) {
retriesRef.current++;
setTimeout(connect, reconnectInterval);
}
};
wsRef.current = ws;
}, [url, reconnect, reconnectInterval, maxRetries]);
useEffect(() => {
  connect();
  return () => {
    // Detach onclose first so an intentional close doesn't schedule a reconnect
    const ws = wsRef.current;
    if (ws) {
      ws.onclose = null;
      ws.close();
    }
  };
}, [connect]);
return { data, status };
}
Usage:
function ChannelMonitor({ channelId }: { channelId: string }) {
const { data, status } = useWebSocket<Message>(
`wss://api.example.com/ws/${channelId}`
);
if (status === 'connecting') return <Spinner />;
if (!data) return <Empty />;
// data holds only the latest message; accumulate in state if you need history
return <MessageList messages={[data]} />;
}
Gotchas I Learned the Hard Way
This pattern works great, but there are a few things that will bite you if you're not careful:
Connection Management
When a client disconnects (closes tab, loses network), clean up properly. Orphaned subscriptions leak memory and can cause issues with Redis.
# Always use try/finally for cleanup
try:
async for message in pubsub.listen():
await websocket.send_text(message["data"])
finally:
await pubsub.unsubscribe(channel)
# Remove from connection tracking
Graceful Shutdown
When you deploy a new version, existing WebSocket connections need to close gracefully. Don't just kill the process. Signal clients to reconnect, drain connections, then shut down.
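One way to do the draining, as a sketch against the ConnectionManager above: close every tracked socket with WebSocket close code 1001 ("going away"), which fires each client's onclose handler and lets the reconnect logic find the new instance.

async def drain_connections(manager: ConnectionManager) -> None:
    # Snapshot the lists, since subscribe()'s cleanup mutates them as sockets close
    for sockets in list(manager.connections.values()):
        for ws in list(sockets):
            try:
                await ws.close(code=1001)  # 1001 = going away
            except Exception:
                pass  # client already disconnected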
Reconnection Strategy
Clients will disconnect. Network blips happen. Your frontend needs to reconnect automatically, but with backoff. Don't hammer the server with immediate reconnection attempts.
// Exponential backoff
const delay = Math.min(
reconnectInterval * Math.pow(2, retriesRef.current),
30000 // cap at 30 seconds
);
setTimeout(connect, delay);
Keep Connections Alive
WebSocket connections can go stale. Implement ping/pong heartbeats to detect dead connections and clean them up.
# Server-side ping
import asyncio

async def heartbeat(websocket: WebSocket):
    while True:
        await asyncio.sleep(30)
        try:
            await websocket.send_text('{"type": "ping"}')
        except Exception:
            break  # Connection dead, cleanup will happen
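One wiring detail: the subscribe loop from earlier blocks until the client drops, so the heartbeat has to run as a separate task. A sketch of the endpoint running both:

@app.websocket("/ws/{channel_id}")
async def websocket_endpoint(websocket: WebSocket, channel_id: str):
    await websocket.accept()
    heartbeat_task = asyncio.create_task(heartbeat(websocket))
    try:
        await manager.subscribe(websocket, f"channel:{channel_id}")
    finally:
        heartbeat_task.cancel()  # stop pinging once the subscriber loop exits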
The Results
After switching from polling to WebSocket + Redis Pub/Sub:
| Metric | Polling | WebSocket + Redis |
|---|---|---|
| Latency | 1-5 seconds | <100ms |
| DB load for delivery | High | Zero |
| Scales with clients | Poorly | Well |
| Code complexity | Growing | Contained |
The architecture is cleaner too. Database handles persistence. Redis handles broadcasting. WebSocket handles delivery. Each component does one thing well.
When to Use This Pattern
This pattern shines for:
- Real-time dashboards and monitoring
- Chat and messaging features
- Live notifications
- Collaborative editing
- Any "instant update" requirement
It might be overkill for:
- Simple apps with few users
- Updates that can wait minutes
- Systems where eventual consistency is fine
Wrapping Up
Polling was the right solution for a different era. If you're building something that needs real-time updates today, WebSocket + Redis Pub/Sub is a battle-tested pattern that scales.
The key insight: separate data persistence from data delivery. Your database stores data. Redis broadcasts it. WebSocket delivers it.
Each component does what it's good at. That's good architecture.
What's your approach to real-time in your projects? Have you tried this pattern? I'd love to hear about your experiences in the comments.