Safal Bhandari
Understanding Backpressure in WebSockets

Backpressure in WebSockets refers to a flow-control mechanism that prevents a fast data producer (server or client) from overwhelming a slow consumer (the other side of the WebSocket). It ensures stability, avoids memory bloat, and keeps throughput consistent.

Let’s go step by step: what it is, why it happens, how it’s handled, and how to implement it.


1. Core Concept

A WebSocket connection is a full-duplex TCP stream. Both sides can send data at any time.
However, network speed, client CPU, or I/O delays mean one side can’t always consume data as fast as the other produces it.

Backpressure occurs when:

  • The sender keeps writing messages faster than the receiver (or network) can process or transmit them.

This leads to:

  • Increasing memory usage (buffers fill up).
  • Higher latency.
  • Eventually, crashes or forced connection closures.

2. How WebSockets Send Data Internally

When you call something like:

ws.send(data);

Internally:

  • The data is queued in a TCP send buffer.
  • The OS tries to send it over the network.
  • If the buffer is full, the send() call doesn’t immediately fail—it just queues the data.
  • As more sends happen, that buffer can grow in memory if the application doesn’t monitor it.

So, the write call being non-blocking is both good and bad:

  • ✅ Good: It keeps the app responsive.
  • ❌ Bad: The app might not realize that it’s flooding the buffer.
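
To see this in practice, here is a minimal sketch (the endpoint ws://localhost:8080 is a placeholder for any local echo server, not part of the original post): a tight send loop never blocks, yet bufferedAmount keeps climbing because the TCP buffer cannot drain as fast as the loop queues data.

import WebSocket from 'ws';

// Placeholder endpoint for illustration only
const ws = new WebSocket('ws://localhost:8080');

ws.on('open', () => {
  const payload = 'x'.repeat(64 * 1024); // 64 KB per message
  for (let i = 0; i < 1000; i++) {
    ws.send(payload); // never blocks and never fails here: it only queues
  }
  // Bytes accepted by send() but not yet flushed to the network
  console.log('queued bytes:', ws.bufferedAmount);
});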

3. How Backpressure Builds Up

Consider this timeline:

| Time | Sender Action | Receiver Condition | Result |
| --- | --- | --- | --- |
| t1 | ws.send() fast loop | Receiver processes slowly | Data accumulates |
| t2 | TCP buffer fills up | Network can’t drain fast | OS backpressure |
| t3 | Sender still queues messages | Memory grows | Risk of crash |

4. Detecting Backpressure

In Node.js and browser WebSockets:

  • In both the Node.js ws library and browser WebSockets, .send() returns undefined, so there is no return value telling you the buffer is full.
  • Instead, check the socket’s bufferedAmount property: the number of bytes accepted by .send() but not yet handed off to the network.
  • When bufferedAmount rises above a threshold you choose, pause sending until it falls back down. In Node you can also pass a callback to .send(), which fires once that message has been written out.

Example (Node.js ws library):

const HIGH_WATER_MARK = 1024 * 1024; // stop queuing above ~1 MB of buffered data

function sendData(ws, data) {
  if (ws.bufferedAmount > HIGH_WATER_MARK) {
    // Buffer is backed up: try again once some of it has been flushed
    setTimeout(() => sendData(ws, data), 50);
    return;
  }
  ws.send(data, { binary: false }, (err) => {
    if (err) console.error('Send error:', err);
  });
}

In browsers, there is no event that tells you the buffer has drained. You can read ws.bufferedAmount, but you have to poll it yourself and throttle your send frequency manually (e.g., via intervals or queues).
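
A minimal browser-side sketch of that manual throttling (the URL and the threshold below are placeholders, not from the original post): queue outgoing messages and flush them on a timer only while bufferedAmount stays under a chosen limit.

const ws = new WebSocket('wss://example.com/stream'); // placeholder URL
const HIGH_WATER_MARK = 256 * 1024; // assumed threshold, tune per app
const pending = [];

function queueSend(message) {
  pending.push(message);
}

// Flush a few times per second, but only while the buffer has room
setInterval(() => {
  while (pending.length > 0 && ws.bufferedAmount < HIGH_WATER_MARK) {
    ws.send(pending.shift());
  }
}, 100);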


5. Handling Backpressure

(a) Queue messages manually

Instead of calling ws.send() directly, you push data into a queue and only send if the buffer is ready.

const queue = [];
let sending = false;
const HIGH_WATER_MARK = 1024 * 1024; // pause when ~1 MB is already queued

function send(ws, message) {
  queue.push(message);
  if (!sending) processQueue(ws);
}

function processQueue(ws) {
  if (queue.length === 0) {
    sending = false;
    return;
  }
  sending = true;
  if (ws.bufferedAmount > HIGH_WATER_MARK) {
    // Buffer full: check again shortly instead of piling on more data
    setTimeout(() => processQueue(ws), 50);
    return;
  }
  // The callback fires once the message has been written to the socket
  ws.send(queue.shift(), (err) => {
    if (err) console.error('Send error:', err);
    processQueue(ws);
  });
}

(b) Limit per-client throughput

If multiple clients connect, apply per-client rate limiting:

  • Send N messages per second per connection.
  • Drop or batch messages beyond a limit.

Example: token bucket or leaky bucket algorithms.
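
A rough sketch of the token bucket idea (createTokenBucket, ratePerSec, and burst are illustrative names, not an existing API): each connection gets its own bucket that refills at a fixed rate, and a message is only sent while a token is available.

function createTokenBucket(ratePerSec, burst = ratePerSec) {
  let tokens = burst;
  let last = Date.now();
  return {
    trySend(ws, message) {
      const now = Date.now();
      // Refill in proportion to the elapsed time, capped at the burst size
      tokens = Math.min(burst, tokens + ((now - last) / 1000) * ratePerSec);
      last = now;
      if (tokens >= 1) {
        tokens -= 1;
        ws.send(message);
        return true;
      }
      return false; // over the limit: drop, batch, or queue instead
    },
  };
}

// Usage: give each connection its own bucket
// const bucket = createTokenBucket(50); bucket.trySend(ws, update);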


(c) Apply backpressure at the application layer

Instead of sending all game updates or chat messages in real time:

  • Compress or batch messages (e.g., send state diffs every 50ms).
  • Drop outdated data (like old player positions).
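
A small sketch of that batching idea, assuming an already open connection ws (the 50 ms interval and the key-based diffing are illustrative choices): keep only the latest value per key and flush everything in one message on a timer, so stale updates are silently dropped.

const pendingState = new Map();

function queueUpdate(key, value) {
  pendingState.set(key, value); // a newer value overwrites the stale one
}

// Flush one batched message every 50 ms instead of one message per update
setInterval(() => {
  if (pendingState.size === 0) return;
  ws.send(JSON.stringify(Object.fromEntries(pendingState)));
  pendingState.clear();
}, 50);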

6. In Redis or Message Queue Context

When using WebSockets with Redis or Kafka:

  • The backend may push data faster than the WebSocket can deliver.
  • Implement backpressure propagation:

    • If the WebSocket buffer is full, pause consuming from Redis.
    • Resume once bufferedAmount falls back below your threshold.

Example pattern (here pause() and resume() stand in for whatever flow control your subscriber or consumer library actually exposes, e.g. pausing a Kafka consumer):

redisSub.on('message', (channel, msg) => {
  ws.send(msg);
  if (ws.bufferedAmount > 1024 * 1024) { // socket is backed up: pause upstream
    redisSub.pause();
    const timer = setInterval(() => {
      if (ws.bufferedAmount < 1024 * 1024) {
        clearInterval(timer);
        redisSub.resume();
      }
    }, 50);
  }
});

7. Monitoring Metrics in Production

Track:

  • outboundQueueLength (number of pending messages).
  • averageSendTime.
  • memoryUsage growth per connection.
  • TCP retransmissions and socket buffer sizes.

Use these to auto-scale or drop slow clients.
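
For example, a per-connection snapshot along these lines (assuming the queue from section 5(a) and an open connection ws; the field names and the 5 MB cut-off are illustrative, not from the original post) can feed your dashboards and drive a policy for disconnecting slow clients:

function collectMetrics(ws, queue) {
  return {
    outboundQueueLength: queue.length,       // messages waiting in the app-level queue
    socketBufferedBytes: ws.bufferedAmount,  // bytes queued inside the WebSocket
    rssBytes: process.memoryUsage().rss,     // watch growth as connections pile up
  };
}

// Check every 5 seconds and drop clients that stay backed up
setInterval(() => {
  const m = collectMetrics(ws, queue);
  if (m.socketBufferedBytes > 5 * 1024 * 1024) {
    ws.terminate(); // slow client: free the memory it is holding
  }
}, 5000);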


8. Summary Table

| Concept | Description | Fix |
| --- | --- | --- |
| Backpressure | Sender faster than receiver | Pause or queue sends |
| TCP buffer | OS-managed send queue | Monitor via bufferedAmount |
| Browser WebSocket | No buffer events | Poll bufferedAmount / throttle manually |
| Node ws | Exposes bufferedAmount and a send callback | Pause above a threshold, resume once it drops |
| Redis/Kafka integration | Upstream may flood the socket | Pause the upstream consumer under pressure |

9. Key Takeaway

Backpressure = controlled data flow.
Without it, your WebSocket server becomes memory-heavy, latency increases, and you lose control over delivery rate.

You must:

  1. Detect pressure (buffer full or slow client).
  2. Stop sending.
  3. Wait for the buffer to drain.
  4. Resume safely.
