Wilson Xu

Building a Real-Time Dashboard with Socket.io and React

Real-time dashboards are everywhere — monitoring production systems, tracking live sales, watching IoT sensor feeds. The technical requirements are always the same: low latency, reliable reconnection, efficient data transfer, and a UI that updates without full page refreshes.

In this article we'll build a complete real-time metrics dashboard from scratch using Socket.io for WebSocket communication and React for the UI. We'll cover production concerns most tutorials skip: proper reconnection handling, scaling to multiple servers with the Redis adapter, and avoiding the common pitfalls that turn prototypes into memory-leak machines in production.

By the end you'll have a working dashboard displaying live CPU/memory metrics, a line chart updating in real time, and a server architecture that can scale horizontally.


What We're Building

The finished application has:

  • A Node.js + Express + Socket.io server that emits system metrics every second
  • A React frontend with live-updating charts using Chart.js
  • Proper connection state management (reconnecting, connected, disconnected)
  • Redis pub/sub for horizontal scaling (so multiple server instances work together)
  • TypeScript throughout for type safety

The full source is built up section by section, and the architecture is designed with production use in mind rather than as a throwaway demo.


Project Setup

mkdir realtime-dashboard && cd realtime-dashboard

# Server
mkdir server && cd server
npm init -y
npm install express socket.io ioredis @socket.io/redis-adapter systeminformation
npm install -D typescript @types/node @types/express ts-node nodemon
npx tsc --init

cd ..

# Client
npx create-react-app client --template typescript
cd client
npm install socket.io-client chart.js react-chartjs-2 chartjs-adapter-date-fns date-fns
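It also helps to wire up run scripts in the server's package.json before moving on; the script names here are just a suggestion:

```json
{
  "scripts": {
    "dev": "nodemon --exec ts-node src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
```

`npm run dev` then restarts the server on every source change during development.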

Your tsconfig.json on the server should have:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "lib": ["ES2020"],
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}

Socket.io Server Setup

Create server/src/index.ts:

import express from "express";
import { createServer } from "http";
import { Server, Socket } from "socket.io";
import Redis from "ioredis";
import { createAdapter } from "@socket.io/redis-adapter";
import si from "systeminformation";

const app = express();
const httpServer = createServer(app);

const io = new Server(httpServer, {
  cors: {
    origin: process.env.CLIENT_URL || "http://localhost:3000",
    methods: ["GET", "POST"],
    credentials: true,
  },
  // Tune connection parameters for production
  pingTimeout: 20000,
  pingInterval: 25000,
  transports: ["websocket", "polling"], // websocket first, polling as fallback
});

// Health check endpoint
app.get("/health", (req, res) => {
  res.json({ status: "ok", connections: io.engine.clientsCount });
});

export { io, httpServer };

Defining Shared Types

Create server/src/types.ts — these will be shared (or duplicated) in the client:

export interface MetricsPayload {
  timestamp: number;
  cpu: {
    usage: number;          // percentage 0-100
    cores: number;
    speed: number;          // GHz
  };
  memory: {
    total: number;          // bytes
    used: number;           // bytes
    usagePercent: number;   // percentage 0-100
  };
  network: {
    rx_sec: number;         // bytes/second received
    tx_sec: number;         // bytes/second transmitted
  };
  uptime: number;           // seconds
}

export interface ClientToServerEvents {
  subscribe_room: (room: string) => void;
  unsubscribe_room: (room: string) => void;
  request_snapshot: (callback: (data: MetricsPayload) => void) => void;
}

export interface ServerToClientEvents {
  metrics_update: (data: MetricsPayload) => void;
  alert: (data: { level: "info" | "warning" | "critical"; message: string }) => void;
  connected_count: (count: number) => void;
}

The Metrics Collector

Create server/src/metrics.ts:

import si from "systeminformation";
import { MetricsPayload } from "./types";

// Only the counters we need — the fallback object below wouldn't satisfy
// the full Systeminformation.NetworkStatsData interface under strict mode
let previousNetworkStats: { rx_bytes: number; tx_bytes: number } | null = null;
let lastCollected = Date.now();

export async function collectMetrics(): Promise<MetricsPayload> {
  const [cpuLoad, memory, networkStats] = await Promise.all([
    si.currentLoad(),
    si.mem(),
    si.networkStats(),
  ]);

  const now = Date.now();
  const elapsed = (now - lastCollected) / 1000; // seconds
  lastCollected = now;

  // Calculate network throughput
  const netData = networkStats[0] || { rx_bytes: 0, tx_bytes: 0 };
  const rx_sec = previousNetworkStats
    ? Math.max(0, (netData.rx_bytes - previousNetworkStats.rx_bytes) / elapsed)
    : 0;
  const tx_sec = previousNetworkStats
    ? Math.max(0, (netData.tx_bytes - previousNetworkStats.tx_bytes) / elapsed)
    : 0;
  previousNetworkStats = netData;

  return {
    timestamp: now,
    cpu: {
      usage: Math.round(cpuLoad.currentLoad * 10) / 10,
      cores: cpuLoad.cpus.length,
      speed: cpuLoad.cpus[0]?.speed || 0,
    },
    memory: {
      total: memory.total,
      used: memory.used,
      usagePercent: Math.round((memory.used / memory.total) * 1000) / 10,
    },
    network: {
      rx_sec: Math.round(rx_sec),
      tx_sec: Math.round(tx_sec),
    },
    uptime: process.uptime(),
  };
}
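The rx_sec/tx_sec math above is just a delta over elapsed time. Pulled out as a pure function (the name is ours, not part of the server code), it becomes trivial to unit-test:

```typescript
// Bytes/sec between two cumulative byte counters sampled elapsedMs apart.
// Math.max guards against counter resets (e.g., a network interface restart).
function bytesPerSecond(prevBytes: number, currBytes: number, elapsedMs: number): number {
  if (elapsedMs <= 0) return 0; // avoid division by zero on the first sample
  return Math.max(0, (currBytes - prevBytes) / (elapsedMs / 1000));
}
```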

Connection Handling and Broadcasting

Back in server/src/index.ts, add the connection handler:

import { collectMetrics } from "./metrics";
import type { ClientToServerEvents, ServerToClientEvents } from "./types";

// Type the Socket.io server for full type safety
const typedIo = io as Server<ClientToServerEvents, ServerToClientEvents>;

// Track active intervals per socket to prevent leaks
const socketIntervals = new Map<string, NodeJS.Timeout>();

typedIo.on("connection", (socket: Socket<ClientToServerEvents, ServerToClientEvents>) => {
  console.log(`Client connected: ${socket.id} (total: ${typedIo.engine.clientsCount})`);

  // Broadcast updated connection count to everyone
  typedIo.emit("connected_count", typedIo.engine.clientsCount);

  // Allow clients to subscribe to specific rooms (e.g., different servers)
  socket.on("subscribe_room", (room) => {
    socket.join(room);
    console.log(`${socket.id} joined room: ${room}`);
  });

  socket.on("unsubscribe_room", (room) => {
    socket.leave(room);
  });

  // Allow clients to request an immediate snapshot without waiting for the interval
  socket.on("request_snapshot", async (callback) => {
    try {
      const metrics = await collectMetrics();
      callback(metrics);
    } catch (err) {
      console.error("Snapshot collection failed:", err);
    }
  });

  // Start per-socket metrics emission — 1 update per second
  const interval = setInterval(async () => {
    try {
      const metrics = await collectMetrics();

      // Emit to this specific socket
      socket.emit("metrics_update", metrics);

      // Emit alert if CPU is critically high
      if (metrics.cpu.usage > 90) {
        socket.emit("alert", {
          level: "critical",
          message: `CPU usage critical: ${metrics.cpu.usage}%`,
        });
      } else if (metrics.cpu.usage > 75) {
        socket.emit("alert", {
          level: "warning",
          message: `CPU usage high: ${metrics.cpu.usage}%`,
        });
      }
    } catch (err) {
      console.error("Metrics collection error:", err);
    }
  }, 1000);

  socketIntervals.set(socket.id, interval);

  // Clean up on disconnect — critical to prevent memory leaks
  socket.on("disconnect", (reason) => {
    console.log(`Client disconnected: ${socket.id} — reason: ${reason}`);

    const socketInterval = socketIntervals.get(socket.id);
    if (socketInterval) {
      clearInterval(socketInterval);
      socketIntervals.delete(socket.id);
    }

    // Update connection count
    typedIo.emit("connected_count", typedIo.engine.clientsCount);
  });
});

const PORT = process.env.PORT || 4000;
httpServer.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
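A scaling note before moving on: the per-socket interval above runs collectMetrics once per second per client, so a hundred dashboards mean a hundred system probes a second. Since the payload is identical for everyone, a single shared interval broadcasting with io.emit does the same job at constant cost. A sketch against a minimal interface so it stands alone (Broadcaster and startSharedBroadcast are our names, not Socket.io APIs; in the real server, io.engine.clientsCount and io.emit fill these roles):

```typescript
// One shared collector broadcasting to every client, so the expensive
// system probe runs once per tick regardless of how many sockets are open.
interface Broadcaster {
  clientCount(): number;
  emitAll(event: string, data: unknown): void;
}

function startSharedBroadcast(
  b: Broadcaster,
  collect: () => Promise<unknown>,
  intervalMs = 1000
): () => void {
  const timer = setInterval(async () => {
    if (b.clientCount() === 0) return; // no listeners, skip the probe
    try {
      b.emitAll("metrics_update", await collect());
    } catch (err) {
      console.error("Metrics collection error:", err);
    }
  }, intervalMs);
  return () => clearInterval(timer); // call during graceful shutdown
}
```

The per-socket version in the article keeps the cleanup discussion concrete; for a real deployment the shared interval is usually the better default.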

React Hooks for Socket.io

Well-structured React applications encapsulate socket logic in custom hooks. This keeps components clean and makes it easy to mock WebSocket behavior in tests.

The Base useSocket Hook

Create client/src/hooks/useSocket.ts:

import { useEffect, useRef, useState, useCallback } from "react";
import { io, Socket } from "socket.io-client";
import type { ClientToServerEvents, ServerToClientEvents } from "../types";

export type TypedSocket = Socket<ServerToClientEvents, ClientToServerEvents>;

export type ConnectionStatus = "connecting" | "connected" | "disconnected" | "reconnecting";

interface UseSocketReturn {
  socket: TypedSocket | null;
  status: ConnectionStatus;
  error: string | null;
}

export function useSocket(url: string): UseSocketReturn {
  const socketRef = useRef<TypedSocket | null>(null);
  const [status, setStatus] = useState<ConnectionStatus>("connecting");
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    // Create socket with auto-reconnect configuration
    const socket: TypedSocket = io(url, {
      transports: ["websocket", "polling"],
      reconnection: true,
      reconnectionAttempts: Infinity,
      reconnectionDelay: 1000,
      reconnectionDelayMax: 10000,   // cap at 10 seconds between retries
      randomizationFactor: 0.5,      // jitter to prevent thundering herd
      timeout: 20000,
    });

    socketRef.current = socket;

    socket.on("connect", () => {
      setStatus("connected");
      setError(null);
      console.log("Socket connected:", socket.id);
    });

    socket.on("disconnect", (reason) => {
      console.log("Socket disconnected:", reason);

      // "io server disconnect" means the server explicitly disconnected us.
      // Socket.io will NOT auto-reconnect in this case, so if the client
      // should come back, we have to reconnect manually.
      if (reason === "io server disconnect") {
        setStatus("disconnected");
        socket.connect(); // manual reconnect for server-initiated disconnects
      } else {
        setStatus("reconnecting");
      }
    });

    socket.on("connect_error", (err) => {
      setStatus("reconnecting");
      setError(err.message);
      console.error("Connection error:", err.message);
    });

    socket.io.on("reconnect", (attempt) => {
      setStatus("connected");
      setError(null);
      console.log(`Reconnected after ${attempt} attempts`);
    });

    socket.io.on("reconnect_attempt", (attempt) => {
      setStatus("reconnecting");
      console.log(`Reconnection attempt ${attempt}`);
    });

    // Note: this only fires when reconnectionAttempts is finite; with
    // Infinity (as above) Socket.io keeps retrying forever
    socket.io.on("reconnect_failed", () => {
      setStatus("disconnected");
      setError("Failed to reconnect after maximum attempts");
    });

    return () => {
      socket.removeAllListeners();
      socket.disconnect();
      socketRef.current = null;
    };
  }, [url]);

  return { socket: socketRef.current, status, error };
}

The Metrics Hook

Create client/src/hooks/useMetrics.ts:

import { useEffect, useRef, useState } from "react";
import type { TypedSocket } from "./useSocket";
import type { MetricsPayload } from "../types";

const MAX_HISTORY_POINTS = 60; // 60 seconds of history

export interface MetricsHistory {
  timestamps: number[];
  cpuUsage: number[];
  memoryUsage: number[];
  networkRx: number[];
  networkTx: number[];
}

interface UseMetricsReturn {
  latest: MetricsPayload | null;
  history: MetricsHistory;
  isReceiving: boolean;
}

const emptyHistory = (): MetricsHistory => ({
  timestamps: [],
  cpuUsage: [],
  memoryUsage: [],
  networkRx: [],
  networkTx: [],
});

export function useMetrics(socket: TypedSocket | null): UseMetricsReturn {
  const [latest, setLatest] = useState<MetricsPayload | null>(null);
  const [history, setHistory] = useState<MetricsHistory>(emptyHistory());
  const [isReceiving, setIsReceiving] = useState(false);
  const timeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);

  useEffect(() => {
    if (!socket) return;

    // Request immediate snapshot on connect
    socket.emit("request_snapshot", (snapshot) => {
      setLatest(snapshot);
      setIsReceiving(true);
    });

    socket.on("metrics_update", (data: MetricsPayload) => {
      setLatest(data);
      setIsReceiving(true);

      // Update history, capping at MAX_HISTORY_POINTS
      setHistory((prev) => {
        const append = <T>(arr: T[], val: T): T[] => {
          const next = [...arr, val];
          return next.length > MAX_HISTORY_POINTS
            ? next.slice(next.length - MAX_HISTORY_POINTS)
            : next;
        };

        return {
          timestamps: append(prev.timestamps, data.timestamp),
          cpuUsage:   append(prev.cpuUsage, data.cpu.usage),
          memoryUsage: append(prev.memoryUsage, data.memory.usagePercent),
          networkRx:  append(prev.networkRx, data.network.rx_sec / 1024), // KB/s
          networkTx:  append(prev.networkTx, data.network.tx_sec / 1024), // KB/s
        };
      });

      // Mark as not-receiving if no update in 3 seconds (stale data indicator)
      if (timeoutRef.current) clearTimeout(timeoutRef.current);
      timeoutRef.current = setTimeout(() => setIsReceiving(false), 3000);
    });

    return () => {
      socket.off("metrics_update");
      if (timeoutRef.current) clearTimeout(timeoutRef.current);
    };
  }, [socket]);

  return { latest, history, isReceiving };
}
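The capped append inside setHistory deserves a standalone mention: without the cap, the history arrays grow forever, which is a slow client-side memory leak of its own. Extracted as a pure helper (our name, for illustration):

```typescript
// Append a value while keeping only the newest `max` entries.
function appendCapped<T>(arr: T[], val: T, max: number): T[] {
  const next = [...arr, val];
  return next.length > max ? next.slice(next.length - max) : next;
}
```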

Live Charts with Chart.js

The MetricsChart Component

Create client/src/components/MetricsChart.tsx:

import React, { useEffect, useRef } from "react";
import {
  Chart,
  LineController,
  LineElement,
  PointElement,
  LinearScale,
  TimeScale,
  Filler,
  Tooltip,
  Legend,
  ChartConfiguration,
} from "chart.js";
import "chartjs-adapter-date-fns";
import type { MetricsHistory } from "../hooks/useMetrics";

// Register only what we use (tree-shaking friendly)
Chart.register(
  LineController, LineElement, PointElement,
  LinearScale, TimeScale, Filler, Tooltip, Legend
);

interface MetricsChartProps {
  history: MetricsHistory;
  title: string;
  datasets: {
    label: string;
    dataKey: keyof Omit<MetricsHistory, "timestamps">;
    color: string;
    fill?: boolean;
  }[];
  yAxisLabel?: string;
  yMax?: number;
}

export const MetricsChart: React.FC<MetricsChartProps> = ({
  history,
  title,
  datasets,
  yAxisLabel = "",
  yMax,
}) => {
  const canvasRef = useRef<HTMLCanvasElement>(null);
  const chartRef  = useRef<Chart | null>(null);

  // Initialize chart
  useEffect(() => {
    if (!canvasRef.current) return;

    const config: ChartConfiguration = {
      type: "line",
      data: {
        labels: [],
        datasets: datasets.map((d) => ({
          label: d.label,
          data: [],
          borderColor: d.color,
          backgroundColor: d.fill ? `${d.color}33` : "transparent",
          fill: d.fill ?? false,
          tension: 0.3,
          pointRadius: 0,        // no dots — cleaner for high-frequency data
          borderWidth: 2,
        })),
      },
      options: {
        animation: false,        // disable animation for real-time performance
        responsive: true,
        maintainAspectRatio: false,
        interaction: {
          intersect: false,
          mode: "index",
        },
        plugins: {
          legend: { position: "top" },
          tooltip: {
            callbacks: {
              title: (items) => new Date(items[0].parsed.x).toLocaleTimeString(),
            },
          },
        },
        scales: {
          x: {
            type: "time",
            time: {
              unit: "second",
              displayFormats: { second: "HH:mm:ss" },
            },
            ticks: { maxTicksLimit: 6 },
          },
          y: {
            min: 0,
            max: yMax,
            title: { display: !!yAxisLabel, text: yAxisLabel },
            ticks: {
              callback: (val) => `${val}${yAxisLabel === "%" ? "%" : ""}`,
            },
          },
        },
      },
    };

    chartRef.current = new Chart(canvasRef.current, config);

    return () => {
      chartRef.current?.destroy();
      chartRef.current = null;
    };
  }, []); // Only create once

  // Update chart data when history changes
  useEffect(() => {
    const chart = chartRef.current;
    if (!chart || history.timestamps.length === 0) return;

    chart.data.labels = history.timestamps;
    datasets.forEach((d, i) => {
      if (chart.data.datasets[i]) {
        chart.data.datasets[i].data = history[d.dataKey] as number[];
      }
    });

    chart.update("none"); // "none" = no animation, maximum performance
  }, [history, datasets]);

  return (
    <div style={{ position: "relative", height: "200px" }}>
      <canvas ref={canvasRef} />
    </div>
  );
};

The Dashboard Layout

Create client/src/components/Dashboard.tsx:

import React, { useState } from "react";
import { useSocket } from "../hooks/useSocket";
import { useMetrics } from "../hooks/useMetrics";
import { MetricsChart } from "./MetricsChart";
import { ConnectionBadge } from "./ConnectionBadge";
import { StatCard } from "./StatCard";
import { AlertFeed } from "./AlertFeed";

const SOCKET_URL = process.env.REACT_APP_SOCKET_URL || "http://localhost:4000";

export const Dashboard: React.FC = () => {
  const { socket, status, error } = useSocket(SOCKET_URL);
  const { latest, history, isReceiving } = useMetrics(socket);

  const formatBytes = (bytes: number): string => {
    if (bytes < 1024) return `${bytes} B`;
    if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
    return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
  };

  return (
    <div className="dashboard">
      <header className="dashboard-header">
        <h1>System Monitor</h1>
        <ConnectionBadge status={status} error={error} isReceiving={isReceiving} />
      </header>

      {/* Stat Cards Row */}
      <div className="stat-grid">
        <StatCard
          label="CPU Usage"
          value={latest ? `${latest.cpu.usage.toFixed(1)}%` : ""}
          subtitle={latest ? `${latest.cpu.cores} cores` : ""}
          alert={!!latest && latest.cpu.usage > 80}
        />
        <StatCard
          label="Memory"
          value={latest ? `${latest.memory.usagePercent.toFixed(1)}%` : ""}
          subtitle={
            latest
              ? `${formatBytes(latest.memory.used)} / ${formatBytes(latest.memory.total)}`
              : ""
          }
          alert={!!latest && latest.memory.usagePercent > 85}
        />
        <StatCard
          label="Network In"
          value={latest ? `${(latest.network.rx_sec / 1024).toFixed(1)} KB/s` : ""}
        />
        <StatCard
          label="Network Out"
          value={latest ? `${(latest.network.tx_sec / 1024).toFixed(1)} KB/s` : ""}
        />
      </div>

      {/* Charts */}
      <div className="chart-grid">
        <div className="chart-card">
          <h3>CPU & Memory Usage</h3>
          <MetricsChart
            history={history}
            title="CPU & Memory"
            yAxisLabel="%"
            yMax={100}
            datasets={[
              {
                label: "CPU %",
                dataKey: "cpuUsage",
                color: "#ef4444",
                fill: true,
              },
              {
                label: "Memory %",
                dataKey: "memoryUsage",
                color: "#3b82f6",
                fill: false,
              },
            ]}
          />
        </div>

        <div className="chart-card">
          <h3>Network Throughput (KB/s)</h3>
          <MetricsChart
            history={history}
            title="Network"
            yAxisLabel="KB/s"
            datasets={[
              {
                label: "Inbound",
                dataKey: "networkRx",
                color: "#10b981",
                fill: true,
              },
              {
                label: "Outbound",
                dataKey: "networkTx",
                color: "#f59e0b",
                fill: false,
              },
            ]}
          />
        </div>
      </div>

      <AlertFeed socket={socket} />
    </div>
  );
};

Reconnection Handling in Depth

Poor reconnection handling is the number-one reliability problem in real-time applications. Here's what production code needs to handle:

Exponential Backoff

Socket.io's built-in reconnection uses exponential backoff by default. The parameters matter:

const socket = io(url, {
  reconnectionDelay: 1000,        // start at 1s
  reconnectionDelayMax: 30000,    // never wait more than 30s
  randomizationFactor: 0.5,       // ±50% jitter to stagger reconnects
});

Without jitter (randomizationFactor), all clients that disconnected simultaneously (e.g., during a brief server restart) will reconnect at exactly the same moment, creating a thundering-herd problem.
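To make the schedule concrete, here is an approximation of the delay Socket.io's manager computes (it uses the backo2 package internally; this sketch mirrors the shape, not the exact implementation):

```typescript
// Approximate reconnection delay for a given attempt: exponential growth
// from `base`, capped at `max`, with up to ±(jitter × delay) of randomness.
function reconnectDelay(attempt: number, base = 1000, max = 30000, jitter = 0.5): number {
  const raw = Math.min(base * Math.pow(2, attempt), max);
  const deviation = Math.random() * jitter * raw;
  const delay = Math.random() < 0.5 ? raw - deviation : raw + deviation;
  return Math.min(Math.max(Math.floor(delay), 0), max);
}
```

With the client settings used earlier (base 1s, cap 10s), delays climb roughly 1s, 2s, 4s, 8s, 10s, each nudged by jitter so clients don't reconnect in lockstep.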

Detecting Stale Connections

Network interruptions (mobile switching from WiFi to cellular, NAT timeouts) can silently break WebSocket connections. The server and client will both think they're connected, but no data flows.

Defend against this with heartbeats:

// Server-side: tune ping settings
const io = new Server(httpServer, {
  pingInterval: 10000,  // ping every 10 seconds
  pingTimeout: 5000,    // consider disconnected if no pong within 5 seconds
});
// Client-side: watch engine-level packets. The server sends a "ping"
// packet every pingInterval, so a long gap suggests a stale link.
let lastPingAt = Date.now();
socket.io.engine.on("packet", ({ type }) => {
  if (type === "ping") lastPingAt = Date.now();
});

State Recovery After Reconnect

When a client reconnects after a long absence, it has stale data. Use connection state recovery to handle this gracefully:

// Server: enable connection state recovery
const io = new Server(httpServer, {
  connectionStateRecovery: {
    maxDisconnectionDuration: 2 * 60 * 1000, // 2 minutes
    skipMiddlewares: true,
  }
});

// No extra server-side bookkeeping is needed: with connectionStateRecovery
// enabled, events sent to the socket's rooms during a short disconnect are
// buffered by Socket.io and replayed automatically on reconnect.
// Client: detect if state was recovered
socket.on("connect", () => {
  if (socket.recovered) {
    console.log("Connection recovered — missed events replayed automatically");
  } else {
    console.log("Fresh connection — requesting full snapshot");
    socket.emit("request_snapshot", (data) => {
      // Re-initialize state from snapshot
    });
  }
});

Connection Status Component

// client/src/components/ConnectionBadge.tsx
import React from "react";
import type { ConnectionStatus } from "../hooks/useSocket";

interface Props {
  status: ConnectionStatus;
  error: string | null;
  isReceiving: boolean;
}

const statusConfig: Record<ConnectionStatus, { color: string; label: string }> = {
  connected:    { color: "#10b981", label: "Live" },
  connecting:   { color: "#f59e0b", label: "Connecting..." },
  reconnecting: { color: "#f59e0b", label: "Reconnecting..." },
  disconnected: { color: "#ef4444", label: "Disconnected" },
};

export const ConnectionBadge: React.FC<Props> = ({ status, error, isReceiving }) => {
  const { color, label } = statusConfig[status];
  const stale = status === "connected" && !isReceiving;

  return (
    <div style={{ display: "flex", alignItems: "center", gap: 8 }}>
      <span
        style={{
          width: 10,
          height: 10,
          borderRadius: "50%",
          backgroundColor: stale ? "#f59e0b" : color,
          animation: status === "connected" && isReceiving
            ? "pulse 2s infinite"
            : "none",
        }}
      />
      <span style={{ color, fontSize: 14 }}>
        {stale ? "Stale data" : label}
      </span>
      {error && <span style={{ color: "#ef4444", fontSize: 12 }}>{error}</span>}
    </div>
  );
};

Scaling with the Redis Adapter

A single Node.js process can handle ~10,000 WebSocket connections (depending on message frequency). For anything beyond that — or for zero-downtime deployments — you need multiple server instances.

The problem: Socket.io's io.emit() only broadcasts to clients connected to the current process. With multiple servers behind a load balancer, clients connected to server B won't receive events emitted on server A.

The Redis adapter solves this by using Redis pub/sub as a message bus between server instances.

Setting Up Redis Adapter

// server/src/index.ts
import Redis from "ioredis";
import { createAdapter } from "@socket.io/redis-adapter";

async function setupRedisAdapter() {
  const pubClient = new Redis({
    host: process.env.REDIS_HOST || "localhost",
    port: parseInt(process.env.REDIS_PORT || "6379", 10),
    retryStrategy: (times) => Math.min(times * 50, 2000),
    lazyConnect: true, // connect explicitly below so startup failures are catchable
  });

  // Redis pub/sub needs a dedicated subscriber connection
  const subClient = pubClient.duplicate();

  await Promise.all([
    pubClient.connect(),
    subClient.connect(),
  ]);

  io.adapter(createAdapter(pubClient, subClient));
  console.log("Redis adapter connected");
}

setupRedisAdapter().catch((err) => {
  console.error("Failed to connect Redis adapter:", err);
  console.warn("Running in single-server mode");
});

Why two clients? Redis pub/sub requires a dedicated subscriber connection — a client in subscriber mode can't execute other commands. The adapter needs one client for publishing and one for subscribing.

Sticky Sessions

Even with the Redis adapter, Socket.io's HTTP long-polling handshake requires sticky sessions at the load balancer level: if a session's polling requests land on different servers, the handshake fails before the WebSocket upgrade ever happens. (With a websocket-only transport configured on both client and server, sticky sessions aren't required, but you lose the polling fallback.)

With nginx:

upstream socketio_backend {
  ip_hash;  # sticky sessions by client IP
  server server1:4000;
  server server2:4000;
  server server3:4000;
}

server {
  listen 80;

  location / {
    proxy_pass http://socketio_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}

Broadcasting to Rooms Across Servers

With the Redis adapter, rooms work transparently across servers:

// On server 1: join a room
socket.join("datacenter-us-east");

// On server 2: emit to that room — Redis adapter routes it correctly
io.to("datacenter-us-east").emit("metrics_update", metricsData);

This is how the dashboard supports multiple "rooms" for different data sources — each client subscribes to their relevant room, and servers broadcast to rooms without caring which server each client is connected to.


Memory Leak Prevention

Memory leaks in Socket.io servers are caused by forgetting to clean up resources when clients disconnect. Here's a complete checklist:

interface SocketResources {
  intervals: NodeJS.Timeout[];
  listeners: Array<{ event: string; handler: (...args: any[]) => void }>;
}

const socketResources = new Map<string, SocketResources>();

io.on("connection", (socket) => {
  const resources: SocketResources = { intervals: [], listeners: [] };
  socketResources.set(socket.id, resources);

  // Track all intervals
  const metricsInterval = setInterval(async () => { /* ... */ }, 1000);
  resources.intervals.push(metricsInterval);

  // Clean up EVERYTHING on disconnect
  socket.on("disconnect", () => {
    const res = socketResources.get(socket.id);
    if (res) {
      res.intervals.forEach(clearInterval);
      socketResources.delete(socket.id);
    }

    // Remove all socket-specific listeners
    socket.removeAllListeners();
  });
});

// Monitor for leaks in production
setInterval(() => {
  const activeConnections = io.engine.clientsCount;
  const trackedResources = socketResources.size;

  if (trackedResources > activeConnections * 1.1) {
    console.warn(`Possible resource leak: ${trackedResources} tracked vs ${activeConnections} connections`);
  }
}, 30000);

Testing

Socket.io applications are notoriously difficult to test. Here's a practical approach using Jest:

// server/src/__tests__/socket.test.ts
import { createServer } from "http";
import { Server } from "socket.io";
import { io as ioc, Socket } from "socket.io-client";
import { AddressInfo } from "net";

describe("Socket.io metrics server", () => {
  let io: Server;
  let clientSocket: Socket;
  let port: number;

  beforeAll((done) => {
    const httpServer = createServer();
    io = new Server(httpServer);

    // Assumes the connection handling from earlier has been factored out
    // into a reusable setupSocketHandlers(io) function
    setupSocketHandlers(io);

    httpServer.listen(() => {
      port = (httpServer.address() as AddressInfo).port;
      done();
    });
  });

  beforeEach((done) => {
    clientSocket = ioc(`http://localhost:${port}`, {
      transports: ["websocket"],
    });
    clientSocket.on("connect", done);
  });

  afterEach(() => {
    clientSocket.disconnect();
  });

  afterAll(() => {
    io.close();
  });

  test("emits metrics_update within 1.5 seconds of connecting", (done) => {
    clientSocket.on("metrics_update", (data) => {
      expect(data).toHaveProperty("cpu");
      expect(data).toHaveProperty("memory");
      expect(data.cpu.usage).toBeGreaterThanOrEqual(0);
      expect(data.cpu.usage).toBeLessThanOrEqual(100);
      done();
    });
  }, 2000);

  test("responds to request_snapshot", (done) => {
    clientSocket.emit("request_snapshot", (data) => {
      expect(data).toHaveProperty("timestamp");
      expect(data.timestamp).toBeCloseTo(Date.now(), -3); // within ~500 ms
      done();
    });
  });
});

Production Deployment Checklist

Before going live:

  1. Environment variables: CLIENT_URL, REDIS_HOST, REDIS_PORT, PORT
  2. Rate limiting: Throttle inbound events per socket (e.g., a token bucket checked in middleware) to prevent event flooding
  3. Authentication: Validate tokens in io.use() middleware before accepting connections
  4. Graceful shutdown: Drain connections before process exit
// Graceful shutdown
process.on("SIGTERM", async () => {
  console.log("SIGTERM received — shutting down gracefully");

  io.emit("alert", { level: "info", message: "Server restarting in 5 seconds" });

  // Give clients time to see the message
  await new Promise((resolve) => setTimeout(resolve, 5000));

  io.close(() => {
    // io.close() also closes the underlying HTTP server it is attached to
    process.exit(0);
  });
});
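For item 2 on the checklist, a dependency-free option is a per-socket token bucket consulted before handling each inbound event. The class name and limits below are illustrative, not a library API:

```typescript
// Minimal token bucket: a burst of `capacity` events, refilled at
// `refillPerSec`. Limits (20 burst, 10/sec) are illustrative defaults.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity = 20, private refillPerSec = 10) {
    this.tokens = capacity;
  }

  tryConsume(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, never exceeding capacity
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.refillPerSec
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Keep one bucket per socket.id in a Map, call tryConsume() at the top of each event handler, and disconnect sockets that repeatedly exceed the limit.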

Summary

You now have a complete real-time dashboard architecture:

  • A typed Socket.io server that collects and broadcasts system metrics
  • React hooks that handle connection lifecycle, reconnection, and data history
  • Chart.js integration that updates efficiently without animation overhead
  • The Redis adapter for horizontal scaling
  • Memory leak prevention patterns

The most important insight: real-time applications fail not from the happy path but from edge cases — reconnections, stale connections, memory leaks, and thundering herds after server restarts. The patterns in this article address each of these. Build them in from day one rather than retrofitting them after your first production incident.

The full source code for this project is available on GitHub.
