Polliog

Server-Sent Events Beat WebSockets for 95% of Real-Time Apps (Here's Why)

Everyone defaults to WebSockets for real-time features. Most shouldn't.

The reality: 95% of "real-time" applications only need server → client updates. Chat notifications. Live dashboards. Stock tickers. Log streams. AI responses.

WebSockets give you bidirectional communication. But bidirectional comes with a tax: complexity, resource overhead, scaling challenges, debugging nightmares.

Server-Sent Events (SSE) do one thing: stream data from server to client. They do it brilliantly. And for most applications, that's all you need.

Here's why SSE should be your default for real-time features, with real production numbers and honest trade-offs.

The WebSocket Assumption

The conversation usually goes like this:

Developer: "We need real-time updates."
Team: "Use WebSockets."
Developer: "Why?"
Team: "Because they're real-time."

Nobody questions it. WebSockets became the default for anything involving "live" or "real-time."

But here's the uncomfortable truth: bidirectional communication is rarely necessary.

Let's look at what "real-time" actually means in production applications:

Real-Time Features That Don't Need Bidirectional:

Dashboards:

  • Server pushes metrics
  • Client renders charts
  • Updates flow one way: server → client

Notifications:

  • Server sends alerts
  • Client displays them
  • One direction only

Live Feeds:

  • Server streams new items (tweets, posts, events)
  • Client appends to feed
  • Unidirectional

AI Chat (ChatGPT-style):

  • Server streams tokens as they're generated
  • Client displays word-by-word
  • Response flows server → client (user input happens via separate POST)

Stock Tickers:

  • Server pushes price updates
  • Client updates UI
  • One way

Log Streaming:

  • Server tails logs
  • Client displays in real-time
  • Server → client only

Build Status / CI/CD:

  • Server sends progress updates
  • Client shows build steps
  • Unidirectional

Notice a pattern? These are 95% of "real-time" use cases.

Real-Time Features That Actually Need Bidirectional:

Multiplayer Games:

  • Player sends moves
  • Server broadcasts to all players
  • Constant two-way traffic

Collaborative Editing (Google Docs):

  • User edits document
  • Server reconciles changes
  • Broadcasts to all editors
  • High-frequency bidirectional

Video Calls / WebRTC Signaling:

  • Peer discovery
  • ICE candidate exchange
  • Continuous negotiation

Trading Platforms:

  • User places order
  • Server confirms
  • Market updates stream back
  • Both directions simultaneously

These are the 5% of use cases where WebSockets shine.

For everything else? You're paying the WebSocket tax for features you don't use.

What Is Server-Sent Events (SSE)?

SSE is dead simple: HTTP connection that stays open. Server writes to it whenever there's new data.

Protocol:

GET /api/stream HTTP/1.1
Host: example.com
Accept: text/event-stream

HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache

data: {"message": "Hello World"}

data: {"message": "Another update"}

data: {"message": "And another"}


That's it. Plain text over HTTP. No protocol upgrade. No handshake dance.
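The `data:` field isn't the whole format: the spec also defines `event:`, `id:`, and `retry:` fields, and multi-line payloads become repeated `data:` lines. A minimal formatter makes the framing explicit (the `formatSSE` name is mine, not part of any API):

```javascript
// Format one SSE event. Multi-line data becomes repeated `data:` lines;
// a blank line terminates the event.
function formatSSE({ data, event, id, retry }) {
  let out = '';
  if (event) out += `event: ${event}\n`;
  if (id) out += `id: ${id}\n`;
  if (retry) out += `retry: ${retry}\n`;
  for (const line of String(data).split('\n')) {
    out += `data: ${line}\n`;
  }
  return out + '\n';
}

console.log(formatSSE({ id: '42', event: 'update', data: 'hello\nworld' }));
// event: update
// id: 42
// data: hello
// data: world
```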

Client code (JavaScript):

const eventSource = new EventSource('/api/stream');

eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Received:', data);
};

eventSource.onerror = (error) => {
  console.error('Connection error:', error);
  // Browser automatically reconnects
};

Server code (Node.js/Fastify):

fastify.get('/api/stream', (req, reply) => {
  reply.raw.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });

  const interval = setInterval(() => {
    const data = JSON.stringify({ message: 'Update', timestamp: Date.now() });
    reply.raw.write(`data: ${data}\n\n`);
  }, 1000);

  req.raw.on('close', () => {
    clearInterval(interval);
  });
});

10 lines. No libraries needed. Just HTTP.

Compare that to WebSockets:

const ws = new WebSocket('ws://example.com/socket');

ws.onopen = () => {
  console.log('Connected');
  // Now what? Send ping? Subscribe to channels?
};

ws.onmessage = (event) => {
  console.log('Received:', event.data);
};

ws.onerror = (error) => {
  console.error('Error:', error);
  // Manual reconnection logic required
};

ws.onclose = () => {
  console.log('Connection closed');
  // Implement exponential backoff, reconnect...
};

// Send heartbeat to keep connection alive
setInterval(() => {
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(JSON.stringify({ type: 'ping' }));
  }
}, 30000);

Already more complex. And we haven't even handled reconnection, message queueing, or protocol negotiation.

SSE vs WebSocket: The Real Comparison

Let's look at production numbers from real deployments.

Performance Benchmarks (Timeplus, 2024)

Test setup: 100,000 events/second, 10-30 concurrent connections

Results:

| Metric | SSE | WebSocket | Difference |
|---|---|---|---|
| Max throughput | 3M events/sec | 3M events/sec | Tie |
| CPU usage (batch 50) | ~42% | ~40% | SSE +5% |
| Latency (50ms target) | 48ms | 45ms | WS -6% |
| Implementation complexity | 10 lines | 50+ lines | SSE 5x simpler |

Conclusion: Performance is essentially identical. SSE uses slightly more CPU (negligible), WebSocket has slightly lower latency (3ms difference).

For 100k events/second, the difference is irrelevant.

Resource Usage (Production, 2025)

Scenario: Real-time dashboard, 10,000 concurrent connections

SSE:

  • Memory: ~20MB (connection state only)
  • CPU: 15% idle, 35% under load
  • Network: Standard HTTP
  • Scaling: Horizontal (stateless with backplane)

WebSocket:

  • Memory: ~50MB (connection + frame buffers)
  • CPU: 25% idle (ping/pong frames), 45% under load
  • Network: Persistent TCP + WebSocket protocol overhead
  • Scaling: Requires sticky sessions OR message backplane

Why the difference?

SSE is just HTTP. No frame masking, no protocol negotiation, no ping/pong.

WebSocket frames have overhead:

[Frame Header (2-14 bytes)] [Payload]

Every message gets wrapped. Client→server messages are masked (XOR operation = CPU cost).

SSE just writes text:

data: {...}\n\n

No framing. No masking. Minimal CPU.
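The framing cost is easy to measure. For a small JSON payload, SSE adds exactly 8 bytes (`data: ` plus the terminating blank line), a quick sketch:

```javascript
// Per-message framing overhead for a small JSON payload
const payload = JSON.stringify({ price: 101.5 });
const sseFrame = `data: ${payload}\n\n`;

console.log(sseFrame.length - payload.length);  // 8 bytes of SSE overhead
// A WebSocket frame header adds 2-14 bytes, plus a 4-byte mask on
// client-to-server messages (and the XOR masking pass itself).
```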

Latency Deep Dive

Question: If WebSocket is 3ms faster, does it matter?

Answer: Almost never.

Typical application latency budget:

User action: 0ms
↓
Frontend validation: 5ms
↓
Network RTT: 20-100ms (varies by location)
↓
Backend processing: 10-500ms (depends on query)
↓
Database query: 5-50ms
↓
Response render: 10ms
↓
Total: 50-665ms

3ms difference between SSE and WebSocket? Lost in the noise.

When latency matters:

  • Gaming (60 FPS = 16ms budget per frame)
  • Trading (microseconds matter)
  • VoIP/video (jitter sensitive)

For these? Use WebSocket (or UDP-based solutions like WebRTC, WebTransport).

For dashboards, notifications, feeds? 3ms is irrelevant.

Why SSE Wins for Most Applications

1. It's Just HTTP

SSE runs on port 80/443. No special firewall rules. No proxy configuration. It works everywhere HTTP works.

WebSocket requires protocol upgrade:

GET /socket HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==

Some corporate firewalls block Upgrade headers. Some reverse proxies don't support WebSocket. Some CDNs have issues.

SSE just works. It's HTTP. Proxies understand it. CDNs cache it (with proper headers). Load balancers route it.

Real story (from Stack Overflow):

"We deployed WebSockets. Worked perfectly in dev. In production, corporate network blocked ws:// protocol. Spent 2 weeks debugging. Switched to SSE. Worked immediately."

2. Auto-Reconnect Built-In

SSE (EventSource API):

const eventSource = new EventSource('/stream');
// That's it. Browser handles reconnection automatically.

Connection drops? The browser waits a few seconds and retries. You do nothing.
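The reconnect behavior is also under server control: a `retry:` line overrides the browser's reconnect delay, and if you tag events with `id:`, the browser sends the last one back in a `Last-Event-ID` request header so you can replay anything the client missed. Here's a sketch of that replay logic as a pure function (`replayFrames` and the numeric-id buffer are illustrative assumptions, not a standard API):

```javascript
// Build the frames a server would write on (re)connect: a retry hint,
// then every buffered event newer than the client's Last-Event-ID.
function replayFrames(lastEventId, buffer) {
  let out = 'retry: 10000\n\n';  // tell the browser to wait 10s before reconnecting
  const since = lastEventId ? Number(lastEventId) : 0;
  for (const evt of buffer) {
    if (evt.id > since) {
      out += `id: ${evt.id}\ndata: ${JSON.stringify(evt.payload)}\n\n`;
    }
  }
  return out;
}

// In a handler you'd call it with the reconnect header:
//   res.write(replayFrames(req.headers['last-event-id'], buffer));
const buffer = [
  { id: 1, payload: { msg: 'a' } },
  { id: 2, payload: { msg: 'b' } }
];
console.log(replayFrames('1', buffer));
```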

WebSocket:

let ws;
let reconnectAttempts = 0;
const maxReconnectDelay = 30000;

function connect() {
  ws = new WebSocket('ws://example.com/socket');

  ws.onopen = () => {
    reconnectAttempts = 0;
    console.log('Connected');
  };

  ws.onclose = () => {
    const delay = Math.min(1000 * Math.pow(2, reconnectAttempts), maxReconnectDelay);
    reconnectAttempts++;
    console.log(`Reconnecting in ${delay}ms...`);
    setTimeout(connect, delay);
  };

  ws.onerror = (error) => {
    console.error('Error:', error);
    ws.close();
  };
}

connect();

You write the reconnection logic. Exponential backoff. Maximum attempts. Jitter to prevent thundering herd.

Or you use Socket.IO (adds 40KB to bundle) which does this for you.

SSE: 1 line.
WebSocket: 20+ lines OR external library.

3. HTTP/2 Multiplexing

Remember the "6 connection limit" criticism of SSE?

HTTP/1.1: Browsers limit you to 6 connections per domain.

HTTP/2: One TCP connection carrying many concurrent streams via multiplexing.

In 2026, HTTP/2 is everywhere:

  • Chrome: 97% of requests use HTTP/2
  • Production servers: NGINX, Caddy, Cloudflare all default to HTTP/2

SSE over HTTP/2 = no practical connection limit.

A single TCP connection can carry dozens or even hundreds of SSE streams; the real cap is the server's SETTINGS_MAX_CONCURRENT_STREAMS, which commonly defaults to 100 or more. Efficient. Fast. Low overhead.

4. Works with curl (Debugging)

SSE:

curl -N https://api.example.com/stream
data: {"message": "Update 1"}

data: {"message": "Update 2"}

^C

You can debug SSE streams with curl. No special tools. No browser. Just curl.

WebSocket:

# Need wscat or similar
npm install -g wscat
wscat -c ws://example.com/socket
Connected (press CTRL+C to quit)
> {"type": "ping"}
< {"type": "pong"}

Requires special tools. Less straightforward.

5. CDN Friendly

SSE is HTTP. CDNs understand HTTP.

A live stream isn't something you cache, but because SSE is plain HTTP, CDNs and proxies pass it through like any other response. Mark it uncacheable and disable buffering (see the gotchas below):

res.setHeader('Cache-Control', 'no-cache');

Cloudflare, Fastly, and CloudFront all proxy SSE transparently.

WebSocket? Most CDNs treat it as a special case. Some don't support it at all. Configuration is trickier.

Real Production Use Cases

ChatGPT / OpenAI API (2025)

How ChatGPT streams responses:

const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${API_KEY}`
  },
  body: JSON.stringify({
    model: 'gpt-4',
    messages: [...],
    stream: true  // Enable streaming
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  const chunk = decoder.decode(value);
  const lines = chunk.split('\n').filter(line => line.trim());

  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = line.slice(6);
      if (data === '[DONE]') continue;
      const parsed = JSON.parse(data);
      console.log(parsed.choices[0].delta.content);
    }
  }
}

That's SSE. OpenAI streams completions as Server-Sent Events, parsed by hand with fetch because EventSource can't send a POST body.

Why? Unidirectional. User sends prompt via POST. AI streams response. SSE is perfect.
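One caveat about the hand-rolled loop above: network chunks don't respect line boundaries, so a `data:` line can arrive split across two reads, and a per-chunk `split('\n')` will silently mangle it. A buffering parser fixes that (a sketch; `makeSSEParser` is my name, and it handles only `data:` lines):

```javascript
// Incrementally parse `data:` lines from an SSE byte stream.
// Chunks can end mid-line, so keep the unfinished tail in a buffer.
function makeSSEParser(onData) {
  let buffer = '';
  return (chunk) => {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop();  // the last piece may be an unfinished line
    for (const line of lines) {
      if (line.startsWith('data: ')) {
        const data = line.slice(6);
        if (data !== '[DONE]') onData(data);
      }
    }
  };
}

const received = [];
const feed = makeSSEParser((d) => received.push(d));
feed('data: {"a"');                      // partial line: nothing emitted yet
feed(':1}\ndata: {"b":2}\n');            // completes the first event
feed('data: [DONE]\n');
console.log(received);  // [ '{"a":1}', '{"b":2}' ]
```

Feed it each `decoder.decode(value)` result instead of splitting the chunk directly.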

Shopify BFCM Live Map (2022)

Challenge: Black Friday / Cyber Monday live sales visualization. Millions of concurrent users watching real-time sales data.

Solution: SSE

Architecture:

  • Flink processes Kafka stream (sales events)
  • Aggregates data (sales by region, trending products)
  • SSE server pushes updates to frontends
  • Horizontally scaled behind NGINX load balancer

Results:

  • 323 billion events processed in 4 days
  • Millions of concurrent SSE connections
  • Latency <300ms globally
  • Zero WebSocket complexity

Why SSE?

"We only need server-to-client delivery. SSE allowed us to remove client polling entirely and leverage existing HTTP infrastructure."

Split.io Real-Time Feature Flags (2025)

Use case: Feature flag platform. Clients need instant flag updates.

Scale:

  • 1 trillion events per month
  • <300ms average global latency
  • Millions of concurrent connections

Technology: SSE

Why not WebSocket?

"Flags change server-side. Clients just listen. We don't need bidirectional. SSE gives us HTTP simplicity at WebSocket scale."

Personal Project: Real-Time Log Streaming

Scenario: Open-source log management platform. Users watch logs arrive in real-time (think tail -f in the browser).

Requirements:

  • 1000+ concurrent users watching different log streams
  • Sub-50ms latency from log ingestion to browser
  • Deployed on $20/month server

Implementation: PostgreSQL LISTEN/NOTIFY + SSE

// Backend: Listen to Postgres
const pgClient = new Client({ connectionString: DB_URL });
await pgClient.connect();

await pgClient.query(`LISTEN logs_${orgId}`);

pgClient.on('notification', (msg) => {
  const log = JSON.parse(msg.payload);
  // Push to connected SSE clients for this org
  sseManager.broadcast(orgId, log);
});

// SSE endpoint
app.get('/api/logs/stream', (req, reply) => {
  const { orgId } = req.user;

  reply.raw.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'X-Accel-Buffering': 'no'  // Disable nginx buffering
  });

  const clientId = generateId();
  sseManager.addClient(orgId, clientId, reply.raw);

  // Heartbeat every 30s
  const heartbeat = setInterval(() => {
    reply.raw.write(': heartbeat\n\n');
  }, 30000);

  req.raw.on('close', () => {
    clearInterval(heartbeat);
    sseManager.removeClient(orgId, clientId);
  });
});

Results:

  • 1000 concurrent connections on 4 vCPU server
  • p50 latency: 45ms, p95: 120ms
  • CPU usage: ~30%
  • Memory: ~500MB
  • Zero WebSocket libraries. Zero protocol complexity.

Why SSE?

Logs flow server → client. Users don't send logs through the stream. If they upload logs, that's a separate POST request.

SSE is perfect. Simple. Fast. Scales.

When WebSocket Actually Wins

Let's be honest. WebSocket isn't always overkill.

Use WebSocket When:

1. True Bidirectional Communication

Both parties send messages continuously.

Examples:

  • Multiplayer games (constant player input + server updates)
  • Collaborative editing (local edits + remote edits simultaneously)
  • VoIP/video call signaling

2. Low Latency is Critical

Sub-10ms latency required.

Examples:

  • High-frequency trading
  • FPS games (60+ FPS = <16ms budget)
  • Live auctions

3. Binary Data

Sending images, audio, video frames.

WebSocket supports binary:

ws.send(new Uint8Array([1, 2, 3, 4]));

SSE is text-only. You'd have to Base64 encode binary (33% overhead).
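The 33% figure falls straight out of base64: every 3 input bytes become 4 output characters. A quick sanity check in Node:

```javascript
// Base64 maps every 3 input bytes to 4 output characters,
// so binary payloads grow by ~33% when forced through text-only SSE.
const binary = Buffer.alloc(3000, 0xff);    // 3000 raw bytes
const encoded = binary.toString('base64');  // 4000 characters

console.log(binary.length, encoded.length);  // 3000 4000
console.log(((encoded.length / binary.length - 1) * 100).toFixed(1) + '% overhead');  // 33.3% overhead
```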

Use SSE When:

1. Server → Client Only

Data flows one direction.

Examples:

  • Dashboards
  • Notifications
  • Live feeds
  • AI streaming responses
  • Log tailing
  • Stock tickers
  • Build status
  • Analytics

2. Simplicity Matters

No reconnection logic. No frame handling. Just HTTP.

3. Works Everywhere

Corporate firewalls. Proxies. CDNs. HTTP works.

4. Debugging is Important

curl works. Browser DevTools Network tab shows SSE streams clearly.

Production Gotchas (And How to Fix Them)

1. Nginx Buffering

Problem: Nginx buffers responses by default. SSE events get stuck.

Symptom: Events arrive in bursts, not real-time.

Fix:

location /api/stream {
    proxy_pass http://backend;
    proxy_buffering off;
    proxy_set_header Connection '';
    proxy_http_version 1.1;
    chunked_transfer_encoding off;
}

OR set response header:

res.setHeader('X-Accel-Buffering', 'no');

2. Load Balancer Sticky Sessions

Problem: Multiple backend servers. Client connects to Server A. Server A crashes. Client reconnects to Server B. Lost messages.

Solution: Use a message backplane (Redis Pub/Sub, RabbitMQ, Kafka).

Architecture:

Client 1 → LB → Server A ─┐
Client 2 → LB → Server B ─┼→ Redis Pub/Sub
Client 3 → LB → Server C ─┘

Event published → Redis → All servers → All connected clients

Implementation (Node.js + Redis):

const redis = require('redis');

const subscriber = redis.createClient();
const publisher = redis.createClient();

await subscriber.connect();
await publisher.connect();

// Subscribe to channel (node-redis v4 passes the listener to subscribe)
await subscriber.subscribe('updates', (message) => {
  // Broadcast to all SSE clients connected to THIS server
  sseClients.forEach(client => {
    client.write(`data: ${message}\n\n`);
  });
});

// Publish event (from any server)
await publisher.publish('updates', JSON.stringify({ data: 'New event' }));

Now scale horizontally. Add/remove servers. Clients don't care.

3. Heartbeats

Problem: Proxies close idle connections (60-120 seconds typical).

Solution: Send heartbeat every 30 seconds.

const heartbeat = setInterval(() => {
  res.write(': heartbeat\n\n');  // Comment line, ignored by client
}, 30000);

req.on('close', () => clearInterval(heartbeat));

4. Authentication

Problem: EventSource doesn't support custom headers.

Solutions:

A) Use query parameters (careful: tokens in URLs can end up in server logs):

const eventSource = new EventSource(`/stream?token=${authToken}`);

B) Use cookies:

// Server sets cookie on login
res.cookie('auth', token, { httpOnly: true });

// EventSource automatically sends cookies
const eventSource = new EventSource('/stream');

C) Use fetch with a stream reader (fetch can set real headers; the trade-off is hand-rolling reconnection):

const response = await fetch('/stream', {
  headers: { 'Authorization': `Bearer ${authToken}` }
});
// ...then read response.body as in the OpenAI example above

(Embedding credentials in the URL, like `https://token@api.example.com/stream`, is blocked by most modern browsers, so don't rely on it.)

5. Concurrent Connection Limits

Problem: EventSource counts towards browser connection limit (6 per domain on HTTP/1.1).

Solutions:

A) Use HTTP/2 (best option - unlimited streams)

B) Use subdomain sharding:

const shard = userId % 4;
const eventSource = new EventSource(`https://stream${shard}.example.com/events`);

C) Close unused connections:

eventSource.close();  // When no longer needed

Implementation Patterns

Pattern 1: Simple Broadcast

Use case: Send same data to all clients (stock ticker, news feed)

const clients = new Set();

app.get('/stream', (req, reply) => {
  reply.raw.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache'
  });

  clients.add(reply.raw);

  req.raw.on('close', () => {
    clients.delete(reply.raw);
  });
});

// Broadcast function
function broadcast(data) {
  const message = `data: ${JSON.stringify(data)}\n\n`;
  clients.forEach(client => client.write(message));
}

// Example: Update every second
setInterval(() => {
  broadcast({ timestamp: Date.now(), price: Math.random() * 100 });
}, 1000);

Pattern 2: Per-User Streams

Use case: Different data per user (notifications, personalized feeds)

const userClients = new Map();  // userId -> Set of connections

app.get('/stream', (req, reply) => {
  const userId = req.user.id;

  reply.raw.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache'
  });

  if (!userClients.has(userId)) {
    userClients.set(userId, new Set());
  }
  userClients.get(userId).add(reply.raw);

  req.raw.on('close', () => {
    const clients = userClients.get(userId);
    clients.delete(reply.raw);
    if (clients.size === 0) {
      userClients.delete(userId);
    }
  });
});

// Send to specific user
function sendToUser(userId, data) {
  const clients = userClients.get(userId);
  if (!clients) return;

  const message = `data: ${JSON.stringify(data)}\n\n`;
  clients.forEach(client => client.write(message));
}

Pattern 3: Event Types

Use case: Multiple event types on one stream (logs + metrics + alerts)

app.get('/stream', (req, reply) => {
  reply.raw.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache'
  });

  // Named events
  function sendEvent(eventType, data) {
    reply.raw.write(`event: ${eventType}\n`);
    reply.raw.write(`data: ${JSON.stringify(data)}\n\n`);
  }

  sendEvent('log', { level: 'info', message: 'App started' });
  sendEvent('metric', { cpu: 45, memory: 2048 });
  sendEvent('alert', { severity: 'high', message: 'Disk full' });
});

// Client
const eventSource = new EventSource('/stream');

eventSource.addEventListener('log', (e) => {
  console.log('Log:', JSON.parse(e.data));
});

eventSource.addEventListener('metric', (e) => {
  console.log('Metric:', JSON.parse(e.data));
});

eventSource.addEventListener('alert', (e) => {
  console.log('Alert:', JSON.parse(e.data));
});

Migration: WebSocket → SSE

Scenario: You have WebSocket. Want to simplify. How?

Step 1: Identify Communication Pattern

Audit your WebSocket usage:

// What messages does CLIENT send?
ws.send(JSON.stringify({ type: 'subscribe', channel: 'updates' }));
ws.send(JSON.stringify({ type: 'ping' }));

// What messages does SERVER send?
ws.send(JSON.stringify({ type: 'update', data: {...} }));
ws.send(JSON.stringify({ type: 'pong' }));

If client only sends:

  • Subscription/configuration (at connection start)
  • Ping/heartbeat (keepalive)

You can use SSE + HTTP POST.

Step 2: Move Client→Server to HTTP

WebSocket:

ws.send(JSON.stringify({ type: 'subscribe', channel: 'metrics' }));

SSE equivalent:

// Subscribe via query param or POST
const eventSource = new EventSource('/stream?channel=metrics');

// OR
await fetch('/subscribe', {
  method: 'POST',
  body: JSON.stringify({ channel: 'metrics' })
});
const eventSource = new EventSource('/stream');

Step 3: Replace WebSocket Server with SSE

Before (WebSocket):

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 3000 });

wss.on('connection', (ws) => {
  ws.on('message', (message) => {
    // Handle client message
  });

  setInterval(() => {
    ws.send(JSON.stringify({ data: 'update' }));
  }, 1000);
});

After (SSE):

app.get('/stream', (req, reply) => {
  reply.raw.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache'
  });

  const interval = setInterval(() => {
    reply.raw.write(`data: ${JSON.stringify({ data: 'update' })}\n\n`);
  }, 1000);

  req.raw.on('close', () => clearInterval(interval));
});

Step 4: Update Client

Before (WebSocket):

const ws = new WebSocket('ws://localhost:3000');
ws.onmessage = (e) => console.log(JSON.parse(e.data));

After (SSE):

const es = new EventSource('http://localhost:3000/stream');
es.onmessage = (e) => console.log(JSON.parse(e.data));

Result: Simpler code. Same functionality.

The Bottom Line

For 95% of real-time applications, SSE is the better choice.

Why:

  • Simpler (just HTTP)
  • Easier to debug (curl works)
  • Auto-reconnect (built-in)
  • Works everywhere (no proxy issues)
  • Performance equivalent (for most use cases)
  • Scales horizontally (stateless with backplane)

When to use WebSocket:

  • True bidirectional (both sides send frequently)
  • Binary data (audio/video frames)
  • Ultra-low latency required (<10ms)
  • Gaming, collaborative editing, VoIP

Default to SSE. Only reach for WebSocket when you have a specific need for its features.

The best technology isn't the most powerful. It's the simplest one that solves your problem.

For server-to-client streaming, that's Server-Sent Events.


Simple. Reliable. Fast.

Sometimes boring tech is the right tech.

