SEN LLC

SSE Is the Right Answer More Often Than You Think — A Hono + TypeScript Reference Service

Ticker, log tail, progress bar, in-memory pub-sub. Four endpoints, one wire format, zero WebSockets.

📦 GitHub: https://github.com/sen-ltd/sse-ticker

The default answer to "how do I push from server to browser" has quietly become WebSockets. Whenever I see it in a code review I ask the same question: which direction is the data flowing? And nine times out of ten the answer is from the server to the client. Maybe there's a click or two going back — a subscribe message, a pause — but the volume, the frequency, the thing that actually forced the decision, is all one-way.

That's the case SSE was built for. Server-Sent Events is a tiny HTML5 spec with native browser support through EventSource, it rides inside a regular HTTP response, and every reverse proxy, load balancer, auth middleware, and log shipper already understands what it's looking at. The wire format is four fields and a blank line. If your problem is "push notifications", "live log tail", "progress bar for a long job", or "stock ticker", SSE is almost certainly the simpler tool, and the one that won't make your ops team grumble.

I built sse-ticker as a reference service that you can drop into a team's internal dashboard stack tomorrow — ticker, log feed, progress stream, and a minimal in-memory pub-sub, all in about 500 lines of TypeScript on top of Hono + @hono/node-server. This article is the design rationale behind it.

The problem — WebSockets as reflex, not choice

WebSockets solve a real problem: two parties talking to each other, in binary, at full duplex, with very low overhead. For a collaborative editor, a multiplayer game, a chat app, or a real-time trading desk where clients send frequent commands, they're the right tool. But every one of those is bidirectional.

If your client isn't sending anything, or is sending one-off commands that an ordinary HTTP POST would cover perfectly well, then asking for WebSockets means paying all of their costs with none of the benefits:

  • A separate protocol upgrade (Upgrade: websocket), which every reverse proxy in your path has to be configured for. I've personally debugged nginx, HAProxy, Cloudflare, and AWS ALB WebSocket passthrough issues. None of them were fun.
  • A custom message framing. You invent your own envelope (JSON? Protobuf? plain strings?), event types, reconnect logic, acknowledgments. There's no spec help here — you're building an ad-hoc protocol.
  • Authentication on a non-HTTP channel. Your Bearer-token middleware runs on the upgrade request, not on the messages inside it. If an operator wants to force reauthentication mid-stream, you're writing tickets.
  • No HTTP semantics for debugging. curl doesn't really help. Your request logs have one line per connection, not one per message.

SSE throws all of that out. The response is a normal HTTP 200 with Content-Type: text/event-stream. Your auth middleware runs on the opening request. curl -N prints the stream live to your terminal. Your JSON access log captures it. Your WAF can inspect it. EventSource in the browser auto-reconnects with a last-event-id for free.
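On the client side, all of that comes down to a few lines. Here's a minimal sketch of consuming the tick stream — the URL and payload shape follow the curl examples later in this article, not the project's actual frontend code:

```typescript
// Browser-side: subscribe to a named-event SSE stream. Defined but not
// invoked here — call watchTicker('/stream/tick?symbol=BTC&interval=1000')
// from page code.
function watchTicker(url: string) {
  const es = new EventSource(url);

  // EventSource dispatches on the `event:` field, so named events need
  // addEventListener rather than onmessage.
  es.addEventListener('tick', (e) => {
    const { symbol, price } = JSON.parse((e as MessageEvent).data);
    console.log(`${symbol}: ${price}`);
  });

  // Fires on network drops too; the browser reconnects on its own and
  // sends Last-Event-ID so the server can resume from the `id:` field.
  es.onerror = () => console.warn('stream interrupted, reconnecting');

  return es;
}
```

No library, no handshake code, no reconnect logic of your own.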

The tradeoff to name honestly: SSE is one-way only. If your client needs to send to the server, use an ordinary POST — that's literally what sse-ticker's /publish endpoint does. And SSE is text-only; there's no binary path. Those are the two hard limits. For everything else — where "server decides, client watches" is the shape — SSE wins.
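That split looks like this in client code — a sketch of the upstream half over plain fetch. The request body matches the curl example at the end of the article; the response shape is an assumption, not documented API:

```typescript
// Upstream over ordinary HTTP: POST to /publish, no WebSocket needed.
// Defined but not invoked here; assumes a running sse-ticker instance.
async function publish(channel: string, payload: unknown) {
  const res = await fetch('http://localhost:8000/publish', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ channel, payload }),
  });
  if (!res.ok) throw new Error(`publish failed: ${res.status}`);
  return res.json(); // e.g. a delivery count — assumed response shape
}
```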

The wire format, which is genuinely tiny

Here is the entire thing you need to emit to push an event to a browser:

event: tick
id: 42
data: {"symbol":"BTC","price":40123}


Four fields are defined:

  • event: — the named event type. EventSource dispatches on it. Default is message.
  • id: — echoed back by the browser as Last-Event-ID on reconnect. Perfect for resumption.
  • data: — the payload. Multi-line values become multiple data: lines; the client joins them with \n.
  • retry: — how long the browser should wait before reconnecting after a drop, in milliseconds.

Lines starting with : are comments and are discarded by the client. We'll use comments as heartbeats.

Events are separated by a blank line — in wire terms, \n\n. This is the one thing you're most likely to get wrong writing SSE by hand. If the terminator isn't there, the browser buffers forever and never fires your event handler.

Here's the whole encoder from sse-ticker:

export function formatEvent(ev: SseEvent): string {
  const lines: string[] = [];

  if (ev.retry !== undefined) lines.push(`retry: ${ev.retry}`);
  if (ev.event !== undefined) lines.push(`event: ${ev.event}`);
  if (ev.id    !== undefined) lines.push(`id: ${ev.id}`);

  const payload =
    typeof ev.data === 'string' ? ev.data : JSON.stringify(ev.data);

  // Per spec: every newline in `data` becomes a new `data:` field.
  for (const line of payload.split('\n')) {
    lines.push(`data: ${line}`);
  }

  // Trailing blank line terminates the event. Without this the
  // client buffers forever and never fires the handler.
  return lines.join('\n') + '\n\n';
}

That's it. That's the whole format. If you wrote that function in three minutes and plugged it into a streaming HTTP response, you'd have server-to-browser push working. No upgrade handshake, no protocol library, no Socket.IO, no fallback matrix.
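To make that concrete, here's a dependency-free sketch of exactly that: the encoder boiled down to a single data field, plugged into a plain node:http response. This is illustrative, not sse-ticker's code:

```typescript
import http from 'node:http';

// One-field encoder: event name, id, JSON payload, blank-line terminator.
export function encode(event: string, id: number, data: unknown): string {
  return `event: ${event}\nid: ${id}\ndata: ${JSON.stringify(data)}\n\n`;
}

export const server = http.createServer((req, res) => {
  res.writeHead(200, {
    'content-type': 'text/event-stream',
    'cache-control': 'no-cache',
  });

  // Push a tick every second until the client goes away.
  let i = 0;
  const timer = setInterval(() => {
    res.write(encode('tick', i++, { t: Date.now() }));
  }, 1000);

  req.on('close', () => clearInterval(timer));
});

// Run it with server.listen(8000); `curl -N localhost:8000` prints the
// stream live.
```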

The tick route — streaming with Hono

Hono ships a streamSSE helper that handles the response headers and hands you a stream object with write, sleep, close, onAbort, and aborted. That last pair is what makes disconnect handling safe. Here's the ticker route, simplified slightly for the article:

import { streamSSE } from 'hono/streaming';
import { formatEvent } from '../sse.js';
import { inc, dec } from '../connections.js';

app.get('/', async (c) => {
  const { symbol, interval } = parseTickQuery(c);
  c.header('X-Accel-Buffering', 'no'); // nginx: flush immediately

  return streamSSE(c, async (stream) => {
    inc('tick');
    try {
      let price = seedPrice(symbol);
      let i = 0;

      // First event carries the retry directive. Browsers will wait
      // 10 seconds before reconnecting instead of their default of a
      // few seconds (the exact value is implementation-defined).
      await stream.write(
        formatEvent({
          event: 'tick',
          id: String(i++),
          retry: 10_000,
          data: { symbol, price, delta: 0, timestamp: Date.now() },
        }),
      );

      while (!stream.aborted) {
        await stream.sleep(interval);
        if (stream.aborted) break;
        const delta = (Math.random() - 0.5) * price * 0.01;
        price = Math.max(0.01, price + delta);
        await stream.write(
          formatEvent({
            event: 'tick',
            id: String(i++),
            data: { symbol, price, delta, timestamp: Date.now() },
          }),
        );
      }
    } finally {
      // Runs for any reason the generator exits — abort, error, return.
      // The matching connection counter decrement must be here, not after
      // the loop, or a thrown write leaks the count forever.
      dec('tick');
    }
  });
});

Three things to notice.

First, the interval parameter is validated. If you let a client ask for interval=1 you just built yourself a CPU burner. I restrict it to 100–10000 ms with Zod and return a 422 on violations, and that's the end of that class of attack.
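sse-ticker does this with a Zod schema; here's the same guard sketched dependency-free so the shape of the check is visible (the default value is illustrative):

```typescript
// Reject out-of-range intervals instead of honoring them. The route
// turns a null result into a 422 response.
function parseInterval(raw: string | undefined): number | null {
  if (raw === undefined) return 1000; // illustrative default
  const n = Number(raw);
  if (!Number.isInteger(n)) return null;
  if (n < 100 || n > 10_000) return null; // no CPU-burner intervals
  return n;
}
```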

Second, the stream.aborted check sits at the top of the while loop and also after the sleep. Without both, a client that disconnects during the idle window between ticks triggers one extra write against an already-closed socket before the loop exits. Not fatal, but ugly; the double check is cheap insurance.

Third, and most importantly, the connection counter dec('tick') lives in a finally block. This is the disconnect handling trap, and it's the single thing most "my SSE server leaks connections" bug reports boil down to. If you put dec('tick') after the loop, one unexpected throw from stream.write on a half-closed socket leaves the counter off by one forever. Put it in finally and the number always balances. Check with /health after curling for a while: connections.total should settle back to 0.
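The counter module itself can be tiny. Here's a sketch of what an inc/dec pair like the one imported above might look like — the real connections.ts may differ:

```typescript
// Per-kind live-connection counts, the numbers a /health route reports.
const counts = new Map<string, number>();

export function inc(kind: string): void {
  counts.set(kind, (counts.get(kind) ?? 0) + 1);
}

export function dec(kind: string): void {
  // Clamp at zero so a double-dec bug can't report negative connections.
  counts.set(kind, Math.max(0, (counts.get(kind) ?? 0) - 1));
}

export function total(): number {
  let sum = 0;
  for (const n of counts.values()) sum += n;
  return sum;
}
```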

The in-memory pub-sub — minimum viable fan-out

When you need "server decides to push, route it to whoever's listening", you've moved past single-endpoint streaming and into fan-out. You don't need Redis on day one. You need a Map<string, Set<Subscriber>>:

type Subscriber = (payload: unknown) => void;

export class PubSub {
  private readonly channels = new Map<string, Set<Subscriber>>();

  subscribe(channel: string, fn: Subscriber): () => void {
    let set = this.channels.get(channel);
    if (!set) {
      set = new Set();
      this.channels.set(channel, set);
    }
    set.add(fn);
    return () => this.unsubscribe(channel, fn);
  }

  publish(channel: string, payload: unknown): number {
    const set = this.channels.get(channel);
    if (!set) return 0;
    let count = 0;
    for (const fn of set) {
      try {
        fn(payload);
        count += 1;
      } catch {
        // A throwing subscriber must not take down its siblings.
      }
    }
    return count;
  }
  // unsubscribe, stats, subscriberCount…
}

The /stream/channel route subscribes a closure that pushes into a per-connection queue; the matching /publish POST handler calls bus.publish(channel, body.payload) and returns the delivery count. That's a working pub-sub dashboard backend in well under 100 lines.
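The interesting part of that wiring is the per-connection queue: the subscriber callback is synchronous, the stream loop is async, so something has to bridge them. A minimal sketch of such a queue — the name and shape are assumed, not sse-ticker's exact code:

```typescript
// Bridges a synchronous subscriber callback to an async stream loop:
// push() from the PubSub side, await next() from the SSE side.
class AsyncQueue<T> {
  private items: T[] = [];
  private waiters: Array<(v: T) => void> = [];

  push(item: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(item); // a consumer is already awaiting
    else this.items.push(item);
  }

  next(): Promise<T> {
    if (this.items.length > 0) {
      return Promise.resolve(this.items.shift() as T);
    }
    return new Promise((resolve) => this.waiters.push(resolve));
  }
}
```

Inside the route, bus.subscribe(name, (p) => queue.push(p)) feeds it, the stream loop awaits queue.next() and writes each payload as an SSE event, and the unsubscribe function returned by subscribe goes into the same finally block as the counter decrement.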

This is not a broker. It doesn't survive a restart, it doesn't work across replicas, and it holds references to dead subscribers if you forget to unsubscribe. Those are exactly the tradeoffs you want at this tier. When you outgrow them — usually the second time you scale horizontally — replace the class with one that publishes to Redis Streams or NATS. The rest of the service doesn't need to change, because the pub-sub interface is three methods.

The thing to watch: the unsubscribe call MUST run when the client disconnects. It lives in a finally right next to the connection counter dec, because both have the same lifetime. Forget it and every dropped client leaks a closure that transitively pins the entire Hono stream object. With a few hundred concurrent users that's a slow memory leak you'll find at 3 AM.

Heartbeats, the part everyone forgets

Your reference service works great in tests. You deploy it behind nginx. Ten minutes later every idle client is getting disconnected. Why?

Because HTTP reverse proxies have an idle timeout. nginx defaults to 60 seconds. AWS ALB defaults to 60. Cloudflare's is 100. If your stream goes quiet for longer than that, the proxy assumes the connection is dead and closes it. EventSource reconnects, but your users see a flicker every minute.

The fix is the comment frame:

: heartbeat


Clients ignore lines starting with :, so the application sees nothing, but there is traffic on the wire. Fire one every 15 seconds — well under every reasonable proxy timeout — and the connection stays warm forever. In sse-ticker the ticker endpoint emits its own events fast enough that an explicit heartbeat isn't needed, but the helper is there and the log-tail / pub-sub endpoints, which can idle for arbitrarily long, lean on it.
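A heartbeat helper is a handful of lines. Here's a sketch of the idea — sse-ticker's actual helper may differ in shape:

```typescript
// The comment frame: ignored by EventSource, but it's traffic on the
// wire, which is all the proxy's idle timer cares about.
function heartbeatFrame(): string {
  return ': heartbeat\n\n';
}

// Writes a heartbeat on an interval; returns a stop function to call
// from the same finally block that decrements the connection counter.
function startHeartbeat(
  write: (chunk: string) => unknown,
  intervalMs = 15_000,
): () => void {
  const timer = setInterval(() => write(heartbeatFrame()), intervalMs);
  return () => clearInterval(timer);
}
```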

One more proxy gotcha while we're here: nginx buffers responses by default. The first 4 KB of output gets held until the buffer fills or the connection closes, which means your first SSE event lands on the client a long, confusing moment after you expected it. The fix is a response header, X-Accel-Buffering: no, which tells nginx to flush immediately. sse-ticker sets it on every stream route.

Tradeoffs, honestly named

I talked up SSE at the top; here's the fair list of places it's the wrong answer.

  • Bidirectional traffic. If the client is constantly sending to the server — chat, multiplayer, collaborative editing — use WebSockets. SSE forces the upstream to happen over separate POST requests, which is fine for subscribe/unsubscribe but not for 30 messages a second.
  • Binary. SSE is text only. If you're streaming audio, video frames, or protobuf-encoded deltas, use WebSockets (or, honestly, HTTP streaming + chunked transfer with your own framing).
  • Browser connection limit per origin. HTTP/1.1 browsers cap at 6 connections per origin. If your page opens 10 EventSource streams to the same host, four of them will wait in line until one closes. HTTP/2 multiplexes and solves this, so terminate your SSE traffic on an HTTP/2-capable origin — which your CDN very likely already is.
  • Memory per connection. Each live stream pins the closure, the stream object, and whatever state you captured. With an in-memory pub-sub that's also a live Set entry. At a few thousand concurrent clients per Node process this becomes measurable; plan to horizontally scale by then.

None of these invalidate the thesis. They're just the cases where you should reach for something else. For an internal dashboard, a notifications feed, a progress bar UI, or a status page, SSE remains simpler to build, simpler to operate, and simpler to debug.

Try it in 30 seconds

git clone https://github.com/sen-ltd/sse-ticker
cd sse-ticker
docker build -t sse-ticker .
docker run --rm -p 8000:8000 sse-ticker

In another terminal:

# Live ticker
curl -N "http://localhost:8000/stream/tick?symbol=BTC&interval=500"

# Long-running job progress
curl -N "http://localhost:8000/stream/progress?id=build-42"

# Minimal pub-sub — run the subscriber first…
curl -N "http://localhost:8000/stream/channel?name=news" &

# …then publish
curl -X POST -H 'content-type: application/json' \
  -d '{"channel":"news","payload":{"msg":"hello"}}' \
  http://localhost:8000/publish

The code is MIT-licensed and under 500 lines of TypeScript. 31 vitest tests exercise the wire format, the pub-sub, the connection counting, and the streaming endpoints via Hono's app.request() — no network, no sockets, the whole suite runs in ~2 seconds. The Docker image is ~147 MB on node:20-alpine. Copy the pieces you need.

The takeaway I want to leave you with: before you reach for WebSockets on your next "push from server" feature, ask which way the data flows. If it's one way, you almost certainly want the simpler spec that ships with every browser, every proxy, and every curl binary you already have.
