The last three real-time features I shipped were all built on Server-Sent Events. Two of them, I had originally planned to build on WebSockets. The third I had already half-built on WebSockets before I realised the WebSocket part was adding work and removing nothing. I ripped it out and the feature shipped a day earlier.
This is not a takedown of WebSockets. WebSockets are the right answer for plenty of things. The point is that the reflex to reach for WebSockets the moment someone says "real-time" is wrong more often than people admit, and the cost of that wrong choice is paid quietly for the entire lifetime of the feature.
Most of what people call "real-time" is one-way streaming from server to client. Notifications, live counters, progress bars, AI token streams, log tails, dashboard updates, presence indicators. None of these need the client to talk back over the same channel. They need the server to push, the client to listen, and a way to reconnect when the network blips. That shape is what SSE was designed for and what WebSockets are overbuilt for.
The decision is not "which one is better." The decision is "which one is the right shape for my data flow." Once you frame it that way, the answer is usually obvious.
What Each One Actually Is
Server-Sent Events is a one-way streaming protocol that runs over plain HTTP. The client opens a long-lived GET request. The server holds the connection open and writes chunks of text in a specific format. The browser parses each chunk into an event and fires it at an EventSource object. The connection is unidirectional: server pushes, client receives, that is the whole protocol.
data: {"type":"progress","percent":42}
data: {"type":"progress","percent":67}
data: {"type":"complete"}
That is literally the wire format. A blank line separates events. Lines starting with data: carry the payload. Other prefixes (event:, id:, retry:) add metadata. Browsers have been parsing this since 2011.
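To make the framing concrete, here is a minimal sketch of a parser for that wire format. The field prefixes (data:, event:, id:, and the leading-colon comment line) come from the spec; the parseSSE helper itself and its exact trimming behaviour are illustrative, not a spec-complete implementation.

```typescript
// Minimal SSE wire-format parser (sketch). Splits a raw chunk into events
// on blank lines, then reads the field prefixes the spec defines.
type SSEEvent = { event: string; data: string; id?: string };

function parseSSE(raw: string): SSEEvent[] {
  const events: SSEEvent[] = [];
  // A blank line terminates each event.
  for (const block of raw.split("\n\n")) {
    if (!block.trim()) continue;
    const evt: SSEEvent = { event: "message", data: "" };
    const dataLines: string[] = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("data:")) dataLines.push(line.slice(5).trimStart());
      else if (line.startsWith("event:")) evt.event = line.slice(6).trim();
      else if (line.startsWith("id:")) evt.id = line.slice(3).trim();
      else if (line.startsWith(":")) continue; // comment line, often used as a keep-alive ping
    }
    evt.data = dataLines.join("\n"); // multiple data: lines join with newlines
    events.push(evt);
  }
  return events;
}
```

Feeding it the three-event example above yields three message events whose data fields are the JSON payloads, which is exactly what the browser's EventSource does for you.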
WebSocket is a bidirectional protocol that carries text or binary frames. It starts with an HTTP upgrade handshake and then runs its own framing on top of TCP. After the handshake, the connection is no longer HTTP. It is a full-duplex pipe where either side can send messages at any time. The browser exposes it through new WebSocket(url). Most server frameworks rely on a separate WebSocket library (ws for Node, gorilla/websocket for Go) because the protocol does not map cleanly onto the request-response model frameworks are built around.
The first big difference is the connection model. SSE runs on the same HTTP stack as everything else in your app. Your reverse proxy understands it. Your CDN can pass it through. Your load balancer routes it like any other request. WebSockets sit outside the normal HTTP request lifecycle. Most infrastructure handles them with a special path, special timeouts, and special sticky-session rules.
The second big difference is direction. SSE is server-to-client only. If the client needs to send something to the server, it makes a normal HTTP request. WebSockets are bidirectional from the start. If you do not need bidirectionality, that is a feature you are paying for and not using.
The Reflex To Reach For WebSockets
The reason most developers reach for WebSockets first is path dependency. The first big real-time tutorial people encountered (Socket.io chat apps, around 2015) used WebSockets. The mental model of "real-time = WebSockets" got cemented before SSE was widely known. Most React tutorials for live features still start with socket.io-client. Most Stack Overflow answers about "how do I push from server to client" answer with WebSockets even when the question describes a one-way flow.
This is a vibes-driven default, not an evaluated one. The honest version of the comparison for one-way flows is:
- SSE: a single HTTP GET, your existing auth middleware works, your existing logging works, your CDN handles it, automatic reconnection is in the spec, the wire format is human-readable. No new dependencies.
- WebSockets for the same one-way flow: a separate protocol upgrade, sticky-session rules in your load balancer, a separate auth path (cookies often do not work the same way), no automatic reconnection unless you write it, binary framing you have to debug with Wireshark or a browser DevTools panel that is less mature than the network panel.
For a one-way flow, every line of that comparison favours SSE. The only reason to pick WebSockets for a one-way flow is if you already have a WebSocket infrastructure for other reasons and adding SSE would be a second thing to maintain. That is a real reason. It is just rarer than the reflex suggests.
When WebSockets Actually Win
There are flows where SSE is the wrong shape and WebSockets are the right one. The clearest signal is bidirectional, low-latency, high-frequency message exchange. Specifically:
Multiplayer games and collaborative editing. The client sends actions every few milliseconds, the server merges and broadcasts, the client renders. Round-trip latency matters. The overhead of opening a new HTTP request per client action would crush you. WebSockets are the right shape.
Real-time voice or video signaling. The signaling layer for WebRTC negotiates connections through a server, and the negotiation is bidirectional and bursty. WebSockets are the standard tool. Once the WebRTC connection is established, the actual audio/video flows peer-to-peer over a different protocol entirely.
Chat applications with presence and typing indicators. Messages flow both ways. Presence updates flow both ways. Typing indicators are bursty bidirectional events. You can build this on long-polling or SSE plus POST, but it is more code and the latency is worse. WebSockets fit.
Low-latency trading interfaces, live auction bidding, multiplayer drawing tools. Anything where the round-trip latency between a client action and a server-broadcast response matters at the sub-100ms level.
The pattern across all of these is that the client and the server are in conversation. Not a server monologue with the client occasionally interrupting, an actual conversation. If your feature is a conversation, WebSockets earn their cost.
When SSE Actually Wins (Which Is Most Of The Time)
The flows that suit SSE are the ones where the server pushes and the client mostly listens. These cover more product features than people expect:
AI token streaming. The server generates a response from an LLM, streams tokens as they arrive, and the client renders them. The client does not need to interrupt in real time over the same channel. If the user wants to cancel, that is a separate HTTP request. This is the canonical SSE use case in 2026 and the reason the AI SDK in most frameworks defaults to SSE under the hood, not WebSockets. The AI SDK guide covers the framework-specific patterns in more detail.
Progress bars for long-running jobs. The user kicks off an import, a render, a deployment. The server reports progress every second or so until done. One-way push. SSE is the right answer.
Notifications. A bell icon updates when something happens. The client connects on page load. The server pushes events as they happen. The client renders. SSE.
Live counters and dashboards. Visitor counts, sales dashboards, system metrics, social media counters. The server has updates. The client wants them. No back-channel needed beyond normal HTTP requests when the user actually does something.
Log streaming. Tail a server log into a browser. Push lines as they appear. SSE.
Server-driven UI updates in collaborative tools, where one user changes something and other users need to see the change. The clients are the source of changes, but they communicate via normal HTTP POSTs to the server. The server fans the changes back out over SSE. The fan-out direction is one-way.
For all of these, the SSE version of the implementation is shorter, the operational footprint is smaller, and the failure modes are the standard HTTP ones your tooling already understands.
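The fan-out pattern from the collaborative-tools case above can be sketched as a tiny in-memory hub: the SSE GET handler subscribes each connected client, and a plain POST handler broadcasts. The Hub class and its method names are illustrative, not from any particular framework.

```typescript
// Sketch of the POST-in / SSE-out fan-out pattern. The Hub keeps one send
// callback per connected SSE client; a normal POST handler calls
// broadcast() and every open stream receives the event.
type Send = (chunk: string) => void;

class Hub {
  private clients = new Set<Send>();

  // Called from the SSE GET handler when a client connects.
  // Returns an unsubscribe function to call when the stream closes.
  subscribe(send: Send): () => void {
    this.clients.add(send);
    return () => this.clients.delete(send);
  }

  // Called from the POST handler when a user makes a change.
  broadcast(payload: unknown): number {
    const frame = `data: ${JSON.stringify(payload)}\n\n`;
    for (const send of this.clients) send(frame);
    return this.clients.size; // how many clients received the event
  }
}
```

In a real handler, each send callback would enqueue into that client's ReadableStream controller; the hub itself never needs to know it is speaking SSE beyond the frame format.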
The Code Comparison That Made It Click For Me
The clearest way to see the difference is to look at the same feature built both ways. Take a progress bar that streams updates from a long-running job.
The SSE version, server side (Node, any HTTP framework):
export async function GET(req: Request) {
  const stream = new ReadableStream({
    async start(controller) {
      const encoder = new TextEncoder();
      // jobUpdates: your source of progress updates (an async iterable)
      for await (const update of jobUpdates(req)) {
        // SSE framing: a data: line plus a blank line per event
        const data = `data: ${JSON.stringify(update)}\n\n`;
        controller.enqueue(encoder.encode(data));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive', // only meaningful on HTTP/1.1
    },
  });
}
The client side:
const es = new EventSource('/api/job-progress');
es.onmessage = (event) => {
  const update = JSON.parse(event.data);
  setProgress(update.percent);
};
That is the whole feature. The browser handles reconnection automatically. If the network drops for ten seconds, the EventSource reopens and resumes. If the server crashes and restarts, the client reconnects without code.
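The resume behaviour can go further than reconnection: if the server writes id: lines, the browser remembers the last id it saw and sends it back in a Last-Event-ID request header when it reconnects, so the server can replay what was missed. A sketch of the server side of that, with an illustrative in-memory buffer and helper names:

```typescript
// Sketch of resumable SSE using id: lines. The browser sends the last id
// it saw in a Last-Event-ID header on reconnect; the server replays
// everything newer. The buffer shape and helpers are illustrative.
type Stored = { id: number; data: string };

function formatEvent(e: Stored): string {
  return `id: ${e.id}\ndata: ${e.data}\n\n`;
}

// Build the replay chunk for a reconnecting client.
// lastEventId is the raw header value, or null on a fresh connection.
function replayAfter(buffer: Stored[], lastEventId: string | null): string {
  const after = lastEventId === null ? -Infinity : Number(lastEventId);
  return buffer.filter((e) => e.id > after).map(formatEvent).join("");
}
```

A fresh connection (no header) gets the whole buffer; a client that saw id 2 gets only what came after. How long to retain the buffer is a product decision, not a protocol one.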
The WebSocket version is longer in both directions. You set up a WebSocket route on the server (most frameworks need a separate adapter or middleware). The client opens a WebSocket. You send and receive JSON messages. You write reconnection logic because the WebSocket API does not include it. You handle the case where the server restarts and you need to re-authenticate.
const ws = new WebSocket(`wss://${location.host}/api/job-progress`);
ws.onmessage = (event) => {
  const update = JSON.parse(event.data);
  setProgress(update.percent);
};
ws.onclose = () => {
  // reconnect logic here, including backoff and re-auth
};
The reconnect block is the thing that gets you. Most tutorials skip it. Most production WebSocket clients re-implement it. The EventSource API includes it for free.
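For a sense of what that skipped block involves, here is the delay schedule a hand-rolled reconnect loop typically needs: exponential backoff, capped, with jitter so a fleet of clients does not reconnect in lockstep. This is a sketch; the function names and default values are mine, not from any library.

```typescript
// Exponential backoff, capped: attempt 0 -> 500ms, 1 -> 1s, 2 -> 2s,
// doubling until the cap. This is the part of reconnect logic that
// EventSource handles for you (via the retry: field) and WebSocket does not.
function backoffDelay(attempt: number, baseMs = 500, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// "Full jitter": pick uniformly in [0, delay] to avoid thundering herds.
// rand is injectable so the behaviour is testable.
function withJitter(delayMs: number, rand: () => number = Math.random): number {
  return Math.floor(rand() * delayMs);
}
```

The onclose handler then becomes something like setTimeout(connect, withJitter(backoffDelay(attempt++))), plus resetting the attempt counter on a successful open, plus the re-auth step. None of it is hard; all of it is code you did not have to write with SSE.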
Auth, Which Is Where People Get Stuck
Auth on SSE works exactly like auth on any HTTP request. The browser sends cookies. Your middleware reads them. Your handler verifies. The connection is opened. This is the same code path as every other request in your app.
Auth on WebSockets is its own conversation. You can pass cookies during the handshake (which is technically an HTTP upgrade), but many WebSocket libraries do not expose the cookies cleanly to handler code. The common workaround is to send an auth token as the first message after the connection opens, which means there is a brief window where the connection is open but unauthenticated. You have to handle that window correctly or you have a security bug. People get this wrong.
The other workaround is to put the token in the URL as a query parameter, which works but leaks the token into server access logs and is generally a bad idea.
For SSE, you do nothing. The cookie just works. This is one of those quiet wins that does not show up in feature comparisons but saves you a half-day of debugging the first time you try it.
The Connection Limit Trap
There is one specific gotcha with SSE that catches people: over HTTP/1.1, browsers limit concurrent connections per origin to six. If your user has six tabs open and each one holds an SSE connection, the seventh tab cannot make any HTTP requests at all until one of the SSE connections closes. The browser is queuing requests behind the connection limit.
The fix is to use HTTP/2 or HTTP/3 in production. Both protocols multiplex many streams over a single connection, so the six-connection limit does not apply. Vercel, Cloudflare, Fastly, AWS CloudFront all serve HTTP/2 by default. If you are behind a proper edge layer, this is a non-issue.
If you are running SSE through a self-hosted reverse proxy on HTTP/1.1, this can bite you, and the symptom is "the seventh tab is broken." Always check your edge configuration. HTTP/2 fixes the entire class of problem.
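If that self-hosted proxy is nginx, the directives that commonly matter for SSE look roughly like this. This is a sketch to adapt, not a drop-in config; the upstream name and timeout values are placeholders.

```nginx
# Serve HTTP/2 at the edge so the six-connection limit does not apply.
server {
    listen 443 ssl http2;

    location /api/ {
        proxy_pass http://app_upstream;   # placeholder upstream name
        proxy_http_version 1.1;
        proxy_set_header Connection '';   # drop the hop-by-hop header upstream

        # The SSE-specific part: do not buffer the stream, and do not kill
        # a quiet long-lived connection after the default 60s read timeout.
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 1h;            # placeholder; size to your keep-alive pings
    }
}
```

The buffering directive is the one people miss: with buffering on, nginx happily absorbs your events and delivers them in one burst whenever it feels like flushing, which looks exactly like "SSE is broken" from the client.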
WebSockets do not hit this limit because a WebSocket connection does not come out of the browser's HTTP/1.1 connection pool; browsers cap WebSocket connections separately, at a much higher number. This is a real WebSocket advantage in the narrow case where you have to run on HTTP/1.1 for some reason, but in 2026 there is almost no reason you have to.
What About Long Polling
Long polling is the original "real-time over HTTP" pattern. The client opens a request. The server holds it open until there is data, then responds. The client immediately opens another request. The loop continues forever.
Long polling works, but in 2026 there is no reason to start with it. SSE is the modern version of the same idea, with a defined wire format, automatic reconnection, and ten lines less code on both sides. The only reason to reach for long polling is if you are stuck behind a proxy that strips streaming responses, which is rare and usually fixable.
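For completeness, the long-polling loop described above looks like this. The pollOnce function is injected so the shape is visible and testable; in a browser it would be a fetch() the server holds open until data arrives or a timeout fires. All names here are illustrative.

```typescript
// Sketch of a long-polling client loop. The server holds each request open
// until there is data (or a timeout, represented as null), and the client
// immediately polls again. rounds bounds the loop so the sketch terminates.
type Poll = () => Promise<string | null>;

async function longPoll(pollOnce: Poll, onData: (d: string) => void, rounds: number) {
  for (let i = 0; i < rounds; i++) {
    const data = await pollOnce(); // blocks until the server responds
    if (data !== null) onData(data); // null = empty timeout response, just re-poll
  }
}
```

Compare this with the five-line EventSource client earlier: the loop, the timeout convention, and the re-request are all things SSE's framing makes unnecessary.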
I have not built a long-polling feature on purpose in five years. It is the kind of thing you only choose when something else is broken.
Frameworks Are Quietly Choosing For You
The framework you are building on probably has an opinion. Next.js, Astro, SvelteKit, and most modern React frameworks support SSE through their normal response APIs without any special configuration. You return a ReadableStream with the right headers and it works. The same frameworks need extra adapters or third-party libraries to support WebSockets, especially in serverless deploys.
This is not an accident. SSE fits the request-response model that frameworks are built around. The handler returns a response, the response is a stream, the stream stays open for a while. WebSockets break that model. They need a long-lived connection that is not tied to a single request, which most serverless platforms do not support cleanly.
If you are deploying to Vercel, Cloudflare Workers, AWS Lambda, or any other function-based platform, SSE works in their default execution model. WebSockets need either a workaround (Cloudflare Durable Objects, Lambda WebSocket API Gateway) or a separate long-lived server. The infrastructure cost of WebSockets in a serverless world is real.
For long-lived servers (Fly.io, Railway, Render, a VM you manage), WebSockets are fine. The infrastructure cost goes away. The cost is now operational: keeping a server alive for the duration of every connected client.
The platform-aware version of the decision: if you are on serverless, default to SSE unless you have a bidirectional flow that requires WebSockets. If you are on long-lived servers, the choice is more open and you can pick based on the data flow alone.
What I Run In Production
The pattern I have landed on for most real-time features:
The server uses SSE for any one-way flow: notifications, progress, AI streams, dashboards, logs. The handler returns a ReadableStream with Content-Type: text/event-stream. Auth runs through the normal middleware. Reconnection is the browser's problem.
The client uses EventSource for SSE, with a tiny wrapper that handles typed events and JSON parsing. The wrapper is about thirty lines of code and replaces a much larger WebSocket client library.
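The core of that wrapper is nothing more than JSON parsing plus dispatch by event name; the rest is wiring it to es.addEventListener for each key. A sketch of that core, with illustrative names (my wrapper is not published, so treat this as the shape, not the artifact):

```typescript
// The core of a typed EventSource wrapper: parse event.data as JSON and
// dispatch to a handler keyed by the SSE event name. The real wrapper
// registers this for each key via es.addEventListener(name, ...).
type Handlers = Record<string, (payload: unknown) => void>;

function dispatch(handlers: Handlers, eventName: string, rawData: string): boolean {
  const handler = handlers[eventName];
  if (!handler) return false; // unknown event type: ignore quietly
  let payload: unknown;
  try {
    payload = JSON.parse(rawData);
  } catch {
    return false; // malformed JSON: drop the event rather than throw mid-stream
  }
  handler(payload);
  return true;
}
```

The swallow-don't-throw choice is deliberate: one malformed event should not tear down a handler that a long-lived stream depends on.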
For the rare bidirectional features (a collaborative editor I built last year, a live drawing tool, an internal admin tool with two-way command-and-control), I reach for WebSockets and accept the cost. I run those on a long-lived Fly.io server, separate from the rest of the app. The separation is deliberate: keeping WebSocket complexity out of the main serverless app is worth one extra deployment target.
A small amount of glue connects the two. The SSE-served frontend can send commands to the server via normal HTTP POST. The server fans the commands out to the connected SSE clients. This pattern (POST in, SSE out) covers more "bidirectional" features than you would expect, because most of those features are actually one-way fan-out with occasional user-initiated commands.
The result is a real-time stack that is mostly boring HTTP, with a small WebSocket surface for the cases that genuinely need it. The boring HTTP part is the part that scales, deploys, and debugs cleanly. The WebSocket part is the part that needs care.
What I Would Tell You If You Asked
The question is not "which protocol is better." The question is "what shape is my data flow."
If your data flow is one-way from server to client, with the client only occasionally needing to send commands back, use SSE. Use POST for the commands. You will write less code, deploy on more platforms, and your auth will already work.
If your data flow is genuinely bidirectional, with the client and server in continuous conversation, use WebSockets. Accept that you are taking on a deployment burden and a new failure mode in exchange for the flexibility you actually need.
If you are not sure which one you have, look at how many messages per second the client sends versus the server. If it is one-to-many in the server's favour, you have a one-way flow with a back-channel. SSE. If it is more balanced, you have a conversation. WebSockets.
The mistake I made for years was treating "real-time" as a synonym for "WebSockets" and reaching for the heavier option by reflex. The mistake cost me real engineering time, real infrastructure complexity, and real debugging hours that I would not have spent if I had picked the simpler protocol. SSE is not new. It is just quiet, and quiet things lose to loud things in tutorials and frameworks even when they are the better tool.
The broader version of this lesson is that the framework defaults are usually right, and the boring HTTP-shaped answer is usually right, and the moment you find yourself adding a new dependency to solve a problem the platform already solved you should pause and check. The "stop obsessing about the perfect stack" post is the same instinct applied to a different question. The instinct generalises: pick the tool that matches the shape of the problem, not the tool that pattern-matched to the buzzword.
For real-time in 2026, the shape of most problems is one-way. The tool that matches is SSE. Use it without apology.