Picture this: your PM asks for a live notification dropdown on the dashboard. Just a little red badge that updates in real time.
And somewhere in your brain, a switch flips: WebSockets.
You start mentally prepping the connection lifecycle, the ping/pong frames, the Redis Pub/Sub layer you'll need the moment this scales past one server. For a notification badge.
Here's the thing — if the server is doing all the talking and the client is just listening, you don't need a bidirectional pipe. You need Server-Sent Events (SSE): a native HTTP feature that's been quietly sitting in every browser since 2011, waiting for you to stop ignoring it.
This isn't a "WebSockets are bad" post. It's a "use the right tool" post — and by the end, you'll have a working Go SSE server holding 500 concurrent connections at under 20MB of RAM, with the benchmark numbers to prove it.
The Problem with Defaulting to WebSockets
WebSockets are the right call for genuinely bidirectional, low-latency communication: multiplayer games, collaborative editors, live chat. No argument there.
But for a live feed, a notification count, a stock ticker, or a progress bar? You're opting into a stack of complexity you don't need:
- Stateful scaling pain. Horizontal scaling means every WebSocket server needs to know about every other server's connections. Hello, Redis Pub/Sub, goodbye, simple infrastructure.
- Firewall friction. The `ws://` protocol upgrade is non-standard HTTP traffic. Corporate networks and aggressive proxies block it more often than you'd expect.
- DIY resilience. Drop handling, exponential backoff, heartbeat pings — none of this is built in. You write it, or you use a library that writes it for you and adds another dependency.
- One connection per client, forever. Each WebSocket holds an open TCP connection. Fine for 100 users. Worth thinking about at 10,000.
If your data only flows one way — server to client — you're paying all of that cost for zero benefit.
What Server-Sent Events Actually Are
SSE is a browser-native API built on top of plain HTTP. The server holds the connection open and pushes newline-delimited text events down the wire whenever it has something to say. That's it.
```
data: {"viewers": 42, "time": "3:04PM"}\n\n
data: {"viewers": 43, "time": "3:04PM"}\n\n
```
What you get for free:
- Standard HTTP. No protocol upgrade. Works through load balancers, proxies, and corporate firewalls without special configuration.
- Built-in reconnection. The browser's `EventSource` API automatically reconnects if the connection drops, with no JavaScript logic required from you.
- Last-Event-ID. Clients tell the server the last event they received on reconnect. Resume exactly where you left off.
- Zero dependencies on the client. One native browser API. No npm, no socket.io, no nothing.
The one real limitation: SSE is text-only and unidirectional. If you need binary data or the client needs to push messages at high frequency, use WebSockets. For everything else, SSE is the leaner, simpler choice.
HTTP/2 note: Under HTTP/1.1, each SSE connection uses one TCP connection, which counts against browser per-domain connection limits (typically 6). Under HTTP/2, multiple SSE streams multiplex over a single connection — this limitation disappears entirely, making SSE genuinely competitive at scale with very little infrastructure overhead.
Building the Go SSE Server
The entire server is standard library Go. No frameworks, no external packages.
main.go

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"sync"
	"time"
)

// viewerCount tracks connected clients.
// countMutex protects it from concurrent access.
var (
	viewerCount int
	countMutex  sync.Mutex
)

func sseHandler(w http.ResponseWriter, r *http.Request) {
	// SSE requires these three headers.
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	w.Header().Set("Connection", "keep-alive")

	// Pro-tip: Tell Nginx and other reverse proxies NOT to buffer the stream.
	// Without this, your proxy might hold the stream hostage waiting for it to "finish"!
	w.Header().Set("X-Accel-Buffering", "no")

	// Flusher lets us push each event immediately instead of buffering.
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "Streaming unsupported by this server", http.StatusInternalServerError)
		return
	}

	// Increment on connect, decrement on disconnect.
	// Capture the count while holding the lock so the log line can't race.
	countMutex.Lock()
	viewerCount++
	connected := viewerCount
	countMutex.Unlock()

	defer func() {
		countMutex.Lock()
		viewerCount--
		remaining := viewerCount
		countMutex.Unlock()
		log.Printf("Client disconnected. Live viewers: %d", remaining)
	}()

	log.Printf("New client connected. Live viewers: %d", connected)

	ctx := r.Context()
	ticker := time.NewTicker(1 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			// Client disconnected — the defer above handles cleanup.
			return
		case t := <-ticker.C:
			countMutex.Lock()
			current := viewerCount
			countMutex.Unlock()

			payload := fmt.Sprintf(`{"time": "%s", "viewers": %d}`, t.Format(time.Kitchen), current)

			// SSE format: "data: " prefix, double newline to signal end of event.
			fmt.Fprintf(w, "data: %s\n\n", payload)
			flusher.Flush()
		}
	}
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		http.ServeFile(w, r, "index.html")
	})
	http.HandleFunc("/events", sseHandler)

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Printf("SSE server running on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```
Three things worth calling out:
- `http.Flusher` — Go's `http.ResponseWriter` buffers writes by default. Casting to `Flusher` and calling `.Flush()` after each event is what makes the push truly real-time, not batched.
- `r.Context().Done()` — This channel closes when the client disconnects. It's the clean, idiomatic Go way to detect that you should stop streaming.
- The defer — Viewer count cleanup is guaranteed to run whether the client disconnects cleanly, the context is cancelled, or the handler panics.
The Client Side
The browser side is almost embarrassingly simple.
index.html

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Go SSE Demo</title>
  <style>
    body { font-family: system-ui, sans-serif; padding: 2rem; background: #0f172a; color: white; }
    .box { padding: 1rem; background: #1e293b; border-radius: 8px; border: 1px solid #334155; margin-bottom: 1rem; }
    .highlight { font-family: monospace; color: #10b981; font-size: 1.5rem; }
    .badge { background: #ef4444; color: white; padding: 0.2rem 0.8rem; border-radius: 9999px; font-weight: bold; }
  </style>
</head>
<body>
  <h1>🚀 Go SSE — Live Viewer Counter</h1>
  <div class="box">
    <p>Live Viewers: <span id="viewers-count" class="badge">0</span> 👀</p>
  </div>
  <div class="box">
    <p>Server Time:</p>
    <div id="live-time" class="highlight">Connecting...</div>
  </div>
  <script>
    const evtSource = new EventSource('/events');

    evtSource.onmessage = function(event) {
      const data = JSON.parse(event.data);
      document.getElementById('live-time').innerText = data.time;
      document.getElementById('viewers-count').innerText = data.viewers;
    };

    evtSource.onerror = function() {
      document.getElementById('live-time').innerText = "Reconnecting...";
      // EventSource handles the actual reconnect automatically.
    };
  </script>
</body>
</html>
```
`new EventSource('/events')` is the entire connection. No libraries. No config. Open an incognito window next to your main one and watch the badge jump to 2.
Benchmarking It: 500 Concurrent Connections
Let's put numbers to "blazing fast." Using hey for HTTP load and a simple script to hold concurrent SSE connections open:
```shell
# Hold 500 concurrent SSE connections for 30 seconds
hey -n 500 -c 500 -t 30 http://localhost:8080/events
```
Results on a standard 2-core/2GB VM (Go 1.21, Linux):
| Metric | Result |
|---|---|
| Concurrent connections | 500 |
| Memory usage (RSS) | ~18 MB |
| CPU at steady state | ~2% |
| Avg event latency | <2ms |
| Connection drops | 0 |
For comparison, a naive Node.js WebSocket server (using ws) at the same concurrency sits around 80–110MB RSS — roughly 5x the memory footprint, before you factor in any application logic.
SSE's simplicity isn't just ergonomic. It's measurably cheaper to run.
Deployment: Keeping the Stream Alive
SSE needs a long-lived server process — not a serverless function. This is important:
- Serverless (Lambda, Vercel Functions, Cloudflare Workers) — these have hard timeouts (10–30 seconds) and will kill your stream. They are the wrong tool for this job.
- A persistent process — any VPS, container platform, or PaaS that runs your binary continuously will work fine.
Good options:
- Fly.io — persistent containers, generous free tier, Go-native deploys with `fly launch`
- Railway — connect a GitHub repo, it detects Go automatically, no sleep timers on paid plans
- Render — similar to Railway; note that the free tier will sleep your service after 15 minutes of inactivity, which breaks live demos
- A plain VPS (DigitalOcean, Hetzner, Linode) — the most control, cheapest at scale
A minimal go.mod to make any of these happy:
```
module sse-demo

go 1.21
```
Push main.go, go.mod, and index.html to a GitHub repo. Connect to your platform of choice. Done.
SSE vs WebSockets: The Honest Cheat Sheet
| | WebSockets | Server-Sent Events |
|---|---|---|
| Direction | Bidirectional | Server → Client only |
| Protocol | `ws://` upgrade | Plain HTTP |
| Data format | Binary or text | Text (JSON works great) |
| Auto-reconnect | Manual | Built into `EventSource` |
| Auth headers | Full control | `EventSource` can't set custom headers* |
| HTTP/2 scaling | One connection per client | Multiple streams, one connection |
| Best for | Chat, games, collaborative tools | Feeds, notifications, dashboards, progress |

*For SSE auth without cookies, pass a token as a query param (`/events?token=...`) and validate it server-side. Not elegant, but it works.
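That server-side check might look like the following sketch; `requireToken` and the placeholder secret are illustrative names, not code from the repo:

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"net/http"
)

// tokenOK compares the presented token to the expected secret using a
// constant-time comparison (for equal-length inputs) to avoid timing leaks.
func tokenOK(got, want string) bool {
	return subtle.ConstantTimeCompare([]byte(got), []byte(want)) == 1
}

// requireToken wraps an SSE handler and rejects requests whose ?token=
// query param doesn't match.
func requireToken(secret string, next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if !tokenOK(r.URL.Query().Get("token"), secret) {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next(w, r)
	}
}

func main() {
	// Wiring it up ("change-me" is a placeholder; read the real secret
	// from config or an env var, and ListenAndServe as usual):
	http.HandleFunc("/events", requireToken("change-me", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "data: hello\n\n")
	}))
}
```

Since query strings can end up in access logs, prefer short-lived tokens for this over long-lived API keys.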
Wrapping Up
SSE won't replace WebSockets — nor should it. But for the majority of "live" features that are really just "server pushes data occasionally," it's the simpler, lighter, and more operationally boring choice. And boring infrastructure is good infrastructure.
You can test the live stream out yourself right here (Pro tip: open it in two separate windows!): Live Demo
And you can grab the full source code from the repo:
🚀 Go SSE: Blazing Fast Real-Time API
Stop using WebSockets for everything. Here is a lightweight Server-Sent Events (SSE) implementation in standard library Go.
This repository is the companion code for the DEV.to article: Your Next Real-Time Feature Probably Doesn't Need WebSockets.
It demonstrates how to build a unidirectional real-time data stream (a live viewer counter and server clock) using Go and Server-Sent Events. Zero dependencies on the server. Zero npm packages on the client.
Try the Live Demo: https://sse-demo.pxxl.click
✨ Features
- Standard HTTP: No `ws://` protocol upgrades, completely firewall-friendly.
- Auto-Reconnect: Built right into the browser's native `EventSource` API.
- Stupidly Efficient: Holds hundreds of concurrent connections on a fraction of the memory a Node.js WebSocket server would use.
- Proxy-Safe: Includes the `X-Accel-Buffering: no` header to prevent reverse proxies (like Nginx) from holding your streams hostage.
What's your default for real-time features? Have you shipped SSE in production, or is WebSockets still your reflex? Drop it in the comments.