At PersonaCart, we run a creator commerce platform: 50+ pages, 14 languages, multi-tenant SaaS. Our dashboard, order management, analytics, and notifications all needed real-time data.
Like most teams, we started with the obvious approach... polling:
setInterval(() => fetch("/api/orders").then(r => r.json()).then(setOrders), 30000);
Six pages polling every 30 seconds. Per user. Always. Even when literally nothing changed.
The math was ugly: 6 pages × 2 polls/minute (one every 30 seconds) = 12 requests/minute per user. With 100 concurrent users that's 1,200 API calls per minute. And 90%+ of them returned the exact same data. Not great.
We knew we needed Server-Sent Events (SSE). But we run on Go Fiber v3, and that's where things got tricky.
🚫 Every SSE Library Breaks on Fiber
We tried r3labs/sse (1,000+ stars) and tmaxmax/go-sse (500+ stars). Both are built on net/http. Neither works properly with Fiber.
Here's the thing: Fiber v3 is built on fasthttp. When you bridge net/http SSE libraries via fasthttpadaptor, fasthttp.RequestCtx.Done() only fires on server shutdown, not on client disconnect. So every disconnected client becomes a zombie subscriber: goroutines leak, memory leaks, and the hub never shrinks. It was a mess.
This is confirmed in Fiber issues #3307 and #4145. It's an architectural limitation, and it's not going to be fixed.
🛠️ So We Built fibersse
Open-sourced here: github.com/vinod-morya/fibersse
It's the only SSE library built natively for Fiber v3. But we didn't just want "SSE that works on Fiber." We wanted to kill polling entirely.
💡 The Key Insight: Send Signals, Not Data
Instead of pushing full payloads over SSE (which gets expensive fast), we send cache invalidation signals:
hub.Invalidate("orders", order.ID, "created")
The client receives a tiny signal and refetches only what actually changed:
es.addEventListener('invalidate', (e) => {
  const { resource } = JSON.parse(e.data);
  queryClient.invalidateQueries({ queryKey: [resource] });
});
No polling. No stale data. UI updates within 200ms. Honestly it felt like magic the first time we saw it work.
⚡ Event Coalescing — Nobody Else Has This
So picture this: a CSV import fires 10,000 progress events. You really don't want 10k messages hitting the client. fibersse handles this with three priority lanes:
| Priority | Type | Behavior |
|---|---|---|
| P0 — Instant | notifications, errors | Bypass all buffering |
| P1 — Batched | status changes | 2-second window |
| P2 — Coalesced | progress updates | Last-writer-wins per key |
So if progress goes 1% → 2% → ... → 8% in one flush window, the client only receives 8%. That's it.
for i, row := range rows {
    processRow(row)
    hub.Progress("import", importID, tenantID, i+1, len(rows))
}
// 10,000 calls → ~15 client updates
📊 Adaptive Throttling
This one's pretty cool — buffer saturation drives per-connection flush intervals automatically:
| Buffer Utilization | Interval | What Happens |
|---|---|---|
| <10% | 500ms | Fast delivery |
| 10–50% | 2s | Normal mode |
| 50–80% | 4s | Slowing things down |
| >80% | 8s | Backpressure relief |
Mobile users on 3G automatically get fewer updates. Zero config needed.
📈 Results
OK, here are the actual numbers. I still can't believe the difference, tbh:
| Metric | Before | After | Change |
|---|---|---|---|
| API calls/user/min | ~12 | ~0.5 | -96% |
| Time to see new data | 0–30s | <200ms | ~100x faster |
| Server CPU (100 users) | 35% constant | 8% idle | -77% |
| Goroutines | 400+ | ~100 | -75% |
🏎️ Benchmarks
| Operation | Speed | Allocs |
|---|---|---|
| Publish (1000 conns) | 84μs | 20 |
| Topic match | 8ns | 0 |
| Connection send | 19ns | 0 |
| Backpressure drop | 2ns | 0 |
🚀 Quick Start
Getting started is pretty straightforward:
go get github.com/vinod-morya/fibersse@latest
hub := fibersse.New(fibersse.HubConfig{
    OnConnect: func(c fiber.Ctx, conn *fibersse.Connection) error {
        conn.Topics = []string{"orders", "dashboard"}
        conn.Metadata["tenant_id"] = getTenantID(c)
        return nil
    },
})
app.Get("/events", hub.Handler())
// In any handler:
hub.InvalidateForTenant(tenantID, "orders", order.ID, "created")
✅ Full Feature List
Here's everything fibersse ships with:
- Event coalescing (last-writer-wins)
- 3 priority lanes (instant / batched / coalesced)
- NATS-style topic wildcards
- Connection groups (publish by tenant_id, plan, etc.)
- Adaptive throttling
- Built-in JWT + ticket auth
- Prometheus metrics out of the box
- Last-Event-ID replay
- Graceful Kubernetes-style drain
- Batch domain events
~3,500 lines of Go. 39 tests. 42 benchmarks. MIT licensed.
GitHub: github.com/vinod-morya/fibersse
If your Fiber app is still polling, seriously: switch to SSE invalidation. Our server load dropped 77% and the UX improvement was night and day.
Built by Vinod Morya at PersonaCart.