What this is: a small OSS pattern sketch — not a Redis replacement, not a production auth platform. I built it to play with one specific question: "if you only need to push small mutations from one writer to many readers, do you actually need Redis?" Sharing the design and the trade-offs in case the pattern is useful to anyone.
Repo: github.com/as1as1984/sse-edge-auth
## The shape of the problem
The goal here isn't "don't use Redis." It's "what does this problem look like when you strip it down to the minimum pieces?"
A common edge-auth setup has many edge nodes in front of an origin, all needing to agree on things like "is this IP banned?" or "is this JWT revoked?". The default answer is Redis — every edge queries the same shared store.
But notice the asymmetry: mutations are rare, reads are constant. You might revoke a token once a minute; the edge fleet handles thousands of requests per second. Putting a network round trip on every read to keep N nodes in sync feels disproportionate.
One clarification worth making upfront: SSE itself isn't faster than Redis pub/sub — as fanout channels, they're in the same ballpark. The difference shows up on the read path. With Redis, every request pays a network lookup (~0.5–5ms on LAN). With local SQLite, every check is an in-process function call (~0.01–0.1ms). The speed comes from in-process SQLite, not from SSE.
If you frame it as a fanout problem instead of a shared-state problem, two pieces of unexciting tech are a clean fit:
| Need | Choice |
|---|---|
| Push small mutations from one writer to N readers | Server-Sent Events (one-way HTTP stream) |
| Answer reads locally with no network involved | In-process SQLite — every check is a function call |
That's the entire architecture.
## Architecture

```
operator
    |
    | POST /ban/ip
    v
+---------------+
| master server |
+---------------+
        |  GET /events (SSE)
        +-----------+-----------+-----------+
        |           |           |           |
        v           v           v           v
    +-------+   +-------+   +-------+   +-------+
    | edge  |   | edge  |   | edge  |   | edge  |
    |sqlite |   |sqlite |   |sqlite |   |sqlite |
    +---+---+   +---+---+   +---+---+   +---+---+
        |           |           |           |
        +-----------+-> origin <+-----------+
```
Each edge subscribes to the master's SSE stream on startup. When you POST /ban/ip, the master writes the event to an in-memory ring buffer and broadcasts it. Every connected edge applies it to its own local SQLite. From that moment, requests to that IP are rejected by the local auth gate — no remote call.
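A minimal sketch of that edge-side apply step, with a plain `Map` standing in for the local SQLite table so it runs anywhere; `applyEvent`, `isBanned`, and `bannedIps` are illustrative names, not the repo's:

```javascript
// A Map stands in for the edge's local SQLite table in this sketch.
const bannedIps = new Map();

// Each mutation broadcast by the master becomes a local write.
function applyEvent(name, payload) {
  if (name === "ip_banned") {
    bannedIps.set(payload.ip, { reason: payload.reason, at: payload.timestamp });
  }
}

// The local auth gate: an in-process lookup, no network round trip.
function isBanned(ip) {
  return bannedIps.has(ip);
}

applyEvent("ip_banned", { ip: "1.2.3.4", reason: "abuse", timestamp: 1234567890 });
console.log(isBanned("1.2.3.4")); // true
console.log(isBanned("5.6.7.8")); // false
```

In the real repo the write lands in SQLite rather than a `Map`, but the shape is the same: the read path never leaves the process.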
## SSE + Last-Event-ID: the part I find satisfying
The genuinely nice thing about SSE for this pattern is that the resume protocol is already in the spec. Every event has an ID:
```
id: 42
event: ip_banned
data: {"ip": "1.2.3.4", "reason": "abuse", "timestamp": 1234567890}
```
The edge sends the last ID it saw on reconnect:
```
GET /events
Last-Event-ID: 42
```
The master replays everything since. We didn't have to design a catch-up protocol — we just needed a ring buffer.
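Here's a sketch of what that ring buffer can look like; the class and method names are mine, and the capacity mirrors the 10k default mentioned later. An edge whose last ID has already fallen out of the window gets `null` and would need a full resync:

```javascript
// Master-side event log: a bounded buffer with monotonically increasing IDs.
class EventLog {
  constructor(capacity = 10000) {
    this.capacity = capacity;
    this.events = [];
    this.nextId = 1;
  }

  push(event, data) {
    const entry = { id: this.nextId++, event, data };
    this.events.push(entry);
    if (this.events.length > this.capacity) this.events.shift(); // drop oldest
    return entry;
  }

  // Replay everything after lastId, as Last-Event-ID resume requires.
  // Returns null when lastId has fallen out of the buffer window.
  since(lastId) {
    if (this.events.length && lastId < this.events[0].id - 1) return null;
    return this.events.filter((e) => e.id > lastId);
  }
}

const log = new EventLog(3);
log.push("ip_banned", { ip: "1.2.3.4" }); // id 1
log.push("ip_banned", { ip: "5.6.7.8" }); // id 2
console.log(log.since(1).map((e) => e.id)); // [ 2 ]
```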
The same channel carries cache invalidation:
```
event: cache_invalidated
data: {"tags": ["products"], "keys": [], "timestamp": 1234567890}
```
Once you have a reliable fanout channel for one kind of state mutation, adding another kind is a one-line consumer on the edge. Same Last-Event-ID resume, same ordering guarantees.
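One way that "one-line consumer" can look on the edge is a name-to-handler registry; the handler bodies and names here are illustrative, not the repo's:

```javascript
// Record of applied mutations, so the sketch is observable without side effects.
const applied = [];

// Each event kind maps to one handler; a new mutation type is one more entry.
const handlers = {
  ip_banned:         (d) => applied.push(["ban", d.ip]),
  cache_invalidated: (d) => applied.push(["invalidate", d.tags]),
};

function dispatch(name, json) {
  const handler = handlers[name];
  if (handler) handler(JSON.parse(json)); // unknown kinds are ignored
}

dispatch("cache_invalidated", '{"tags":["products"],"keys":[],"timestamp":1234567890}');
console.log(applied); // [ [ 'invalidate', [ 'products' ] ] ]
```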
## Why SSE, not WebSocket
| | SSE | WebSocket |
|---|---|---|
| Direction | server → client | bidirectional |
| Protocol | plain HTTP | HTTP upgrade + framing |
| Reconnect / resume | in the spec | DIY |
| Proxy / LB compatibility | works everywhere HTTP works | sometimes painful |
Traffic in this design is strictly master → edge. WebSocket buys bidirectionality we don't use, and costs complexity we don't want.
## The bit I'm most curious about: a composable cache TTL pipeline
Since edges already see every request, they double as a response cache. Where it gets interesting is how TTL gets decided — as a pipeline of small pure functions:
```javascript
function resolveTTL(ip, baseTTL) {
  let ttl = baseTTL;
  ttl = adjustTTLByFrequency(ip, ttl); // trusted IPs → longer TTL
  ttl = adjustTTLByTime(ttl);          // off-peak → longer, peak → shorter
  return Math.max(0, ttl);
}
```
Each rule lives in its own file:
- `ttl-by-frequency.js` — high-frequency IPs are likely real clients; trust them with a longer TTL. First-seen IPs get a shorter one.
- `ttl-by-time.js` — content changes less off-peak; cache longer overnight, shorter during peak.
- `failure-pattern.js` — N auth failures in a window from the same IP trigger a local auto-ban, written into the same SQLite table the master uses. Edge-local self-healing — no master round trip needed for "I'm being abused right now."
- `lru-eviction.js` — when the cache exceeds `CACHE_MAX_ENTRIES`, oldest-accessed keys are dropped.
Adding a fifth rule means writing one function and one line in `resolveTTL`. The composability matters more to me than any specific rule.
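To make the shape concrete, here is a runnable sketch of two of the rules composed through the pipeline. The thresholds, multipliers, and the frequency store are placeholders of mine; only the composition pattern matches the post (the hour is also pinned here so the example is deterministic):

```javascript
// Stand-in for the per-IP frequency store.
const seenCounts = new Map(); // ip -> request count

function adjustTTLByFrequency(ip, ttl) {
  const count = seenCounts.get(ip) ?? 0;
  if (count === 0) return ttl / 2; // first-seen IP: cache more cautiously
  if (count > 100) return ttl * 2; // high-frequency IP: likely a real client
  return ttl;
}

function adjustTTLByTime(ttl, hour = new Date().getHours()) {
  return hour >= 1 && hour <= 5 ? ttl * 2 : ttl; // off-peak: cache longer
}

function resolveTTL(ip, baseTTL) {
  let ttl = baseTTL;
  ttl = adjustTTLByFrequency(ip, ttl);
  ttl = adjustTTLByTime(ttl, 3); // hour pinned for a deterministic example
  return Math.max(0, ttl);
}

seenCounts.set("10.0.0.1", 500);
console.log(resolveTTL("10.0.0.1", 60)); // 240: trusted IP, off-peak
console.log(resolveTTL("10.0.0.9", 60)); // 60: first-seen, off-peak
```

Each rule stays a pure function of `(input, ttl)`, which is what makes the fifth rule a one-line change.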
## Tag-based invalidation
The origin tags responses:
```
Cache-Control: public, max-age=60
X-Cache-Tags: products, category-3
```
When products change, one call to the master:
```
curl -X POST http://master:4000/invalidate \
  -H 'content-type: application/json' \
  -d '{"tags": ["products"]}'
```
The master broadcasts `cache_invalidated`, and every edge drops matching entries from its local SQLite. Same channel, same resume guarantees as auth state.
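The edge-side reaction can be sketched like this, again with a `Map` standing in for the SQLite-backed cache; the entry shape (tags stored per key) follows the `X-Cache-Tags` example above, and the names are illustrative:

```javascript
// key -> { body, tags }; a Map stands in for the edge's SQLite cache table.
const cache = new Map();

// Drop explicit keys first, then any entry carrying a matching tag.
function invalidate(cache, { tags = [], keys = [] }) {
  for (const key of keys) cache.delete(key);
  for (const [key, entry] of cache) {
    if (entry.tags.some((t) => tags.includes(t))) cache.delete(key);
  }
}

cache.set("/products/3", { body: "...", tags: ["products", "category-3"] });
cache.set("/about",      { body: "...", tags: [] });

invalidate(cache, { tags: ["products"] });
console.log([...cache.keys()]); // [ '/about' ]
```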
## Honest limits
I want to be specific about what this pattern does not give you, because the answer to "do I need Redis?" depends entirely on these:
- The master is a single point of failure for new mutations. If it's down, edges keep serving with last-known state, but you can't ban anyone new. Master HA is not in v0.1.
- An edge offline longer than the ring buffer (10k events by default) can miss intermediate events on reconnect. There's no full-state-pull endpoint yet.
- The cache is in-memory only. Restarting an edge clears it.
- No cluster, no persistence layer, no replication. Real Redis-shaped systems give you those; this pattern explicitly doesn't.
So this fits a fairly narrow shape: small/medium edge fleets, mostly long-lived edges, one master is acceptable as a coordination point, and "edge keeps working with stale state during master outages" is preferable to "everything halts when the shared store is gone."
If your situation needs more than that, you probably do want Redis — or Kafka, or a real distributed consensus system.
## Run it locally
```
git clone https://github.com/as1as1984/sse-edge-auth
cd sse-edge-auth
(cd master && npm install) && (cd edge && npm install)

# master
(cd master && PORT=4000 npm start)

# three edges
(cd edge && PORT=5001 NODE_ID=edge-a ORIGIN_URL=http://localhost:8080 npm start)
(cd edge && PORT=5002 NODE_ID=edge-b ORIGIN_URL=http://localhost:8080 npm start)
(cd edge && PORT=5003 NODE_ID=edge-c ORIGIN_URL=http://localhost:8080 npm start)
```
Try a ban:
```
curl -X POST http://localhost:4000/ban/ip \
  -H 'content-type: application/json' \
  -d '{"ip":"::1","reason":"demo"}'

curl http://localhost:5001/   # 403 ip_banned, same on edges 5002/5003
```
## Current gaps
- No full-state-pull endpoint — an edge that exceeds the ring buffer window can't resync cleanly on reconnect. Still undecided between paginated event replay and a snapshot dump.
- No file-backed SQLite — restarting an edge clears its cache. `better-sqlite3` supports this natively; I just haven't wired it up yet.
- No master HA — a leader/follower setup where followers accept SSE subscriptions and forward writes is needed, but not in v0.1.
- No real-network benchmark — a docker-compose with `tc netem` would tell us much more about this pattern's actual behavior than any localhost numbers could.
Repo: github.com/as1as1984/sse-edge-auth
Stack: Node.js 20+, better-sqlite3, jose, Express
License: MIT