If you're running a Node.js or Bun API in production, there's a good chance Redis is somewhere in your stack. And if you've been paying attention to the open source world over the past two years, you know that "just use Redis" isn't as simple as it used to be.
I maintain hitlimit, a rate limiting library for Node.js and Bun. This week I shipped v1.3.0 with first-class Valkey and DragonflyDB support. Here's why, and what the numbers look like.
The Redis Licensing Timeline
For 15 years, Redis was BSD-licensed. Use it however you want, no restrictions. Then things changed fast:
- March 2024 — Redis switches from BSD to dual SSPL/RSAL. The goal: stop cloud providers (AWS, Google) from offering managed Redis without paying. The community reaction is immediate and hostile.
- March 2024 — The Linux Foundation announces Valkey, a fork of Redis 7.2.4 under the BSD license. AWS, Google, Oracle, Ericsson, and Snap back it on day one.
- September 2024 — Percona's survey reports 83% of large enterprises have adopted or are actively exploring Valkey. Over 70% of Redis users cite the licensing change as motivation to seek alternatives.
- April 2025 — A year after the fork, DevClass reports that Redis has lost most of its external contributors.
- May 2025 — Redis 8 adds AGPLv3 as a third license option. Antirez (the original creator) returns to help course-correct.
- 2026 — Valkey reaches v9 with features Redis doesn't have. DragonflyDB, a high-performance Redis-compatible store, continues gaining ground.
The AGPL option is an improvement, but the trust damage was done. Valkey is at v9 with a thriving contributor base, and many teams aren't looking back.
What This Means for Your API
If your rate limiting relies on Redis, you now have licensing questions. Maybe they don't affect you today — AGPL might be fine for your use case. But if you're at a company with a legal team, or you ship software to customers, or you simply prefer not to think about it, Valkey and DragonflyDB eliminate those questions entirely.
Valkey: Linux Foundation project. BSD licensed. Same Redis protocol, same commands, same client libraries. Drop-in replacement.
DragonflyDB: Redis-compatible, optimized for multi-core. BSL licensed. Designed for high-throughput workloads.
Both work with ioredis. Both run the same Lua scripts. Both use the same port. The migration is boring — which is exactly what you want from infrastructure changes.
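If you run the store in a container, the swap is just as mechanical on the infrastructure side. A sketch, assuming the images published by each project (`valkey/valkey` on Docker Hub and Dragonfly's own registry path) — verify the tags against their docs before relying on them:

```shell
# Stop your Redis container and start Valkey on the same port —
# clients keep using the same redis:// URL.
docker run -d --name valkey -p 6379:6379 valkey/valkey:8

# Or DragonflyDB (image path as documented in Dragonfly's install guide):
docker run -d --name dragonfly -p 6379:6379 \
  docker.dragonflydb.io/dragonflydb/dragonfly
```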
The Migration: Three Lines of Code
```javascript
// Before — Redis
import { redisStore } from '@joint-ops/hitlimit/stores/redis'
const store = redisStore({ url: 'redis://localhost:6379' })

// After — Valkey (literally change the import)
import { valkeyStore } from '@joint-ops/hitlimit/stores/valkey'
const store = valkeyStore({ url: 'redis://localhost:6379' })
```
Under the hood, ValkeyStore and DragonflyStore run the exact same Lua pipeline as RedisStore. Same atomic INCR + conditional PEXPIRE in a single round-trip. Zero new logic, zero new failure modes.
```lua
-- The Lua script powering all three stores
local key = KEYS[1]
local windowMs = tonumber(ARGV[1])
local count = redis.call('INCR', key)
local ttl = redis.call('PTTL', key)
if ttl < 0 then
  -- PTTL is negative when the key has no expiry yet: first hit in
  -- a fresh window, so start the clock now
  redis.call('PEXPIRE', key, windowMs)
  ttl = windowMs
end
return {count, ttl}
```
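To make the window semantics concrete, here is the same fixed-window logic as a plain in-memory JavaScript sketch. This is illustrative only — the real stores run the logic server-side in Lua so the increment and expiry are atomic across all your API instances:

```javascript
// In-memory equivalent of the Lua script above: increment a counter,
// and start the window (set the expiry) only on the first hit.
function makeFixedWindow(windowMs) {
  const counters = new Map() // key -> { count, resetAt }
  return function hit(key, now = Date.now()) {
    const entry = counters.get(key)
    if (!entry || now >= entry.resetAt) {
      // Mirrors the PTTL < 0 branch: no live window, start a fresh one
      counters.set(key, { count: 1, resetAt: now + windowMs })
      return { count: 1, ttl: windowMs }
    }
    entry.count += 1
    return { count: entry.count, ttl: entry.resetAt - now }
  }
}

const hit = makeFixedWindow(60_000)
console.log(hit('user:42', 0))      // { count: 1, ttl: 60000 }
console.log(hit('user:42', 1_000))  // { count: 2, ttl: 59000 }
console.log(hit('user:42', 61_000)) // { count: 1, ttl: 60000 } — new window
```

The in-memory version is exactly what the SQLite and memory stores can get away with on a single instance; the Lua version exists so that multiple instances sharing one Redis/Valkey/Dragonfly see one consistent counter.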
The Benchmarks
I benchmark every store, every framework, every release. Here are the v1.3.0 distributed store numbers — the ones that actually matter when you're running multiple API instances behind a load balancer.
Test methodology: 50,000 iterations × 5 runs per scenario, 5,000 warmup iterations discarded. Node.js v24.4.1, Apple M1. All raw JSON results are in the benchmarks directory.
Redis vs Valkey vs DragonflyDB
| Store | ops/sec | Avg Latency | p99 Latency | License |
|---|---|---|---|---|
| Valkey | 6,879 | 145μs | 250μs | BSD |
| Redis | 6,733 | 164μs | 266μs | RSAL/SSPL/AGPL |
| DragonflyDB | 5,861 | 170μs | 272μs | BSL |
| Postgres | 3,491 | 286μs | 608μs | PostgreSQL |
Valkey and Redis are within measurement noise. Same protocol, same Lua scripts, same ioredis client underneath. The bottleneck is network round-trip time — ~150μs over localhost — not the datastore.
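A quick sanity check on that claim: a single sequential client bound by a ~150μs round-trip can't exceed roughly 1s ÷ 150μs operations per second, which lands right where the benchmarks do.

```javascript
// Theoretical throughput ceiling for one sequential client whose
// every operation waits on a ~150μs localhost round-trip.
const rttMicros = 150
const ceiling = Math.round(1_000_000 / rttMicros)
console.log(ceiling) // 6667 — in line with the measured 6.7–6.9k ops/sec
```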
DragonflyDB is slightly slower for this workload because rate limiting uses simple single-key operations where Dragonfly's multi-core architecture doesn't kick in. Still well within production tolerance.
Postgres is roughly half the throughput — but 3,500 ops/sec is still far more than most APIs need, and if you're already running PG, it means zero extra infrastructure.
With Framework Middleware (Node.js)
| Framework | Store | ops/sec | Avg Latency |
|---|---|---|---|
| Express | Redis | 6,188 | 161μs |
| Express | Valkey | ~6,200 | ~160μs |
| Hono | Redis | 6,340 | 157μs |
| Fastify | Redis | 5,272 | 189μs |
Framework overhead is negligible when each operation spends ~150μs on a network round-trip. The store choice matters more than the framework choice at this layer.
Bun Distributed Performance
| Store | ops/sec | Avg Latency |
|---|---|---|
| Valkey | 6,075 | 164μs |
| Redis | 6,984 | 143μs |
| DragonflyDB | 6,008 | 166μs |
Same story on Bun — all three stores are within noise for rate limiting workloads.
Choosing the Right Store
| Store | ops/sec | Distributed? | License | Best for |
|---|---|---|---|---|
| Valkey | 6,879 | Yes | BSD | Multi-instance, no licensing questions |
| Redis | 6,733 | Yes | RSAL/SSPL/AGPL | Multi-instance, if AGPL works for you |
| DragonflyDB | 5,861 | Yes | BSL | Multi-instance, multi-core optimized |
| Postgres | 3,491 | Yes | PostgreSQL | Already running PG, no extra infra |
| SQLite | 418,260 | No | Public domain | Single instance, survives restarts |
Full Example
```javascript
import express from 'express'
import { createLimiter } from '@joint-ops/hitlimit'
import { valkeyStore } from '@joint-ops/hitlimit/stores/valkey'

const app = express()

const limiter = createLimiter({
  store: valkeyStore({ url: 'redis://localhost:6379' }),
  windowMs: 15 * 60 * 1000, // 15 minutes
  limit: 100,
})

// Apply to all routes
app.use(limiter.express())

// Or set different limits per route
const strictLimiter = createLimiter({
  store: valkeyStore({ url: 'redis://localhost:6379' }),
  windowMs: 60_000,
  limit: 10,
})

app.post('/api/login', strictLimiter.express(), (req, res) => {
  // 10 attempts per minute
  res.json({ ok: true })
})

app.get('/api/data', (req, res) => {
  // 100 requests per 15 minutes (from the global limiter)
  res.json({ data: [] })
})

app.listen(3000)
```
Works with Express, Fastify, Hono, NestJS, Bun.serve, and Elysia. Same API across all frameworks.
Why Dedicated Stores Instead of "Just Use the Redis Store"?
You can use the Redis store with Valkey and DragonflyDB — it works perfectly. But dedicated stores give you:
- Clarity — `valkeyStore()` is more descriptive than `redisStore()` pointed at a Valkey instance
- Discoverability — someone searching "valkey rate limiter" on npm actually finds it
- Future-proofing — if Valkey or DragonflyDB ever diverge from the Redis protocol, we have a clean abstraction point
Getting Started
```shell
# Node.js
npm install @joint-ops/hitlimit

# Bun
bun add @joint-ops/hitlimit-bun
```
~8KB core bundle (Node.js) / ~18KB (Bun). Tree-shakeable stores — you only pay for what you import. Zero runtime dependencies. MIT licensed.
GitHub: https://github.com/JointOps/hitlimit-monorepo
Has the Redis licensing situation changed any of your infrastructure decisions? I'd love to hear about it in the comments.