When Redis Ltd announced the license change in March 2024, I was in the middle of planning a caching layer for a mid-sized SaaS product — four engineers, roughly 80k daily active users, nothing hyperscale but enough that infrastructure decisions have real consequences. My first reaction was basically: this is annoying, but probably fine. I figured the open source community would grumble for a few weeks and move on.
I was wrong. The fork that came out of it — Valkey, now a Linux Foundation project — turned out to be more interesting than I expected.
Here's what I actually learned after running both in staging, migrating one production service, and spending way too many evenings reading GitHub issues.
March 2024: Why the SSPL Switch Was a Bigger Deal Than It Looked
Redis Ltd moved to dual-license Redis under RSAL (Redis Source Available License) and SSPL (Server Side Public License). Both sound fine until you notice that the Open Source Initiative does not consider SSPL an open source license. MongoDB used it first, and the controversy followed them too.
The practical implication: any cloud provider building a managed Redis-compatible service would now need a commercial agreement with Redis Ltd. The old BSD license let them run Redis however they wanted. SSPL did not.
So within weeks, Amazon, Google, Oracle, Ericsson, and others backed a fork. The Linux Foundation accepted it in late March 2024. They called it Valkey — and started from Redis 7.2.4, the last version under the BSD license.
What struck me was how fast this moved. Fork announcement, Linux Foundation acceptance, first release (7.2.5) — all within about two months. Compare that to the years-long drama around other high-profile forks. Someone had clearly been planning for this scenario before it was announced publicly.
The right way to think about Valkey is not "Redis but free." The governance model is genuinely different. Redis Ltd controls Redis. Valkey is steered by a Technical Steering Committee across multiple companies with no single controlling entity. Whether that matters depends on how much you care about vendor lock-in at the infrastructure layer.
Valkey Is Not the Same Project Anymore — and Neither Is Redis
I assumed Valkey would just track Redis feature-for-feature, staying compatible but staying behind. That's not what happened.
By late 2024, Valkey 8.0 shipped with meaningful performance work around I/O threading. Redis had always had a reputation for being single-threaded on commands (even though I/O threading was added in Redis 6.0), and Valkey's team pushed further on that in 8.0. In my own synthetic benchmarks — 8 CPU cores, mostly GET/SET workloads with some sorted set operations — Valkey 8.0 was measurably faster than Redis 7.4 at high connection counts. Maybe 15-20% throughput improvement in the scenarios I tested. Not nothing.
The config that mattered:
```conf
# valkey.conf — enable threaded I/O (Valkey 8.x)
io-threads 4
io-threads-do-reads yes

# io-threads was available in Redis 6.0+ too, but Valkey's default
# behavior and threading model changed in 8.0.
# Check your CPU count: don't set io-threads > (cpu_count - 1)
```
I'm not confident this scales beyond the specific workload I tested — scripting-heavy or transaction-heavy workloads behave differently. But the point stands: Valkey is making its own performance bets now, not just merging Redis commits.
On the Redis side, and this genuinely surprised me: Redis 8 added vector set support and kept iterating on Redis Query Engine (the search/vector work from the old RediSearch module). That's real differentiation. If you're building anything combining caching with vector similarity search, Redis has a more integrated story than Valkey does right now.
By early 2026, the two projects have genuinely diverged. Not dramatically — still about 90% compatible at the command level — but enough that you can't pick one on inertia alone.
The Client Library Situation Nobody Warned Me About
This is where I hit an actual wall.
I was migrating a Node.js service from ElastiCache (which AWS quietly switched to Valkey by default for new clusters in 2024) to self-hosted Valkey for cost reasons. Our client was ioredis, which we'd used for years.
I pushed the config change on a Friday afternoon. Forty minutes later, the service started throwing intermittent connection errors — not enough to page anyone, but enough to show up in our error rate dashboard, which I happened to be watching for a completely unrelated reason. We rolled back. I spent the weekend reading ioredis GitHub issues and found the actual problem buried in a thread from mid-2025.
The issue was how certain client libraries handled CLIENT INFO and HELLO commands during connection setup — version negotiation behavior that had diverged between Valkey 8.x and what ioredis expected based on Redis 7.x behavior. Not exactly a Valkey bug, more of a "the ecosystem assumed Redis forever" problem. The fix existed in an ioredis release that had already shipped, but our lock file had pinned us to an older version.
Check your client library's Valkey compatibility explicitly, and check recent release notes before migrating. "Redis-compatible" does not mean "tested against Valkey 8." Some libraries are explicit about this now; many aren't.
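One cheap defensive check before cutting traffic over: look at what the server actually reports about itself. My assumption here (verify against your build) is that Valkey 8.x returns both a compatibility `redis_version` field and its own `valkey_version` field in `INFO` output. The sample string below is illustrative, not captured from a live server:

```javascript
// Sketch: classify a server from its INFO text before migrating traffic.
// Assumption to verify: Valkey 8.x reports both `redis_version`
// (compatibility) and `valkey_version` fields in INFO.
function detectServerFlavor(infoText) {
  const fields = {};
  for (const line of infoText.split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0 && !line.startsWith("#")) {
      fields[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
    }
  }
  if (fields.valkey_version) {
    return { flavor: "valkey", version: fields.valkey_version };
  }
  if (fields.redis_version) {
    return { flavor: "redis", version: fields.redis_version };
  }
  return { flavor: "unknown", version: null };
}

// Illustrative input (not real server output):
const sample = "# Server\r\nredis_version:7.2.4\r\nvalkey_version:8.0.1\r\n";
console.log(detectServerFlavor(sample)); // { flavor: 'valkey', version: '8.0.1' }
```

With ioredis you'd feed this the result of `client.info("server")` and log or alert when the flavor isn't what your deployment expects.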
The managed service picture is cleaner. AWS MemoryDB and ElastiCache both support Valkey and handle client library abstraction for you. If you're on a managed offering, the migration path is more straightforward than self-hosted.
Where the Managed Services Landscape Settled
The cloud picture shifted more decisively than I expected.
AWS made Valkey the default for new ElastiCache and MemoryDB instances in 2024. You can still choose Redis — they have a commercial agreement with Redis Ltd — but the default changed. That's operationally significant: environments spun up from templates or Terraform modules defaulted to Valkey unless someone explicitly overrode it.
Google Cloud Memorystore offers Valkey as well. Azure still has Azure Cache for Redis, backed by actual Redis under their own commercial arrangement. The big three have split: AWS and GCP leaning Valkey, Azure leaning Redis.
Redis Insight (the GUI tool) works fine with Valkey — RESP protocol is shared, so most tooling in the ecosystem still functions. The divergence shows up in edge cases: specific command behaviors, module support, vector search.
Which brings me to the most important decision factor right now. If you're using Redis modules heavily — RediSearch, RedisJSON, RedisTimeSeries — your decision is mostly already made. Those are Redis Ltd products, not part of Valkey. Community-driven Valkey equivalents are emerging but aren't at the same maturity level yet.
My Actual Recommendation
I thought about hedging this and decided not to.
On managed AWS or GCP: use Valkey. The operational complexity is identical, the defaults are already pointing there, and there's no reason to route Redis Ltd licensing costs through your cloud provider for standard caching workloads. Session storage, rate limiting, pub/sub, leaderboards with sorted sets — Valkey handles all of it, and the performance characteristics are at least comparable, often better.
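To make "Valkey handles all of it" concrete: the rate-limiting pattern we migrated is the classic fixed-window INCR + EXPIRE idiom, which behaves identically on both servers. This sketch swaps in an in-memory stub for the server so the logic runs on its own; in production the same two commands go through ioredis, ideally inside MULTI or a Lua script so the increment and the expiry are atomic:

```javascript
// Sketch: fixed-window rate limiter — the INCR + EXPIRE pattern.
// `store` stands in for the server; with ioredis you'd issue the
// same two commands against Valkey or Redis.
function makeLimiter(store, limit, windowSeconds) {
  return function allow(key, nowMs = Date.now()) {
    const windowKey = `${key}:${Math.floor(nowMs / (windowSeconds * 1000))}`;
    const count = store.incr(windowKey);                      // INCR key
    if (count === 1) store.expire(windowKey, windowSeconds);  // EXPIRE key ttl
    return count <= limit;
  };
}

// Minimal in-memory stand-in for the two commands used above.
function memoryStore() {
  const counts = new Map();
  return {
    incr: (k) => {
      const v = (counts.get(k) || 0) + 1;
      counts.set(k, v);
      return v;
    },
    expire: () => {}, // TTL handling elided in the stub
  };
}

const allow = makeLimiter(memoryStore(), 3, 60);
console.log([1, 2, 3, 4].map(() => allow("user:42", 0)));
// → [ true, true, true, false ]
```

Nothing here touches a command that diverged between the two projects, which is exactly why this class of workload is a safe first migration.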
Self-hosted with heavy Redis Stack module usage: stay on Redis. Valkey doesn't have mature equivalents yet, and the compatibility gap is real if you've built on top of RediSearch or RedisJSON. Watch the Valkey module ecosystem closely — that's where the gap closes or doesn't over the next year or so.
Starting a new project from scratch in 2026? Valkey is my default. The governance model is more stable long-term — Linux Foundation backing means no surprise license changes. The performance work is real. For typical web application use cases, you're not giving anything up.
The one caveat I'd offer: Redis has 15 years of tutorials, Stack Overflow answers, and tribal knowledge behind it. Valkey is two years old. That documentation gap is real, and for smaller teams without deep Redis expertise, it shows. Plan for that.
My team migrated session caching and rate limiting to Valkey 8.x on managed infrastructure. The search service stayed on Redis — we're using Redis Query Engine heavily and I don't want to rewrite that integration. Both are running fine. The migration I thought would take a weekend took about three weeks once we factored in the client library audit, testing, and the Friday rollback incident.
Worth it. The fact that a fork can happen, get Linux Foundation backing, and reach production-grade maturity in two years says something — and it's a better outcome than everyone just swallowing the SSPL change and moving on.