daniel jeong

Posted on • Originally published at manoit.co.kr

Complete Guide to Valkey 8.1 vs Redis 8.0 — The 2026 In-Memory Datastore Fork War and Production Migration Strategy


In March 2024, Redis Ltd. switched its open-source license from BSD-3-Clause to SSPL/RSALv2 dual licensing, sending shockwaves through the open-source community. The Linux Foundation immediately launched the Valkey project, with approximately 50 companies — including AWS, Google Cloud, Oracle, Alibaba, and Ericsson — joining as contributors. As of April 2026, Valkey 8.1 delivers 37% higher throughput and roughly 22% lower memory usage than Redis 8.0, while AWS ElastiCache and GCP Memorystore have made Valkey their default engine.

Meanwhile, Redis added AGPLv3 as a third licensing option in version 8.0, attempting an open-source reconciliation, and counterattacked with Vector Set — a new data type for the AI era. This guide provides a comprehensive, production-focused comparison of both sides: architecture differences, performance benchmarks, licensing strategies, cloud ecosystem landscape, and migration strategies.

Background — Why Valkey Was Born

The Redis License Change and Community Split

Redis operated under BSD-3-Clause for 15 years, becoming the de facto standard for in-memory datastores. In March 2024, Redis Ltd. switched to SSPLv1/RSALv2 dual licensing, citing insufficient contributions from cloud providers who offered Redis as a managed service. Neither license is OSI-approved, effectively restricting managed service offerings.

The Linux Foundation launched Valkey in late March 2024, forking from Redis 7.2.4. AWS, Google Cloud, Oracle, Snap, and Ericsson joined as initial contributors, with former Redis core developers forming the technical leadership committee. The name "Valkey" combines "value" and "key" — the fundamental concepts of a datastore.

Redis AGPLv3 Retreat — A Reconciliation Gesture Too Late?

In May 2025, alongside Redis 8.0 GA, Redis added AGPLv3 as a third licensing option. Users can now choose from SSPLv1, RSALv2, or the OSI-approved AGPLv3. However, by this point the Valkey ecosystem was already firmly established. AGPLv3 requires source disclosure for all modifications when providing network services, creating higher adoption barriers for enterprises compared to Valkey's BSD-3-Clause.

Deep Architecture Comparison — I/O Threading and Memory Optimization

Valkey 8.1's I/O Threading Innovation

The enhanced async I/O threading introduced in Valkey 8.0 represents the biggest architectural change since the fork. While maintaining the single-threaded command processing model for atomicity, network reads/writes are offloaded to an I/O thread pool. Valkey intelligently distributes I/O tasks across multiple cores based on real-time usage analysis, maximizing hardware utilization.

Valkey 8.1 takes this further. TLS negotiation offloading moved to I/O threads, improving new connection acceptance speed by approximately 300% in TLS-enabled environments. Replication stream writes on primary nodes are also offloaded to the I/O thread pool, making diskless replication with TLS up to 18% faster. Memory prefetching techniques preload hashtable buckets and elements into CPU cache during key iteration, significantly improving SCAN command family performance.
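In practice, the I/O thread pool is sized via the server config. A minimal sketch of a valkey.conf fragment, using the Redis-7-era directive names that Valkey retains (the thread count is illustrative and should be tuned to your core count, not taken as a recommendation):

```conf
# valkey.conf: illustrative sizing, tune per host
io-threads 7               # commonly set to (physical cores - 1)
io-threads-do-reads yes    # offload reads as well as writes to the I/O pool
```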

Memory Efficiency — Hashtable Redesign

Valkey 8.x's hashtable redesign cuts per-key-value-pair memory overhead, saving approximately 20 bytes per key without TTL and up to 30 bytes per key with TTL. At hyperscale, the difference is dramatic:

| Key Count | Valkey 8.1 | Redis 8.0 | Difference |
| --- | --- | --- | --- |
| 1M | ~80 MB | ~100 MB | -20% |
| 10M | ~760 MB | ~980 MB | -22% |
| 50M | 3.77 GB | 4.83 GB | -22% (1.06 GB) |

At 50 million keys, Valkey saves 1.06 GB compared to Redis. For environments running hundreds of replicas, the infrastructure cost savings are substantial.
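The savings scale linearly with key count. A quick back-of-the-envelope check using the ~20-byte-per-key figure above (the replica count is a hypothetical fleet size, and awk handles the arithmetic):

```shell
keys=50000000      # 50M keys
bytes_per_key=20   # approximate per-key overhead saved (no-TTL keys)
replicas=100       # hypothetical fleet size

awk -v k="$keys" -v b="$bytes_per_key" -v r="$replicas" 'BEGIN {
  gib = k * b / (1024 ^ 3)
  printf "Per node: %.2f GiB saved\n", gib
  printf "Fleet of %d nodes: %.0f GiB saved\n", r, gib * r
}'
# Per node: 0.93 GiB saved
# Fleet of 100 nodes: 93 GiB saved
```

The ~0.93 GiB estimate lands slightly below the 1.06 GB measured above because measured totals also capture allocator and metadata effects beyond the flat per-key overhead.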

Redis 8.0's Vector Set — The AI Era Differentiator

Redis 8.0's most powerful differentiator is the Vector Set data type. Inspired by Sorted Sets, it natively supports high-dimensional vector embedding storage and similarity search. This enables semantic search, recommendation systems, and RAG (Retrieval-Augmented Generation) pipelines without a separate vector database.

Key Vector Set capabilities:

| Feature | Description |
| --- | --- |
| Auto Quantization | Default 8-bit quantization, with no-quantization and binary quantization options |
| Dimension Reduction | Automatic dimension reduction via random projection |
| Attribute Filtering | JSON blob metadata association for filtered searches |
| HNSW Index | High-performance approximate nearest neighbor search |

Note that Vector Set is currently in beta, and its API may change. Unless your use case specifically requires combining caching with vector similarity search in a single infrastructure, dedicated vector databases (Weaviate, Milvus, etc.) remain the more mature choice.
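For a sense of the developer experience, here is a hedged redis-cli sketch against a Redis 8.0+ server; the key name, vectors, and attributes are invented for illustration, and since Vector Set is beta, the command and filter syntax may change:

```shell
# Add two 3-dimensional vectors with JSON attributes attached
redis-cli VADD products VALUES 3 0.1 0.8 0.3 "item:1" SETATTR '{"category":"shoes"}'
redis-cli VADD products VALUES 3 0.2 0.7 0.4 "item:2" SETATTR '{"category":"bags"}'

# Approximate nearest-neighbor query, filtered on the JSON attribute
redis-cli VSIM products VALUES 3 0.1 0.8 0.3 COUNT 5 WITHSCORES FILTER '.category == "shoes"'
```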

Performance Benchmarks — The Gap in Numbers

Independent benchmarks conducted by Momento on AWS c8g.2xlarge (8 vCPU) instances clearly demonstrate the performance gap:

| Metric | Valkey 8.1.1 | Redis 8.0 | Valkey Advantage |
| --- | --- | --- | --- |
| SET RPS | 999,800 | 729,400 | +37% |
| SET p99 Latency | 0.80 ms | 0.99 ms | -19% |
| GET RPS | ~1,050,000 | ~860,000 | +22% |
| Memory (50M keys) | 3.77 GB | 4.83 GB | -22% |
| TLS Connection Accept | — | — | +300% (vs. Valkey 8.0 baseline) |

Valkey's I/O threading architecture shows a clear advantage in multi-core environments. The 300% TLS connection acceptance improvement makes a real difference in production environments with TLS enabled. Pipeline workloads also see an additional 10% throughput improvement in Valkey 8.1 over 8.0.
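These numbers are worth sanity-checking on your own hardware. A sketch using valkey-benchmark, the tool bundled with Valkey whose flags mirror redis-benchmark (the host name, operation counts, and thread values here are illustrative, not tuned recommendations):

```shell
# Throughput: 1M SET/GET ops, 50 clients, 8 benchmark threads, pipeline depth 16
valkey-benchmark -h valkey-host -p 6379 \
  -t set,get -n 1000000 -c 50 --threads 8 -P 16 -d 100

# Latency for unpipelined traffic (drop -P to measure per-request round trips)
valkey-benchmark -h valkey-host -t set -n 1000000 -c 50 --threads 8
```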

Licensing Strategy and Governance — Who Is True Open Source?

| Aspect | Valkey | Redis |
| --- | --- | --- |
| License | BSD-3-Clause | SSPLv1 / RSALv2 / AGPLv3 (triple) |
| OSI Approved | BSD-3 is OSI-approved | Only AGPLv3 is OSI-approved |
| Governance | Linux Foundation (neutral) | Redis Ltd. (commercial entity) |
| Contributors | ~50 companies (AWS, GCP, Oracle, etc.) | Redis Ltd.-centric |
| GitHub Stars | 19,800+ | 68,000+ (15 years accumulated) |
| Managed Service | No restrictions | AGPLv3 requires source disclosure |

From an enterprise perspective, the most critical difference is license risk. Valkey's BSD-3-Clause has virtually no commercial use restrictions. Redis's AGPLv3, while OSI-approved, requires modified source disclosure for network services, necessitating legal review. Choosing SSPL/RSALv2 means you're using source-available, not open-source, licenses.

Cloud Ecosystem Landscape — Big Three Choices

As of April 2026, major cloud providers' in-memory datastore strategies have clearly diverged:

| Cloud | Service | Default Engine | Notes |
| --- | --- | --- | --- |
| AWS | ElastiCache / MemoryDB | Valkey | Serverless 33% cheaper, node-based 20% cheaper |
| GCP | Memorystore | Valkey | Up to 14.5 TB, multi-zone clusters |
| Azure | Azure Managed Redis | Redis | Azure Cache for Redis retiring |
| Akamai | Managed Valkey | Valkey | Switched in 2024 |
| Oracle | OCI Cache | Valkey | Linux Foundation contributor |

AWS's Valkey-based ElastiCache Serverless is 33% cheaper than Redis OSS, with node-based options 20% cheaper. Excluding Azure, Valkey has become the de facto standard engine for multi-cloud strategies.

Feature Comparison — Where They Diverge

| Feature | Valkey 8.1 | Redis 8.0+ |
| --- | --- | --- |
| I/O Threading | Enhanced async I/O | Basic I/O threading |
| Vector Search | Not supported (external) | Vector Set (native) |
| Full-text Search | Not supported | Redis Query Engine |
| Time Series | Not supported | RedisTimeSeries |
| JSON Support | Basic support | RedisJSON (native) |
| Probabilistic Data Structures | Basic support | RedisBloom (integrated) |
| Client Compatibility | 100% Redis client compatible | Native |

Redis's strength lies in its unified data platform strategy — caching + vector search + full-text search + time series in a single infrastructure. Valkey focuses on pure caching/session/message queue workloads, delegating vector and full-text search to dedicated solutions (Weaviate, Elasticsearch, etc.).

Production Migration Guide

Redis to Valkey Migration

Valkey maintains full API compatibility with Redis OSS 7.2. Existing Redis clients (ioredis, redis-py, Jedis, StackExchange.Redis) work without code changes. Migration follows three steps:

Step 1: Compatibility Verification

```shell
# Check current Redis version
redis-cli INFO server | grep redis_version

# Test compatibility with Valkey container
docker run -d --name valkey-test -p 6380:6379 valkey/valkey:8.1

# Test existing client against Valkey
redis-cli -p 6380 PING
# Response: PONG

# Run existing application integration tests
REDIS_URL=redis://localhost:6380 npm test
```

Step 2: Data Migration

```shell
# RDB snapshot migration (with downtime)
redis-cli -h source-redis BGSAVE
# Copy RDB file to Valkey data directory and restart

# Live migration (minimal downtime)
# Set Valkey as a replica of the source Redis
valkey-cli -h valkey-new REPLICAOF source-redis 6379

# Verify sync completion
valkey-cli -h valkey-new INFO replication
# Confirm: master_link_status:up, master_sync_in_progress:0

# Execute failover
valkey-cli -h valkey-new REPLICAOF NO ONE
```
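Before cutting traffic over, it is worth confirming the new primary holds the full keyspace. A minimal sketch (host names match the example above; how strictly you gate on the result is up to you):

```shell
# Compare key counts between old and new primaries
src_keys=$(redis-cli -h source-redis DBSIZE)
dst_keys=$(valkey-cli -h valkey-new DBSIZE)
echo "source=$src_keys target=$dst_keys"
[ "$src_keys" -eq "$dst_keys" ] || echo "WARNING: key counts differ"

# Spot-check replication offsets before the final REPLICAOF NO ONE
valkey-cli -h valkey-new INFO replication | grep -E 'repl_offset'
```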

Step 3: AWS ElastiCache Migration (Managed)

```shell
# Change the ElastiCache engine in place via AWS CLI
aws elasticache modify-replication-group \
  --replication-group-id my-redis-cluster \
  --engine valkey \
  --engine-version 8.0 \
  --apply-immediately

# Or create a new serverless Valkey cache and migrate data into it
aws elasticache create-serverless-cache \
  --serverless-cache-name my-valkey-cache \
  --engine valkey \
  --major-engine-version 8
```

Decision Framework

| Scenario | Recommendation | Reason |
| --- | --- | --- |
| Pure caching/session/queue | Valkey | Higher throughput, lower memory, permissive license |
| Caching + vector search integration | Redis | Vector Set simplifies infrastructure |
| AWS/GCP managed services | Valkey | Default engine, 20-33% cost savings |
| Azure environment | Redis | Azure Managed Redis native support |
| Full-text search + time series needed | Redis | RedisSearch, RedisTimeSeries integrated |
| Minimize license risk | Valkey | BSD-3, no managed service restrictions |
| High-traffic (1M+ RPS) | Valkey | I/O threading maximizes multi-core utilization |

Kubernetes Deployment — Helm Chart Comparison

```yaml
# values-valkey.yaml: Valkey Helm deployment (Bitnami)
architecture: replication
auth:
  enabled: true
  password: "your-secure-password"
master:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2000m
      memory: 4Gi
  persistence:
    enabled: true
    size: 10Gi
replica:
  replicaCount: 3
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
metrics:
  enabled: true
  serviceMonitor:
    enabled: true  # Prometheus integration
```
```shell
# Install Valkey
helm install my-valkey oci://registry-1.docker.io/bitnamicharts/valkey \
  -f values-valkey.yaml -n datastore

# Install Redis (for comparison)
helm install my-redis oci://registry-1.docker.io/bitnamicharts/redis \
  -f values-redis.yaml -n datastore
```
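After the install, a quick smoke test confirms the primary answers. Note that the secret key (valkey-password) and service name (my-valkey-primary) follow Bitnami chart conventions and may differ in your release, so treat them as assumptions:

```shell
# Read the generated password from the chart's secret
export VALKEY_PASSWORD=$(kubectl get secret my-valkey -n datastore \
  -o jsonpath="{.data.valkey-password}" | base64 -d)

# One-shot client pod: expect PONG back
kubectl run valkey-client --rm -it --restart=Never -n datastore \
  --image=valkey/valkey:8.1 -- \
  valkey-cli -h my-valkey-primary -a "$VALKEY_PASSWORD" PING
```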

2026 and Beyond — Valkey 9.0 and Redis 8.6 Directions

The Valkey community has already shipped Valkey 9.0, demonstrating aggressive development velocity, and events like Keyspace Amsterdam show an independent community ecosystem taking shape. Redis has released 8.4 and 8.6 in quick succession, both focused on performance improvements and memory reduction.

The codebase divergence deepens over time. Valkey focuses on core performance and memory efficiency, while Redis differentiates through data platform integration (vector, search, time series, JSON). Starting from the same code in 2024, by 2026 each has developed a distinct identity.

Conclusion — A Practical Selection Guide

Choosing an in-memory datastore in 2026 is no longer "just use Redis." For most caching, session management, and message queue workloads, Valkey is the rational choice across performance, cost, and licensing dimensions. If you're on AWS or GCP, managed service cost savings alone justify the switch.

However, for AI-centric architectures requiring vector similarity search, full-text search, and time series data integrated with a cache layer, Redis's unified data platform strategy reduces infrastructure complexity. If Azure is your primary environment, Redis is the natural choice.

The key takeaway: existing Redis client code works with Valkey without modification. Migration cost is extremely low, so if you're currently running Redis, we strongly recommend testing Valkey 8.1 in your staging environment.


This article was generated by the **ManoIT Tech Blog Automation Pipeline** (AI-powered trend analysis, technical research, content generation, and quality verification). For technical corrections or suggestions, contact ManoIT.


Originally published at ManoIT Tech Blog.
