Redis Patterns Library
Production-proven Redis patterns for the problems you actually face — caching with proper invalidation, rate limiting that doesn't leak memory, pub/sub architectures that scale, session management with security, and cluster configurations that survive node failures. Every pattern includes the redis-cli commands, Lua scripts for atomicity, and configuration templates you can deploy today.
Key Features
- 5 caching strategies — cache-aside, write-through, write-behind, read-through, and refresh-ahead with TTL and eviction policy guidance
- Rate limiting patterns using sliding window, token bucket, and fixed window counters with atomic Lua scripts
- Pub/Sub and Streams architectures for real-time messaging, event sourcing, and consumer group processing
- Session management with secure token generation, sliding expiration, and session data encryption patterns
- Redis Cluster configuration templates for 6-node production clusters with automatic failover and slot rebalancing
- Distributed locking using Redlock with proper fencing tokens, retry logic, and lock extension
- Leaderboard and ranking patterns using sorted sets with pagination, tie-breaking, and real-time score updates
- Memory optimization techniques including hash ziplist encoding, key expiry patterns, and memory fragmentation monitoring
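Of the rate-limiting variants listed above, the token bucket is the easiest to model outside Redis. Below is a minimal pure-Python sketch of the algorithm; a local object stands in for the Redis hash a Lua implementation would update atomically, and the `TokenBucket` name and parameters are illustrative, not part of the library:

```python
import time

class TokenBucket:
    """Pure-Python token bucket (illustrative only; the Redis version
    keeps the same state server-side and updates it atomically in Lua)."""

    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity        # max tokens the bucket can hold (burst size)
        self.refill_rate = refill_rate  # tokens added per second (sustained rate)
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self, cost: int = 1) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(capacity=5, refill_rate=1.0)
burst = [bucket.allow() for _ in range(6)]  # five allowed, sixth denied
```

Unlike a fixed window, the bucket permits short bursts up to `capacity` while enforcing the sustained `refill_rate` over time.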
Quick Start
unzip redis-patterns-library.zip
cd redis-patterns-library/
# Connect to your Redis instance
redis-cli -h localhost -p 6379
# Test the rate limiter pattern
redis-cli EVAL "$(cat src/rate_limiting/sliding_window.lua)" 1 \
"rate:user:42" 100 60
# Load the session management helpers
redis-cli < src/sessions/create_session.redis
Basic cache-aside pattern:
# Check cache first
redis-cli GET "cache:user:1001"
# (nil) — cache miss
# After fetching from database, cache with 5-minute TTL
redis-cli SET "cache:user:1001" '{"name":"Jane","tier":"premium"}' EX 300
# Subsequent reads hit cache
redis-cli GET "cache:user:1001"
# {"name":"Jane","tier":"premium"}
# Invalidate on write
redis-cli DEL "cache:user:1001"
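The same cache-aside flow can be sketched in application code. In this illustrative Python version a plain dict stands in for Redis, and `db_fetch_user` is a hypothetical database call, shown only to make the miss/fill/hit sequence concrete:

```python
import json
import time

store = {}  # stands in for Redis: key -> (value, expires_at)

def db_fetch_user(user_id):
    """Hypothetical database lookup (the slow path)."""
    return {"name": "Jane", "tier": "premium"}

def cache_get(key):
    entry = store.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() >= expires_at:
        del store[key]  # lazily expire, as a Redis TTL would
        return None
    return value

def cache_set(key, value, ttl):
    store[key] = (value, time.monotonic() + ttl)

def get_user(user_id):
    key = f"cache:user:{user_id}"
    cached = cache_get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit
    user = db_fetch_user(user_id)          # cache miss: fetch from database
    cache_set(key, json.dumps(user), ttl=300)  # fill with 5-minute TTL
    return user
```

On a write, the application would `DEL` the key (as above) rather than update it in place, letting the next read repopulate the cache.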
Architecture / How It Works
redis-patterns-library/
├── src/
│   ├── caching/
│   │   ├── cache_aside.redis             # Basic cache-aside pattern
│   │   ├── write_through.redis           # Sync write to cache + DB
│   │   ├── cache_stampede.lua            # Probabilistic early expiry
│   │   └── invalidation_patterns.md      # Tag-based, event-based strategies
│   ├── rate_limiting/
│   │   ├── sliding_window.lua            # Sliding window rate limiter
│   │   ├── token_bucket.lua              # Token bucket algorithm
│   │   └── fixed_window.lua              # Simple fixed-window counter
│   ├── pubsub/
│   │   ├── basic_pubsub.redis            # Pub/Sub commands
│   │   ├── streams_consumer_group.redis  # Streams with consumer groups
│   │   └── event_sourcing.redis          # Event log with XADD/XREAD
│   ├── sessions/
│   │   ├── create_session.redis          # Session creation with token
│   │   ├── sliding_expiry.lua            # Refresh TTL on access
│   │   └── session_security.md           # Encryption and rotation guide
│   ├── locking/
│   │   ├── distributed_lock.lua          # Redlock implementation
│   │   └── lock_with_fencing.lua         # Fencing token for safety
│   ├── leaderboards/
│   │   └── sorted_set_leaderboard.redis  # Ranking with ZADD/ZRANGE
│   └── cluster/
│       ├── redis-cluster.conf            # Production cluster config
│       └── sentinel.conf                 # Sentinel for HA without cluster
├── examples/
│   ├── ecommerce_caching.md
│   └── api_rate_limiting.md
├── docs/
│   ├── checklists/pre-deployment.md
│   └── overview.md
└── config.example.yaml
Usage Examples
Sliding window rate limiter (atomic Lua script):
-- sliding_window.lua
-- KEYS[1] = rate limit key (e.g., "rate:api:user:42")
-- ARGV[1] = max requests allowed
-- ARGV[2] = window size in seconds
local key = KEYS[1]
local limit = tonumber(ARGV[1])
local window = tonumber(ARGV[2])
local now = tonumber(redis.call('TIME')[1])
local window_start = now - window
-- Remove expired entries
redis.call('ZREMRANGEBYSCORE', key, '-inf', window_start)
-- Count current requests in window
local count = redis.call('ZCARD', key)
if count < limit then
  -- Allow request: add timestamp as score, with a random suffix in the
  -- member to avoid collisions when two requests land in the same second
  redis.call('ZADD', key, now, now .. ':' .. math.random(1000000))
  redis.call('EXPIRE', key, window)
  return {1, limit - count - 1} -- allowed, remaining
else
  return {0, 0} -- denied, 0 remaining
end
# Usage: allow 100 requests per 60 seconds for user 42
redis-cli EVAL "$(cat sliding_window.lua)" 1 "rate:api:user:42" 100 60
# Returns: 1) (integer) 1 -- allowed
# 2) (integer) 99 -- remaining
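To make the window arithmetic easy to follow, here is a pure-Python model of the same logic. A list of timestamped entries stands in for the sorted set; this is an illustration of the algorithm, not the library's code:

```python
import random
import time

windows = {}  # key -> list of (timestamp, member); stands in for the sorted set

def sliding_window_allow(key, limit, window):
    """Mirrors sliding_window.lua: drop expired entries (ZREMRANGEBYSCORE),
    count the rest (ZCARD), and add the new request if under the limit (ZADD)."""
    now = time.monotonic()
    # Keep only entries still inside the window
    entries = [e for e in windows.get(key, []) if e[0] > now - window]
    if len(entries) < limit:
        entries.append((now, f"{now}:{random.randrange(1_000_000)}"))
        windows[key] = entries
        return (1, limit - len(entries))  # allowed, remaining
    windows[key] = entries
    return (0, 0)                         # denied, 0 remaining
```

Because old entries age out continuously rather than resetting at interval boundaries, this avoids the burst-at-the-edge problem of fixed-window counters.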
Redis Streams consumer group for event processing:
# Create the stream and consumer group
redis-cli XGROUP CREATE events mygroup $ MKSTREAM
# Producer: add events
redis-cli XADD events "*" type "order.created" order_id "ORD-1001" amount "59.99"
redis-cli XADD events "*" type "order.paid" order_id "ORD-1001"
# Consumer: read and acknowledge
redis-cli XREADGROUP GROUP mygroup consumer1 COUNT 10 BLOCK 5000 STREAMS events ">"
# Process events, then acknowledge:
redis-cli XACK events mygroup "1711234567890-0"
# Check pending (unacknowledged) messages
redis-cli XPENDING events mygroup - + 10
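The consumer-group lifecycle above (deliver, track as pending, acknowledge) can be modeled in a few lines. This is a toy in-memory sketch of the semantics, not a Redis client; entry IDs are compared as strings, which only works for same-width IDs, and all names are illustrative:

```python
stream = {}              # entry id -> fields; stands in for the stream
pending = {}             # entry id -> consumer; the group's pending entries list (PEL)
last_delivered = [None]  # last id handed to the group (the ">" cursor)

def xadd(entry_id, **fields):
    stream[entry_id] = fields

def xreadgroup(consumer, count):
    """Deliver entries newer than last_delivered and mark them pending,
    like XREADGROUP ... STREAMS events '>'."""
    out = []
    for entry_id, fields in stream.items():
        if last_delivered[0] is None or entry_id > last_delivered[0]:
            pending[entry_id] = consumer
            last_delivered[0] = entry_id
            out.append((entry_id, fields))
            if len(out) == count:
                break
    return out

def xack(entry_id):
    """XACK: remove a processed entry from the pending list."""
    return 1 if pending.pop(entry_id, None) is not None else 0
```

The key property: a delivered-but-unacknowledged entry stays in `pending`, so a crashed consumer's work is visible via `XPENDING` and can be claimed by another consumer.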
Distributed lock (single-instance acquire, the building block the Redlock pattern runs against a majority of independent masters):
-- distributed_lock.lua
-- KEYS[1] = lock key
-- ARGV[1] = unique lock token (UUID)
-- ARGV[2] = lock TTL in milliseconds
local key = KEYS[1]
local token = ARGV[1]
local ttl = tonumber(ARGV[2])
-- SET NX with PX (only if not exists, with expiry)
local result = redis.call('SET', key, token, 'NX', 'PX', ttl)
if result then
  return 1 -- lock acquired
else
  return 0 -- lock held by another process
end
# Acquire lock (10 second TTL)
redis-cli EVAL "$(cat distributed_lock.lua)" 1 "lock:order:1001" "uuid-abc-123" 10000
# Release lock (only if we hold it)
redis-cli EVAL "if redis.call('GET',KEYS[1])==ARGV[1] then return redis.call('DEL',KEYS[1]) else return 0 end" 1 "lock:order:1001" "uuid-abc-123"
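The acquire/release pair plus a fencing counter can be sketched in Python. A dict stands in for Redis, and a monotonically increasing local counter supplies the fencing token (in Redis this would typically be an `INCR` on a counter key); all names here are illustrative:

```python
import itertools
import uuid

locks = {}                          # lock key -> (token, fence); stands in for Redis
fence_counter = itertools.count(1)  # monotonic fencing numbers (INCR in Redis)

def acquire(key):
    """SET NX equivalent: succeed only if the key is absent.
    Returns (token, fencing_number) on success, None if already held."""
    if key in locks:
        return None
    token = str(uuid.uuid4())
    fence = next(fence_counter)
    locks[key] = (token, fence)
    return token, fence

def release(key, token):
    """Compare-and-delete: release only if we still hold the lock,
    mirroring the GET/DEL Lua one-liner above."""
    held = locks.get(key)
    if held is not None and held[0] == token:
        del locks[key]
        return 1
    return 0
```

The fencing number is what downstream systems check: a writer presenting fence 1 after fence 2 has been seen must be rejected, which protects against a lock that expired while its holder was paused.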
Configuration
# config.example.yaml
redis:
  host: localhost
  port: 6379
  password: YOUR_REDIS_PASSWORD_HERE
  db: 0
  tls: false

caching:
  default_ttl_seconds: 300        # 5 minutes
  max_memory: "2gb"
  eviction_policy: allkeys-lru    # allkeys-lru | volatile-lfu | noeviction
  cache_prefix: "cache:"

rate_limiting:
  default_limit: 100              # requests per window
  default_window_seconds: 60
  key_prefix: "rate:"

sessions:
  ttl_seconds: 1800               # 30 minutes
  sliding_expiry: true            # refresh TTL on each access
  key_prefix: "session:"

cluster:
  nodes: 6                        # 3 masters + 3 replicas minimum
  replicas_per_master: 1
  cluster_node_timeout: 15000     # ms before marking a node as failing
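The `sessions.sliding_expiry` setting is worth spelling out. Here is an illustrative Python sketch of what it means (a dict stands in for Redis; in Redis the refresh would be a `PEXPIRE` on each access):

```python
import time

sessions = {}       # token -> (data, expires_at); stands in for Redis keys
SESSION_TTL = 1800  # sessions.ttl_seconds from the config

def get_session(token, sliding=True):
    entry = sessions.get(token)
    if entry is None:
        return None
    data, expires_at = entry
    now = time.monotonic()
    if now >= expires_at:
        del sessions[token]  # expired; Redis handles this via the key TTL
        return None
    if sliding:
        # Sliding expiry: every access resets the TTL to the full window,
        # so the session only dies after 30 minutes of *inactivity*
        sessions[token] = (data, now + SESSION_TTL)
    return data
```

With `sliding_expiry: false`, the session would instead expire a fixed 30 minutes after creation regardless of activity.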
Best Practices
- Always set TTLs on cache keys. Keys without expiry accumulate forever. Use `EX` (seconds) or `PX` (milliseconds) on every `SET`.
- Use Lua scripts for multi-step operations. Redis executes a Lua script atomically, so there are no race conditions between the `GET` and the `SET`.
- Prefer `allkeys-lru` eviction for pure caching workloads. `volatile-lru` only evicts keys with TTLs, leaving permanent keys untouched.
- Use Redis Streams over Pub/Sub when you need message durability. Pub/Sub drops messages if no subscriber is listening; Streams persist them.
- Set `maxmemory` explicitly. Without it, Redis grows until the OOM killer terminates it. For caching, set it to 60-70% of available RAM.
- Monitor `used_memory_rss` vs `used_memory`. A large gap indicates memory fragmentation; consider restarting Redis during a maintenance window.
Troubleshooting
| Problem | Cause | Fix |
|---|---|---|
| Cache stampede on popular keys | Many requests hit expired key simultaneously | Use probabilistic early expiry (XFetch algorithm) in `cache_stampede.lua` |
| Rate limiter leaking memory | Sorted set entries not expiring | Verify EXPIRE is called after ZADD; check Lua script sets expiry on every call |
| Pub/Sub messages lost | Consumer disconnected during publish | Switch to Redis Streams with consumer groups for durable messaging |
| Cluster slots not covered | Master failed without replica promotion | Ensure every master has at least one replica; check CLUSTER NODES output |
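The XFetch fix in the first row works by having each request volunteer to recompute the value slightly *before* expiry, with a probability that rises as expiry approaches, so one caller refreshes the key while the rest keep serving the cached copy. A sketch of the decision function (parameter names are illustrative):

```python
import math
import random
import time

def should_refresh_early(expires_at, compute_time, beta=1.0, now=None):
    """XFetch-style probabilistic early expiry (sketch).
    compute_time: seconds the recomputation takes; beta > 1 refreshes
    earlier and more aggressively, beta < 1 later and more rarely."""
    now = time.monotonic() if now is None else now
    rand = random.random() or 1e-300  # guard against log(0)
    # -log(rand) is an exponentially distributed "head start": most calls
    # get a small one, so only the occasional request fires before expiry
    return now - compute_time * beta * math.log(rand) >= expires_at
```

On a cache hit, the caller checks `should_refresh_early(...)`; if it returns True, that one caller recomputes and resets the TTL while everyone else still reads the old value, which removes the thundering herd at the exact expiry instant.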
This is 1 of 9 resources in the Database Admin Pro toolkit. Get the complete Redis Patterns Library with all files, templates, and documentation for $29.
Or grab the entire Database Admin Pro bundle (9 products) for $109 — save 30%.