Redis memory eviction is a sophisticated mechanism that automatically manages memory when usage exceeds the maxmemory limit. Understanding this system is crucial for Redis optimization and production deployment.
🔑 Key Takeaways:
- 8 distinct eviction policies for different use cases
- Innovative eviction pool algorithm with 16-slot optimization
- Approximate LRU/LFU algorithms balancing accuracy and performance
- Progressive eviction to maintain responsiveness
📊 Foundation: Two Core Data Structures
Redis maintains two essential dictionaries that form the foundation of its eviction system (sketched below):
✅ `db->keys`: All key-value pairs (the allkeys scope)
⏰ `db->expires`: Keys that carry an expiration time (the volatile scope)
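For orientation, here is a minimal sketch of where these two dictionaries live. The field names follow the recent (8.x) layout in which both are kvstore instances; most other members of the structure are omitted.

```c
/* Minimal sketch of the per-database structure (Redis 8.x layout,
 * most fields omitted). allkeys-* policies sample from keys,
 * volatile-* policies sample from expires. */
typedef struct redisDb {
    kvstore *keys;     /* Every key-value pair in this database */
    kvstore *expires;  /* Only the keys that have a TTL set */
    /* ... */
    int id;            /* Database ID */
} redisDb;
```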
🎯 The Eviction Pool: Redis's Secret Weapon
```c
#define EVPOOL_SIZE 16
#define EVPOOL_CACHED_SDS_SIZE 255

struct evictionPoolEntry {
    unsigned long long idle;  // Idle time or score value
    sds key;                  // Key name
    sds cached;               // Cached SDS object to avoid frequent allocation
    int dbid;                 // Database ID
    int slot;                 // Slot index
};
```
🎮 Eight Eviction Strategies: Choose Your Game Plan
Redis offers 8 distinct eviction policies, each optimized for different scenarios:
📋 Complete Policy Matrix
| Policy | 🎯 Use Case | Algorithm | Scope |
|---|---|---|---|
| `volatile-lru` | Cache with TTL | 🕐 Least Recently Used | Keys with expiration |
| `volatile-lfu` | Popular data priority | 📊 Least Frequently Used | Keys with expiration |
| `volatile-ttl` | Session management | ⏱️ Shortest TTL first | Keys with expiration |
| `volatile-random` | Fair eviction | 🎲 Random selection | Keys with expiration |
| `allkeys-lru` | General caching | 🕐 Least Recently Used | All keys |
| `allkeys-lfu` | Hot data retention | 📊 Least Frequently Used | All keys |
| `allkeys-random` | Simple eviction | 🎲 Random selection | All keys |
| `noeviction` | Strict memory | 🚫 No eviction, reject writes | - |
🧠 The Eviction Pool Algorithm: Smart Selection Process
The eviction pool is Redis's optimization masterpiece - a 16-slot buffer that dramatically improves key selection quality:
⚡ How It Works (The Magic Behind the Scenes)
🔍 Step 1 - Sampling: Grab maxmemory-samples (default 5) random keys from the target dictionary
📊 Step 2 - Scoring: Calculate an "idle score" for each sampled key
🎯 Step 3 - Pool Update: Insert the best candidates into the 16-slot pool, kept sorted by score
✨ Step 4 - Selection: Evict the highest-scoring key currently in the pool (a simplified sketch of the pool insertion follows below)
💡 Why This Works:
- Maintains 16 best candidates across multiple sampling rounds
- Much better than naive "pick best of 5" approach
- Balances selection quality with performance overhead
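To make the pool mechanics concrete, here is a simplified, self-contained sketch of the insertion step. It is not the actual evictionPoolPopulate() from src/evict.c (which also tracks the database ID and slot and reuses cached SDS strings), but it follows the same sorted-insert logic:

```c
#include <string.h>

#define EVPOOL_SIZE 16

struct poolEntry {
    unsigned long long idle;  /* Higher score = better eviction candidate */
    const char *key;          /* NULL marks an empty slot */
};

/* The pool is kept sorted by ascending idle score, so
 * pool[EVPOOL_SIZE-1] is always the best eviction candidate. */
static void poolInsert(struct poolEntry *pool, const char *key,
                       unsigned long long idle) {
    int k = 0;

    /* Find the first slot that is empty or holds a score >= ours. */
    while (k < EVPOOL_SIZE && pool[k].key && pool[k].idle < idle) k++;

    if (k == 0 && pool[EVPOOL_SIZE-1].key != NULL) {
        /* Pool is full and we score below its worst entry: discard. */
        return;
    } else if (k < EVPOOL_SIZE && pool[k].key == NULL) {
        /* Empty slot: insert directly at position k. */
    } else if (pool[EVPOOL_SIZE-1].key == NULL) {
        /* Free space on the right: shift entries k..end one slot right. */
        memmove(pool + k + 1, pool + k,
                sizeof(pool[0]) * (EVPOOL_SIZE - k - 1));
    } else {
        /* Pool is full: drop the worst entry (pool[0]) by shifting left
         * and insert just below the first better-scoring entry. */
        k--;
        memmove(pool, pool + 1, sizeof(pool[0]) * k);
    }
    pool[k].key = key;
    pool[k].idle = idle;
}
```

Calling poolInsert() for every sampled key, round after round, leaves the pool holding the 16 best candidates seen so far, which is why the approximation gets much closer to true LRU/LFU than a single best-of-5 pick.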
🎯 Scoring System: How Redis Decides
🕐 LRU: `idle = estimateObjectIdleTime(kv);` (older access = higher score)
📊 LFU: `idle = 255 - LFUDecrAndReturn(kv);` (less frequently used = higher score)
⏰ TTL: `idle = ULLONG_MAX - kvobjGetExpire(kv);` (expiring sooner = higher score)
🎲 Random Strategy: No Pool Needed
```c
// Round-robin search across databases for a random key
for (i = 0; i < server.dbnum; i++) {
    j = (++next_db) % server.dbnum;
    db = server.db + j;
    kvstore *kvs = (policy == MAXMEMORY_ALLKEYS_RANDOM) ?
                   db->keys : db->expires;
    int slot = kvstoreGetFairRandomDictIndex(kvs);
    de = kvstoreDictGetRandomKey(kvs, slot);
    if (de) {
        bestkey = kvobjGetKey(dictGetKV(de));
        break;
    }
}
```
🔬 LRU vs LFU: The Algorithm Showdown
🕐 LRU Implementation: 24-bit Timestamp Magic
```c
#define LRU_BITS 24                // Width of the lru field in the object header
#define LRU_CLOCK_RESOLUTION 1000  // Clock resolution in ms (1 second)
#define LRU_CLOCK_MAX ((1<<LRU_BITS)-1)

// Get current LRU clock
unsigned int getLRUClock(void) {
    return (mstime()/LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;
}

// Estimate object idle time in milliseconds
unsigned long long estimateObjectIdleTime(robj *o) {
    unsigned long long lruclock = LRU_CLOCK();
    if (lruclock >= o->lru) {
        return (lruclock - o->lru) * LRU_CLOCK_RESOLUTION;
    } else {
        // Handle clock wraparound
        return (lruclock + (LRU_CLOCK_MAX - o->lru)) * LRU_CLOCK_RESOLUTION;
    }
}
```
🎯 Redis's Clever Trade-off:
- Only 24 bits per object (vs a full timestamp), packed into the object header as shown after this list
- 1-second precision (sufficient for most cases)
- O(1) idle time calculation
- Handles clock wraparound gracefully
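Those 24 bits live in the shared object header. For reference, a sketch of the relevant fields (remaining members omitted):

```c
typedef struct redisObject {
    unsigned type:4;        /* Object type (string, list, hash, ...) */
    unsigned encoding:4;    /* Internal encoding */
    unsigned lru:LRU_BITS;  /* 24 bits: LRU clock time, or LFU data
                             * (16-bit access time + 8-bit counter) */
    int refcount;
    void *ptr;              /* Pointer to the actual value */
} robj;
```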
📊 LFU Implementation: Logarithmic Frequency Counter
🧮 Brilliant 24-bit Split:
```
       16 bits          8 bits
+------------------+-----------+
| Last access time |   LOG_C   |
+------------------+-----------+
```
- 16 bits: Last access time (minute precision)
- 8 bits: Logarithmic frequency counter (0-255)
🚀 Why Logarithmic?
- Linear counters saturate too quickly
- Log counters give diminishing returns for high-frequency items
- Probabilistic increment: harder to increment as frequency grows
```c
// Logarithmic increment: the higher the counter, the harder it is to increment
// (LFU_INIT_VAL is 5; new keys start at that value)
uint8_t LFULogIncr(uint8_t counter) {
    if (counter == 255) return 255;
    double r = (double)rand()/RAND_MAX;
    double baseval = counter - LFU_INIT_VAL;
    if (baseval < 0) baseval = 0;
    double p = 1.0/(baseval * server.lfu_log_factor + 1);
    if (r < p) counter++;
    return counter;
}
```
```c
// Time decay: reduce the counter based on elapsed time since last access
unsigned long LFUDecrAndReturn(robj *o) {
    unsigned long ldt = o->lru >> 8;      // Last access time (upper 16 bits)
    unsigned long counter = o->lru & 255; // Frequency counter (lower 8 bits)
    unsigned long num_periods = server.lfu_decay_time ?
        LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;
    if (num_periods)
        counter = (num_periods > counter) ? 0 : counter - num_periods;
    return counter;
}
```
🔧 Eviction Execution Flow
⚙️ Main Function: performEvictions()
```c
// Simplified flow of performEvictions() (heavily condensed from src/evict.c)
int performEvictions(void) {
    // 1. Safety check: skip eviction when it is not safe to run
    if (!isSafeToPerformEvictions()) return EVICT_OK;

    // 2. Memory state check: are we over the limit, and by how much?
    if (getMaxmemoryState(&mem_reported, NULL, &mem_tofree, NULL) == C_OK)
        return EVICT_OK;

    // 3. Policy check: noeviction means we fail fast
    if (server.maxmemory_policy == MAXMEMORY_NO_EVICTION)
        return EVICT_FAIL;

    // 4. Evict keys until enough memory has been freed
    while (mem_freed < mem_tofree) {
        bestkey = selectEvictionKey();   // pool-based or random selection
        deleteEvictedKeyAndPropagate(db, keyobj, &key_mem_freed);
        mem_freed += key_mem_freed;

        // Periodic work every 16 evicted keys: flush replica output
        // buffers and check the eviction time limit
        if (keys_freed % 16 == 0) {
            flushSlavesOutputBuffers();
            checkTimeLimit();
        }
    }
    return EVICT_OK;
}
```
⏱️ Time Limit & Progressive Eviction
```c
// Calculate the per-call eviction time limit from the tenacity parameter
static unsigned long evictionTimeLimitUs(void) {
    if (server.maxmemory_eviction_tenacity <= 10) {
        // Linear progression: 0-500 microseconds
        return 50uL * server.maxmemory_eviction_tenacity;
    }
    if (server.maxmemory_eviction_tenacity < 100) {
        // 15% geometric progression, reaching roughly 2 minutes at tenacity 99
        return (unsigned long)(500.0 * pow(1.15, server.maxmemory_eviction_tenacity - 10.0));
    }
    return ULONG_MAX; // No time limit
}
```
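Worked example: with the default tenacity of 10, the limit is 50 × 10 = 500 microseconds per eviction call; at tenacity 50 it grows to about 500 × 1.15^40 ≈ 134,000 microseconds (roughly 134 ms); at 100 the time limit is removed entirely.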
```c
// Start background eviction via a time event when the time limit is hit
void startEvictionTimeProc(void) {
    if (!isEvictionProcRunning) {
        isEvictionProcRunning = 1;
        aeCreateTimeEvent(server.el, 0, evictionTimeProc, NULL, NULL);
    }
}
```
⚡ Performance Optimizations That Matter
🚀 Sampling Strategy: 5 random samples per round feed a 16-slot pool of best candidates
⚡ Time Slicing: Progressive eviction prevents blocking the event loop
🧠 SDS Caching: Cached SDS strings in the eviction pool avoid repeated allocations
🔄 Lazy Free: Background deletion for large objects
📊 Batch Processing: Replica buffers and time limits are checked every 16 evicted keys
🛠️ Production Configuration Guide
⚙️ Critical Parameters
| Parameter | Default | 🎯 Impact | Tuning Tips |
|---|---|---|---|
| `maxmemory` | 0 | 🚨 Memory limit | Set to ~80% of available RAM |
| `maxmemory-policy` | `noeviction` | 🎮 Strategy choice | Use `allkeys-lru` for caches |
| `maxmemory-samples` | 5 | 🎯 Selection quality | Increase to 10 for better accuracy |
| `maxmemory-eviction-tenacity` | 10 | ⚡ Responsiveness | Lower for latency-sensitive apps |
| `lfu-log-factor` | 10 | 📊 LFU sensitivity | Higher = more gradual frequency growth |
| `lfu-decay-time` | 1 | ⏰ LFU time decay | Adjust based on access patterns |
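As a concrete starting point, a cache-oriented setup might look like the redis.conf snippet below. The values are illustrative assumptions, not universal recommendations:

```
# Illustrative cache-style configuration; tune to your workload
maxmemory 4gb                     # roughly 80% of a 5 GB memory budget
maxmemory-policy allkeys-lru      # evict across all keys using approximate LRU
maxmemory-samples 10              # better candidate selection, slightly more CPU
maxmemory-eviction-tenacity 10    # default latency/throughput balance
lazyfree-lazy-eviction yes        # free evicted values in a background thread
```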
📈 Monitoring & Best Practices
🔍 Essential Metrics to Watch
📊 evicted_keys: Track eviction frequency
⚠️ eviction_exceeded_time: Monitor blocking time
🧩 mem_fragmentation_ratio: Watch memory efficiency
💾 used_memory: Current memory usage
🎯 maxmemory: Your memory ceiling
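For example, `evicted_keys` and (on recent versions) `total_eviction_exceeded_time` appear under `INFO stats`, while `used_memory`, `maxmemory`, and `mem_fragmentation_ratio` are reported by `INFO memory`.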
🎯 Strategy Selection Guide
🏆 Production Recommendations:
🌐 Web Caching: allkeys-lru or allkeys-lfu
- Most cached data follows access patterns
- Evict across all keys for maximum efficiency
🔐 Session Storage: volatile-ttl
- Sessions have natural expiration times
- TTL-based eviction respects session lifecycle
📊 Analytics Cache: allkeys-lfu
- Popular queries should stay longer
- Frequency matters more than recency
🎲 Development/Testing: allkeys-random
- Simple and predictable behavior
- Good for non-production environments
⚡ Performance Tuning Tips
🚀 For High-Throughput Systems:
- Increase `maxmemory-samples` to 10-20
- Use `maxmemory-eviction-tenacity` 20-30
- Monitor `eviction_exceeded_time` closely
🎯 For Low-Latency Systems:
- Keep `maxmemory-samples` at 5
- Set `maxmemory-eviction-tenacity` to 5-10
- Consider `lazyfree-lazy-eviction yes`
💡 For Memory-Constrained Systems:
- Use aggressive eviction policies (`allkeys-*`)
- Set `maxmemory` to 70-80% of available RAM
- Monitor the fragmentation ratio regularly
🎉 Key Takeaways
✅ Smart Algorithm: 16-slot eviction pool beats naive sampling
✅ Balanced Trade-offs: Approximate algorithms for production performance
✅ Configurable: 8 policies + tunable parameters for any use case
✅ Production-Ready: Time limits and progressive eviction prevent blocking
This deep dive is based on Redis 8.2 source code analysis. Understanding these internals helps you optimize Redis for your specific use case!
🔗 Want to dive deeper? Check out the Redis source code in src/evict.c - it's surprisingly readable and well-commented!