Redis memory eviction is a sophisticated mechanism that automatically manages memory when usage exceeds the maxmemory limit. Understanding this system is crucial for Redis optimization and production deployment.
**Key Takeaways:**

- 8 distinct eviction policies for different use cases
- An eviction pool algorithm built around a 16-slot buffer
- Approximate LRU/LFU algorithms balancing accuracy and performance
- Progressive eviction to keep the server responsive
## Foundation: Two Core Data Structures

Redis maintains two dictionaries that form the foundation of its eviction system:

- `db->keys`: all key-value pairs (the `allkeys` scope)
- `db->expires`: keys with an expiration time (the `volatile` scope)
## The Eviction Pool: Redis's Secret Weapon

```c
#define EVPOOL_SIZE 16
#define EVPOOL_CACHED_SDS_SIZE 255

struct evictionPoolEntry {
    unsigned long long idle; /* Idle time or inverted score value */
    sds key;                 /* Key name */
    sds cached;              /* Cached SDS object to avoid frequent allocation */
    int dbid;                /* Database ID */
    int slot;                /* Slot index */
};
```
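The core idea behind the pool is simple: keep the N best candidates seen so far, sorted by score. The sketch below illustrates that ordering logic with a minimal pool that tracks only the idle scores; `MiniPool` and `miniPoolInsert` are illustrative names, not Redis API (the real logic lives in `evictionPoolPopulate()` in `evict.c` and also tracks key names, cached SDS buffers, and db/slot ids).

```c
#include <string.h>

#define EVPOOL_SIZE 16

/* Illustrative sketch, not Redis code: a pool of idle scores kept in
 * ascending order, so the best eviction candidate sits at the end. */
typedef struct {
    unsigned long long idle[EVPOOL_SIZE];
    int used;
} MiniPool;

/* Insert `idle` keeping ascending order; returns 1 if it entered the pool. */
int miniPoolInsert(MiniPool *p, unsigned long long idle) {
    int k = 0;
    while (k < p->used && p->idle[k] < idle) k++;
    if (k == 0 && p->used == EVPOOL_SIZE)
        return 0; /* Worse than every current candidate: skip it */
    if (p->used == EVPOOL_SIZE) {
        /* Pool full: drop the lowest-scoring entry and shift left */
        memmove(p->idle, p->idle + 1, (k - 1) * sizeof(p->idle[0]));
        k--;
    } else {
        /* Room left: shift right to open slot k */
        memmove(p->idle + k + 1, p->idle + k,
                (p->used - k) * sizeof(p->idle[0]));
        p->used++;
    }
    p->idle[k] = idle;
    return 1;
}
```

Because entries survive across sampling rounds, the pool accumulates good candidates over time instead of forgetting everything between rounds.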
## Eight Eviction Strategies: Choose Your Game Plan

Redis offers 8 distinct eviction policies, each optimized for different scenarios:

### Complete Policy Matrix

| Policy | Use Case | Algorithm | Scope |
|---|---|---|---|
| `volatile-lru` | Cache with TTL | Least Recently Used | Keys with expiration |
| `volatile-lfu` | Popular data priority | Least Frequently Used | Keys with expiration |
| `volatile-ttl` | Session management | Shortest TTL first | Keys with expiration |
| `volatile-random` | Fair eviction | Random selection | Keys with expiration |
| `allkeys-lru` | General caching | Least Recently Used | All keys |
| `allkeys-lfu` | Hot data retention | Least Frequently Used | All keys |
| `allkeys-random` | Simple eviction | Random selection | All keys |
| `noeviction` | Strict memory | No eviction, reject writes | - |
## The Eviction Pool Algorithm: Smart Selection Process

The eviction pool is a key Redis optimization: a 16-slot buffer that dramatically improves key selection quality.

### How It Works

1. **Sampling**: grab `maxmemory-samples` (default 5) random keys from the target dictionary
2. **Scoring**: calculate an "idle score" for each key
3. **Pool update**: insert the best candidates into the 16-slot pool, kept sorted by score
4. **Selection**: evict the highest-scoring key in the pool

**Why this works:**

- The pool retains the 16 best candidates seen across multiple sampling rounds
- Much better selection quality than a naive "pick the best of 5" approach
- Balances selection quality against performance overhead
### Scoring System: How Redis Decides

All three policies map "most evictable" to the highest idle score:

```c
idle = estimateObjectIdleTime(kv);       /* LRU: older access = higher score */
idle = 255 - LFUDecrAndReturn(kv);       /* LFU: less frequently used = higher score */
idle = ULLONG_MAX - kvobjGetExpire(kv);  /* TTL: sooner expiration = higher score */
```
### Random Strategy: No Pool Needed

```c
/* Round-robin search across databases for a random key */
for (i = 0; i < server.dbnum; i++) {
    j = (++next_db) % server.dbnum;
    db = server.db + j;
    kvstore *kvs = (policy == MAXMEMORY_ALLKEYS_RANDOM) ?
        db->keys : db->expires;
    int slot = kvstoreGetFairRandomDictIndex(kvs);
    de = kvstoreDictGetRandomKey(kvs, slot);
    if (de) {
        bestkey = kvobjGetKey(dictGetKV(de));
        break;
    }
}
```
## LRU vs LFU: The Algorithm Showdown

### LRU Implementation: 24-bit Timestamp Magic

```c
#define LRU_CLOCK_RESOLUTION 1000 /* 1-second precision (in ms) */
#define LRU_CLOCK_MAX ((1 << LRU_BITS) - 1)

/* Get current LRU clock */
unsigned int getLRUClock(void) {
    return (mstime() / LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;
}

/* Estimate object idle time in milliseconds */
unsigned long long estimateObjectIdleTime(robj *o) {
    unsigned long long lruclock = LRU_CLOCK();
    if (lruclock >= o->lru) {
        return (lruclock - o->lru) * LRU_CLOCK_RESOLUTION;
    } else {
        /* Handle 24-bit clock wraparound */
        return (lruclock + (LRU_CLOCK_MAX - o->lru)) * LRU_CLOCK_RESOLUTION;
    }
}
```
**Redis's clever trade-off:**

- Only 24 bits per object (instead of a full timestamp)
- 1-second precision (sufficient for most workloads)
- O(1) idle-time calculation
- Handles clock wraparound gracefully
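To make the wraparound case concrete, here is a small standalone version of the same arithmetic with the clock values passed in as parameters (`idleMs` is an illustrative name, not a Redis function):

```c
#define LRU_BITS 24
#define LRU_CLOCK_MAX ((1 << LRU_BITS) - 1)
#define LRU_CLOCK_RESOLUTION 1000 /* milliseconds per clock tick */

/* Idle time in ms given the current 24-bit clock and the object's
 * stored timestamp; mirrors the two branches of estimateObjectIdleTime(). */
unsigned long long idleMs(unsigned int lruclock, unsigned int obj_lru) {
    if (lruclock >= obj_lru)
        return (unsigned long long)(lruclock - obj_lru) * LRU_CLOCK_RESOLUTION;
    /* Clock wrapped past 2^24 - 1 since the object was touched */
    return ((unsigned long long)lruclock + (LRU_CLOCK_MAX - obj_lru))
           * LRU_CLOCK_RESOLUTION;
}
```

For example, a clock of 100 against a stored timestamp of 40 gives 60 seconds of idle time, and an object stamped 5 ticks before the clock wrapped still resolves to the correct small idle time instead of a huge negative-looking one.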
### LFU Implementation: Logarithmic Frequency Counter

The same 24 bits are split in two:

```
     16 bits        8 bits
+------------------+--------+
| Last access time | LOG_C  |
+------------------+--------+
```

- 16 bits: last access time (minute precision)
- 8 bits: logarithmic frequency counter (0-255)

**Why logarithmic?**

- Linear counters saturate too quickly
- A log counter gives diminishing returns for high-frequency items
- Probabilistic increment: the higher the counter, the harder it is to increment
```c
/* Logarithmic increment: higher frequency makes the increment less likely */
uint8_t LFULogIncr(uint8_t counter) {
    if (counter == 255) return 255;
    double r = (double)rand() / RAND_MAX;
    double baseval = counter - LFU_INIT_VAL;
    if (baseval < 0) baseval = 0;
    double p = 1.0 / (baseval * server.lfu_log_factor + 1);
    if (r < p) counter++;
    return counter;
}

/* Time decay: reduce the counter based on elapsed time */
unsigned long LFUDecrAndReturn(robj *o) {
    unsigned long ldt = o->lru >> 8;
    unsigned long counter = o->lru & 255;
    unsigned long num_periods = server.lfu_decay_time ?
        LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;
    if (num_periods)
        counter = (num_periods > counter) ? 0 : counter - num_periods;
    return counter;
}
```
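The increment probability in `LFULogIncr` can be isolated into a deterministic helper to see how quickly it falls off (`lfuIncrProbability` is an illustrative name; `LFU_INIT_VAL` is 5 in the Redis source):

```c
#define LFU_INIT_VAL 5

/* Probability that a single access bumps the counter, for a given
 * counter value and lfu-log-factor; this is the `p` from LFULogIncr. */
double lfuIncrProbability(unsigned int counter, unsigned int lfu_log_factor) {
    double baseval = (double)counter - LFU_INIT_VAL;
    if (baseval < 0) baseval = 0;
    return 1.0 / (baseval * lfu_log_factor + 1);
}
```

With the default `lfu-log-factor` of 10, a fresh key (counter 5) is bumped on every access, but a key at counter 105 has only about a 1-in-1000 chance per access, which is what keeps the 8-bit counter from saturating.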
## Eviction Execution Flow

### Main Function: performEvictions()

```c
/* Simplified outline of performEvictions(); declarations and
 * several details are omitted for clarity. */
int performEvictions(void) {
    /* 1. Safety check */
    if (!isSafeToPerformEvictions()) return EVICT_OK;

    /* 2. Memory state check: nothing to do if under the limit */
    if (getMaxmemoryState(&mem_reported, NULL, &mem_tofree, NULL) == C_OK)
        return EVICT_OK;

    /* 3. Policy check */
    if (server.maxmemory_policy == MAXMEMORY_NO_EVICTION)
        return EVICT_FAIL;

    /* 4. Evict keys until enough memory has been freed */
    while (mem_freed < mem_tofree) {
        bestkey = selectEvictionKey();
        deleteEvictedKeyAndPropagate(db, keyobj, &key_mem_freed);
        mem_freed += key_mem_freed;

        /* Periodic housekeeping every 16 evicted keys */
        if (keys_freed % 16 == 0) {
            flushSlavesOutputBuffers();
            checkTimeLimit();
        }
    }
    return EVICT_OK;
}
```
### Time Limit & Progressive Eviction

```c
/* Compute the per-call time limit from the tenacity parameter */
static unsigned long evictionTimeLimitUs(void) {
    if (server.maxmemory_eviction_tenacity <= 10) {
        /* Linear: 0-500 microseconds for tenacity 0-10 */
        return 50uL * server.maxmemory_eviction_tenacity;
    }
    if (server.maxmemory_eviction_tenacity < 100) {
        /* Exponential growth above tenacity 10 */
        return (unsigned long)(500.0 * pow(1.15, server.maxmemory_eviction_tenacity - 10.0));
    }
    return ULONG_MAX; /* Tenacity 100: no limit */
}

/* Resume eviction from a timer event when the time limit was hit */
void startEvictionTimeProc(void) {
    if (!isEvictionProcRunning) {
        isEvictionProcRunning = 1;
        aeCreateTimeEvent(server.el, 0, evictionTimeProc, NULL, NULL);
    }
}
```
## Performance Optimizations That Matter

- **Sampling strategy**: 5 random keys per round feed a 16-slot pool
- **Time slicing**: progressive eviction prevents blocking the event loop
- **SDS caching**: string buffers are reused across eviction-pool entries
- **Lazy free**: large objects are deleted in the background
- **Batch processing**: replica buffers and the time limit are checked every 16 keys
## Production Configuration Guide

### Critical Parameters

| Parameter | Default | Impact | Tuning Tips |
|---|---|---|---|
| `maxmemory` | 0 | Memory limit | Set to ~80% of available RAM |
| `maxmemory-policy` | `noeviction` | Strategy choice | Use `allkeys-lru` for a cache |
| `maxmemory-samples` | 5 | Selection quality | Increase to 10 for better accuracy |
| `maxmemory-eviction-tenacity` | 10 | Responsiveness | Lower for latency-sensitive apps |
| `lfu-log-factor` | 10 | LFU sensitivity | Higher = more gradual frequency growth |
| `lfu-decay-time` | 1 | LFU time decay | Adjust to match access patterns |
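Putting those parameters together, a cache-oriented setup might look like this in `redis.conf` (the specific values are illustrative starting points, not universal recommendations):

```
# Illustrative cache-oriented eviction setup -- tune for your workload
maxmemory 4gb                    # roughly 80% of a 5 GB instance
maxmemory-policy allkeys-lru     # evict across all keys in LRU order
maxmemory-samples 10             # better selection accuracy than the default 5
maxmemory-eviction-tenacity 10   # default latency/throughput balance
lazyfree-lazy-eviction yes       # free evicted objects in the background
```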
## Monitoring & Best Practices

### Essential Metrics to Watch

- `evicted_keys`: eviction frequency
- `eviction_exceeded_time`: time spent over the memory limit
- `mem_fragmentation_ratio`: memory efficiency
- `used_memory`: current memory usage
- `maxmemory`: your memory ceiling
## Strategy Selection Guide

**Production recommendations:**

- **Web caching**: `allkeys-lru` or `allkeys-lfu` -- most cached data follows access patterns, and evicting across all keys maximizes efficiency
- **Session storage**: `volatile-ttl` -- sessions have natural expiration times, and TTL-based eviction respects the session lifecycle
- **Analytics cache**: `allkeys-lfu` -- popular queries should stay longer; frequency matters more than recency
- **Development/testing**: `allkeys-random` -- simple, predictable behavior for non-production environments
## Performance Tuning Tips

**For high-throughput systems:**

- Increase `maxmemory-samples` to 10-20
- Use `maxmemory-eviction-tenacity` 20-30
- Monitor `eviction_exceeded_time` closely

**For low-latency systems:**

- Keep `maxmemory-samples` at 5
- Set `maxmemory-eviction-tenacity` to 5-10
- Consider `lazyfree-lazy-eviction yes`

**For memory-constrained systems:**

- Use aggressive eviction policies (`allkeys-*`)
- Set `maxmemory` to 70-80% of available RAM
- Monitor the fragmentation ratio regularly
## Key Takeaways

- **Smart algorithm**: the 16-slot eviction pool beats naive sampling
- **Balanced trade-offs**: approximate algorithms deliver production-grade performance
- **Configurable**: 8 policies plus tunable parameters cover almost any use case
- **Production-ready**: time limits and progressive eviction prevent blocking
This deep dive is based on analysis of the Redis 8.2 source code. Understanding these internals helps you optimize Redis for your specific use case!

Want to dive deeper? Check out the Redis source code in `src/evict.c` -- it's surprisingly readable and well-commented!