Redis memory eviction is a sophisticated mechanism that automatically manages memory when usage exceeds the maxmemory
limit. Understanding this system is crucial for Redis optimization and production deployment.
Key Takeaways:
- 8 distinct eviction policies for different use cases
- Innovative eviction pool algorithm with 16-slot optimization
- Approximate LRU/LFU algorithms balancing accuracy and performance
- Progressive eviction to maintain responsiveness
Foundation: Two Core Data Structures
Redis maintains two essential dictionaries that form the foundation of its eviction system:
- db->keys: all key-value pairs (the allkeys scope)
- db->expires: keys that carry an expiration time (the volatile scope)
The Eviction Pool: Redis's Secret Weapon
#define EVPOOL_SIZE 16
#define EVPOOL_CACHED_SDS_SIZE 255

struct evictionPoolEntry {
    unsigned long long idle; // Idle time or score value
    sds key;                 // Key name
    sds cached;              // Cached SDS object to avoid frequent allocation
    int dbid;                // Database ID
    int slot;                // Slot index
};
Eight Eviction Strategies: Choose Your Game Plan
Redis offers 8 distinct eviction policies, each optimized for different scenarios:
Complete Policy Matrix
Policy | Use Case | Algorithm | Scope |
---|---|---|---|
volatile-lru | Cache with TTL | Least Recently Used | Keys with expiration |
volatile-lfu | Popular data priority | Least Frequently Used | Keys with expiration |
volatile-ttl | Session management | Shortest TTL first | Keys with expiration |
volatile-random | Fair eviction | Random selection | Keys with expiration |
allkeys-lru | General caching | Least Recently Used | All keys |
allkeys-lfu | Hot data retention | Least Frequently Used | All keys |
allkeys-random | Simple eviction | Random selection | All keys |
noeviction | Strict memory | No eviction, reject writes | - |
The Eviction Pool Algorithm: Smart Selection Process
The eviction pool is Redis's optimization masterpiece - a 16-slot buffer that dramatically improves key selection quality:
How It Works (The Magic Behind the Scenes)
Step 1 - Sampling: grab maxmemory-samples random keys (5 by default) from the target dictionary
Step 2 - Scoring: calculate an "idle score" for each sampled key
Step 3 - Pool update: insert the best candidates into the 16-slot pool
Step 4 - Selection: evict the highest-scoring key in the pool
Why This Works:
- Maintains 16 best candidates across multiple sampling rounds
- Much better than naive "pick best of 5" approach
- Balances selection quality with performance overhead
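A minimal sketch of the pool-insertion step makes this concrete. It is not the real implementation: evictionPoolPopulate() in src/evict.c also copies the key into a cached SDS buffer and records the database and slot, and the entry struct and poolInsert() name below are simplifications of my own. The ordering idea is the same, though: the pool stays sorted by ascending score, and a full pool only accepts candidates better than its current worst entry.

#include <string.h>

#define EVPOOL_SIZE 16

/* Simplified pool entry: the real struct also carries a cached sds,
 * the database id and the kvstore slot. */
struct poolEntry {
    unsigned long long idle;   /* higher score = better eviction candidate */
    const char *key;           /* NULL marks an empty slot */
};

/* Insert a sampled key, keeping entries sorted by ascending idle score.
 * The rightmost populated entry is the one evicted first. */
static void poolInsert(struct poolEntry *pool, const char *key,
                       unsigned long long idle) {
    int k = 0;
    /* Skip past entries that score lower than the candidate. */
    while (k < EVPOOL_SIZE && pool[k].key && pool[k].idle < idle) k++;

    if (k == 0 && pool[EVPOOL_SIZE-1].key) return; /* worse than a full pool */

    if (pool[EVPOOL_SIZE-1].key == NULL) {
        /* Room at the tail: shift entries right to open slot k. */
        memmove(pool+k+1, pool+k, sizeof(*pool) * (EVPOOL_SIZE-k-1));
    } else {
        /* Pool is full: drop the worst (leftmost) entry and insert at k-1. */
        k--;
        memmove(pool, pool+1, sizeof(*pool) * k);
    }
    pool[k].key = key;
    pool[k].idle = idle;
}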
Scoring System: How Redis Decides
LRU: idle = estimateObjectIdleTime(kv); // older = higher score
LFU: idle = 255 - LFUDecrAndReturn(kv); // less frequently used = higher score
TTL: idle = ULLONG_MAX - kvobjGetExpire(kv); // expires sooner = higher score
Random Strategy: No Pool Needed
// Round-robin search across databases for a random key
for (i = 0; i < server.dbnum; i++) {
    j = (++next_db) % server.dbnum;
    db = server.db + j;
    kvstore *kvs = (policy == MAXMEMORY_ALLKEYS_RANDOM) ?
                       db->keys : db->expires;
    int slot = kvstoreGetFairRandomDictIndex(kvs);
    de = kvstoreDictGetRandomKey(kvs, slot);
    if (de) {
        bestkey = kvobjGetKey(dictGetKV(de));
        break;
    }
}
LRU vs LFU: The Algorithm Showdown
LRU Implementation: 24-bit Timestamp Magic
#define LRU_BITS 24                // width of the lru field in robj
#define LRU_CLOCK_RESOLUTION 1000  // clock resolution in ms (1-second precision)
#define LRU_CLOCK_MAX ((1<<LRU_BITS)-1)

// Get current LRU clock
unsigned int getLRUClock(void) {
    return (mstime()/LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;
}

// Estimate object idle time in milliseconds
unsigned long long estimateObjectIdleTime(robj *o) {
    unsigned long long lruclock = LRU_CLOCK();
    if (lruclock >= o->lru) {
        return (lruclock - o->lru) * LRU_CLOCK_RESOLUTION;
    } else {
        // Handle clock wraparound
        return (lruclock + (LRU_CLOCK_MAX - o->lru)) * LRU_CLOCK_RESOLUTION;
    }
}
Redis's Clever Trade-off:
- Only 24 bits per object (vs full timestamp)
- 1-second precision (sufficient for most cases)
- O(1) idle time calculation
- Handles clock wraparound gracefully
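To put the 24-bit budget in perspective: at 1-second resolution the clock wraps after 2^24 seconds, roughly 194 days, which is exactly the wraparound case estimateObjectIdleTime() handles above.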
LFU Implementation: Logarithmic Frequency Counter
Brilliant 24-bit Split:
      16 bits          8 bits
+------------------+--------+
| Last access time | LOG_C  |
+------------------+--------+
- 16 bits: Last access time (minute precision)
- 8 bits: Logarithmic frequency counter (0-255)
Why Logarithmic?
- Linear counters saturate too quickly
- Log counters give diminishing returns for high-frequency items
- Probabilistic increment: harder to increment as frequency grows
// Logarithmic increment: the higher the frequency, the harder it is to increment
uint8_t LFULogIncr(uint8_t counter) {
    if (counter == 255) return 255;
    double r = (double)rand()/RAND_MAX;
    double baseval = counter - LFU_INIT_VAL;
    if (baseval < 0) baseval = 0;
    double p = 1.0/(baseval * server.lfu_log_factor + 1);
    if (r < p) counter++;
    return counter;
}

// Time decay: reduce the frequency counter based on elapsed time
unsigned long LFUDecrAndReturn(robj *o) {
    unsigned long ldt = o->lru >> 8;
    unsigned long counter = o->lru & 255;
    unsigned long num_periods = server.lfu_decay_time ?
        LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;
    if (num_periods)
        counter = (num_periods > counter) ? 0 : counter - num_periods;
    return counter;
}
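To get a feel for the increment probability: with the default lfu-log-factor of 10, a key whose counter already sits 100 above LFU_INIT_VAL has p = 1/(100*10+1) ≈ 0.001, so it takes on the order of a thousand further accesses to bump the counter by one more step. That is what keeps the 8-bit counter from saturating on hot keys.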
Eviction Execution Flow
Main Function: performEvictions() (simplified)
int performEvictions(void) {
    /* Condensed outline of performEvictions() in src/evict.c;
     * variable declarations and error handling are omitted. */

    // 1. Safety check
    if (!isSafeToPerformEvictions()) return EVICT_OK;

    // 2. Memory state check: are we over the limit, and by how much?
    if (getMaxmemoryState(&mem_reported, NULL, &mem_tofree, NULL) == C_OK) {
        return EVICT_OK;
    }

    // 3. Policy check
    if (server.maxmemory_policy == MAXMEMORY_NO_EVICTION) {
        return EVICT_FAIL;
    }

    // 4. Loop eviction until enough memory is freed
    while (mem_freed < mem_tofree) {
        bestkey = selectEvictionKey();   /* eviction pool or random pick */
        deleteEvictedKeyAndPropagate(db, keyobj, &key_mem_freed);
        mem_freed += key_mem_freed;
        keys_freed++;

        // Periodic checks every 16 evicted keys
        if (keys_freed % 16 == 0) {
            flushSlavesOutputBuffers();
            checkTimeLimit();            /* may hand remaining work to a time event */
        }
    }
    return EVICT_OK;
}
Time Limit & Progressive Eviction
// Calculate the time limit from the tenacity parameter
static unsigned long evictionTimeLimitUs(void) {
    if (server.maxmemory_eviction_tenacity <= 10) {
        return 50uL * server.maxmemory_eviction_tenacity; // 0-500 microseconds
    }
    if (server.maxmemory_eviction_tenacity < 100) {
        return (unsigned long)(500.0 * pow(1.15, server.maxmemory_eviction_tenacity - 10.0));
    }
    return ULONG_MAX; // No limit
}

// Start background eviction when the time limit is hit
void startEvictionTimeProc(void) {
    if (!isEvictionProcRunning) {
        isEvictionProcRunning = 1;
        aeCreateTimeEvent(server.el, 0, evictionTimeProc, NULL, NULL);
    }
}
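To put the tenacity scale in numbers: the default of 10 yields a 500-microsecond budget per eviction call, a tenacity of 50 works out to roughly 500 * 1.15^40 ≈ 134 ms, and 99 to around two minutes, while 100 removes the limit entirely. When the budget runs out before enough memory is freed, startEvictionTimeProc() schedules the remaining work as a time event so the main event loop stays responsive.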
Performance Optimizations That Matter
- Sampling strategy: 5 sampled keys feed a 16-slot candidate pool
- Time slicing: progressive eviction prevents blocking
- SDS caching: string buffers in the eviction pool are reused
- Lazy free: background deletion for large objects
- Batch processing: replica buffers and the time limit are checked every 16 keys
Production Configuration Guide
Critical Parameters
Parameter | Default | Impact | Tuning Tips |
---|---|---|---|
maxmemory | 0 | Memory limit | Set to ~80% of available RAM |
maxmemory-policy | noeviction | Strategy choice | Use allkeys-lru for caches |
maxmemory-samples | 5 | Selection quality | Increase to 10 for better accuracy |
maxmemory-eviction-tenacity | 10 | Responsiveness | Lower for latency-sensitive apps |
lfu-log-factor | 10 | LFU sensitivity | Higher = more gradual frequency growth |
lfu-decay-time | 1 | LFU time decay | Adjust based on access patterns |
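As a starting point, a cache-style setup might look like the snippet below. These are illustrative values only, not a universal recommendation: size maxmemory for your own instance and revisit the sampling and tenacity knobs against your latency budget.

# redis.conf - illustrative cache-oriented settings
maxmemory 4gb                      # ~80% of a 5 GB instance
maxmemory-policy allkeys-lru       # evict across all keys using approximate LRU
maxmemory-samples 5                # raise to 10 for more accurate selection
maxmemory-eviction-tenacity 10     # default per-call eviction time budget
lazyfree-lazy-eviction yes         # free evicted values asynchronously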
Monitoring & Best Practices
Essential Metrics to Watch
- evicted_keys: how often keys are being evicted
- eviction_exceeded_time: how long memory usage stayed above the limit
- mem_fragmentation_ratio: memory efficiency
- used_memory: current memory usage
- maxmemory: your memory ceiling
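Most of these come straight from INFO: evicted_keys and the eviction-time counters are reported in the stats section, while used_memory, maxmemory and mem_fragmentation_ratio live in the memory section, so redis-cli INFO stats and redis-cli INFO memory together cover the whole list.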
Strategy Selection Guide
Production Recommendations:
Web caching: allkeys-lru or allkeys-lfu
- Most cached data follows access patterns
- Evict across all keys for maximum efficiency
Session storage: volatile-ttl
- Sessions have natural expiration times
- TTL-based eviction respects the session lifecycle
Analytics cache: allkeys-lfu
- Popular queries should stay cached longer
- Frequency matters more than recency
Development/testing: allkeys-random
- Simple and predictable behavior
- Good for non-production environments
Performance Tuning Tips
For high-throughput systems:
- Increase maxmemory-samples to 10-20
- Use maxmemory-eviction-tenacity 20-30
- Monitor eviction_exceeded_time closely
For low-latency systems:
- Keep maxmemory-samples at 5
- Set maxmemory-eviction-tenacity to 5-10
- Consider lazyfree-lazy-eviction yes
For memory-constrained systems:
- Use aggressive eviction policies (allkeys-*)
- Set maxmemory to 70-80% of available RAM
- Monitor the fragmentation ratio regularly
Key Takeaways
- Smart algorithm: the 16-slot eviction pool beats naive sampling
- Balanced trade-offs: approximate algorithms for production performance
- Configurable: 8 policies plus tunable parameters for any use case
- Production-ready: time limits and progressive eviction prevent blocking
This deep dive is based on Redis 8.2 source code analysis. Understanding these internals helps you optimize Redis for your specific use case!
Want to dive deeper? Check out the Redis source code in src/evict.c - it's surprisingly readable and well-commented!