Caching is the difference between a sluggish video platform and one that feels instant. DailyWatch serves trending video content from 8+ regions, and the caching strategy evolved through several iterations before landing on something solid. Here's the architecture.
The Three Cache Layers
Before jumping to Redis, understand what you're layering:
- Page cache — Full HTML output stored on disk or in Redis
- Data cache — Structured query results (video lists, category data)
- Search cache — User query results with TTL-based expiry
Each layer has different invalidation rules and TTLs. Mixing them up is how you end up serving stale content or cache-busting too aggressively.
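To make the lookup order concrete, here's a minimal sketch with each layer modeled as a callable returning a value or `null` on a miss. The names and shape are illustrative, not DailyWatch's actual code; in production each callable would be a Redis lookup with the TTLs discussed below.

```php
// Try each cache layer in order; fall through to the origin (database)
// only when every layer misses. First layer that answers wins.
function layeredLookup(array $layers, string $key, callable $origin): mixed
{
    foreach ($layers as $lookup) {
        $hit = $lookup($key);
        if ($hit !== null) {
            return $hit; // page cache beats data cache beats origin
        }
    }
    return $origin($key); // all layers missed
}
```

The point of the ordering is that the cheapest, most-aggregated representation (full HTML) is checked first, and the origin is only touched when every layer has expired.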
Redis Data Structures for Video Metadata
Video metadata fits naturally into Redis hashes. Each video gets a hash keyed by its ID:
```php
// Store video metadata as a Redis hash
function cacheVideo(Redis $redis, array $video): void {
    $key = "video:{$video['id']}";
    $redis->hMSet($key, [
        'title'        => $video['title'],
        'channel'      => $video['channel_title'],
        'thumbnail'    => $video['thumbnail_url'],
        'views'        => $video['view_count'],
        'published_at' => $video['published_at'],
        'region'       => $video['region'],
    ]);
    $redis->expire($key, 21600); // 6 hours
}

// Batch fetch for listing pages
function getVideos(Redis $redis, array $ids): array {
    $pipe = $redis->pipeline();
    foreach ($ids as $id) {
        $pipe->hGetAll("video:{$id}");
    }
    return $pipe->exec(); // one round trip, results in input order
}
```
Pipelining is essential. A category page might display 30 videos — 30 round trips to Redis would negate the caching benefit entirely.
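One gap worth closing: `hGetAll` returns an empty array for an expired key, so the batch fetch needs miss handling. A hedged sketch of that logic, kept pure for clarity; the `$loader` callable stands in for a single batched database query, and none of these names come from the article's code:

```php
// Split pipeline results into hits and misses, then fill the misses
// from the database in one batched query via $loader.
function mergeWithFallback(array $ids, array $cached, callable $loader): array
{
    $videos = [];
    $misses = [];
    foreach ($ids as $i => $id) {
        if (!empty($cached[$i])) {
            $videos[$id] = $cached[$i]; // pipeline hit
        } else {
            $misses[] = $id;            // empty hash: key expired or never cached
        }
    }
    foreach ($loader($misses) as $video) {
        $videos[$video['id']] = $video; // caller re-caches these via cacheVideo()
    }
    return $videos;
}
```

Keeping the fallback as one batched query matters for the same reason pipelining does: a cold cache should cost one database round trip, not thirty.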
Sorted Sets for Trending Rankings
Trending videos need ordering. Redis sorted sets handle this without re-sorting on every request:
```php
// Update trending score (view velocity = views per hour)
function updateTrendingScore(Redis $redis, string $videoId, float $velocity): void {
    $redis->zAdd('trending:global', $velocity, $videoId);

    // Per-region sets
    $region = getVideoRegion($videoId);
    $redis->zAdd("trending:{$region}", $velocity, $videoId);
}

// Get top 20 trending for a region (member => score, highest first)
function getTrending(Redis $redis, string $region, int $limit = 20): array {
    return $redis->zRevRange("trending:{$region}", 0, $limit - 1, true);
}
```
The zRevRange call returns videos sorted by score in descending order. No SQL query, no sorting in PHP — Redis does it in O(log(N)+M) time.
Search Cache with Smart Key Design
Search queries are expensive. Cache them, but design keys that allow partial invalidation:
```php
// Normalize the query so "Lo-Fi Beats " and "lo-fi beats" share a key
function searchKey(string $query, int $page): string {
    return 'search:' . md5(strtolower(trim($query))) . ":p{$page}";
}

function getCachedSearch(Redis $redis, string $query, int $page): ?array {
    $cached = $redis->get(searchKey($query, $page));
    return $cached ? json_decode($cached, true) : null;
}

function cacheSearch(Redis $redis, string $query, int $page, array $results): void {
    $key = searchKey($query, $page);
    $redis->setex($key, 600, json_encode($results)); // 10 min TTL

    // Track keys for bulk invalidation
    $redis->sAdd('search:keys', $key);
    $redis->expire('search:keys', 3600);
}
```
Search cache TTL is intentionally short (10 minutes). Users expect search to reflect recent content, so stale results feel broken even when technically they're just "slightly old."
Cache Invalidation: The Hard Part
Invalidation runs on two triggers:
- Cron fetch — When new videos arrive, bust related caches
- Admin action — Manual cache clear for specific pages
```php
function invalidateOnFetch(Redis $redis, string $region): void {
    // Clear region-specific trending
    $redis->del("trending:{$region}");

    // Clear home page cache
    $redis->del('page:home');

    // Clear category caches for affected region
    $categories = $redis->sMembers('categories:active');
    $pipe = $redis->pipeline();
    foreach ($categories as $cat) {
        $pipe->del("page:category:{$cat}");
    }
    $pipe->exec();

    // Search cache expires naturally via TTL
}
```
Notice that search cache is not explicitly invalidated — the short TTL handles it. Over-invalidating search would cause cache stampedes during high-traffic periods.
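If stampedes do become a problem, one common softener (an assumption on my part, not something the article claims DailyWatch does) is to add random jitter to the search TTL, so a burst of queries cached at the same moment doesn't expire at the same moment:

```php
// Spread expiry over +/- $spread of the base TTL so hot keys cached
// together don't all miss together.
function jitteredTtl(int $baseSeconds, float $spread = 0.2): int
{
    $delta = (int) round($baseSeconds * $spread);
    return $baseSeconds + random_int(-$delta, $delta);
}

// e.g. $redis->setex($key, jitteredTtl(600), json_encode($results));
```

With the default spread, the 10-minute search TTL lands somewhere between 8 and 12 minutes per key, which is usually enough to desynchronize expiry without making staleness noticeable.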
TTL Strategy Summary
| Cache Type | TTL | Rationale |
|---|---|---|
| Video metadata | 6 hours | Updated on cron fetch cycle |
| Trending sets | Until next fetch | Rebuilt entirely on new data |
| Category pages | 3 hours | Balance freshness vs load |
| Search results | 10 minutes | Users expect near-real-time |
| Home page | 3 hours | Stale-while-revalidate pattern |
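The last row mentions stale-while-revalidate. A minimal sketch of that pattern, assuming a cache entry that stores its own freshness deadline alongside the HTML; the helper names are hypothetical, not the article's:

```php
// Serve whatever copy we have; if it has gone stale, queue a rebuild
// off the request path instead of making the user wait for it.
function serveHome(?array $entry, int $now, callable $queueRebuild): ?string
{
    if ($entry === null) {
        return null;                 // true miss: caller must render inline
    }
    if ($now >= $entry['fresh_until']) {
        $queueRebuild();             // refresh happens in the background
    }
    return $entry['html'];           // serve stale rather than block
}
```

The design choice here is that only a completely empty cache ever blocks a request; a merely stale home page is still served instantly while the next cron or worker cycle replaces it.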
Memory Management
Set maxmemory-policy allkeys-lru in Redis config. When memory fills up, the least recently used keys get evicted first. For DailyWatch, 256MB of Redis handles the entire cache layer comfortably — video metadata is small, and page caches are compressed HTML.
```
# redis.conf
maxmemory 256mb
maxmemory-policy allkeys-lru
save 900 1
save 300 10
```
The save directives create RDB snapshots: every 5 minutes under write-heavy load (at least 10 changes), and every 15 minutes for any change at all. If Redis restarts, you lose at most about 15 minutes of cache — which rebuilds quickly from the primary SQLite database anyway.
Key Takeaway
Don't cache everything in one flat key-value pattern. Match Redis data structures to your access patterns: hashes for entities, sorted sets for rankings, short-TTL strings for search. The right structure eliminates entire categories of bugs.
This article is part of the Building DailyWatch series.