Building a Pluggable Cache: A Low-Level Design Walkthrough

Caching is an essential technique for building high-performance, scalable systems. But implementing a cache that supports different write strategies, eviction policies, and invalidation mechanisms isn’t trivial. In this blog, we’ll walk through a complete Low-Level Design (LLD) of a cache in Java, using proven design patterns to make it modular, extensible, and production-ready.


1. Design Goals

We want a cache system that supports:

  • Multiple write strategies: write-through, write-back, write-around
  • Multiple eviction policies: LRU, LFU, TTL
  • Invalidation mechanisms: listener-based notifications
  • Persistence abstraction: in-memory, database, or external store
  • Metrics: cache hits and misses
  • Extensibility via clean interfaces and design patterns

2. Design Patterns Used

  1. Strategy Pattern → for eviction and write strategies
  2. Factory + Builder → for cache creation with flexible configs
  3. Observer Pattern → for invalidation listeners
  4. Singleton → for metrics
  5. Decorator → for TTL-based eviction wrapping other policies

3. Core Interfaces

We define the key contracts:

  • Cache<K, V> → main cache operations
  • EvictionPolicy<K> → how entries are evicted
  • WriteStrategy<K, V> → how writes propagate to persistence
  • Persistence<K, V> → abstraction for underlying storage
  • InvalidationListener<K> → observer for eviction/invalidations

4. Eviction Policies

  • LRU (Least Recently Used): Removes the least recently accessed entries first.
  • LFU (Least Frequently Used): Removes least frequently used entries.
  • TTL Decorator: Wraps another policy and enforces time-to-live expiration.

This uses the Strategy Pattern, so you can swap policies without changing the core cache logic.
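
As a concrete illustration, here is a minimal LRU sketch against the EvictionPolicy<K> interface shown later in Section 8. The LinkedHashSet-based approach and the capacity constructor (matching the factory example in Section 9) are assumptions based on the class stubs, not the exact implementation.

import java.util.LinkedHashSet;

class LruEvictionPolicy<K> implements EvictionPolicy<K> {
    // Insertion order doubles as recency order: re-inserting a key moves it to the tail.
    private final LinkedHashSet<K> order = new LinkedHashSet<>();
    private final int capacity; // stored to mirror the factory config; the cache decides when to evict

    LruEvictionPolicy(int capacity) {
        this.capacity = capacity;
    }

    @Override
    public void recordAccess(K key) {
        order.remove(key); // drop the old position, if any
        order.add(key);    // re-append as most recently used
    }

    @Override
    public K evictKey() {
        if (order.isEmpty()) return null;
        K oldest = order.iterator().next(); // head = least recently used
        order.remove(oldest);
        return oldest;
    }
}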


5. Write Strategies

  • Write-Through: Write synchronously to cache and persistence.
  • Write-Back (Write-Behind): Write to cache first, persist asynchronously.
  • Write-Around: Write directly to persistence; cache updates only on reads.

This allows you to tune for consistency vs. performance.
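
For concreteness, here is a hedged sketch of write-through and write-back against the WriteStrategy<K, V> and Persistence<K, V> interfaces from Section 8 (write-around would simply skip the cache-update path on writes). The single-thread executor is an illustrative choice for the asynchronous path, not something the design prescribes.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class WriteThroughStrategy<K, V> implements WriteStrategy<K, V> {
    @Override
    public void write(K key, V value, Persistence<K, V> p) {
        p.write(key, value); // synchronous: the caller waits for the backing store
    }
}

class WriteBackStrategy<K, V> implements WriteStrategy<K, V> {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    @Override
    public void write(K key, V value, Persistence<K, V> p) {
        pool.submit(() -> p.write(key, value)); // asynchronous: persisted in the background
    }
}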


6. Invalidation & Observers

We support listeners to notify when keys are invalidated (e.g., evicted). This is useful for keeping multiple caches in sync or for external logging.
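
A possible shape for this Observer wiring, assuming a functional listener interface plus a small helper the cache can delegate to. The onInvalidate method name and the InvalidationSupport helper are illustrative; only addInvalidationListener appears later in the post.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

@FunctionalInterface
interface InvalidationListener<K> {
    void onInvalidate(K key);
}

class InvalidationSupport<K> {
    private final List<InvalidationListener<K>> listeners = new CopyOnWriteArrayList<>();

    void addInvalidationListener(InvalidationListener<K> listener) {
        listeners.add(listener);
    }

    void notifyInvalidated(K key) {
        // Called by the cache after an eviction or explicit invalidation.
        for (InvalidationListener<K> listener : listeners) {
            listener.onInvalidate(key);
        }
    }
}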


7. Persistence Layer

We abstract persistence so you can plug in:

  • In-memory maps (for demo/testing)
  • Databases
  • Distributed caches like Redis
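
A minimal in-memory implementation of the Persistence<K, V> contract might look like this; the ConcurrentHashMap is assumed here for thread safety, since the stub in the next section only says "map".

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

class InMemoryPersistence<K, V> implements Persistence<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();

    @Override
    public void write(K k, V v) {
        store.put(k, v);
    }

    @Override
    public Optional<V> read(K k) {
        return Optional.ofNullable(store.get(k)); // empty when the key is absent
    }
}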

8. Implementation Highlights

Here’s the structured code (simplified overview):

import java.util.Optional;

interface Cache<K, V> { V get(K key); void put(K key, V value); void remove(K key); }
interface EvictionPolicy<K> { void recordAccess(K key); K evictKey(); }
interface WriteStrategy<K, V> { void write(K key, V value, Persistence<K, V> p); }
interface Persistence<K, V> { void write(K k, V v); Optional<V> read(K k); }

class SimpleCache<K, V> implements Cache<K, V> { /* wires strategies & policies */ }

class LruEvictionPolicy<K> implements EvictionPolicy<K> { /* linked hash set */ }
class TtlEvictionDecorator<K> implements EvictionPolicy<K> { /* adds expiry */ }

class WriteThroughStrategy<K, V> implements WriteStrategy<K, V> { /* sync */ }
class WriteBackStrategy<K, V> implements WriteStrategy<K, V> { /* async */ }

class InMemoryPersistence<K, V> implements Persistence<K, V> { /* map */ }
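To make the wiring concrete, here is a rough sketch of how SimpleCache might put these pieces together. The constructor shape, the hit/miss handling, and the eviction point are assumptions that follow from the interfaces above; the listener and metrics hooks from later sections are omitted for brevity.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class SimpleCache<K, V> implements Cache<K, V> {
    private final Map<K, V> store = new HashMap<>();
    private final int capacity;
    private final EvictionPolicy<K> policy;
    private final WriteStrategy<K, V> writeStrategy;
    private final Persistence<K, V> persistence;

    SimpleCache(int capacity, EvictionPolicy<K> policy,
                WriteStrategy<K, V> writeStrategy, Persistence<K, V> persistence) {
        this.capacity = capacity;
        this.policy = policy;
        this.writeStrategy = writeStrategy;
        this.persistence = persistence;
    }

    @Override
    public V get(K key) {
        V value = store.get(key);
        if (value != null) {                        // hit
            policy.recordAccess(key);
            return value;
        }
        Optional<V> loaded = persistence.read(key); // miss: fall back to persistence
        loaded.ifPresent(v -> {
            store.put(key, v);                      // populate the cache without re-writing to the store
            policy.recordAccess(key);
        });
        return loaded.orElse(null);
    }

    @Override
    public void put(K key, V value) {
        if (store.size() >= capacity && !store.containsKey(key)) {
            K victim = policy.evictKey();           // make room before inserting a new key
            if (victim != null) store.remove(victim);
        }
        store.put(key, value);
        policy.recordAccess(key);
        writeStrategy.write(key, value, persistence); // delegate to the configured strategy
    }

    @Override
    public void remove(K key) {
        store.remove(key);
    }
}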

9. Factory & Config

We add a CacheFactory with a CacheConfig builder to simplify creation:

CacheConfig cfg = new CacheConfig();
cfg.capacity = 1000;
cfg.policy = new LruEvictionPolicy<>(cfg.capacity);
cfg.writeStrategy = new WriteThroughStrategy<>();
cfg.persistence = new InMemoryPersistence<>();

Cache<String, String> cache = CacheFactory.create(cfg);

This ensures a clean API for clients.
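
Under the hood, the factory can simply hand the configured pieces to SimpleCache. One possible shape is below; the type-parameterized CacheConfig is an assumption (the snippet above uses it without type arguments, which still compiles, just with unchecked warnings).

class CacheConfig<K, V> {
    int capacity;
    EvictionPolicy<K> policy;
    WriteStrategy<K, V> writeStrategy;
    Persistence<K, V> persistence;
}

class CacheFactory {
    static <K, V> Cache<K, V> create(CacheConfig<K, V> cfg) {
        // Wire the configured policy, write strategy, and persistence into a SimpleCache.
        return new SimpleCache<>(cfg.capacity, cfg.policy, cfg.writeStrategy, cfg.persistence);
    }
}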


10. Example Usage

cache.put("a", "1");
cache.put("b", "2");
System.out.println(cache.get("a")); // hit
System.out.println(cache.get("c")); // miss → load from persistence

We can also attach listeners:

((SimpleCache<String, String>) cache).addInvalidationListener(key ->
    System.out.println("Invalidated: " + key));

11. Metrics

We track hits and misses with a Singleton CacheMetrics class:

System.out.println("Hits=" + CacheMetrics.get().hits());
System.out.println("Misses=" + CacheMetrics.get().misses());
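A plausible Singleton-style CacheMetrics behind those calls; the recordHit/recordMiss methods the cache would invoke internally are assumed names.

import java.util.concurrent.atomic.LongAdder;

class CacheMetrics {
    private static final CacheMetrics INSTANCE = new CacheMetrics();

    private final LongAdder hitCount = new LongAdder();
    private final LongAdder missCount = new LongAdder();

    private CacheMetrics() { }          // no external instantiation

    static CacheMetrics get() {
        return INSTANCE;
    }

    void recordHit()  { hitCount.increment(); }
    void recordMiss() { missCount.increment(); }

    long hits()   { return hitCount.sum(); }
    long misses() { return missCount.sum(); }
}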

12. Putting It All Together

  • Use LRU + Write-Through for strong consistency.
  • Use LFU + Write-Back for performance under write-heavy workloads with skewed access patterns.
  • Use TTL + Write-Around for freshness-sensitive data (a wiring sketch follows below).
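
As an example of the last combination, the TTL decorator simply wraps another policy at construction time. The constructor shape below (a wrapped policy plus a TTL in milliseconds) and the WriteAroundStrategy class name are assumptions consistent with the naming in Section 8, shown only to illustrate the composition:

CacheConfig cfg = new CacheConfig();
cfg.capacity = 500;
// Decorator: TTL expiry layered on top of LRU ordering (assumed constructor: wrapped policy + TTL in ms).
cfg.policy = new TtlEvictionDecorator<>(new LruEvictionPolicy<>(cfg.capacity), 30_000L);
cfg.writeStrategy = new WriteAroundStrategy<>();   // assumed name, following WriteThroughStrategy/WriteBackStrategy
cfg.persistence = new InMemoryPersistence<>();

Cache<String, String> cache = CacheFactory.create(cfg);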

13. Conclusion

By applying design patterns and modular abstractions, we built a pluggable, extensible cache system. The system allows you to:

  • Swap eviction policies
  • Choose write strategies based on consistency vs. performance
  • Add listeners for invalidation
  • Extend persistence layers

This design can scale from a simple in-memory cache to a distributed system with persistence in databases or Redis.


👉 Next steps: we could extend this with distributed cache support, real LFU implementation, and Spring Boot integration with Redis.
