Rhaqim
Speed vs Simplicity: Choosing the Right Cache

Why I Switched from HashiCorp LRU to Ristretto for High-Performance Caching in Go

While working on Buckt, I implemented a caching layer to speed up repeated file reads and downloads. I started with HashiCorp's golang-lru, which was simple and easy to integrate. But as the system scaled and concurrency increased, it became clear that it couldn't keep up.

This post highlights the issues I encountered and why Ristretto ended up being a much better fit.


❌ Issues with HashiCorp LRU

While the lru package is solid and predictable, I ran into a few limitations:

  • Blocking Writes - All Add() operations lock the cache, leading to bottlenecks under concurrent load.
  • Fixed Capacity, Not Cost-Based - It evicts based on item count, not memory usage — inefficient when storing large items like files.
  • No Native Metrics - Hit/miss tracking must be implemented manually.
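To make the last two points concrete, here is a minimal stdlib-only sketch (not hashicorp/lru itself) of what a single-lock cache with hand-rolled hit/miss counters looks like; every writer contends on the same mutex, which is exactly the bottleneck described above:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countingCache is an illustrative stand-in for an LRU-style cache:
// all writes serialize on one lock, and hit/miss counters have to be
// bolted on by hand.
type countingCache struct {
	mu     sync.Mutex
	items  map[string][]byte
	hits   atomic.Uint64
	misses atomic.Uint64
}

func newCountingCache() *countingCache {
	return &countingCache{items: make(map[string][]byte)}
}

func (c *countingCache) Add(key string, val []byte) {
	c.mu.Lock() // single lock: concurrent writers block each other
	defer c.mu.Unlock()
	c.items[key] = val
}

func (c *countingCache) Get(key string) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	val, ok := c.items[key]
	if ok {
		c.hits.Add(1)
	} else {
		c.misses.Add(1)
	}
	return val, ok
}

func main() {
	c := newCountingCache()
	c.Add("a", []byte("payload"))
	c.Get("a")
	c.Get("b")
	fmt.Println(c.hits.Load(), c.misses.Load()) // 1 1
}
```

The map stand-in has no eviction at all; the point is only the locking and metrics boilerplate.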

✅ Why Ristretto?

Ristretto, created by Dgraph Labs, offers:

  • Non-Blocking Writes - Writes are buffered and processed asynchronously, preventing contention during high loads.

  • Cost-Aware Eviction - You can assign a "cost" (e.g., byte size), and the cache evicts based on total cost rather than item count.

  • TinyLFU Eviction Strategy - More efficient and accurate for real-world usage patterns than basic LRU.

  • Built-In Concurrency Support - Designed to scale with multiple goroutines hitting the cache simultaneously.


🔧 How I Use It

cache, _ := ristretto.NewCache(&ristretto.Config[string, []byte]{
    NumCounters: 1e7,     // number of keys to track frequency of (~10x expected items)
    MaxCost:     1 << 30, // maximum total cost (1 GiB when cost is byte size)
    BufferItems: 64,      // number of keys per Get buffer (64 is the recommended value)
})
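With the cache configured, reads and writes are straightforward. This is a minimal sketch, not Buckt's actual code: the key and import path are illustrative (I'm assuming the v2 module path, which is where the generic `Config` lives), and the cost passed to `Set` is the value's byte length:

```go
package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/v2" // v2 path assumed; adjust to your go.mod
)

func main() {
	cache, err := ristretto.NewCache(&ristretto.Config[string, []byte]{
		NumCounters: 1e7,
		MaxCost:     1 << 30,
		BufferItems: 64,
	})
	if err != nil {
		panic(err)
	}
	defer cache.Close()

	// Charge the value's byte length against MaxCost.
	data := []byte("file contents")
	cache.Set("files/report.pdf", data, int64(len(data)))

	// Set is buffered and asynchronous; Wait flushes the buffer.
	// Fine in a demo or test, but avoid it on hot paths.
	cache.Wait()

	if val, ok := cache.Get("files/report.pdf"); ok {
		fmt.Printf("served %d bytes from cache\n", len(val))
	}
}
```

Note that `Set` can silently drop an entry under contention; that's by design, and it's why the `Wait()` call is needed before the `Get` in a linear example like this.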

🚀 Improvements After the Switch

  • Requests are faster, especially repeated ones — no disk I/O.
  • Concurrency issues disappeared — no lock contention on write.
  • Memory usage is under control with cost-based eviction.

⚠️ Caveats

  • Avoid calling cache.Wait() in critical paths — it blocks until the write buffer is flushed.
  • Eviction is probabilistic, so results may vary slightly.
  • You must define the cost meaningfully for your use case — if every entry costs 1, MaxCost degenerates back into an item count.
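For file caching, a "meaningful cost" is simply the entry's footprint in bytes. A hypothetical helper (the name and the key-plus-value accounting are my choices, not Buckt's):

```go
package main

import "fmt"

// entryCost is a hypothetical cost function: charge the key and value
// byte lengths so that MaxCost approximates real memory usage.
func entryCost(key string, val []byte) int64 {
	return int64(len(key) + len(val))
}

func main() {
	cost := entryCost("files/report.pdf", make([]byte, 4096))
	fmt.Println(cost) // 4112
}
```

Whatever function you pick, use it consistently on every `Set` call, or the MaxCost budget stops meaning anything.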

🧪 Bonus: Cache Hit Tracking

type FileCache struct {
    cache  *ristretto.Cache[string, []byte]
    hits   atomic.Uint64
    misses atomic.Uint64
}

func (fc *FileCache) Get(key string) ([]byte, bool) {
    val, ok := fc.cache.Get(key)
    if ok {
        fc.hits.Add(1)
    } else {
        fc.misses.Add(1)
    }
    return val, ok
}
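From those two counters, a hit ratio is one division away. Here is a self-contained sketch of just the metrics part (`cacheMetrics` is my illustrative name, not Buckt's), using the same atomic counters as above:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// cacheMetrics holds hit/miss counters that are safe to bump from
// many goroutines at once.
type cacheMetrics struct {
	hits   atomic.Uint64
	misses atomic.Uint64
}

// HitRatio returns hits / (hits + misses), or 0 before any lookups.
func (m *cacheMetrics) HitRatio() float64 {
	h, mi := m.hits.Load(), m.misses.Load()
	total := h + mi
	if total == 0 {
		return 0
	}
	return float64(h) / float64(total)
}

func main() {
	var m cacheMetrics
	for i := 0; i < 9; i++ {
		m.hits.Add(1)
	}
	m.misses.Add(1)
	fmt.Println(m.HitRatio()) // 0.9
}
```

Exposing this via an endpoint or logging it periodically makes it easy to see whether NumCounters and MaxCost are tuned sensibly.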

🔚 Final Thoughts

HashiCorp’s LRU cache is great for simple use cases, but when performance and scalability matter, especially with concurrent file reads and writes, Ristretto is a better fit.

Highly recommend it for high-performance Go applications.


🔗 Resources

If you've used Ristretto, let me know how and where it worked for you.
