Alessio Michelini

When and How to Use LRU Cache in Node.js Backend Projects

If you've been working with Node.js for a while, you've probably encountered situations where your application makes repeated requests to fetch the same data, whether from a database, an external API, or some expensive computation. And if you're like me, you've probably thought "there must be a better way to handle this."

Well, there is, and it's called caching. But not just any caching: today we're going to talk about LRU (Least Recently Used) caching and when it makes sense to use it in your Node.js applications.

What is an LRU Cache?

Before we jump into the code, let's understand what an LRU cache actually is.

An LRU cache is a data structure that stores a limited number of items and automatically removes the least recently used item when it reaches capacity. Think of it like a small notebook where you keep notes, but you only have space for, let's say, 100 pages. When you need to add a new note and the notebook is full, you tear out the page you haven't looked at for the longest time.

The beauty of LRU caching is that it keeps frequently accessed data readily available while automatically discarding data that hasn't been used recently, all without you having to manage it manually.

When should you use LRU Cache?

Let's be practical here. You should use LRU caching when:

1. You're making repeated expensive operations

If you're querying a database for the same data multiple times, or calling an external API that returns relatively static data, caching can save you a lot of time and resources.

For example, let's say you have an e-commerce application where product details don't change very often:

// Without caching - hits the database every time
const getProduct = async (productId) => {
  const product = await db.query('SELECT * FROM products WHERE id = ?', [productId]);
  return product;
};

If 1000 users are viewing the same product, you're making 1000 database calls for the exact same data. That's wasteful.
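
Here's a minimal sketch of the cached version, assuming the same db helper (the lru-cache package it uses is covered in detail below); the max and ttl values here are arbitrary choices:

const { LRUCache } = require('lru-cache');

// Up to 1000 products, each cached for 10 minutes
const productCache = new LRUCache({ max: 1000, ttl: 1000 * 60 * 10 });

// With caching - only the first request per product hits the database
const getProduct = async (productId) => {
  const cached = productCache.get(productId);
  if (cached) return cached;

  const product = await db.query('SELECT * FROM products WHERE id = ?', [productId]);
  productCache.set(productId, product);
  return product;
};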

2. Your data doesn't change frequently

LRU cache works best with data that's relatively static or changes infrequently. Things like:

  • User profile information
  • Product catalogs
  • Configuration settings
  • API responses that update every few minutes/hours
  • Computed results from heavy calculations

3. You have memory to spare

Remember, caching means storing data in memory. If your application is already running tight on memory, adding a cache might not be the best idea.

4. You need to limit memory usage

Unlike simple object caching where you might accidentally cache unlimited items and run out of memory, LRU cache has a built-in size limit. When it reaches capacity, it automatically evicts the least recently used items.
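
Here's a quick illustration of that eviction behavior with a toy three-item cache (using the lru-cache package introduced later in this post):

const { LRUCache } = require('lru-cache');

const cache = new LRUCache({ max: 3 });

cache.set('a', 1);
cache.set('b', 2);
cache.set('c', 3);

cache.get('a');    // touching 'a' makes it the most recently used
cache.set('d', 4); // cache is full, so the least recently used key ('b') is evicted

console.log(cache.has('b')); // false
console.log(cache.has('a')); // true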

When you shouldn't use LRU Cache

Just as important as knowing when to use it is knowing when not to use it:

1. You're running multiple Node.js processes

This is probably the most important limitation to understand. LRU cache stores data in memory within a single Node.js process. If you're running multiple instances of your application (which you should be doing in production for reliability and performance), each process will have its own separate cache.

Let me explain why this is a problem with a practical example.

Let's say you have 3 Node.js processes running behind a load balancer:

// Process 1, 2, and 3 are all running this code
const userCache = new LRUCache({ max: 100, ttl: 60000 });

const getUser = async (userId) => {
  const cached = userCache.get(userId);
  if (cached) return cached;

  const user = await db.query('SELECT * FROM users WHERE id = ?', [userId]);
  userCache.set(userId, user);
  return user;
};

Now here's what happens:

  • User makes a request that hits Process 1 → cache miss, fetches from DB, caches it
  • User makes another request that hits Process 2 → cache miss again! (different process, different cache)
  • User makes another request that hits Process 3 → cache miss again!

So you're not really caching anything effectively across your application. You're just caching per process, which might give you some benefit, but nowhere near what you'd expect.

Even worse, if you update data in one process and invalidate the cache there, the other processes still have stale data:

// Update happens in Process 1
const updateUser = async (userId, updates) => {
  await db.query('UPDATE users SET ? WHERE id = ?', [updates, userId]);
  userCache.delete(userId); // Only clears cache in Process 1!
};
// Process 2 and 3 still have the old cached data!

This means LRU cache in Node.js is really only suitable for:

  • Small projects running a single Node.js process
  • Development environments
  • Simple scripts or tools that don't need to scale

For production applications that need to scale horizontally, you should use a proper distributed cache like Redis or Memcached instead.

2. Your data changes constantly

If you're caching data that updates every second, you're going to have stale data problems. For example, caching real-time stock prices or live sports scores wouldn't make much sense.

3. Your data is already fast to retrieve

If your operation takes 5 milliseconds, adding a cache might actually slow things down due to the overhead of cache management. Don't optimize what doesn't need optimizing.

4. You need guaranteed fresh data

If you're building a banking application where every transaction must reflect the absolute current state, caching might introduce dangerous inconsistencies.

5. Your data is unique per request

If every request requires different data that's never reused, caching won't help you. For example, caching user-specific data that's only accessed once doesn't make sense.

Let's see it in action

Now that we understand the theory, let's build something practical. We'll use the lru-cache package, which is the most popular LRU cache implementation for Node.js.

First, install the package:

npm install lru-cache

A simple example

Let's say we have a function that fetches user data from a database:

const { LRUCache } = require('lru-cache');
const db = require('./database'); // your database connection

// Create a cache that holds up to 500 items
// Each item expires after 5 minutes
const userCache = new LRUCache({
  max: 500,
  ttl: 1000 * 60 * 5, // 5 minutes in milliseconds
});

const getUser = async (userId) => {
  // Check if we have it in cache
  const cached = userCache.get(userId);
  if (cached) {
    console.log('Cache hit!');
    return cached;
  }

  // Not in cache, fetch from database
  console.log('Cache miss, fetching from DB...');
  const user = await db.query('SELECT * FROM users WHERE id = ?', [userId]);

  // Store in cache for future requests
  userCache.set(userId, user);

  return user;
};

// Usage (inside an async function - top-level await isn't available with require/CommonJS)
const user1 = await getUser(123); // Fetches from DB
const user2 = await getUser(123); // Returns from cache!

A more complex real-world example

Let's build a more practical example with an API that fetches weather data:

const { LRUCache } = require('lru-cache');
const axios = require('axios');

class WeatherService {
  constructor() {
    // Cache weather data for each city
    // Max 100 cities, each cached for 30 minutes
    this.cache = new LRUCache({
      max: 100,
      ttl: 1000 * 60 * 30,
      // Optional: what to do when an item is evicted
      dispose: (value, key) => {
        console.log(`Evicting weather data for ${key}`);
      }
    });
  }

  async getWeather(city) {
    // Create a cache key
    const cacheKey = city.toLowerCase();

    // Try to get from cache
    const cached = this.cache.get(cacheKey);
    if (cached) {
      return {
        ...cached,
        fromCache: true
      };
    }

    // Not in cache, fetch from API
    try {
      const response = await axios.get(
        `https://api.weatherapi.com/v1/current.json`,
        {
          params: {
            key: process.env.WEATHER_API_KEY,
            q: city
          }
        }
      );

      const weatherData = {
        city: response.data.location.name,
        temperature: response.data.current.temp_c,
        condition: response.data.current.condition.text,
        humidity: response.data.current.humidity,
        fetchedAt: new Date().toISOString()
      };

      // Store in cache
      this.cache.set(cacheKey, weatherData);

      return {
        ...weatherData,
        fromCache: false
      };
    } catch (error) {
      console.error('Error fetching weather:', error.message);
      throw new Error('Failed to fetch weather data');
    }
  }

  // Clear cache for a specific city
  clearCity(city) {
    this.cache.delete(city.toLowerCase());
  }

  // Clear entire cache
  clearAll() {
    this.cache.clear();
  }

  // Get cache statistics
  getStats() {
    return {
      size: this.cache.size,
      maxSize: this.cache.max
    };
  }
}

module.exports = WeatherService;

And here's how you'd use it in your Express application:

const express = require('express');
const WeatherService = require('./weatherService');

const app = express();
const weatherService = new WeatherService();

app.get('/weather/:city', async (req, res) => {
  try {
    const { city } = req.params;
    const weather = await weatherService.getWeather(city);
    res.json(weather);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.get('/weather/cache/stats', (req, res) => {
  res.json(weatherService.getStats());
});

app.delete('/weather/cache/:city', (req, res) => {
  const { city } = req.params;
  weatherService.clearCity(city);
  res.json({ message: `Cache cleared for ${city}` });
});

app.listen(3000, () => {
  console.log('Server running on port 3000');
});

Now when you call /weather/london multiple times within 30 minutes, only the first request will hit the external API. The rest will be served from cache, making your application faster and reducing API costs.
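
If you want to watch the fromCache flag flip, you can hit the endpoint twice from a small script (assuming Node 18+ for the global fetch, the server above running locally, and an async context such as an ES module or an async function):

const first = await fetch('http://localhost:3000/weather/london').then((r) => r.json());
console.log(first.fromCache); // false - this request went to the weather API

const second = await fetch('http://localhost:3000/weather/london').then((r) => r.json());
console.log(second.fromCache); // true - served from the LRU cache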

Advanced features you should know about

The lru-cache package has some really useful features:

Size-based eviction

Instead of limiting by number of items, you can limit by memory size:

const cache = new LRUCache({
  max: 500,
  maxSize: 5000, // total size in arbitrary units
  sizeCalculation: (value) => {
    // Calculate size of each item
    return JSON.stringify(value).length;
  }
});

Update TTL on access

Keep frequently accessed items in cache longer:

const cache = new LRUCache({
  max: 100,
  ttl: 1000 * 60 * 5,
  updateAgeOnGet: true // Reset TTL when item is accessed
});

Custom disposal

Do cleanup when items are evicted:

const cache = new LRUCache({
  max: 100,
  dispose: (value, key, reason) => {
    // Clean up resources, log evictions, etc.
    if (reason === 'evict') {
      console.log(`${key} was evicted due to cache being full`);
    }
  }
});

Some gotchas to watch out for

1. Cache invalidation is hard

The hardest problem in computer science is knowing when to invalidate your cache. If you cache user data and the user updates their profile, you need to clear that cache entry:

const updateUser = async (userId, updates) => {
  await db.query('UPDATE users SET ? WHERE id = ?', [updates, userId]);

  // Don't forget to invalidate the cache!
  userCache.delete(userId);
};

2. Don't cache sensitive data carelessly

Be careful about caching sensitive information. If what a user is allowed to see depends on who they are (permissions, personalized pricing, and so on), make sure one user's cached view is never served to another:

// Bad: using just productId as key
const cacheKey = productId;

// Better: include user context if permissions matter
const cacheKey = `${userId}:${productId}`;

3. Memory leaks

While an LRU cache caps the number of entries, by default it doesn't cap how big each entry is. Cache a handful of huge objects and you can still exhaust memory quickly, so be careful about what you cache.
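
One guard worth knowing about: lru-cache can refuse to store oversized entries via its maxEntrySize option. A small sketch (the byte budgets here are arbitrary):

const { LRUCache } = require('lru-cache');

const cache = new LRUCache({
  max: 500,
  maxSize: 5_000_000, // rough total budget, in bytes
  maxEntrySize: 10_000, // any single entry bigger than ~10 KB is simply not stored
  sizeCalculation: (value) => JSON.stringify(value).length
});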

Measuring the impact

You should always measure whether caching is actually helping. Here's a simple way to track hit rates:

class CachedService {
  constructor() {
    this.cache = new LRUCache({ max: 100, ttl: 60000 });
    this.stats = {
      hits: 0,
      misses: 0
    };
  }

  async getData(key) {
    const cached = this.cache.get(key);

    if (cached) {
      this.stats.hits++;
      return cached;
    }

    this.stats.misses++;
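    // fetchData is a placeholder for whatever expensive lookup you're wrapping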
    const data = await this.fetchData(key);
    this.cache.set(key, data);
    return data;
  }

  getHitRate() {
    const total = this.stats.hits + this.stats.misses;
    if (total === 0) return 0;
    return (this.stats.hits / total * 100).toFixed(2);
  }
}

When to migrate to Redis (or another distributed cache)

Now, here's the thing: LRU cache is great for getting started, but as your application grows, you'll hit its limitations pretty quickly.

Signs you need to move to Redis

You should consider migrating to Redis when:

  1. You're running multiple Node.js processes - This is the big one. As soon as you need to scale horizontally, in-memory caching becomes problematic.

  2. You need cache consistency across instances - If one process updates data, all processes need to see the updated cache (see the invalidation sketch after this list).

  3. Your cache needs to survive restarts - LRU cache is ephemeral. When your process restarts, the cache is gone. Redis can persist data to disk.

  4. You need more advanced features - Redis offers pub/sub, sorted sets, lists, and many other data structures that LRU cache doesn't have.
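
For point 2, here's a hedged sketch of one common approach: broadcasting invalidations over Redis pub/sub so each process can drop its own local copy. The channel name and setup are illustrative (using the node-redis v4 API), not a prescribed pattern:

const { LRUCache } = require('lru-cache');
const redis = require('redis');

const localCache = new LRUCache({ max: 500, ttl: 60000 });

const setupInvalidation = async () => {
  // Pub/sub needs a dedicated connection in subscriber mode
  const subscriber = redis.createClient();
  await subscriber.connect();

  // Every process subscribes; whichever process writes to the DB publishes the key
  await subscriber.subscribe('cache-invalidations', (key) => {
    localCache.delete(key);
  });
};

// After a DB update, broadcast so every process drops its stale entry:
// await publisher.publish('cache-invalidations', String(userId));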

The migration is simpler than you think

The good news is that if you've been using LRU cache behind a good abstraction layer (like the CachedService example above), migrating to Redis is straightforward.

Here's how the same weather service would look with Redis:

const redis = require('redis');
const axios = require('axios');

class WeatherService {
  constructor() {
    // node-redis v4+: host and port go under the `socket` option
    this.redisClient = redis.createClient({
      socket: {
        host: process.env.REDIS_HOST || 'localhost',
        port: Number(process.env.REDIS_PORT) || 6379
      }
    });
    // connect() is async; in a real app, await it during startup
    this.redisClient.connect().catch(console.error);
  }

  async getWeather(city) {
    const cacheKey = `weather:${city.toLowerCase()}`;

    // Try to get from Redis
    const cached = await this.redisClient.get(cacheKey);
    if (cached) {
      return {
        ...JSON.parse(cached),
        fromCache: true
      };
    }

    // Not in cache, fetch from API
    try {
      const response = await axios.get(
        `https://api.weatherapi.com/v1/current.json`,
        {
          params: {
            key: process.env.WEATHER_API_KEY,
            q: city
          }
        }
      );

      const weatherData = {
        city: response.data.location.name,
        temperature: response.data.current.temp_c,
        condition: response.data.current.condition.text,
        humidity: response.data.current.humidity,
        fetchedAt: new Date().toISOString()
      };

      // Store in Redis with 30 minute expiration
      await this.redisClient.setEx(
        cacheKey,
        1800, // 30 minutes in seconds
        JSON.stringify(weatherData)
      );

      return {
        ...weatherData,
        fromCache: false
      };
    } catch (error) {
      console.error('Error fetching weather:', error.message);
      throw new Error('Failed to fetch weather data');
    }
  }

  async clearCity(city) {
    await this.redisClient.del(`weather:${city.toLowerCase()}`);
  }
}

As you can see, the interface is almost identical. The main difference is that Redis operations are asynchronous (they return promises), while LRU cache operations are synchronous.
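
One pattern that makes the swap painless is putting callers behind an async get/set interface from day one, so the in-memory and Redis backends become interchangeable. A sketch (the class names and JSON serialization choice are mine, not from any library):

const { LRUCache } = require('lru-cache');

// In-memory backend: lru-cache behind an async interface
class MemoryCache {
  constructor(options) {
    this.cache = new LRUCache(options);
  }
  async get(key) {
    return this.cache.get(key);
  }
  async set(key, value, ttlMs) {
    this.cache.set(key, value, { ttl: ttlMs });
  }
}

// Redis backend with the same shape (expects a connected node-redis v4 client)
class RedisCache {
  constructor(client) {
    this.client = client;
  }
  async get(key) {
    const raw = await this.client.get(key);
    return raw ? JSON.parse(raw) : undefined;
  }
  async set(key, value, ttlMs) {
    await this.client.setEx(key, Math.ceil(ttlMs / 1000), JSON.stringify(value));
  }
}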

My recommendation

Start with LRU cache if:

  • You're building a prototype or small project
  • You're running a single Node.js process
  • You want to get something working quickly without infrastructure complexity
  • You're just learning about caching concepts

Move to Redis when:

  • You need to scale to multiple processes or servers
  • You need cache persistence
  • You need cache sharing across different services
  • Your project is going to production

Don't feel bad about starting with LRU cache and migrating later. It's a perfectly valid approach, and the patterns you learn with LRU cache transfer directly to Redis.

Conclusion

LRU caching is a powerful tool that can significantly improve your application's performance when used correctly. But it's important to understand its limitations, especially around scaling.

The key takeaway: LRU cache is perfect for small projects or single Node.js processes, but becomes problematic at scale.

Here's a quick decision checklist:

  • ✅ Use it for expensive operations that return the same data repeatedly
  • ✅ Use it for relatively static data that doesn't change often
  • ✅ Use it when you need automatic memory management
  • ✅ Use it for prototypes and small projects with a single process
  • ✅ Use it to learn caching patterns before moving to Redis
  • ❌ Don't use it for data that changes constantly
  • ❌ Don't use it when operations are already fast
  • ❌ Don't use it for unique, one-time data
  • ❌ Don't use it when you have multiple Node.js processes (use Redis instead)
  • ❌ Don't use it when you need cache consistency across instances

The reality is that most production Node.js applications run multiple processes for reliability and performance. As soon as you spin up a second instance, LRU cache loses much of its effectiveness because each process has its own isolated cache. This leads to cache misses, stale data, and inconsistent behavior.

That said, LRU cache is still valuable:

  • It's a great learning tool to understand caching concepts
  • It's perfect for CLI tools, scripts, or single-process applications
  • It can be useful as a secondary cache layer in front of Redis
  • It's excellent for local development and testing

Remember, caching is an optimization technique. As with all optimizations, measure first, optimize second. Don't add complexity to your application unless you have data showing it's actually helping.

And when you're ready to scale beyond a single process, the patterns you learned with LRU cache will transfer directly to Redis. The migration path is straightforward, and you'll be glad you started with something simple to understand the fundamentals.

Finally, always remember the two hardest things in computer science: cache invalidation and naming things. Make sure you have a clear strategy for both, especially when dealing with distributed systems!

DISCLAIMER
I used AI to help me write this article, mostly for grammar and to create quick code samples.
