Michał Miler for u11d • Originally published at u11d.com

Medusa V1 Redis Caching Guide: Boost API Performance 77% Faster

Building a high-performance e-commerce store isn't just about having great products - it's about delivering lightning-fast experiences that keep customers engaged. In the world of headless commerce, where API response times directly impact user experience, performance optimization becomes critical. This comprehensive guide shows you how to implement Redis caching in Medusa V1 to dramatically improve your store's performance, complete with real-world benchmarks and production-ready code.

Why Performance Matters in E-commerce

The Cost of Slow Stores

Modern e-commerce customers expect instant responses. Studies show that:

  • A 1-second delay can reduce conversions by 7%
  • 53% of mobile users abandon sites that take longer than 3 seconds to load
  • Fast-loading stores have higher SEO rankings and better user engagement
  • Performance directly impacts revenue - especially during high-traffic periods like Black Friday

Common Performance Bottlenecks

Before diving into solutions, let's identify the typical performance challenges:

  • Database Query Overhead: Repeated queries for popular products, categories, and configurations
  • Complex Product Filtering: Multi-attribute searches that hit the database hard
  • Concurrent User Load: Multiple users accessing the same data simultaneously

Performance Baseline: Before Optimization

Let's establish our performance baseline using autocannon to simulate realistic e-commerce traffic (a sample benchmark script follows the test scenario below):

Test Scenario

  • Endpoint: Product listing API (/store/products)
  • Load: 500 concurrent virtual users
  • Requests: 10,000 total product fetches
  • Environment: Standard Medusa V1 Starter (create-medusa-app) with PostgreSQL
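
With autocannon's programmatic API, a run along these lines reproduces the scenario. The URL and port are assumptions based on Medusa's local defaults (the CLI equivalent would be npx autocannon -c 500 -a 10000 <url>):

// benchmark.ts — sketch of the load test described above.
import autocannon from "autocannon";

async function runBenchmark() {
  const result = await autocannon({
    url: "http://localhost:9000/store/products", // product listing endpoint
    connections: 500, // 500 concurrent virtual users
    amount: 10000, // 10,000 total requests
  });

  // autocannon reports latency percentiles in milliseconds.
  console.log(`p50: ${result.latency.p50} ms`);
  console.log(`p99: ${result.latency.p99} ms`);
  console.log(`avg: ${result.latency.average} ms`);
}

runBenchmark().catch(console.error);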

Baseline Results (Without Caching)

Stat     Latency
2.5%     3,430 ms
50%      4,885 ms
97.5%    8,397 ms
99%      9,070 ms
Avg      5,054.33 ms
Stdev    965.89 ms
Max      9,452 ms

Analysis: The median response time of 4.9 seconds is unacceptable for modern e-commerce. Peak latencies approaching 10 seconds would drive massive customer abandonment.

Implementing Redis Caching: Architecture & Code

Redis Service Foundation

Our caching implementation starts with a robust Redis service that handles connection management and basic operations. You can find the complete implementation in our GitHub repository.

// src/services/redis.ts
import { Logger } from "@medusajs/medusa";
import { Lifetime } from "awilix";
import Redis from "ioredis";

export class RedisService {
  static LIFE_TIME = Lifetime.SINGLETON;

  // Fall back to a local instance when REDIS_URL is not set.
  private readonly client = new Redis(
    process.env.REDIS_URL ?? "redis://localhost:6379"
  );
  private logger: Logger;

  constructor({ logger }: { logger: Logger }) {
    this.logger = logger;
  }

  async set(key: string, value: string | number, ttl?: number) {
    try {
      if (ttl) {
        await this.client.set(key, value, "EX", ttl);
      } else {
        await this.client.set(key, value);
      }
      this.logger.debug(`[redis] Set key "${key}", TTL ${ttl || "infinite"}`);
    } catch (error) {
      this.logger.error(`[redis] Failed to set key "${key}": ${error}`);
    }
  }

  async get(key: string): Promise<string | null> {
    try {
      const value = await this.client.get(key);
      this.logger.debug(`[redis] Get key "${key}": ${value ? "HIT" : "MISS"}`);
      return value;
    } catch (error) {
      this.logger.error(`[redis] Failed to get key "${key}": ${error}`);
      return null;
    }
  }
}

Cache Provider with Advanced Features

The CacheProviderService implements sophisticated caching patterns including cache locking to prevent thundering herd problems:

// src/services/cache-provider.ts
import { Lifetime } from "awilix";
import { RedisService } from "./redis";

// Helper types (the full definitions live in the repository).
type Seconds = number;
export interface CacheIdentifier {
  namespace: string;
  parameters: Record<string, unknown>;
}

export class CacheProviderService {
  static LIFE_TIME = Lifetime.SINGLETON;

  private readonly redisService: RedisService;
  private readonly locks: Map<string, Promise<any>> = new Map();
  private readonly keyPrefix = "medusa:cache";

  constructor({ redisService }: { redisService: RedisService }) {
    this.redisService = redisService;
  }

  async withCache<T>(
    identifier: CacheIdentifier,
    dataProvider: () => Promise<T>,
    timeToLive: Seconds,
    lockTimeout: Seconds = 30
  ): Promise<T> {
    const resolvedKey = this.generateCacheKey(identifier);

    // Serve from cache when possible. The strict null check keeps
    // legitimately falsy cached values from being treated as misses.
    const cachedValue = await this.fetchFromCache<T>(resolvedKey);
    if (cachedValue !== null) {
      return cachedValue;
    }

    // Another request is already computing this key: wait for it
    // instead of hitting the database again.
    let lockPromise = this.locks.get(resolvedKey);
    if (lockPromise) {
      return await this.waitForLockWithTimeout(
        lockPromise,
        resolvedKey,
        lockTimeout,
        dataProvider
      );
    }

    // First request for this key: compute, cache, and share the promise.
    lockPromise = this.executeWithLock(identifier, dataProvider, timeToLive);
    this.locks.set(resolvedKey, lockPromise);

    try {
      return await lockPromise;
    } finally {
      this.locks.delete(resolvedKey);
    }
  }

  // generateCacheKey, fetchFromCache, executeWithLock, and
  // waitForLockWithTimeout are sketched below.
}

Why Cache Locking Matters: When multiple requests hit the same uncached endpoint simultaneously, each of them triggers its own database query unless a lock is in place. With 500 concurrent users, this creates a "cache stampede" that can overwhelm your database.
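
The private helpers referenced by withCache are part of the full implementation in the repository; the minimal sketches below show how they might look (the repository versions may differ in detail):

// Inside CacheProviderService — hypothetical sketches of its private helpers.

private generateCacheKey({ namespace, parameters }: CacheIdentifier): string {
  // Deterministic key: prefix, namespace, then the serialized parameters.
  return `${this.keyPrefix}:${namespace}:${JSON.stringify(parameters)}`;
}

private async fetchFromCache<T>(key: string): Promise<T | null> {
  const raw = await this.redisService.get(key);
  return raw === null ? null : (JSON.parse(raw) as T);
}

private async executeWithLock<T>(
  identifier: CacheIdentifier,
  dataProvider: () => Promise<T>,
  timeToLive: Seconds
): Promise<T> {
  // Only the lock holder reaches the database; the result is cached so
  // concurrent waiters and later requests both get a hit.
  const value = await dataProvider();
  await this.redisService.set(
    this.generateCacheKey(identifier),
    JSON.stringify(value),
    timeToLive
  );
  return value;
}

private async waitForLockWithTimeout<T>(
  lockPromise: Promise<T>,
  key: string,
  lockTimeout: Seconds,
  dataProvider: () => Promise<T>
): Promise<T> {
  // Wait for the in-flight computation, but not forever: if the lock
  // holder stalls past lockTimeout, fall back to a direct fetch.
  let timer: NodeJS.Timeout | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Lock timeout for key "${key}"`)),
      lockTimeout * 1000
    );
  });
  try {
    return await Promise.race([lockPromise, timeout]);
  } catch {
    return dataProvider();
  } finally {
    clearTimeout(timer);
  }
}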

Enhanced Product Service with Caching

Medusa V1 lets you override its default services, demonstrated here by extending the core ProductService. This approach lets us wrap Medusa's core product operations with caching: routing listAndCount through the CacheProviderService with a 1-minute TTL significantly reduces database load and improves response times.

// src/services/product.ts
import { Product, ProductService as MedusaProductService } from "@medusajs/medusa";
import { FindProductConfig, ProductSelector } from "@medusajs/medusa/dist/types/product";
import { CacheProviderService } from "./cache-provider";

// Project-local cache namespace (full definition in the repository).
enum CacheNamespace {
  PRODUCT = "product",
}

type InjectedDependencies = {
  cacheProviderService: CacheProviderService;
};

export class ProductService extends MedusaProductService {
  protected readonly cacheProviderService: CacheProviderService;

  constructor(container: InjectedDependencies) {
    // The container also carries the core service's own dependencies.
    super(container as any);
    this.cacheProviderService = container.cacheProviderService;
  }

  listAndCount(
    selector: ProductSelector,
    config?: FindProductConfig
  ): Promise<[Product[], number]> {
    return this.cacheProviderService.withCache(
      {
        namespace: CacheNamespace.PRODUCT,
        parameters: { selector, config },
      },
      async () => super.listAndCount(selector, config),
      60 // 1 minute TTL
    );
  }
}

Performance Results: After Optimization

After implementing Redis caching, we reran the test with the same parameters and got the following results:

Stat     Latency
2.5%     1,038 ms
50%      1,103 ms
97.5%    1,773 ms
99%      2,278 ms
Avg      1,158.78 ms
Stdev    223.06 ms
Max      3,510 ms

Performance Improvements Summary

Metric              Before Caching    After Caching    Improvement
Median Latency      4,885 ms          1,103 ms         77% faster
Average Latency     5,054 ms          1,158 ms         77% faster
99th Percentile     9,070 ms          2,278 ms         75% faster
Max Latency         9,452 ms          3,510 ms         63% faster

Analysis:

  • Response times improved by about 70-80%
  • Latency variance decreased significantly (better consistency)
  • Maximum latency dropped below 4 seconds, keeping more users engaged

These dramatic improvements prove that proper Redis caching implementation is crucial for achieving production-grade performance in Medusa V1 stores.

Cache Strategy and Business Logic

Choosing the right Time To Live (TTL) for cached data is crucial for balancing performance and data freshness. For example, product listings can be cached for 60 seconds to ensure a good balance between freshness and performance, while individual products, which change less frequently, can have a TTL of 300 seconds. Categories and shipping options, being relatively static, can have longer TTLs of 600 and 1800 seconds, respectively. User sessions, on the other hand, require a balance between security and user experience, making a TTL of 3600 seconds appropriate.
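
The same strategy, written down as a configuration object (values in seconds; the constant and its names are illustrative rather than taken from the repository):

// Hypothetical TTL configuration mirroring the strategy above.
export const CACHE_TTL = {
  PRODUCT_LIST: 60, // listings change often; keep them fresh
  PRODUCT_DETAIL: 300, // individual products change less frequently
  CATEGORY: 600, // relatively static
  SHIPPING_OPTION: 1800, // relatively static
  USER_SESSION: 3600, // balance security and user experience
} as const;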

Cache invalidation is another critical aspect of caching strategy. Time-based expiration works well for most e-commerce data where eventual consistency is acceptable. However, for critical updates like price changes or inventory adjustments, event-driven invalidation is necessary. This involves invalidating related cache entries in the update services to ensure data accuracy.
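
In Medusa V1, event-driven invalidation can be wired up with a subscriber. The sketch below is an illustration built on stated assumptions: it reuses the "medusa:cache" key prefix from CacheProviderService and connects to Redis via REDIS_URL.

// src/subscribers/product-cache.ts — hypothetical event-driven invalidation.
import { EventBusService, ProductService } from "@medusajs/medusa";
import Redis from "ioredis";

class ProductCacheSubscriber {
  private readonly client = new Redis(
    process.env.REDIS_URL ?? "redis://localhost:6379"
  );

  constructor({ eventBusService }: { eventBusService: EventBusService }) {
    // Any product change invalidates the cached product listings.
    eventBusService.subscribe(ProductService.Events.UPDATED, this.invalidate);
    eventBusService.subscribe(ProductService.Events.DELETED, this.invalidate);
  }

  invalidate = async (): Promise<void> => {
    // SCAN instead of KEYS so large keyspaces don't block Redis.
    const stream = this.client.scanStream({
      match: "medusa:cache:product:*",
    });
    for await (const keys of stream as AsyncIterable<string[]>) {
      if (keys.length > 0) {
        await this.client.del(...keys);
      }
    }
  };
}

export default ProductCacheSubscriber;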

Holistic Performance Optimization

Caching is just one piece of the performance puzzle. Beyond caching, database optimization plays a vital role. Adding indexes on frequently queried columns, optimizing complex joins and queries, and using database connection pooling can significantly enhance performance. For heavy read workloads, implementing read replicas can further distribute the load.
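
As an illustration, Medusa V1 uses TypeORM, so an index on a frequently filtered column can be added in a migration. The column choice and migration name here are hypothetical:

// Hypothetical TypeORM migration adding an index on product.collection_id.
import { MigrationInterface, QueryRunner } from "typeorm";

export class AddProductCollectionIndex1700000000000
  implements MigrationInterface
{
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `CREATE INDEX IF NOT EXISTS "idx_product_collection_id" ON "product" ("collection_id")`
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `DROP INDEX IF EXISTS "idx_product_collection_id"`
    );
  }
}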

Infrastructure scaling is another critical aspect. Using a Content Delivery Network (CDN) for static assets, load balancers for horizontal scaling, and monitoring tools for database performance can ensure your application scales effectively. Setting up a Redis Cluster can also provide high availability and fault tolerance.

At the application level, implementing pagination for large result sets and using GraphQL for precise data fetching can optimize data transfer. Additionally, optimizing image sizes and formats, along with lazy loading for non-critical data, can enhance the user experience.
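
Medusa V1's store endpoints already accept limit and offset query parameters, so pagination is largely a matter of using them. A small sketch, with the URL assumed from a local setup:

// Fetch products one page at a time instead of the full catalog.
const PAGE_SIZE = 20;

async function fetchProductPage(page: number) {
  const offset = page * PAGE_SIZE;
  const res = await fetch(
    `http://localhost:9000/store/products?limit=${PAGE_SIZE}&offset=${offset}`
  );
  const { products, count } = await res.json();
  return { products, total: count };
}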

Monitoring and observability are essential for maintaining performance. Tracking metrics like cache hit ratio, average response times, database query performance, Redis memory usage, and error rates can help identify and address performance bottlenecks proactively.
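
Redis itself tracks keyspace hits and misses, so the cache hit ratio can be derived from INFO stats. A hypothetical helper:

// Compute the cache hit ratio from Redis' own counters. Note these stats
// are server-wide, so they cover all lookups on the instance, not just
// the Medusa cache keys.
import Redis from "ioredis";

async function getCacheHitRatio(client: Redis): Promise<number> {
  const stats = await client.info("stats");
  const hits = Number(/keyspace_hits:(\d+)/.exec(stats)?.[1] ?? 0);
  const misses = Number(/keyspace_misses:(\d+)/.exec(stats)?.[1] ?? 0);
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}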

For frontend applications, set appropriate cache headers for static assets and configure API response caching in your Next.js storefront. Leverage CDN caching through services like CloudFlare or AWS CloudFront for global performance optimization. Consider implementing Static Site Generation (SSG) to pre-render product pages, significantly improving initial page load times and SEO performance. These frontend optimizations, combined with backend Redis caching, create a comprehensive performance strategy.
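
For a Next.js storefront using the pages router, Incremental Static Regeneration pairs naturally with the 60-second backend TTL. A sketch, assuming a MEDUSA_BACKEND_URL environment variable and the products endpoint's handle filter:

// pages/products/[handle].tsx — hypothetical statically generated product page.
import type { GetStaticPaths, GetStaticProps } from "next";

export const getStaticPaths: GetStaticPaths = async () => {
  // Render pages on first request and cache them afterwards.
  return { paths: [], fallback: "blocking" };
};

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const res = await fetch(
    `${process.env.MEDUSA_BACKEND_URL}/store/products?handle=${params?.handle}`
  );
  const { products } = await res.json();

  if (!products?.length) {
    return { notFound: true, revalidate: 60 };
  }

  // Re-generate the page in the background at most once per minute.
  return { props: { product: products[0] }, revalidate: 60 };
};

export default function ProductPage({ product }: { product: any }) {
  return <pre>{JSON.stringify(product, null, 2)}</pre>;
}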

Conclusion

The results speak for themselves: implementing Redis caching in Medusa V1 can deliver 70%+ performance improvements with minimal code changes. However, remember that performance optimization is an ongoing process, not a one-time implementation.
