Redis for query caching

Introduction

In this article, we'll look at a use case where Redis can be used for query caching to increase the RPS (requests per second) our backend service can handle. This article is inspired by this post; however, instead of using Node.js, we will use Java with Spring Boot, and in addition we will instrument our code with OpenTelemetry and observe the response time using Jaeger.

Use case

Let’s say you are a backend engineer at an e-commerce company, in charge of product discovery on the homepage. As your company grows, you start selling more products (more data in your database) and more users (more requests) access your site. This increase in data size and traffic makes your homepage lag, which in turn makes many users complain about your e-commerce website. How can you solve this problem?

Problem

Let’s simulate the above use case. Below is a simplified diagram of our system:

[Image: Initial system design]

Let’s break down the components of the above system.

Product DB

We use MongoDB as our primary database, and we will use a fashion product dataset from Kaggle to populate it. The dataset contains 44k rows. Below is an example of a product document:

{
   "id": "66a1de675d61dc885a0139f1",
   "gender": "Men",
   "masterCategory": "Apparel",
   "subCategory": "Topwear",
   "articleType": "Shirts",
   "baseColour": "Navy Blue",
   "season": "Fall",
   "year": 2011,
   "usage": "Casual",
   "productDisplayName": "Turtle Check Men Navy Blue Shirt"
}
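The Product document class itself is not shown in the post; a minimal sketch, assuming Spring Data MongoDB and field names taken from the JSON above (the collection name is an assumption), could look like this:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

// Sketch of the MongoDB document class, mirroring the JSON fields above
@Document(collection = "products")
public class Product {
   @Id
   private String id;
   private String gender;
   private String masterCategory;
   private String subCategory;
   private String articleType;
   private String baseColour;
   private String season;
   private Integer year;
   private String usage;
   private String productDisplayName;
   // getters and setters omitted for brevity
}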

Product Service

We will use Spring Boot to build our product service. The product service provides an endpoint that returns the list of products whose productDisplayName matches a given search term:

GET /products/getByDisplayName?displayName={product display name}
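For example, GET /products/getByDisplayName?displayName=shirt would return every product whose display name contains "shirt" (case-insensitively), including the sample document shown earlier.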

Here is the relevant code for the above endpoint:

public List<Product> searchProductsByDisplayName(String displayName) {
    return productRepository.findProductByProductDisplayNameContainingIgnoreCase(displayName);
}

@Repository
public interface ProductRepository extends MongoRepository<Product, String> {
   @Query("{ 'productDisplayName' : { $regex: ?0, $options: 'i' } }")
   List<Product> findProductByProductDisplayNameContainingIgnoreCase(String productDisplayName);
}
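Note that this @Query performs an unanchored, case-insensitive regex match, so it returns every product whose productDisplayName contains the search term. Case-insensitive regex queries like this generally cannot use a MongoDB index efficiently; keep that in mind for later.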

@GetMapping("/products/getByDisplayName")
public ResponseEntity<Object> getProductsByDisplayName(@RequestParam String displayName) {
    List<Product> searchedProducts = productService.searchProductsByDisplayName(displayName);
    return ResponseEntity.status(HttpStatus.OK).body(searchedProducts);
}


In addition to the product service and MongoDB, I have also attached the OpenTelemetry Java agent (via the -javaagent JVM flag) to instrument our code, and Jaeger to collect and visualize the resulting traces. You can find the full source code in this GitHub repository.

Now let’s run it and see the performance of our endpoint:

[Image: Postman API call]

[Image: Jaeger initial system]

[Image: Jaeger initial system 2]

Based on the above data, we can see that our endpoint’s response time is acceptable (most requests complete in under 100 ms).

Now let’s increase the traffic and see if our service can still handle it. To do that I am going to use Locust. The Locust file contains a list of 30 popular product names, and each simulated user searches for a random product from that list through our API endpoint. In addition, I will set the load to 60 concurrent users (at peak). Below are the Locust script and the results:

[Image: Locust result]

Based on the above graphs, we can see that we have a bottleneck in our system. The increase in RPS is not proportional to the increase in concurrent users; it plateaus at around 200 RPS. In addition, as the number of concurrent users grows, the response time increases as well.

Solution

Now, what can we do to solve this problem?
Let’s first find where the problem lies:

[Image: Jaeger tracing 1]

[Image: Jaeger tracing 2]

[Image: Jaeger tracing 3]

Upon inspecting the traces, we find that the call to the database takes a long time, around 500 ms, roughly 10 times our “normal” API response time. (As noted earlier, an unanchored, case-insensitive regex query generally cannot use an index efficiently, so MongoDB ends up scanning many documents.) Now we know the bottleneck is the database call, and to help the database handle the load we can use a cache. A cache is a high-speed data storage layer that usually keeps its data in memory (which makes it fast), whereas databases persist their data to disk (which makes them slower).

Solution Implementation

There are several common caching patterns; one of them is the Cache-Aside pattern, which is what we will use here. At its core, Cache-Aside is a simple yet effective strategy for optimizing data retrieval: data is (lazily) loaded into the cache layer the first time it is requested. By doing so, the Cache-Aside pattern reduces the number of requests made to the primary data source and improves the overall responsiveness of the application. In this case we will use Redis, one of the most popular in-memory data stores. Below is a simplified diagram of our proposed solution:

[Image: Proposed solution]

Without further ado, let’s implement it in our code. The first step is to configure Redis in our application (this assumes spring-boot-starter-data-redis with the Jedis client on the classpath, and spring.data.redis.host set in application.properties):

@Configuration
@EnableCaching
public class RedisConfig {

    @Value("${spring.data.redis.host}")
    public String redisHost;

    // Connection factory pointing at our Redis host (default port 6379)
    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory jedisConnFactory = new JedisConnectionFactory();
        jedisConnFactory.setHostName(redisHost);
        return jedisConnFactory;
    }

    // RedisTemplate is the entry point for key/value operations against Redis
    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}

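One caveat worth mentioning (not covered in the original post): RedisTemplate defaults to JDK serialization for keys and values. Since we only store JSON strings, you can optionally switch to string serializers so the entries are human-readable in Redis. A minimal sketch, added inside redisTemplate() before returning the template:

// Optional: store keys and values as plain strings instead of JDK-serialized bytes
template.setKeySerializer(new StringRedisSerializer());
template.setValueSerializer(new StringRedisSerializer());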

Then let’s add the cache logic to our service:

public List<Product> searchProductsByDisplayName(String displayName) {
    // Look up the entry in Redis, keyed by a hash of the search term
    ValueOperations<String, Object> ops = redisTemplate.opsForValue();
    String key = RedisUtil.generateHashSHA256(displayName);
    String redisEntry = (String) ops.get(key);

    // Cache hit: deserialize the stored JSON and return it
    if (redisEntry != null) {
        try {
            return mapper.readValue(redisEntry, new TypeReference<List<Product>>() {});
        } catch (JsonProcessingException e) {
            log.atError()
                .addKeyValue("redis_entry", redisEntry)
                .setMessage("Failed to convert redisEntry into List<Product>")
                .log();
            throw new InternalError();
        }
    }
    // Cache miss: query the primary database, populate the cache, and return the result
    else {
        List<Product> products = productRepository.findProductByProductDisplayNameContainingIgnoreCase(displayName);
        try {
            ops.set(key, mapper.writeValueAsString(products));
        } catch (JsonProcessingException e) {
            log.atError()
                .setMessage("Failed to convert List<Product> into String")
                .log();
            throw new InternalError();
        }
        // Reuse the result we already fetched instead of querying the database a second time
        return products;
    }
}


Here is the explanation for the above code:

  1. First, we generate a hash from the input parameter (see the helper sketch after this list).
  2. Second, we get an entry from Redis using the hashed value from #1 as the key.
  3. If there is an entry (cache hit), we simply deserialize it and return it.
  4. If there is no entry (cache miss), we query our primary database, add the query result to the cache, and then return it.
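The RedisUtil.generateHashSHA256 helper used in step #1 is not shown above (redisTemplate and mapper, a Jackson ObjectMapper, are assumed to be injected into the service). A minimal sketch of the helper, assuming Java 17+ for HexFormat, could look like this:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public final class RedisUtil {
   private RedisUtil() {}

   // Hash the search term so arbitrary user input becomes a fixed-length, Redis-safe key
   public static String generateHashSHA256(String input) {
       try {
           MessageDigest digest = MessageDigest.getInstance("SHA-256");
           byte[] hash = digest.digest(input.getBytes(StandardCharsets.UTF_8));
           return HexFormat.of().formatHex(hash); // requires Java 17+
       } catch (NoSuchAlgorithmException e) {
           // SHA-256 is available on every standard JVM, so this should never happen
           throw new IllegalStateException(e);
       }
   }
}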

Now, after adding the cache to our code, let’s run the same Locust test:

[Image: Jaeger test]

[Image: Locust test]

[Image: Locust test 2]

Based on the above results, we can see that the RPS our service can handle increases significantly, from 200 to 1,500 RPS, a 7.5x improvement. In addition, the response time drops dramatically, to below 60 ms. However, based on the above data, we can still find a bottleneck in our system, since the increase in RPS is still not proportional to the increase in concurrent users; it plateaus at around 1,500 RPS. This means we still need to apply other strategies to make our service truly scalable (e.g. distributing the load across multiple machines, since this experiment runs on a single local machine).

Conclusion

In this article we have learned how to use Redis for query caching to improve our system’s responsiveness. We first described the problem, then found the bottleneck (the database call). After that, we solved it by introducing a cache, which reduces the number of requests made to the primary data source and improves the overall responsiveness of the application.

Note:
The results of this simple experiment should not be used as a general reference, because the experiment assumes an unlimited cache size (no eviction policy or TTL is set). In addition, since our load test only randomizes over 30 items, the cache hit rate is very high. In a production setting, cache capacity is limited and user input is more varied, which lowers the cache hit rate.
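If you want to take this experiment a step closer to production, a simple first step is to set a TTL (java.time.Duration) when writing each cache entry so that entries expire on their own. A sketch, where the 10-minute value is an arbitrary assumption:

// In the cache-miss branch: expire each cached entry 10 minutes after it is written
ops.set(key, mapper.writeValueAsString(products), Duration.ofMinutes(10));

On the Redis server side, you would also configure a maxmemory limit together with an eviction policy such as allkeys-lru.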
