mohammed osama

NestJS Caching Globally, Neatly

First things first: if you don't know about the NestJS caching module, it's quite easy to understand. It lets you cache whatever you want through the CACHE_MANAGER, take control over it, and decide whether to keep or delete an entry and for how long. It also lets you configure your own cache driver, which could be Redis, Memcached, etc.
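
For instance, once the module is registered (as shown below), you can inject the cache manager anywhere and drive it yourself. Here is a minimal sketch, assuming a NestJS v8-era setup where CACHE_MANAGER comes from @nestjs/common; the PostsService name and the cache keys are just placeholders of mine:

import { CACHE_MANAGER, Inject, Injectable } from '@nestjs/common';
import { Cache } from 'cache-manager';

@Injectable()
export class PostsService {
  constructor(@Inject(CACHE_MANAGER) private readonly cacheManager: Cache) {}

  async findAll() {
    // Decide yourself whether to keep, refresh, or drop an entry.
    const cached = await this.cacheManager.get('posts:all');
    if (cached) {
      return cached;
    }
    const posts = await this.loadFromDatabase();
    await this.cacheManager.set('posts:all', posts, { ttl: 300 }); // keep for five minutes
    return posts;
  }

  async invalidate() {
    await this.cacheManager.del('posts:all');
  }

  private async loadFromDatabase(): Promise<unknown[]> {
    return []; // placeholder for a real database query
  }
}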

First, follow the docs for the installation:
https://docs.nestjs.com/techniques/caching#installation
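
At the time of writing, that amounts to something like the commands below (the exact packages depend on your NestJS version; ms is only needed because the registration snippet below parses TTL values like '15m'):

npm install cache-manager cache-manager-redis-store ms
npm install -D @types/cache-manager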

Here is a snippet to register your cache driver:

import { CacheModule, Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import ms from 'ms';
import { config } from './shared/config/index';

@Module({
  imports: [
    ConfigModule.forRoot({
      cache: true,
      load: [() => config],
      isGlobal: true,
    }),
    CacheModule.registerAsync({
      imports: [ConfigModule],
      useFactory: async (config: ConfigService) => {
        const cache = config.get('cache');
        const driver = config.get(cache.driver);
        // Later, if needed, create a cache factory to instantiate different drivers based on config.
        if (cache.driver === 'redis') {
          return {
            ttl: ms(cache.ttl), // using the ms package to parse e.g. '15m' into a number
            store: require('cache-manager-redis-store'),
            host: driver.host,
            port: driver.port,
          };
        }
        return {
          ttl: ms(cache.ttl),
        };
      },
      inject: [ConfigService],
    }),
  ],
})
export class AppModule {}


We register the cache module asynchronously and inject the config service to load the configuration initialised from our .env file; that's where we determine which driver to use and its proper configuration.
While registering the cache module, I'm assuming I'll be using Redis; otherwise, I fall back to the defaults, which is the in-memory cache.

If you don't know yet how to handle config or how to get started with it, here's a snippet of what my config looks like:

import 'dotenv/config'
export const config = {
  cache: {
    ttl: process.env.CACHE_TTL as string,
    driver: process.env.CACHE_DRIVER || 'redis',
  },
}
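
Note that the registration factory above also calls config.get(cache.driver) to read the driver's host and port, so the config object needs a matching section for it. A minimal sketch of what that could look like, where the redis block and the REDIS_HOST / REDIS_PORT variable names are my own assumptions rather than part of the original snippet:

import 'dotenv/config'
export const config = {
  cache: {
    ttl: process.env.CACHE_TTL as string, // e.g. '15m'
    driver: process.env.CACHE_DRIVER || 'redis',
  },
  redis: {
    host: process.env.REDIS_HOST || 'localhost',
    port: Number(process.env.REDIS_PORT) || 6379,
  },
}

with a .env along the lines of:

CACHE_TTL=15m
CACHE_DRIVER=redis
REDIS_HOST=localhost
REDIS_PORT=6379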

And that's it; we are good to go for the important part of this article, which is caching globally.

NestJS provides a cache interceptor that caches all GET HTTP requests, but this on its own is insufficient: if you delete/update/create, the cached responses are never synced, so your frontend or mobile clients end up with stale data. Luckily, NestJS binds the caching interceptor as a provider. Therefore, we can provide our own custom cache interceptor, which lets us avoid this problem and sync properly.

You can take a look at auto-caching responses in the docs to see how they cache things: https://docs.nestjs.com/techniques/caching#auto-caching-responses
Simply put, they register their cache interceptor as a provider, which literally intercepts every incoming request and decides whether to cache it or not.

  providers: [
    {
      provide: APP_INTERCEPTOR,
      useClass: CacheInterceptor,
    },
  ],

If you ever wondered how they cache, or what's happening behind the scenes, here's a snippet of the interceptor to understand what's going on; afterwards we will customize it a bit to match our needs.



  async intercept(
    context: ExecutionContext,
    next: CallHandler,
  ): Promise<Observable<any>> {
    const key = this.trackBy(context);
    const ttlValueOrFactory =
      this.reflector.get(CACHE_TTL_METADATA, context.getHandler()) ?? null;

    if (!key) {
      return next.handle();
    }
    try {
      const value = await this.cacheManager.get(key);
      if (!isNil(value)) {
        return of(value);
      }
      const ttl = isFunction(ttlValueOrFactory)
        ? await ttlValueOrFactory(context)
        : ttlValueOrFactory;
      return next.handle().pipe(
        tap(response => {
          const args = isNil(ttl) ? [key, response] : [key, response, { ttl }];
          this.cacheManager.set(...args);
        }),
      );
    } catch {
      return next.handle();
    }
  }

Every interceptor in NestJS implements the NestInterceptor interface, which has a method called intercept. In our case, the intercept method of the caching interceptor uses the trackBy method, which defines the key of the cached response. On your first GET request the generated key doesn't hold a value yet, so the handler runs and the response is stored under it; on later requests the key exists, so the data is returned from the cache using the key generated earlier. If trackBy returns no key at all, the interceptor simply returns next.handle() to proceed with the request life-cycle, which could be hitting your controllers/resolvers or whatever.

You might be wondering how the key is generated, or how the trackBy method actually works.

 trackBy(context: ExecutionContext): string | undefined {
    const request = context.switchToHttp().getRequest();
    const { httpAdapter } = this.httpAdapterHost;

    const isGetRequest = httpAdapter.getRequestMethod(request) === 'GET';
    const excludePaths = [
      // Routes to be excluded
    ];
    if (
      !isGetRequest ||
      (isGetRequest &&
        excludePaths.includes(httpAdapter.getRequestUrl(request)))
    ) {
      return undefined;
    }
    return httpAdapter.getRequestUrl(request);
  }


As you can see, the trackBy method accepts a context, which could be your GraphQL context, an Express context (containing the request, response, etc.), or a Fastify context (containing the request, response, etc.).
It then retrieves your request by switching the context to HTTP (in the case of GraphQL, this will be undefined), so this cache interceptor won't work if you are using GraphQL. However, you can make it work with GraphQL using

import { GqlExecutionContext } from '@nestjs/graphql';

const ctx = GqlExecutionContext.create(context).getContext();

NOTE: If you are following along and trying to cache responses globally while using GraphQL, this will only give you an idea of what to do; it isn't adapted to work with GraphQL yet, as you will run into problems with caching depending on the fetched attributes and so on.
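
For illustration only, here is a rough sketch of how a trackBy override could at least recognise a GraphQL context. The GqlAwareCacheInterceptor name and the key scheme are my own assumptions, and the key ignores which fields were selected, which is exactly the problem mentioned above:

import { CacheInterceptor, ExecutionContext, Injectable } from '@nestjs/common';
import { GqlExecutionContext } from '@nestjs/graphql';

@Injectable()
export class GqlAwareCacheInterceptor extends CacheInterceptor {
  trackBy(context: ExecutionContext): string | undefined {
    if (context.getType<string>() === 'graphql') {
      const gqlContext = GqlExecutionContext.create(context);
      const info = gqlContext.getInfo();
      // Only queries are candidates for caching; skip mutations and subscriptions.
      if (info.operation.operation !== 'query') {
        return undefined;
      }
      // Naive key: field name plus serialized arguments (ignores the selected fields!).
      return `${info.fieldName}:${JSON.stringify(gqlContext.getArgs())}`;
    }
    // Fall back to the default HTTP behaviour otherwise.
    return super.trackBy(context);
  }
}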

Then it checks whether the incoming request is a GET request. If it is, the method returns the URL (including your query parameters), which becomes your cache key. So, in essence, NestJS caches your responses by taking the URL as the cache key, with the full response returned on the first cycle as its value.
That's why the docs mention that it will literally auto-cache your responses, and do so globally, if you set up the interceptor. Hopefully you've got the idea now!
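
As a concrete illustration (the routes and responses are made up), two different URLs end up as two independent cache entries:

'/api/posts'         => the full JSON response from the first GET /api/posts
'/api/posts?page=2'  => the full JSON response from the first GET /api/posts?page=2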

Now, let's dive into the most interesting part: syncing the cache and making our own interceptor.


import { Injectable, CacheInterceptor, ExecutionContext, CACHE_KEY_METADATA } from '@nestjs/common';

@Injectable()
export class HttpCacheInterceptor extends CacheInterceptor {
  protected cachedRoutes = new Map();

  trackBy(context: ExecutionContext): string | undefined {
    const request = context.switchToHttp().getRequest();
    // If there is no request, the incoming request is GraphQL, therefore bypass response caching.
    // Later we can check the request type (query/mutation): for a query, cache by its field name
    // and attributes; otherwise, clear the cache when the request type is a mutation.
    if (!request) {
      return undefined;
    }
    const { httpAdapter } = this.httpAdapterHost;
    const isHttpApp = httpAdapter && !!httpAdapter.getRequestMethod;
    const cacheMetadata = this.reflector.get(CACHE_KEY_METADATA, context.getHandler());

    if (!isHttpApp || cacheMetadata) {
      return cacheMetadata;
    }

    const isGetRequest = httpAdapter.getRequestMethod(request) === 'GET';
    if (!isGetRequest) {
      setTimeout(async () => {
        for (const values of this.cachedRoutes.values()) {
          for (const value of values) {
            // No need to worry about the cache manager: we extend the built-in interceptor,
            // which already injects it, as seen earlier.
            await this.cacheManager.del(value);
          }
        }
      }, 0);
      return undefined;
    }

    // Always take the base URL of the incoming GET request as the map key.
    const key = httpAdapter.getRequestUrl(request).split('?')[0];
    if (this.cachedRoutes.has(key) && !this.cachedRoutes.get(key).includes(httpAdapter.getRequestUrl(request))) {
      this.cachedRoutes.set(key, [...this.cachedRoutes.get(key), httpAdapter.getRequestUrl(request)]);
      return httpAdapter.getRequestUrl(request);
    }
    // Only initialise the entry when the base key is new, so we don't overwrite
    // the URLs already tracked under it.
    if (!this.cachedRoutes.has(key)) {
      this.cachedRoutes.set(key, [httpAdapter.getRequestUrl(request)]);
    }
    return httpAdapter.getRequestUrl(request);
  }
}

Following REST API conventions, if you have a posts CRUD, for example, the index will be /api/posts, showing a post by id can be /api/posts/1, and if you are searching with a query string it might be /api/posts?search=title, and so on.

The idea is to rely on the base URL of the CRUD, which in our example is /api/posts. This will be our key, and it will have sub-keys which could be /api/posts/3 or /api/posts/4 for other posts, or /api/posts?search=title.

We are using a Map data structure whose key is the base URL, /api/posts, while the rest of the sub-keys live inside an array, so the map would look like this:

'/api/posts' => ['/api/posts', '/api/posts/1', '/api/posts?search=title'];

Why do it this way? Because if any incoming request isn't a GET, it means we are updating/creating/deleting, so we have to invalidate the related URL keys and flush their responses so they can be re-synced on the next request. We invalidate them in the snippet below.

Note: if we hadn't done it this way, we would have to invalidate the whole cache to re-sync later, which isn't really a great thing to do. That's why we keep the Map: to track which cached URLs are related, so we can flush only those later.
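
For comparison, the naive alternative would be to wipe everything on any write. A rough sketch, assuming the underlying store actually supports cache-manager's reset() (not every Redis store does):

// Naive version: on any non-GET request, drop the entire cache instead of only the related URLs.
if (!isGetRequest) {
  setTimeout(async () => {
    await this.cacheManager.reset();
  }, 0);
  return undefined;
}

Our interceptor instead deletes only the URLs it has tracked, as in this part of trackBy: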


if (!isGetRequest) {
      setTimeout(async () => {
        for (const values of this.cachedRoutes.values()) {
          for (const value of values) {
            await this.cacheManager.del(value);
          }
        }
      }, 0);
      return undefined;
    }

Why setTimeout? Because we want to do the invalidation in the background rather than holding up the incoming HTTP request while the entries are being deleted.

So, if the incoming request is a GET request, we need to add it to our map.

  • Scenario 1:

The map already has the base key, /api/posts, but the incoming request URL isn't in that key's array yet.

   if (this.cachedRoutes.has(key) && !this.cachedRoutes.get(key).includes(httpAdapter.getRequestUrl(request))) {
      this.cachedRoutes.set(key, [...this.cachedRoutes.get(key), httpAdapter.getRequestUrl(request)]);
      return httpAdapter.getRequestUrl(request);
    }


Example: if our map currently looks like this

'/api/posts' => ['/api/posts']

and the incoming request is something like /api/posts?search=title, then we append it to that key's array.

If we don't even have the incoming key yet, we create it:

this.cachedRoutes.set(key, [httpAdapter.getRequestUrl(request)]);

Example: if you are hitting /api/posts for the first time, we don't have it in the map yet, so we set it.

  • Scenario 2:

What if our first HTTP GET request is /api/posts?search=title? No problem, because we take the first segment of the URL: since we split the URL on ?, we always get the base URL back, which in our case is /api/posts. The same goes if your first hit is /api/posts; that, too, always gives us the base URL.
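
In other words, the split always yields the base path, whichever variant comes in first:

'/api/posts?search=title'.split('?')[0]; // => '/api/posts'
'/api/posts'.split('?')[0];              // => '/api/posts'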

Caveats: this is the simplest way to cache and sync your responses automatically without getting involved in the hassle of doing it yourself. For example, it's a bit redundant to cache the full list of posts and then also cache each post by id on its own; it would be nicer to read the single post out of the already-cached list, but that introduces its own complexity (if you have many posts, looping through them to find one can be heavier than querying the database directly and will slow down your response).

Don't forget to register the custom HTTP interceptor we just made to see it in action 😂

import { APP_INTERCEPTOR } from '@nestjs/core';
// adjust the path to wherever you placed the interceptor
import { HttpCacheInterceptor } from './http-cache.interceptor';

providers: [
  {
    provide: APP_INTERCEPTOR,
    useClass: HttpCacheInterceptor,
  },
],

Alright, that's it for the custom caching interceptor. I hope you enjoyed it ✌️, and I'll see you in another article 🙈. Don't forget to follow me if you enjoyed this one 👀

Top comments (4)

Spiff Jekey-Green

I strongly believe this is just too much for implementing simple caching in NestJS, whether one is using Redis or an in-memory cache. I was able to get it running without writing too much code, and yes, it's reusable.

Ediur

Could you please show a snippet of how you manage cache invalidation on resource update/create/delete?

I find managing the cache hard and tricky myself.

Zarinia

No, I disagree with your opinion
When you use Redis and Graph, this is the best suggestion

Spiff Jekey-Green

Do you mean GraphQL, or do you mean RedisGraph, which is a deprecated feature 😒