Andrew Durber
Building a cache decorator to keep your app fast ๐ŸŽ

Cover photo by Jouwen Wang on Unsplash

People in 2020 expect apps to be fast. Really fast. Slow pages negatively affect conversions. Speed minimises user frustration.

More money and happy customers? I'll take it.

I spend a lot of time thinking about performance, and there are lots of things to consider when building a high-performance application, but the single most important concept is "don't do work if you don't need to." Your code will never be faster than no code. Your API calls will never be faster than not calling the API in the first place.

Background

In an application I'm building, we fetch a ton of data. Watching my network tab in Chrome Dev Tools as I navigated and interacted with the app, I counted dozens of requests, most of which return data that doesn't change very much. Navigating around the app, or reloading the page, can cause the same data to be fetched multiple times. The web app is a SPA, so thankfully full page loads are rare.

When we're caching we have two possible methods:

  1. In-memory (simplest)
  2. Persistent (not hard, but more difficult than in-memory)

I had already separated all my API calls into a service layer within the application, where I apply all transforms and request batching. I started with the slowest requests and built a simple TTL (time-to-live) cache.

Using the cache was simple: check if the cache has a value for the given cache key and, if so, return it. If not, fetch the data and add it to the cache when it arrives.

Here's a link to the TTL Cache implementation if you're interested: Gist: TTL Cache
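In case you don't want to leave the page, here's a minimal sketch of what a TTL cache with that shape might look like. This is my hypothetical version, not the gist's code, and I'm assuming ttl is given in seconds:

class TTLCache<T> {
  // Each entry remembers when it expires; expired entries are evicted lazily on read.
  private store = new Map<string, { value: T; expiresAt: number }>()
  private ttlMs: number

  constructor(options: { ttl: number }) {
    this.ttlMs = options.ttl * 1000 // assumption: ttl is in seconds
  }

  has(key: string): boolean {
    const entry = this.store.get(key)
    if (!entry) return false
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key) // expired: drop it and report a miss
      return false
    }
    return true
  }

  get(key: string): T | undefined {
    return this.has(key) ? this.store.get(key)!.value : undefined
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs })
  }
}

Using it then looks like this: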

type MyData = { id: string; name: string }

const dataCache = new TTLCache<MyData>({ ttl: 60 })

async function fetchMyData(userId: string): Promise<MyData> {
  const cacheKey = `mydata:${userId}`
  if (dataCache.has(cacheKey)) {
    return dataCache.get(cacheKey)! // non-null assertion: we just checked `has`
  }

  // do API call
  const result = await Api.get('/my-data', { params: { userId } })
  if (result.data) {
    dataCache.set(cacheKey, result.data)
  }

  return result.data
}


The problem

After using this pattern with dozens of API calls, it started getting cumbersome. Caching should be a side effect; I want to focus solely on what the code is doing.

After staring at my screen for a little while, tilting my head and squinting, I decided to try to create an abstraction for this pattern.

The solution - Decorators!

We'll be building an in-memory cache here, but at the bottom I'll leave an implementation that uses IndexedDB for persistent caching.

Note

Using decorators required me to use a class, but that was a minor inconvenience.

One of the first steps I take when designing an API for an abstraction is to write some code showing how I want it to look.

  1. I wanted to be able to see that a call was cached, but I didn't want it to take more than three lines of code to do so.
  2. I just wanted to specify a cache key.
  3. All arguments to the call must be serialized, so a change in the arguments returns fresh data.

Here's the code I wrote for my perfect API.


class UserService {
  @cache('myData')
  async fetchMyData(userId: string): Promise<MyData> {
    const result = await Api.get('/my-data', { params: { userId } })
    return result.data
  }
}


MAGNIFICO!

I knew I could write a decorator that did this. However, a problem arose immediately: I'd need to initialise the cache(s) outside of the decorator.

The simple solution was to just create an object with the caches:

const caches = {
  myData: new TTLCache<MyData>({ ttl: 60 })
}

Quick aside: The anatomy of a decorator

function cache(cache: keyof typeof caches) { // The decorator factory
  return function(target: any, propertyKey: string, descriptor: PropertyDescriptor) { // The decorator

  }
}
  1. target is the prototype of the class the decorated method lives on (or the constructor, for static methods).
  2. propertyKey is the name of the decorated method.
  3. descriptor is the meat and potatoes. It's the property descriptor, and its value is the function definition.
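One housekeeping note: this decorator signature uses TypeScript's experimental decorators, so the compiler flag needs to be switched on in tsconfig.json:

{
  "compilerOptions": {
    "experimentalDecorators": true
  }
}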

Implementation

So as a first step, let's create a decorator that just calls the function.


const caches = {
  myData: new TTLCache<MyData>({ ttl: 60 })
}

function cache(cache: keyof typeof caches) {
  const ttlCache = caches[cache] // Get the cache instance
  return function(_: any, __: string, descriptor: PropertyDescriptor) {
    let method = descriptor.value // grab the original method
    // We want to override the method, so let's give descriptor.value a new value.
    descriptor.value = function() {
      // just call the original function
      return method.apply(this, arguments)
    }
  }
}

Like I said, this does nothing. We've overridden the method...with itself?

Serialize the arguments

As I mentioned previously, we need to cache calls with different arguments separately.

"All arguments to the call must be serialized. So a change in the arguments returns fresh data."

Let's create a function that takes any number of arguments and stringifies them all:

const serializeArgs = (...args: any[]) =>
  args
    // String() won't throw on null/undefined; note that objects would
    // serialize as "[object Object]", so stick to primitive arguments
    .map((arg: any) => String(arg))
    .join(':')

Let's update our decorator's new method to build the cache key, prefixing it with the cache's name:

descriptor.value = function() {
  const cacheKey = serializeArgs(cache, ...arguments)
  // call the original function
  return method.apply(this, arguments)
}

We call serializeArgs within the descriptor.value function so it receives the arguments the method was actually called with.

This creates a nice cache key:

@cache('myData')
async fetchMyData(userId: string) {}

// let's say it was called with '1234'
service.fetchMyData('1234')
// cache key is: myData:1234

// if we had additional arguments
async fetchMyData(userId: string, status: string) {}

service.fetchMyData('1234', 'ACTIVE')
// cache key is: myData:1234:ACTIVE

Check if the cache has the value

Nice and simple:

descriptor.value = function() {
  const cacheKey = serializeArgs(cache, ...arguments)
  // Check if we have a cached value.
  // We do it here, before the method is actually called.
  // We're short-circuiting.
  if (ttlCache.has(cacheKey)) {
    // Wrap the value in a promise so the decorated method always returns one
    return Promise.resolve(ttlCache.get(cacheKey))
  }
  // call the original function
  return method.apply(this, arguments)
}

Running the method and getting the result

I thought this was going to be more challenging, but after thinking about it, we know that the method returns a promise. So let's call it.

descriptor.value = function() {
  const cacheKey = serializeArgs(cache, ...arguments)
  if (ttlCache.has(cacheKey)) {
    return Promise.resolve(ttlCache.get(cacheKey))
  }

  // We don't need to catch; let the consumer of this method worry about that
  return method.apply(this, arguments).then((result: any) => {
    // If we have a result, cache it!
    if (result) {
      ttlCache.set(cacheKey, result)
    }
    return result
  })
}

That's it! That's the full implementation of the cache.

  1. We check if there's a value in the cache. If so, we exit early with the cached value.
  2. Otherwise we call the method and resolve the promise; if there's a value, we add it to the cache and return the result.

You don't even need to use a TTL cache; you could use localStorage or whatever you wish.

Full implementation

Here's the full implementation if you're interested.

const caches = {
  myData: new TTLCache<MyData>({ ttl: 60 }),
}

function cache(cache: keyof typeof caches) {
  const ttlCache = caches[cache] // Get the cache instance
  return function(_: any, __: string, descriptor: PropertyDescriptor) {
    const method = descriptor.value // grab the original method
    descriptor.value = function() {
      const cacheKey = serializeArgs(cache, ...arguments)
      if (ttlCache.has(cacheKey)) {
        // Short-circuit: wrap the cached value so we always return a promise
        return Promise.resolve(ttlCache.get(cacheKey))
      }

      return method.apply(this, arguments).then((result: any) => {
        // If we have a result, cache it!
        if (result) {
          ttlCache.set(cacheKey, result)
        }
        return result
      })
    }
  }
}

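With that in place, usage matches the ideal API from the start of the post (MyData and Api as defined earlier):

class UserService {
  @cache('myData')
  async fetchMyData(userId: string): Promise<MyData> {
    const result = await Api.get('/my-data', { params: { userId } })
    return result.data
  }
}

const service = new UserService()

service.fetchMyData('1234') // misses the cache and calls the API
  .then(() => service.fetchMyData('1234')) // served from the cache until the TTL expires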

Taking it further

An in-memory cache might not cut it. If you have data that you want to cache across reloads, you can use IndexedDB.
Here's an example using money-clip, a TTL IndexedDB wrapper.

IndexedDB has an asynchronous API so we need to wrap the method call in a promise.

import { get, set } from 'money-clip'

export function persistentCache(key: string, maxAge: MaxAge) {
  const cacheOptions: Options = {
    // Tie the cache version to the app version so old entries are invalidated on release
    version: extractNumberFromString(environment.version) || 1,
    // maxAge is in milliseconds; default to one minute
    maxAge: hmsToMs(maxAge.hours || 0, maxAge.minutes || 0, maxAge.seconds || 0) || 60 * 1000,
  }

  return function(_: any, __: string, descriptor: PropertyDescriptor) {
    const method = descriptor.value

    descriptor.value = function() {
      const cacheKey = serializeArgs(key, ...arguments)
      const args = arguments
      return get(cacheKey, cacheOptions).then((data) => {
        if (data) {
          // Cache hit: skip the network entirely
          return data
        }

        return method.apply(this, args).then(
          (result: any) => {
            if (result) {
              set(cacheKey, result, cacheOptions)
            }
            return result
          },
          // If the first call rejects, retry it once; errors from the
          // retry propagate to the caller
          () => {
            return method.apply(this, args)
          }
        )
      })
    }
  }
}

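A note on the pieces that aren't shown above: MaxAge, Options, hmsToMs, and extractNumberFromString are app-specific helpers. Hypothetical versions, matching how they're used here, might look like this:

// Hypothetical helper sketches; the real ones aren't included in the article.
type MaxAge = { hours?: number; minutes?: number; seconds?: number }

// money-clip's get/set take a config object; maxAge (ms) and version match the usage above
type Options = { maxAge: number; version: number }

// Convert hours/minutes/seconds into milliseconds
const hmsToMs = (hours: number, minutes: number, seconds: number) =>
  ((hours * 60 + minutes) * 60 + seconds) * 1000

// Pull the digits out of a version string, e.g. "2.3.1" -> 231
const extractNumberFromString = (value: string) =>
  parseInt(value.replace(/\D/g, ''), 10)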

There's also nothing stopping you from using localStorage or sessionStorage. Anything where you can get and set values will work perfectly.
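For example, a minimal localStorage-backed store with the same has/get/set shape as the TTL cache might look like this (my sketch, not from the article; entries are JSON-serialized and expired lazily on read):

class LocalStorageCache<T> {
  constructor(private prefix: string, private ttlMs: number) {}

  private read(key: string): { value: T; expiresAt: number } | null {
    const raw = localStorage.getItem(`${this.prefix}:${key}`)
    return raw ? JSON.parse(raw) : null
  }

  has(key: string): boolean {
    const entry = this.read(key)
    if (!entry) return false
    if (Date.now() > entry.expiresAt) {
      localStorage.removeItem(`${this.prefix}:${key}`) // expired: evict
      return false
    }
    return true
  }

  get(key: string): T | undefined {
    return this.has(key) ? this.read(key)!.value : undefined
  }

  set(key: string, value: T): void {
    localStorage.setItem(
      `${this.prefix}:${key}`,
      JSON.stringify({ value, expiresAt: Date.now() + this.ttlMs })
    )
  }
}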
