DEV Community


Discussion on: Understanding JavaScript/TypeScript Memoization

Carlos Caballero Author

Hi Mihail!

Good catch! In fact, the lodash implementation (github.com/lodash/lodash/blob/508d...) uses a Map.

In my article, I only wanted to show the memoization technique when you have a pure function!

If you want to use it in a real project, I think the lodash implementation is a good option.
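For reference, the pure-function version the article is about boils down to something like this (a minimal sketch, not the article's exact code; the `double` example is made up):

```javascript
// Minimal memoization for a pure, single-argument function:
// the same input always yields the same output, so results can be cached.
const memoize = fn => {
  const cache = new Map()
  return arg => {
    if (!cache.has(arg)) cache.set(arg, fn(arg))
    return cache.get(arg)
  }
}

// Count evaluations to observe the caching.
let evaluations = 0
const double = memoize(n => {
  evaluations++
  return n * 2
})

double(21) // computed
double(21) // served from cache; evaluations stays at 1
```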

Mihail Malo

I have so many comments about that function :/

  1. Line 58 is redundant if they also do line 62. It seems they have kept this redundancy through the latest commit to that function.
  2. You can set memoize.Cache globally, but you can't pass a cache class in the call to memoize (like memoize(fn, resolver, cacheClass)).
  3. You can't pass in an existing cache either, like memoize(fn, resolver, cacheInstance), which would be more flexible and no less ergonomic.
  4. Line 55: memoized.cache = cache.set(key, result) || cache. What if my custom cache's set returns a truthy value that isn't the cache itself, for example the stored result? Why not just assign cache directly? What does this achieve? Supporting an immutable cache?
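To illustrate point 4: with the cache.set(key, result) || cache pattern, a custom cache whose set returns the stored value instead of the cache itself would silently replace the cache. A sketch (the custom cache class here is hypothetical):

```javascript
// Hypothetical custom cache whose set() returns the stored value,
// as some cache APIs do, rather than returning the cache itself.
class ValueReturningCache {
  constructor() { this.map = new Map() }
  has(key) { return this.map.has(key) }
  get(key) { return this.map.get(key) }
  set(key, value) {
    this.map.set(key, value)
    return value // truthy, but not the cache
  }
}

const cache = new ValueReturningCache()
// The pattern from lodash's line 55:
const next = cache.set('a', 1) || cache
// `next` is now 1, not the cache, so memoized.cache would be clobbered.
```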

The closest I'd go with is something like this:

```javascript
export const memoize = ({
  resolver,
  cacheClass = Map,
  cacheFactory = () => new cacheClass()
} = {}) => func => {
  const cache = cacheFactory()
  return typeof resolver === "undefined"
    ? arg => {
        if (cache.has(arg)) return cache.get(arg)
        const result = func(arg)
        cache.set(arg, result)
        return result
      }
    : (...args) => {
        const key = resolver(...args)
        if (cache.has(key)) return cache.get(key)
        const result = func(...args)
        cache.set(key, result)
        return result
      }
}
```
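The curried factory would be used like this (a condensed copy of it is inlined so the snippet runs standalone; the example functions are made up):

```javascript
// Condensed stand-in for the factory above: options first, then the function.
const memoize = ({ resolver } = {}) => func => {
  const cache = new Map()
  return (...args) => {
    const key = resolver ? resolver(...args) : args[0]
    if (cache.has(key)) return cache.get(key)
    const result = func(...args)
    cache.set(key, result)
    return result
  }
}

// Count calls to observe the caching.
let calls = 0
const slowSquare = n => {
  calls++
  return n * n
}

const fastSquare = memoize()(slowSquare)
fastSquare(4) // computed; calls is 1
fastSquare(4) // cached; calls stays 1

// With a resolver to key multi-argument calls:
const add = memoize({ resolver: (a, b) => `${a},${b}` })((a, b) => a + b)
add(1, 2)
```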

I don't think `this` support should be there by default, especially passing `this` to both the resolver and the function.
Maybe this 😂👌:

```javascript
export const memoize = ({
  resolver,
  cacheClass = Map,
  cacheFactory = () => new cacheClass()
} = {}) => func => {
  const cache = cacheFactory()
  return typeof resolver === "undefined"
    ? func.hasOwnProperty("prototype")
      ? function(arg) {
          if (cache.has(arg)) return cache.get(arg)
          const result = func.call(this, arg)
          cache.set(arg, result)
          return result
        }
      : arg => {
          if (cache.has(arg)) return cache.get(arg)
          const result = func(arg)
          cache.set(arg, result)
          return result
        }
    : func.hasOwnProperty("prototype")
      ? resolver.hasOwnProperty("prototype")
        ? function(...args) {
            const key = resolver.apply(this, args)
            if (cache.has(key)) return cache.get(key)
            const result = func.apply(this, args)
            cache.set(key, result)
            return result
          }
        : function(...args) {
            const key = resolver(...args)
            if (cache.has(key)) return cache.get(key)
            const result = func.apply(this, args)
            cache.set(key, result)
            return result
          }
      : (...args) => {
          const key = resolver(...args)
          if (cache.has(key)) return cache.get(key)
          const result = func(...args)
          cache.set(key, result)
          return result
        }
}
```
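The `this`-forwarding branches matter when memoizing methods. A condensed stand-in showing just that effect (class and names are illustrative):

```javascript
// Condensed `this`-preserving variant: a plain function (not an arrow)
// forwards its dynamic `this` to the wrapped function.
const memoizeThis = func => {
  const cache = new Map()
  return function (arg) {
    if (cache.has(arg)) return cache.get(arg)
    const result = func.call(this, arg)
    cache.set(arg, result)
    return result
  }
}

class Taxer {
  constructor(rate) { this.rate = rate }
}
// Caveat: the cache is shared across instances and keyed only by `arg`,
// ignoring `this` — exactly the kind of trap a generic memoizer invites.
Taxer.prototype.tax = memoizeThis(function (amount) {
  return amount * this.rate
})

const t = new Taxer(0.5)
t.tax(100) // `this` is the Taxer instance, so this computes 100 * 0.5
```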

Obviously a combinatorial explosion, but that is my point - most of the time you won't use most of the features anywhere in your entire project, so it's better to write your own function. Maybe you want to use a WeakMap of Maps, or vice versa, instead of a resolver.
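For instance, the WeakMap-of-Maps idea for a two-argument function (object first, primitive second) could be hand-rolled like this (a sketch of the idea, with made-up function names):

```javascript
// Two-level cache: a WeakMap keyed by the object argument (so entries
// can be garbage-collected along with the object), holding a Map keyed
// by the primitive argument. No resolver, no stringified keys.
const areaCache = new WeakMap()

const scaledArea = (rect, factor) => {
  let byFactor = areaCache.get(rect)
  if (!byFactor) areaCache.set(rect, (byFactor = new Map()))
  if (byFactor.has(factor)) return byFactor.get(factor)
  const result = rect.width * rect.height * factor
  byFactor.set(factor, result)
  return result
}

const r = { width: 3, height: 4 }
scaledArea(r, 2) // computed and cached
scaledArea(r, 2) // from cache
```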

Memoization is already not a business logic requirement but a performance optimization. So why waste performance on a suboptimal generic solution instead of improving your algorithm by inlining a cache?
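As an example of inlining a cache: rather than wrapping a generic memoizer around a naive recursive function, the cache can live inside the algorithm itself (a sketch):

```javascript
// Cache inlined into the algorithm: the recursion consults the Map
// directly — no wrapper, no resolver, no `this` handling.
const fibCache = new Map([[0, 0], [1, 1]])
const fib = n => {
  if (fibCache.has(n)) return fibCache.get(n)
  const result = fib(n - 1) + fib(n - 2)
  fibCache.set(n, result)
  return result
}

fib(50) // fast: each value is computed once, then reused
```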