I find it's best to use a `Map` or, for functions and other objects, a `WeakMap`. The way I write my code, structural comparison gives almost no advantage, but `JSON.stringify` of huge objects adds overhead to every call (even a cache hit).
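To illustrate the point being made here, a minimal sketch (not from the thread) of memoizing by reference with a `WeakMap`: there is no serialization cost on any call, and cached entries can be garbage-collected together with their key objects.

```javascript
// Sketch: memoize a unary function whose argument is an object,
// keyed by object identity rather than by a stringified key.
const memoizeByRef = (fn) => {
  const cache = new WeakMap();
  return (obj) => {
    if (cache.has(obj)) return cache.get(obj); // hit: no stringify, no recompute
    const result = fn(obj);
    cache.set(obj, result);
    return result;
  };
};

let calls = 0;
const size = memoizeByRef((o) => { calls++; return Object.keys(o).length; });
const big = { a: 1, b: 2 };
size(big); // computes
size(big); // cache hit
```

The trade-off is that lookups are by identity: two structurally equal objects are distinct cache keys.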
I have a PhD in Computer Science from Málaga, Spain. Currently, I teach developers and undergraduate/master's computer science students how to become experts in web technologies and computer science.
Line 58 is redundant if they also do line 62. It seems they kept this redundancy through the last commit to that function.
You can only set `memoize.Cache`; you can't set it in the call to `memoize` (like `memoize(fn, resolver, cacheClass)`). You also can't pass in an existing cache like `memoize(fn, resolver, cacheInstance)`, which would be more flexible and no less ergonomic.
Line 55: `memoized.cache = cache.set(key, result) || cache`. What if my custom cache's `set` returns a truthy value that isn't the cache itself, for example the `result` value? Why not just assign `cache` directly? What does this achieve? Supporting an immutable cache?
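A hypothetical illustration of this concern (the cache class below is invented for the example, not taken from lodash): if a custom cache's `set` returns something truthy that is not the cache itself, the `|| cache` assignment replaces the memoized function's cache with that value.

```javascript
// A conforming-looking custom cache whose set() returns the stored value
// instead of the cache instance (Map's set() returns the Map itself).
const brokenCache = {
  store: new Map(),
  has(k) { return this.store.has(k); },
  get(k) { return this.store.get(k); },
  set(k, v) { this.store.set(k, v); return v; }, // returns v, not `this`
};

// The lodash-style assignment `cache.set(key, result) || cache` then yields
// the stored value, so memoized.cache would become the number 42 here.
const next = brokenCache.set('a', 42) || brokenCache;
```

With a standard `Map` this is harmless, since `Map#set` returns the map; the pattern only matters for caches whose `set` returns `undefined` (or, apparently, a replacement cache).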
Obviously a combinatorial explosion, but that is my point - most of the time you won't use most of the features anywhere in your entire project, so it's better to write your own function. Maybe you want to use a WeakMap of Maps, or vice versa, instead of a resolver.
Memoization is already not a business logic requirement but a performance optimization. So why waste performance on a suboptimal generic solution instead of improving your algorithm by inlining a cache?
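As a sketch of the "WeakMap of Maps instead of a resolver" idea (assumed shape, not code from the thread): a hand-rolled two-level cache for a binary function, keyed first by object identity and then by the second argument, with no generic resolver or key serialization anywhere.

```javascript
// Inlined two-level cache: WeakMap (by object) -> Map (by second argument).
const cache = new WeakMap();
let computations = 0;

const getField = (obj, key) => {
  let inner = cache.get(obj);
  if (!inner) cache.set(obj, (inner = new Map()));
  if (inner.has(key)) return inner.get(key); // hit on both levels
  computations++;
  const result = obj[key]; // stand-in for an expensive computation
  inner.set(key, result);
  return result;
};

const user = { name: 'Ada' };
getField(user, 'name'); // computes
getField(user, 'name'); // two-level cache hit
```

When `user` becomes unreachable, its whole inner `Map` is eligible for garbage collection along with it, which a single flat `Map` keyed by a resolver string cannot offer.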
Hi Mihail!
Good catch! In fact, the lodash implementation (github.com/lodash/lodash/blob/508d...) uses a Map.
In my article, I only wanted to show the memoization technique for pure functions!
If you want to use it in a real project, I think the lodash implementation is a good option.
I have so many comments about that function :/
The closest I'd go with is something like this:
I don't think `this` support should be there by default, especially passing `this` both to the resolver and the function.
Maybe this 😂👌:
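The snippet that followed "Maybe this" is also missing from the extracted thread; in the spirit of the comment, a deliberately minimal `memoize` with no `this` forwarding and no resolver might look like (my sketch, not the original):

```javascript
// Minimal single-argument memoize: no `this`, no resolver, no cache class.
const memoize = (fn, cache = new Map()) => (arg) =>
  cache.has(arg)
    ? cache.get(arg)                              // hit
    : (cache.set(arg, fn(arg)), cache.get(arg));  // miss: compute and store

const inc = memoize((n) => n + 1);
inc(5); // computes
inc(5); // cached
```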