YCM Jason

how not to use reduce

Why not reduce?

This list intends to be a forever-growing one that hopes to collect typical patterns of reduce to avoid. Please feel free to suggest more examples!


This post is not about the space / time performance perks of not using reduce. This is all about readability.


🔴 Do not

faces.reduce((acc, face) => {
  return [...acc, mask(face)]
}, [])

✅ Do

faces.map(mask)
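
One caveat worth flagging (an illustration, not from the original post): map passes (element, index, array) to its callback, so passing a function directly only reads cleanly when that function expects a single argument.

// map calls its callback with (element, index, array), so a
// multi-parameter function like parseInt misbehaves when passed directly
['1', '2', '3'].map(parseInt)             // [1, NaN, NaN] (the index becomes the radix)
['1', '2', '3'].map(s => parseInt(s, 10)) // [1, 2, 3]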

🔴 Do not

bags.reduce((acc, bag) => {
  return [...acc, ...bag.apples]
}, [])

✅ Do

bags.flatMap(bag => bag.apples)
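
If flatMap is unfamiliar: it is equivalent to map followed by a one-level flat. A quick sketch with made-up bags:

const bags = [{ apples: [1, 2] }, { apples: [3] }]
bags.flatMap(bag => bag.apples)    // [1, 2, 3]
bags.map(bag => bag.apples).flat() // [1, 2, 3], same result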

🔴 Do not

phones.reduce((acc, phone) => {
  return isNew(phone) ? [...acc, phone] : acc
}, [])

✅ Do

phones.filter(isNew)

🔴 Do not

dogs.reduce((acc, dog) => {
  return isHappy(dog) ? acc + 1 : acc
}, 0)

✅ Do

dogs.filter(isHappy).length

🔴 Do not

people.reduce((acc, person) => ({
  ...acc,
  [person.dna]: person
}), {})

✅ Do

Object.fromEntries(
  people.map(person => [person.dna, person])
)
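
To make the shape concrete, here is a minimal sketch with made-up people (the dna values are hypothetical):

const people = [
  { dna: 'GATTACA', name: 'Ada' },
  { dna: 'TACCATA', name: 'Bob' },
]
Object.fromEntries(people.map(person => [person.dna, person]))
// => { GATTACA: { dna: 'GATTACA', name: 'Ada' },
//      TACCATA: { dna: 'TACCATA', name: 'Bob' } }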

🔴 Do not

people.reduce((acc, person) => {
  return Math.max(acc, person.age)
}, -Infinity)

✅ Do

Math.max(...people.map(person => person.age))
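
A reassuring detail: both versions even agree on an empty list, because Math.max() called with no arguments returns -Infinity, which is exactly the seed the reduce version used.

Math.max(...[].map(person => person.age)) // -Infinity, same as the reduce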

Top comments (10)

Mike Talbot ⭐ • Edited

Hmmm, I totally agree with the first few - but for the last 3, your "do not" cases use less memory and will very probably be faster - making and spreading arrays for things like a max or a "lookup maker" seems like a bad idea to me.

jsperf.com/check-the-max

In this one, I've made your "Do not" case do the same as the "Do" case:

jsperf.com/create-index-using-from...

YCM Jason

this post is not about space / time performance. do-while / for-loop would probably be the best if you wish to talk about speed / storage.

please check out the links attached at the beginning of the post. they explain why reduce is to be avoided.

Mike Talbot ⭐ • Edited

Seems like deciding to be 20% to 70% slower for readability is the bigger anti-pattern.

// In a library
function makeAnIndexOn(from) {
  return function (acc, current) {
    acc[current[from]] = current
    return acc
  }
}

// the {} seed is needed; without it, people[0] itself becomes the accumulator
const index = people.reduce(makeAnIndexOn("dna"), {})


Surely this is fast and readable

YCM Jason • Edited

I disagree.

  1. It is not 70% slower. The first example is just 15% slower. In the second example you gave, building a Map is different from building an object, so it is not a fair test. If you remove the Map test case, using spread and map is just 15% slower. jsperf.com/create-index-using-from...
  2. I always favour readability over performance unless performance becomes an issue. Even if it was 70% slower, unless it actually shows a noticeable impact on the application, I'd say it is not worth sacrificing readability. Less readable code is less optimisable, because it becomes harder for readers to understand what the code is doing.

Having said that, these are all opinions. Performance vs readability seems to have been an everlasting war. I think it comes down to what you are trying to achieve with your code.


Please ignore the following, just taking the piss 😂
If speed is all you care about and you do not need to care about maintenance / readability, then go ahead and wrap everything in one big for-loop / while-loop, and construct no functions, because functions take up memory and invoking functions has overhead. Ah, perhaps you can consider writing WebAssembly too. It is probably the fastest.

Mike Talbot ⭐

So in my first example of making an index, I used a Map to create a multi-key array lookup, and it was 70% slower. Then I realised DNA was probably "unique", haha. So yeah, 70% faster if there is repetition.

In the Map version: your "Do" is still 20% slower than the object version and, let's face it, not that much more readable. The Map version is way faster of course.

I can and do write highly readable code that is fast. I wrap things in well-named functions that work efficiently :). Yes, those functions will often hide away performance, the same way as those core native functions do.

I just care about UX, and good UX tries not to be janky etc.

YCM Jason • Edited

Readability vs performance is a hard thing to argue. Everyone has their own interpretation of what readability is, what performance is, why one is more important than the other, etc.

I hope you have read the article I linked in the beginning. .reduce is a very, very powerful function. Simply using it incurs a readability cost, because the reader needs to figure out what it does, as opposed to Object.fromEntries, where the reader knows immediately that it is constructing an object. There is nothing to "read into" to understand what it does.

I won't try to convince you because I don't think I can. But please go and try writing code this way. Maybe you will gain some new insights.

Richard Kichenama

Thank you for the post. I struggle with understanding how important readability is, coming from a 'use descriptive variables' and 'comment complex code' background. You have given me stuff to think about, but I feel more comfortable explaining what the code does and advocating for the more efficient and maintainable code. My gut reaction to the term 'readability', in the absence of those two axioms, makes me feel less like a solution architect and more like paint-by-numbers.

YCM Jason • Edited

Try it out. Try to write code in a way that "explains itself" and doesn't require comments. Optimise only the complexity (big O) of the code, not the performance perks of using for-loops / some other methods.

So if you are doing something like

xs.find(x => ys.includes(x))

This is O(n*m) and we can improve this to make it O(n + m) by doing:

const ySet = new Set(ys)  // built once: O(m)
xs.find(x => ySet.has(x)) // each .has lookup is O(1) on average

So this is the kind of things I optimise.

P.S. I use comments primarily for things that are uncommon. A good example I encountered recently: in jest, expect(-0).toBe(0) will fail the test, so I needed to do expect(-0 === 0).toBe(true). I add comments for these kinds of things, explaining why I wrote it that way.

Richard Kichenama

I believe you are right, though the implementation of Set in JavaScript should give O(1) or at most sublinear lookups with a hashing function. Based on that, I assume the following would be equivalent without the new data type.

// the {} seed matters: without it, reduce would use ys[0] as the accumulator
const jObj = ys.reduce((t, c) => { t[c] = true; return t; }, {});
xs.find(x => jObj[x]);
YCM Jason

ya except I'd not use a reduce.
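
Something like this sketch instead, reusing the names from your snippet:

const jObj = Object.fromEntries(ys.map(y => [y, true]))
xs.find(x => jObj[x])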