Discussion on: Stop mutating in map, reduce and forEach

Peter Hozák

I suspect that in a lot of cases, especially for arithmetic on larger dense arrays of numbers, the .filter().map() version will not only be MUCH simpler to read, but also much faster, because the simpler unconditional map works better on modern CPUs (I'm not sure whether current JIT compilers actually vectorize it, or whether it's just fast enough already).
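For context, a rough sketch of the two shapes being compared, using hypothetical user data (not taken from the article):

```js
const users = [
  { active: true, score: 3 },
  { active: false, score: 5 },
  { active: true, score: 8 },
];

// filter/map: two simple, unconditional passes.
const doubledActive = users
  .filter(u => u.active)
  .map(u => u.score * 2);

// reduce: a single pass, but with branching and mutation inside the callback.
const doubledActive2 = users.reduce((acc, u) => {
  if (u.active) acc.push(u.score * 2);
  return acc;
}, []);
```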

Stephan Meijer

It looks like your suspicion is correct. Despite the additional iterations, I'm unable to find large differences between filter/map and reduce.

Here is a perf comparison. There is no clear winner. Sometimes filter/map is slightly faster, sometimes reduce.

For this simple test, filter/map is faster with 10k records or fewer. With a higher number of records, reduce starts to be slightly faster. I'm not sure what's going on here, but I am sure I have to update the article: don't blindly use reduce as a perf optimization. It might perform worse.

[screenshot: benchmark results comparing filter/map and reduce]
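A minimal sketch of how such a comparison could be run (hypothetical data; JS microbenchmarks are heavily affected by JIT warm-up, so treat any numbers as rough):

```js
const records = Array.from({ length: 100_000 }, (_, i) => ({
  active: i % 2 === 0,
  value: i,
}));

console.time('filter/map');
records.filter(r => r.active).map(r => r.value * 2);
console.timeEnd('filter/map');

console.time('reduce');
records.reduce((acc, r) => {
  if (r.active) acc.push(r.value * 2);
  return acc;
}, []);
console.timeEnd('reduce');
```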

Marco Moretti • Edited

Both examples are O(n), so they should perform about the same. Only on very large inputs will you see differences.

Stephan Meijer

True. I can't believe I missed that. Jacob Paris makes it dead simple in a tweet:

Looping once and doing three things on each item performs the same as doing one thing on each item but looping three times.

It's time I return to school.
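A small illustration of the point in the tweet, with placeholder operations:

```js
const items = [1, 2, 3, 4];
const a = x => x + 1;
const b = x => x * 2;
const c = x => x ** 2;

// One loop doing three things per item: 3n operations.
for (const item of items) {
  a(item);
  b(item);
  c(item);
}

// Three loops doing one thing per item each: still 3n operations.
for (const item of items) a(item);
for (const item of items) b(item);
for (const item of items) c(item);
```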

Matthias Perktold

I also expected the filter+map version to be slower, not because of the extra loop, but because of the intermediate array that needs to be created. But maybe the browser can optimize it away.
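To make the concern concrete, a sketch with hypothetical numbers: filter() allocates an intermediate array that a single hand-written pass avoids.

```js
const numbers = [1, 2, 3, 4, 5, 6];

// filter() returns a new array that map() then iterates over,
// so the filtered values are materialized before being transformed.
const doubled = numbers
  .filter(n => n % 2 === 0) // intermediate array allocated here
  .map(n => n * 2);

// A single pass avoids that intermediate array entirely.
const doubled2 = [];
for (const n of numbers) {
  if (n % 2 === 0) doubled2.push(n * 2);
}
```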

Stephan Meijer

I haven't profiled it on that level, so maybe there is a difference in memory usage. I honestly don't know. Browsers have changed quite a bit over the last few years. So I wouldn't be surprised if the intermediate array has been optimized away.

It would be interesting to compare the memory profiles though.
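A very rough way to eyeball that in Node.js, assuming it is run with `node --expose-gc` so the heap can be settled between runs (heap snapshots in DevTools would be more reliable):

```js
const numbers = Array.from({ length: 1_000_000 }, (_, i) => i);

function measure(label, fn) {
  global.gc?.(); // only available with --expose-gc
  const before = process.memoryUsage().heapUsed;
  const result = fn();
  const after = process.memoryUsage().heapUsed;
  console.log(label, ((after - before) / 1024 / 1024).toFixed(2), 'MB', result.length);
}

measure('filter/map', () => numbers.filter(n => n % 2 === 0).map(n => n * 2));
measure('reduce', () =>
  numbers.reduce((acc, n) => {
    if (n % 2 === 0) acc.push(n * 2);
    return acc;
  }, [])
);
```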

Mihai Stancu • Edited

In today's software the bottleneck usually isn't the CPU. Most modern applications hit many other bottlenecks (RAM, sync I/O, async I/O pooling, etc.) long before they need to consider optimizing for CPU time.

And "since the dawn of time man hath prematurely optimized the wrong thing" meaning that a prematurely optimized application will be filled with "clever" optimizations that don't address the real optimization problem and architecturally make it harder to address.

So the question is: what would you rather do?
a) optimize a readable but unoptimized application, or
b) find the correct optimization to add (to a pile of existing optimizations) in an unreadable, (partially) optimized application**

** Terms and conditions may apply. Sometimes you first have to undo existing optimizations, either to make clear what the code is intended to do before changing it, or to expose the piece that truly needs optimizing but that the architecture makes very hard to reach.