Hey everybody! This is my first-ever post!
In this article, I will be talking about the reduce method of JavaScript arrays, which I feel is someti...
Not looked in any great detail - but something a bit odd in your `forEach` test in your code... it isn't using `forEach`!!

You might also want to compare just using a normal `for` loop and accessing the array using a numeric index. That can very often be the fastest method if you're really concerned about speed.

For benchmarking, you might want to consider a pre-existing tool like perf.link:
Perflink | JS Benchmarks - JavaScript performance benchmarks you can share via URL.
I made a test of the different methods here - and using the `index` method is by far the fastest, with `forEach` and `reduce` jostling for 2nd place (sometimes one is faster, sometimes the other) on Chrome (desktop), with Firefox (desktop) consistently putting `reduce` in 2nd place followed by `forEach`.

After the first run, №4 (`filter.map`) consistently outperforms even indexing! Go figure!
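For reference, the four variants being compared presumably look roughly like this. This is only a sketch: the actual Perflink snippets and workload aren't reproduced here, and the names and the even-doubling logic are made up for illustration.

```js
// Assumed input; the real benchmark's data may differ.
const data = Array.from({ length: 1000 }, (_, i) => i);

// 1. reduce: filter and transform in a single pass
const viaReduce = data.reduce((acc, n) => {
  if (n % 2 === 0) acc.push(n * 2);
  return acc;
}, []);

// 2. forEach: same logic, pushing into an outer array
const viaForEach = [];
data.forEach((n) => {
  if (n % 2 === 0) viaForEach.push(n * 2);
});

// 3. index: a plain for loop with numeric indexing
const viaIndex = [];
for (let i = 0; i < data.length; i++) {
  if (data[i] % 2 === 0) viaIndex.push(data[i] * 2);
}

// 4. filter.map: two passes, two intermediate arrays
const viaFilterMap = data.filter((n) => n % 2 === 0).map((n) => n * 2);
```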
Turns out it was due to the tiny array. With large input data, index absolutely dominates to this day.
And iterators (I changed №4 to the below for this picture) are extremely unoptimized in my mobile V8:
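The snippet and screenshot aren't preserved here, but a hypothetical iterator-driven №4 might look something like this (a reconstruction for illustration, not the original code):

```js
const data = Array.from({ length: 1000 }, (_, i) => i);

// Hypothetical variant: pull values through the array's iterator
// manually instead of using a numeric index.
const viaIterator = [];
const it = data[Symbol.iterator]();
for (let step = it.next(); !step.done; step = it.next()) {
  if (step.value % 2 === 0) viaIterator.push(step.value * 2);
}
```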
Hi!

You're right! That really isn't a `forEach`, but more of a `for in`. I'll make that correction.

Also, thank you for sharing the index alternative and the Perflink tool!
I guess I didn't think of using the for loop with the index since it's not something I would typically use.
Never prematurely optimize - you might save 3ms processing 100,000 numbers, but the next developer who looks at the code will spend three minutes trying to figure out what's supposed to happen in your reduce code. That's going to cost the business a whole lot more than the difference in processing time, especially if they don't figure it out and introduce a bug.

In a real application, those milliseconds will be nothing compared to the latencies caused by network and database calls, but the clarity gained from using filter/map is always priceless.
All of that said, your observations around the spread operator are the real gem in this article and worth sharing. Keep up the good work but only fix the problems that actually exist :)
I'm not sure about the TL;DR "use reduce instead of map and filter".

The whole point of using map and filter is that you separate the logic into parts: the what (the condition that determines what you need) and the how (the translation).

For a tiny benefit in an abnormal situation, you made the code much less intuitive at a glance. I don't think this is a good blanket takeaway.
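To illustrate the point (a minimal sketch with made-up data; `users` and the active-name logic are just an example):

```js
const users = [
  { name: 'Ada', active: true },
  { name: 'Bob', active: false },
];

// The "what" and the "how" read as separate steps:
const activeNames = users
  .filter((u) => u.active) // what: keep active users
  .map((u) => u.name);     // how: extract the name

// The single-pass reduce folds both concerns together:
const activeNamesReduce = users.reduce((acc, u) => {
  if (u.active) acc.push(u.name);
  return acc;
}, []);
```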
Writing readable and maintainable code should be the first port of call unless time and/or memory optimisation is a specific requirement of the task. Over-optimising for time on a project where it doesn't matter whether something takes 200ms or 2 minutes, at the expense of making you or your successor work longer to make sense of it in the future, is not ideal.
Also, why did you use `for in`? It's not the most ideal one; it has some pitfalls, unless that has been corrected in recent updates. Either use `for of` or a normal `for` loop (if `arr.forEach` is out of the question).
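For example, the classic `for in` pitfalls on arrays (illustrative snippet):

```js
const arr = [10, 20, 30];
arr.extra = 'surprise'; // a non-index enumerable property

for (const key in arr) {
  // keys are STRINGS ('0', '1', '2', 'extra'), not numbers,
  // and the non-index property is visited too
  console.log(typeof key, key, arr[key]);
}

for (const value of arr) {
  // for...of visits only the element values, in order
  console.log(value); // 10, 20, 30
}
```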
Great article! You made reduce sound so much simpler! I also liked how you called out that "reduce is not for sums only", because yes the naming of reducer, total/acc can make it so easy to misunderstand
I find `.reduce` hard to reason about, and I've too often seen code where people create new arrays and objects inside `.reduce` instead of mutating the accumulated value.

Hence we made an agreement to default to `for..of` loops. Maybe it does end up being a couple more lines of code, but reasoning about the flow is easier.
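A sketch of the pattern in question (the doubling example is made up for illustration):

```js
const nums = [1, 2, 3, 4];

// Anti-pattern: a fresh array is allocated on every iteration,
// turning a linear pass into quadratic work
const doubledSlow = nums.reduce((acc, n) => [...acc, n * 2], []);

// Mutating the accumulator keeps it linear
const doubledFast = nums.reduce((acc, n) => {
  acc.push(n * 2);
  return acc;
}, []);

// The for..of default: a couple more lines, but the flow is explicit
const doubled = [];
for (const n of nums) {
  doubled.push(n * 2);
}
```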
If you really want to do the kind of operation that does many things in a single iteration, check out generators. You can build generator versions of map and filter, then use those versions to lazily compute values as you iterate through the items. That achieves the logic separation of map and filter while keeping performance and memory usage optimized.
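A minimal sketch of that idea (the `filterGen`/`mapGen` names are made up for illustration):

```js
// Lazy building blocks: nothing runs until the result is consumed
function* filterGen(iterable, predicate) {
  for (const item of iterable) {
    if (predicate(item)) yield item;
  }
}

function* mapGen(iterable, transform) {
  for (const item of iterable) {
    yield transform(item);
  }
}

// The "what" and the "how" stay separate, but each item flows
// through both steps in a single pass with no intermediate arrays
const evensDoubled = mapGen(
  filterGen([1, 2, 3, 4, 5], (n) => n % 2 === 0),
  (n) => n * 2
);

for (const value of evensDoubled) {
  console.log(value); // 4, 8
}
```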
TL;DRs generally appear at the beginning of articles.
Using `reduce` instead of `map` or `filter` robs your code of a clear intent narrative, which is arguably more important in the majority of cases than performance gains.

It also doesn't help that, in my experience, many developers seem to have difficulty parsing what a reduce function is doing.
That is slower than a normal for loop:
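The snippet this comment refers to isn't shown here; a plain indexed loop of the kind being suggested would look something like this (illustrative only):

```js
const data = [1, 2, 3, 4, 5];

// Plain indexed for loop: numeric index access, no callbacks
let sum = 0;
for (let i = 0; i < data.length; i++) {
  sum += data[i];
}
```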
Nobody uses `for in` for arrays; it doesn't make any sense.

This one is marginally faster.
Not really marginally if it gets JITed
You lost me as soon as you talked about accessing arrays with `for..in` rather than `for..of`.