Performance is the heartbeat of any JavaScript application, wielding a profound impact on user experience and overall success. It's not just about ...
You should distinguish between application performance and JavaScript performance. These are NOT the same:
Do you think cutting the time of the smallest part (the JavaScript routines) will cause any measurable performance boost?
Actually continuously updating the vdom (rerendering) is the biggest performance bottleneck for react apps (other than fetching data). The proposed optimizations help to decrease the number of rerenders as well as reduce the cost of rerendering.
If you look at React's history, the VDOM was invented to free developers from the burden of writing performant code. So it's really an interesting question whether streamlining your JavaScript will have much impact at all. If so, React did a bad job.
It was invented to save time on browser DOM search and updating because DOM search methods are slower than usual JS tree/array search.
And VDOM's updating is based on a basic, fast comparison: the `===` operator instead of deep equality, which is way slower. It's easier to delegate reference updating to devs than to invent a fast deep equality.

VDOM was invented to make DOM updates more efficient, and it did. How often you update the VDOM and how long the VDOM update takes is still up to the developer. If you loop over a million table rows on every render, obviously it's going to cause problems.
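The `===` vs. deep equality point can be made concrete with a small hypothetical sketch (the `deepEqual` helper below is illustrative, not React's actual implementation):

```javascript
// Reference checks (===) are O(1); deep comparison has to walk the structure.
const a = { items: [1, 2, 3] };
const b = { ...a }; // new object, but the nested array reference is reused

console.log(a === b);             // false — different top-level references
console.log(a.items === b.items); // true — same array reference

// A naive deep equality check, for contrast (hypothetical helper):
const deepEqual = (x, y) =>
  x === y ||
  (typeof x === "object" && x !== null &&
    typeof y === "object" && y !== null &&
    Object.keys(x).length === Object.keys(y).length &&
    Object.keys(x).every((k) => deepEqual(x[k], y[k])));

console.log(deepEqual(a, b)); // true, but at the cost of visiting every key
```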
You are totally right. But that means you should avoid looping unnecessarily over millions of table rows, not just loop faster!
Hey! Great tips! There are other methods I like too, such as `.trim()` and `.some()`.

This gave me an idea for a library: one that implements all the common array utils but uses the Builder pattern, so you can chain the util functions together and they run in a single pass.
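That library idea could be sketched roughly like this (all names here, such as `ArrayPipeline`, are hypothetical):

```javascript
// A tiny builder that queues map/filter steps and applies them all
// in a single pass over the array — no intermediate arrays.
class ArrayPipeline {
  constructor() { this.steps = []; }
  map(fn)    { this.steps.push((x) => ({ keep: true,  value: fn(x) })); return this; }
  filter(fn) { this.steps.push((x) => ({ keep: fn(x), value: x }));     return this; }
  run(arr) {
    const out = [];
    outer: for (const item of arr) {
      let value = item;
      for (const step of this.steps) {
        const r = step(value);
        if (!r.keep) continue outer; // element rejected by a filter step
        value = r.value;
      }
      out.push(value);
    }
    return out;
  }
}

// One pass over the data, however many steps are chained:
const result = new ArrayPipeline()
  .filter((n) => n % 2 === 0)
  .map((n) => n * 10)
  .run([1, 2, 3, 4, 5]);

console.log(result); // [20, 40]
```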
Anyhow great article and thanks for the tip!
I completely agree with that `Promise.then` point; async/await syntax is fantastic and widely compatible across browsers and Node environments, so it should be prioritized.
When it comes to naming and organizing components, I'm a fan of atomic design. Its structure is exceptional, clear, and easily shareable within a team.
Your Telegram channel is in Russian! So I can't understand it, unfortunately.
Hey!

I've developed a bot that automatically translates my posts into popular languages so anyone can read them.
It's sometimes hard to choose which language to use when you're trilingual :D
Personally I steer clear of using `reduce` for all but the simplest cases because it gets unreadable pretty quickly.

Using `.reduce` might lead to unreadable code, particularly when the reducer function grows larger. However, the reducer function can be refactored into a more readable form.

Great tips, great article, thanks!
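One way that refactoring might look, using made-up order data and a named reducer function:

```javascript
// Hypothetical example: grouping orders by status with reduce.
const orders = [
  { id: 1, status: "paid",    total: 40 },
  { id: 2, status: "pending", total: 25 },
  { id: 3, status: "paid",    total: 10 },
];

// Inlined, a reducer body like this quickly becomes hard to scan.
// Extracting and naming it makes the intent obvious at the call site:
function addOrderToGroup(groups, order) {
  (groups[order.status] ??= []).push(order);
  return groups;
}

const byStatus = orders.reduce(addOrderToGroup, {});
console.log(byStatus.paid.length, byStatus.pending.length); // 2 1
```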
Thank you for your feedback, @lnahrf ! Are there any JavaScript antipatterns you're familiar with and dislike?
Whatever the number of loops, you'll do the same exact operations!
This code:
Is almost exactly the same as
Which again is the same as (assuming order of operations doesn't matter):
The only thing you're saving is a counter. It's peanuts.
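The original snippets from this comment aren't preserved here, but the argument can be illustrated with a hypothetical example: both versions below do the same per-element work, and the fused loop only saves the second counter.

```javascript
const nums = [1, 2, 3, 4, 5, 6];

// Two passes, one operation per iteration each:
const chained = nums.filter((n) => n % 2 === 0).map((n) => n * n);

// One pass, two operations per iteration:
const single = [];
for (const n of nums) {
  if (n % 2 === 0) single.push(n * n);
}

console.log(chained); // [4, 16, 36]
console.log(single);  // [4, 16, 36]
```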
Compare `reduce` with `for of`, and you would realize that `for of` performs faster than `reduce`. I would give least preference to `reduce` over `for of`; `filter`, `map`, and `reduce` can all be done in a single `for of`.

The choice between array methods like `map`, `filter`, and `reduce` versus explicit loops often comes down to code style. While both approaches achieve similar results, the array methods offer clearer readability: `map` transforms each element into a new array, `filter` removes specific elements, and `reduce` accumulates values. A loop like `for`, by contrast, can perform a variety of tasks, so the reader has to spend time understanding the logic within the loop's body.

I don't believe this to be accurate.
For starters, just because you're looping once doesn't mean the total number of operations changed. In your reduce function, you have two operations per iteration. In the chained methods there are two loops with one operation per iteration each.
In both cases the time complexity is O(2N), which is O(N). That means their performance will be the same except for absolutely minor overhead from having two array methods instead of one.
Even if we assume you managed to reduce the number of operations, you can't reduce it a full order of magnitude. At best (if at all possible) you can make your optimization be O(N) while the chained version is O(xN), and for any constant x the difference will be quite minimal and only observable in extreme cases.
This is just a theoretical analysis of the proposal here. I haven't done any benchmarks, but my educated guess is that they will be in line with this logic.
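A rough way to check that guess yourself (a quick sketch, not a rigorous benchmark; timings vary by machine and engine):

```javascript
// Compares chained filter+map against a single-pass reduce on 1M numbers.
const data = Array.from({ length: 1_000_000 }, (_, i) => i);

function time(label, fn) {
  const start = performance.now();
  const result = fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
  return result;
}

const a = time("filter + map ", () =>
  data.filter((n) => n % 2 === 0).map((n) => n * 2));

const b = time("single reduce", () =>
  data.reduce((acc, n) => {
    if (n % 2 === 0) acc.push(n * 2);
    return acc;
  }, []));

console.log(a.length === b.length); // both produce the same result
```

Expect the numbers to be within the same order of magnitude, which is the point being made above.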
I made a very similar comment, it pleases me to see it here already :)
I'm baffled by the number of people who seem to think that if they cram more operations into one loop they are somehow gonna reduce overall complexity in any meaningful way... But no: it'll still be O(n) where n is the length of the array.
Many people don't understand what "reduce" does, it can thus make more sense if you're in a team to use a map and a filter, which is extremely straightforward to understand and not a performance penalty.
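For example, here is the same transformation written both ways (the sample data is made up):

```javascript
const users = [
  { name: "Ada",  active: true  },
  { name: "Bob",  active: false },
  { name: "Cleo", active: true  },
];

// reduce: one pass, but the reader has to trace the accumulator
const namesViaReduce = users.reduce(
  (acc, u) => (u.active ? [...acc, u.name] : acc),
  []
);

// filter + map: two passes, one obvious job each
const namesViaChain = users.filter((u) => u.active).map((u) => u.name);

console.log(namesViaReduce); // ["Ada", "Cleo"]
console.log(namesViaChain);  // ["Ada", "Cleo"]
```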
I love obscure one-liners as much as any TypeScript geek with a love for functional programming, but I know I have to moderate myself when working within a team.
Shit, is big O complexity still studied in computer science?
The code you start off with is readable, but as you optimize it, it becomes less readable, since you're fusing many operations into a single loop. Consider using a lazy collection or a streaming library, which applies these loop-fusion optimizations for you, so you don't have to worry about intermediate collections being generated and can keep your focus on readable, single-purpose code. Libraries like lazy.js and Effect.ts help you do this.
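The loop-fusion idea those libraries offer can be sketched with plain generators (this illustrates the pattern, not the libraries' actual APIs):

```javascript
// Lazy map/filter: no intermediate arrays are built; each element flows
// through the whole pipeline before the next one is pulled.
function* map(fn, iterable) {
  for (const x of iterable) yield fn(x);
}
function* filter(fn, iterable) {
  for (const x of iterable) if (fn(x)) yield x;
}

const pipeline = map((n) => n * n, filter((n) => n % 2 === 0, [1, 2, 3, 4]));
const squares = [...pipeline]; // work only happens here, in one fused pass
console.log(squares); // [4, 16]
```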
While doing more operations in one loop is gonna save you incrementing a counter multiple times, you'll still have O(n) complexity where n is the size of your array, so you will most likely not save any measurable amount of resources at all.
Sometimes it is clearer to write `map` then `filter` just for readability, because not everyone understands what `reduce` does, unfortunately.
Code that cannot be read fluently by the whole team is in my experience way more costly than a few sub-optimal parts in an algorithm that takes nanoseconds to complete.
You should study the subject of algorithmic complexity if you don't know much about this concept.