Performance is the heartbeat of any JavaScript application, with a profound impact on user experience and overall success. It's not just about speed: it's about responsiveness, fluidity, and efficiency. A well-performing JS app loads faster, interacts more smoothly, and keeps users more engaged. Users expect a seamless experience, and optimizing performance ensures that your app meets those expectations. Improved performance leads to better SEO rankings, higher conversion rates, and increased user retention. It's the cornerstone of user satisfaction, which makes it imperative to prioritize and continually optimize performance in your JavaScript applications.
Let's get to the problems!
Problem #1 - Combining Array Methods
I've reviewed numerous PRs (Pull Requests) in my life, and there's one issue I've seen many times. It happens when someone chains .map with .filter or .reduce in any order, like this:
arr.map(e => f(e)).filter(e => test(e)).reduce(...)
When you chain these methods, each one iterates over all of the elements, so the array is traversed several times. For a small amount of data it doesn't really make any difference, but as the array grows, the computations take longer.
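To see the extra passes, here's a minimal sketch (f and test are hypothetical stand-ins) that counts how many times the callbacks run:

const arr = [1, 2, 3, 4, 5]
let calls = 0

// hypothetical transform and predicate, instrumented to count invocations
const f = (e) => { calls++; return e * 2 }
const test = (e) => { calls++; return e > 4 }

arr.map(e => f(e)).filter(e => test(e))
console.log(calls) // 10: two full passes over 5 elements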
The easy solution
Use only the reduce method. reduce is the ultimate solution when you need mapping and filtering at the same time. It goes through the array only once, performing both operations simultaneously and saving time and iterations.
For example, this:
arr.map(e => f(e)).filter(e => test(e))
will transform into:
arr.reduce((result, element) => {
  const mappedElement = f(element)
  if (test(mappedElement)) result.push(mappedElement)
  return result
}, [])
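A quick, self-contained example (the f and test here are hypothetical):

const f = (e) => e * 2           // hypothetical mapping function
const test = (e) => e % 4 === 0  // hypothetical predicate

const result = [1, 2, 3, 4].reduce((acc, element) => {
  const mappedElement = f(element)
  if (test(mappedElement)) acc.push(mappedElement)
  return acc
}, [])

console.log(result) // [4, 8]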
Problem #2 - Useless Reference Update
This problem shows up if you're using React.js or any other library where immutability matters for reactivity and re-renders. Creating a new reference by spreading is a common way to update a component's state/property:
...
const [arrState, setArrayState] = useState()
...
...
setArrayState([...newArrayState])
...
However, spreading the result of .map, .filter, and similar methods into a new array is useless: these methods already return a new array with a new reference, so the spread just creates yet another copy of it:
...
const [arrState, setArrayState] = useState()
...
...
setArrayState([...arrState.map(e => f(e))])
...
The solution
Just remove the useless spread when you use any of the following methods, since each of them already returns a new array (see the sketch after the list):
- .map
- .filter
- .reduce - with new array accumulator-result
- .reduceRight - with new array accumulator-result
- .flat
- .flatMap
- .concat
- .toReversed
- .toSorted
- .toSpliced
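Applied to the earlier useState example, the fix is just dropping the spread. A sketch:

// before: redundant, because .map already returns a brand-new array
setArrayState([...arrState.map(e => f(e))])

// after: the .map result is itself a fresh reference,
// so React still detects the state change
setArrayState(arrState.map(e => f(e)))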
You are welcome to share other JS antipatterns you know in the comments; I'll add them all to the post!
Be sure to follow my dev.to blog and Telegram channel; there will be even more content soon.
Top comments (44)
You should distinguish application performance from JavaScript performance. These are NOT the same:
Do you think cutting the time of the smallest part (the JavaScript routines) will cause any measurable performance boost?
Actually, continuously updating the VDOM (re-rendering) is the biggest performance bottleneck for React apps (other than fetching data). The proposed optimizations help to decrease the number of re-renders as well as reduce the cost of re-rendering.
If you look at React's history, the VDOM was invented to free developers from the burden of writing performant code. So it's really an interesting question whether streamlining your JavaScript will have much impact at all. If so, React did a bad job.
It was invented to save time on browser DOM search and updating because DOM search methods are slower than usual JS tree/array search.
And the VDOM's updating is based on the basic, fast === comparison instead of deep equality, which is way slower; it's easier to delegate reference updating to devs than to invent a fast deep equality.

The VDOM was invented to make DOM updates more efficient, and it did. How often you update the VDOM and how long the VDOM update takes is still up to the developer. If you loop over a million table rows on every render, it's obviously going to cause problems.
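A minimal sketch of that difference (hypothetical plain objects, not React internals): reference comparison is O(1), while deep equality has to walk every nested value:

const state = { items: [1, 2, 3] }
const next = { ...state, items: [...state.items, 4] }

// O(1): compares references only, which is what reconciliation relies on
console.log(state === next) // false: a new reference signals a change

// a deep comparison must visit every nested value, far more work
console.log(JSON.stringify(state) === JSON.stringify(next)) // false, but O(n)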
You are totally right. But that means you should avoid looping unnecessarily over millions of table rows, not just loop faster!
Well yes, but sometimes it cannot be avoided, and then it's better to make it as fast as possible.
With HMR my rebuilds are usually really quick (like 4-5 seconds for the initial build, then < 1 s for most changes to modules not too high up the dependency tree).
Let me give you an example:
The DOM of this page is created completely dynamically with JavaScript; the main content is generated by a custom markdown parser, which is not optimized in any way. Building the DOM takes about 90 ms on the first run, and rebuilding the whole DOM takes about 15 ms on my laptop with Chrome. So we are below 0.1 s either way.
If you do some measurements with Chrome dev tools, the page load takes about 2.1 s (ok, I'm on a train now), but even on a faster network this will be more than 10 times slower than all JS actions. So even if I could speed up my code by 50%, it would not make any difference for the user.
Yeah, I don't think there is any debate about this in this thread. Minuscule optimisations like those proposed in the post are never gonna yield any measurable improvement, besides making the code hard to read for some people.
I'm not sure everybody is aware of the facts. It's the same with bundlers. The number of files and the order in which they are loaded is often much more important than the total file size. Nevertheless, there is a whole faction of developers squeezing every possible byte out of their library to win the contest of "who has the smallest library". I've spent days decoding this highly optimized code to find some hidden bugs or simply understand what the code does. And what good is having the world's smallest framework, with less than 1 kB of code, if your hero image is 20 times larger...
Couldn't agree more, and I cannot but partly recognize my younger self in your description.
Long ago for instance I had tasked myself with writing the shortest possible Promise library that passes the A+ test suite... I think my entry is quite brief indeed, but god, it's worse than regexps in its write-only nature, totally unmaintainable.
Rebuild !== rerender
They're not the same, but you cannot have good application performance with bad JS performance. When you deal with performance issues, the cause could be anywhere, including the antipatterns I've described in my post. It's beneficial to proactively avoid these issues before delving into debugging, freeing up your time to check other performance factors.
Thank you for your insights and the article regarding page performance. I'm planning to craft more comprehensive articles, both general and specific, targeting the performance pain points that affect most developers.
I was gonna say I couldn't remember a website where the JS seems ludicrously slow, then I remembered EC2 Console and Google Developer Console: they both suck so much.
Hmm, I like this idea. Since the API is taking 3 s to load and I can't do anything to optimize it, let's do some extra iterations on the frontend just for fun 😊
Hi Ediur,
not a bad idea. Call it AI (Artificial Iterations) and it will be the next big thing!
Agreeing with Eckehard. Optimisation and priorities are very very different in backend and in frontend. Backend and frontend are two very different beasts.
It is not only a question of priority. The reasons for a delay may be very different.
If an application needs to wait for some data, we can try to use some kind of cache if the result is used more than once. Or we can change the way the data is requested. If you need to request a database schema via SQL before requesting the data, you get a lot of traffic back and forth, each round trip causing an additional delay. If you run the same operation on your database server and ask only for the result, you will get the result much faster.
But if an operation needs the same time to finish a million loops, you would probably need a very different strategy. In that case, optimizing your code might help, but it is probably better to find a strategy that does not require so many operations at all.
To know if an optimization helps at all, it is necessary to know what causes the delay. Without this knowledge, you can waste a lot of time optimizing things that are fast anyway.
In theory. But typically one shouldn't have a million loops in the frontend; that's normally the backend's job. And that's my entire point. If one finds themselves having to optimise that kind of thing on the frontend, then I think they should take a step back and re-evaluate; they've probably got their frontend/backend division wrong.
imho... me thinks
;o)
But things are shifting. There are a lot of SPAs out there that just require some data and have no backend.
Aah, a lot of SPAs with a lot of data and million-iteration loops, and no backend. I really know nothing about those, I don't even know an example case (feel free to share a URL), but I'll take your word for it ;o)
See here, here, here, here, here, here, here or here.
This is from the Vue documentation:
If you just have a database running on a server, I would not call this a "backend". React or Vue can run completely in the browser, so you just need a webserver and some API endpoints to get your data. An application like this would not run much faster if you tried to put the image processing on the server, but it's worth seeing what causes the delay.
Thanks a lot, I will read 👍
dev.to/efpage/comment/2b22f
Realistically, they're getting the data from some backend somewhere, just not necessarily one maintained by the maker of the SPA. There are a lot of public APIs out there that can be consumed (which is how an SPA could have no backend but still huge data sets), and theoretically filters can be passed in requests to limit or prevent large data sets in the response.
Hey! Great tips! There are other methods I like too, such as .trim() and .some().
This gave me an idea for a library - a library that implements all the common array utils using the Builder pattern, so you can chain the util functions together and they will all run in a single pass.
Anyhow great article and thanks for the tip!
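For what it's worth, a rough sketch of such a lazy, single-pass chain (all names here are hypothetical):

// hypothetical lazy chain: operations are queued, then fused into one pass
class Chain {
  constructor(arr) {
    this.arr = arr
    this.ops = [] // queued { type, fn } operations
  }
  map(fn) { this.ops.push({ type: 'map', fn }); return this }
  filter(fn) { this.ops.push({ type: 'filter', fn }); return this }
  run() {
    const result = []
    outer: for (let value of this.arr) {
      for (const { type, fn } of this.ops) {
        if (type === 'map') value = fn(value)
        else if (!fn(value)) continue outer // filter failed: skip this element
      }
      result.push(value)
    }
    return result
  }
}

// one pass over the data, however many operations were chained
console.log(new Chain([1, 2, 3, 4]).map(e => e * 2).filter(e => e > 4).run()) // [6, 8]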
I completely agree with that Promise.then point; async/await syntax is fantastic and widely compatible across browsers and Node environments, so it should be prioritized.
When it comes to naming and organizing components, I'm a fan of atomic design. Its structure is exceptional, clear, and easily shareable within a team.
Your Telegram channel is Russian! So I can't understand it, unfortunately.
Hey!

I've developed a bot which automatically translates my posts into popular languages, so anyone can read them.
It's sometimes hard to choose which language to use when you're trilingual :D
Thank you for your feedback, @lnahrf ! Are there any JavaScript antipatterns you're familiar with and dislike?
Great tips, great article, thanks!
Personally I steer clear of using reduce for all but the simplest cases, because it gets unreadable pretty quickly.

Using .reduce might lead to unreadable code, particularly when the reducer function grows larger. However, the reducer function can be refactored into a more readable form.

Compare reduce with for of, and you will realize that for of performs faster than reduce. I would give reduce the least preference over for of; filter, map, and reduce can all be done in a single for of.

The choice between array methods like map, filter, and reduce versus explicit loops often depends on your code style. While both approaches achieve similar results, the array methods offer clearer readability: map transforms the data, filter removes specific elements from an array, and reduce accumulates values. Loops like for, however, might require more effort to grasp, since they can perform a variety of tasks, making it necessary to spend time understanding the logic within the loop's body.

Whatever the number of loops, you'll do the same exact operations!
This code:
Is almost exactly the same as
Which again is the same as (assuming order of operations doesn't matter):
The only thing you're saving is a counter. It's peanuts.
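For illustration, a single for...of pass doing the same map-and-filter work as the chained and reduce versions above (hypothetical f and test again):

const f = (e) => e * 2    // hypothetical transform
const test = (e) => e > 4 // hypothetical predicate

// one explicit pass, equivalent to arr.map(e => f(e)).filter(e => test(e))
const result = []
for (const element of [1, 2, 3, 4]) {
  const mapped = f(element)
  if (test(mapped)) result.push(mapped)
}
console.log(result) // [6, 8]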