Arthur Kh

Posted on • Originally published at masterarthur.xyz

Optimize your JS code in 10 seconds

Performance is the heartbeat of any JavaScript application, with a profound impact on user experience and overall success. It's not just about speed: it's about responsiveness, fluidity, and efficiency. A well-performing JS app loads faster, delivers smoother interactions, and keeps users more engaged. Users expect a seamless experience, and optimizing performance ensures that your app meets those expectations. Improved performance leads to better SEO rankings, higher conversion rates, and increased user retention. It's the cornerstone of user satisfaction, which makes it imperative to prioritize and continually optimize performance in your JavaScript applications.

Let's get to the problems!

Problem #1 - Combination of Array Functions

I've reviewed numerous PRs (pull requests) in my career, and there is one issue I've seen many times: someone combines .map with .filter or .reduce, in any order, like this:

arr.map(e => f(e)).filter(e => test(e)).reduce(...)

When you chain these methods, each one iterates over the entire array, so the elements are traversed multiple times. For a small amount of data this makes no real difference, but as the array grows, each extra pass (and each intermediate array it allocates) adds up.

The easy solution

Use a single reduce call. reduce can do the mapping and the filtering at the same time: it goes through the array only once and performs both operations in one pass, saving iterations and intermediate allocations.

For example, this:

arr.map(e => f(e)).filter(e => test(e))

will transform into:

arr.reduce((result, element) => {
    // map step
    const mappedElement = f(element)
    // filter step: keep only elements that pass the test
    if (test(mappedElement)) result.push(mappedElement)
    return result
}, [])

Problem #2 - Useless Reference Update

This problem shows up if you're using React.js or any other library where immutability matters for reactivity and re-renders. Creating a new reference by spreading is a common move when you update a component's state or props:

...
const [arrState, setArrayState] = useState([])
...
...
setArrayState([...newArrayState])
...

However, spreading the result of .map, .filter, or any other method that already returns a new array is useless: the method's return value is already a fresh array with a new reference, so the spread only creates yet another copy of it:

...
const [arrState, setArrayState] = useState([])
...
...
// the spread here is redundant: .map already returns a new array
setArrayState([...arrState.map(e => f(e))])
...

The solution

Just remove the redundant spread when the value you pass is already a newly created array:
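
setArrayState(arrState.map(e => f(e)))

.map already returns a brand-new array, so the state still receives a new reference and the re-render triggers exactly as before.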

You are welcome to share other JS antipatterns you know in the comments; I'll add them all to the post!

Be sure to follow my dev.to blog and Telegram channel; there will be even more content soon.

Top comments (44)

Eckehard

You should distinguish between application performance and JavaScript performance. They are NOT the same:

  • Usually, an application needs some data. Fetching that data may cause a significant delay.
  • Then, JavaScript can process this data, which is usually the fastest part of the show. This is where your "optimization" takes place!
  • After that, maybe you cause complex state changes. It may take much more time to handle all the dependencies in your app. Luckily, all this is done on the VDOM, so it will not cause any further delay.
  • If you use React, all your changes have to be diffed against the real DOM.
  • After the DOM is ready, the browser has to render it. This is usually the most time-consuming part.
  • If you change some resources, maybe the browser needs to fetch some new images. Even that cannot start before the DOM is ready. So, the user has to wait for all elements to be loaded.

Do you think cutting the time of the smallest part (the JavaScript routines) will cause any measurable performance boost?

Rense Bakker

Actually, continuously updating the VDOM (rerendering) is the biggest performance bottleneck for React apps (other than fetching data). The proposed optimizations help to decrease the number of rerenders as well as reduce the cost of each rerender.

Eckehard

If you look at React's history, the VDOM was invented to free developers from the burden of writing performant code. So it's really an interesting question whether streamlining your JavaScript will have much impact at all. If it does, React did a bad job.

Arthur Kh

It was invented to save time on DOM lookups and updates, because searching the browser DOM is slower than searching a plain JS tree or array.
And VDOM updating relies on the basic, fast === reference comparison instead of deep equality, which is far slower; it's easier to delegate reference updates to devs than to invent a fast deep-equality check.
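
A minimal illustration (not React's actual code) of why the reference check is so much cheaper:

const prev = { items: [1, 2, 3] }
const next = { items: prev.items } // same reference reused

prev.items === next.items // true: a single O(1) check, nothing to re-render
// deep equality would have to walk every element instead: O(n)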

Rense Bakker

VDOM was invented to make DOM updates more efficient, and it did. How often you update the VDOM and how long the VDOM update takes is still up to the developer. If you loop over a million table rows on every render, obviously it's going to cause problems.

Eckehard

You are totally right. But that means you should avoid looping unnecessarily over millions of table rows, not just loop faster!

Rense Bakker

Well yes, but sometimes it cannot be avoided, and then it's better to make it as fast as possible.

François-Marie de Jouvencel

With HMR my rebuilds are usually really quick (around 4-5 seconds for the initial build, then under 1 s for most changes to modules not too high up the dependency tree).

Eckehard

Let me give you an example:

The DOM of this page is created completely dynamically with JavaScript; the main content is generated by a custom markdown parser, which is not optimized in any way. Building the DOM takes about 90 ms on the first run, and rebuilding the whole DOM takes about 15 ms on my laptop with Chrome. So we are below 0.1 s either way.

If you do some measurements with Chrome dev tools, the page load takes about 2.1 s (OK, I'm on a train right now), but even on a faster network this will be more than 10 times slower than all the JS actions. So even if I could speed up my code by 50%, it would not make any difference for the user.

François-Marie de Jouvencel

Yeah, I don't think there is any debate about this in this thread. Minuscule optimisations like the one proposed in the post are never gonna yield any measurable improvement, besides making the code harder to read for some people.

Eckehard

I'm not sure everybody is aware of the facts. It's the same with bundlers: the number of files and the order they are called in is often much more important than the total file size. Nevertheless, there is a whole faction of developers squeezing every possible byte out of their library to win the contest of "who has the smallest library". I have spent days decoding this kind of highly optimized code to find some hidden bugs or simply understand what the code does. And what good is having the world's smallest framework, with less than 1 kB of code, if your hero image is 20 times larger...

François-Marie de Jouvencel

Couldn't agree more, and I can't help but partly recognize my younger self in your description.

Long ago, for instance, I tasked myself with writing the shortest possible Promise library that passes the A+ test suite... I think my entry is quite brief indeed, but god, it's worse than regexes in its write-only nature: totally unmaintainable.

Rense Bakker

Rebuild !== rerender

Lev Nahar

I can only agree with some of your points when working on very simple projects on the front-end.

Yeah, when you run very simple code and don't handle a lot of data, an extra loop is not a bottleneck. But when working with millions of data entries on the front-end or the back-end (Node/Bun), it is very important to consider loop optimization.

That is why I still think @mainarthur's post has value.

François-Marie de Jouvencel

Whatever the number of loops, you'll do the exact same operations!

This code:

for (let i = 0; i < 2; ++i) {
  opA();
  opB();
}

is almost exactly the same as:

opA();
opB();
opA();
opB();

which again is the same as (assuming the order of operations doesn't matter):

for (let i = 0; i < 2; ++i) {
  opA();
}

for (let i = 0; i < 2; ++i) {
  opB();
}

The only thing you're saving is a counter. It's peanuts.

Arthur Kh

They're not the same, but you cannot have good application performance with bad JS performance. When you deal with performance issues, the cause could be anywhere, including the antipatterns I've described in my post. It's beneficial to proactively avoid these issues before delving into debugging, freeing up your time to check other performance factors.

Thank you for your insights and the article regarding page performance. I'm planning to craft more comprehensive articles, both general and specific, targeting the performance pain points that affect most developers.

François-Marie de Jouvencel

I was gonna say I couldn't remember a website where the JS seems ludicrously slow, then I remembered EC2 Console and Google Developer Console: they both suck so much.

Ellis

Agreeing with Eckehard. Optimisation and priorities are very, very different in the backend and the frontend. Backend and frontend are two very different beasts.

Eckehard

It is not only a question of priority. The reasons for a delay may be very different.

If an application needs to wait for some data, we can try to use some kind of cache if the result is used more than once. Or we can change the way the data is requested. If you need to request a database schema via SQL before requesting the data, you get a lot of back-and-forth traffic, with each round trip causing an additional delay. If you run the same operation on your database server and ask only for the result, you will get it much faster.

But if an operation needs a million loop iterations to finish, you will probably need a very different strategy. In that case, optimizing your code might help, but it is probably better to find a strategy that does not require so many operations at all.

To know whether an optimization helps at all, it is necessary to know what causes the delay. Without this knowledge, you can waste a lot of time optimizing things that are fast anyway.
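
For what it's worth, a minimal sketch of the caching idea, assuming a plain JSON endpoint (getData is a hypothetical name):

const cache = new Map()

// cache the promise itself, so parallel callers share one request
function getData(url) {
  if (!cache.has(url)) {
    cache.set(url, fetch(url).then(res => res.json()))
  }
  return cache.get(url)
}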

Ellis

In theory. But typically one shouldn't have a million loop iterations in the frontend; that's normally the backend's job. And that's my entire point. If one finds themselves having to optimise that kind of thing on the frontend, then I think they should take a step back and re-evaluate: they've probably got their frontend/backend division wrong.

imho... me thinks ;o)

Eckehard

But things are shifting. There are a lot of SPAs out there that just require some data and have no backend.

Ellis

Aah, a lot of SPAs with a lot of data and million-iteration loops, and no backend. I really know nothing about those; I don't even know an example case (feel free to share a URL), but I'll take your word for it ;o)

Eckehard

See here, here, here, here, here, here, here or here.

This is from the Vue documentation:

Some applications require rich interactivity, deep session depth, and non-trivial stateful logic on the frontend. The best way to build such applications is to use an architecture where Vue not only controls the entire page, but also handles data updates and navigation without having to reload the page. This type of application is typically referred to as a Single-Page Application (SPA).

If you just have a database running on a server, I would not call that a "backend". React or Vue can run completely in the browser, so you just need a web server and some API endpoints to get your data. An application like this would not run much faster if you put the image processing on the server, but it's worth finding out what causes the delay.

Ellis

Thanks a lot, I will read 👍

Martin Ledoux

dev.to/efpage/comment/2b22f

But things are shifting. There are a lot of SPA´s out there that just require some data and have no backend.

Realistically, they're getting the data from some backend somewhere, just not necessarily one maintained by the maker of the SPA. There are a lot of public APIs out there that can be consumed (which is how an SPA can have no backend but still huge data sets), and filters can theoretically be passed in requests to limit or prevent large data sets in the response.

Ediur

Hmm, I like this idea: since the API takes 3 s to load and I can't do anything to optimize it, let's do some extra iterations on the frontend just for fun 😊

Eckehard

Hi Ediur,

Not a bad idea. Call it AI (Artificial Iterations) and it will be the next big thing!

Lev Nahar

Great article with simple tips to implement.
JS code is notorious for loop abuse! Thanks for sharing.

Arthur Kh

Thank you for your feedback, @lnahrf! Are there any JavaScript antipatterns you're familiar with and dislike?

Lev Nahar

Might be controversial, but I really dislike Promise.then: it creates badly formatted code (nesting hell), yet for some reason I see a lot of developers using it instead of writing in a synchronous style with async/await.

Another architectural anti-pattern I notice quite a lot is the fact that developers name their components based on location and not functionality. For example, naming a generic tooltip component "homepageTooltip" (because we use it on the homepage) instead of "customTooltip".

But I am a backend engineer so I don't know anything about any of this 😁

Arthur Kh

I completely agree with the Promise.then point; async/await syntax is fantastic and widely compatible across browsers and Node environments, so it should take priority.

When it comes to naming and organizing components, I'm a fan of atomic design. Its structure is exceptional, clear, and easily shareable within a team.

JorensM

This gave me an idea for a library: one that implements all the common array utils using the Builder pattern, so you can chain the util functions together and they all run in a single pass.
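
A minimal, hypothetical sketch of the idea (Chain and its method names are invented):

// Record the chained ops first, then fuse them into a single pass
class Chain {
  constructor(arr) {
    this.arr = arr
    this.ops = []
  }
  map(fn) {
    this.ops.push(el => ({ keep: true, value: fn(el) }))
    return this
  }
  filter(fn) {
    this.ops.push(el => ({ keep: fn(el), value: el }))
    return this
  }
  run() {
    const result = []
    outer: for (let value of this.arr) {
      for (const op of this.ops) {
        const { keep, value: next } = op(value)
        if (!keep) continue outer
        value = next
      }
      result.push(value)
    }
    return result
  }
}

// new Chain([1, 2, 3, 4]).map(x => x * 2).filter(x => x > 4).run() // [6, 8]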

Anyhow, great article and thanks for the tip!

Schemetastic (Rodrigo)

Hey! Great tips! There are other methods I like too, such as .trim() and .some()

Chalist

Your Telegram channel is in Russian! So I can't understand it, unfortunately.

Arthur Kh

Hey!
I've developed a bot that automatically translates my posts into popular languages so anyone can read them.

It's sometimes hard to choose which language to use when you're trilingual :D

Ben Sinclair

Personally, I steer clear of reduce for all but the simplest cases because it gets unreadable pretty quickly.

Arthur Kh

Using .reduce might lead to unreadable code, particularly when the reducer function grows larger. However, the reducer can be refactored into a more readable form, for example by extracting it into a named function.
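
A sketch of that refactor, reusing the f and test helpers from the post (the reducer's name is just for illustration):

// The intent now reads at the call site
function keepMappedThatPass(result, element) {
  const mapped = f(element)
  if (test(mapped)) result.push(mapped)
  return result
}

arr.reduce(keepMappedThatPass, [])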

Luke

Great tips, great article, thanks!

Akash Kava

Compare reduce with for...of and you will realize that for...of performs faster than reduce. I would give reduce the lowest preference: filter, map, and reduce can all be done in a single for...of loop.
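
For reference, the post's map + filter written as a single for...of pass (a sketch reusing f and test from the post):

// one pass, no callback invocations, no intermediate arrays
const result = []
for (const element of arr) {
  const mapped = f(element)
  if (test(mapped)) result.push(mapped)
}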

Arthur Kh

The choice between array methods like map, filter, and reduce and explicit loops often comes down to code style. Both approaches achieve similar results, but the array methods read more clearly: map transforms each element, filter removes specific elements, and reduce accumulates a value. A loop like for can perform any task, so the reader has to study the loop body to understand the logic.

Ghaleb

I don’t believe this to be accurate.

For starters, just because you're looping once doesn't mean the total number of operations changed. In your reduce function, you have two operations per iteration. In the chained methods there are two loops with one operation per iteration each.

In both cases the time complexity is O(2N), which is O(N). That means their performance will be the same, except for the minor overhead of having two array methods instead of one.

Even if we assume you managed to reduce the number of operations, you can't reduce it by a full order of magnitude. At best (if at all possible) you can make your optimization O(N) while the chained version is O(xN), and for any constant x the difference will be quite minimal and only observable in extreme cases.

This is just a theoretical analysis of the proposal here. I haven't done any benchmarks, but my educated guess is that they will be in line with this logic.

François-Marie de Jouvencel

I made a very similar comment; it pleases me to see it here already :)

I'm baffled by the number of people who seem to think that if they cram more operations into one loop they are somehow gonna reduce overall complexity in any meaningful way... But no: it'll still be O(n) where n is the length of the array.

Many people don't understand what reduce does, so if you're in a team it can make more sense to use a map and a filter, which are extremely straightforward to understand and not a meaningful performance penalty.

I love obscure one-liners as much as any TypeScript geek with a love for functional programming, but I know I have to moderate myself when working within a team.

Shit, is big O complexity still studied in computer science?

Calvin Lee Fernandes

The code you start off with is readable, but as you optimize it, it becomes less readable, since you're fusing many operations into a single loop. Consider using a lazy collection or streaming library that applies these loop-fusion optimizations for you, so you don't have to worry about intermediate collections being generated and can keep your focus on more readable, single-purpose code. Libraries like lazy.js and Effect.ts help you do this.

François-Marie de Jouvencel

While doing more operations in one loop is gonna save you incrementing a counter multiple times, you'll still have O(n) complexity where n is the size of your array, so you will most likely not save any measurable amount of resources at all.

Sometimes it is clearer to write map then filter just for readability, because not everyone understands what reduce does, unfortunately.

Code that cannot be read fluently by the whole team is in my experience way more costly than a few sub-optimal parts in an algorithm that takes nanoseconds to complete.

You should study the subject of algorithmic complexity if you don't know much about this concept.
