
Write faster JavaScript

Yaser Adel Mehraban on May 20, 2019

Most of the time, we write code that has been copy-pasted from all over the internet. StackOverflow is the main source these days for finding solutio...
Eugene Karataev

I agree that performance matters. But premature optimization is the root of all evil. In most cases code readability is more important than optimization tricks. Do optimizations when you really need them.

Speaking of your first example, I think the most performant and at the same time readable way to solve the problem would be the good old for loop:

const result = [];
// Iterate in reverse as a micro-optimisation; note the result order is
// reversed compared to the filter().map() version.
for (let i = originalList.length - 1; i >= 0; i--) {
  let item = originalList[i];
  if (item.age < 22) result.push(item.name);
}

Readable and performant.

It is much faster to add an item to a Set than to an array using push() or unshift().

Pushing an item to an array is fast; its complexity is O(1). Prepending an element with unshift is indeed slow (O(n)), because it requires reindexing every existing element in the array.
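If you want to see the difference yourself, a rough timing sketch like this (the size is arbitrary and the numbers will vary per engine and machine) makes the gap obvious:

const N = 50000;

console.time('push');
const appended = [];
for (let i = 0; i < N; i++) appended.push(i); // appends at the end, O(1) amortised
console.timeEnd('push');

console.time('unshift');
const prepended = [];
for (let i = 0; i < N; i++) prepended.unshift(i); // reindexes every existing element, O(n)
console.timeEnd('unshift');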

You cannot use indexOf() or includes() to find the value NaN, while a Set is able to store this value.

let arr = [1, NaN];
console.log(arr.includes(NaN)); // true — includes() uses SameValueZero, so it does find NaN
console.log(arr.indexOf(NaN)); // -1 — indexOf() uses strict equality, so it does not
Yaser Adel Mehraban

Thanks for taking the time and pointing these out.
100% agree with your point on readability. But your example is another indicator that there are many ways to solve a problem, and we should choose the one that helps performance.

Will update the includes part 👌

Eugene Karataev

You say that it's good for performance to reuse functions (the React classes and nested functions examples). But at the beginning of the article, when optimising the chained functions, you keep using arrow functions in the forEach and reduce methods.

My example with a for loop is faster than the forEach or reduce examples.

I just don't see consistency between your examples. Also, if the focus of your article is performance, then please show the most performant way (without sacrificing readability) instead of a half-performant one.

worc

i'd be cautious trying to optimize when the benefit is as small as it is with removing the array chain. i tried the use-case and the .filter().map() version runs in under 10ms while the .forEach() version runs in about 5ms. it's pretty hard to justify that kind of optimization if the user won't notice it, and it's costing you 30000ms or 60000ms every time a developer has to wonder why you did something non-standard in the code base.
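for reference, a rough sketch of the kind of comparison i ran (the list shape and the 200k size are my own stand-ins, not the article's exact code):

const originalList = Array.from({ length: 200000 }, (_, i) => ({ name: `user${i}`, age: i % 80 }));

console.time('filter + map');
const namesChained = originalList.filter(p => p.age < 22).map(p => p.name);
console.timeEnd('filter + map');

console.time('forEach');
const namesSinglePass = [];
originalList.forEach(p => { if (p.age < 22) namesSinglePass.push(p.name); });
console.timeEnd('forEach');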

Yaser Adel Mehraban

This example is not a real world example; in a business application there are many of these scenarios, and they add up. If you can save even half a second on an interaction, it's worth it. And lastly, if you become aware of other ways to solve a problem faster, why not?

worc

the why not is easy, cpu time is cheap, developer time isn't. if the optimizations are bad for the readability of the source code and imperceptible to end users, you're taking on bad tech debt. the reverse of that is true though too, don't get me wrong. "good" tech debt would mean taking on slightly more obscure code so that you can bring noticeably faster response times directly to the user.

for the real world use, yeah, you definitely can find a lot of inefficiencies that add up to a poorly performing app, but you have a pretty generous time budget before users are going to disengage with the application. if you're doing something like the example (simple sorting, filtering, restructuring on large datasets), you generally have anywhere from 1,000ms to 10,000ms before a user is going to think there's a serious problem here.

in other words you'd have to string 200-plus bad chains together before you finally reached a point where users are noticing and reacting negatively.

there's also the case we haven't even considered where usually when someone is filtering then mapping it's because filtering is fast and cheap, while a much heavier lift is in the map. in those kinds of scenarios your advice would actually end up costing more time—both on the wall and on the cpu.

Yaser Adel Mehraban

I think we're discussing the same thing from two different angles, but let's just say that if developers take a bit of time upskilling and finding new ways to solve problems more efficiently, you'll have a performant app and save time finding where to optimise in a large code base.

Back to your point, a developer spending a bit of time reading will help much more than simply coding and waiting for a bug or a ticket to be raised later about performance.

Lastly, you're just taking one of the points and generalising it; perf tuning is a broad area, and this is one of thousands of techniques.

worc

yeah i guess i fall pretty heavily on the side of legibility before everything else. especially in a high-level language like javascript. most of the time, in most cases, you're just not going to need to exploit the weirdness of the language itself for performance tuning. usually what happens is a hot spot is identified some time down the line when something about the app scales.

and in those cases i would rather a past developer had put their time and energy into making sure their intent is clear before getting clever.

Kuba

I'd include some benchmarks, especially for the difference in perf when using Set and Array methods.
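For example, something along these lines for lookups (the sizes and the repeated-lookup loop are just placeholders to make the difference visible):

const values = Array.from({ length: 100000 }, (_, i) => i);
const asSet = new Set(values);

console.time('Array.includes');
for (let i = 0; i < 1000; i++) values.includes(99999); // linear scan on every call
console.timeEnd('Array.includes');

console.time('Set.has');
for (let i = 0; i < 1000; i++) asSet.has(99999); // hash lookup on every call
console.timeEnd('Set.has');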

Other than that, great article.

Yaser Adel Mehraban

Not a bad idea, will do it

jmc

I like that these tips tend more toward use of the right data structures and algorithms (using the terms loosely -- your use of reducer functions, closures, and sets) instead of the usual "replace each with for i" advice.

Often folks who reach for the latter have missed something more fundamental (using your 200k array example -- it could be something like pagination, or caching intermediate values).

FWIW, the places I've had to optimize the most frequently were db queries. On the front end, mostly rendering of large DOM trees and memory usage. ¯\_(ツ)_/¯

(And I agree with the folks cautioning against premature optimization.)

Javier Aguirre

Thank you for the article, insightful. :-)

I didn't quite get your explanation about arrow functions, although your point is: don't use them inside a class because they have no scope of their own and will be recreated every time, right?

Thank you!

Yaser Adel Mehraban

Apart from the recreation of the function itself, it has other drawbacks too.

Let me give you two examples. First, let's say you have a class with an arrow function, and when testing you want to mock it:

class A {
  static color = "red";
  counter = 0;

  // class field: becomes an own property of each instance, not a prototype method
  handleClick = () => {
    this.counter++;
  }

  // regular method: defined once on A.prototype
  handleLongClick() {
    this.counter++;
  }
}

Usually the easiest and proper way to do so is via the prototype, as all changes to the prototype object are seen by every instance through prototype chaining.

But in this instance:

A.prototype.handleLongClick is defined.

A.prototype.handleClick is not a function (the arrow function ends up as an own property of each instance, not on the prototype), so there is nothing on the prototype to mock.
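A quick sketch of what that mocking attempt looks like (plain assignments here, but the same happens with a mocking library's spies):

// Overriding the prototype method works: every instance sees the change.
A.prototype.handleLongClick = function () {
  console.log("mocked handleLongClick");
};
new A().handleLongClick(); // "mocked handleLongClick"

// There is nothing to override for handleClick; even if you add it to the
// prototype, each instance's own arrow-function field shadows it.
A.prototype.handleClick = function () {
  console.log("mocked handleClick");
};
new A().handleClick(); // still runs the original arrow function, not the mock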

The same happens with inheritance:

class B extends A {
  handleClick = () => {
    super.handleClick(); // looks up A.prototype.handleClick, which doesn't exist

    console.log("B.handleClick");
  }

  handleLongClick() {
    super.handleLongClick(); // A.prototype.handleLongClick exists, so this works

    console.log("B.handleLongClick");
  }
}

Then:

new B().handleClick();
// Uncaught TypeError: (intermediate value).handleClick is not a function

new B().handleLongClick();
// B.handleLongClick (the super call succeeds and increments counter)
ImTheDeveloper

I have a side project in node.js which has organically grown over time to become quite large. It essentially works on a middleware pattern, checking inbound messages and reacting to them. I have many middlewares that a message can pass through, and I'm at the point now where I need to start optimising the order, pinpointing slow code, and improving overall efficiency.

At the moment, since I'm quite a noob at performance measurement, are there any recommendations on how I can profile the speed and find the areas of execution that bog down my performance? Right now I'm reduced to using console.log with timestamps, which obviously only gets you so far.

Yaser Adel Mehraban

I highly recommend reading this article; also check out the Performance API.
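As a starting point, here's a minimal sketch of timing one middleware with Node's perf_hooks (the middleware and message names are made up):

const { performance, PerformanceObserver } = require('perf_hooks');

// Log every measure as it completes
const obs = new PerformanceObserver((items) => {
  for (const entry of items.getEntries()) {
    console.log(`${entry.name}: ${entry.duration.toFixed(2)}ms`);
  }
});
obs.observe({ entryTypes: ['measure'] });

// Wrap a middleware call so its duration is recorded
async function timed(name, middleware, message) {
  performance.mark(`${name}-start`);
  await middleware(message);
  performance.mark(`${name}-end`);
  performance.measure(name, `${name}-start`, `${name}-end`);
}

// e.g. await timed('spam-filter', spamFilterMiddleware, incomingMessage);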

Duncan Murray

Thanks for the article - I'm learning JS at the moment and this was interesting.

Cheers

DamirTomic

You didn't say how much faster it is. Is it 1%, 10%, 30%?