Eckehard
Measuring performance

Javascript is a powerful but sometimes strange programming language, especially when it comes to performance. Some effects are the result of implementation details, and performance may vary between browsers and even between different versions of the same browser. So the only way to find out is to try. But even that can be a strange journey, as you will see.

Generally, Javascript performance is very high, so we should not care too much about it. There are many things going on while a website is rendered that take much longer than Javascript execution, so you are unlikely to make a slow page fast just by optimizing Javascript. But if you are dealing with large arrays, text parsers or other kinds of operations that involve manipulating thousands or even millions of elements, performance may become critical.

There are several ways to measure time in Javascript (see here); usually I'm using the performance API. It is just not as precise as it could be: its resolution is intentionally coarsened to roughly 100 µs (0.1 ms) in some browsers to make fingerprinting harder (see here for details), so you need to repeat the code multiple times to get meaningful results. But there is one effect that is important to know and that I cannot really explain.
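Because of this limited resolution, the common workaround is to repeat the operation many times and divide the total by the repetition count. A minimal sketch (the loop body and the count are arbitrary placeholders, not from the article):

```javascript
// Repeat the work often enough that the total time is well
// above the timer resolution, then divide for a per-iteration cost.
const N = 100000
let sum = 0
const start = performance.now()
for (let i = 0; i < N; i++) sum += i
const total = performance.now() - start
console.log(`~${((total / N) * 1e6).toFixed(1)} ns per iteration`)
```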

Can somebody explain?

See the example below. This is a simple loop that changes a static array of 10,000 elements. To make it measurable, the operation is repeated 1000 times. It finishes in about 100 ms, which means the whole array of 10,000 elements was changed in 0.1 ms - amazing. Each single assignment took only 0.01 µs, or 10 ns!

    const count = 1000 // Number of repetitions

    function run() {
      const ar1 = Array(10000).fill(" ")
      for (let k = 0; k < count; k++) {
        for (let i = 0; i < ar1.length; i++) {
          ar1[i] = "+"
        }
      }
    }

    const t0 = performance.now()
    run()
    const t1 = performance.now()
    run()
    const t2 = performance.now()
    run()
    const t3 = performance.now()
    run()
    const t4 = performance.now()

    console.log(`Time 1 = ${(t1 - t0).toFixed()}ms`)
    console.log(`Time 2 = ${(t2 - t1).toFixed()}ms`)
    console.log(`Time 3 = ${(t3 - t2).toFixed()}ms`)
    console.log(`Time 4 = ${(t4 - t3).toFixed()}ms`)

Console ------>

Time 1 = 167ms  <--
Time 2 = 124ms
Time 3 = 117ms
Time 4 = 124ms


But here is the strange thing: the first loop took about 40% longer than the next three. I tried this several times, but the first loop always took longer than the rest.

Things get even stranger if I reload the page:

Time 1 = 106ms
Time 2 = 84ms
Time 3 = 89ms
Time 4 = 80ms

The first loop is still slower, but now all loops run about 30% faster. After the second reload, the results get fairly stable, but still vary by about 5%.

I repeated this multiple times with different code, but the first loop was always significantly slower than the following ones.

A second test

Here is an extension of the example above. I added a second routine. It is the same code, except that instead of a classic for ( ; ; ) loop I used for ( ..in.. ), which is known to be slower:

    function run2() {
      const ar2 = Array(10000).fill(" ")
      for (let k = 0; k < count; k++) {
        for (const i in ar2) {
          ar2[i] = "+"
        }
      }
    }

    const t0 = performance.now()
    run()
    const t1 = performance.now()
    run()
    const t2 = performance.now()
    run2()
    const t3 = performance.now()
    run2()
    const t4 = performance.now()

Console ------>

Time 1 = 141ms
Time 2 = 115ms
Time 3 = 1621ms
Time 4 = 1663ms

Amazingly enough, for ( ..in.. ) is about 10 times slower than the conventional loop. But the first routine is still much slower on its first run. Imagine I had only compared the two loops once each: my results would have been significantly wrong.
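Part of this gap has a simple mechanical reason (not from the article, but easy to verify): for..in iterates over enumerable property keys and yields them as strings, so every array access goes through a string index, on top of extra prototype-chain checks. A quick check:

```javascript
const ar = Array(3).fill(" ")
const keyTypes = []
for (const i in ar) {
  // for..in yields the array indices "0", "1", "2" as strings, not numbers
  keyTypes.push(typeof i)
}
console.log(keyTypes) // ["string", "string", "string"]
```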

Let's test in reverse order:

Time 1 = 1737ms
Time 2 = 1533ms
Time 3 = 103ms
Time 4 = 101ms

The differences are - relatively - smaller, but the first loop still takes about 200 ms longer than the second.

I cannot really explain why the first loop is always so much slower. But it is important to know in any case, as otherwise your comparisons could be misleading. Currently, I'm repeating all my tests multiple times to get more reliable results.
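One way to reduce the warm-up effect, assuming the JIT explanation discussed in the comments holds, is to call the routine a few times before timing it and then take the median of several timed runs. A sketch of such a helper (the function and parameter names are my own, not a standard API):

```javascript
// Run fn a few times untimed so the engine can optimize it first,
// then time several runs and return the median (robust against outliers).
function bench(fn, warmup = 3, runs = 7) {
  for (let i = 0; i < warmup; i++) fn()
  const times = []
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now()
    fn()
    times.push(performance.now() - t0)
  }
  times.sort((a, b) => a - b)
  return times[Math.floor(times.length / 2)]
}
```

`bench(run)` should then report times close to the "stable" values above instead of the inflated first measurement.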

If you have any different experience or know a more reliable way to benchmark code, please let me know.

Top comments (6)

mahdi

I tried it and it was weirdly amazing for me too.
What I think (probably wrong):
When it enters the loop it takes a while because it checks all the arguments or numbers or whatever, enters the loop and doesn't exit it (like it's sitting there waiting for the next loop), and when it reaches the condition, it leaves.
(It's just a theory)

Eckehard

I'm not sure we are getting much help here; maybe we can find more in this post:

The basic idea is to avoid retranslation where possible. To start, a profiler simply runs the code through an interpreter. During execution, the profiler keeps track of warm code segments, which run a few times, and hot code segments, which run many, many times.

So, this just tells me it is not easy to avoid being fooled by the optimization. It just does not tell me what to do to get reliable results...

Alex Lohr

The explanation is simple: JS engines like V8 take some time to optimize the code that is run multiple times. The optimization happens within the first evaluation, so that's where the ~40% come from.

Eckehard

Sounds reasonable to optimize code only if it runs. But then I would expect a different behavior. We have two routines, a and b. I would expect:

a -> slow
a
b -> slow
b

But what we see is:

a -> slow
a
b
b

It's unlikely that b is optimized on the first run of a.

Alex Lohr

Sorry, my wording was ambiguous. It's on the first run of any repeatable code within the loaded module, IIRC.

Eckehard

Ok, things ARE complicated behind the scenes. Thank you for helping me find some explanations. The real question remains: how can I avoid such effects to make better measurements?

I tried to add a simple delay like this before the execution starts:

    // Busy-wait for ms milliseconds (blocks the main thread completely)
    function wait(ms) {
      const start = Date.now()
      while (Date.now() < start + ms) {
        // spin
      }
    }

but the results are even more confusing (this is always the same routine being called, with a 1000 ms delay; longer delays give the same picture):

1st. run -->
Time 1 = 455ms
Time 2 = 469ms
Time 3 = 588ms
Time 4 = 687ms

2nd. run -->
Time 1 = 435ms
Time 2 = 452ms
Time 3 = 578ms
Time 4 = 703ms

3rd. run -->
Time 1 = 544ms
Time 2 = 478ms
Time 3 = 554ms
Time 4 = 735ms


How to measure performance?

The precision is far from acceptable anyway. But - how do you reliably measure execution performance?!