Is the script under "Methodology" exactly what was run to produce these results? If so, you may want to re-run with the following modifications and see if there are any differences in the results:
Use only a single approach in each test run. Running all 4 approaches in each iteration of the loop could result in different runtime optimizations and memory access patterns vs. running each approach separately.
For each approach and iteration, run the test several times. Record the first, minimum, maximum, and average execution times. I'm guessing you'd want to compare either the best or the average performance.
Performance testing is hard and good methodology really is key to drawing any reliable conclusions.
I definitely agree though, "these types of optimizations generally do not have significant consequences."
Phew! I'm glad to tell you that, yes, the results are still consistent with my findings. Of course, there is some variance here and there, but the trend still holds true.
In case you're wondering how I modified the script: I isolated each test case in its own .js file, ran each respective script, and plotted the results. The charts were still similar to the ones in the article.
Here's the script file for one of the test cases. There are three others (one for each of the remaining test cases), but the changes are very minor. I simply had to change the contents of the for loop to the appropriate code snippet.
```javascript
// UncachedRegular.js
const { performance } = require('perf_hooks');

// Test parameters
const ORDERS_OF_MAGNITUDE = 8;

// Stored functions
function plus1Func(x) { return x + 1; }
const plus1Arrow = x => x + 1;

for (let i = 1; i < 10 ** ORDERS_OF_MAGNITUDE; i *= 10) {
  // Test array
  const test = new Array(i).fill(0, 0, i);

  // Uncached (regular function)
  const a0 = performance.now();
  test.map(function (x) { return x + 1 });
  const a1 = performance.now();
  const uncachedRegular = a1 - a0;

  // Log results here
  console.log(uncachedRegular);
}
```