 
          
In this article, we explore the foundational building blocks of JavaScript by crafting several key components from the ground up.
              
        
    
 
 
    
Thanks for sharing. I have been looking into the internals for quite some time to get a better understanding.
Did you do a performance comparison against the "native" calls?
Example of testing `myMap` performance:

Results:
What might be the potential reasons?
My custom `myMap` function is a simplistic and direct implementation. It doesn't include many of the protective and flexible features of the built-in `map`, such as checking whether the callback is a function or handling `thisArg`, which lets you specify the value of `this` inside the callback. The absence of these features means less work is done under the hood, contributing to faster execution for straightforward tasks.

The behavior of built-in functions is often more complex due to spec compliance, which can include checks and operations that a custom method does not perform. For instance, the built-in `map` must accommodate a wide range of scenarios and edge cases defined in the ECMAScript specification, like handling sparse arrays or dealing with objects that have modified prototypes. My custom `myMap` is straightforward and lacks these comprehensive checks, which can make it run faster for simple cases. And for such cases, JavaScript engines like V8 (Chrome, Node.js), SpiderMonkey (Firefox), and JavaScriptCore (Safari) may apply various optimizations at runtime.

The performance is awesome. Just the results for the large array are suspicious, as it is so much faster. Maybe double-check whether `Array.from({length: 10000000})` is handled properly. I found that there are a lot of strange effects resulting from the optimizations done by the JS engine, so it is hard to find the real reason. Is `const result = new Array()` really faster than `result.push()`? Is `for(;;)` faster than `this.forEach` (I suppose it is...)? The only way to find out is to try, and sometimes you will find no logical explanation for the results.
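As a rough illustration of the difference being discussed, a simplistic `myMap` might look like this (a hypothetical sketch, not the article's exact code); note in particular how it treats sparse arrays differently from the built-in:

```javascript
// A minimal myMap sketch: no callback type check, no thisArg support,
// and it visits holes in sparse arrays (unlike the built-in map).
function myMap(arr, callback) {
  const result = new Array(arr.length); // pre-allocate, size is known
  for (let i = 0; i < arr.length; i++) {
    result[i] = callback(arr[i], i, arr);
  }
  return result;
}

const sparse = [1, , 3]; // hole at index 1

// The built-in map skips holes; the sketch visits them.
console.log(sparse.map(x => x * 2));    // [ 2, <1 empty item>, 6 ] in Node
console.log(myMap(sparse, x => x * 2)); // [ 2, NaN, 6 ] (undefined * 2)
```

Skipping those spec-mandated checks is exactly the "less work under the hood" mentioned above, and also the reason the two behave differently on edge cases.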
But anyway, it is good to know your code is comparable, so it will be a good basis for your own implementations with similar features.
If the number of elements is predefined and static, then allocating memory via `new Array(...)` is certainly more efficient than dynamically re-allocating memory (done under the hood for `.push`), because during the next allocation all elements are copied into a new memory block (like C++ vectors, if I remember correctly). For `.filter` I use the method with dynamic memory allocation, but to be honest I don't know how justified that is; I just decided to show different methods :)

And yes, as I said, all my implementations are simplified compared to native JS methods, which handle more edge cases. If you don't need to handle them, then yes, simplified implementations will speed up your project!
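The two allocation strategies can be compared directly. Here is a micro-benchmark sketch (the function names and the array size are mine; absolute timings vary heavily between engines and runs, so treat this as an experiment, not a definitive result):

```javascript
// Compare pre-allocated writes vs dynamic growth via push.
const N = 1_000_000;

function preallocated() {
  const result = new Array(N); // size known up front, one allocation
  for (let i = 0; i < N; i++) result[i] = i * 2;
  return result;
}

function pushed() {
  const result = []; // grows as needed, may re-allocate and copy
  for (let i = 0; i < N; i++) result.push(i * 2);
  return result;
}

for (const fn of [preallocated, pushed]) {
  const t0 = performance.now();
  const out = fn();
  console.log(`${fn.name}: ${(performance.now() - t0).toFixed(2)} ms, length ${out.length}`);
}
```

As noted in the thread, engine optimizations can make the outcome surprising, so it is worth running this on the engines you actually target.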
I'll try and report later, but more than once I've found strange results on JS performance. Take this post as an example:
On some properties (not all), the differences are amazing:
No chance to do any estimation based on "logic"...
I tried this,
and these are the results (which vary greatly with each run):
```
Built-in map - Small Array: 0.18896484375 ms
Custom myMap - Small Array: 0.5419921875 ms
Custom myMap2 - Small Array: 0.6591796875 ms
Built-in map - Medium Array: 14.93798828125 ms
Custom myMap - Medium Array: 9.138916015625 ms
Custom myMap2 - Medium Array: 11.64697265625 ms
Built-in map - Large Array: 2860.112060546875 ms
Custom myMap - Large Array: 235.31494140625 ms
Custom myMap2 - Large Array: 1246.418212890625 ms
```
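A harness matching the labels above might look like this (a sketch: the array sizes and the `myMap`/`myMap2` variants are my assumptions, since the commenter's actual test code is not shown):

```javascript
// Hypothetical benchmark harness matching the result labels above.
// myMap: for-loop into a pre-allocated result; myMap2: push-based.
function myMap(arr, cb) {
  const out = new Array(arr.length);
  for (let i = 0; i < arr.length; i++) out[i] = cb(arr[i], i, arr);
  return out;
}

function myMap2(arr, cb) {
  const out = [];
  for (let i = 0; i < arr.length; i++) out.push(cb(arr[i], i, arr));
  return out;
}

function bench(label, fn) {
  const t0 = performance.now();
  fn();
  console.log(`${label}: ${performance.now() - t0} ms`);
}

// Assumed sizes; "Large" matches the Array.from({length: 10000000})
// mentioned earlier in the thread.
const sizes = { Small: 1_000, Medium: 100_000, Large: 10_000_000 };
for (const [name, n] of Object.entries(sizes)) {
  const arr = Array.from({ length: n }, (_, i) => i);
  bench(`Built-in map - ${name} Array`, () => arr.map(x => x * 2));
  bench(`Custom myMap - ${name} Array`, () => myMap(arr, x => x * 2));
  bench(`Custom myMap2 - ${name} Array`, () => myMap2(arr, x => x * 2));
}
```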
So, as expected, the differences are clearly visible and dominant for very large arrays, but negligible for usual sizes.
You actually found a very interesting pitfall about creating a temporary object when accessing string methods. If you collect some of these pitfalls and publish them, I'd love to read them!
It would be even more interesting to test `.filter` than `.map`, because this method can return a smaller array. It would be interesting to find the threshold of the returned number of elements at which it becomes more advantageous to use dynamic array allocation, if such a threshold exists.

Writing a collection of JS pitfalls would probably fill a whole book. And even if you know all these tips, that does not save you from stumbling into one. We will still need to do some performance testing.
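The two `.filter` allocation strategies under discussion can be sketched like this (hypothetical helpers, not the article's implementations): one grows the result dynamically with `push`, the other pre-allocates at full length and truncates afterwards:

```javascript
// filter via dynamic growth: the result only ever holds kept elements,
// but may re-allocate as it grows.
function filterPush(arr, predicate) {
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    if (predicate(arr[i], i, arr)) result.push(arr[i]);
  }
  return result;
}

// filter via pre-allocation: allocate arr.length slots up front,
// then shrink to the number of kept elements at the end.
function filterPrealloc(arr, predicate) {
  const result = new Array(arr.length);
  let n = 0;
  for (let i = 0; i < arr.length; i++) {
    if (predicate(arr[i], i, arr)) result[n++] = arr[i];
  }
  result.length = n; // truncate unused slots
  return result;
}

console.log(filterPush([1, 2, 3, 4, 5], x => x % 2 === 0));     // [ 2, 4 ]
console.log(filterPrealloc([1, 2, 3, 4, 5], x => x % 2 === 0)); // [ 2, 4 ]
```

Benchmarking these two against each other at different "keep ratios" would be one way to look for the threshold mentioned above.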
If you are interested in general JS performance, maybe this is a comprehensive source.
But as Panagiotis Tsalaportas said:
Some of these aren't quite recreating JS functionality - you're missing the optional `thisArg` on `map` and `filter`.

That's right; showing all aspects would actually be cumbersome, so I decided to show the most interesting ones. The implementation of `thisArg` in this case is extremely trivial.

Things like the filter implementation have bugs in the code, like misnamed parameters etc.
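For what it's worth, adding `thisArg` support really is a small change. A hypothetical `myFilter` sketch (my own helper, not the article's code), which also adds the callback type check mentioned earlier in the thread:

```javascript
// A filter sketch with thisArg support: the callback is invoked
// with `this` bound to thisArg via Function.prototype.call.
function myFilter(arr, callback, thisArg) {
  if (typeof callback !== 'function') {
    throw new TypeError(`${callback} is not a function`);
  }
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    if (callback.call(thisArg, arr[i], i, arr)) result.push(arr[i]);
  }
  return result;
}

const limits = { max: 3 };
const kept = myFilter([1, 2, 3, 4, 5], function (x) {
  return x <= this.max; // `this` is the limits object
}, limits);
console.log(kept); // [ 1, 2, 3 ]
```

Note that `thisArg` only matters for `function` expressions; arrow functions ignore the bound `this`.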
That's definitely not how `finally` works. There's no exception throwing in a `finally`.

Hey!
First, I would call it a "typo", not a "bug". But thank you, I have corrected it!
Second, let's break down the functionality of `.finally` and how it's being used in the `MyPromise` class.

In traditional synchronous code, a `finally` block indeed runs after a `try/catch` block regardless of the outcome (whether the `try` block executed successfully or an exception was caught in the `catch` block), and it is used for cleanup activities. It's true that in such a context you wouldn't typically throw exceptions from a `finally` block, because the purpose of `finally` is not to handle errors but to ensure some code runs no matter what happened in the `try/catch` blocks.

However, when it comes to promises, `.finally` has a somewhat different behavior:

- If the original promise is fulfilled, the promise returned by `.finally` is also fulfilled with the original value.
- If the original promise is rejected, the promise returned by `.finally` is also rejected with the original reason.
- If the `.finally` callback runs successfully, it does not alter the fulfillment or rejection of the promise chain (unlike what you suggest).
- But if an exception is thrown in the `.finally` callback itself, or if it returns a rejected promise, this will become the new rejection reason for the chain.

Great article! I especially like the details of Description and Key Aspects... these are essentially the requirements process on a small scale.
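The native `.finally` semantics discussed in this thread can be verified with a small demo:

```javascript
// .finally passes through the original settled value; its own
// return value is ignored...
Promise.resolve(42)
  .finally(() => console.log('cleanup'))
  .then(v => console.log('fulfilled with', v)); // fulfilled with 42

// ...unless the finally callback itself throws (or returns a
// rejected promise): that becomes the new rejection reason.
Promise.resolve(42)
  .finally(() => { throw new Error('boom'); })
  .catch(e => console.log('rejected with', e.message)); // rejected with boom
```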