Oh, and so might implicit returns…
Background
We all know and love arrow functions: the clean looks, the convenience. But using them does come at a cost.
First, if you are not familiar with arrow functions, here is a simple comparison against normal functions.
// Traditional function declaration
function functionName (...parameters) {
  // Do some stuff…
  return result
}

// The same function as an arrow function
const functionName = (...parameters) => {
  // Do some stuff…
  return result
}
Okay, I know what arrow functions are. How are they bad?
JavaScript is compiled at runtime (just in time), unlike other languages which require compilation before use. This means we are relying on the engine to interpret and optimize our code efficiently, and it also means that different implementations can be processed differently under the hood, despite giving the same outcome.
Comparisons
To test, I wrapped calls to each of the functions below in a console.time/console.timeEnd sandwich and passed each one the same arguments (a sketch of the harness follows the functions).
// Traditional function
function foo(bar) {
  return bar
}

// Arrow function
const foo = bar => {
  return bar
}

// Arrow function with implicit return (remember this from the beginning?)
const foo = bar => bar
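The exact test script was not published, but a minimal harness along these lines matches the setup described above. The function names, the input value, and measuring a single call per label are assumptions, not the original code.

// Sketch of the console.time/console.timeEnd harness described above.
// Names, input value, and run counts are assumptions, not the original script.
function traditionalFoo (bar) {
  return bar
}
const arrowFoo = bar => {
  return bar
}
const implicitFoo = bar => bar

const input = 42

console.time('traditional function')
traditionalFoo(input)
console.timeEnd('traditional function')

console.time('arrow function')
arrowFoo(input)
console.timeEnd('arrow function')

console.time('arrow function with implicit return')
implicitFoo(input)
console.timeEnd('arrow function with implicit return')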
Results
Traditional function: 0.0746ms
Arrow function: 0.0954ms
Arrow function with implicit return: 0.105ms
Conclusion
Arrow functions, and especially arrow functions using implicit returns, do take more time to run compared to traditional functions. Implicit returns suffer from the same issue that arrow functions do, in that they take more compilation time. In larger scripts this could feasibly lead to noticeable performance costs, especially if contained within loops.
Does this mean that we should all stop using them then?
Well, I'm not going to, and I'm not going to recommend that you stop either. I would hope that everyone is minifying and transpiling their code for production. Most build toolchains will compile arrow functions down to traditional functions for compatibility reasons, negating the performance loss in real-world use. If you are experiencing slowdowns in an unminified development environment, then you could consider this as a possible issue. In reality, though, a poorly optimized loop will incur far more performance cost than a few arrow functions.
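As an illustration, this is roughly what a transpiler such as Babel emits when targeting older engines. The exact output varies by tool and configuration, so treat this as a sketch rather than verbatim compiler output.

// What you write:
//   const foo = bar => bar
//
// Roughly what a transpiler targeting older engines produces
// (exact output varies by tool and configuration)
var foo = function (bar) {
  return bar
}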
All tests were run 5 times and averaged using Google's V8 engine running on Node.js.
Firefox's SpiderMonkey and Microsoft's ChakraCore are expected to show similar results, although they were not tested.
Top comments (19)
Actually jsPerf shows that arrow functions with implicit return are the fastest.
But the difference in performance is so small, that it shouldn't be taken into account when deciding what type of function to use.
Agreed. With a difference of 1.5M operations per second, and assuming a screen refresh time of 16.66 (that's repeating, of course) milliseconds, you would have to swap roughly 250K calls from the faster option to the slower one before it made a noticeable difference, even to a professional Starcraft player.
👆 This comment.
You sir just destroyed the entire article... take a bow.
This might be more interesting to compare: jsperf.com/fufufufu/3
In my test run the results are almost the same for all three tests (ops/sec varies +-1%).
That is the point ;)
This article is pretty fear mongering and statistically incorrect 🤓.
Along the same lines:
The bottleneck of the code is not going to be in the syntax one uses.
Agree!
Performance posts help the community the most when they demonstrate good benchmarking rigor. Benchmark.js and jsPerf are two benchmarking tools with the reputation for proper testing. Even still, best not to jump to conclusions as evergreen browsers continue to evolve their JS runtimes and optimize for the code seen in the wild. So I agree with your take-away!
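For reference, a minimal Benchmark.js suite for this comparison might look like the sketch below; it illustrates the kind of rigor being described and is not anyone's actual test code.

// A minimal Benchmark.js suite comparing the three forms
// (an illustrative sketch, not the article's actual test code)
const Benchmark = require('benchmark')

function traditionalFoo (bar) { return bar }
const arrowFoo = bar => { return bar }
const implicitFoo = bar => bar

new Benchmark.Suite()
  .add('traditional function', () => traditionalFoo(42))
  .add('arrow function', () => arrowFoo(42))
  .add('arrow function with implicit return', () => implicitFoo(42))
  .on('cycle', event => console.log(String(event.target)))
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'))
  })
  .run()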
Man, you almost gave me a heart attack with that title 😅
Sorry, gotta get the clickbait in 😅, need my internet points!
Can I steal this answer, please?
Okay, if you run this console.time test 5 times, 30 times, or 100 times, you won't get consistent results.
I don't care what console.time or jsPerf says. Does your program feel faster or slower now that you use arrows?
I thought not. User-perceived speed is all that matters.
If you have truly performance-critical needs, then JavaScript is not a good choice anyway. C++ or LuaJIT would be a better pick.
Edit: (I was so very drunk when I wrote this)
This is why micro-benchmarking is dangerous.
If we want to talk about real production runtime cost, it depends on what your compile target is:
Yes, arrow functions get optimized when the target is the latest browser, which is probably what the dev is using, but as soon as you support older browsers (consoles, old Apple products, smart TVs, embedded systems, etc.) arrow functions are not necessarily supported, and in those cases they get compiled to standard functions. That's bad for two reasons:
the compiler doesn't know whether the intention is just the sugar syntax or to bind the function's context to the parent scope, so it takes no chances and always binds, creating a triple scope in memory and boilerplate code for each function that exists (see the sketch after this list)
that boilerplate prevents some JIT optimizations done by the browser that speed up JavaScript execution.
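A rough illustration of the boilerplate in question; the exact output varies by transpiler and settings, so treat this as a sketch rather than verbatim compiler output.

// What you write: an arrow function relying on the parent scope's `this`
//   const greet = () => console.log(this.name)
//
// Roughly what a transpiler emits for older engines: the parent `this`
// is captured in an extra variable so the inner function can close over it
// (exact output varies by tool and configuration)
var _this = this

var greet = function () {
  return console.log(_this.name)
}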
Always measure, but measure your actual use case, not something you can code up in one minute on the side of the project.
First of all, hello everyone. I've come to rant a little bit about these results:
I'm not going to say this is totally useless but it is a bit.
Performance has always been somewhat of a concern with JavaScript (notwithstanding Node.js), mainly because the code is interpreted/compiled/run on the client side, which means you don't have control over the environment.
So performance can vary greatly depending on the OS, the browser (and its version), the system specs, what other applications are running, what extensions are installed, etc.
So basically, if you're not planning on running your script server-side, benchmarks either have to encompass an array of usual-suspect environments or be pretty much useless. (How many times have you coded something really cool and efficient in JS, only to have the client tell you it doesn't work in IE 11, or Safari, or Firefox-enterprise-edition, or that the antivirus/proxy/whatnot blocked your code, or that actually it was just their internet connection that was slow...?)
So basically, the fact that everyone seems to be getting different results, even over multiple runs of the same test in the same environment, is exactly what you should expect. It's pretty much impossible to get consistent results, and even if you did configure your environment to obtain them, that would in no way reflect real-life situations and would be completely moot.
Imagine running stress tests on your Debian 9 + Apache 2.4 + 8 GB RAM + Core i5 box and then installing your application on a Windows Server + Nginx + 6 GB RAM + Ryzen 3 machine.
No one in their right mind would give any significance to those test runs because the server was completely different, yet here we are doing the same thing with JS scripts.
So anyway, sorry for the ranting, but it is really important to realize this simple fact when working with frontend JavaScript: you don't know the environment, because everything is done client-side.
Now you're simply making up issues. The variance in platform is exactly the same as for any other "programming language benchmark" on the internet, and that does not matter. Language performance is always going to vary depending on what version of interpreter/compiler is being used, what the underlying platform is (operating system or web browser, etc.), and what the hardware is. Nothing in this made-up mess is unique or limited to JavaScript.
The only problem with the original "benchmark" is that it is simply not a benchmark. Nobody in their right mind would consider "tests" that execute in less than one millisecond to yield accurate results. The author should have realized how dumb it is to do one run each and draw conclusions from that. Usually benchmarks execute a single test maybe a thousand or ten thousand times and take the peak values as well as averages.
A single run simply makes no sense, especially considering that all browsers employ a JIT engine which keeps improving performance over multiple runs; the first run is always going to yield the worst result.
So what is your thousand-run result?
I think you should share the exact code used to test this, so people are able to confirm it.
I believe dev speed matters way more than a few milliseconds. This was a nice experiment to conduct, but I don't think anyone would abandon the ease of use of arrow functions :)
The key differences between arrow functions and traditional functions are really about semantics: support for arguments, this binding, and so on.
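To illustrate those semantic differences, a minimal sketch (names are made up for illustration):

// A traditional function gets its own `this` and `arguments`;
// an arrow function inherits both from the enclosing scope.
const counter = {
  traditionalCallback: function () {
    setTimeout(function () {
      // traditional function: `this` is NOT `counter` here
      console.log('traditional:', this === counter) // false
    }, 0)
  },
  arrowCallback: function () {
    setTimeout(() => {
      // arrow function: `this` is inherited from arrowCallback
      console.log('arrow:', this === counter) // true
    }, 0)
  }
}

counter.traditionalCallback()
counter.arrowCallback()

// Arrow functions also have no `arguments` object of their own
function logArgs () {
  console.log(arguments.length) // 2
}
logArgs('a', 'b')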