Alexandrovich Dmitriy

In search of the best EcmaScript version for the website assembly

As it turned out, choosing the ES version for building a web application, as well as organizing that build itself, can be a very difficult task — especially if you are going to make this choice based solely on evidence. In this article I will try to answer the following questions that arose during my investigation of this topic:

  1. How does compiling code to ES5 affect a site's performance?
  2. Which tool generates the most efficient code — TypeScript Compiler, Babel, or SWC?
  3. Does modern syntax affect how fast browsers parse JavaScript code?
  4. Is it possible to achieve a real reduction in bundle size, taking into account the use of Brotli or GZip, if you compile the code to a higher version of ES?
  5. Is it really necessary to build sites for ES5 in 2023?
  6. And finally, how we implemented the transition to a higher version of ES, and how our metrics changed.

To answer questions 1–3, I even created a full-fledged benchmark, and I decided to test the fourth question on our real project with a large code base.


Is compiling into ES5 bad?

The list of ECMAScript features grows every year, and it really helps developers reduce the amount of code and increase its readability. To be able to use the latest version of ES in the source code, developers just need to configure the build process — set up compilation and add some polyfills.
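With Babel, for example, such a setup can be reduced to a single config file. Here is a minimal sketch — the targets string and core-js version are illustrative choices, not a recommendation:

```javascript
// babel.config.js — a minimal sketch of compilation + polyfill setup
module.exports = {
  presets: [
    ['@babel/preset-env', {
      // which browsers to support; this drives both the syntax
      // downleveling and the set of injected polyfills
      targets: '> 0.25%, not dead',
      // inject core-js polyfills only where the code actually uses them
      useBuiltIns: 'usage',
      corejs: 3,
    }],
  ],
};
```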

Just a quick reminder for those who have forgotten why the build needs this configuration. For example, the function Array.prototype.at appeared only in ES2022, and Chrome versions below 92 do not know about its existence. Therefore, if you use it without ensuring backward compatibility, users of older Chrome versions will not be able to fully use your site.

Let me give you a couple of short examples on backward compatibility. First, you can add polyfills.

// After adding such polyfills
import "core-js/modules/es.array.at.js";
import "core-js/modules/es.array.find.js";

// You can safely use these functions
[1, 2, 3].at(-1);
[1, 2, 3].find(it => it > 2);

And second, you can use a compiler that will convert modern syntax code into code that is supported by older browsers:

// For example, this code
const sum = (a, b) => a + b;

// Using Babel or any other compiler can be converted into this code
var sum = function sum(a, b) {
  return a + b;
};

Well, I've never really liked the need for that backward compatibility. After all, it implies generating additional code, which in turn leads to a larger bundle, higher RAM usage, and possibly degraded application performance. And all this despite the fact that most clients (at least in our case) have a relatively recent browser, which means that for them backward compatibility is potentially harmful.

That's why it became interesting for me to answer the questions I listed at the beginning of the article. I decided to start my research by creating a benchmark. Its purpose: an isolated evaluation of the performance of features in builds compiled to ES5 by different tools (TypeScript, Babel, SWC), as well as in a build without compilation.

The experiment covered only features that require compilation, such as classes or asynchronous functions. I decided not to test features that depend on polyfills, because if a browser already has a native implementation of a specific feature, polyfills try not to replace it with their own.
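As a sketch of that behavior: a typical polyfill guards itself and only patches the prototype when the native method is missing. This is a simplified illustration, not core-js's actual code:

```javascript
// Simplified polyfill sketch: patch Array.prototype.at only if it is absent
if (!Array.prototype.at) {
  Array.prototype.at = function (index) {
    let i = Math.trunc(index) || 0;
    // Support negative indices, like the native implementation does
    if (i < 0) i += this.length;
    return this[i]; // undefined when the index is out of range
  };
}

console.log([1, 2, 3].at(-1)); // 3 — native or polyfilled, same result
```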


Benchmark description: parsing speed and performance test

As I wrote above, I’m going to evaluate each possible compiler separately, because the results of code generation of each compiler may differ. Therefore, in the benchmark, to test each feature, I created bundles compiled using TypeScript, SWC and Babel. You may object that it would be nice to check ESBuild as well, but at the time of writing, it was not capable of generating ES5 code, so I did not consider it.

An example of the difference in generated code:

// Such code
const sum = (a = 0, b = 0) => a + b;

// Babel will compile into this
var sum = function sum() {
  var a = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : 0;
  var b = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : 0;
  return a + b;
};

// And TypeScript into this
var sum = function (a, b) {
    if (a === void 0) { a = 0; }
    if (b === void 0) { b = 0; }
    return a + b;
};

In addition to these 3 builds, I created one more, in which the code of the feature under test remained intact. I will call this one "modern" throughout the text, since it contains modern syntax.

I was also interested to check how different features work in different browsers. After all, browsers may have different engines, or at least different sets of optimizations, which means the benchmark results may differ from one browser to another. To automate collecting metrics in different browsers, I created a small HTTP server in Node.js.

Each test involves opening the generated HTML file N times with a delay between runs. Each run was performed in a new browser tab in private mode. Upon opening the HTML file, the browser executes the JavaScript code and, after it finishes, sends a request to the HTTP server with the result of that test iteration. I tried to get metrics that would correlate as much as possible with First Paint, Last Visual Change, and other similar metrics.
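The per-iteration code inside each generated HTML file can be sketched roughly like this; the endpoint and the workload function are illustrative stand-ins, not the benchmark's actual code:

```javascript
// Hypothetical per-iteration measurement: time the feature under test,
// then report the result to the local collector server.
function runFeatureUnderTest() {
  // stand-in workload; in the real benchmark this is the compiled feature code
  let acc = 0;
  for (let i = 0; i < 100000; i++) acc += i;
  return acc;
}

const start = performance.now();
runFeatureUnderTest();
const duration = performance.now() - start;

// In the browser, the result would then be POSTed to the Node.js collector:
// fetch('http://localhost:3000/result', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify({ duration }),
// });
console.log(`iteration took ${duration.toFixed(2)} ms`);
```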

Benchmark process visualization

First of all, I created the benchmark to measure feature performance, but it was also interesting to look at the features' impact on parsing speed. So, to evaluate parsing speed, I created 4 additional builds in which I simply multiplied the code from the performance builds. Then I measured how long it takes the browser to read the contents of the script element.
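One way to approximate this kind of measurement — a sketch only, the benchmark's actual method may differ — is to time how long the engine takes to parse the multiplied source text:

```javascript
// Multiply a small unit of code, as in the parsing builds...
const unit = 'var f = function (a, b) { return a + b; };\n';
const bigSource = unit.repeat(10000);

// ...then force the engine to parse the whole string and time it.
const start = performance.now();
new Function(bigSource); // parse + compile, without executing the body
const parseTime = performance.now() - start;

console.log(`parsed ${bigSource.length} chars in ${parseTime.toFixed(2)} ms`);
```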


Benchmark results: not everything is so clear

We have gradually come to the results section. Here I made a bar chart for each ES version as well as for each syntax feature. Each chart shows the code execution speed for each build in each browser. The longest bar on the chart means the build worked the fastest.

Be careful — there are a lot of tests and graphs in this section!

Performance evaluation of ES features

ES2015 (ES6)

Arrow functions benchmark results

Arrow functions. As it turned out, there is a difference in execution speed between regular and arrow functions — but only in Chrome, Opera, and other V8-based browsers, where arrow functions work 15% slower. Apparently, in these browsers, keeping track of the context in which the function was created is more expensive than giving each function its own context.

Test source code.

Classes benchmark results

Classes. In this test, there is a huge gap between the results of different compilers. The modern and TypeScript configurations were significantly faster. The modern configuration generally proved the most productive, except that Safari worked better with TypeScript. Code generated by Babel and SWC ran 2–3 times slower.

Test source code.

Default parameters benchmark results

In the default parameters test, the results are the exact opposite. SWC and Babel show similar results and are the fastest. The slowest was the TypeScript build. The modern one is not far from TypeScript, but still a little more effective.

Test source code.

For..of iterators benchmark results

Iteration using the for..of construct. TypeScript breaks all records again. Next comes the modern build, then SWC, and Babel is last.

Test source code.

Generators benchmark results

Generators. Babel showed the fastest result among the compilers. With the modern build, not everything is so clear: in Safari it worked more effectively than Babel, but in Firefox it is the slowest. Apparently, the Firefox developers didn't think much about optimizing generators. If we don't count this browser, I would say the modern build shares first place with Babel, while SWC and TypeScript share second.

Test source code.

Object literals benchmark results

In the enhanced object literals test, the situation is also ambiguous. In general, the TypeScript and modern builds are the most productive: in Firefox and Safari TS takes precedence, while in V8 browsers the modern one does. According to the chart, Babel turned out to be the slowest, but I think this was due to some side effect, and in a real project the results of SWC and Babel would be the same.

Test source code.

Rest parameters benchmark results

The rest parameters test produced completely unambiguous results. The most productive configuration is the modern one; the slowest is TypeScript.

Test source code.

Spread operator benchmark results

Spread operator. The modern build was definitely the fastest; in Chrome and Opera, the difference was as much as 4 times. The remaining configurations performed at about the same level, though in Firefox TypeScript was slightly slower.

Test source code.

Template literals benchmark results

Template literals — again, the modern build definitely proved more productive. There is no difference between the compiler builds.

Test source code.

ES2016

Exponentiation operator benchmark results

Exponentiation operator. Absolutely no difference, everything is within the margin of error.

Test source code.

ES2017

Async functions benchmark results

Asynchronous functions. The modern build is in first place again, with the largest margin in Safari — up to 20%. There is a slight difference between the other configurations, but no unambiguous conclusions can be drawn: in Chrome and Opera, Babel is the slowest build, while in Firefox it is the fastest.

Test source code.

ES2018

Formally speaking, only two syntax features appeared that year — the rest and spread operators in objects. However, I thought 2 tests might not be enough, because depending on how these operators are used, different tools generate code in different ways.
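To give a feel for that variety, here is a runnable sketch of the kind of ES5 output compilers emit for object spread. The helper below approximates TypeScript's __assign helper; it is simplified, not the exact emitted code:

```javascript
// Simplified version of the __assign helper TypeScript emits for ES5
var __assign = Object.assign || function (target) {
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i];
    for (var key in source) {
      if (Object.prototype.hasOwnProperty.call(source, key)) {
        target[key] = source[key];
      }
    }
  }
  return target;
};

var a = { x: 1 };
// ES5 form of: var merged = { ...a, b: 2 };
var merged = __assign(__assign({}, a), { b: 2 });
console.log(merged); // { x: 1, b: 2 }
```

Babel's output looks different again — it emits its own _objectSpread helper that also copies symbol keys and property descriptors, which is part of why the benchmark treats each compiler separately.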

Here is a list of links to the sandboxes of the selected compilers, if you want to look at the variety of generated code:

Let’s start with a simple one. To evaluate the rest operator, I created 2 tests — in one I just copy the object, and in the other I take several properties from the object.

Rest in objects benchmark results

In the first case, the rest operator showed quite interesting results. Browsers seem to be divided into 2 camps: Chrome and Opera are optimized for TypeScript's code, the modern build comes next in speed, and Babel and SWC trail at the end; but in Firefox and Safari the situation is the exact opposite — TypeScript works the slowest, and the results of the remaining builds are almost identical.

Rest in objects benchmark results

In the second case, in the same Safari and Firefox, the modern configuration beats everyone, but in Opera and Chrome it is the slowest. Among the compilers, TypeScript was again a little slower than the rest.

Now let's talk about the spread operator. I wrote 4 tests using the spread operator in different configurations. But regardless of how I used the operator, the benchmark results were similar to those for the rest operator — the modern and TS builds are fast in Safari and Firefox, but just as slow in Chrome and Opera.

Spread in objects benchmark results

In all the tests, the picture is approximately the same. If you are interested in all the results, you can study them in the repository.

ES2018 Bonus

Spread in objects with numbers as keys benchmark results

A funny fact I discovered while writing the benchmark. If you have already looked at the test source code, you noticed that I used 'a' + i values as keys. And I did it on purpose! Because, as it turned out, if you use a number as an object key, then for some reason unknown to me, in Chrome and Opera the modern build starts working incredibly fast. Not just faster than the other builds in the same browsers, but even faster than in Firefox or Safari, although those showed their superiority in the tests above.

Test source code.

ES2019

Private class variables benchmark results

Private properties in classes (formally standardized only in ES2022, though browsers shipped them earlier). Again, an unconditional victory for the modern build. TypeScript also shows good results, apart from the tests in Safari. But you shouldn't rely on them anyway — TypeScript, unlike the other compilers, cannot compile private fields to ES5.

Test source code.

ES2020

Nullish coalescing operator benchmark results

Nullish coalescing operator. Again, an unconditional victory for the modern configuration. Babel proved to be the worst.

Test source code.

Optional chaining operator benchmark results

Optional chaining operator. TypeScript performed worse than other assemblies, but otherwise there is no difference.

Test source code.

ES2021

Logical assignment operators. I was interested in checking individually how the logical operators work when assignment is applied and when it is not.

Logical operators benchmark results

In the first case, the modern build is slightly less productive in Chrome but more productive in Safari. There is no difference between the compilers.

Test source code.

Logical operators benchmark results

And in the second case, the modern build, together with TypeScript, shows its superiority over the other builds.

Test source code.

ES2022

Private methods benchmark results

Private methods in classes. The results are the same as in the classes test. TypeScript is still unable to compile private modifiers to ES5, but for ES6 the ratio of results remains the same.

Test source code.

Parsing speed evaluation

In general, the drive to increase parsing speed peaked back when optimize-js was popular. A lot of time has passed since then: the author of that library has marked it obsolete, and the V8 developers described its practices as harmful. So front-end developers no longer chase a couple of won milliseconds — and neither was I going to, of course. But still, I wondered whether using modern syntax could affect how fast the browser reads JavaScript code.

I ran the tests and got a couple of interesting results. For example, it turned out that Safari reads arrow functions more slowly than regular ones, even though the file with arrow functions is the smallest.

Arrow functions parsing benchmark results

And Firefox takes quite a long time to process code with private properties in a class. Funnily enough, it reads private methods without much difficulty.

Private properties in class parsing benchmark results

This is where the interesting facts end. In the other cases, the benchmark results show a clear correlation between time and the number of characters in the generated code, which means that parsing the modern build proved the most effective. If you want to see the detailed results, here is the link.

A brief summary of the benchmark

Everything described above can be summarized in 3 main ideas.

  1. The modern build does not have absolute superiority over ES5 and sometimes even performs more slowly. However, it is the fastest in most cases.

  2. There is no ideal tool for producing the most productive ES5 code — at the very least because different browsers have different optimizations. But you can choose the best trade-off for yourself. For example, if there is a huge number of generators in your application, Babel is the obvious choice; if there are a lot of classes, it's worth looking towards TypeScript.

    I would say that TypeScript often performs better than the other tools. However, it upsets me that in some places where it does well in Safari, it can show the worst result in Chrome — especially considering that the majority of users are on Chrome.

  3. We can conclude that not all browsers have paid attention to optimizing modern syntax. Firefox works terribly with generators, Chrome hasn't fine-tuned spread in objects, and so on. However, it seems to me that when browsers work on under-the-hood optimizations, they are more likely to focus on modern syntax. So who knows — maybe in a couple of years the modern build will definitely be the fastest.


And what about the bundle size?

The favorite phrase of developers who still compile to ES5 goes like this:

“Well, what’s the point of chasing a reduction in the size of the bundle? Compression tools will level out all this difference anyway.”

Whether they are right in their reasoning, we will now find out.

I decided to check this on my work project, because compression is a rather complex process, and it would not be entirely fair to evaluate each feature separately.

For these tests, I removed the polyfills from the build. Then I compiled our project with each of the tools, compressed the output with GZip and Brotli, and calculated the total size of the application's chunks. Here are the results I got.

             Raw      GZip     Brotli
Modern       6.58 MB  1.79 MB  1.74 MB
TypeScript   7.07 MB  1.82 MB  1.86 MB
Babel        7.71 MB  1.92 MB  1.86 MB
SWC          7.60 MB  1.94 MB  1.86 MB

You may be surprised that Brotli gave worse results than GZip on the TypeScript build. This happened because I ran Brotli with compression level 2 (the maximum is 11). I chose this level because it is as close as possible to the default settings of Cloudflare, which we use in our project.

And what do we see? The size of the project really decreased by 7–15%, both raw and compressed. Here the decision is up to you: for some, such a difference will be insignificant; for others, on the contrary, it will seem substantial. For ourselves, we decided that this difference is big enough to try using the modern build in production.

It turns out that the modern assembly gets another victory.

And once again, the table shows TypeScript's superiority over the other tools in terms of the volume of generated code.


Is 4% so important?

A simple conclusion can be drawn from everything described above: users will get a nicer UX if your product is compiled to a higher ES version. Your web application will become more productive, and your bundle will be smaller.

However, you also need to understand that, according to Browserslist, only 96% of users worldwide currently have ES2015 support, 95% have ES2017, and higher versions have even less.

Therefore, the conclusion can be made as follows:

  • If these 4% of users with outdated browsers are not so important to you, it would be more logical to build the site for a fresh version of ES — for example, ES2018.

  • If they are still important, but you do not have a very large project, or the increase in quality metrics is not very important to you, you can build for ES5. Performance will not suffer critically.

  • But if users with outdated browsers matter to you, and so does a slight increase in performance, you should think about creating two builds — modern and ES5 — and about how to deliver the right one to each user. That's exactly what we did in our company.


Our experience of using modern assembly

In general, the idea of separate builds appeared in our product long before I joined the company; I just improved it a little. Now we build our application twice: one build is compiled to ES5 with all the required polyfills, and the other to ES2018 with a very limited set of polyfills.

As for why we chose ES2018: the higher the version of the standard we looked at, the less the difference between builds of different versions was felt. We chose ES2018 as a kind of edge at which 95% of users get a fast website and the advantages of a modern build are used to the maximum. We don't keep private fields in classes, so the only difference between ES2018 and ES2022 for us is a small performance loss when using the nullish coalescing operator and, possibly, the logical assignment operators. We can certainly live with this loss.
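For scale, this is the kind of extra code the ?? operator costs when targeting ES2018, which lacks it. The function wrapper is for illustration; the transform shown is typical compiler output, not our project's exact code:

```javascript
// Source: const value = input ?? 'fallback';
// Typical output when targeting a version without ??:
function pick(input) {
  return input !== null && input !== void 0 ? input : 'fallback';
}

console.log(pick(undefined)); // 'fallback'
console.log(pick(0));         // 0 — unlike ||, ?? keeps falsy non-null values
```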

And now about how we implemented it. Especially for this article, I created another repository to show how an application build can be organized with this kind of separation. It contains a simplified version of our production setup, but it still shows how to split not only the JavaScript builds but also the CSS. If you open the developer tools on the assembled site, you can see that even on this small project the files shrink by 120 KB, which was 30% in my case. You can try the deployed build from this repository at this link.

And if you don't want to dig into the repository, I will briefly describe how we determine on the client side which build to download. We simply check the browser's ability to handle asynchronous functions and modern syntax, as well as the presence of several APIs that would otherwise need polyfills. Then, using the window.LEGACY flag, we add a script with the right address to the head of the document.

try {
  // Polyfills check
  if (
    !('IntersectionObserver' in window) ||
    !('Promise' in window) ||
    !('fetch' in window) ||
    !('finally' in Promise.prototype)
  ) {
    throw {};
  }

  // Syntax check
  eval('const a = async ({ ...rest } = {}) => rest; let b = class {};');
  window.LEGACY = false;
} catch (e) {
  window.LEGACY = true;
}
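After the flag is set, the right bundle can be attached roughly like this. The paths are illustrative; the real project's URLs differ:

```javascript
// Choose the bundle path based on the legacy flag
function pickBundleSrc(isLegacy) {
  return isLegacy ? '/static/legacy/main.js' : '/static/modern/main.js';
}

// In the browser, the script is then appended to the head, e.g.:
// var script = document.createElement('script');
// script.src = pickBundleSrc(window.LEGACY);
// document.head.appendChild(script);
console.log(pickBundleSrc(true));  // /static/legacy/main.js
console.log(pickBundleSrc(false)); // /static/modern/main.js
```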

Real metrics

Of course, metrics in a vacuum are a good thing, but what about real ones? In the end, we deployed the strict separation into ES5 and ES2018 builds to production. And here is the difference in Sitespeed.io metrics we got between the builds:

  • First Paint — 13% faster;
  • Page Load Time — 13% faster;
  • Last Visual Change — 8% faster;
  • Total blocking time — 13% less;
  • Speed Index — 9% faster.

For the most part, this difference was achieved thanks to the smaller size of the downloaded files. But in any case, the transition to ES2018 changed the metrics for the better. And the best part is that this gain was obtained almost without touching the source code.


The end

Thank you for your time. I hope that, like me, you found it interesting to learn about performance, parsing speed, and the resulting metrics.

I highly recommend looking at the benchmark repository. In the article I described only the conclusions about build performance, but in the benchmark I also wanted to look at the difference in browser performance across OSes and architectures. For example, you can find out whether Microsoft's assurances that Edge is faster than Chrome are true.

I will also once again give a link to the repository with an example of separating not only JavaScript but also CSS builds, and, in addition, a link to the GitHub Pages deployment of that build.

Here are my social links: twitter, mastodon, linkedin. And that’s it. Bye!
