JavaScript on GraalVM

Paulo Lopes
I'm Paulo and I've used my 10+ years of software development experience writing, rewriting, banging my head against the wall, editing and re-editing high-performance web applications to make Vert.x ...
・3 min read

When we think of JavaScript on the server side, our minds jump straight to Node.js, but is that the whole story? What are the alternatives? Why should we care about them?

JavaScript on the server side has become mainstream. Like sliced bread, everyone knows it, everyone has tried it, and everyone uses it (Uber, Netflix, even Microsoft...).

Node has made all this possible. It is a simple, but not too simple, runtime that gives you enough to build powerful apps.

In my job I consider myself a polyglot developer, so I'm surrounded by Java, JavaScript and other languages all the time, and this gives me exposure to different runtimes: the JVM, V8, ChakraCore and, most recently, GraalVM. According to their website:

GraalVM is a universal virtual machine for running applications written in JavaScript, Python 3, Ruby, R, JVM-based languages like Java, Scala, Kotlin, and LLVM-based languages such as C and C++.

Why would one care about another runtime? Well, for me it all started when I saw the TechEmpower benchmarks and noticed that, contrary to popular belief, JavaScript isn't really super fast! (I believe it is super productive, but its speed is highly debatable.) Of course benchmarks should always be taken with a grain of salt, but this one is quite interesting as it compares around 300 different frameworks across different languages and runtimes.

techempower

To my despair, the best Node.js result ranks at #52, so I decided to see what we could do to improve this. Looking at the benchmark source code turned out not to help much, as there were no obvious performance bottlenecks, so I decided to look outside of the box/V8.

graaljs-performance

So at first sight, GraalJS does not seem to give us much of an improvement, just a few wins here and there. On the other hand, what the TechEmpower benchmarks show is that, in the long run, jobs on the JVM tend to be the fastest.

In order to verify my premise, I started working on a small project called EcmaScript for Eclipse Vert.x, or ES4X for short. The idea was simple:

Create a modern EcmaScript runtime on top of the Eclipse Vert.x toolkit using the capabilities of the JVM/GraalVM

In order to keep the project maintainable I've decided to keep it minimal:

  1. Provide a CommonJS module loader
  2. Develop using a node/npm-style workflow, a.k.a. package.json
  3. Support debugging with the Chrome inspector
  4. IDE support via TypeScript definitions

After a couple of iterations and some feedback, I decided to implement the benchmark. So I started with:

```shell
npm init -y
npm add vertx-scripts --save-dev
npm add @vertx/core --save-prod
npm add @vertx/web --save-prod
npm add @reactiverse/reactive-pg-client --save-prod
npm install
```
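After running these commands, the resulting `package.json` might look roughly like this sketch. The project name, version specifiers and the `start` script are illustrative assumptions, not copied from the real benchmark project; check the vertx-scripts docs for the actual script names it registers.

```json
{
  "name": "techempower-benchmark",
  "private": true,
  "main": "index.js",
  "dependencies": {
    "@vertx/core": "latest",
    "@vertx/web": "latest",
    "@reactiverse/reactive-pg-client": "latest"
  },
  "devDependencies": {
    "vertx-scripts": "latest"
  },
  "scripts": {
    "start": "vertx-scripts launcher run"
  }
}
```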

This looked pretty Node.js-friendly to me, so I implemented the benchmark and went through the process of being reviewed by the community and merged into the master branch. And when that happened, something amazing emerged:
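To give an idea of what such code looks like, here is a minimal sketch of registering a route the way `@vertx/web`'s `Router` does. The route path and handler body are illustrative, not taken from the actual benchmark source; the function only assumes a router object exposing `get(path).handler(fn)`, which is the Vert.x Web routing shape.

```javascript
// Hypothetical sketch: registering a plaintext route on a vert.x-web style
// Router. Only the get(path).handler(fn) shape is assumed, so the same
// function works against any object with that interface.
function setupRoutes(router) {
  router.get('/plaintext').handler(ctx => {
    // vert.x-web hands each request a RoutingContext; end() writes the body
    ctx.response()
      .putHeader('Content-Type', 'text/plain')
      .end('Hello, World!');
  });
  return router;
}
```

In ES4X the router would come from `Router.router(vertx)` (imported from `@vertx/web`) and be wired up with `vertx.createHttpServer().requestHandler(router).listen(8080)`; those names follow the Vert.x API, but verify them against the ES4X docs.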

bench-db

JavaScript and Vert.x were not just fast, but by far the fastest when compared to Node.js (or to the Spring Framework, as a reference for the most common framework used in Java), and even faster than the pure Java counterpart of the benchmark using the same dependencies, vertx and vertx-web.

In fact the results are amazing, as they put JavaScript as the #2 fastest language in the benchmark:

es4x-fastest

So the next time you need to write high-performance JavaScript server code, take a look at ES4X, or join the project and help make it even greater!

Discussion (8)

YuanChieh,Cheng

hi @pmlopes
May I ask a question?
I am wondering which part makes ES4X so fast compared to Node.js, because as you mentioned in the video, GraalJS runs as fast, or almost as fast, as V8.
Does the performance gain come from replacing libuv, or from running on GraalVM?
Or is it because you seem to wrap the Node.js API with a Java implementation?
I'm really curious about that.

Thanks :)

Paulo Lopes Author

There are a couple of things that make it faster. The usage of Netty provides faster IO than libuv; then Vert.x implements a different event loop that uses all computing resources and gives better handling of async tasks; and finally GraalVM (in contrast to V8) can optimize the full application, not just the JS script: it optimizes Netty, Vert.x and the JS code (GraalJS + ES4X), so in the end the JIT gives an extra boost to everything and you get that performance. The cost isn't free: this has the side effect that it only works as expected for long-running processes (short CLI apps won't benefit from it).

YuanChieh,Cheng

hi Paulo, it's me again. :P

It's really interesting, but a little bit confusing, to read about vert.x with ES4X.
As a beginner, I am not really sure about the relation between ES4X and vert.x.
As far as I know, vert.x is independent of GraalVM and does not support ES6 features.

But ES4X supports ES6 (which is awesome, of course) and has to run on GraalVM as the compiler and the runtime.

Back to something I feel not quite sure,

  1. ES4X seems to support vert.x syntax but also ES6 syntax. Take timers for example: in vert.x I have to use vertx.setPeriodic() and in ES6 I use setInterval(). They both work in an ES4X project, but in the ES4X examples I don't see a showcase of this kind of capability.
  2. This is my personal question about Vert.x. I'm not really sure what multiple event loops mean in the documents:

Even though a Vertx instance maintains multiple event loops, any particular handler will never be executed concurrently, and in most cases (with the exception of worker verticles) will always be called using the exact same event loop.

especially this line: "any particular handler will never be executed concurrently". Does it mean that even if I create 3 event loops A, B and C, and event loop A is executing one handler, B/C have to stall until A finishes execution?

  3. Would I gain a lot of performance benefit if I used the es4x-launcher to run my current project written in Express.js?

Thanks for your answer and contribution.
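The timer point in (1) above can be sketched as a small helper. This assumes ES4X exposes the Vert.x-style global `vertx.setPeriodic(ms, fn)` alongside the standard `setInterval`, as the comment describes; the helper name and the feature check are hypothetical.

```javascript
// Hypothetical helper: schedule a repeating task using whichever timer API
// the runtime provides. Under ES4X the global `vertx` exists and
// vertx.setPeriodic(ms, fn) is the Vert.x periodic timer (fn receives the
// timer id); under plain Node.js we fall back to the standard setInterval.
function every(ms, fn) {
  if (typeof vertx !== 'undefined' && vertx.setPeriodic) {
    return vertx.setPeriodic(ms, fn);  // vert.x-style periodic timer
  }
  return setInterval(fn, ms);          // standard JavaScript timer
}
```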

Paulo Lopes Author

Hi,

ES4X is a new JavaScript runtime that is meant to replace the official vertx js. The official vertx js uses Nashorn, so it only supports ES5.1; it's not high performance, and it has a development workflow based on Java tools such as Maven rather than npm.

The goal is that with version 4 vertx will drop that js runtime and adopt ES4X.

This brings us to the next question: ES4X is based on vertx, not libuv, so Express will not work out of the box. Writing an API shim could be doable, but it's not in my plans atm.

Finally, regarding the multiple event loops: they never block each other. What is guaranteed is that if a request is started on event loop A, then all handlers for that request are handled on event loop A, to avoid context switching and improve performance.

YuanChieh,Cheng

Thanks for your patience and the explanation.
It solved my problem once I knew that ES4X is a JS runtime.
Really appreciate it.

Marek Marczak

I really doubt that a single vertx instance (verticle) is faster than a single instance of libuv. libuv is written in pure C and talks directly to the system kernel (file descriptors). The main reason that vertx is faster is the multi-reactor pattern: most frameworks based on libuv (node, asyncio) are single-threaded, while vertx can create an event loop on each thread by default.

Paulo Lopes Author

Hi Marek, please read the benchmark source code carefully. You will see that the node implementations scale to all cores using the cluster module. Others had the same question and also provided an implementation using 0http, which claims to be fast; the results, however, show the opposite.

You can see all source code here:

github.com/TechEmpower/FrameworkBe...

Also, the code has been peer reviewed, but you're welcome to review it too.

The latest run and results are here:

techempower.com/benchmarks/#sectio...

YuanChieh,Cheng

This is really great!