
There is beauty in simplicity

RENARD Laurent ・ Updated ・ 11 min read

Last week I finally worked on a test runner for Nodejs based on zora.
I had already written an article inspired by some of zora's properties, and I keep finding it interesting how such a small project (in code size) can inspire new topics of discussion (I still have a few in mind). This one will lead us through some fundamental concepts of Nodejs' architecture and of general computer programming, such as the event loop, concurrency and parallelism, and how they relate to the performance of testing software.

A surprising benchmark

It all started when I added pta to the benchmark in zora's repository. This benchmark tries to compare the speed of execution of various testing frameworks. Performance is clearly at the center of the developer's experience and their productivity when it comes to testing software. Some of the popular frameworks have relatively complex architectures involving abstractions such as child processes to deliver (not only) top level performance. zora, at the opposite end, is quite simple, but performs much faster according to the aforementioned benchmark.

How can it be?

The benchmark consists in running N test files, each having M tests. One test would correspond to the following code, adapted to each test runner's syntax (if I did not make any mistake):

const wait = waitTime => new Promise(resolve => {
    setTimeout(() => resolve(), waitTime);
});

test('some test ', async function (assert) {
    await wait(WAIT_TIME); // WAIT_TIME is a variable of the benchmark
    assert.ok(Math.random() * 100 > ERROR_RATE); // a given percentage of the tests should fail (eg ~3%)
});

By changing N, M and WAIT_TIME we can mimic what I consider to be the profile of some typical Nodejs applications.

  1. profile small library: N = 5, M = 8, WAIT_TIME = 25ms
  2. profile web app: N = 10, M = 8, WAIT_TIME = 40ms
  3. profile api: N = 12, M = 10, WAIT_TIME = 100ms

Each framework runs with its default settings.
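Before looking at the measurements, we can compute a lower bound ourselves: a runner that executes everything serially has to pay WAIT_TIME for each of the N * M tests. A quick sketch (the numbers come from the profiles above):

```javascript
// Lower bound for a runner executing every test serially:
// each of the N * M tests sleeps WAIT_TIME ms, one after the other.
const serialLowerBound = ({files, testsPerFile, waitTimeMs}) =>
    files * testsPerFile * waitTimeMs;

const profiles = {
    library: {files: 5, testsPerFile: 8, waitTimeMs: 25},
    webApp: {files: 10, testsPerFile: 8, waitTimeMs: 40},
    api: {files: 12, testsPerFile: 10, waitTimeMs: 100}
};

console.log(serialLowerBound(profiles.library)); // 1000 (ms)
console.log(serialLowerBound(profiles.webApp));  // 3200 (ms)
console.log(serialLowerBound(profiles.api));     // 12000 (ms)
```

These lower bounds line up with the columns of the serial runners in the table below, which suggests the benchmark mostly measures scheduling strategy rather than framework overhead.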

Here are the results on my developer machine (MacBook Pro, 2.7GHz i5) with node 12:

          zora-3.1.0   pta-0.1.0   tape-4.11.2   Jest-24.9.0   AvA-2.4.0   Mocha-6.2.1
Library   ~100ms       ~230ms      ~1240ms       ~2835ms       ~1888ms     ~1349ms
Web app   ~130ms       ~280ms      ~3523ms       ~4084ms       ~2900ms     ~3696ms
API       ~190ms       ~330ms      ~12586ms      ~7380ms       ~3900ms     ~12766ms

We can even increase the differences if we use somewhat extreme(?) values (N = 100, M = 10, WAIT_TIME = 100ms):

zora    ~450ms
pta     ~750ms (1.6x slower)
tape    ~104sec (230x slower)
Jest    ~43.1sec (96x slower)
AvA     ~24.1sec (53x slower)
Mocha   ~104.5sec (230x slower)

As we will see, the results can actually be predictable, at least for some of the test runners.

The Event Loop and Nodejs's architecture

Nodejs' JavaScript execution (like that of many other runtimes) is single threaded and built around an event loop. There are already many resources online to grasp these two concepts (you can for example refer to the official Nodejs documentation) but, to make it short, it means:

  1. The main process of a Nodejs program runs within a single thread.
  2. Processing tasks are scheduled with a queue of events. These tasks can be anything like executing a statement, calling the next item of an iterator, resuming a suspended asynchronous function, etc.

The event system is particularly helpful for asynchronous operations, as you do not have to block the main thread waiting for a task to complete. Instead, you launch the asynchronous task and later, when it is over, the scheduler is notified to enqueue another task: the execution of the callback.

Historically, asynchronous tasks were performed exclusively through event listeners called, due to their nature, "call me back" or "callback" functions. In modern Nodejs there are newer built-in abstractions you can use, such as async functions and promises or (async) iterators, (async) generator functions, etc. But in essence the idea is the same: prevent the main thread from being blocked waiting.
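As a sketch of that equivalence, here is a hand-rolled callback-style API (callbackStyleWait is a made-up name for illustration) wrapped into a promise, which is essentially what util.promisify from Node's core does for you:

```javascript
// a callback-style API: calls back with (error, result) once done
const callbackStyleWait = (ms, callback) =>
    setTimeout(() => callback(null, `waited ${ms}ms`), ms);

// the same API wrapped into a promise, consumable with async/await
const promisedWait = ms => new Promise((resolve, reject) => {
    callbackStyleWait(ms, (err, result) => err ? reject(err) : resolve(result));
});

(async () => {
    const result = await promisedWait(10);
    console.log(result); // "waited 10ms"
})();
```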

Consider the following snippet:

(function fn(){
    console.time('fn timer');
    console.time('timer1');
    console.time('timer2');
    setTimeout(() => console.timeEnd('timer1') /* (B) */, 1000); // this won't block the main thread nor the function execution
    setTimeout(() => console.timeEnd('timer2') /* (C) */, 1000); // this won't block the main thread nor the function execution
    console.timeEnd('fn timer'); // (A) this will be called before the timers fire
})();

The callbacks will execute after the function fn has run to completion. The whole program will run in a bit more than 1000ms, as
setTimeout is not blocking: it just schedules the execution of the callback function on the event loop, after some elapsed time.

The whole Nodejs architecture is based around these concepts. Let's take the example of a web API.

In a multi threading environment, a request would typically be handled by a single thread from its parsing to the sending of the response.
It means that once the request has been parsed and while the database is processing the query, the thread is paused waiting for the database to complete its work, potentially wasting processing resources. Later it is resumed to send the response made of the database result.
It implies you can roughly have only as many concurrent requests as there are threads the server can manage at the same time.

In Nodejs, as long as you don't block the event loop, the server is able to handle more requests even within its single thread. It is usually done by using one of the asynchronous patterns to deal with the costly tasks that need access to the disk, the network or any kernel operation. Most of the time, this often called "I/O" operation is itself delegated to a process that leverages multi threading capabilities, like a database server for instance.

Just as with setTimeout in our previous example, the request handler does not have to block the event loop waiting for the database to complete its job; it just needs to pass a callback to execute once the database is done. It means the server can possibly handle a lot of concurrent requests with a single thread, being mostly limited by the database. In a sense, this architecture allows the system to avoid being idle and wasting resources.
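We can simulate this with a fake database call (the getUserFromDatabase helper below is a made-up stand-in resolving after a timer, not a real driver API): both handlers return immediately, and the responses arrive later without the thread ever being blocked:

```javascript
// a stand-in for a database query: resolves after a simulated network delay
const getUserFromDatabase = id =>
    new Promise(resolve => setTimeout(() => resolve({id, name: 'Ada'}), 20));

const log = [];

function handler(requestId) {
    // launch the query but do not wait for it here
    getUserFromDatabase(requestId).then(user => log.push(`response for ${user.id}`));
    log.push(`handler ${requestId} returned`);
}

handler(1);
handler(2); // accepted immediately: the thread was never blocked by request 1

setTimeout(() => console.log(log), 50);
// [ 'handler 1 returned', 'handler 2 returned', 'response for 1', 'response for 2' ]
```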


Concurrency is the ability of a program to start, execute, terminate tasks in an overlapping time. It does not mean the tasks have to run at the same time. It can refer to the ability to interrupt a task and allocate system resources to another task (context switching). Nodejs is a perfect example as you can reach very high concurrency with a single thread.
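A small sketch of that context switching: each task yields back to the event loop between its steps (here with setImmediate), so two tasks progress in an interleaved fashion within a single thread:

```javascript
// yield control back to the event loop for one turn
const tick = () => new Promise(resolve => setImmediate(resolve));

const order = [];

async function task(label, steps) {
    for (let i = 1; i <= steps; i++) {
        order.push(`${label}:${i}`);
        await tick(); // suspension point: another task may run now
    }
}

Promise.all([task('A', 2), task('B', 2)])
    .then(() => console.log(order)); // [ 'A:1', 'B:1', 'A:2', 'B:2' ]
```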

Now that we are familiar with the callback pattern, let's use async functions and promises instead.

const wait = (time = 1000) => new Promise(resolve => setTimeout(() => resolve(), time));

async function task(label){
    await wait();
    console.log(`task ${label} is done`);
}

The task function may appear to block the main thread but it does not. The await statement does suspend its execution for a while, but it does not prevent the main thread from running another task.

const run = async () => {
    const p1 = task(`task 1`);
    const p2 = task(`task 2`);
    await p1;
    await p2;
};

// or if it makes more sense

const run = async () => {
    const tasks = [task(`task 1`), task(`task 2`)];
    await Promise.all(tasks);
};

run();


The last program will run in something close to 1000ms whereas a single task function itself takes 1000ms to run. We were able to execute the two tasks concurrently.
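You can verify this with a rough measurement (shorter waits here to keep the example snappy): two 100ms tasks started together complete in close to 100ms, not 200ms:

```javascript
const wait = (time = 100) => new Promise(resolve => setTimeout(resolve, time));

const measure = async () => {
    const start = Date.now();
    // both timers run during the same window of time
    await Promise.all([wait(100), wait(100)]);
    return Date.now() - start;
};

measure().then(elapsed => console.log(`two concurrent 100ms tasks took ~${elapsed}ms`));
```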


Now let's consider the following function:

// async function is not mandatory here, but it emphasises the point.
async function longComputation() {
    console.log(`starts long computation`);
    let sum = 0;
    for (let i = 0; i < 1e9; i++) {
        sum += i;
    }
    console.log(`ends long computation`);
    return sum;
}
This function takes close to 1s to return its result on my machine. But contrary to the task function, longComputation, whose code is entirely synchronous, blocks the main thread and the event loop by monopolising the CPU resources given to the thread. If you run the following program:

const run = async () => {
    const p1 = longComputation();
    const p2 = longComputation();
    await p1;
    await p2;
};

run();


It will take close to 2s (~1s + ~1s) to complete and the second task won't start before the first one is finished. We were not able to run the two tasks concurrently.
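The blocking is directly observable: a timer scheduled for 0ms only fires once the synchronous loop has given the thread back (the loop is shortened to 1e8 iterations here to keep the example quick):

```javascript
const start = Date.now();
let timerDelay = null;

setTimeout(() => {
    // much more than 0ms: the loop below held the thread the whole time
    timerDelay = Date.now() - start;
    console.log(`0ms timer actually fired after ${timerDelay}ms`);
}, 0);

// purely synchronous work: the event loop cannot process the timer meanwhile
let sum = 0;
for (let i = 0; i < 1e8; i++) {
    sum += i;
}
```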

In practice, writing such code is a very bad idea and you would rather delegate this task to another process able to take advantage of parallelism.

Parallelism is the ability to run different tasks literally at the same time. It usually involves running multiple threads with different CPU cores.

Well, actually, even with Nodejs you can run multiple threads (or child processes). Let's see an example with the newer Worker Threads API:


// worker.js
const {
    parentPort
} = require('worker_threads');

function longComputation() {
    let sum = 0;
    for (let i = 0; i < 1e9; i++) {
        sum += i;
    }
    return sum;
}

parentPort.postMessage(longComputation());


and the main program

// main.js
const {
    Worker
} = require('worker_threads');

const longCalculation = () => new Promise((resolve, reject) => {
    const worker = new Worker('./worker.js');
    worker.on('message', resolve);
    worker.on('error', reject);
});

const run = async () => {
    const p1 = longCalculation();
    const p2 = longCalculation();
    await p1;
    await p2;
};

run();


Great! This has run in roughly 1000ms. It is also interesting to see how we have shifted back to the paradigm of the previous section, with non blocking functions.

Note: attentive readers will have spotted that longCalculation creates a new worker thread with each invocation. In practice you would rather use a pool of workers.

How is this related to our testing frameworks ?

As mentioned, speed is a must for the developer experience. Being able to run tests concurrently is therefore very important. On the other hand,
it forces you to write independent tests: if you run tests concurrently you do not want them to mess up some shared data. Independence is often a good practice anyway, but sometimes you need to maintain some state between tests and run them serially (one starts when the previous is finished). This can make the design of a testing software API quite challenging...

Let's now try to explain the result we had for our "extreme" case:

  • Mocha and Tape run test files and tests within a file serially, so they will roughly last N * M * WAIT_TIME ~= 100 * 10 * 0.1s ~= 100s (this is consistent with the measurements)

  • I can see from the progress in the console that AvA is likely running 4 test files in parallel on my machine. From the documentation, I understand that within a file the tests run concurrently (so that the whole test suite would run roughly in N/4 * WAIT_TIME ~= 25 x 0.1 ~= 2.5s), but there might be an extra cost in managing the four child processes (or workers?) because it is 10 times slower than the expected result.

  • Jest seems to run 3 test files in parallel on my machine and the tests within a file serially. So I expected N/3 * M * WAIT_TIME ~= 33 * 10 * 0.1 ~= 33s, yet it is even slower. Again, managing child processes is clearly not free.

  • Zora and pta run every test concurrently, so we can expect the execution time to be related to the slowest test. In practice it takes some time to launch Nodejs, parse the scripts and require the modules, which explains the little extra time. But the results stay steadily below one second whatever test profile we run.
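The estimates above boil down to four scheduling models whose arithmetic can be sketched directly (using the "extreme" profile N = 100 files, M = 10 tests, WAIT_TIME = 0.1s; these are lower bounds that ignore process startup costs):

```javascript
// expected lower bounds, in seconds, for each scheduling strategy
const serial = (N, M, waitTime) => N * M * waitTime;             // tape, Mocha
const filesInParallel = (N, M, waitTime, workers) =>
    Math.ceil(N / workers) * M * waitTime;                       // Jest-like: serial within a file
const filesInParallelConcurrentTests = (N, M, waitTime, workers) =>
    Math.ceil(N / workers) * waitTime;                           // AvA-like: concurrent within a file
const fullyConcurrent = (N, M, waitTime) => waitTime;            // zora: bounded by the slowest test

const [N, M, waitTime] = [100, 10, 0.1];

console.log(serial(N, M, waitTime));                             // 100 (s)
console.log(filesInParallel(N, M, waitTime, 3));                 // 34 (s)
console.log(filesInParallelConcurrentTests(N, M, waitTime, 4));  // 2.5 (s)
console.log(fullyConcurrent(N, M, waitTime));                    // 0.1 (s)
```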

A small zora

Let's build a small zora to understand how it works (and achieve a high concurrency) and how it tackles the problems mentioned in the introduction of the previous section.

We can write a testFunction function like so:

// test.js
const testFunction = module.exports = (description, specFunction, testList) => {
    let error = null;
    let passing = true;
    const subTestList = [];
    // we return the routine so we can explicitly wait for it to complete (serial tests)
    const subTest = (description, fn) => testFunction(description, fn, subTestList).execRoutine;

    // eagerly run the test as soon as testFunction is called
    const execRoutine = (async function () {
        try {
            await specFunction({test: subTest});
        } catch (e) {
            passing = false;
            error = e;
        }
    })();

    const testObject = Object.defineProperties({
        // we **report** test result with async iterators... in a non blocking way
        [Symbol.asyncIterator]: async function* () {
            await execRoutine;
            for await (const t of subTestList) {
                yield* t; // report sub test
                passing = passing && t.pass; // mark parent test as failing in case a subtest fails (but don't bubble the error)
            }
            yield this; // report this test
        }
    }, {
        execRoutine: {value: execRoutine},
        error: {
            get() {
                return error;
            }
        },
        description: {
            value: description
        },
        pass: {
            get() {
                return passing;
            }
        }
    });

    // collect the test in the parent's test list
    testList.push(testObject);

    return testObject;
};

and the test harness factory like so:

// run.js
const testFunction = require('./test.js');
const reporter = require('./reporter.js');

const createHarness = () => {
    const testList = [];
    const test = (description, spec) => testFunction(description, spec, testList);

    return {
        test,
        async report() {
            for (const t of testList) {
                for await (const a of t) {
                    reporter(a);
                }
            }
        }
    };
};

const defaultTestHarness = createHarness();

// automatically start to report on the next tick of the event loop
process.nextTick(() => defaultTestHarness.report());

module.exports = defaultTestHarness;
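process.nextTick is used so that the report starts only after the whole test program has been executed (and all top-level tests registered), yet before any timer fires. A sketch of that ordering:

```javascript
const order = [];

setTimeout(() => order.push('timer'), 0);      // macro task: runs last
process.nextTick(() => order.push('nextTick')); // runs right after the current script
order.push('script');                           // the current script always completes first

setTimeout(() => console.log(order), 10); // [ 'script', 'nextTick', 'timer' ]
```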

The (dummy) reporter being:

// reporter.js
module.exports = testResult => {
    const isFailed = testResult.pass === false;
    console.log(`${!isFailed ? 'ok' : 'not ok'} - ${testResult.description}`);
    if (testResult.error) {
        if (testResult.error.operator) {
            console.log(`operator: ${testResult.error.operator}`);
        }
        if (testResult.error.expected) {
            console.log(`expected: \n ${JSON.stringify(testResult.error.expected, null, 4)}`);
        }
        if (testResult.error.actual) {
            console.log(`actual: \n ${JSON.stringify(testResult.error.actual, null, 4)}`);
        }
    }
};

That's it! You have a whole testing library in less than 100 lines of source code, which can use whichever assertion library you want as long as it throws errors (the assert module from Nodejs' core is a good candidate!).

  • It will report failures: "where?", "what?" and "why?"
const assert = require('assert').strict;
const {test} = require('./run.js');

test(`some test`, () => {
    assert.deepEqual([1, 2, 3], [1, 2, 4], `array should be equivalent`);
});

will output:

(screen shot of the reporter output)

  • It will run every test concurrently and will likely be faster than all the other megabyte-sized test runners
test(`some async test that shows concurrency`, async t => {

    let foo = 'bar';

    t.test(`nested async`, async t => {
        await wait(100);
        assert.equal(foo, 'baz', 'see the changed value although this test started before');
        foo = 'whatever';
    });

    t.test(`change foo faster`, t => {
        assert.equal(foo, 'bar');
        foo = 'baz';
    });
});

  • Yet it will allow you to control the concurrency of your tests with regular javascript control flow
test(`some serial test`, async t => {
    let foo = 'bar';

    // we specifically wait for that test to complete with the "await" keyword ...
    await t.test('nested inside', async t => {
        await wait(100);
        assert.equal(foo, 'bar', 'see the initial value of foo');
        foo = 'whatever';
    });

    // ... before starting this one
    t.test('run only once "nested inside" has finished', () => {
        assert.equal(foo, 'whatever', 'see the changed value');
    });
});

If you wish to play with this basic test runner, you can fork the following gist and run the test program with node: node test_program.js


We have reviewed Nodejs' architecture and seen how it allows high concurrency without necessarily involving parallelism. We have placed it in the context of testing software and seen how we could give a high quality user experience to the developer and greatly improve their productivity.

We can also discuss whether parallelism has any added value in the context of the Nodejs testing experience. We have already seen that it may not be the case regarding performance. Of course you could find some use cases where parallelism could bring you better performance. Or you could argue that the test function in the benchmark is not "blocking enough" to be realistic (you would be right!) but, as we said earlier, if you need parallelism in your tests because the code you are testing is slow, you are probably doing it wrong.

In practice I have personally been using zora (or pta) for a wide range of use cases and never had any performance issue:

  • In ship-hold, we run a whole range of integration tests against a database server below a second.
  • In mapboxgl-webcomponent, we run browser automation (screen shot capture, etc) within a few seconds (this might actually be considered slow).
  • In smart-table, we run many unit tests in a second.
  • pta is tested by itself and the test suite contains child processes to run pta's CLI as a binary, all this in less than 2 seconds.

On the other hand, child processes have other interesting properties from a testing perspective, namely isolation. They allow you to run a given set of tests in an isolated, sandboxed environment.
However, they also leave you with a few new issues to address (stream synchronisation, exit codes, etc), making the code base inevitably grow. I would not say AvA (14.8MB) is minimal, neither is Jest (32MB). Of course they offer way more "features" than our few-bytes test runner. But are "runs previously failed tests first" or "re-organizes runs based on how long test files take" really required when a whole test suite runs within a couple of seconds?

The title refers to our ability, as developers, to sometimes over engineer solutions where simplicity is just what we need.


icanrelate

@lorenzofox3 Hey Renard! great article! Thank you for sharing. 🙏🏻 I've been a zora user for a while now. I'm thankful you share how the internals work, saves me some time. :)

N/4 * WAIT_TIME ~= 25 x 0.1 ~= 2.5s ) but there might be extra cost managing the four child processes (or workers ?) because it is 10 times slower than the expected result.

I think you are missing * M, So the equation becomes N/4 * M * WAIT_TIME ~= 25 x 10 x 0.1 ~= 25s

RENARD Laurent (Author)

I think you are missing * M, So the equation becomes N/4 * M * WAIT_TIME ~= 25 x 10 x 0.1 ~= 25s

I don't think so, from AvA's documentation you can see that within a file all the tests run concurrently. So you can expect the whole test file to execute in a time close to the slowest of the tests in the file. Hence the math. More details

icanrelate

Okay, so if you have 100 test files, that you divided by 4 because of 4 concurrencies, how can it run the 10 test files inside those 100 test files if it's already running at a max concurrency of 4?

Let me put it this way: if you have 4 files with 1 test each, and you have a concurrency of 4, it would make sense that those 4 files get finished at the slowest test, which is 100ms. So it'll finish at 100ms. By your math, if I increase the tests by 10, so 4 files with 10 tests each, it'll still finish at 100ms?

It doesn't make sense. If you remove the number of tests per file (M) from the equation, then it doesn't matter if I have an insane number of tests, say 1 million tests inside a file, it'll still run at 100ms because they run concurrently?

RENARD Laurent (Author)

I think you are confusing concurrency and parallelism, you can have a look at this stackoverflow thread if the explanations in my article are not enough.

The event loop and concurrency

Concurrency is the ability to run multiple processing tasks in an overlapping time. It does not mean the tasks have to run in parallel, exactly at the same time: It can be achieved by starting a task, pausing it, allocating processing power to another task, come back to the first task and resume it, etc

The beauty of Nodejs comes from its event loop architecture, which enables a single process (with a single thread) to achieve great concurrency by avoiding idle time.

Consider the following code, a typical server handler:

function handler(req, res){
  getUserFromDatabase(req.query.id, user => res.send(user));
}

the handler function will actually not even take 1ms to run to completion, yet the response will be sent after a few ms (the time for the db to process the query). All the handler does is:
1 - send a request to the database
2 - schedule a new processing task: calling the callback once the database has answered.

Roughly, in a typical blocking architecture the equivalent of getUserFromDatabase would block the execution of the handler and wait for the database result before progressing, "freezing" the current thread/process. It means this thread would not be able to do anything else or accept more requests while being idle. If you want to achieve higher concurrency, you need to spawn different threads at the same time: that is parallelism. It usually involves a CPU architecture with multiple cores able to manage multiple threads at the same time efficiently.

In the non blocking model of Node.js that is not the case. The handler's execution is suspended and will be resumed when the database returns something, but meanwhile the Nodejs process is able to accept more requests because getUserFromDatabase is not blocking. It means you can have even higher concurrency with a single thread, without involving parallelism at all.

This works fine for most Node programs written in a non blocking way. But sometimes your program is blocking and resource intensive. Tools such as Babel need to parse a whole file, create an abstract syntax tree, modify it and emit generated code. This is resource intensive, and the use of parallelism (through the child_process module) is useful to process multiple files at the same time, for example.

AvA and Jest see some benefit in running multiple test files at the same time in different child processes (4 for AvA on my machine). However, managing child processes can itself be costly and introduce some penalty.

On the other hand, zora considers that most javascript code is fast, or can (and should) at least be written in a non blocking way: for example, if the function you want to test is slow, you could use a worker and come back to a non blocking model. It does not wait for a test to finish before starting another one. This way, and thanks to Node's architecture, every single test runs concurrently regardless of the number of tests per file and the number of files you have. Of course at some point you can reach a limit, but it has never happened to me so far.


test(`test 1`, async t => {
    await wait(1000); // wait 1s
});

test(`test 2`, async t => {
    await wait(1000); // wait 1s
});

This program would take something close to 1s to run even though each test has to wait 1s. Both tests run concurrently even though the program runs within a single process.

Within a test file, AvA also runs the tests this way, concurrently (if I am not wrong): so the time to run a test file is close to its slowest test. However, AvA will run four test files at the same time, one per worker/child_process -> hence the math.
Whereas in zora, all the test files more or less run at the same time; the only extra cost is the time to require them.

If your code is somehow blocking and slow, you might find AvA or Jest faster. But in 90% of the cases, I would argue you are probably doing something wrong by having such slow, blocking code in a javascript program.