Discussion on: There is beauty in simplicity

lorenzofox3 (RENARD Laurent)

I think you are missing * M, so the equation becomes N/4 * M * WAIT_TIME ~= 25 x 10 x 0.1 ~= 25s

I don't think so. From AvA's documentation you can see that within a file all the tests run concurrently, so you can expect the whole test file to execute in a time close to that of the slowest test in the file. Hence the math. More details

icanrelate (chriz)

Okay, so if you have 100 test files, which you divide by 4 because of a concurrency of 4, how can it run the 10 tests inside each of those 100 test files if it's already running at a max concurrency of 4?

Let me put it this way: if you have 4 files with 1 test each, and you have a concurrency of 4, it would make sense that those 4 files finish at the time of the slowest test, which is 100ms. So it'll finish at 100ms. By your math, if I multiply the tests by 10, so 4 files with 10 tests each, it'll still finish in 100ms?

It doesn't make sense. If you remove the number of tests per file (M) from the equation, then it doesn't matter if I have an insane number of tests, say 1 million tests inside a file; it'll still run in 100ms because they run concurrently?

lorenzofox3 (RENARD Laurent)

I think you are confusing concurrency and parallelism. You can have a look at this Stack Overflow thread if the explanations in my article are not enough.

The event loop and concurrency

Concurrency is the ability to run multiple processing tasks in overlapping time. It does not mean the tasks have to run in parallel, at exactly the same time: it can be achieved by starting a task, pausing it, allocating processing power to another task, then coming back to the first task and resuming it, and so on.

The beauty of Node.js lies in its event loop architecture, which enables a single process (with a single thread) to achieve great concurrency by avoiding idle time.

Consider the following code, a typical server handler:

function handler(req, res){
  // non-blocking: send the query to the database and register a callback,
  // then return immediately; the response is sent when the callback runs
  getUserFromDatabase(req.query.id, user => res.send(user))
}

The handler function will actually not even take 1ms to run to completion, yet the response will be sent after a few ms (the time for the database to process). All the handler does is:
1 - send a request to the database
2 - schedule a new processing task: calling the callback

Roughly, in a typical blocking architecture, the equivalent of getUserFromDatabase would block the execution of the handler and wait for the database result before progressing, "freezing" the current thread/process. It means this thread would not be able to do anything else, such as accept more requests, while it sits idle. If you want to achieve higher concurrency, you need to spawn more threads at the same time: that is parallelism. It usually involves a CPU architecture with multiple cores, which can manage multiple threads at the same time efficiently.
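For contrast, here is a minimal sketch of what the blocking equivalent could look like (getUserFromDatabaseSync is a made-up name, purely for illustration):

function blockingHandler(req, res){
  // hypothetical synchronous call: the thread is stuck here until the
  // database answers and cannot accept any other request in the meantime
  const user = getUserFromDatabaseSync(req.query.id)
  res.send(user)
}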

In the non-blocking model of Node.js that is not the case. The execution of the handler is suspended and resumed when the database returns something, but in the meantime the Node.js process is able to accept more requests, because getUserFromDatabase does not block. It means you can achieve an even higher concurrency with a single thread, without involving parallelism at all.

This works fine for most Node programs written in a non-blocking way. Sometimes, though, your program is blocking and resource intensive. Tools such as Babel need to parse a whole file, create an abstract syntax tree, modify it and emit generated code. This is resource intensive, and parallelism (through the child_process module) is useful to process multiple files at the same time, for example.
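A rough sketch of that idea, assuming Node's child_process.fork API (transform-file.js and the file list are invented for the example):

const { fork } = require('child_process');

// each file is handed to its own child process; transform-file.js would do
// the CPU-heavy parsing/transform and exit when it is done (hypothetical script)
const files = ['a.js', 'b.js', 'c.js', 'd.js'];

const jobs = files.map(file => new Promise((resolve, reject) => {
  const child = fork('./transform-file.js', [file]);
  child.on('exit', code => (code === 0 ? resolve(file) : reject(new Error(`failed: ${file}`))));
}));

Promise.all(jobs).then(done => console.log(`processed ${done.length} files`));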

AvA and Jest see a benefit in running multiple test files at the same time in different child processes (4 for AvA on my machine). However, managing child processes can itself be costly and introduce some overhead.

On the other hand, zora considers that most JavaScript code is fast, or can (and should) at least be written in a non-blocking way: for example, if the function you want to test is slow, you could run it in a worker and come back to a non-blocking model (see the sketch below). Zora does not wait for a test to finish before starting another one. This way, thanks to Node's architecture, every single test runs concurrently regardless of the number of tests per file and the number of files. Of course, at some point you can reach a limit, but it has never happened to me so far.
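Here is a minimal sketch of that worker idea, assuming Node's worker_threads module (slow-computation.js is a made-up worker script):

const { Worker } = require('worker_threads');

// wrap a CPU-heavy computation in a worker so the test itself stays non-blocking
// (slow-computation.js would do the heavy work and post its result back)
const runSlowComputation = input => new Promise((resolve, reject) => {
  const worker = new Worker('./slow-computation.js', { workerData: input });
  worker.on('message', resolve);
  worker.on('error', reject);
});

test(`slow function stays non blocking`, async t => {
  const result = await runSlowComputation(42);
  t.ok(result !== undefined);
});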

Example of concurrent tests:

// assuming zora's test function and a promise-based delay helper:
import { test } from 'zora';
const wait = ms => new Promise(resolve => setTimeout(resolve, ms));

test(`test 1`, async t => {
    await wait(1000); // wait 1s
    t.ok(true);
});

test(`test 2`, async t => {
    await wait(1000); // wait 1s
    t.ok(true);
});

This program would take something close to 1s to run even though each test has to wait 1s. Both tests run concurrently even though the program runs within a single process.

Within a test file, AvA also runs the tests this way, concurrently (if I am not wrong), so the time to run a test file is close to that of its slowest test. However, AvA will only run four test files at the same time, one per worker/child_process: hence the math.
In zora, on the other hand, all the test files run more or less at the same time; the only extra cost is the time to require them.
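To put numbers on it, with the figures from the comment above (100 test files, 10 tests per file, slowest test around 100ms, a concurrency of 4): AvA's total is roughly N/4 * WAIT_TIME = 25 x 0.1s = 2.5s, independent of M, because the 10 tests inside a file overlap, while zora's total stays close to 0.1s plus the time to require the 100 files.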

If your code is somehow blocking and slow, you might find AvA or Jest faster. But in 90% of cases, I would argue you are probably doing something wrong if you have that kind of slow, blocking code in a JavaScript program.