Tools are everywhere in a developer's workflow. NPM, the JavaScript package manager, is full of productivity tools which aim to improve software quality and developers' efficiency. However, it is not the only source, as we will see later in this article.
Some tools may directly improve user-facing products, such as a code minifier (terser), which helps reduce the amount of code shipped within a production web application. But most of the time a tool rather helps the developers themselves, by improving their own experience and by making the code easier to maintain, debug and ship. In that sense you can say it also helps the end user, indirectly. A bundler (rollup) or packager (webpack), for instance, will allow developers to split a code base into smaller chunks that are easier to reason about.
The scope of a tool may vary as well. Some are very focused on a particular problem (mkdirp) while others try to build a whole user experience around a wide range of problems (create-react-app).
In the second case, we may not realize it, but the tool really becomes the expression of opinions or processes on how to solve a set of problems. Such a tool therefore usually has to make trade-offs, and may harm the user experience depending on the requirements.
In this article, instead of using an existing testing framework full of features, we are going to tailor our own testing experience based on actual problems and requirements as they arise during the development cycle of a piece of software.
Tailoring a test experience
I have chosen the testing experience as a leitmotif because it is quite a challenge: it may involve many different topics (code transformation, reporting, different runtime environments, performance, etc) and may vary a lot between two different use cases. It is probably the reason there are already so many testing frameworks in the JavaScript ecosystem.
Monolithic design vs The UNIX philosophy
Popular JavaScript testing frameworks usually come with a lot of features. As stated earlier, these features are somehow opinions on what problems you may encounter and how to fix them, so you don't have to think about them and can focus on your tests. They usually provide configuration settings and programmatic extension points, so you can tweak your testing experience based on your needs and bring some flexibility to the workflow.
On the other hand, they may not be flexible enough, or may introduce extra complexity, if your needs fall a little bit outside the frame.
For example, AvA automatically transpiles ESM syntax in your test files. It may be very useful if you write your tests a certain way (you don't have to configure anything to get the transpilation done!), but it may be difficult to bypass, or confusing to set up, if you write your tests another way. That is an example of how an opinion may go against flexibility.
Another approach is the UNIX philosophy, which
favors composability as opposed to monolithic design
The idea is to compose small focused programs together to achieve a greater goal.
Compared to our AvA example, you can build a testing experience with three components, like so:
transpiler -> test runner -> reporter
And if you don't need the transpiler, you can just remove it from the pipeline.
This is very flexible as long as every component is designed to use a common interface (text streams).
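Concretely, on a UNIX shell, such a pipeline could look like the following sketch (transpile and tap-reporter are placeholders for whichever transpiler and TAP reporter you would pick, not actual binaries):

# each program reads text on stdin and writes text on stdout
transpile < test.src.js > test.js
node test.js | tap-reporter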
A simple Node program
The boss comes to you and asks:
I want you to write a functional Node Math library.
You agree, as a first stretch, to implement an add function which performs the sum of two numbers and supports partial application. You come up with the following implementation (the implementation itself is a detail here).
//src/index.js
module.exports = (a, b) => {
    if (b === void 0) {
        return x => a + x;
    }
    return a + b;
};
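Once this module is in place, both call styles work; here is a quick check you could run in a Node REPL from the project root:

const add = require('./src/index.js');
add(2, 4); // 6
add(2)(4); // 6, thanks to partial application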
Then you write the following testing program.
//test/index.js
const add = require('../src/index.js');
const {test} = require('zora');

test(`add(a,b) - should sum the two arguments provided`, t => {
    t.eq(add(2, 4), 6, `2 + 4 = 6`);
});

test(`add(a) - should support partial application`, t => {
    const add_two = add(2);
    t.eq(add_two(3), 6, `2 + 4 = 6`); // let's make that one fail
    t.eq(add_two(11), 13, `2 + 11 = 13`);
});
You will have noticed that for the testing program we are using zora. Following the UNIX philosophy, it is a small library I wrote which is dedicated to one thing: writing JavaScript testing programs. It does not run them, does not transform source code, does not print colours in the console, etc. Of course it comes with its own opinions, but it will be particularly useful for this short essay precisely because it is focused on a single problem, compared to other full-featured frameworks.
You can run the test program with Node using the following command:
node ./test/index.js
You will see the following output in the console
TAP version 13
# add(a,b) - should sum the two arguments provided
ok 1 - 2 + 4 = 6
# add(a) - should support partial application
not ok 2 - 2 + 4 = 6
  ---
  actual: 5
  expected: 6
  operator: "equal"
  at: " Object.<anonymous> (/Volumes/data/article-playground/test/index.js:8:1)"
  ...
ok 3 - 2 + 11 = 13
1..3
# not ok
# success: 2
# skipped: 0
# failure: 1
The output is in a text format called TAP (Test Anything Protocol). It gives you a status for each test of your program and, in case of a failure, the location of the failure and the reason it failed, so you can fix your test or source code. After all, that is all you can expect from a testing program.
Composing with a pipeline
Arguably, the output is not very human-friendly (no colour, the passing tests may be considered noise, etc). Most testing frameworks come with a set of reporters you can choose from depending on your preferences. In the UNIX philosophy, you would ask another program to process this output stream. TAP is a widely spread text protocol, and not only in the JavaScript community, so you should find plenty of tools able to parse and process a TAP stream.
For example, you can install tap-summary from the NPM registry and then type the command:
node ./test/index.js | tap-summary
You will get the test results summarized in a friendlier format.
If you need something different, no problem: just search for TAP reporters on NPM, or install a binary coming from a different technology. That's the beauty of delegating the reporting task to a separate process.
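For instance, tap-spec and tap-nyan are two other TAP reporters published on NPM (assuming you have installed them, they slot into the very same pipeline):

node ./test/index.js | tap-spec
node ./test/index.js | tap-nyan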
Exit codes
Zora itself is platform-agnostic: it is not in charge of running your testing programs. You should be able to do it with any JavaScript runtime which supports the ECMAScript specification (2018 edition): Node >= 10, modern browsers, etc. However, in a Node environment, one can expect the process executing the testing program to exit with a code different from 0 in case of a test failure. That is actually a requirement of many continuous integration platforms to mark a build as failed and avoid false positives.
However, if you print the exit code of your testing program, you will get 0.
node ./test/index.js; echo $?;
# > 0
Thankfully, by delegating the reporting part to a different, more "platform-aware" process, we can remedy this potential problem, as the exit code of a pipeline is the one returned by the last process in the pipe:
node ./test/index.js | tap-summary; echo $?;
# > 1
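One natural way to wire this into a project (a sketch assuming the usual package.json conventions) is to make the whole pipeline your test script, so that npm test fails whenever a test fails:

{
  "scripts": {
    "test": "node ./test/index.js | tap-summary"
  }
}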
A more advanced program
The following week, you agree to deliver the multiplication operator. As your code base grows, your team decides to split the code into different files to better organize it and ease maintenance. You now have the following implementation.
// src/util.js
exports.curry = fn => (a, b) => b === void 0 ? x => fn(a, x) : fn(a, b);

// src/addition.js
const {curry} = require('./util');
module.exports = curry((a, b) => a + b);

// src/multiplication.js
const {curry} = require('./util');
module.exports = curry((a, b) => a * b);

// src/index.js (the entry point of the library)
exports.add = require('./addition');
exports.multiply = require('./multiplication');
And the testing part of the project will also reflect the new organization.
// ./test/addition.spec.js
const {add} = require('../src/index.js');
const {test} = require('zora');

test(`add(a,b) - should sum the two arguments provided`, t => {
    t.eq(add(2, 4), 6, `2 + 4 = 6`);
});

test(`add(a) - should support partial application`, t => {
    const add_two = add(2);
    t.eq(add_two(3), 6, `2 + 4 = 6`); // let's make that one fail
    t.eq(add_two(11), 13, `2 + 11 = 13`);
});
and
// test/multiplication.spec.js
const {multiply} = require('../src/index.js');
const {test} = require('zora');

test(`multiply(a,b) - should multiply the two arguments provided`, t => {
    t.eq(multiply(3, 4), 12, `3 * 4 = 12`);
});

test(`multiply(a) - should support partial application`, t => {
    const time_three = multiply(3);
    t.eq(time_three(4), 12, `3 * 4 = 12`);
    t.eq(time_three(10), 30, `3 * 10 = 30`);
});
Neat! A new problem arises though. If we keep using Node as the runner, we now need to run several testing programs (one for every *.spec.js file). A naive approach would be to simply run every file:
node ./test/multiplication.spec.js && node ./test/addition.spec.js
However, this solution is not very efficient, and we probably want to consider all our tests as a whole.
The simple solution
We can create an entry point for our testing program exactly the same way we already do for our library:
// ./test/index.js
require('./addition.spec.js');
require('./multiplication.spec.js');
And that's it, we can now run all the tests with a single command and still pipe the output to another process.
node ./test/index.js | tap-summary
Another good point is that many tools which perform code transformation require a single entry point. So if we need an extra build step for our testing program, we are all good.
We can also decide to run a single test file, which usually gathers functionally similar tests together. In the same way, we can comment out some files very easily.
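For example, to focus temporarily on the addition tests, a single line change in the entry point is enough:

// ./test/index.js
require('./addition.spec.js');
// require('./multiplication.spec.js'); // temporarily disabled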
The small downside, however, is that we have to maintain this entry point: for example, we must not forget to add the require statement after adding a new test file.
The funny solution
The previous solution showed us that all we need is a program which dynamically requires files. Interestingly, tape, another popular testing library (which has a lot in common with zora - zora was actually inspired by tape), comes with a command line interface that does basically what we need. So if we install tape, we can use its test runner like so:
tape ./test/*.spec.js
Both libraries are very small according to Package Phobia (tape metrics and zora metrics), yet it probably does not make sense to have both installed.
The scripting solution
Interpreted languages with access to system APIs, such as JavaScript or Python, are very powerful automation tools: they offer a lot of ready-made packages (thanks to NPM in our case). Moreover, once you are used to the core modules (fs, path, etc), you can quickly build custom tools and command line interfaces.
However, the operating system itself (at least on UNIX systems) comes with a rich set of scripting capabilities through the shell, the default Bash interpreter and its builtins. I am currently learning Bash more thoroughly, as it offers more possibilities for short scripts in the long term. Node is not necessarily available everywhere, whereas you can use Bash in CI scripts, on remote servers, with Docker images; and you already use your terminal anyway, at least to run simple commands.
So in this section, to showcase a few of the possibilities Bash offers, we are going to write a Bash script that generates the file which dynamically requires the spec files.
Consider the following file (./scripts/tester.sh):
#!/usr/bin/env bash

# fail on first failing pipeline
set -e;

# set the debug file path based on an environment variable (or use the default)
debug_file=${DEBUG_FILE:-$PWD/test-debug.js}

# clean the existing debug file if any
rm -f "$debug_file";

# use the passed arguments as spec files, or default to the glob ./test/*.spec.js
spec_files=${@:-$PWD/test/*.spec.js};

# generate the debug file depending on the input parameters
for f in $spec_files;
do echo "require('$f');" >> "$debug_file";
done

# run the debug file with Node
node "$debug_file";
You can make it executable with the command
chmod +x ./scripts/tester.sh
and run it
./scripts/tester.sh
There are different ways to make the above script more user-friendly in your daily workflow, and more portable. For example, you can create an alias for the current session
alias t="./scripts/tester.sh"
So now you can run your tests by simply typing t in your terminal.
The script itself is more or less self-explanatory: it creates a fresh debug file (test-debug.js) which requires the spec files passed as arguments. If no argument is provided, it requires all the files matching the pattern ./test/*.spec.js. Finally, it runs the debug file with Node.
You can overwrite the debug file name through an environment variable, and you can require a subset of the spec files by passing a list of arguments to the script.
export DEBUG_FILE="test.js";
t ./test/{addition,multiplication}.spec.js
If you want a minimalist reporter which only prints the failing tests with their diagnostics to the console, you can pipe the output into a grep command:
t | grep '^not ok\|^\s'
will output
not ok 2 - 2 + 4 = 6
  ---
  actual: 5
  expected: 6
  operator: "equal"
  at: " Object.<anonymous> (/Volumes/data/article-playground/test/addition.spec.js:8:1)"
  ...
The smart solution
It is less known, but when you call the Node executable you can pass it some options. One particularly handy for us is the require option, which allows you to load modules before the actual script runs. And since the shell expands glob patterns before Node sees them, you can pass it a whole set of files! So if you type the following command:
echo "process.exit(0);" | node -r ./test/*.spec.js
It is a little bit as if you ran the following Node program:
require('./test/addition.spec.js');
require('./test/multiplication.spec.js');
// and other *.spec.js files if any
process.exit(0);
It will basically run all the spec files and exit the process with status code 0 if the program managed to run to completion. You can of course change the pattern if you want to run a subset of the test files.
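For example, to run only the addition tests, you can replace the glob with an explicit file:

echo "process.exit(0);" | node -r ./test/addition.spec.js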
And if you want a different exit code in case of a test failure, again, just pipe the output to a specialized reporting process.
echo "process.exit(0);" | node -r ./test/*.spec.js | tap-summary
The icing on the cake: code coverage
It is sometimes useful to know which part of your source code is tested and, more importantly, which part is not. There are various libraries in the JavaScript world able to do so. Some require code instrumentation: a process which transforms your code to add "counters" around every line in order to know how many times each line is traversed. nyc (and Istanbul) are the most famous. As these libraries require an initial build step, they may add a layer of complexity to the process.
Lately, V8 (Chrome's JavaScript engine, which ships within Node) has been bundled with code coverage capabilities. Thanks to the c8 module, you can somehow rely on this "native" feature of the engine to measure your code coverage.
echo "process.exit(0);" | c8 node -r ./test/*.spec.js | tap-summary
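And if you want coverage to gate your build as well, c8 mirrors nyc's command line flags, so something along these lines should work (a sketch, assuming the check-coverage options behave as in nyc; the 90% threshold is arbitrary):

echo "process.exit(0);" | c8 --check-coverage --lines 90 node -r ./test/*.spec.js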
Conclusion
With a simple command line, by composing different small and focused pieces of software together, we have managed to build our own flexible testing experience.
It includes everything we need and nothing more: an assertion library (zora), a free and flexible test runner (Node), code coverage (c8), custom reporting and exit code handling (tap-summary), all while keeping our dependency tree down to exactly what we expect.
Moreover, if at any time we want to change a component or simply remove it, it is straightforward and does not rely on any complex configuration file. In the same way, you can add other components when the need arises (babel, typescript, etc).
In the next episode we are going to see how it goes in the browser...