
Continuously running Flowtype checks with jest runner

Binyamin Galinsky · Originally published at Medium · 3 min read

A few weeks ago I became part of a new project. The first thing I did was to set up my regular workspace in preparation for coding. My usual setup is VSCode on one screen, iTerm running Jest on the other, and Chrome open in the background. Everything was perfect. I grabbed a cup of coffee, sat down at my desk, and realised that something was missing… This project was configured with Flowtype, and I wanted it to run continuously on my screen, showing me if I missed a type definition (I’m a forgetful programmer and I need constant reminders while adding new tech to my stack).

My problem was that Flow doesn’t have a watch mode.

What is Jest runner?

Jest is Facebook’s test platform. As Jest matured, more and more capabilities were added to it, and the core team realised that running tests is just a small part of what Jest can do. One of those capabilities is a watch mode that reruns on file changes.

Since version 21 of Jest, it is possible to replace the default test runner with your own, which opens up a lot of opportunities: running Mocha tests on the Jest platform, running ESLint, and, in our case, Flowtype checks.

(If you want to learn more about Jest as a platform, you should watch the following amazing talk)

According to the Jest documentation, the runner should return an object with a runTests function. When Jest runs our runner, it passes it some callback functions; one of those is onResult, which we should call with our test results, whether they fail or pass.
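To make the contract concrete, here is a minimal sketch of that shape: an object exposing a runTests function that reports each result through an onResult callback. The callback signatures are simplified assumptions for illustration, not Jest’s exact internal API.

```javascript
// Minimal sketch of a Jest-style runner: an object with a runTests function
// that reports every result through the onResult callback (simplified shapes).
const flowRunner = {
  runTests(tests, onResult) {
    tests.forEach((test) => {
      // A real runner would check the file with Flow here; this stub
      // marks everything as passing.
      const result = { status: 'passed', path: test.path };
      onResult(test, result);
    });
    // runTests resolves once all results have been reported.
    return Promise.resolve();
  },
};

// Exercise the sketch with stub inputs:
const reported = [];
flowRunner.runTests(
  [{ path: 'a.js' }, { path: 'b.js' }],
  (test, result) => reported.push(`${test.path}:${result.status}`),
);
console.log(reported); // [ 'a.js:passed', 'b.js:passed' ]
```

The real implementation below follows the same outline, with the stub replaced by an actual Flow check.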

Here is the implementation of the runTests function, explained in detail:

We will start by storing the start time and immediately executing the Flow check with child_process (note that runTests should return a promise to notify Jest that the testing is done, so everything is encapsulated inside a promise).

const start = +new Date();
exec('flow', { stdio: 'ignore', cwd: process.cwd() }, (err, stdout) => {

Next, we will use the output that we receive as the stdout argument of the exec callback to generate an object that holds all of the errors. The object uses file paths as property names and the errors for each path as values, for example:

index.js: ['error: String should be int']

const errors = stdout.split('Error');
const errorsPerFile = errors.reduce((previous, current) => {
  const firstErrorLine = current.split('\n')[0];
  const fileNameMatcher = firstErrorLine.match(/(\.{1,2}|\/)?([A-z]|\/|-)*\.js(x?)/);
  if (fileNameMatcher) {
    const fileName = path.join(process.cwd(), fileNameMatcher[0]);
    const errorMessage = current.substring(current.indexOf('\n') + 1);
    if (!previous[fileName]) {
      previous[fileName] = [];
    }
    previous[fileName].push(errorMessage);
  }
  return previous;
}, {});

Now that we have the errors-per-file object, we just need to iterate over all of our tests and see whether each has an entry in that object. If it does, there is an error in that file; otherwise we can mark the file as passing. We will use the pass and fail functions from create-jest-runner to mark our tests by their status. At the end of the method, we resolve the promise we returned.

tests.forEach((t) => {
  let testResults;
  if (errorsPerFile[t.path]) {
    testResults = fail({
      start,
      end: +new Date(),
      test: { path: t.path, errorMessage: errorsPerFile[t.path] },
    });
  } else {
    testResults = pass({ start, end: +new Date(), test: { path: t.path } });
  }
  onResult(t, testResults);
});
resolve();
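To wire everything up, Jest needs to be told to use the custom runner and to treat every source file as a “test”. A hypothetical config sketch follows; the file paths and glob pattern are assumptions, though `runner`, `testMatch`, and `moduleFileExtensions` are real Jest options:

```javascript
// jest.flow.config.js — hypothetical config pointing Jest at the custom runner.
module.exports = {
  runner: './flow-runner.js',           // module exporting the runTests object
  testMatch: ['<rootDir>/src/**/*.js'], // every source file becomes a "test"
  moduleFileExtensions: ['js', 'jsx'],
};
```

With that in place, `jest --config jest.flow.config.js --watch` gives us exactly what was missing: Flow checks that rerun on every file change.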

That’s it! I would love to hear your thoughts about this post and what else you think can be done with Jest runners. Please leave a comment or DM me on Twitter. The full code can be found in the repo.
