How we wrote our CLI integration tests

Florian Rappl

Cover image from Unsplash by Glenn Carstens-Peters

One of the most important parts of software is ensuring that it works - not only on your machine, but also on the target machines.

The more variables there are, the more complex it becomes to create reliable software. What seems easy at first quickly turns into a mess of checking edge cases and identifying scenarios.

For the command line tooling of our micro frontend framework Piral we needed to be sure that it runs properly. This includes:

  • testing against different operating systems (Windows, Linux, Mac)
  • testing against different versions of Node.js (starting with 12)
  • testing against different bundlers (most importantly Webpack, but also Parcel, esbuild, vite, ...)

All in all, not an easy task. While we have quite high unit test coverage (90%+), experience has taught us that nothing can replace integration tests. They are the only way to identify issues with the underlying operating systems or runtimes.

Testing

Let's see what we did to run our tests.

The basic setup

Our tests run on the command line using a toolset consisting of:

  • Jest (test runner)
  • Playwright (to check if debugging / build artifacts work properly) together with expect-playwright for simplified assertions
  • TypeScript (to make sure the test code base itself does not contain simple mistakes)
  • Azure Pipelines (running the tests in different environments)

The code for our CLI integration tests is on GitHub.

The setup of Jest (done via the jest.config.js file) can be broken down to the following:

const { resolve } = require('path');

const outDirName = process.env.OUTDIR || 'dist';
const outputDirectory = resolve(process.cwd(), outDirName);

process.env.OUTPUT_DIR = outputDirectory;

module.exports = {
  collectCoverage: false,
  globals: {
    NODE_ENV: 'test',
    'ts-jest': {
      diagnostics: false,
    },
  },
  testEnvironmentOptions: {
    'jest-playwright': {
      browsers: ['chromium'],
      exitOnPageError: false,
      collectCoverage: false,
      launchOptions: {
        headless: true,
      },
    },
  },
  setupFilesAfterEnv: ['expect-playwright'],
  testTimeout: 2 * 60 * 1000,
  preset: 'jest-playwright-preset',
  reporters: [
    'default',
    [
      'jest-junit',
      {
        outputDirectory,
      },
    ],
  ],
  transformIgnorePatterns: [
    '<rootDir>/node_modules/',
    'node_modules/@babel',
    'node_modules/@jest',
    'signal-exit',
    'is-typedarray',
  ],
  testPathIgnorePatterns: ['<rootDir>/node_modules/'],
  modulePathIgnorePatterns: ['<rootDir>/node_modules/'],
  roots: ['<rootDir>/src/'],
  testRegex: '(/__tests__/.*|\\.test)\\.ts$',
  testURL: 'http://localhost',
  transform: {
    '^.+\\.ts$': 'ts-jest',
    '^.+\\.js$': 'babel-jest',
  },
  moduleFileExtensions: ['ts', 'js', 'json'],
  moduleNameMapper: {},
  verbose: true,
};

While some parts, e.g., the integration of ts-jest for TypeScript support, are rather straightforward, other parts are not. In particular, the transformIgnorePatterns and testEnvironmentOptions require some explanation.

The transformIgnorePatterns (along with testPathIgnorePatterns and modulePathIgnorePatterns) are necessary to support the use case of providing the tests via an npm package (i.e., as a library). This use case makes the tests available to other bundler plugins, which are not already covered by running the tests within the repository. We'll go into details later.

The testEnvironmentOptions enable the use of Playwright. Playwright is a browser automation tool that helps us control a browser, e.g., to check if certain elements are actually rendered. This is necessary for some tests to verify that everything was done right.
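
As a quick illustration, an assertion with expect-playwright inside a test body looks roughly like this (the URL and selector are placeholders, just to show the shape):

// placeholder URL and selector; real values appear in the tests below
await page.goto('http://localhost:1234');
await expect(page).toHaveSelectorCount('.pi-tile', 1);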

Matrix testing

To run the tests in different environments we use a CI/CD feature called matrix strategy. This will run the same pipeline in different variations.

strategy:
  matrix:
    linux_node_12:
      imageName: "ubuntu-20.04"
      nodeVersion: 12.x
    linux_node_14:
      imageName: "ubuntu-20.04"
      nodeVersion: 14.x
    linux_node_16:
      imageName: "ubuntu-20.04"
      nodeVersion: 16.x
    linux_node_17:
      imageName: "ubuntu-20.04"
      nodeVersion: 17.x
    windows_node_14:
      imageName: "windows-2019"
      nodeVersion: 14.x
    macos_node_14:
      imageName: "macOS-11"
      nodeVersion: 14.x

Whenever we want to test a new environment, we just add it here. Everything else, e.g., which base image is selected to run the pipeline, is then determined by the variables from the matrix.
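
For reference, the matrix variables are consumed in the rest of the pipeline definition, e.g., to select the agent image (a sketch using the standard Azure Pipelines syntax):

pool:
  vmImage: $(imageName)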

The remaining steps in the CI/CD pipeline are then rather straightforward:

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: $(nodeVersion)
    displayName: "Install Node.js"

  - script: npm install --legacy-peer-deps
    displayName: "Setup Tests"

  - script: npm test
    continueOnError: true
    displayName: "Run Tests"
    env:
      CLI_VERSION: ${{ parameters.piralCliVersion }}

  - task: PublishTestResults@2
    inputs:
      testResultsFormat: "JUnit"
      testResultsFiles: "dist/junit*.xml"
      mergeTestResults: true

We first switch to the selected version of Node.js and then prepare for running the tests by installing all dependencies. Then - and this is the most important step - we actually run the tests. We pass in the version of the CLI we want to test. By default, this is set to the next tag of the piral-cli package on npm.

We could also run the tests for a different version. All we would need to do is pass a different value for this parameter when starting the pipeline.
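
For that to work, the pipeline declares a corresponding parameter; a sketch of what such a declaration could look like (the exact definition in our pipeline may differ):

parameters:
  - name: piralCliVersion
    type: string
    default: next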

Finally, we publish the test results. We use the package jest-junit to store the results in the JUnit format, which is compatible with the PublishTestResults@2 task of Azure Pipelines.

Code structure and utilities

The code contains three directories:

  • bin has a small wrapper that can be used to run the tests as an npm package
  • src contains all the tests
  • src/utils contains the utilities to efficiently write the tests

The utilities make it possible to conveniently write integration tests for our CLI tool. They can be categorized as follows:

  • context / jest enhancing
  • convenience for input / output handling
  • dealing with processes (starting, stopping, monitoring, ...)
  • running a server to emulate CLI to service interaction

While standard Jest unit tests look a bit like

import someFunction from './module';

describe('Testing module', () => {
  it('works', () => {
    // arrange
    const input = 'foo';
    // act
    const output = someFunction(input);
    // assert
    expect(output).toBe('bar');
  });
});

the tests in this repository look a bit different:

import { runTests } from './utils';

runTests('cli-command', ({ test, setup }) => {
  // "common" arrange
  setup(async (ctx) => {
    await ctx.run(`npm init -y`);
  });

  test('some-id', 'works', ['feature'], async (ctx) => {
    // act
    await ctx.run('npm test');

    // assert
    await ctx.assertFiles({
      'coverage/coverage-final.json': true,
    });
  });
});

First of all, there are no modules or functions to import for testing here. We only import utilities. The most important utility is the runTests wrapper, which gives us access to further (specialized) wrappers such as setup and test. The former is a generic arrange step: everything that runs in there produces content that can be used by (i.e., will be present for) each test.
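
From the usage above we can infer the rough shape of these wrappers; the following is a sketch only, the actual types live in src/utils and may differ:

// inferred from usage - not the actual implementation
interface RunningProcess {
  // resolves when the success marker is printed, rejects on the error marker
  waitUntil(success: string, error?: string): Promise<void>;
}

interface TestContext {
  // run a command to completion inside the temporary test directory
  run(cmd: string): Promise<string>;
  // start a long-running command, e.g., a debug server
  runAsync(cmd: string): RunningProcess;
  // assert the presence (or absence) of files relative to the test directory
  assertFiles(files: Record<string, boolean>): Promise<void>;
}

interface TestEnvironment {
  setup(cb: (ctx: TestContext) => Promise<void>): void;
  test(id: string, name: string, tags: Array<string>, cb: (ctx: TestContext) => Promise<void>): void;
}

declare function runTests(suite: string, cb: (env: TestEnvironment) => void): void;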

Since some commands may install packages or perform longer operations (in the range of 10 to 40 seconds), it is crucial not to run the common arrange steps again for every test. Instead, it is assumed that the setup produces outputs in the context directory, which can then simply be copied from the temporary arrange location to the temporary test location.

Efficiency

The ability to conveniently have a temporary directory underneath (to which everything else is relative) is the reason for having wrappers such as runTests, setup, or test.

The basic flow here is:

  1. For a test suite create a "container" directory in a predefined output directory (usually dist)
  2. Run the setup steps (once for all tests in a test suite) in a dedicated "template" directory inside the container directory
  3. Run the tests, each test creates its own temporary directory inside the container directory
  4. For each test first copy the contents of the template directory to it

That way, the outcome can be easily inspected and removed. Otherwise, finding the outcome - or cleaning it up - becomes a mess.

To make individual tests easier to find, the directory of each test is prefixed with the id that we give it (in the example above, some-id). It also contains a random string to make sure there are no collisions.
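
In code, steps 3 and 4 of this flow could be sketched as follows (assuming Node's fs.cpSync, available from v16.7 on; the actual utility may be implemented differently):

import { cpSync, mkdirSync } from 'fs';
import { randomBytes } from 'crypto';
import { resolve } from 'path';

// sketch: create a uniquely named test directory and seed it
// with the outcome of the (already executed) setup steps
function prepareTestDir(containerDir: string, testId: string): string {
  const suffix = randomBytes(4).toString('hex');
  const testDir = resolve(containerDir, `${testId}_${suffix}`);
  mkdirSync(testDir, { recursive: true });
  cpSync(resolve(containerDir, 'template'), testDir, { recursive: true });
  return testDir;
}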

Running the tests

Let's look at one of the more complicated tests:

import axios from 'axios';
import { cliVersion, runTests, selectedBundler, getFreePort } from './utils';

runTests('pilet-debug', ({ test, setup }) => {
  setup(async (ctx) => {
    await ctx.run(`npx --package piral-cli@${cliVersion} pilet new sample-piral@${cliVersion} --bundler none`);
    await ctx.run(`npm i ${selectedBundler} --save-dev`);
  });

  // ...

  test(
    'debug-standard-template-with-schema-v0',
    'can produce a debug build with schema v0',
    ['debug.pilet'],
    async (ctx) => {
      const port = await getFreePort(1256);
      const cp = ctx.runAsync(`npx pilet debug --port ${port} --schema v0`);

      await cp.waitUntil('Ready', 'The bundling process failed');

      await page.goto(`http://localhost:${port}`);

      const res = await axios.get(`http://localhost:${port}/$pilet-api`);
      const pilets = res.data;

      expect(pilets).toEqual({
        name: expect.anything(),
        version: expect.anything(),
        link: expect.anything(),
        spec: 'v0',
        hash: expect.anything(),
        noCache: expect.anything(),
      });

      await expect(page).toHaveSelectorCount('.pi-tile', 1);

      await expect(page).toMatchText('.pi-tile', 'Welcome to Piral!');
    },
  );
});

Here we scaffold a micro frontend (called a "pilet") using npx with the piral-cli package. Then we install the selected bundler to be able to verify the debug command.

To prevent potential conflicts on the used port, we use a utility that finds the next free port (starting at the default, 1256). Then we start the long-running command npx pilet debug. Unlike the simple run, runAsync runs concurrently by default. Still, we want to wait until the command prints "Ready" in the console. If we instead encounter something like "The bundling process failed" (or the application terminates), the test fails.
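
Both utilities can be sketched with Node's built-ins; again, this is an illustration, the real implementations in src/utils may differ in detail:

import { createServer } from 'net';
import { ChildProcess } from 'child_process';

// sketch: try to bind the preferred port; if it is taken, walk upwards
function getFreePort(preferred: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const server = createServer();
    server.once('error', (err: NodeJS.ErrnoException) => {
      if (err.code === 'EADDRINUSE') {
        resolve(getFreePort(preferred + 1));
      } else {
        reject(err);
      }
    });
    server.listen(preferred, () => server.close(() => resolve(preferred)));
  });
}

// sketch: resolve once the process prints the success marker,
// reject on the failure marker or premature termination
function waitUntil(cp: ChildProcess, success: string, failure: string): Promise<void> {
  return new Promise((resolve, reject) => {
    cp.stdout?.on('data', (chunk: Buffer) => {
      const text = chunk.toString();
      if (text.includes(success)) {
        resolve();
      } else if (text.includes(failure)) {
        reject(new Error(failure));
      }
    });
    cp.once('exit', () => reject(new Error('Process terminated prematurely')));
  });
}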

After the debug process is ready, we can finally use Playwright to go to the page and run some assertions. We check whether the debug server returns the expected API response.

Furthermore, we can run assertions on the website itself. We should find a tile on the dashboard coming from the micro frontend that we are currently debugging.

So how can we run it? From the command line we use npm start. If we want to run a specific test suite, e.g., for the pilet debug command, we can also run jest directly:

npx jest src/pilet-debug.test.ts

Theoretically, we could also run a specific test:

npx jest src/pilet-debug.test.ts -t 'can produce a debug build with schema v0'

This works in almost all test suites, except the ones using Playwright. In those test suites the page object remains undefined, as some "magic" performed by the Jest Playwright integration is not present in such a scenario.

Besides running (all) the tests from the test repository, the tests can also be installed and run locally:

npm i @smapiot/piral-cli-integration-tests
npx piral-cli-tests

Note that this will not run all tests, only the ones that require a bundler. This way, one can test a self-developed bundler plugin. In the future, this would also provide the whole CLI test infrastructure to quickly allow testing other piral-cli plugins, too.

Results

Right now the tests run on-demand, even though they could (for whatever reason) also be scheduled. Already while writing the tests we detected some edge cases and little improvements that helped us make the piral-cli even better.


So overall, besides having the assurance with new releases that we did not unintentionally break something, we already gained quite a bit of value from having integration tests in that area.


Right now, failing tests are essentially reported as "partially failing", as we continue the pipeline in order to actually publish the test results.

Top comments (3)

Dante De Ruwe

Very interesting approach and article. I'm not too familiar with integration test frameworks for CLI tools; so: are there existing ones too? If so: what made you decide on a custom approach? If not: would you consider creating an open-source framework around this approach? Curious to hear your thoughts!

Florian Rappl

Yeah, I don't know any (CLI integration test tools).

It's actually not so much custom as it is a very lightweight wrapper around Jest. So Jest is still doing the heavy lifting - all we implemented is a thin wrapper to make it more convenient (and the tests more readable).

Yeah, I thought about open-sourcing this, but since it's really just 4-5 small files (that are at the moment quite coupled to our specific needs, e.g., starting / controlling something like a Piral feed service) I concluded that having an article would be sufficient. If in the future a generalization of that approach (i.e., open-sourcing it) would be helpful then sure - I can definitely see this happening!

Dante De Ruwe

Sometimes a wrapper is all you need for a great DX I would believe :)
Either way like you said, this article also serves as a great inspiration.