Paul J. Lucas

Writing and running End-to-End Tests in Autotools

Introduction

A previous article in this series showed how to write and run unit tests in Autotools. While unit tests are useful for ensuring basic functionality, especially in edge cases, they’re insufficient on their own. Additional testing — end-to-end (E2E) testing — helps ensure the software as a whole does what it’s supposed to do. Autotools can help with E2E testing too.

Autotools also includes Autotest, which can be used to write test suites. However, according to its own documentation, it’s unstable. But even if it were stable, it seems overly complicated to me.

Fortunately, Automake provides an escape hatch that allows you to use your own custom test driver. That’s what this article covers.

One assumption I’m making is that E2E testing here involves, for each test, running your program with specific options and input, collecting the program’s output, and comparing it against the program’s expected output: if they match, the test passes; otherwise it fails.

Directory Layout

For a project, in addition to the lib (Gnulib source), m4 (Autoconf macros), and src (your project’s source), create a test subdirectory with the following files and subdirectories:

my_proj/
    ...
    test/
        Makefile.am
        data/
        expected/
        run_test.sh
        tests/

where:

  • Makefile.am: a new makefile for this directory (more later). Reminder: for any new makefile, you need to add it to the set in AC_CONFIG_FILES in your configure.ac.
  • data: an optional subdirectory that contains data files needed for testing.
  • run_test.sh: a custom test driver (more later).
  • tests: a subdirectory containing test files.
  • expected: a subdirectory containing files of expected output corresponding to files in tests.

test/Makefile.am

The test/Makefile.am file should look like this:

AUTOMAKE_OPTIONS = 1.12  # needed for TEST_LOG_DRIVER

TESTS = tests/ad-no_options.test \
        tests/ad-A.test \
        tests/ad-b16-B2.test \
        tests/ad-B16-e1.test

AM_TESTS_ENVIRONMENT = BUILD_SRC=$(top_builddir)/src; export BUILD_SRC ;
TEST_EXTENSIONS = .test

TEST_LOG_DRIVER = $(srcdir)/run_test.sh

EXTRA_DIST = run_test.sh tests data expected

dist-hook:
        cd $(distdir)/tests && rm -f *.log *.trs

where:

  • TESTS lists the test files (the tests) to run (more later).
  • AM_TESTS_ENVIRONMENT sets up the environment in which to run a test. In this case, we want to export BUILD_SRC, which points to where the executable for your project is built, because that’s what we want to test. Note that whatever you put here must end with a semicolon.
  • TEST_EXTENSIONS is a space-separated list of filename extensions that are for tests. You can choose any extensions you want. Here, it’s simply .test.
  • TEST_LOG_DRIVER is the “test driver” program for tests whose extension is .test, in this case our custom driver. In general, for any extension .xxx in TEST_EXTENSIONS, there needs to be a corresponding XXX_LOG_DRIVER to run those tests.
  • EXTRA_DIST lists the additional files and directories we want included in a distribution tar file via make dist. dist-hook is a special target whose commands are run when you make dist. When running tests, the GNU test framework creates .log and .trs files (one each per test run). We don’t want those inside the generated tar file, so we remove them.
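To illustrate the extension-to-driver mapping: if you also had tests ending in, say, .py (a hypothetical example — the driver names here are made up), the corresponding variables would be:

```
TEST_EXTENSIONS = .test .py
TEST_LOG_DRIVER = $(srcdir)/run_test.sh
PY_LOG_DRIVER = $(srcdir)/run_py_test.sh
```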

Test Files

What’s in a test file? Since we’re using a custom test driver, what’s in a test file can be anything you want. It should either be in a format that’s easily parseable by run_test.sh or an executable in its own right. For example, for my wrap program, test files are in the form:

prog | config-file | options | input-file | expected-exit

for example, wrap-E1.test — that is, a pipe (|) separated set of fields where:

  • prog is the program to be tested (either wrap or wrapc).
  • config-file is the name of the configuration file to use, if any (or /dev/null if none).
  • options are command-line options, if any.
  • input-file is the file to process.
  • expected-exit is the expected exit status, e.g., 0 for success, or a non-zero specific status code.
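A hypothetical test line in this format (the field values here are made up for illustration; the real wrap-E1.test may differ), and how a driver can split it with read:

```shell
#!/bin/sh
# A made-up pipe-separated test line and how it splits into fields.
line='wrap|/dev/null|-w 40|data/hello.txt|0'

# Setting IFS to '|' only for the read splits the line on pipes.
IFS='|' read COMMAND CONFIG OPTIONS INPUT EXPECTED_EXIT <<EOF
$line
EOF

echo "$COMMAND $CONFIG $OPTIONS $INPUT $EXPECTED_EXIT"
```

Because | is not a whitespace character, each field is taken exactly as written, including any embedded spaces in the options field.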

run_test.sh

The custom test driver can be written in any language you want. That said, plain old shell, e.g., bash, is probably best because it’s ubiquitous. If you were to write it in, say, Python, then the user’s machine would have to have (the right version of) Python installed.

For Python in particular, it’s even worse because the name of the Python executable can be one of python, python3, or even python2.

The first couple hundred lines of run_test.sh (up to the Run test comment) used for wrap would work for testing any program, i.e., those lines are necessary boilerplate code (at least as far as I’m concerned). Consequently, I’m not going to explain them in detail since this is an article on Autotools, not shell programming.
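Stripped to its essentials, that boilerplate follows Automake’s custom test driver protocol: Automake invokes the driver with options such as --test-name, --log-file, and --trs-file, followed by -- and the test file, and the driver must record the result in the .trs file. Here is a simplified sketch of such a driver (not my actual run_test.sh; for brevity, this sketch just executes the test file as a shell script instead of parsing its fields):

```shell
#!/bin/sh
# Simplified sketch of an Automake custom test driver.  Automake
# invokes it roughly as:
#   driver --test-name T --log-file T.log --trs-file T.trs \
#          [--color-tests ...] [--expect-failure ...] \
#          [--enable-hard-errors ...] -- TEST-FILE

run_driver() {
  TEST_NAME= LOG_FILE= TRS_FILE=
  while [ $# -gt 0 ]; do
    case $1 in
      --test-name) TEST_NAME=$2; shift 2 ;;
      --log-file)  LOG_FILE=$2;  shift 2 ;;
      --trs-file)  TRS_FILE=$2;  shift 2 ;;
      --)          shift; break  ;;
      *)           shift ;;               # skip options we don't use
    esac
  done
  TEST=$1                                 # the .test file from TESTS

  # For this sketch, just execute the test file as a shell script;
  # a real driver would parse the test file's fields instead.
  if sh "$TEST" > "$LOG_FILE" 2>&1
  then RESULT=PASS
  else RESULT=FAIL
  fi

  echo "$RESULT: $TEST_NAME"
  # :test-result: is the essential .trs field; a production driver
  # also writes :global-test-result:, :recheck:, etc.
  echo ":test-result: $RESULT" > "$TRS_FILE"
}
```

The pass and fail functions used later in this article boil down to writing PASS or FAIL into the .trs file like this.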

However, one thing I will point out is this line:

PATH="$BUILD_SRC:$PATH"

As shown earlier, BUILD_SRC is set via AM_TESTS_ENVIRONMENT in test/Makefile.am to be the path to the build src directory where the program’s executable is built. The above line puts that path first so we’re sure that we’re testing the current version of the executable and not some older version that might be installed somewhere else along PATH, e.g., /usr/local/bin.

The code after the Run test comment is specific to wrap. If you wanted to use my run_test.sh as a starting point for your own, you’d only need to replace that code.

I’ll go through this code a bit. The lines:

[ "$IFS" ] && IFS_old=$IFS
IFS='|'; read COMMAND CONFIG OPTIONS INPUT EXPECTED_EXIT < $TEST
[ "$IFS_old" ] && IFS=$IFS_old

temporarily set the input field separator (IFS) to the pipe character so read will use it to split the contents of a test file into its components. The line:

EXPECTED_OUTPUT="$EXPECTED_DIR/$(echo $TEST_NAME | sed s/test$/$EXT/)"

determines the expected output file corresponding to the test. The scheme I used is that the expected file’s name is the same as the test’s except that its extension is replaced by txt. The line:

if $COMMAND -c"$CONFIG" $OPTIONS -f"$INPUT" -o"$ACTUAL_OUTPUT" 2> "$LOG_FILE"

runs $COMMAND (either wrap or wrapc) giving it all necessary options and capturing both its standard output and error. If the if is true (the command succeeded via a 0 exit status):

then
  if [ 0 -eq $EXPECTED_EXIT ]
  then
    if diff "$EXPECTED_OUTPUT" "$ACTUAL_OUTPUT" > "$LOG_FILE"
    then pass; mv "$ACTUAL_OUTPUT" "$LOG_FILE"
    else fail
    fi
  else
    fail
  fi

then it checks that 0 was the expected exit status. If so, it diffs the expected output against the actual output. If there’s no difference, the test passed; otherwise it failed. If 0 is not the expected exit status (the command succeeded when it wasn’t supposed to), that’s an immediate failure.

If the if is false (the command failed via a non-zero exit status):

else
  ACTUAL_EXIT=$?
  if [ $ACTUAL_EXIT -eq $EXPECTED_EXIT ]
  then
    pass
  else
    case $ACTUAL_EXIT in
    0)  fail ;;
    *)  fail ERROR ;;
    esac
  fi
fi

then if the actual exit status is the expected exit status (the command failed in the expected way), the test passed; otherwise it failed.

Running Tests

Once all that’s done, all you have to do from either the top-level or test subdirectory is type:

$ make check

and it’ll run all your tests, printing test results as it goes. You can even run multiple tests in parallel by using make’s -j option, which specifies the number of parallel jobs to run:

$ make -j10 check

The value for -j should be roughly the number of CPU cores your machine has. I have a cpus script that prints the number of CPUs on a machine and works across various operating systems. I also have an mj shell alias for make -j$(cpus).
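My actual cpus script differs, but a sketch of the idea looks like this (assuming nproc on Linux and sysctl on macOS/BSD):

```shell
#!/bin/sh
# Sketch of a portable CPU-count helper: try nproc (Linux/GNU
# coreutils), then sysctl (macOS/BSD), then fall back to 1.
cpus() {
  if command -v nproc > /dev/null 2>&1
  then nproc
  elif sysctl -n hw.ncpu > /dev/null 2>&1
  then sysctl -n hw.ncpu
  else echo 1                          # conservative fallback
  fi
}
cpus
```

With that in place, the alias is simply: alias mj='make -j$(cpus)'.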

Conclusion

Autotools does offer a testing framework, but it’s unstable and, IMHO, complicated. Using a custom test driver is much simpler since you have complete control over the script and the test file formats, so you can customize them for your projects. Feel free to use my run_test.sh script as a starting point.
