DEV Community

SomeB1oody

[Rust Guide] 11.1. Writing and Running Tests

If you find this guide helpful, please like, bookmark, and follow. To keep learning, follow along with this series.

11.1.1 What Is Testing

In Rust, a test is a function used to verify whether non-test code behaves as expected.

A test function usually performs three actions:

  • Arrange data/state
  • Act on the code under test
  • Assert the result

In some testing circles, these three actions are known as the 3A pattern: Arrange, Act, Assert.

11.1.2 Anatomy of a Test Function

A test function is still just a function; the difference is that it must be annotated with the test attribute.

An attribute is metadata attached to Rust code. It does not change the logic of the item it decorates; it only annotates it. In fact, we already used an attribute in 5.2. Struct Usage Example - Printing Debug Information.
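As a reminder of how that looked, the derive attribute works the same way: it attaches metadata without changing the struct's own logic. A small sketch (the Point struct here is made up for illustration):

```rust
// #[derive(Debug)] is an attribute: it asks the compiler to
// generate a Debug implementation, which enables {:?} formatting.
#[derive(Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn debug_string() -> String {
    let p = Point { x: 1, y: 2 };
    format!("{:?}", p) // "Point { x: 1, y: 2 }"
}
```

The struct's behavior is unchanged; the attribute only adds the generated trait implementation.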

Adding #[test] to a function turns it into a test function.
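Putting the pieces together, a minimal sketch of a test function that follows the 3A steps might look like this (the double function is a hypothetical example, not from the generated project):

```rust
// Hypothetical function under test.
pub fn double(x: i32) -> i32 {
    x * 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test] // this attribute is what makes the function a test
    fn doubles_a_value() {
        // Arrange: prepare the input.
        let input = 21;
        // Act: call the code under test.
        let result = double(input);
        // Assert: check the result.
        assert_eq!(result, 42);
    }
}
```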

11.1.3 Running Tests

Putting aside what is inside the test function for now, how do we run it after writing it? We use the cargo test command to run all tests.

This command builds a test-runner binary that runs every function annotated with #[test], one by one, and reports whether each one passed or failed.

When you create a library project with Cargo, it generates a tests module containing a ready-made test function that you can use as a template when writing your own. You can add as many test modules and test functions as you like.

For example:
Create a new library project named adder:

$ cargo new adder --lib
     Created library `adder` project
$ cd adder

Open the project (lib.rs):

pub fn add(left: usize, right: usize) -> usize {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}

This is a test function because it is annotated with #[test], not because it is inside a test module. A test module can also contain ordinary functions.
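For instance, the tests module can hold a plain helper function that cargo test never runs on its own (the setup_pair helper below is a made-up example):

```rust
pub fn add(left: usize, right: usize) -> usize {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    // An ordinary function: it has no #[test] attribute, so
    // cargo test does not run it directly. Tests can still
    // call it as a shared setup helper.
    fn setup_pair() -> (usize, usize) {
        (2, 2)
    }

    #[test]
    fn it_works() {
        let (left, right) = setup_pair();
        assert_eq!(add(left, right), 4);
    }
}
```

Running cargo test on this project still reports exactly one test, because only it_works carries the #[test] attribute.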

Use cargo test to run the tests:

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.57s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test tests::it_works ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s



Let’s analyze this output:

  • First come the Compiling, Finished, and Running lines from the build.
  • Next is running 1 test, which means one test is being executed. The next line shows that the test is tests::it_works. Its result is ok. This project has only one test, but if there were multiple tests, cargo test would run all of them.
  • Then comes test result: ok., which means all tests in the project passed. Specifically, 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out means one test passed, zero failed, zero were ignored, zero were benchmark tests, and zero were filtered out.
  • Doc-tests adder refers to the results of documentation tests. Rust can compile code that appears in API documentation, which helps ensure that documentation always stays in sync with the actual code.
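As a sketch of what a doc-test looks like, adding a code example to the doc comment of add would make the Doc-tests adder section compile and run it (this assumes the crate is named adder; the example uses a Markdown-indented code block to keep it compact, though a fenced block is the more common style):

```rust
/// Adds two numbers.
///
/// Examples in doc comments are compiled and run by
/// `cargo test` as doc-tests:
///
///     assert_eq!(adder::add(2, 3), 5);
pub fn add(left: usize, right: usize) -> usize {
    left + right
}
```

If the example in the comment ever stops compiling or its assertion fails, cargo test reports it, which is how documentation stays in sync with the code.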

If we rename the function, where will the output change?

pub fn add(left: usize, right: usize) -> usize {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn exploration() { // renamed to exploration
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}

Output:

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.59s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test tests::exploration ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s


You can see that the test name changed from tests::it_works to tests::exploration.

11.1.4 Test Failures

If a test function triggers panic!, the test fails. Each test runs in a new thread; the main thread watches those threads, and when it sees that a test thread has died from a panic!, it marks that test as failed.

For example:

pub fn add(left: usize, right: usize) -> usize {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn exploration() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }

    #[test]
    fn another() {
        panic!("Make this test fail");
    }
}

The another function calls panic! directly. Run it and see the result:

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.72s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 2 tests
test tests::another ... FAILED
test tests::exploration ... ok

failures:

---- tests::another stdout ----
thread 'tests::another' panicked at src/lib.rs:17:9:
Make this test fail
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::another

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`

tests::another failed, while tests::exploration is still ok. The reason is shown by thread 'tests::another' panicked at src/lib.rs:17:9: panic! was triggered at line 17, column 9 of src/lib.rs, which is exactly where the macro appears in the source code.

To summarize, test result: FAILED means the overall test run failed. More specifically, 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out.
