Testing your code shouldn't feel like a chore you tack on at the end. In my experience with Rust, it’s woven right into the fabric of how you write programs. The tools are there from the start, designed to help you build confidence in every line you write. This isn't about finding every single bug before shipping—that's impossible. It's about creating a safety net that catches problems early, when they're cheap and easy to fix.
Think of it like building a chair. You wouldn't glue all the pieces together first and only then check if it wobbles. You'd test each joint as you make it. Rust encourages that same mindset for software. You write a small piece of functionality, and then immediately write a small test to prove it works. The compiler and the test runner are your partners in this.
Let's start with the most basic building block: the unit test. In Rust, you write these tests right next to the code they're checking. You create a special module marked with #[cfg(test)]. This tells the compiler, "Only include this code when we're running tests." It keeps your final executable clean and fast.
Inside that module, you mark functions with #[test]. Each of these functions is an individual test case. Here’s a simple example from a hypothetical math library:
```rust
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

pub fn divide(numerator: i32, denominator: i32) -> i32 {
    if denominator == 0 {
        panic!("Cannot divide by zero!");
    }
    numerator / denominator
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_add_positive_numbers() {
        let result = add(2, 3);
        assert_eq!(result, 5);
    }

    #[test]
    fn test_add_with_negative() {
        assert_eq!(add(-5, 10), 5);
    }

    #[test]
    #[should_panic(expected = "Cannot divide by zero!")]
    fn test_divide_by_zero_panics() {
        divide(10, 0);
    }
}
```
You run these with cargo test, and Cargo automatically finds and runs every function marked with #[test]. The assert_eq! macro is your friend. It checks if two values are equal. If they aren’t, the test fails, and it prints out both values so you can see what went wrong. The #[should_panic] attribute is for when you expect the code to crash—like when someone tries to divide by zero. The test only passes if the function panics.
What I love about this setup is the immediacy. The test is inches away from the code. When I change the add function, my eyes naturally drift down to the test_add functions. It’s a constant, gentle reminder to verify my work. Because these tests sit in the same module, they can also test private functions, which is incredibly useful for checking the internal "scaffolding" of your code.
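To make that concrete, here's a small sketch; the `clamp_percentage` helper is hypothetical, not part of the math library above, but it shows a private function being exercised from the same file's test module:

```rust
// A private helper: there's no `pub`, so nothing outside this module can call it.
fn clamp_percentage(value: i32) -> i32 {
    value.max(0).min(100)
}

#[cfg(test)]
mod tests {
    use super::*;

    // Because the test module lives alongside the code, it can reach
    // the private helper directly.
    #[test]
    fn test_clamp_percentage_bounds() {
        assert_eq!(clamp_percentage(-10), 0);
        assert_eq!(clamp_percentage(150), 100);
        assert_eq!(clamp_percentage(42), 42);
    }
}
```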
But unit tests alone aren't enough. They check pieces in isolation. How do you know those pieces fit together? That's where integration tests come in. In Rust, you place these in a tests directory at the top level of your project, right next to src.
Each file in tests is treated as a separate, standalone crate. This is key. It means your integration tests can only call the public functions of your library, just like a real user would. You can’t reach into the private internals. This forces you to test your public API, which is what truly matters to people using your code.
Here's what a simple integration test file might look like:
```rust
// tests/integration_test.rs
use my_math_library;

#[test]
fn test_combined_operations() {
    // We can only use public functions here.
    let sum = my_math_library::add(10, 20);
    let quotient = my_math_library::divide(sum, 5);
    assert_eq!(quotient, 6);
}
```
When I run cargo test, it runs my unit tests and my integration tests. Seeing both pass gives me a much stronger sense that the whole system works. If an integration test fails but all unit tests pass, it’s a clear signal: my individual components are fine, but I made a mistake in how they interact. That’s a different, and very valuable, kind of feedback.
Now, here’s a feature that still surprises me with its cleverness: documentation tests. Have you ever read documentation with a beautiful code example, tried it, and it just didn’t work? In Rust, that’s much harder to let slip through. You can write executable examples right in your code comments, and cargo test will run them too.
Look at this:
````rust
/// Adds two numbers together.
///
/// # Examples
///
/// ```
/// let result = my_crate::add(2, 2);
/// assert_eq!(result, 4);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}
````
The triple backticks in that comment? Rust's documentation tool, `rustdoc`, sees that, extracts the code, compiles it, and runs it as a test. If I ever change the `add` function in a way that breaks that example, my test suite fails. My documentation is forced to stay accurate. This creates a virtuous cycle: writing good examples becomes a way of writing good tests, and writing tests ensures the examples are correct.
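Doc tests can even assert failure modes. Here's a sketch, reusing the hypothetical `divide` function from earlier: rustdoc's `should_panic` annotation on the fenced block tells it the example only passes if it actually panics.

````rust
/// Divides two numbers, panicking on a zero denominator.
///
/// # Examples
///
/// ```should_panic
/// // This example is expected to panic when rustdoc runs it.
/// my_crate::divide(10, 0);
/// ```
pub fn divide(numerator: i32, denominator: i32) -> i32 {
    if denominator == 0 {
        panic!("Cannot divide by zero!");
    }
    numerator / denominator
}
````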
So far, we've been writing what are called "example-based" tests. We, the programmers, think of specific cases (`add(2,2)`) and verify the output. This is great, but our imagination is limited. We might forget to test a weird edge case, like an empty string or a negative number mixed with zero.
This is where property-based testing comes in. Instead of saying "for input X, the output should be Y," you describe a *property* that should *always* be true for any valid input. A library like `proptest` then generates hundreds of random inputs and checks if your property holds.
It feels like having a tireless assistant who throws random data at your function, looking for a combination that breaks it. Let's test a function that reverses a string.
```rust
use proptest::prelude::*;

fn reverse_string(s: &str) -> String {
    s.chars().rev().collect()
}

proptest! {
    #[test]
    fn test_reverse_twice_is_original(s in ".*") {
        let reversed = reverse_string(&s);
        let back_to_normal = reverse_string(&reversed);
        prop_assert_eq!(s, back_to_normal);
    }
}
```
The property is simple: if you reverse a string and then reverse it again, you should get back exactly what you started with. The s in ".*" part is a strategy that tells proptest to generate arbitrary strings. It will try empty strings, strings with weird Unicode characters, long strings, and short strings. If this test passes across hundreds of generated inputs, I'm far more confident that reverse_string round-trips correctly for any input, not just the handful of cases I thought to write down.
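Strategies aren't limited to strings, either. As another sketch, assuming the same add function from the earlier math example, here's a different kind of property, commutativity, checked over ranges of integers:

```rust
use proptest::prelude::*;

// Assumed to be the same add function from the earlier example.
fn add(a: i32, b: i32) -> i32 {
    a + b
}

proptest! {
    #[test]
    fn test_add_is_commutative(a in -1_000..1_000i32, b in -1_000..1_000i32) {
        // Swapping the arguments should never change the result.
        prop_assert_eq!(add(a, b), add(b, a));
    }
}
```

Keeping the ranges small sidesteps integer overflow, so the test checks the property itself rather than tripping over arithmetic edge cases.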
Sometimes, your code doesn't just calculate a value; it performs actions, like writing to a database or calling a web API. Testing this can seem daunting. You don't want your tests to actually send emails every time you run them. This is where mocking comes in.
A "mock" is a fake version of a dependency that you control completely for the purpose of a test. The mockall crate is fantastic for this. Let's say you have a trait that sends notifications.
```rust
pub trait Notifier {
    fn send(&self, message: &str) -> Result<(), String>;
}

pub struct OrderProcessor {
    notifier: Box<dyn Notifier>,
}

impl OrderProcessor {
    pub fn process(&self, order_id: u64) {
        // ... some processing logic ...
        let _ = self.notifier.send(&format!("Order {} processed", order_id));
        // ...
    }
}
```
In a real application, notifier might send an SMS or an email. In a test, that's a problem. With mockall, you can create a mock Notifier that just records what it was asked to do.
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use mockall::predicate::*;
    use mockall::*;

    mock! {
        pub NotifierImpl {}

        impl Notifier for NotifierImpl {
            fn send(&self, message: &str) -> Result<(), String>;
        }
    }

    #[test]
    fn test_order_processor_sends_message() {
        let mut mock_notifier = MockNotifierImpl::new();

        // Expect the `send` method to be called once, with a specific message.
        mock_notifier.expect_send()
            .with(eq("Order 42 processed"))
            .times(1)
            .returning(|_| Ok(()));

        let processor = OrderProcessor {
            notifier: Box::new(mock_notifier),
        };

        processor.process(42);

        // The test passes if `send` was called as we expected.
        // If it wasn't called, or was called with wrong arguments, the test fails.
    }
}
```
This is powerful. It lets you test the behavior of your OrderProcessor—specifically, that it correctly asks the notifier to send a message—without needing a real notification system. You isolate the logic you care about.
Performance is another dimension of correctness. A function can give the right answer but be too slow to be useful. Rust has built-in support for benchmarks, but that built-in harness still requires a nightly compiler. On stable Rust, the criterion crate is the de facto standard. It doesn't just measure speed; it performs rigorous statistical analysis to tell you whether a change actually made things faster or whether you're just seeing random noise.
A simple benchmark with criterion looks like this:
```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn fibonacci_slow(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        n => fibonacci_slow(n - 1) + fibonacci_slow(n - 2),
    }
}

fn fibonacci_fast(n: u64) -> u64 {
    let mut a = 0;
    let mut b = 1;
    for _ in 0..n {
        let temp = a;
        a = b;
        b = temp + b;
    }
    a
}

fn benchmark_fibs(c: &mut Criterion) {
    c.bench_function("fibonacci slow 20", |b| b.iter(|| fibonacci_slow(black_box(20))));
    c.bench_function("fibonacci fast 20", |b| b.iter(|| fibonacci_fast(black_box(20))));
}

criterion_group!(benches, benchmark_fibs);
criterion_main!(benches);
```
Running this with cargo bench will time both functions. criterion runs them many times, warms up the CPU cache, and produces detailed reports. This helps you guard against performance regressions. If you accidentally rewrite fibonacci_fast to be slow, your benchmark suite will catch it.
All these tools come together in a real development workflow. You might start your work session by running the existing test suite to make sure you haven't broken anything. Then, as you implement a new feature, you write a few unit tests for the core logic. You add an example in your documentation comments. You think about edge cases and maybe write a property-based test to cover them.
When your feature involves external dependencies, you write tests using mocks. Once everything is integrated, you write an integration test to make sure the new feature works from the public API. Finally, if performance is critical, you add or update a benchmark.
This multi-layered approach is what builds real confidence. A unit test might catch a logic error in a helper function. An integration test might catch a mistake in how two modules communicate. A doc test ensures your public examples are valid. A property-based test might find that obscure edge case you never considered.
The result is that when you run cargo test and see all those green ok lines, it means something. It means your code likely does what you think it does, in isolation and in combination, and your documentation matches reality. It doesn't guarantee perfection, but it drastically reduces the space where bugs can hide. It turns the complex, scary task of verifying software into a routine, manageable, and even satisfying part of the job. You're not just hoping your code works; you have a solid, repeatable process that shows you it does.