Fuzz Testing & Invariants in Solidity: Secure Smart Contracts with Foundry

Régis

Discover How to Catch Critical Bugs in Your Smart Contracts Using Automated Testing in Foundry

Smart contracts are the backbone of decentralized applications, and once deployed they are publicly accessible. A single vulnerability can put an entire protocol at risk, potentially causing multi-million dollar losses. This is not theoretical: it has happened repeatedly in DeFi. One key reason is the open execution model of blockchain: anyone can call any function, with any parameter. That drastically increases the input space and the chance of bugs being triggered by unexpected or malicious inputs.

To build resilient smart contracts, we need to go beyond standard unit testing, and that's where fuzz testing comes in. In this article, you'll learn how to use fuzz testing and invariant checking in Solidity with Foundry, a powerful testing framework for smart contracts. These techniques will help you catch bugs before they reach mainnet.

A Brief History of Fuzz Testing

In the late 1980s, Barton P. Miller and his colleagues encountered a peculiar issue: spurious characters from a noisy dial-up connection were causing standard Unix utilities to crash. These were common, widely used, supposedly reliable tools. Yet simple malformed inputs revealed deep instability.

Intrigued, they launched a systematic experiment: feeding random data into common Unix programs to observe how they behaved. The result was alarming: around 30% of the utility programs crashed or exhibited unstable behavior.

The experiment led to their seminal 1990 paper, "An Empirical Study of the Reliability of UNIX Utilities", co-authored by Miller, Lars Fredriksen, and Bryan So. It exposed a fundamental truth: even mature, widely used software can fail under unexpected inputs. It also formalized fuzz testing, a technique that generates random or semi-random inputs to uncover crashes, logic errors, or unexpected behavior in code.

From UNIX to Ethereum

Fast forward to today, and the same problem applies to smart contracts, where any failure can have irreversible consequences. We expect them to behave reliably under all input combinations.
This is especially true because smart contracts operate under an open execution model: anyone can call any function with arbitrary inputs at any time. This drastically increases the input space, making it impractical to anticipate and test every possible scenario manually.

To ensure reliability across all possible inputs, we need to move beyond static analysis and unit tests. We want to be confident that no input combination leads to an unhandled edge case, an overflow or underflow, or a state that a maliciously crafted input can exploit, and this must hold for the entire life of the protocol. This is where fuzz testing comes in: it simulates these kinds of behaviors and validates the core assumptions of the protocol.

What Is Fuzz Testing? A Powerful Technique for Securing Smart Contracts

Randomness is all you need

Fuzz testing is a software testing technique that feeds random, unexpected, or invalid inputs into a program to uncover crashes, bugs, or security vulnerabilities. Rather than testing specific cases, it explores the vast input space automatically, exposing edge cases developers might overlook.
Fuzzing is often coupled with the notion of invariants, properties that must always hold. In smart contracts, an invariant might be something like: "the sum of all user balances must equal the contract's total balance." If a sequence of inputs causes that invariant to break, it signals a potentially critical bug.

To illustrate this, suppose we define an invariant in our protocol as a > b. If, during fuzzing, we find a combination of inputs that causes a == b, we have found unexpected behavior that violates our protocol's assumption. This can highlight a logic flaw or an unchecked edge case.

Fuzzing in Foundry: How to Use Fuzz Testing in Solidity

Let's see how we can create fuzz tests with Foundry. For the demo, I have created a simple smart contract. Observant readers will notice that instead of the traditional uint256 we are using uint64. This smaller type will make it easier to illustrate how fuzz tests expose this kind of edge case.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

contract App {

    mapping(address => uint64) public balances;

    function deposit() external payable {
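        // Note: msg.value is a uint256; this unchecked downcast to uint64 silently truncates large deposits.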
        balances[msg.sender] += uint64(msg.value);
    }

    function withdraw(uint64 amount) external {
        require(balances[msg.sender] >= amount, "Insufficient balance");
        balances[msg.sender] -= amount;
        (bool success, ) = msg.sender.call{value: amount}("");
        require(success, "Failed to withdraw");
    }
}

Now, let's create a traditional unit test for this smart contract.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import {Test, console} from "forge-std/Test.sol";
import {App} from "../src/App.sol";

contract AppTest is Test {
    App public app;
    address user;

    function setUp() public {
        app = new App();
        user = address(1);
    }

    function test_DepositAndWithdraw() public {
        uint256 userBalance = 100;
        vm.deal(user, userBalance);

        vm.startPrank(user);  
        app.deposit{value: userBalance}();
        app.withdraw(uint64(userBalance));
        vm.stopPrank();

        assertEq(app.balances(user), 0);
        assertEq(user.balance, userBalance);
    }
}

What interests us here is the test_DepositAndWithdraw() function. Currently, it tests only a single value. You may already be familiar with parameterized tests, where you define a set of values and run the test against each one. That approach works well when you want to test a specific behavior or scenario, but it cannot cover the full range of values that could impact your protocol. Let's now see how to write a fuzz test.

How to Write Fuzzing Tests in Foundry for Solidity Smart Contracts

Let's create the same previous test, but this time by adding fuzzing!

function testFuzz_DepositAndWithdraw(uint256 amount) public {
    vm.deal(user, amount);

    vm.prank(user);  
    app.deposit{value: amount}();

    assertEq(app.balances(user), amount);
    assertEq(user.balance, 0);

    vm.prank(user);  
    app.withdraw(uint64(amount));

    assertEq(app.balances(user), 0);
    assertEq(user.balance, amount);
}

Notice that we have renamed the function with the testFuzz_ prefix and added a parameter called amount. Because it is declared as uint256, Foundry will automatically generate randomized values for this input. Now, if you run forge test, you should see the following failure:

[FAIL: assertion failed: 6788299089036262325 != 9618775958796846875934621660669098933; counterexample: calldata=0x27ce517100000000000000000000000000000000073c8244f5a7e9715e34e20e371d3bb5 args=[9618775958796846875934621660669098933 [9.618e36]]] testFuzz_DepositAndWithdraw(uint256) (runs: 0, μ: 0, ~: 0)

This happens because the fuzzing engine may generate an amount larger than type(uint64).max. The cast inside deposit() then silently truncates the value, so the recorded balance no longer matches what was deposited. That's a critical bug: the type mismatch causes unexpected behavior.

If we want Foundry to generate only uint64 values, we can change the type of the input parameter:

function testFuzz_DepositAndWithdraw(uint64 amount) public {

We should now have a successful result.

[PASS] testFuzz_DepositAndWithdraw(uint64) (runs: 257, μ: 64882, ~: 65480)

However, to fully fix this issue, the contract should work with uint256 directly and not cast msg.value; restricting the fuzzed type only hides the bug. Here, we just wanted to illustrate how you can select the specific types Foundry generates.
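For reference, here is a minimal sketch of what the corrected contract could look like, assuming we simply widen the balances mapping to uint256 and drop the cast (the contract name AppFixed is just for illustration):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

contract AppFixed {

    // Storing balances as uint256 matches msg.value, so no downcast is needed.
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external {
        require(balances[msg.sender] >= amount, "Insufficient balance");
        balances[msg.sender] -= amount;
        (bool success, ) = msg.sender.call{value: amount}("");
        require(success, "Failed to withdraw");
    }
}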

Additionally, each run uses a different randomized input value. Note that these fuzz tests are stateless: the environment is reset before each run, which ensures reproducibility and isolation.

Fuzzing output - Understanding Run Metrics and Gas Usage

Let's take a closer look at the output. For our previous test, we had (runs: 257, μ: 64882, ~: 65480).
As its name suggests, runs is the number of scenarios executed. It can be changed in the Foundry configuration file, foundry.toml, where you customize the number of runs as follows:

[fuzz]
runs = 1000

Notice that by increasing the number of runs, you increase coverage but also the running time. And, if you are wondering: yes, large companies use dedicated clusters to run their fuzzing campaigns.

Finally, µ represents the mean gas used across all fuzz runs and ~ represents the median gas used across all fuzz runs.

Advanced Fuzzing Configuration in Foundry

If we explore the Foundry configuration a bit further, we find a list of other available parameters. One that may interest you is seed, which controls the randomness used for fuzz input generation. By reusing a seed, you can reproduce the exact failing case without rerunning all fuzz iterations. This is especially helpful for debugging, tracing, or sharing test failures with others.

Another parameter that may interest you is failure_persist_dir, which defines the directory where fuzz failures are recorded so they can be replayed later. This is useful if you are running tests locally and need to share a failure with someone else, or if you run fuzzing in CI and need to extract the failing input or the seed.
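As an illustration, a [fuzz] section combining these options could look like this (the values below are arbitrary examples, not recommendations):

[fuzz]
runs = 1000
seed = "0x1"
failure_persist_dir = "cache/fuzz-failures"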

Advanced Fuzzing Techniques in Foundry

How to Guide Fuzz Input Generation in Solidity Tests

When writing fuzz tests, you may already know that certain input values are more likely to uncover edge cases or vulnerabilities in your smart contract. In that case, you may want to guide the input generation by providing those values yourself.

One way to guide fuzzing in Foundry is by using fixtures. The idea is to define a set of values that will be prioritized during input generation. In Foundry, you do this by declaring a variable prefixed with fixture, followed by the name of the parameter you want to control. For example, to guide fuzzing for a parameter named amount, you would define a variable called fixtureAmount. Here is an example:

uint64[] public fixtureAmount = [0, 1, 100, type(uint64).max];

function testFuzz_DepositAndWithdraw(uint64 amount) public {
     // ...
}

For more customization or finer control, you can define a function instead of a variable. It has to return an array of values:

function fixtureAmount() public returns (uint64[] memory) {
     // ...
}
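As a minimal sketch, such a function could build and return the array in memory (the specific values below are arbitrary):

function fixtureAmount() public returns (uint64[] memory) {
    uint64[] memory amounts = new uint64[](4);
    amounts[0] = 0;
    amounts[1] = 1;
    amounts[2] = 100;
    amounts[3] = type(uint64).max;
    return amounts;
}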

Handling Multiple Parameters in Fuzzing Tests

Our first example is relatively simple: we generate a single input for our test. If we need multiple parameters, we can simply add them to the function signature.
But what if we want to enforce constraints between those parameters? For example, ensuring that one parameter is always less than another, or that certain combinations are invalid? Foundry provides utility functions for this, as shown in the example below:

function testFuzz_DualInput(uint256 a, uint256 b) public {
    vm.assume(a > b); // constraint: a must be greater than b
    // ...
}

Another way to enforce constraints on fuzzed inputs is by bounding a variable within a specific range using:

a = bound(a, 1e8, 1e36);

Notice that the order of these calls matters. If you assume a > b first and only then bound a, the bound can map a to a value that no longer satisfies the comparison you just assumed. You should therefore bound a before asserting the comparison between a and b. Also keep in mind that vm.assume discards runs whose inputs fail the condition rather than adjusting them, so overly strict assumptions can lead to many rejected runs. As an example, you will have:

function testFuzz_DualInput(uint256 a, uint256 b) public {
    a = bound(a, 1e8, 1e36); // bound a value
    vm.assume(a > b); // constraint: a must be greater than b
    // ...
}

Invariant Tests in Foundry

Invariant testing is a powerful technique for uncovering flawed assumptions and incorrect logic in smart contracts. By executing randomized sequences of function calls with fuzzed inputs, it reveals edge cases and failures that often go unnoticed in conventional testing, especially in complex or stateful protocols.

Writing Invariant Tests in Foundry

Let's create a basic example to showcase how we can create an invariant test:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

contract AppInvariant {

    bool public isValid = true;
    bool activated;

    constructor () {
        activated = false;
    }

    function activate(uint256 n) external payable {
        if (n == 42) {
            activated = true;
        }
    }

    function boom(uint256 n) external payable {
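        // isValid can only be flipped after activate(42) has been called first, a two-step sequence that single unit tests easily miss.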
        if (n % 2 == 0 && activated) {
            isValid = false;
        }
    }
}

In this example, we have defined our invariant as: isValid must always remain true. However, given the right conditions and sequence of calls, it is possible to flip this value. Let's see how an invariant test can uncover the issue:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import {Test, console} from "forge-std/Test.sol";
import {AppInvariant} from "../src/AppInvariant.sol";

contract AppInvariantTest is Test {

    AppInvariant appInvariant;

    function setUp() public {
        appInvariant = new AppInvariant();
    }

    function invariant_isValid() public view {
        assertTrue(appInvariant.isValid());
    }
}

Our test case is really simple. We prefix our function with invariant_, then express our invariant by asserting that the isValid variable remains true. When we run this test with forge test, we see an error:

Encountered 1 failing test in test/AppInvariant.t.sol:AppInvariantTest
[FAIL: invariant_isValid replay failure]
        [Sequence]
                sender=0x000000000000000000000000000000000000008b addr=[src/AppInvariant.sol:AppInvariant]0x5615dEB798BB3E4dFa0139dFa1b3D433Cc23b72f calldata=activate(uint256) args=[42]
                sender=0x000000000000000000000000000000000000147f addr=[src/AppInvariant.sol:AppInvariant]0x5615dEB798BB3E4dFa0139dFa1b3D433Cc23b72f calldata=boom(uint256) args=[586]
 invariant_isValid() (runs: 1, calls: 1, reverts: 1)

The trace reveals the issue: a call to activate(42) followed by boom(586) violates the defined invariant, causing the test case to fail.

This example is simple enough that the issue can be spotted by manual code review. However, it highlights a key risk: relying solely on unit tests may overlook such behaviors, leading to hidden bugs. Invariant testing combined with fuzzing strengthens your test suite by exploring diverse input combinations and sequences you might not anticipate. That said, invariant tests are complementary: they do not replace targeted unit tests for specific scenarios but rather enhance overall coverage.

Configure Your Invariant Tests

Similar to fuzzing tests, you can configure invariant tests in Foundry by adding an [invariant] section to your foundry.toml file:

[invariant]
runs = 256
depth = 500
  • runs: Number of times that a sequence of function calls is generated and run.
  • depth: Number of function calls made in a given run. Invariants are asserted after each function call is made. If a function call reverts, the depth counter still increments.

Keep in mind that increasing these parameters directly impacts the total number of executions and test duration. Larger values provide deeper coverage but require more time and resources. You should balance these settings based on the security requirements of your protocol and the available testing infrastructure.

Integrate Your Fuzzing Tests into CI/CD

Running fuzzing tests locally is a great way to catch bugs early. However, integrating them into your CI/CD pipeline, especially on PR branches, can significantly improve code quality.

As we saw, you can run fuzz testing with thousands of iterations or with billions; the scale and the time required will not be the same. A practical approach is to define dedicated policies based on the branch and the maturity of the project. When pushing to a feature branch, you may only need a quick fuzzing pass over outlier values, while on master, before heading to production, you may want to test intensively. This approach balances rapid feature development with reliable deployment; one way to set it up is sketched below.
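One way to implement such a policy, assuming you use Foundry profiles (the profile names and values below are only illustrative), is to define lighter and heavier fuzzing settings in foundry.toml and select them in your pipeline via the FOUNDRY_PROFILE environment variable:

# foundry.toml
[profile.default.fuzz]
runs = 256            # quick feedback on feature branches

[profile.ci.fuzz]
runs = 10000          # deeper coverage before merging to master

[profile.ci.invariant]
runs = 512
depth = 500

Running FOUNDRY_PROFILE=ci forge test in the pipeline for the master branch would then apply the heavier settings, while local runs and feature branches keep the default profile.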

But, again, it will depend on the resources available and on how you want to balance security and speed in your workflow.

Conclusion

In this article, we explored what fuzz testing is and how you can integrate it into your Solidity development workflow. While fuzzing does not replace traditional unit tests, it significantly strengthens your testing strategy, especially for uncovering unexpected edge cases and ensuring protocol robustness before mainnet deployment.

If you're aiming to build a secure and modern smart contract development pipeline, incorporating fuzzing is essential. Integrating it into your CI/CD processes is a straightforward and highly effective step.

If you need guidance or support in setting up fuzzing tests in your project, feel free to connect with me on LinkedIn. I'll be happy to discuss your project and see how we can collaborate.
