DEV Community

metadevdigital

Posted on

The Anatomy of a Smart Contract Audit: What Auditors Look For


In February 2022, a signature verification flaw in Wormhole's token bridge let an attacker mint and drain $325 million in wrapped Ether.1 The code had been audited. The vulnerability existed in plain sight: a lack of proper state validation that allowed an attacker to forge a "legitimate" guardian setup and mint unbacked tokens. This wasn't a novel zero-day. It was Protocol 101 stuff, executed poorly.

If you're about to launch a smart contract and thinking an audit is just a rubber stamp—or worse, that it's optional—this article is your wake-up call.


What Auditors Actually Hunt For

Auditors hunt for four categories of bugs: access control failures, reentrancy and call-ordering issues, arithmetic errors (overflow/underflow), and cryptographic and signature vulnerabilities—with state management and validation mistakes cutting across all four. Most audits take 2–6 weeks and cost $10k–$500k+. They still miss edge cases. Assume your code is broken until proven otherwise.


How an Audit Actually Works

A competent smart contract audit doesn't happen in a weekend. It's layered, methodical, and often infuriatingly slow (from a developer perspective).

First comes automated tooling. Auditors start with static analysis and fuzzing—Slither and Mythril for static analysis, Echidna for property-based fuzzing.2 These run in minutes and catch reentrancy patterns, unprotected delegatecall, integer arithmetic issues, missing zero-address checks, and visibility problems. In my experience, automated tooling catches maybe 40% of the real vulnerabilities found in audited code.
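To make "unprotected delegatecall" concrete, here's a hedged sketch of the kind of pattern Slither flags; the contract and names are invented for illustration, not taken from any real protocol:

```solidity
// Hypothetical example of a pattern static analyzers flag as a
// "controlled delegatecall". Any caller can point `target` at a contract
// that executes in THIS contract's storage context, overwriting state
// (including an owner slot) at will.
contract Relay {
    address public owner;

    // DO NOT DO THIS: no access control, attacker-controlled target
    function execute(address target, bytes calldata data) external {
        (bool ok, ) = target.delegatecall(data);
        require(ok, "delegatecall failed");
    }
}
```

Slither reports this class of finding in seconds; the point of the automated pass is to clear out cheap wins like this before humans spend expensive hours on it.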

Then a human reads your code. Usually several humans, if you're paying real money. They're not trying to understand what you meant to do. They're trying to break what you actually did. This is where the Wormhole bug lived:

// DO NOT DO THIS (simplified version of Wormhole's actual bug)
mapping(address => bool) initialized;

function initialize(address guardian) external {
    require(!initialized[guardian], "already initialized");
    initialized[guardian] = true;
    // Store guardian, set up state...
}

// Problem: No signature verification. An attacker could call
// initialize() with ANY guardian address, forging a "legitimate" setup.

The correct approach requires proper state validation:

// DO THIS
bool private initialized;
address private guardian;

function initialize(address _guardian, bytes calldata signature) external {
    require(!initialized, "already initialized");

    // Verify the signature actually came from someone authorized.
    // DEPLOYER and recoverSigner() are assumed to be defined elsewhere
    // (e.g., ECDSA.recover from OpenZeppelin).
    bytes32 digest = keccak256(abi.encodePacked(_guardian));
    address signer = recoverSigner(digest, signature);
    require(signer == DEPLOYER, "invalid signature");

    initialized = true;
    guardian = _guardian;
}

One assumes the caller is honest. The other doesn't.

Finally, they threat-model your contract. Auditors build mental models of how your contract will be used—and abused. What happens if this function is called during a reentrancy attack? Can I flash-loan my way into the vault? What if this external call fails silently? Can I exploit the order of operations in a transaction? This is where experience matters. A junior auditor might miss the fact that your ERC-20 transfer relies on the token contract not being malicious. (Spoiler: it can be.)
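To make the malicious-token point concrete, here's a hedged sketch of the defensive pattern; the `Vault` contract and its logic are invented for illustration:

```solidity
// Hypothetical vault accepting arbitrary ERC-20 tokens. A malicious
// token's transferFrom can lie about success or behave unexpectedly
// (fee-on-transfer tokens deliver less than `amount`), so measure what
// actually arrived instead of trusting the token.
interface IERC20 {
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
    function balanceOf(address who) external view returns (uint256);
}

contract Vault {
    mapping(address => mapping(address => uint256)) public deposits;

    function deposit(IERC20 token, uint256 amount) external {
        uint256 before = token.balanceOf(address(this));
        require(token.transferFrom(msg.sender, address(this), amount), "transfer failed");
        // Credit only the balance delta, not the requested amount.
        uint256 received = token.balanceOf(address(this)) - before;
        deposits[msg.sender][address(token)] += received;
    }
}
```

This is exactly the sort of assumption ("the token behaves honestly") a threat-modeling pass is designed to surface.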

The Big Four Bug Categories

Having audited dozens of contracts across DeFi, NFT protocols, and bridge systems, I'd estimate ~80% of real vulnerabilities fall into four buckets.

Access Control Failures. Your contract probably has owner functions. Do they actually check who's calling?

// DO NOT DO THIS
function withdrawAll() external {
    // "Only the owner should call this"
    // But we never actually check...
    uint256 balance = address(this).balance;
    payable(msg.sender).transfer(balance);
}

// DO THIS
function withdrawAll() external onlyOwner {
    uint256 balance = address(this).balance;
    payable(owner).transfer(balance);
}

Bonus points if your access control is so tangled that even you can't remember who can call what. (I've seen this exact situation in three separate audits.)
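The `onlyOwner` modifier above isn't magic. A minimal hand-rolled version looks like this sketch; in production you'd typically reach for OpenZeppelin's `Ownable` instead:

```solidity
// Minimal sketch of the access control behind an `onlyOwner` modifier.
// OpenZeppelin's Ownable adds ownership transfer and renouncement on
// top of this same idea.
contract Owned {
    address public owner;

    constructor() {
        owner = msg.sender;  // deployer becomes owner
    }

    modifier onlyOwner() {
        require(msg.sender == owner, "not owner");
        _;  // function body runs only after the check passes
    }
}
```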

Reentrancy and Call Ordering. The classic. An attacker recursively calls your contract before a state update completes. This is why Checks-Effects-Interactions (CEI) matters:

// DO NOT DO THIS
function withdraw(uint256 amount) external {
    (bool success, ) = msg.sender.call{value: amount}("");  // EXTERNAL CALL FIRST (wrong!)
    require(success);
    balances[msg.sender] -= amount;  // STATE CHANGE AFTER — attacker reenters before this runs
}

// DO THIS
function withdraw(uint256 amount) external {
    require(balances[msg.sender] >= amount);  // CHECK
    balances[msg.sender] -= amount;  // EFFECT
    (bool success, ) = msg.sender.call{value: amount}("");  // INTERACTION
    require(success);
}
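Belt and suspenders: in addition to CEI, many teams add a reentrancy guard. OpenZeppelin's `ReentrancyGuard` is the battle-tested version; a minimal hand-rolled sketch of the same idea:

```solidity
// Minimal reentrancy guard sketch. A storage flag blocks any nested
// call into a nonReentrant function within the same transaction.
abstract contract Guard {
    uint256 private locked = 1;

    modifier nonReentrant() {
        require(locked == 1, "reentrant call");
        locked = 2;  // lock before the function body runs
        _;
        locked = 1;  // release after the body returns
    }
}
```

Guards are a backstop, not a substitute for CEI: they protect the functions you remember to annotate, and nothing else.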

Arithmetic Errors. Even with Solidity 0.8+ (which has overflow protection by default), you can still mess this up:

// DO NOT DO THIS
uint8 count = 255;
unchecked { count++; }  // Now it's 0. Whoops.

// DO THIS
// Use appropriate types and document why you're opting out of compiler checks.
uint256 count = 255;
count++;  // Protected by compiler unless you explicitly opt out.

Cryptographic and Signature Issues. This is where protocols like Wormhole stumbled. Signature verification is hard, and mistakes are expensive. Watch out for signature malleability (the `s` value can be flipped to its curve complement, yielding a second valid signature for the same message), missing nonce checks (replay attacks), incorrect hash construction (collision risks, e.g. `abi.encodePacked` with multiple dynamic types), and using `ecrecover()` without checking for its zero-address return on failure.
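Here's a hedged sketch pulling those defenses together; `authorizedSigner` and the message layout are invented for illustration (a real system would also follow EIP-712 for the digest):

```solidity
// Hypothetical illustration of defensive signature verification:
// nonce tracking (anti-replay), abi.encode (collision-resistant for
// dynamic types), and an explicit zero-address check on ecrecover.
contract SigCheck {
    address public immutable authorizedSigner;
    mapping(bytes32 => bool) public usedNonces;

    constructor(address signer) {
        authorizedSigner = signer;
    }

    function verify(bytes32 action, bytes32 nonce, uint8 v, bytes32 r, bytes32 s) external {
        require(!usedNonces[nonce], "replayed");
        usedNonces[nonce] = true;  // burn the nonce before anything else

        // Bind the digest to this contract to prevent cross-contract replay.
        bytes32 digest = keccak256(abi.encode(action, nonce, address(this)));
        address signer = ecrecover(digest, v, r, s);

        require(signer != address(0), "invalid signature");  // ecrecover failure mode
        require(signer == authorizedSigner, "wrong signer");
    }
}
```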


Why Audits Aren't Magic

A good audit costs $50k–$200k+ and takes 4–8 weeks.3 A great one costs $300k+. Even then, it's not insurance. It's a probabilistic reduction in risk.

Some of the worst exploits happen in audited contracts. Not because auditors are incompetent, but because auditors work within scope boundaries, economic incentives change post-audit, and complex interactions with other protocols aren't always foreseeable.

The question isn't "Will an audit catch everything?" It's "Are the remaining risks acceptable?"


Pre-Audit Checklist

- Run Slither first and fix the obvious stuff.
- Hold internal review rounds—you know your protocol better than anyone else will.
- Write tests, lots of them, including fuzz tests.
- Get someone who didn't write the code to read it with fresh eyes.

Do these things and you'll look like you take security seriously. Skip them and you'll look like Wormhole.
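For the fuzz-testing step, a minimal Echidna property looks like this sketch; the `Token` contract is hypothetical, and Echidna's convention is that any `echidna_`-prefixed function returning `bool` is an invariant it tries to falsify with random call sequences:

```solidity
// Hypothetical token plus an Echidna property contract. Run with:
// echidna contract.sol --contract TokenTest (assuming a default config).
contract Token {
    mapping(address => uint256) public balances;
    uint256 public totalSupply;

    constructor() {
        totalSupply = 1_000_000;
        balances[msg.sender] = totalSupply;
    }

    function transfer(address to, uint256 amount) external {
        require(balances[msg.sender] >= amount, "insufficient");
        balances[msg.sender] -= amount;
        balances[to] += amount;
    }
}

contract TokenTest is Token {
    // Invariant: no call sequence should leave any caller holding more
    // than the total supply (i.e., transfers never mint tokens).
    function echidna_no_minting() public view returns (bool) {
        return balances[msg.sender] <= totalSupply;
    }
}
```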



  1. Wormhole bridge exploit (February 2022). The vulnerability was in the token bridge code, allowing signature forgery. Lesson: even "audited" contracts can have critical flaws. 

  2. Trail of Bits maintains Slither; Mythril is maintained by Consensys. Both are free, both are useful, neither is perfect. 

  3. Based on market rates in 2023–2024 for reputable firms. Faster audits = higher risk that things were missed. 
