DEV Community

ohmygod


Aderyn vs Slither in 2026: A Head-to-Head for Solidity Static Analysis

You just finished writing a Solidity contract. Before you ship it to a $50M TVL protocol, you run it through a static analyzer. But which one?

Slither has been the default since 2019. Cyfrin Aderyn arrived as the Rust-built challenger. In March 2026, both tools have matured significantly — and the question isn't which is "better," but which combination covers the most ground.

I spent a week running both tools against 12 real-world contracts from recent exploits (Resolv USR, Truebit overflow, Venus Protocol donation attack, Moonwell governance) and a curated set of intentionally vulnerable contracts. Here's what I found.

The Setup

Slither v0.10.x — Python-based, Trail of Bits. Supports Hardhat, Foundry, and raw Solidity. 90+ built-in detectors spanning informational to high severity.

Aderyn v0.4.x — Rust-based, Cyfrin. Parses the Solidity AST directly. ~40 detectors focused on high-impact patterns. Designed for speed and CI/CD integration.

Both tools were run against:

  • 8 contracts from 2026 exploit post-mortems (source-verified on Etherscan)
  • 4 intentionally vulnerable contracts from Damn Vulnerable DeFi v4 and Ethernaut

Environment: Ubuntu 22.04, 16GB RAM, Foundry project structure.
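
The benchmark harness itself isn't part of this post, but the timings below can be reproduced with a few lines of Python. This is a minimal sketch assuming both CLIs are installed on `PATH`; the flags shown are the tools' default invocations.

```python
import shutil
import subprocess
import time

def time_command(cmd: list, runs: int = 3) -> float:
    """Run a command `runs` times and return the average wall-clock seconds."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, capture_output=True, check=False)
        total += time.perf_counter() - start
    return total / runs

# Skip any tool that isn't installed rather than crashing.
for tool, cmd in [("aderyn", ["aderyn", "."]), ("slither", ["slither", "."])]:
    if shutil.which(tool):
        print(f"{tool}: {time_command(cmd):.2f}s avg")
```

`time.perf_counter` is used rather than `time.time` because it is monotonic and has the highest available resolution for interval timing.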

Round 1: Speed

This isn't even close.

| Metric | Slither | Aderyn |
| --- | --- | --- |
| Cold start (first run) | 4.2s avg | 0.8s avg |
| Warm run (cached) | 2.1s avg | 0.6s avg |
| 50-contract batch | 48s | 9s |

Aderyn's Rust foundation gives it a 4–5x speed advantage. For CI pipelines where every second counts, this matters. For a one-off audit? Both are fast enough.

Verdict: Aderyn wins decisively.

Round 2: Detection Breadth

Slither ships 90+ detectors. Aderyn ships ~40. But raw count is misleading.

Against the 8 exploit contracts:

| Vulnerability | Slither | Aderyn |
| --- | --- | --- |
| Reentrancy (Resolv-style) | ✅ Detected | ✅ Detected |
| Unchecked return values | ✅ Detected | ✅ Detected |
| Integer overflow (pre-0.8) | ✅ Detected | ❌ Missed |
| Missing access control | ✅ Detected | ✅ Detected |
| Donation attack surface | ❌ Missed | ❌ Missed |
| Governance flash-loan risk | ❌ Missed | ❌ Missed |
| Unsafe delegatecall | ✅ Detected | ✅ Detected |
| Storage collision | ✅ Detected | ❌ Missed |

Slither caught 6/8 patterns. Aderyn caught 4/8. Neither tool flagged the donation attack surface (balance manipulation via direct transfer) or governance flash-loan vectors — those require economic reasoning beyond static analysis.

Verdict: Slither wins on breadth, but the margin is smaller than the detector count suggests.
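
Transcribing the table into sets makes the gap easy to quantify. The pattern names here are my own shorthand, not detector IDs from either tool:

```python
# Shorthand labels for the eight exploit patterns in the table above.
SLITHER = {"reentrancy", "unchecked-return", "integer-overflow",
           "access-control", "delegatecall", "storage-collision"}
ADERYN = {"reentrancy", "unchecked-return", "access-control", "delegatecall"}
ALL = SLITHER | ADERYN | {"donation-attack", "governance-flashloan"}

print(f"Slither: {len(SLITHER)}/{len(ALL)}")  # 6/8
print(f"Aderyn:  {len(ADERYN)}/{len(ALL)}")   # 4/8
print("Missed by both:", sorted(ALL - SLITHER - ADERYN))
```

Note that on this particular sample, Aderyn's detections are a subset of Slither's — its value shows up in the false-positive numbers of Round 3 rather than in unique detections.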

Round 3: False Positive Rate

This is where Aderyn shines.

Against the 12-contract set:

| Tool | Total findings | True positives | False positives | FP rate |
| --- | --- | --- | --- | --- |
| Slither | 147 | 89 | 58 | 39.5% |
| Aderyn | 62 | 51 | 11 | 17.7% |

Slither's broader detector set means more noise. The `reentrancy-benign` and `reentrancy-events` detectors alone generated 23 findings that were technically correct but practically irrelevant.

Aderyn's tighter detector set means you spend less time triaging. Alert fatigue is the silent killer of security tooling adoption.
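
One way to claw back some of that triage time is to drop the noisy detector IDs before review. A sketch, assuming Slither's `--json` layout of `results.detectors[].check` — verify the shape against your version's actual output:

```python
import json
import os

# Detector IDs that produced technically-correct but irrelevant findings here.
NOISY = {"reentrancy-benign", "reentrancy-events"}

def triage(report: dict) -> list:
    """Drop findings whose detector ID is in NOISY.

    Assumes Slither's --json shape: {"results": {"detectors": [{"check": ...}]}}.
    """
    findings = report.get("results", {}).get("detectors", [])
    return [f for f in findings if f.get("check") not in NOISY]

if os.path.exists("slither-output.json"):
    with open("slither-output.json") as fh:
        print(f"{len(triage(json.load(fh)))} findings after triage")
```

The same idea works with Slither's built-in `--exclude` flag; a post-processing script just makes the suppression list auditable in your repo.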

Verdict: Aderyn wins convincingly.

Round 4: Ecosystem Integration

Slither:

  • Native Foundry/Hardhat/Brownie support
  • slither-check-erc for ERC compliance
  • slither-mutate for mutation testing
  • Printers for call graphs, inheritance trees, data flow
  • GitHub Actions available
  • Python API for custom detectors

Aderyn:

  • Foundry-native (reads foundry.toml)
  • Markdown/JSON report output
  • GitHub Actions available
  • Rust API for custom detectors
  • Growing but smaller ecosystem

Slither's ecosystem is deeper. The printers alone (call-graph, human-summary, inheritance-graph) are invaluable during manual review.

Verdict: Slither wins on ecosystem depth.

Round 5: Custom Detector Authoring

Slither custom detector — write Python, inherit from AbstractDetector, implement _detect(). Access to Slither's IR (SlithIR) with SSA form and data flow analysis.

```python
from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification

class MyDetector(AbstractDetector):
    ARGUMENT = "my-detector"
    HELP = "Detects my pattern"
    IMPACT = DetectorClassification.HIGH
    CONFIDENCE = DetectorClassification.HIGH

    def _detect(self):
        results = []
        for contract in self.compilation_unit.contracts_derived:
            for function in contract.functions:
                pass  # Your logic here using SlithIR
        return results
```

Aderyn custom detector — write Rust, implement the IssueDetector trait. Faster to compile and run, but you work with raw AST nodes rather than an IR.

```rust
impl IssueDetector for MyDetector {
    fn detect(&mut self, context: &WorkspaceContext) -> Result<bool, Box<dyn Error>> {
        for function in context.function_definitions() {
            // Your logic here using AST nodes
        }
        Ok(!self.found_instances.is_empty())
    }
}
```

For complex cross-function data flow analysis, Slither's IR is strictly more powerful. For simple AST pattern matching, Aderyn is cleaner.

Verdict: Tie — depends on complexity.

The Real Answer: Use Both

Here's the pipeline I recommend for any serious Solidity project:

```yaml
jobs:
  static-analysis:
    runs-on: ubuntu-latest
    steps:
      - name: Aderyn Quick Scan
        run: aderyn . --output report.md
      - name: Slither Full Scan
        run: slither . --json slither-output.json
      - name: Filter New Issues
        run: python3 scripts/diff-findings.py
```

Aderyn runs first as a fast gate. Slither runs second for deep analysis.
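
The filtering script's core is a set difference over normalized finding keys. A hedged sketch, assuming both reports have first been flattened to `{"file": ..., "line": ...}` records — neither tool emits exactly this shape out of the box, so a per-tool adapter is still needed:

```python
def normalize(findings: list) -> set:
    """Reduce findings to (file, line) keys for cross-tool comparison."""
    return {(f["file"], f["line"]) for f in findings}

def diff(aderyn: list, slither: list) -> dict:
    """Partition findings into shared and tool-unique sets."""
    a, s = normalize(aderyn), normalize(slither)
    return {"both": a & s, "aderyn_only": a - s, "slither_only": s - a}

# Illustrative records, not real tool output.
aderyn_findings = [{"file": "Vault.sol", "line": 42}]
slither_findings = [{"file": "Vault.sol", "line": 42},
                    {"file": "Proxy.sol", "line": 7}]
result = diff(aderyn_findings, slither_findings)
print(result["slither_only"])  # {('Proxy.sol', 7)}
```

Keying on (file, line) is deliberately coarse: the two tools name the same issue differently, so matching on titles would undercount the overlap.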

The overlap is roughly 70%. The remaining 30% is where running both pays off:

  • Slither catches storage collisions, cross-contract reentrancy, and complex data flow issues that Aderyn misses
  • Aderyn's lower false-positive rate means its unique findings are more likely real bugs

What Neither Tool Catches

Both tools fundamentally do single-contract or single-project static analysis. They cannot detect:

  1. Economic attacks — Flash loan vectors, oracle manipulation, liquidity pool imbalance
  2. Cross-protocol composability risks — Emergent behavior across three layers of DeFi Legos
  3. Governance timing attacks — Voting power manipulation via token borrowing
  4. Upgrade proxy storage conflicts — Real-world proxy patterns often defeat static analysis
  5. MEV-specific vulnerabilities — Sandwich attacks, JIT liquidity, backrunning

For these, you need:

  • Fuzzing: Echidna, Medusa, or Foundry's built-in fuzzer
  • Formal verification: Certora Prover, Halmos, or Kontrol
  • Economic simulation: Agent-based modeling with custom tooling
  • Runtime monitoring: Forta, Hypernative, or custom mempool watchers

Static analysis is Layer 1 of your security stack. Essential, but not sufficient.
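
To make the first point concrete, here is a toy model — my own construction, not any audited protocol's code — of the donation pattern both tools missed in Round 2. The bug lives in the share-price arithmetic, not in any syntactic pattern a detector can match:

```python
class ToyVault:
    """Minimal share-based vault illustrating donation / first-depositor math."""

    def __init__(self):
        self.total_assets = 0
        self.total_shares = 0

    def deposit(self, amount: int) -> int:
        # First depositor mints 1:1; later deposits mint pro rata,
        # with integer division rounding down.
        if self.total_shares == 0:
            shares = amount
        else:
            shares = amount * self.total_shares // self.total_assets
        self.total_assets += amount
        self.total_shares += shares
        return shares

    def donate(self, amount: int) -> None:
        # Direct token transfer: assets rise, but no shares are minted.
        self.total_assets += amount

vault = ToyVault()
attacker_shares = vault.deposit(1)    # attacker holds the only share
vault.donate(10_000)                  # inflates the price per share
victim_shares = vault.deposit(5_000)  # 5_000 * 1 // 10_001 == 0
print(victim_shares)  # 0 — the victim's deposit accrues to the attacker
```

Every line of this is valid, well-typed arithmetic; only economic reasoning about who ends up owning the assets reveals the exploit, which is exactly why it falls outside static analysis.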

Recommendations by Use Case

| Scenario | Recommendation |
| --- | --- |
| CI pipeline for a fast-moving team | Aderyn (speed + low FP rate) |
| Pre-audit deep review | Slither (breadth + printers) |
| Legacy contract (Solidity <0.8) | Slither (better pre-0.8 detection) |
| Custom detector development | Both (depends on complexity) |
| Quick sanity check | Aderyn |
| Maximum coverage | Both in sequence |

Bottom Line

Neither tool is obsolete. Neither is sufficient alone. The real vulnerability isn't in your choice of static analyzer — it's in thinking one tool is enough.

Run both. Fuzz after. Verify what matters. Then get a human to look at the business logic.

That's where the $100M bugs actually live.


Part of the DeFi Security Tooling series.
