So you want to hunt bugs for a living. Maybe you've seen the headlines — $10M payouts on Immunefi, white hats earning more than most dev salaries in a single report. But you have no idea where to start.
I've been on both sides: as an auditor reviewing hundreds of submissions, and as a researcher submitting findings. This is the guide I wish existed when I started.
Why Immunefi?
Immunefi is the dominant bug bounty platform for Web3. As of early 2026:
- $100M+ paid out to white hats
- 350+ active bounty programs
- Median critical payout: $50,000-$100,000
- Largest single payout: $10M (Wormhole)
Compared to traditional platforms like HackerOne or Bugcrowd, the payouts in Web3 are 10-100x higher. The tradeoff: the bugs are harder to find, and the competition is fierce.
Prerequisites: What You Need to Know
Must-Have Skills
- Solidity proficiency — You need to read smart contracts fluently. Not just syntax; understand the EVM, storage layout, gas optimization patterns
- DeFi fundamentals — AMMs, lending protocols, staking, bridges, oracles. Know how they work architecturally
- Common vulnerability classes — Reentrancy, access control, oracle manipulation, rounding errors, front-running
- Testing frameworks — Foundry (preferred) or Hardhat. You need to write PoCs
Nice-to-Have Skills
- Formal verification basics (Certora, Halmos)
- Cross-chain messaging (LayerZero, Wormhole, Axelar)
- MEV and transaction ordering attacks
- Vyper, Rust (Solana/CosmWasm programs)
Minimum Setup
# Install Foundry
curl -L https://foundry.paradigm.xyz | bash
foundryup
# Clone a target protocol
git clone https://github.com/[target-protocol]
cd [target-protocol]
forge build
forge test
If you can't get a protocol's test suite running, you're not ready to audit it.
Step 1: Choose Your First Target
Don't start with Aave or Uniswap. Those codebases have been picked over by hundreds of experienced auditors. Instead:
Look for:
- New programs — Just launched on Immunefi, <2 weeks old
- Recent code changes — Check the GitHub for recent commits and PRs
- Moderate TVL ($1M-$50M) — Big enough to pay well, small enough that top hunters aren't camping
- Complexity sweet spots — Protocols with custom math (vaults, staking, options) rather than simple token contracts
- Protocols you *use* — If you're a DeFi power user, you already understand the intended behavior
Avoid:
- Programs with $0 TVL and unclear funding
- Protocols that haven't updated their scope in months
- Anything where the maximum bounty wouldn't cover the time you'd have to invest
Step 2: Reconnaissance
Before reading code, build context:
Documentation Deep Dive
- Read the protocol's docs end-to-end
- Understand the token flow: where does money come in, where does it go out?
- Map every role: admin, operator, user, keeper, liquidator
- Identify external dependencies: oracles, bridges, other protocols
Code Reconnaissance
# Get a feel for the codebase
find src -name '*.sol' | wc -l # How many contracts?
cloc src/ # Lines of code
grep -r 'external\|public' src/ | wc -l # External attack surface
grep -r 'onlyOwner\|onlyAdmin' src/ # Access control points
grep -r 'transfer\|call\|send' src/ # Money movement
Build a Mental Model
Draw the architecture. Seriously. Box-and-arrow diagrams of:
- Contract interactions
- Money flows
- State transitions
- Trust boundaries
I use a plain text file:
User -> Vault.deposit() -> Vault stores shares
Vault -> Strategy.invest() -> Strategy deploys to Aave
Keeper -> Vault.harvest() -> pulls profits, updates share price
User -> Vault.withdraw() -> burns shares, returns assets
TRUST BOUNDARIES:
- Keeper is semi-trusted (can trigger harvest but not steal)
- Strategy is fully trusted by Vault
- Oracle is external trust dependency
Step 3: Systematic Code Review
Don't just randomly read code. Use a systematic approach:
Pass 1: Follow the Money
Trace every path that moves tokens:
- Deposits → where do tokens go?
- Withdrawals → where do tokens come from?
- Fees → who collects, how calculated?
- Liquidations → what triggers, what's the math?
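To make this concrete, here is a deliberately minimal, hypothetical vault (ToyVault is illustrative, not taken from any real protocol) with its money-in, money-out, and accounting points annotated. On a real target you trace and annotate the actual code the same way:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
    function balanceOf(address account) external view returns (uint256);
}

// Toy vault mirroring the share-math pattern real vaults use
// (including its well-known first-depositor pitfalls).
contract ToyVault {
    IERC20 public immutable asset;
    uint256 public totalShares;
    mapping(address => uint256) public shares;

    constructor(IERC20 _asset) {
        asset = _asset;
    }

    function totalAssets() public view returns (uint256) {
        return asset.balanceOf(address(this));
    }

    function deposit(uint256 assets) external returns (uint256 minted) {
        // ACCOUNTING: shares are minted at the share price *before* the transfer
        minted = totalShares == 0 ? assets : assets * totalShares / totalAssets();

        // MONEY IN: tokens move from the caller into the vault
        require(asset.transferFrom(msg.sender, address(this), assets), "transfer failed");

        shares[msg.sender] += minted;
        totalShares += minted;
    }

    function withdraw(uint256 burned) external returns (uint256 assets) {
        // ACCOUNTING: shares are burned at the current share price
        assets = burned * totalAssets() / totalShares;
        shares[msg.sender] -= burned;
        totalShares -= burned;

        // MONEY OUT: tokens leave the vault to the caller
        require(asset.transfer(msg.sender, assets), "transfer failed");
    }
}
Every arrow in your Step 2 diagram should map onto a line like the ones marked MONEY IN / MONEY OUT; if it doesn't, either your model or the code is wrong.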
Pass 2: Access Control Audit
For every external and public function:
- Who can call this?
- What's the worst case if a malicious actor calls this?
- Can this be called in unexpected states?
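As a sketch of the kind of finding this pass surfaces (the contract and names below are hypothetical), here is a function the developers probably assumed only their keeper would call:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
}

contract RewardDistributor {
    IERC20 public immutable rewardToken;
    uint256 public pendingRewards;

    constructor(IERC20 _rewardToken) {
        rewardToken = _rewardToken;
    }

    // Accrual details elided for the example.
    function notifyReward(uint256 amount) external {
        pendingRewards += amount;
    }

    // Who can call this? Anyone: there is no onlyKeeper or onlyOwner modifier.
    // Worst case for a malicious actor? They choose `to` and sweep all pending rewards.
    // Unexpected states? Callable before rewards accrue, or repeatedly in the same block.
    function distribute(address to) external {
        uint256 amount = pendingRewards;
        pendingRewards = 0;
        require(rewardToken.transfer(to, amount), "transfer failed");
    }
}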
Pass 3: Math Verification
For every arithmetic operation:
- Can this overflow/underflow? (Even with Solidity 0.8+, explicit downcasts truncate silently)
- Does division round correctly?
- Can intermediate values exceed uint256?
- Is the order of operations right for precision? (a * b / c vs a / c * b; see the sketch below)
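A minimal sketch of the rounding, ordering, and casting questions (hypothetical contract, hypothetical numbers):
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

contract MathExamples {
    // Order of operations: divide-before-multiply throws away precision.
    // With shares = 5, totalAssets = 1000, totalShares = 3:
    //   divideFirst:   5 / 3 * 1000 = 1 * 1000 = 1000
    //   multiplyFirst: 5 * 1000 / 3 = 1666 (still rounds down; decide who eats the dust)
    // Note: multiply-first can overflow the intermediate product, which reverts in 0.8+.
    function divideFirst(uint256 shares, uint256 totalAssets, uint256 totalShares) external pure returns (uint256) {
        return shares / totalShares * totalAssets;
    }

    function multiplyFirst(uint256 shares, uint256 totalAssets, uint256 totalShares) external pure returns (uint256) {
        return shares * totalAssets / totalShares;
    }

    // Casting: 0.8+ checks arithmetic but NOT explicit downcasts.
    // Any value above type(uint128).max is silently truncated here.
    function unsafeDowncast(uint256 x) external pure returns (uint128) {
        return uint128(x);
    }
}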
Pass 4: Integration Boundaries
For every external call:
- What if the external contract is malicious?
- What if it reverts?
- What if it returns unexpected values?
- Reentrancy possibilities?
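For example, the classic pattern this pass catches is an external call made before state is updated. A hypothetical illustration:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

contract NativeVault {
    mapping(address => uint256) public balances;

    function depositETH() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdrawETH() external {
        uint256 amount = balances[msg.sender];

        // What if the receiver is malicious? Its fallback runs here and can
        // call withdrawETH() again while balances[msg.sender] is still unchanged.
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed"); // and what happens upstream if this reverts?

        // State update after the external call: the reentrancy window.
        balances[msg.sender] = 0;
    }
}
The fix is checks-effects-interactions (zero the balance before the call) or a reentrancy guard. Either way, the PoC you write in Step 4 is what proves the window is actually exploitable.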
Step 4: Writing a PoC
Found something suspicious? Don't submit a report yet. Write a proof-of-concept first.
A Foundry test is the gold standard:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "forge-std/Test.sol";
import {IERC20} from "forge-std/interfaces/IERC20.sol";
import "../src/Vault.sol";

contract ExploitTest is Test {
    Vault vault;
    IERC20 token; // the vault's underlying asset; point at the real token (fork) or a local mock
    address attacker = makeAddr("attacker");
    address victim = makeAddr("victim");

    function setUp() public {
        vault = new Vault(); // adjust constructor args, or fork and attach to the live deployment

        // Setup initial state: an honest depositor whose funds are at risk
        deal(address(token), victim, 100e18);
        vm.startPrank(victim);
        token.approve(address(vault), 100e18);
        vault.deposit(100e18);
        vm.stopPrank();
    }

    function testExploit() public {
        uint256 vaultBalanceBefore = token.balanceOf(address(vault));

        // Execute exploit
        vm.startPrank(attacker);
        // ... exploit steps ...
        vm.stopPrank();

        // Verify impact: the vault lost funds and the attacker holds them
        uint256 vaultBalanceAfter = token.balanceOf(address(vault));
        assertLt(vaultBalanceAfter, vaultBalanceBefore);
        console.log("Stolen:", vaultBalanceBefore - vaultBalanceAfter);
        console.log("Attacker balance:", token.balanceOf(attacker));
    }
}
Run it:
forge test --match-test testExploit -vvvv
If you can't write a PoC, you probably don't have a real bug. Most rejected submissions on Immunefi are theoretical issues without working exploits.
Step 5: Writing the Report
Your report structure should be:
Title
Clear, specific, and scary (but accurate).
- ❌ "Potential issue with deposit function"
- ✅ "First depositor can steal 99% of subsequent deposits via share inflation attack"
Severity Assessment
Use Immunefi's severity classification:
- Critical — Direct theft of funds, permanent freezing of funds
- High — Theft under specific conditions, temporary freezing, governance manipulation
- Medium — Griefing, gas issues, minor accounting errors
- Low — Best practice violations, informational
Description
- Summary — One paragraph explaining the bug and impact
- Vulnerability Detail — Technical explanation with code references
- Impact — Dollar value estimation with realistic assumptions
- Proof of Concept — Your working Foundry test
- Recommendation — How to fix it (this earns goodwill)
Pro Tips for Reports
- Include line numbers and contract addresses
- Show your math for economic impact
- Explain the attack sequence step by step
- Address potential counterarguments ("this isn't exploitable because..." — explain why it is)
- Be professional, not dramatic
Step 6: After Submission
Expect Delays
- Triage: 1-3 days
- Review: 1-4 weeks
- Payout: 1-8 weeks after acceptance
Handle Rejections Gracefully
Most first submissions get rejected. Common reasons:
- Not a real vulnerability (theoretical only)
- Out of scope
- Already known
- Duplicate
- Insufficient impact
Don't argue. Learn from the feedback, improve your methodology, and submit a better report next time.
Escalation
If you genuinely believe a valid report was rejected unfairly, Immunefi has a mediation process. Use it sparingly and professionally.
Common Mistakes (I Made Them All)
1. Submitting Without a PoC
I cannot stress this enough. No PoC = probable rejection.
2. Overestimating Severity
Not every bug is critical. A rounding error that loses 1 wei per transaction is Low, not Critical. Accurate severity builds credibility.
3. Shotgun Approach
Don't submit 20 low-quality reports hoping one sticks. One well-researched critical > twenty speculative mediums.
4. Ignoring the Scope
Read the bounty scope document carefully. Many programs exclude certain contracts, chains, or vulnerability types.
5. Not Reading Previous Audit Reports
Check if the protocol has been audited before. Read those reports. Understand what was already found — and look for things the auditors missed.
Building Your Reputation
Public Profile
- Contribute to open-source security tools
- Write about your findings (after disclosure)
- Participate in audit contests (Code4rena, Sherlock, Cantina)
- Share educational content
Skills Development
- Week 1-4: Complete Damn Vulnerable DeFi
- Month 2-3: Study past exploit postmortems (rekt.news)
- Month 3-6: Enter audit contests
- Month 6+: Start submitting to Immunefi bounties
Track Record
Immunefi shows your paid bounties on your profile. Each accepted submission makes the next one easier. Protocols notice repeat contributors.
Realistic Expectations
- First 3 months: Lots of learning, probably zero payouts
- Month 3-6: Maybe a Medium finding ($1K-$10K)
- Month 6-12: First High or Critical ($10K-$100K)
- Year 2+: Consistent income if you're good and persistent
Most people quit in month 2. The ones who don't quit are the ones who earn six figures.
Resources
- Immunefi Platform
- Damn Vulnerable DeFi
- Foundry Book
- SWC Registry — Smart Contract Weakness Classification
- Rekt News — Exploit postmortems
- Solodit — Searchable audit finding database
We're Hashlock — we audit the protocols that host these bounties. If you're building in DeFi and want your codebase hardened before the bounty hunters arrive, get in touch.
Already hunting? What was your first bounty experience like? Share in the comments — the community learns from every story, successful or not.