Open source bounty systems — from Web3 audit contests to traditional bug bounties — share a single structural flaw that corrupts nearly every platform in the ecosystem: the entity deciding whether to pay is the same entity that benefits from not paying. This conflict of interest, combined with AI-generated spam, platform collapses, and extreme earnings inequality, has created a system where skilled developers and security researchers routinely perform high-value work for free. The evidence spans every major platform and reveals not isolated bad actors but a systemic pattern of value extraction dressed up as opportunity.
The judge is the defendant: why every platform has the same problem
The core failure is architectural. In virtually every bounty system — HackerOne, Bugcrowd, Immunefi, Code4rena, Algora — the bounty poster unilaterally decides whether to pay. There is no binding contract, no neutral arbiter, and no meaningful legal recourse for the researcher. Bug bounty terms are written with phrases like "at the company's sole discretion," and platforms consistently side with their paying customers over their unpaid labor force.
Katie Moussouris, who helped create the Pentagon's bug bounty program and co-authored the ISO vulnerability disclosure standards, has publicly called this "an inherent conflict of interest." Andrew Crocker, Senior Staff Attorney at the EFF, noted that bounty program terms "give the company leeway to determine 'in its sole discretion' whether a researcher has met the criteria." Immunefi's own blog acknowledges: "We've seen some less-than-healthy behavior from projects, and whitehats end up feeling like they don't have any negotiating power and that they should be happy to just accept whatever payment they're offered."
The tactics for avoiding payment are remarkably consistent across platforms. Companies retroactively narrow scope to exclude reported vulnerabilities. They mark critical findings as "informational" to avoid payouts. They claim bugs were "already known" — without producing evidence or timestamps. They silently patch reported vulnerabilities and then deny the report was valid. When Immunefi removed a project for failing to pay a minimum of $500,000 in bounties, the whitehats still received nothing; Immunefi has no enforcement mechanism beyond removal.
The power asymmetry extends to silencing. Bugcrowd classifies all submissions as "Confidential Information of the Program Owner" by default. Researchers who go public risk suspension or permanent bans — as Trust Security discovered when Immunefi suspended them for 90 days after they criticized a bounty decision publicly. Jonathan Leitschuh, who discovered a critical Zoom vulnerability, declined the bounty because the NDA would have prevented him from ever discussing the flaw: "A lot of these programs are structured on this idea of non-disclosure. What I end up feeling like is that they are trying to buy researcher silence." Bruce Schneier shared academic research showing "legal agreements surrounding vulnerability disclosure muzzle researchers while allowing companies to not fix the vulnerabilities."
A wall of shame worth $2.5 million in unpaid Web3 bounties
The Web3 bounty ecosystem, anchored by Immunefi and Code4rena, demonstrates these failures at scale. The Bug Bounty Wall of Shame — a community-maintained ledger of projects that "rugged" security researchers — documents nearly $2.5 million in unpaid bounties across dozens of projects.
The cases follow a pattern. Arbitrum advertised a $2 million maximum bounty; when a whitehat found a vulnerability putting 352,000 ETH (~$680 million) at risk, they received only 25% of the maximum. Cronos had $2.5 million at risk; after the project silently fixed the bug before even responding to the report, the researcher received $1,600 as a "token of appreciation." dHEDGE, with $14.44 million at risk, paid $500 in "goodwill." Magic Link ignored a vulnerability affecting $10 million in user funds through eight reminders over a month, paid nothing, then publicly announced the patch as a "new security feature." GhostMarket ignored Immunefi's mediation order entirely and went silent.
Immunefi's explicit "No Fix, No Pay" policy creates a perverse incentive: projects can acknowledge a critical vulnerability and still avoid payment by choosing not to fix it. The platform admits this is "an eternally frustrating experience for whitehats."
One anonymous researcher documented "14 weeks in Immunefi limbo" on Medium after discovering a critical vulnerability in a project advertising seven-figure payouts. The project closed the report within two days citing an irrelevant technical reason. After Immunefi mediation confirmed the report was valid, the project changed its rejection reason to "duplicate" — two months after submission. The researcher observed: "Beyond pausing a BBP, Immunefi actually has no enforcement mechanisms in place for most projects, and bad-faith actors clearly don't care about being booted from the platform."
Code4rena's competitive audit model compounds the problem with extreme earnings inequality. In 2023, the platform processed 31,512 bug submissions across 114 audits and paid out $4,823,059 total. Of more than 10,000 registered wardens, only 1,323 earned anything — meaning roughly 87% earned zero. The average earning warden took home $3,646 for the year. cmichel, the first person to earn $1 million on Code4rena, watched his hourly rate drop from $2,000 to $500 as competition increased from ~10 to ~59 wardens per contest. "This is great for the sponsors who receive an insane amount of value," he wrote, "but bad for the auditors as they all compete for the same pot." Zellic, which acquired Code4rena in 2024, eventually admitted the economics "make more sense as a public good rather than a rent-seeking business."
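The inequality in those figures follows directly from arithmetic on the numbers the platform itself published (the 10,000-warden count is the lower bound stated above):

```python
# Back-of-the-envelope check of the Code4rena 2023 figures cited above.
total_payout = 4_823_059      # USD paid across all 114 audits in 2023
earning_wardens = 1_323       # wardens who earned anything at all
registered_wardens = 10_000   # "more than 10,000" -- a lower bound

share_zero = 1 - earning_wardens / registered_wardens
avg_per_earner = total_payout / earning_wardens

print(f"{share_zero:.0%} of registered wardens earned nothing")
print(f"average annual take for an earning warden: ${avg_per_earner:,.0f}")
```

Because 10,000 is a floor, the 87% figure is itself conservative; the true share earning zero is slightly higher.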
AI slop made the economics completely untenable
On January 31, 2026, Daniel Stenberg shut down curl's bug bounty program after 6.5 years, 87 confirmed vulnerabilities, and over $100,000 paid to researchers. The reason: AI-generated submissions had overwhelmed the program's volunteer security team. The confirmed vulnerability rate plummeted from above 15% to below 5% — "not even one in twenty was real." By July 2025, submission volume had spiked to eight times the normal rate. In the first 21 days of January 2026 alone, 20 submissions arrived with zero valid vulnerabilities.
The reports weren't obviously bad. They featured walls of perfectly formatted text, plausible-sounding technical language, and references to functions that simply don't exist. Stenberg wrote: "We are effectively being DDoSed," and later: "The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live."
The economic mechanism is simple. Previously, generating a credible security report required time, skill, and deep codebase knowledge — a natural quality filter. AI reduced the submission cost to near zero while leaving the triage cost unchanged or higher, since AI-generated reports look more plausible and take longer to debunk. This is a classic market failure: when the cost of filing claims drops to zero but the cost of evaluating claims stays constant, the system collapses.
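That failure mode can be put in a toy model. The volume multiplier and validity rates below mirror the curl figures reported above (8x submission volume, validity falling from above 15% to below 5%); the two hours of triage per report is an illustrative assumption, not measured data:

```python
# Toy model of the triage economics described above: submission cost falls
# to near zero, triage cost per report stays flat, so maintainer burden
# explodes even as the share of useful reports collapses.

def triage_hours(submissions_per_month, valid_rate, hours_per_report=2.0):
    """Return (total triage hours, hours spent on invalid reports)."""
    total = submissions_per_month * hours_per_report
    wasted = total * (1 - valid_rate)
    return total, wasted

# Before: modest volume, >15% of reports valid.
before_total, before_wasted = triage_hours(10, 0.15)
# After AI: 8x the volume, <5% valid.
after_total, after_wasted = triage_hours(80, 0.05)

print(f"before: {before_total:.0f}h/month, {before_wasted:.0f}h wasted on invalid reports")
print(f"after:  {after_total:.0f}h/month, {after_wasted:.0f}h wasted on invalid reports")
```

Under these assumptions the monthly triage load grows eightfold while hours wasted on invalid reports grow roughly ninefold — which is why the cost lands entirely on the volunteer side of the ledger.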
Curl was not alone. Django's Security Team documented receiving AI-fabricated reports "on a nearly daily basis." Node.js dealt with a 19,000-line AI-generated pull request and imposed minimum reputation scores. libxml2's sole maintainer ended support for embargoed vulnerability reports entirely, citing the "unsustainable burden of handling security triage as an unpaid volunteer." CycloneDX pulled its bounty program after receiving "almost entirely AI slop reports." Apache Log4j's maintainer reported reviewing 67 submissions since July 2024, half arriving in the final two months. tldraw began auto-closing all external pull requests in January 2026.
Mitchell Hashimoto's Ghostty terminal emulator moved to invitation-only contributions, requiring that all contributors be vouched for by existing trusted members. His assessment: "It's a fucking war zone out here man. Maintainer morale at an all time low." RedMonk analyst Kate Holterhoff coined the term "AI Slopageddon" to describe the phenomenon. An Oxford Academic paper studying COVID-19's supply shock on Bugcrowd — a period when submissions increased 151% — documented the same dynamic at smaller scale; AI-generated submissions are a far larger version of that shock.
HackerOne and Bugcrowd: the traditional platforms are failing too
Traditional security bounty platforms exhibit the same structural failures with additional layers of institutional dysfunction. Tommy DeVoss, one of HackerOne's highest-earning researchers with over $2 million earned, stated flatly: "Historically, mediation with HackerOne has been worthless." John Jackson of Sakura Samurai described submitting a vulnerability to Ford, watching it get fixed, being ignored for months on disclosure requests, and then getting banned from Ford's program when he pushed back. Patrick Martin had a plaintext credential disclosure go 10 months without response or payment despite being silently fixed.
In February 2022, HackerOne froze $50,000 in already-earned bounties from Russian researcher Anton Subbotin after the Ukraine invasion — for work already completed, discussed, and patched. CEO Mårten Mickos initially tweeted the funds would be donated to UNICEF without the researcher's consent. In January 2026, researcher Jakub Ciolek reported that HackerOne completely ghosted him on an $8,500 payout from the Internet Bug Bounty program for months; the company responded only after The Register published the story. HackerOne's IBB program has since paused submissions entirely.
The earnings data reveals stark inequality behind the platform's marketing. HackerOne touts $81 million in annual payouts and "hacker millionaires," but only about 1.6% of registered accounts produce valid work, and only 12% of researchers earn $20,000 or more annually. The median per-bounty payment is approximately $800. One researcher tracked 782 hours over 150 days and earned $5,650 — an hourly rate of roughly $7.22, below minimum wage in most Western countries. A critical RCE vulnerability that pays $500–$5,000 on bounty platforms could fetch $50,000–$500,000 on the gray market — a gap of two to three orders of magnitude. Laurens Van Houtven (lvh) of Latacora put it bluntly: "HackerOne has weaponized triage... Their business model is misery."
Apple's program drew particular criticism. Researcher RenwaX23 discovered a Universal Cross-Site Scripting vulnerability in Safari rated Critical at 9.8/10, capable of impersonating users and accessing iCloud. Apple paid $1,000 — from a program advertising payouts up to $2 million. The Washington Post interviewed more than two dozen researchers who complained about Apple's pattern of slow fixes, limited feedback, and non-payment.
Bountysource proved platforms can simply steal the money
The Bountysource collapse is the clearest illustration of how bounty systems can fail catastrophically. The platform, which facilitated bounties across 55,000 GitHub issues for projects including BorgBackup, elementary OS, Nextcloud, and Nim, was acquired by cryptocurrency company CanYa in 2017 and then sold to The Blockchain Group in 2020.
On June 16, 2020, Bountysource emailed users announcing a new Terms of Service with a critical clause: any bounty unclaimed for two years would be "retained by Bountysource." The change was retroactive, with only two weeks' notice. After backlash forced a reversal, trust was destroyed — elementary OS published "Goodbye, Bountysource" and withdrew entirely.
The final collapse was quieter. By mid-2023, payouts stopped entirely. Bountysource went silent on all communication. The Blockchain Group filed for bankruptcy in November 2023. Investigative reporting by Evan Boehs documented at least $21,000 in stolen developer earnings — money for work already completed. The NewPipe project lost approximately €6,400 in accumulated bounties. Users on GitHub issue #1586 — titled "CRITICAL: Bountysource is Insolvent, do not use!" — called the behavior "abuse of the escrow, essentially an embezzlement" and discussed filing with French financial authorities.
Boehs observed: "There has been shockingly little discussion of this event. The community quietly accepted their loss, and the voices of developers who lost thousands of dollars were never amplified."
The maintainer-as-judge problem enables quiet self-dealing
Beyond platform-level failures, bounty systems create perverse dynamics at the project level. In September 2023, Wasmer's CEO used Algora to post a $5,000 bounty on the Zig project's repository without consulting maintainers, triggering multiple developers to start duplicate work simultaneously. Zig's community manager responded: "We're not going to let startups burn our contributor community so that they can squeeze one extra tweet out of their moat-building efforts." The Zig team later published a landmark critique arguing that development bounties "foster competition at the expense of cooperation," transfer all risk to workers, and "penalize any form of thoughtfulness in favor of reckless action."
Developer Valentin Chmara documented a pattern on the Orama project: "The maintainer eventually closed every related PR, including mine, and shipped the fix internally." A review of 23 crypto bounty programs found systemic use of Contributor License Agreements as gates — developers sign away code rights, their PR gets closed without merge, and their code may appear in the product anyway. Academic research on Bountysource confirms the pattern: issues with bounties attached close at lower rates, and more slowly, than issues without them.
What a fair bounty system would actually require
A handful of models point toward solutions. The FreeBSD Foundation acts as an intermediary, taking money from companies wanting a feature, finding a qualified contractor, and ensuring quality review — exclusive assignment rather than competition. Opire charges zero fees to developers, placing all platform costs on bounty creators. Immunefi's "The Bug Bounty Program Is Law" rule holds, at least on paper, that projects cannot retroactively change terms and that minimum payouts for critical bugs are binding. Zellic now runs Code4rena contests at zero platform fee, acknowledging the old model was extractive.
The structural reforms needed are clear from the evidence. Escrow would prevent companies from receiving vulnerability information before committing funds. Independent arbitration by named third parties would eliminate the judge-is-the-defendant problem. Binding scope would prevent retroactive exclusions. Duplicate splitting — dividing bounties among researchers who independently find the same bug within a reasonable window — would compensate parallel work. Mandatory disclosure rights after 90 days would prevent indefinite NDAs from buying silence. Exclusive assignment for development bounties would eliminate the wasteful battle-royale dynamic and allow collaboration.
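Of those reforms, duplicate splitting is concrete enough to specify directly. A minimal sketch, where the seven-day window and all names are assumptions for illustration, not any platform's actual policy:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the "duplicate splitting" reform described above:
# researchers who independently report the same bug within a fixed window
# split the bounty, instead of all-but-the-first receiving nothing.

DUPLICATE_WINDOW = timedelta(days=7)  # assumed window; a real policy would fix this

def split_bounty(bounty_usd, report_times):
    """Return the payout per researcher for reports of the same bug.

    report_times: one submission timestamp per researcher. Everyone who
    reported within DUPLICATE_WINDOW of the first report shares the bounty
    equally; later reports earn nothing.
    """
    first = min(report_times)
    eligible = [t for t in report_times if t - first <= DUPLICATE_WINDOW]
    share = bounty_usd / len(eligible)
    return [share if t - first <= DUPLICATE_WINDOW else 0.0 for t in report_times]

t0 = datetime(2026, 1, 1)
payouts = split_bounty(30_000, [t0, t0 + timedelta(days=2), t0 + timedelta(days=30)])
print(payouts)  # [15000.0, 15000.0, 0.0]
```

The point of the window is to reward genuinely parallel discovery without letting late reports dilute the pool after a fix is already in flight.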
The quiet extraction engine behind the opportunity narrative
The bounty ecosystem functions as a remarkably efficient mechanism for extracting high-value labor at below-market rates. A program paying $100,000 per year in bounties replaces $500,000+ of professional penetration testing. The platform takes a 20% cut. Researchers split the remainder for work that, if sold on the gray market, would be worth orders of magnitude more. HackerOne's own data shows top earners in India earn 16x the median local software engineer salary — the same absolute payouts stretch much further against lower local wages, which lets researchers in lower-income countries profitably accept amounts that would be unsustainable elsewhere, creating a global race to the bottom on pricing.
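The extraction arithmetic in that paragraph, laid out explicitly (all inputs are the illustrative figures from the text above):

```python
# The extraction arithmetic described above, using the figures cited in the text.
bounty_spend = 100_000        # one program's annual bounty payouts
pentest_equivalent = 500_000  # cost of comparable professional testing
platform_cut = 0.20           # platform's share of payouts

researcher_take = bounty_spend * (1 - platform_cut)
company_savings = pentest_equivalent - bounty_spend

print(f"researchers collectively receive: ${researcher_take:,.0f}")
print(f"company saves vs. professional testing: ${company_savings:,.0f}")
print(f"value delivered per dollar researchers keep: {pentest_equivalent / researcher_take:.2f}x")
```

Under these numbers, researchers as a group keep $80,000 while delivering half a million dollars of testing value, before accounting for the gray-market premium on individual findings.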
Sixty-three percent of ethical hackers report having withheld security flaws they discovered, most often because of threatening legal language. Parsia Hakimian, a senior offensive security engineer, compared the economics to "multilevel marketing operations, where very few people make most of the money while the rest don't make much at all." Legal experts told CSO Online that bounty platforms likely violate California AB 5 and the federal Fair Labor Standards Act by treating researchers as independent contractors when they meet the criteria for employees.
The system persists because the marketing works. Headlines about hacker millionaires and $81 million annual payouts obscure the reality that the median researcher earns poverty-level wages for highly skilled work, has no legal protection, cannot build a public portfolio due to NDAs, and can have completed work rejected without explanation or recourse. As one Hacker News commenter who spent seven years in the bug bounty community summarized: "The problem is that the majority of companies don't act in good faith. Even when you have something fully exploitable and valid, they will many times find some way to not pay you or lower the severity to pay you very little."
Conclusion
The evidence across every major bounty platform — Immunefi, Code4rena, HackerOne, Bugcrowd, Bountysource, Algora — reveals not a collection of individual failures but a single structural defect replicated across the entire ecosystem. When the party deciding payment is the party that benefits from non-payment, exploitation is not a bug but the equilibrium state. AI slop has accelerated the collapse by making it economically impossible for volunteer-maintained programs to process submissions, but the underlying power asymmetry existed long before large language models. The bounty economy's fundamental promise — that talented individuals can earn fair compensation for valuable security and development work — requires structural reforms that no major platform has yet been willing to implement fully: escrow, binding contracts, independent arbitration, and the elimination of the judge-as-defendant model. Until those changes arrive, bounty systems will continue to function as what they demonstrably are: mechanisms for transferring risk and extracting labor from the people who can least afford to absorb it.