DEV Community

Aamer Mihaysi

Vulnerability Research Is Cooked: What AI Just Did to Security


Thomas Ptacek didn't mince words. "Vulnerability research is cooked."

Not dying. Not changing. Cooked.

The same tools that help you write code can now find bugs in it. And they're getting better faster than security teams can adapt.

This isn't another "AI will transform cybersecurity" thinkpiece. This is about what's already happening, what's breaking, and what comes next.


What Changed

For decades, vulnerability research was a specialized craft.

You needed:

  • Deep knowledge of memory layouts, calling conventions, and ABI quirks
  • Patience to read disassembly and trace execution paths
  • Intuition for where bugs hide
  • Time—lots of time—to find the one exploitable path among thousands

This was a human-scale problem. The difficulty curve kept researchers rare and valuable.

AI collapsed that curve.

Large language models can now:

  • Read source code and identify suspicious patterns
  • Generate test cases that stress edge conditions
  • Explain what a vulnerability looks like in context
  • Suggest exploitation strategies
  • Write proof-of-concept exploits

Not perfectly. Not autonomously. But at a scale and speed that changes the economics entirely.
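To make the scale argument concrete, here is a deliberately naive sketch of the cheapest rung on that ladder: a pattern-based first pass over source code. A real AI-assisted tool reasons about context rather than matching regexes, so this is a toy stand-in, not any actual product; the patterns and function name are illustrative.

```python
import re

# Toy first-pass scanner: flags source patterns that commonly signal bugs.
# Illustrative only -- an LLM-based tool reasons about context; this regex
# pass just shows how cheap the first triage step has become.
SUSPICIOUS_PATTERNS = {
    r"\bstrcpy\s*\(": "unbounded copy (possible buffer overflow)",
    r"\bgets\s*\(": "gets() is unsafe by design",
    r"\beval\s*\(": "eval on dynamic input (possible code injection)",
}

def first_pass(source: str) -> list[tuple[int, str]]:
    """Return (line number, finding) pairs for suspicious lines."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in SUSPICIOUS_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, reason))
    return findings

print(first_pass("buf = input()\neval(buf)"))
```

Anyone can run something like this across an entire dependency tree in seconds. The point isn't that regexes find real vulnerabilities; it's that each rung above this one is getting nearly as cheap.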


The Ptacek Thesis

Ptacek's argument, distilled:

  1. The cost of finding vulnerabilities dropped — AI-assisted tools find bugs faster than manual review

  2. The cost of writing exploits dropped — LLMs can draft working exploits from vulnerability descriptions

  3. The supply of vulnerabilities is spiking — More people can find more bugs with less expertise

  4. The demand side can't keep up — Not every bug that's found gets fixed. Patch cycles lag discovery.

  5. The incentives are misaligned — Bug bounties pay for bugs found, not for bugs never introduced

The result: vulnerability research as a specialized discipline becomes economically unsustainable.


What This Means for Security Teams

If you're running security at an organization, the assumptions just shifted.

Old model: Hire a few security researchers. They find bugs. You fix them. Good enough.

New model: Assume bugs are being found constantly by people you've never heard of, using tools you can't control.

This changes priorities:

  • Shift left harder — Find bugs before they ship, because post-shipment discovery is now constant
  • Automated patching infrastructure — Your patch cadence needs to match discovery cadence
  • Attack surface reduction — Every exposed API is now an AI target
  • Defense in depth — Assume any single vulnerability will be found and exploited

The window between "bug introduced" and "bug exploited" just collapsed.


What This Means for Bug Bounty Programs

Bug bounties were designed for a world where finding bugs was hard.

You paid researchers for their expertise and time. The economics made sense: a $10,000 bounty was cheaper than a breach.

When anyone with an LLM can find bugs, bounty programs become arbitrage opportunities.

  • Run AI tools against a target
  • Submit findings at scale
  • Collect payouts

This isn't hypothetical. It's already happening. Programs are seeing:

  • Higher volume of submissions
  • Lower signal-to-noise ratio
  • More duplicate findings
  • Researchers using AI tools they won't disclose

The bounty model needs to adapt or collapse.


What This Means for Open Source

Open source has always lived on trust. You use a library because you trust the maintainers.

That trust model assumed manual review.

When AI can scan every commit for vulnerabilities, every dependency becomes a potential attack surface.

Maintainers already struggle with:

  • Limited time
  • No security budget
  • Demanding users

Now add: "AI-powered vulnerability scanners finding bugs in your code faster than you can review PRs."

The Axios supply chain attack used individually targeted social engineering. But the next generation of supply chain attacks won't need social engineering—they'll just find the bugs that exist.


What Survives

Not everything breaks.

Human expertise still matters for:

  • Novel vulnerability classes (AI trains on known patterns)
  • Complex multi-stage exploits (AI struggles with long reasoning chains)
  • Business logic flaws (context-dependent, not pattern-matched)
  • Adversarial research (finding bugs in AI defenses)

But the baseline work—the tedious scanning, the pattern matching, the first-pass analysis—that's automating.

Security researchers become security architects. Vulnerability hunters become exploit mitigators.


The Strategic Shift

Organizations that adapt will:

  1. Invest in AI-resistant defenses — sandboxing, capability reduction, monitoring
  2. Reduce attack surface aggressively — fewer exposed services, smaller blast radius
  3. Automate everything between discovery and patch — manual remediation is too slow
  4. Assume constant compromise — detection and response over prevention
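Point 3 above can be sketched as a computed remediation queue: findings go in, patch order comes out, and no human hand-picks what gets fixed first. The `Finding` fields and scoring weights below are invented for illustration, under the assumption that discoveries arrive as structured records.

```python
import heapq
from dataclasses import dataclass, field

# Sketch of an automated discovery-to-patch queue. The scoring weights and
# record fields are illustrative assumptions, not a standard.
@dataclass(order=True)
class Finding:
    priority: float
    bug_id: str = field(compare=False)

def score(severity: float, internet_exposed: bool, age_days: int) -> float:
    """Lower is more urgent: severity dominates, exposure and age break ties."""
    urgency = severity * (2.0 if internet_exposed else 1.0) + age_days * 0.1
    return -urgency  # heapq is a min-heap, so negate

queue: list[Finding] = []
heapq.heappush(queue, Finding(score(9.8, True, 0), "BUG-A"))
heapq.heappush(queue, Finding(score(5.0, False, 30), "BUG-B"))
heapq.heappush(queue, Finding(score(7.5, True, 2), "BUG-C"))

# In a real pipeline, each pop would trigger an automated remediation job
# (dependency bump, config change, staged rollout) rather than a ticket.
order = [heapq.heappop(queue).bug_id for _ in range(len(queue))]
print(order)
```

The design choice that matters is the last comment: when discovery is constant, a popped finding should kick off a remediation job, not land in a human's backlog.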

Organizations that don't will:

  • Treat AI as "just another tool" while their exposure grows
  • Pay bug bounties that fund AI-assisted discovery operations
  • Discover vulnerabilities in incident reviews instead of proactive scans

The Takeaway

Ptacek's right. Vulnerability research as we knew it is cooked.

The craft wasn't killed by better researchers. It was killed by scale. AI didn't make finding bugs impossible—it made finding bugs so easy that the craft lost its scarcity value.

The security industry now has two jobs:

  1. Build systems that survive constant vulnerability discovery
  2. Figure out what security looks like when anyone can find bugs

We're not prepared for either.


The next vulnerability in your code is already being found by someone's LLM. The question isn't whether they'll find it. It's whether you'll have fixed it by then.
