DEV Community

Valerii Vainkop

Vibe Coding Is Having Its Maker Movement Moment

In the first 21 days of 2026, 20% of all submissions to cURL's public bug bounty were AI-generated.

Not one found a real vulnerability.

Daniel Stenberg, the creator and maintainer of cURL, shut the program down. Mitchell Hashimoto banned AI-generated code from Ghostty entirely. Steve Ruiz closed all external pull requests to tldraw.

These are not fringe reactions. These are some of the most respected engineers in open source — people who have spent years actively welcoming contributions — drawing a line.

If you're paying attention to the "vibe coding" conversation, this week's data is the most concrete signal yet of what that era actually produces in the wild.

And I've seen this before. Not with AI — but the pattern is familiar.


The Maker Movement Ran This Play First

In 2013, the maker movement was at peak energy. 3D printers, Arduino boards, Raspberry Pis, laser cutters. "Everyone can build" was the headline across every tech publication. Open-source hardware was going to decentralize manufacturing. Startups were going to come out of garages with products that competed with factories.

The prototypes were impressive. The community energy was real. The tooling genuinely got cheaper and more accessible.

But here's what actually happened: most maker projects stayed in the "cool prototype" category indefinitely. The gap between a functional prototype and a shippable product — regulatory compliance, manufacturing tolerances, supply chain, support infrastructure — remained exactly as wide as it always was. The real beneficiaries of the maker movement weren't the makers. They were the hardware vendors. Filament companies, PCB fabs, tooling platforms, Kickstarter, and later Hackaday and Adafruit as media properties. The ecosystem grew. The number of shipped products stayed small.

Vibe coding is following the same arc. It's just happening faster.


What Karpathy Actually Said

Andrej Karpathy coined "vibe coding" in early 2025. By December, he was describing something qualitatively different — he said models gained "significantly higher quality, long-term coherence and tenacity" and can now "push through problems" in a way they couldn't before.

He's not wrong. Something did shift in the December 2025 timeframe. Claude Sonnet 4.6, the OpenAI o3 family, Cursor's cloud agents — they're operating at a level that would have been genuinely surprising 18 months ago.

Cursor reports that 35% of their own internal pull requests are now generated by their coding agents. GitHub just shipped self-review for Copilot — agents reviewing their own output before it reaches human reviewers. These are real capabilities. The tools are better.

And the cURL maintainer isn't seeing better bug reports. He's seeing more noise.

Both things are true simultaneously, and that's the tension worth understanding.


Why the Quality Bar Didn't Move

The tools getting better at generating code doesn't automatically move the bar for what counts as a useful contribution.

Open source contribution has always had a quality funnel. Most PRs get closed without merging. Most bug reports turn out to be user error. Most feature requests describe the reporter's specific problem, not the project's actual direction. The ratio of signal to noise has always been poor — that's a known, managed cost of running a public project.

What AI coding tools did is dramatically increase throughput at the top of the funnel without increasing the amount of signal. The extra volume is almost entirely noise, so the signal rate drops, and maintainers spend more time on triage for the same number of meaningful contributions.
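The funnel arithmetic is simple enough to sketch. Every number below is made up; only the shape of the result matters:

```python
# Illustrative funnel arithmetic -- all figures here are hypothetical.
genuine = 5                  # real, actionable contributions per month (fixed)
triage_minutes = 15          # reviewer time to evaluate one submission

for total in (100, 500):     # submission volume before vs. after AI tooling
    signal_rate = genuine / total
    triage_hours = total * triage_minutes / 60
    print(f"{total} submissions: {signal_rate:.0%} signal, {triage_hours:.0f}h triage")
```

Five times the volume, one fifth the signal rate, five times the triage hours, and exactly the same number of useful contributions at the end.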

That's the maker movement problem. 3D printers made it easier to produce physical things. They didn't make it easier to produce physical things that anyone would want to buy.

There's a missing variable in both cases: judgment. The judgment to know which bug is real, which feature belongs in the project's scope, which architectural choice will survive production. That judgment is not encoded in the tool. It lives in the person using the tool.


The Signal Problem in Practice

For teams shipping production systems, this plays out differently than it does in open source — but the underlying dynamic is the same.

AI coding agents are getting very good at producing code that passes tests, passes linters, and looks reasonable in code review. What they're not good at yet is understanding the implicit contracts that hold a system together — the unwritten rules about what this function is actually used for, why this timeout exists, why that retry loop is bounded the way it is.

Those implicit contracts are often not written anywhere. They live in the post-mortem from 18 months ago, in the Slack thread from the migration, in the comment that got deleted because someone thought it was obvious.

When an agent refactors code, it operates on what's visible. It's often correct about the visible parts. It's frequently wrong about the invisible ones.

The result is code that looks right, tests that pass, and an incident six weeks later that traces back to a boundary condition nobody documented.
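As a concrete illustration of such an implicit contract, here is a hypothetical bounded retry wrapper (the function names and the bound are invented for this post). The bound encodes an operational lesson that lives in a post-mortem, not in the code:

```python
import time

# Hypothetical example. The bound of 3 is not arbitrary: in this scenario,
# an unbounded retry once amplified a downstream outage into a retry storm.
# That lesson is recorded in a post-mortem, not in the code itself.
MAX_RETRIES = 3

def call_with_retries(fetch, max_retries=MAX_RETRIES):
    """Call fetch(), retrying transient failures a bounded number of times."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise                        # bounded by design: give up
            time.sleep(0.1 * 2 ** attempt)   # exponential backoff between tries
```

An agent that refactors this into a "cleaner" retry-until-success loop would pass every happy-path test, and the regression would surface only under the failure conditions the bound was protecting against.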


What Actually Works

I'm not arguing against AI coding tools. I use them. They're genuinely useful for what they're good at — and understanding the boundary of "what they're good at" is the whole game.

Here's what I've found useful for teams integrating AI-generated code into production workflows:

Flag agent-generated code for an extra review pass. Not a full audit — just a focused check on boundary conditions, error handling, and interactions with other services. The stuff that tests won't catch.

A simple GitHub Actions step that labels PRs containing AI-generated code (based on commit metadata or PR description conventions) helps route these to the right reviewer:

# .github/workflows/label-ai-prs.yml
name: Label AI-generated PRs

on:
  pull_request:
    types: [opened, edited]

# Adding labels requires write access to PRs/issues
permissions:
  pull-requests: write
  issues: write

jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - name: Check for AI authorship marker
        id: check
        # Pass the PR body via env rather than interpolating it into the
        # script, so untrusted PR text can't inject shell commands
        env:
          BODY: ${{ github.event.pull_request.body }}
        run: |
          if echo "$BODY" | grep -qi "\[ai-generated\]\|generated by claude\|generated by copilot\|generated by cursor"; then
            echo "is_ai=true" >> "$GITHUB_OUTPUT"
          fi

      - name: Add label
        if: steps.check.outputs.is_ai == 'true'
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              labels: ['ai-generated', 'needs-boundary-review']
            })

This doesn't slow down the workflow. It just ensures the right context reaches the reviewer.

Enforce test coverage on agent-modified files. Agents are good at writing tests when you ask. Make it structural, not optional:

#!/usr/bin/env bash
# pre-commit hook: enforce coverage on AI-touched Python files
# Place in .git/hooks/pre-commit and chmod +x

# Collect staged Python files (coverage.py only measures Python)
STAGED=$(git diff --cached --name-only --diff-filter=AM | grep -E '\.py$')

if [ -z "$STAGED" ]; then
  exit 0
fi

# coverage expects a comma-separated list of file patterns
INCLUDE=$(printf '%s\n' "$STAGED" | paste -sd, -)

echo "Running coverage check on modified files..."
coverage run -m pytest tests/ -q

# Take the last column of the report's TOTAL row; keep only the integer part
# so the numeric comparison below doesn't choke on values like "85.3"
COVERAGE=$(coverage report --include="$INCLUDE" | tail -1 | awk '{print $NF}' | tr -d '%' | cut -d. -f1)

if [ "$COVERAGE" -lt 80 ]; then
  echo "Coverage $COVERAGE% is below 80% threshold on modified files."
  echo "If this code was agent-generated, add tests before committing."
  exit 1
fi

This catches the most common failure mode: agents that write code without writing the tests that would catch the edge cases.


The Production Gap Is Not a Tooling Problem

The maker movement plateau happened not because 3D printers stopped improving, but because the gap between prototype and product was never a tooling problem in the first place.

Shipping a product requires understanding users, managing supply chains, providing support, maintaining quality at scale. None of those things are solved by making it easier to produce an initial artifact.

The production gap in software is the same thing. Shipping a feature requires understanding the system it lives in, the users who depend on it, the failure modes that aren't visible in happy-path tests, and the operational burden it will create. None of those things are solved by making it easier to generate an initial implementation.

Vibe coding democratizes the prototype. The production gap remains.

That's not a pessimistic take. It's an accurate one. And it's actually good news if you're an engineer whose value is in closing that gap.

There are now far more vibe-coded things in the world that need someone who can evaluate them. Someone who reads post-mortems. Someone who has been on-call. Someone who knows why that retry logic looks weird and what it actually protects against.

The cURL maintainer's problem — more volume, same signal — is also an opportunity for the engineers who can distinguish one from the other.


What This Means for Engineering Teams Right Now

If you're leading a team that's adopting AI coding tools, a few things are worth establishing now rather than later:

Define what counts as "done" for agent-generated code. Tests passing is not done. Code review is not done. Done means the engineer who owns the code can explain its behavior under failure conditions.

Keep a signal log for AI-generated code in your codebase. Track which PRs had significant agent involvement, and track which ones produced incidents or required significant rework later. This data will tell you more about where AI tools are and aren't useful in your specific codebase than any benchmark.
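A minimal version of that log is just a list of records plus a few lines of aggregation. A sketch, with hypothetical field names and data:

```python
# Hypothetical signal log: one record per merged PR. In practice these
# records might come from PR labels, commit trailers, or a spreadsheet.
log = [
    {"pr": 101, "agent": True,  "incident": False},
    {"pr": 102, "agent": True,  "incident": True},
    {"pr": 103, "agent": False, "incident": False},
    {"pr": 104, "agent": True,  "incident": False},
    {"pr": 105, "agent": False, "incident": True},
]

def incident_rate(records, agent):
    """Fraction of PRs in a cohort that later produced an incident."""
    cohort = [r for r in records if r["agent"] == agent]
    hits = sum(r["incident"] for r in cohort)
    return hits / len(cohort)

print(f"agent-involved: {incident_rate(log, True):.0%}")
print(f"human-only:     {incident_rate(log, False):.0%}")
```

Even a crude split like this, accumulated over a few months, shows which parts of your codebase tolerate agent involvement and which parts keep generating rework.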

The implicit contracts problem doesn't disappear with better models. It gets smaller over time as agents get better at reading context — but it doesn't disappear. The human who understands the unwritten rules of a system remains essential.

The maker movement produced a lot of useful things. It also produced a lot of prototypes that taught their builders something valuable. Vibe coding will do both.

The engineers who know the difference will be fine.

