Vibe Coding Made Me 10x Faster. Here's the Dark Side Nobody Mentions.
I shipped two SaaS products this year using AI-assisted coding. It's incredible. It's also creating a mess most builders aren't talking about.
The promise is real
I'm not a trained developer. I'm 15 years into digital — design, SEO, agency work, now building my own products. Tools like Cursor and Claude changed everything for me. ListingVid (AI video generation for real estate agents) and EST8 (a modern real estate CRM) both exist because vibe coding made it possible for one person to do what used to require a team.
So when I say the productivity gains are real, I mean it. I'm not here to dunk on the trend.
But something is happening at the infrastructure layer that solo builders need to wake up to.
The open source crisis building underneath us
The numbers are stark. Daniel Stenberg shut down cURL's bug bounty program after AI-generated submissions hit 20% of total volume — mostly junk that wasted maintainer time. Mitchell Hashimoto banned AI code from Ghostty entirely. Steve Ruiz closed all external PRs to tldraw.
These aren't fringe projects. cURL runs on billions of devices. The people maintaining these tools are doing it on their own time, for free, because they care. And right now they're getting buried under a wave of AI slop from people who mean well but aren't thinking about the externalities.
Here's the math: vibe coding lowers the cost of submitting to near zero. But it doesn't lower the cost of reviewing. That cost lands entirely on the maintainer.
My take after shipping two products this way
The vibe coding revolution is real and it's not going away. But most people using it are one abstraction layer removed from understanding what they're building on top of.
Every npm package, every open source library in your stack — that's someone's evenings and weekends. When AI makes it trivial to generate a PR and submit it without real understanding, we shift burden from the builder to the maintainer. Scale that across millions of new vibe coders and you get what we're seeing now: maintainers quitting, projects closing, the commons eroding.
This isn't about AI being bad. It's about defaults. The default is to generate and ship. The better default is to generate, understand, test, and contribute thoughtfully.
3 things I do differently now
- Read before contributing. If I'm using an OSS project heavily, I actually read the CONTRIBUTING.md and issues before touching anything. Takes 10 minutes. Saves a maintainer hours.
- Test your AI output. Don't submit what the AI generated. Submit what you've verified works. There's a difference.
- Give back in ways that don't require review. Sponsor maintainers. Write docs. Triage issues. The open source economy needs more than code contributions.
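On the second point, "verified" means more than "it ran once." A minimal sketch of what that looks like in practice: take the AI-generated helper, then write your own tests for the edge cases the AI never mentioned before you ship or submit it. The `slugify` function here is a hypothetical stand-in for any generated code.

```python
import re

def slugify(title: str) -> str:
    """AI-generated helper: lowercase, hyphenate, strip punctuation."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Tests written by hand, covering cases the generated code never called out.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --spaces--  ") == "spaces"
# Surprise: non-ASCII letters get stripped, not transliterated. Now you
# know the behavior you're submitting, and can decide if it's acceptable.
assert slugify("Déjà vu") == "d-j-vu"
```

Writing the assertions yourself is the point: it forces you to state what the code should do, independent of what the model happened to produce.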
The builders who win long-term aren't the ones who extract the most from the ecosystem. They're the ones who understand it well enough to give back.
Vibe code your product. Don't vibe away the infrastructure it runs on.
What's your take on responsible AI-assisted contribution? Drop it in the comments — @lmoncany on X if you want to continue the thread.
Top comments (1)
Great article. I’ve been experimenting a lot with vibe coding recently and the productivity boost is definitely real. But I’ve also noticed the same dark side you mention: AI can generate features much faster than we can validate the systems around them.
A concrete example I ran into was testing email flows in CI/CD.
When AI generates signup flows, password reset logic, OTP verification, etc., the code often looks correct. But when you try to test it in automated pipelines, you suddenly realize that handling emails in a deterministic way is surprisingly hard.
Most teams either mock the email layer entirely (so the real flow never gets exercised) or point tests at a shared real inbox, which is slow and flaky.
That problem actually pushed me to build a programmable temporary email infrastructure specifically for automated testing and AI agents — basically giving tests a real inbox they can create, read, and destroy via API.
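A minimal, runnable sketch of that create/read/destroy lifecycle, using an in-memory stand-in so there's no dependency on any real email provider. Every name here (`InboxClient` and its methods) is hypothetical, not a real vendor's API; the point is the shape of the test.

```python
import re

class InboxClient:
    """In-memory stand-in for a programmable temp-email API."""
    def __init__(self):
        self._inboxes = {}

    def create_inbox(self) -> str:
        addr = f"test-{len(self._inboxes)}@example.test"
        self._inboxes[addr] = []
        return addr

    def deliver(self, addr: str, body: str) -> None:
        # With a real provider, this happens when the app under test sends mail.
        self._inboxes[addr].append(body)

    def wait_for_message(self, addr: str) -> str:
        # A real client would poll with a timeout; here we just pop the message.
        return self._inboxes[addr].pop(0)

    def destroy_inbox(self, addr: str) -> None:
        del self._inboxes[addr]

def extract_otp(body: str) -> str:
    return re.search(r"\b(\d{6})\b", body).group(1)

# CI test shape: create inbox -> trigger flow -> read OTP -> clean up.
client = InboxClient()
addr = client.create_inbox()
client.deliver(addr, "Your verification code is 493021.")
otp = extract_otp(client.wait_for_message(addr))
assert otp == "493021"
client.destroy_inbox(addr)
```

The key property is determinism: each test owns its inbox for its whole lifetime, so parallel CI runs can't read each other's mail.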
What I find interesting is that vibe coding accelerates application code, but it also exposes how important reliable developer infrastructure becomes.
Curious how other teams here are handling email confirmation / OTP testing in CI pipelines when using AI-generated code.