I build fast. Like most founders using Bolt, Lovable, and Cursor -
I ship first and think later.
Last month I pushed 40+ commits to my SaaS.
I had no idea what was actually breaking with each one.
Not until I built something to tell me.
The problem with vibe coding at speed
When you're prompting an AI to build your app, you're not reading
every line it writes. Nobody is. That's the point.
But here's what happens in practice:
- Commit 1: AI adds auth. Looks fine.
- Commit 7: AI refactors a helper. Accidentally exposes an API route.
- Commit 23: AI installs a package. It has 3 known CVEs.
- Commit 31: AI adds logging. Now you're logging user emails to console.
You don't see any of this. Your users might.
What I tried first
I ran my app through the usual suspects:
- Lighthouse - Told me my performance score. Useful, but it's a snapshot; it doesn't tell me which commit caused the regression.
- Snyk - Great for dependency CVEs. Misses everything else.
- GitHub Dependabot - Only catches packages with known CVEs. Silent on everything the AI introduces structurally.
- Manual PR review - I'm a solo founder. Who am I reviewing with?
None of these answered the one question I actually cared about:
What did my last push break?
What I actually needed
After every commit, I want to know:
- Did this push introduce a new security issue?
- Did my score go up or down vs the last scan?
- What are the top 3 things I should fix right now?
Not a 200-line SAST report. Not a generic "you have 47 warnings."
Just - what changed, what broke, what do I fix first.
What VibeDoctor showed me
I built VibeDoctor (vibedoctor.io) to answer exactly this.
Here's what a real scan of my own app surfaced:
Security: 3 Anthropic API keys and a Stripe token committed
to test files. Low severity because they're test files - but
still embarrassing.
Performance: LCP 4.3s. TTI 8.3s. My own landing page
was loading like it was 2009.
Code Health: 1,575 total issues. 71 Critical. 2 Blockers.
Vibe Coding Health Score: 7/100. CRITICAL.
That last one stings because it's my own product.
But the part that changed how I work?
The push scan.
Every time I push a commit, VibeDoctor runs automatically and
shows me a before/after:
Score before push: 71 | Score after push: 64
New issues introduced: 4 | Fixed: 1
Still open: 847
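The before/after comparison itself is just set logic over issue fingerprints from two scans. A minimal sketch of the idea (the fingerprint strings are made up for illustration; this isn't VibeDoctor's internals):

```python
def push_diff(before: set[str], after: set[str]) -> dict[str, int]:
    """Compare issue fingerprints from scans taken before and after a push."""
    return {
        "new": len(after - before),        # introduced by this push
        "fixed": len(before - after),      # resolved by this push
        "still_open": len(before & after), # carried over, untouched
    }

# Hypothetical fingerprints for a small repo:
before = {"secret:stripe-key", "perf:lcp", "route:no-auth"}
after = {"perf:lcp", "route:no-auth", "query:n+1", "dep:cve-123"}
print(push_diff(before, after))  # {'new': 2, 'fixed': 1, 'still_open': 2}
```

The hard part in practice is making fingerprints stable across commits so a moved line doesn't count as one fix plus one new issue; the diff itself is trivial.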
That's the thing no other tool was giving me. Not just
"here's your health score" - but "here's what THIS push did."
Why this matters for vibe coders specifically
When you're using Bolt or Lovable, you're not the one writing
the code. You're directing it.
That means bugs don't look like bugs. They look like features.
AI doesn't hallucinate in obvious ways. It hallucinates in
subtle ones - importing packages that don't exist, leaving
console.logs with sensitive data, building SQL queries from
raw user input.
These aren't the things Lighthouse catches. These aren't even
things a senior dev reviewer always catches on a fast PR.
You need something scanning specifically for what AI coding
tools tend to get wrong.
The 5 things AI coders miss most often
After scanning dozens of vibe-coded repos, here's what comes
up constantly:
- Hardcoded secrets - API keys, tokens, passwords in source code
- Hallucinated imports - packages the AI invented that don't exist
- Exposed API routes - endpoints with no auth that AI forgot to protect
- N+1 queries - database calls inside loops that will destroy you at scale
- Dependency CVEs - AI picks popular packages, not necessarily safe ones
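The N+1 one deserves a concrete picture, because it reads as perfectly idiomatic code. A minimal sqlite3 sketch with a hypothetical users/orders schema (the loop version is the pattern to catch; the join is the fix):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 9.99), (2, 1, 5.00), (3, 2, 20.00);
""")

# N+1: one query for the users, then one query PER user.
# With 100 users that's 101 round trips.
totals_slow = {}
for uid, name in conn.execute("SELECT id, name FROM users"):
    row = conn.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
        (uid,),
    ).fetchone()
    totals_slow[name] = row[0]

# The fix: one join, one round trip, same result.
totals_fast = dict(conn.execute("""
    SELECT u.name, COALESCE(SUM(o.total), 0)
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
"""))

assert totals_slow == totals_fast  # identical totals, 1 query instead of N+1
```

On localhost with two rows you'll never notice the difference, which is exactly why it ships.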
If you're building on Bolt, Lovable, Cursor, or v0 - run a scan
before you show it to anyone.
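And if you want a zero-dependency sanity check for the first item on that list before any scan, even a crude pattern match catches the worst offenders. A sketch (these regexes are illustrative approximations of common key formats, not a real scanner's rules):

```python
import re

# Illustrative patterns only -- real scanners use far more, plus entropy checks.
SECRET_PATTERNS = {
    "anthropic_key": re.compile(r"sk-ant-[A-Za-z0-9_\-]{20,}"),
    "stripe_key": re.compile(r"sk_(live|test)_[A-Za-z0-9]{16,}"),
}

def find_secrets(source: str) -> list[str]:
    """Return the names of any secret patterns found in a file's text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(source)]

# A "test file" with a hardcoded token -- exactly the mistake above.
# (This is Stripe's published public test key, not a real secret.)
leaky = 'STRIPE_KEY = "sk_test_4eC39HqLyjWDarjtT1zdp7dc"'
print(find_secrets(leaky))  # ['stripe_key']
```

It won't catch everything, but it would have caught my three Anthropic keys.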
Try it
VibeDoctor is free to scan with. Sign up at vibedoctor.io, connect
your GitHub repo, and get your score in under 5 minutes.
If your vibe coding health score is above 60, I'll be genuinely
impressed.
Most aren't.