I built an AI debate platform solo in one week.
100 AI personas, daily auto-generated global debate topics, 72-badge gamification, i18n, a bot runner, cron jobs, an admin panel. All of it. All through vibe coding.
Let me be honest — vibe coding is what made it possible. I'm not here to trash it.
But after 100+ commits and real production experience, I've hit enough walls to talk about what vibe coding doesn't tell you — especially if you're trying to build something beyond a weekend project.
What I Mean by Vibe Coding
Describing what you want to an AI (Claude Code, Cursor, Copilot), iterating on the output, and shipping — without necessarily reading every line you deploy.
Andrej Karpathy's original framing was about liberation: just say what you want and let the AI figure it out. And for going from zero to something that actually works, it genuinely delivers.
The problem starts the moment that "something" needs to be maintained.
Limit 1: The AI Doesn't Know What It Doesn't Know
When I asked Claude Code to write my Supabase RLS policies, the output looked perfect and passed local tests. The issue only appeared in production — a permission error triggered when the bot runner executed in a specific pattern.
The AI had no idea how my bot runner worked. It wrote correct code for the scenario I described, not for the full system it couldn't see.
The pattern I kept hitting: AI produces locally correct code that's globally wrong. It solves the problem you described — not necessarily the problem you actually have.
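To make the pattern concrete, here's a hypothetical sketch (not my actual RLS policy, and the names are invented): a permission rule written for exactly the scenario I described, which silently denies the caller I forgot to mention.

```typescript
// A permission check written for the described scenario (interactive users).
// The bot runner was never part of that description, so it falls through
// to a deny-by-default branch — locally correct, globally wrong.
type Caller =
  | { kind: "user"; userId: string }
  | { kind: "bot"; botId: string };

function canComment(caller: Caller): boolean {
  // Exactly what I asked for: logged-in users may comment.
  if (caller.kind === "user") return caller.userId.length > 0;
  // The scenario the AI was given never mentioned bots.
  return false;
}
```

Nothing here is buggy in isolation. The bug is in what the prompt left out.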
Limit 2: Refactoring Debt Accumulates Fast
Vibe coding is additive by nature. Need a feature? Add it. Bug? Patch it.
By commit 60, my codebase had:
- 3 slightly different patterns for the same Supabase auth check
- API routes that were 80% identical but never abstracted
- A component with 12 props because adding one more was always easier than restructuring
When I asked the AI to "clean this up," it would fix the file I showed it — and leave the four other files doing the same thing in slightly different ways untouched.
Vibe coding doesn't refactor. It accumulates.
Fixing this required me to read the code myself, map the patterns, and give precise instructions. Which is fine — but it's not vibe coding anymore.
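The actual cleanup, once I'd mapped the patterns, mostly meant collapsing the three auth variants into one shared helper. A rough sketch, with hypothetical names:

```typescript
// One canonical auth check, replacing three slightly different inline
// versions. Every API route calls this instead of re-implementing its
// own null/empty-session variant.
type Session = { userId: string | null };

function requireUser(session: Session): string {
  if (!session.userId) {
    throw new Error("Unauthorized");
  }
  return session.userId;
}
```

The point isn't the helper itself — it's that the AI never proposed it, because no single prompt ever showed it all three variants at once.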
Limit 3: You Can't Debug What You Don't Understand
This one hurt the most. And it was the most embarrassing.
My bot runner started producing duplicate comments intermittently. The AI had written the execution phases (opinions → likes → comments → attacks), and I deployed it without fully understanding the flow.
When the bug appeared, I had no mental model of the code. I knew what it did. I didn't know how.
Debugging with the AI looked like this:
- Describe the symptom
- AI proposes a fix
- Fix doesn't work or breaks something else
- Repeat
I ran this loop for over an hour. Each suggestion from the AI was plausible. Some actually fixed something — but something else broke. I was going deeper into the hole, and my trust in the codebase was hitting the floor.
What finally fixed the bug was stopping all prompting entirely and spending 20 minutes reading the code from scratch. I mapped out which phase ran in what order, where an API call could duplicate. The cause became obvious. The fix took 5 minutes.
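The fix itself was tiny. In spirit (this is a sketch with invented names, not my production code), it made comment posting idempotent per bot, debate, and phase, so a re-entered phase couldn't post twice:

```typescript
// Idempotency guard: one key per (bot, debate, phase). A phase that
// runs twice can't post the same comment twice.
const posted = new Set<string>();

function tryPostComment(botId: string, debateId: string, phase: string): boolean {
  const key = `${botId}:${debateId}:${phase}`;
  if (posted.has(key)) return false; // duplicate: this phase already posted
  posted.add(key);                   // first time: record it and allow the post
  return true;
}
```

In a real system you'd enforce this with a unique constraint in the database rather than in-memory state — but the shape of the fix is the same, and it only became visible after reading the phase flow end to end.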
Instead of asking the AI "why does this bug exist," I should have read the code properly from the start.
Those 20 minutes made the previous hour a complete waste.
If you can't debug it without the AI, you don't own it.
Limit 4: Early Architecture Decisions Get Locked In
When I set up i18n with next-intl, I quickly chose the [locale] dynamic segment approach. The AI scaffolded everything and it worked.
Three weeks later, when I needed server components to access locale in a specific way, I realized the architecture already had opinions baked in that I hadn't consciously chosen — I'd just accepted the AI's first reasonable answer.
Changing it would mean touching 40+ files.
The AI optimizes for making the current feature work — not for the architecture you'll want in three months.
This isn't a criticism of the AI. It did exactly what I asked. The problem is that "make this work" and "design this well" are different requests, and vibe coding defaults to the first one.
Limit 5: The Context Window Is a Silent Killer
Every new conversation starts fresh. The AI doesn't remember the 47 decisions you made last week.
I eventually created a CLAUDE.md file in the repo — a project state document the AI reads at the start of every session. Here's what actually went into it:
- Stack rules: "TailwindCSS v4 only — no separate CSS files"
- Architecture decisions: "All AI calls must use Gemini → GPT-4o Mini failover"
- Deploy rules: "Always run pnpm build locally before pushing"
- DB migration method: "Supabase CLI has auth issues — use Management API directly"
- Gotchas: "Turn off VPN before running the bot runner (can't reach talkwith.chat)"
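One of those rules — the model failover — is worth a sketch. This is a hypothetical wrapper, not my actual client code:

```typescript
// Failover wrapper: try the primary model call, and if it throws,
// retry the same request against the fallback model.
async function withFailover<T>(
  primary: () => Promise<T>,
  fallback: () => Promise<T>,
): Promise<T> {
  try {
    return await primary();
  } catch {
    return await fallback();
  }
}
```

In the real project the primary is the Gemini call and the fallback is GPT-4o Mini; the sketch leaves out logging and retry limits. Writing the rule down in CLAUDE.md is what kept every new AI call going through a wrapper like this instead of calling one provider directly.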
It helped. Reading this document at the start of every session sharply reduced contradictions with earlier decisions. But keeping CLAUDE.md up to date required discipline of its own. Early on, I'd forget to update it, the AI would make choices that contradicted things we'd "already decided," and I'd catch it two days later in a code review.
So I added a separate history.md. If CLAUDE.md is "here's how the project works now," history.md is "here's what we did and why." Having the AI read both at session start cut down repeated mistakes noticeably.
One more thing that actually worked: using Claude Code's Todo feature aggressively. Before starting any task, I'd have the AI write a checklist, then check off each step as it finished. The AI always knew where it was in the flow — which meant far less "going back to something we already finished" on long tasks. The longer the task, the bigger the payoff.
Vibe coding assumes continuity the AI can't provide. You have to build that continuity yourself — in documents.
What Vibe Coding Is Actually Great At
I don't want to end on a sour note, because vibe coding genuinely changed what's possible for solo developers.
- Fast prototyping: From idea to working UI in hours, not days
- Boilerplate elimination: Auth flows, CRUD APIs, form validation — the AI handles it and I move on
- Staying unblocked: When I don't know the right API or pattern, I get a working answer in 30 seconds
- Confidence to try things: The AI makes the learning curve nearly flat, so I built features I'd never have attempted alone
TalkWith.chat exists because of vibe coding. Shipping an AI platform with 100+ features solo in a week wasn't really possible before.
The Honest Summary
Vibe coding makes building 10x faster.
But maintenance, debugging, and long-term evolution run at 0.5x — unless you actively compensate for the limits.
The developers I've seen struggle most with vibe coding treat it as a complete replacement for engineering judgment. The ones who thrive treat it like a very fast junior developer: incredible output speed, needs direction, can't own the system.
The system still has to be owned by you.
I built TalkWith.chat solo. It's live — 100 AI personas debating global topics every day. All the chaos and lessons are going into this series.