DEV Community

Ava Barron

Vibe with Code: Plan First, Build Second

The most expensive mistake engineers make with AI isn't the code. It's skipping the plan.

Software engineering has never been about writing code first. It's about understanding the problem, exploring approaches, analyzing tradeoffs, then building. But when AI enters the workflow, all of that disappears. We collapse the process into one prompt, the AI fills in every blank on its own, and we call it productivity.

Rate limiting? The AI picks something. Auth strategy? It picks something. Pagination? It picks something.

Not necessarily what your system actually needs. But the code compiled, so it must be right, right?

Vibe coding works with structure

That structure is plan mode.

Engineers keep DMing me to ask how I build apps with AI that don't fall apart. The answer: I plan, and I use agentic teams that review each other's work. It's not one AI doing everything solo; it's a whole pipeline. This is how I get quality output without babysitting every line. This weekend, I took the time to walk through my actual process.

This is Part 1: the planning phase. The part most people skip, and the most expensive part to skip.

Plan mode is bringing your engineering discipline into the AI workflow. Same rigor as a design review or an architecture proposal, just applied to how you direct the AI. The tool changed. The engineering didn't.

The path: brain dump your idea → AI generates a full plan → you review it like a design doc → iterate until the architecture is tight. Same starting mess. Completely different result.

If you know spec-kit: GitHub's spec-kit automates this planning scaffold with /specify, /plan, and /tasks before implementation. This article shows what that discipline looks like under the hood, and where your engineering judgment still decides what the gates check and what standards they enforce. If you're already using spec-kit, skip to "Iterating on the plan."

Start With the Brain Dump

Before you plan, brain dump. Everything in your head: features, vibes, who it's for, random research you found at 2am, all of it. No filter. The mess is the raw material.

What I actually typed into Claude Code, in plan mode so I could show the whole process:

That's it. Messy. Incomplete. "idk what else" is literally in there. And that's the point. You're not trying to be polished yet. You're getting the idea out of your head and into a format the AI can work with.

The key move: This happens in plan mode. You're not just brain dumping into a notes app. You're telling the AI "take this and map it out." Brain dump and plan request in one shot.

The Initial Plan Drops

The AI doesn't just say "cool app idea." It comes back with a full Technical Design Document, the kind of artifact a senior engineer hands you in an architecture review.

It started like this…

# Tech Stack
| Layer    | Choice              | Why                                                     |
|----------|---------------------|---------------------------------------------------------|
| Backend  | Node.js + Fastify   | Native JSON Schema validation, fast, clean plugin model |
| ORM + DB | Prisma + PostgreSQL | Type-safe, great migration UX                           |
| Frontend | Next.js App Router  | SSR for public pages, client-side for app shell         |
| Auth     | Clerk               | OAuth + magic links out of the box                      |
| Charts   | Recharts            | Composable, works well in React                         |

Plus a full file/folder structure, a database schema with entity relationships, API endpoints, and an agentic build team: four agents across three phases.

What just happened: One prompt in plan mode → a full engineering spec. Tech stack, schema, APIs, build team, dependency graph. Without this step, the AI is making every decision for you. With it, you have something to review and update before a single line of code exists.
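To make "database schema with entity relationships" concrete, here's the kind of fragment a plan like this produces. This is my own illustration in Prisma syntax, not the actual schema from my plan; the model and field names are invented for an app that tracks entries, streaks, and a leaderboard:

```prisma
model User {
  id        String   @id @default(cuid())
  email     String   @unique
  timezone  String   @default("UTC") // IANA zone; streak boundaries depend on it
  entries   Entry[]
  createdAt DateTime @default(now())
}

model Entry {
  id        String   @id @default(cuid())
  userId    String
  user      User     @relation(fields: [userId], references: [id])
  score     Int
  createdAt DateTime @default(now())

  @@index([userId, createdAt]) // keeps per-user history queries fast
  @@index([score])             // leaderboard ordering
}
```

The point isn't these exact models. It's that the plan hands you something this specific to argue with before any migration runs.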

Stop

Now here's the part that matters more than anything else in this entire article.

After generating the TDD, the AI is ready to go. "Let's execute step 1." The plan is sitting right there. The momentum is pulling you forward. Every instinct you have is saying let's gooooo!

Do. Not. Touch. That. Enter. Key.

This is the moment. This is the whole game right here. The TDD looks impressive: tech stack tables, dependency graphs, four agents ready to build. It's giving senior architect energy. And that's exactly why you need to stop and read it!!!

The move: Copy the whole plan. Paste it into a Google Doc. Mark it up like it's a real design review, because it is one. You're the engineering lead now. That TDD is a proposal, not a mandate. Read it like an HLD from a junior engineer: respect the effort, but question every decision in it.

I read mine and caught something on the first pass: Clerk costs money. I wanted this build to be free but still production-safe.

Still in plan mode.

The tech stack decision changed before a single line of code was written. Say "yeah, go" here and two days later you're ripping auth out of every route handler. Instead: one prompt, zero code rewritten, because there was no code yet. That's the whole point.

Call out your constraints early. Want everything free and open-source? Say that now. Already committed to Vercel or AWS? Tell the AI so the architecture accounts for it. Know you want Postgres over Mongo? Flag it. The earlier these constraints live in the plan, the less you refactor later.

Iterating on the plan

You don't review once and ship the plan. You iterate. Each pass applies a different engineering lens: cost, security, architecture, standards.

This is where your actual engineering knowledge is important. The AI gave you the ingredients and a recipe. Your job is to taste it before you serve it. Nobody's grandma ever followed a recipe without adjusting the seasoning first.

My plan went through six iteration passes before I let a single line of code get written.

It went like this…

1. The Auth Swap

I immediately caught the plan said to use Clerk. Clerk costs money. I wanted this to be a free build. Stayed in plan mode, asked the AI for alternatives, landed on Better Auth. Free, self-hosted, same OAuth and magic link support. One prompt. Zero code rewritten. Because there was no code yet.

2. Adding Gate Agents

The initial plan had 4 builder agents but no reviewers. It had phases but nothing between "build it" and "ship it."

I added four review agents (CI, Security, UX, and Docs) to run between each phase.

They run as blocking gates between build phases. Next phase doesn't start until they sign off. Think of it like requiring approvals on a PR before it merges, but automated into the build plan itself.
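The mechanics are simple enough to sketch. Here's a minimal TypeScript model of blocking gates between build phases; the names and structure are my illustration of the idea, not how any agent tool implements it internally:

```typescript
type Gate = { name: string; check: () => boolean };
type Phase = { name: string; build: () => void; gates: Gate[] };

// Runs phases in order. A failing gate stops the pipeline, so the
// next phase never starts: the same semantics as a required PR
// approval blocking a merge, but baked into the build plan.
function runPipeline(phases: Phase[]): string[] {
  const log: string[] = [];
  for (const phase of phases) {
    phase.build();
    log.push(`built: ${phase.name}`);
    for (const gate of phase.gates) {
      if (!gate.check()) {
        log.push(`blocked by gate: ${gate.name}`);
        return log; // everything downstream is blocked
      }
      log.push(`passed gate: ${gate.name}`);
    }
  }
  return log;
}
```

In the real plan, `check` is "the Security agent signs off," but the control flow is exactly this: no sign-off, no next phase.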

3. Give Your Plan a Degree

If you already know the best practices for your stack, tell it exactly how you want to update the plan.

But if you're working in a stack where you have gaps, feed it engineering books relevant to the design: best-practices guides, books by the language's core authors, data structures and algorithms, and systems design books for your stack.

I copied my architecture into a different chat and asked which books I should be following for a TypeScript web app with this structure. It came back with *Effective TypeScript*, *Clean Architecture*, *Designing Data-Intensive Applications*, and a few others.

Pasted those recommendations back into plan mode. Now the AI isn't following generic best practices. It's following specific books that apply to what you're building.

Now the plan has an engineering degree.

4. User Review Pauses

I added structured checkpoints after each build phase for me to test. I updated the plan again so that it also gives me a full tutorial on how to test the features it built after each phase. Now when the agentic team pauses and asks me to review, I'm not guessing what to check. The testing guide is right there for me to follow step by step.
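For a sense of what those checkpoints look like, here's the shape of a phase testing guide. The steps are invented for illustration; yours will be specific to whatever the phase actually built:

```markdown
## Phase 2 review checklist (generated by the plan)
1. Run `npm run dev` and sign in with a magic link
2. Log an entry, then confirm the streak counter increments
3. Open the leaderboard and confirm it sorts by score
4. Sign in from a second account and verify you can't see the first user's data
```

Copy-paste steps like these turn "please review" into something you can actually execute in five minutes.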

5. Extracting the Rules

At this point, the engineering rules were embedded in the plan document. I had originally pasted a contributing guide I made into the plan. Then I asked plan mode to extract those rules out of the planning doc and into the repo as their own file.

6. DRYing the Plan Itself

Last pass for now: I applied the same engineering principles to the plan that I apply to code, so each instruction lives in one place instead of being copy-pasted across phases. You can make this easier by asking plan mode to DRY its own plan.

The Transformation

Here's what reviewing and iterating on the plan actually produces.

Remember it started as a messy brain dump, the "idk what else" prompt. Same app idea. But look at the difference between what the AI first handed me versus what came out after I actually engineered it.

Production Gaps

Before:

After:

Every single line in that "after" would have been a production bug or a 2am refactor. Missing indexes on a leaderboard query? That's a slow query that gets slower every day. No timezone strategy? Your streak breaks at midnight for every user not in your time zone. All caught before a single line of code existed.
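The timezone bug is worth making concrete. A streak should roll over at the user's midnight, not the server's, which means computing the "day" of a timestamp in the user's IANA zone. A minimal sketch; the helper name is mine, not from the plan:

```typescript
// Returns the calendar date (YYYY-MM-DD) of a timestamp in a given IANA zone.
// Streak logic should compare these strings, never raw UTC dates.
function localDay(at: Date, timeZone: string): string {
  // The en-CA locale formats dates as YYYY-MM-DD
  return new Intl.DateTimeFormat("en-CA", {
    timeZone,
    year: "numeric",
    month: "2-digit",
    day: "2-digit",
  }).format(at);
}

// Just after midnight UTC on March 10 is still the evening of March 9
// in New York, so a UTC-based check would end this user's streak a day early.
localDay(new Date("2024-03-10T03:30:00Z"), "America/New_York"); // "2024-03-09"
localDay(new Date("2024-03-10T03:30:00Z"), "UTC");              // "2024-03-10"
```

One small decision in the plan ("store an IANA timezone per user, compute streaks in it") versus a midnight bug report from every user west of Greenwich.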

Engineering Standards

Before:

(nothing tbh)

After:

One file. Single source of truth. Every agent follows it.

The plan went from zero engineering standards to a CONTRIBUTING.md backed by five books. Now it's building against a spec that would pass a real code review.
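My actual CONTRIBUTING.md isn't reproduced here, but the shape is simple. A sketch of what a standards file like this can look like; the section names and rules below are my own illustration, not the real file:

```markdown
# CONTRIBUTING.md

## Sources of truth
- Effective TypeScript, Clean Architecture, Designing Data-Intensive Applications

## Rules every agent follows
- Validate all external input at the boundary (JSON Schema on every route)
- No `any` in committed code; prefer discriminated unions over type assertions
- Every query that filters or sorts gets a matching index in the schema
- A phase is done only when CI, the security pass, and the docs update are green
```

Because every builder and gate agent reads the same file, a rule changed here changes behavior everywhere, which is exactly the single-source-of-truth property you want.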

The Build Pipeline

Before: 4 agents across 3 phases. No reviewers. No gates. No standards. Build it. Ship it. Hope it works?

After: 12 builder agents + 4 gate reviewers (CI, Security, UX, Docs) across 4 phases with blocking gates. Every agent reads CONTRIBUTING.md first. User review pause after every gate with copy-paste testing guides.

Same starting point. Completely different destination. The brain dump didn't change. The engineering did. That's what happens when you treat the plan like a real design review instead of a formality to skip past.

Try It Tonight

Open your AI coding tool. Claude Code, Cursor, Copilot, whatever you use. Start a planning session. Dump your app idea. The messy version, the one with "idk" still in it. Add: "map this out E2E with an agentic team."

Read what comes back like an engineering lead. Copy it into a Google Doc and mark it up like it's a real design review, because it is one. When you read that plan, look for one thing first: the riskiest assumption. That's your first iteration. Question it, look for gaps, add books, etc.

Ask plan mode to review its own plan. Keep going until the plan is something you would stake production traffic on.

Or if brain-dumping isn't your style, write the TDD yourself first and hand it to the AI. "Here's my design doc. Build me an agentic team around this plan." Same destination, different starting point. The point is, have a real plan.

Up next: Your plan is done. Now we build. The next article covers speccing features collaboratively with the AI, the build → review → you check → iterate loop, and what "ship-ready" actually means when your whole team is AI agents.

Everything referenced in this article, from tools to books to research:

References

Tools

  • Claude Code — AI coding tool with plan mode
  • GitHub Spec Kit — Open-source toolkit for spec-driven development (/specify, /plan, /tasks)
  • Better Auth — Free, self-hosted authentication
  • Fastify — Fast Node.js web framework with native JSON Schema validation
  • Prisma — Type-safe ORM for Node.js and TypeScript
  • PgBouncer — PostgreSQL connection pooler
  • date-fns-tz — Timezone support for date-fns
  • Recharts — Composable charting library for React

Books Referenced

  • Effective TypeScript — Dan Vanderkam
  • Clean Architecture — Robert C. Martin
  • Designing Data-Intensive Applications — Martin Kleppmann
