Your AI coding agent can write a feature in minutes. But did it write the right feature?
I've been using Claude Code, Cursor, and Copilot for the past year, and the pattern is always the same: you describe what you want in natural language, the agent generates code, and then you spend the next hour fixing the parts it got wrong. Not because the AI is bad — but because your intent was never structured enough for it to get right.
That loop — prompt, wrong output, re-prompt, repeat — is what people call vibe coding. It works for prototypes. It doesn't work for anything you need to maintain.
## The missing layer
The gap isn't in the AI's coding ability. It's between your head and the agent's context window. You know what the feature should do, how it fits into the product, what the edge cases are, and which acceptance criteria matter. The agent knows... whatever you typed into the prompt.
Spec-driven development closes that gap by structuring your intent before the agent starts writing code. Not a 40-page requirements document. Just enough structure that the AI knows:
- What the feature is and why it exists (business goal)
- Where it fits in the product hierarchy (parent feature)
- What "done" looks like (acceptance criteria)
- What status it's in (can it be implemented yet?)
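As a concrete illustration, a spec carrying those four pieces of structure might look like the file below. The file name, ID scheme, and field values are hypothetical, not SPECLAN's actual schema; the status values come from the lifecycle described later in this post.

```markdown
---
id: FEAT-042            # hypothetical ID scheme
parent: FEAT-007        # where it fits: the parent feature
status: approved        # draft | review | approved | in-development | under-test | released
owner: jane@example.com
---

# Password reset via email

## Business goal
Reduce support tickets caused by locked-out accounts.

## Acceptance criteria
- Reset link expires after 30 minutes
- Old password is invalidated only after the new one is set
```

Everything an agent needs to know what "done" means, in one diffable text file.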
## What this looks like in practice
I've been building a tool called SPECLAN that takes this approach — it's a free VS Code extension that manages specifications as a tree of Markdown files with YAML frontmatter, living in your Git repository.
I recorded a 7-minute walkthrough that shows the full workflow, from importing a raw product idea to orchestrating AI agents against structured work packages.
Here's what the video covers:
0:00 — The problem. Why your AI agent keeps getting it wrong, and what's actually missing.
0:25 — Installation. One click from the VS Code Marketplace.
0:40 — Importing an idea. You paste a high-level product description. SPECLAN's AI decomposes it into a hierarchy: goals, features, requirements — each as a separate Markdown file.
1:10 — The specification tree. A navigable tree view in VS Code's sidebar. Goals break down into features, features into sub-features, sub-features into requirements. The hierarchy is your product structure.
1:35 — WYSIWYG editing. A rich text editor inside a VS Code webview, so you can write specs without thinking about Markdown syntax. What you see round-trips cleanly to Markdown + YAML frontmatter.
1:55 — AI chat assistant. Ask questions about your spec, get suggestions, refine requirements — all within the editor panel.
2:15 — Copy AI Prime Context. This is where it gets practical. One click copies a structured prompt containing the spec, its parent feature, the business goal, acceptance criteria, and surrounding context. Paste that into Claude Code or any agent, and it actually knows what to build.
2:40 — Status lifecycle. Specs move through `draft` -> `review` -> `approved` -> `in-development` -> `under-test` -> `released`. Only approved specs can be implemented. This prevents the "building against a moving target" problem.
2:55 — SWARM implementation. Break approved specs into work packages and let multiple AI agents work on them in parallel — with the specification as the shared source of truth.
3:20 — Change Requests. When an approved spec needs modification, you don't edit it directly. You create a Change Request — a separate file that tracks what changed and why. No more spec drift.
3:45 — Git integration. Every spec is a Markdown file in Git. You get diffs, branches, and merge workflows for free. Your specs live next to your code, versioned the same way.
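The "only approved specs can be implemented" rule from the status lifecycle is easy to picture as a transition check. Here is a minimal Python sketch; the function names are mine, not SPECLAN's API, and the backward transitions (review back to draft, failed tests back to development) are assumptions about how such a lifecycle typically works:

```python
# Allowed transitions between lifecycle statuses.
# Forward path is from the video; backward edges are assumptions.
TRANSITIONS = {
    "draft": {"review"},
    "review": {"approved", "draft"},
    "approved": {"in-development"},
    "in-development": {"under-test"},
    "under-test": {"released", "in-development"},
    "released": set(),
}

def can_transition(current: str, target: str) -> bool:
    """Return True if a spec may move from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())

def is_implementable(status: str) -> bool:
    """Only approved specs may be handed to an agent for implementation."""
    return status == "approved"

print(can_transition("draft", "review"))    # True
print(can_transition("draft", "released"))  # False
print(is_implementable("approved"))         # True
```

The point isn't the code; it's that a status field in frontmatter gives both humans and agents an unambiguous answer to "can I build this yet?"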
## Why Markdown files in Git?
I chose this approach over a database or a cloud service for one reason: portability.
Your specs are plain text files. They work with any editor, any AI agent, any CI pipeline. If you stop using SPECLAN tomorrow, your specifications are still there — readable, diffable, greppable Markdown. No export step, no migration, no vendor lock-in.
The YAML frontmatter carries the structured metadata (ID, status, parent reference, owner), while the Markdown body carries the human-readable content. Git gives you the audit trail. The VS Code extension gives you the GUI.
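To see how little machinery that split requires, here is a stdlib-only Python sketch that separates a spec file into its frontmatter metadata and Markdown body. It handles only flat `key: value` pairs, which is enough for fields like ID, status, parent, and owner; a real implementation would use a proper YAML parser, and the spec content shown is hypothetical:

```python
def split_spec(text: str):
    """Split a spec file into (metadata dict, markdown body).

    Only flat `key: value` frontmatter is handled -- an
    illustration, not a full YAML parser.
    """
    meta = {}
    if text.startswith("---\n"):
        front, _, body = text[4:].partition("\n---\n")
        for line in front.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
        return meta, body.lstrip("\n")
    return meta, text

spec = """---
id: FEAT-042
status: approved
parent: FEAT-007
---

# Password reset via email
"""

meta, body = split_spec(spec)
print(meta["status"])  # approved
```

Because both halves are plain text, any tool in the pipeline, from a CI check that rejects unapproved specs to an AI agent assembling context, can read the same file the same way.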
## The ecosystem is growing
SPECLAN isn't the only tool exploring this space. The BMAD Method uses specialized AI agent personas for structured development. OpenSpec adds a spec layer for existing codebases. GitHub's Spec Kit provides CLI templates for spec-driven workflows. Kiro from AWS takes a steering-file approach.
Each tackles the same insight from a different angle: specifications are the missing layer between human intent and AI execution. The methodology matters more than any single tool.
## Try it
SPECLAN is free and open source. Install it from the VS Code Marketplace, point it at any project, and see if structured specs change how your AI agent performs.
The docs are at speclan.net. The source is on GitHub.
I'm the creator — full disclosure. I built this because I was tired of re-prompting Claude Code with the same context every session. If you have questions or feedback, I'm in the comments.
What's your experience with spec-driven development? Are you structuring your prompts before sending them to AI agents, or do you find the overhead isn't worth it? Curious to hear what's working for others.