Spec First, Code Later: My Spec-Driven Development Adventure with AI
Introduction
Lately, I've been seeing a lot of "I vibe-coded this thing!" posts popping up everywhere.
Being able to build something just by going with the flow -- that's pretty amazing in its own right.
But what I wanted to try was a slightly different approach:
Write the specs first, then let the AI do the coding -- Spec-Driven Development.
I happened to have a presentation coming up at work, so I went looking for a timekeeper app that fit my needs. I couldn't find one. Since I knew exactly what I wanted, and I had the perfect excuse to experiment, I thought, "Why not build it myself?"
This is the story of that experience.
What I Built
PresentationTimekeeper
https://presentation-timekeeper.pages.dev/
A timekeeper app for presentations.
- Runs entirely in the browser (frontend only, no server needed)
- Set expected durations per section, track progress in real time
- Import/export settings as JSON
- Built with Vanilla JavaScript -- no frameworks
The idea is simple: just glance at it during your talk, and you instantly know where you are and how much time is left.
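To make "where you are and how much time is left" concrete, here's a minimal sketch of the kind of timekeeping logic the app needs. The function and property names are my own illustration, not the app's actual code:

```javascript
// Each section has a planned duration; given the elapsed time,
// we can tell which section the speaker should currently be in.

function buildSchedule(sections) {
  // sections: [{ name, minutes }] -> cumulative end times in seconds
  let acc = 0;
  return sections.map((s) => {
    acc += s.minutes * 60;
    return { name: s.name, endsAt: acc };
  });
}

function currentSection(schedule, elapsedSec) {
  for (const step of schedule) {
    if (elapsedSec < step.endsAt) return step.name;
  }
  return null; // past the planned end of the talk
}

const schedule = buildSchedule([
  { name: "Intro", minutes: 2 },
  { name: "Demo", minutes: 10 },
  { name: "Q&A", minutes: 3 },
]);

currentSection(schedule, 60);  // one minute in: still "Intro"
currentSection(schedule, 300); // five minutes in: "Demo"
```

In the real app this would be driven by a timer tick and rendered to the screen, but the core is just comparing elapsed time against cumulative section boundaries.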
What Is Spec-Driven Development?
In a nutshell, it's a development style where the spec is the source of truth, and the AI agent handles the implementation.
Instead of tossing a vague "build me this" at the AI, you work out the requirements and design yourself, then hand those specs over to the AI for coding. You write specs instead of code -- think of the spec as your "instruction manual" for the AI.
Writing the Specs Together with AI
For this project, I created the following documents in Markdown, collaborating with the AI on each one.
1. Requirements Document
- App name, purpose
- Functional requirements (section setup, auto-start timer, progress display, JSON I/O, etc.)
- Non-functional requirements (performance, compatibility, usability)
2. Basic Design Document
- System architecture (SPA, LocalStorage, no external dependencies)
- Tech stack (Vanilla JavaScript, CSS3)
- Module design
- Inter-module interfaces
- Error handling policy
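As a rough illustration of the "LocalStorage, no external dependencies" architecture, settings persistence can come down to a thin layer over JSON serialization. The key name, function names, and validation here are my own guesses, not the app's actual implementation; the `fakeStorage` stand-in just keeps the sketch runnable outside a browser:

```javascript
const STORAGE_KEY = "timekeeper-settings";

// The same JSON shape can serve both LocalStorage persistence
// and the import/export feature.
function exportSettings(settings) {
  return JSON.stringify(settings, null, 2); // human-readable for file export
}

function importSettings(json) {
  const data = JSON.parse(json);
  if (!Array.isArray(data.sections)) {
    throw new Error("Invalid settings: 'sections' must be an array");
  }
  return data;
}

function saveSettings(storage, settings) {
  // In the browser, pass window.localStorage as `storage`
  storage.setItem(STORAGE_KEY, exportSettings(settings));
}

function loadSettings(storage) {
  const raw = storage.getItem(STORAGE_KEY);
  return raw ? importSettings(raw) : null;
}

// Map-backed stand-in with the same getItem/setItem surface as localStorage
const fakeStorage = {
  map: new Map(),
  setItem(k, v) { this.map.set(k, v); },
  getItem(k) { return this.map.has(k) ? this.map.get(k) : null; },
};

saveSettings(fakeStorage, { sections: [{ name: "Intro", minutes: 2 }] });
```

Taking the storage object as a parameter also makes this testable without a browser, which is one reason to keep the module interfaces explicit in the design doc.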
3. UI Design Document
- Screen list and navigation flow
- Screen layout design (settings screen / timer screen)
- UI/UX guidelines, theme and color design
Naturally, each document needed multiple rounds of back-and-forth with the AI to get right. You don't just accept the first output -- you refine it through feedback like "this requirement is missing" or "this module's responsibilities are too vague."
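To give a feel for the level of detail, here's a hypothetical excerpt of what a requirements document like this might look like (this is an illustration I wrote for this post, not the actual project document):

```markdown
# PresentationTimekeeper — Requirements (excerpt)

## Functional Requirements
- FR-1: The user can define named sections, each with an expected duration.
- FR-2: The timer starts automatically when the timer screen opens.
- FR-3: Settings can be exported to and imported from a JSON file.

## Non-Functional Requirements
- NFR-1: Runs entirely in the browser; no server or build step required.
- NFR-2: Works in current versions of major desktop browsers.
```

Numbered, single-sentence requirements like these are easy to point at during feedback rounds ("FR-2 is ambiguous") and easy for the AI to check its work against.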
I Skipped the Detailed Design
I decided not to create a detailed design document this time. Here's why:
- The basic design already covered module responsibilities, interfaces, and data structures well enough
- If you're letting the AI handle all the coding, you can probably skip function-level flow diagrams and UML (hopefully)
Looking back, this turned out to be the right call. As long as the basic design was solid, the AI generated appropriate code just fine.
Setting Rules for the AI (MDC Files)
In Cursor, you can place MDC (Markdown Configuration) files under .cursor/rules/ to define rules for how the AI agent should behave.
Here are some of the rules I set up:
- Don't do anything extra: Only execute what's explicitly asked in the prompt
- Don't touch the specs without permission: If specs need changes, ask the user first
- Suggestions are welcome: Propose improvements, but only apply them after approval
Basically: "Don't go rogue. But feel free to speak up."
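For illustration, a rule file under `.cursor/rules/` along these lines might look like the following. The frontmatter fields follow Cursor's rule-file format; the wording and the `docs/specs/` path are my own example, not the project's actual file:

```markdown
---
description: Project-wide rules for the AI agent
alwaysApply: true
---

- Only implement what the prompt explicitly asks for; no extra features.
- Never modify documents under docs/specs/ without asking the user first.
- Improvement suggestions are welcome, but apply them only after approval.
```
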
I found that this MDC system plays really nicely with traditional project management practices. Defining project rules, managing a WBS (Work Breakdown Structure), setting communication guidelines...
With AI's dramatic arrival, it can feel like we need some massive paradigm shift. But honestly, evolving existing project management practices for the AI era seems more realistic, and easier to adopt on the ground.
UI/UX Belongs to Vibe Coding
On the flip side, when it came to UI/UX, I felt like Vibe Coding was the better fit.
Trying to nail down every little design detail, animation, and spacing in a spec document just isn't practical. How something feels when you interact with it -- that's best figured out by actually trying it and tweaking on the spot.
That said, even here, having specs paid off. When the UI design doc already defined "what goes on which screen" and "the general color theme direction," Vibe Coding became less wandering and more intentional.
Specs provide the skeleton; Vibe Coding adds the flesh.
That's the kind of balance I was able to strike.
Wrapping Up
These days, you see a lot of ad-hoc "I just built this thing!" posts on social media. And sure -- if you know exactly what you want, just let the AI agent build it. Nothing wrong with that.
But if things are still fuzzy, starting from requirements and design documents the old-fashioned way tends to produce better results. Having that "map" in the form of specs keeps both the AI and the human from getting lost.
And even as AI reshapes how software gets built, if you're doing contract development for small and mid-sized businesses, combining a waterfall-ish approach with AI agents might just be a viable path forward.
Writing specs isn't about losing your job to AI -- it's about sharing the work with AI. That's what this experience taught me.
Let's survive this together, folks :)
Top comments (5)
Curious whether you found the AI respecting the spec boundaries consistently, or did it drift when modules started interacting with each other?
If the overall purpose of the system is clearly defined in advance, it seems to me that the likelihood of drift would decrease.
It may be necessary to help the AI understand that it is not building isolated modules, but rather constructing an entire system.
that's a good distinction. the "isolated modules vs whole system" framing matters a lot for how the AI scopes its changes. i'd imagine that when the spec only describes individual features, the AI optimizes locally and you get inconsistencies at the boundaries. a short system-level overview at the top of the spec that says "this is a checkout flow, these modules talk to each other like this" probably goes a long way. even a paragraph.
that's a good framing. i'd expect the drift to get worse when the spec describes modules in isolation without explaining how they connect. giving the AI the dependency graph upfront, even just a rough sketch, probably helps it make better local decisions. do you define those system-level boundaries in the spec itself or in a separate architecture doc?
I've had similar results with the spec vs vibe-coding split. When I give AI a spec with module boundaries already defined the output is way more predictable than going back and forth in conversation.