Stop guessing what works. Discover it.
## The Problem
You've been here:

```
Write a prompt
  ↓
AI generates code
  ↓
Not what you meant?
  ↓
Rollback, rewrite, repeat
```
Each handoff loses intent. "Make it clean" means 100 things to 100 models.
## Why Traditional Approaches Fail
```
Define everything upfront
  ↓
Implement (expensive)
  ↓
Test with users
  ↓
Discover it's wrong
  ↓
Start over (very expensive)
```
Validation happens AFTER expensive implementation.
## A Different Approach: Discovery First
```
Minimal spec
  ↓
AI explores possibilities (cheap)
  ↓
See variants (cheap)
  ↓
Discover what works (cheap)
  ↓
Commit to implementation
```
Validation happens BEFORE implementation.
This is Evolutionary Specification — specs evolve from exploration to precision through human-AI collaboration.
## Spec as Verification Layer
The key insight: structured specification becomes a checkpoint between intent and implementation.
Without spec:

```
Your idea
  ↓ (you formulate)
Text instruction
  ↓ (AI interprets)
Code
  ↓
Not right? → Rollback → Repeat
```
With spec:

```
Your idea
  ↓
UI Spec (structured intent)
  ↓
Preview ← "Is this what I want?"
  ↓
AI Agent ← unambiguous instruction
  ↓
Code matches spec
```
Why this works:

- **Preview Gate** — see the result *before* implementation. Don't like it? Change the spec, not the code.
- **No Drift** — the spec is the source of truth. Code drifts? Regenerate from the spec.
- **Structured Intent** — "Make it beautiful" invites 100 interpretations; purpose + structure + theme pins down one.
Cheap iterations on spec. Expensive implementation when you're sure.
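The preview gate described above can be sketched as a simple loop. This is an illustrative sketch, not SpecCanvas code: `render`, `approve`, and `revise` are hypothetical callbacks standing in for the mockup generator, your own judgment, and a spec edit.

```python
def preview_gate(spec: dict, render, approve, revise) -> dict:
    """Iterate on the spec (cheap) until the preview matches intent,
    then return it for one-time (expensive) implementation."""
    while True:
        preview = render(spec)   # cheap: an HTML mockup, not production code
        if approve(preview):
            return spec          # the approved spec is now the source of truth
        spec = revise(spec)      # change the spec, not the code, then re-render
```

The point of the sketch: rejection always feeds back into the spec, never into generated code.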
## Progressive Precision
Control how much you specify:
**Level 0 — Exploration:**

```yaml
screens:
  dashboard:
    purpose: "User dashboard"
```

5 AIs → 5 wildly different results. That's the point. You're exploring.
**Level 1 — Structure:**

```yaml
blocks:
  - id: stats
    purpose: "Quick stats: total, completed, in progress, overdue"
  - id: task_list
    purpose: "Task list with status indicators"
```

Same structure, styling varies.
**Level 2 — Detailed:**

```yaml
metadata:
  theme: "Glass morphism, soft shadows"
  palette:
    primary: "Blue-purple gradient"
blocks:
  - id: stats
    columns: 12
    purpose: "4 metric cards in responsive grid"
    blocks:
      - id: total
        purpose: "Icon, large number, label 'Total'"
      - id: completed
        purpose: "Icon, number, label 'Completed', green accent"
```

Structure locked. Nested blocks define components. Details — AI's choice.
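Nested blocks form a tree, so tooling can walk them generically. A minimal sketch, assuming the YAML above has been parsed into plain dicts; `iter_blocks` is a hypothetical helper, not part of the spec:

```python
def iter_blocks(blocks: list):
    """Depth-first walk yielding (id, purpose) for every block,
    including nested component blocks."""
    for block in blocks:
        yield block.get("id"), block.get("purpose", "")
        yield from iter_blocks(block.get("blocks", []))

# The Level 2 "stats" block, as parsed YAML:
stats = [{
    "id": "stats",
    "columns": 12,
    "purpose": "4 metric cards in responsive grid",
    "blocks": [
        {"id": "total", "purpose": "Icon, large number, label 'Total'"},
        {"id": "completed", "purpose": "Icon, number, label 'Completed', green accent"},
    ],
}]

print([bid for bid, _ in iter_blocks(stats)])  # → ['stats', 'total', 'completed']
```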
Level 3 — Full Control:
purpose: "4 metric cards: responsive grid (4 cols desktop, 2 tablet, 1 mobile).
Each card: glass background, 20px padding, 16px border-radius,
icon (24px, primary color), number (32px bold), label (14px muted).
Hover: scale 1.02, transition 200ms."
Full control when you need it.
Mix levels freely — Level 1 spec with one Level 3 block for critical component. Control where it matters.
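To see where detail is worth adding, you can gauge how constrained each block already is. A rough heuristic, invented here for illustration and not part of the UI Spec format:

```python
import re

def precision_level(block: dict) -> int:
    """Rough guess at how constrained a block is (0-3).
    The thresholds are illustrative, not part of the format."""
    purpose = block.get("purpose", "")
    if re.search(r"\d+\s*(px|ms)\b", purpose):      # pinned pixel/timing values
        return 3
    if block.get("blocks") or "columns" in block:    # nested structure or layout
        return 2
    return 1 if "id" in block else 0                 # named block vs bare purpose
```

Blocks scoring low while AI interpretations diverge are the ones to refine first.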
## Purpose = What + How
Purpose describes not just appearance, but behavior:
```yaml
blocks:
  - id: menu_bar
    purpose: "Horizontal dropdown triggers (File, Edit, Image). Hover opens submenu."
  - id: filename
    purpose: "Current filename and save status. Click turns text into input for renaming."
  - id: canvas
    purpose: "Image display stack. Renders layers in real-time. Shows selection marching ants."
  - id: ai_prompt
    purpose: "Text input with placeholder. Focus expands width. Enter generates."
```
What to show + How it behaves = Complete intent.
## The Format
UI Spec is an open, AI-native YAML format:
```yaml
docType: ui-spec
format_version: "0.0.2"
metadata:
  name: Task Manager
  theme: "Clean minimal, soft shadows"
  palette:
    primary: "Blue-purple gradient"
    surface: "White with subtle transparency"
screens:
  dashboard:
    title: Dashboard
    purpose: "Overview of user's tasks and productivity"
    blocks:
      - id: stats
        purpose: "4 metric cards: total, completed, in progress, overdue"
        columns: 12
      - id: task_list
        purpose: "Scrollable task list with checkboxes and status badges"
        columns: 8
```
Principles:
- Self-describing — AI understands without instructions
- Purpose > Implementation — describe WHAT, not HOW
- Works anywhere — copy-paste to any AI
- Progressive — add detail only where needed
## The Stack
1. METHODOLOGY → Evolutionary Specification
2. FORMAT → UI Spec + Data Spec (open standard)
3. TOOL → SpecCanvas (reference implementation)
Methodology > Format > Tool
UI Spec lives without SpecCanvas. AI understands it natively. Open standard.
## SpecCanvas
Visual editor for this workflow:
- Edit specs visually
- Generate HTML from multiple AIs
- Compare side-by-side
- Rate and iterate
You don't need it. Any text editor + any AI works. SpecCanvas just makes it faster.
→ app.speccanvas.dev — free, no account required
See it in action:
| Claude | GPT | Gemini |
|---|---|---|
| ![]() | ![]() | ![]() |
Lumina (Photoshop-like app): same spec, three AI interpretations. Works with Claude, GPT, Gemini — even Figma AI reads UI Spec.
→ Try this example in SpecCanvas — opens with project pre-loaded
## Start Here
1. **Explore the Lumina example.** See a real spec with AI-generated HTML previews.
2. **Create your own project.** Click "Create with AI", describe your app, and the AI generates the spec.
3. **Generate HTML.** Pick an AI model (Claude, GPT, Gemini) and compare results side-by-side.
4. **Refine where needed.** Add detail to blocks where AI interpretations diverge.
5. **Export to a coding agent.** When you're happy with the spec, hand it to Claude Code, Cursor, or any AI coder.
Or skip the tool: paste any spec to Claude/GPT with "Generate HTML from this UI Spec"
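That manual workflow is one function if you'd rather script it. A sketch, assuming a truncated spec; `send_to_ai` is a placeholder for whichever client you use:

```python
SPEC_YAML = """\
docType: ui-spec
format_version: "0.0.2"
screens:
  dashboard:
    purpose: "Overview of user's tasks and productivity"
"""

def make_prompt(spec_yaml: str) -> str:
    # The instruction line is the same one you'd type into a chat window.
    return "Generate HTML from this UI Spec:\n\n" + spec_yaml

# html = send_to_ai(make_prompt(SPEC_YAML))  # placeholder: any model that reads YAML works
```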
1 source of truth. 0 misinterpretations.
Don't guess what works. Discover it.