Ashmeet

The Missing Layer Between AI and Design Consistency

By now, "vibe coding" is completely normalized. You describe a thing, AI builds it, you nudge it until it gets it right. Nobody bats an eye.

But what about vibe designing?

The idea has come up in cycles. AI-generated UI, prompt-driven mockups, no-Figma workflows. Every time it almost gets traction, it fizzles. The outputs are too generic, you lose control of consistency, or it only works for throwaway prototypes you'd never ship.

I've been poking at Google Stitch lately, and something about it finally made the concept feel workable. Specifically DESIGN.md, a spec format that quietly reframes how AI and design systems talk to each other. Google open-sourced it last week under Apache 2.0, so any agent that writes UI code can use it, not just Stitch.

What Even Is Google Stitch?

Stitch is Google's AI-native design tool. You describe an app (vibe, colors, features), it generates screens, you iterate on them with natural language. There's code export, a component system, an MCP server, and agent skills that plug into your existing coding setup.

Earlier this year they shipped a significant update: an AI-native infinite canvas, a smarter design agent that reasons across entire projects, voice input, multi-screen generation, and design system support including DESIGN.md.

A Markdown File as Your Design Contract

DESIGN.md isn't merely a style guide you write once and forget. It's a machine-readable contract between your design intent and whatever AI is building your UI.

You export it from Stitch, and it contains your color tokens, spacing values, rounding rules, typography - everything. The tokens live in YAML front matter, with prose guidance in the Markdown body below. It's readable by humans and by agents, and it works with Claude Code, Cursor, Copilot, Gemini CLI - anything that writes UI code.

The practical upside: changes to your design propagate automatically. Non-developers can update the design in Stitch without touching the codebase. It's also useful for catching drift, e.g., components that have wandered from the source of truth show up clearly when you have a spec to check against.
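To make the "contract" idea concrete, here's a minimal sketch of how a script or agent could pull color tokens out of DESIGN.md's YAML front matter and check for drift against a value in the codebase. This is hypothetical, in Python - none of these function names come from Stitch's tooling:

```python
import re

# A trimmed stand-in for an exported DESIGN.md file.
DESIGN_MD = """---
name: Lumina Audio
colors:
  surface: '#12121d'
  primary: '#ecb2ff'
---
## Brand & Style
"""

def front_matter(text):
    """Extract the YAML block between the first pair of '---' delimiters."""
    match = re.match(r"---\n(.*?)\n---", text, re.DOTALL)
    return match.group(1) if match else ""

def color_tokens(yaml_block):
    """Naively read indented 'key: value' pairs under the 'colors:' section."""
    tokens, in_colors = {}, False
    for line in yaml_block.splitlines():
        if not line.startswith(" "):
            in_colors = line.strip() == "colors:"
        elif in_colors:
            key, _, value = line.strip().partition(":")
            tokens[key] = value.strip().strip("'\"")
    return tokens

tokens = color_tokens(front_matter(DESIGN_MD))
print(tokens)  # {'surface': '#12121d', 'primary': '#ecb2ff'}

# Drift check: does the value used in code still match the spec?
code_value = "#ecb2ff"  # e.g. read out of tailwind.config.js
assert tokens["primary"] == code_value
```

A real implementation would use a proper YAML parser rather than this line-by-line read, but the point stands: because the tokens are structured data, drift detection is a comparison, not a judgment call.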

Part 1: The Static Export

I started with the simplest path. Export DESIGN.md from a Stitch project (a toy music player) and hand it to Gemini CLI as context in the project root.

A trimmed look at the generated DESIGN.md:

```md
---
name: Lumina Audio
colors:
  surface: '#12121d'
  surface-dim: '#12121d'
typography:
  display-lg:
    fontFamily: Be Vietnam Pro
    fontSize: 48px
rounded:
  sm: 0.25rem
  DEFAULT: 0.5rem
spacing:
  unit: 8px
  container-padding: 32px
---

## Brand & Style

This design system is built for an immersive, high-fidelity desktop music experience. It leverages **Glassmorphism** to create a sense of depth and airiness, making the interface feel like a digital lens over a living, breathing soundscape.

## Colors

The palette is rooted in a deep, nocturnal neutral to allow vibrant accents to pop. The primary, secondary, and tertiary colors are designed to be used within mesh gradients for the application background, creating a "lava lamp" effect that shifts behind the frosted glass panels.

- **Display Type:** Large headlines use a tighter letter spacing and heavy weights to anchor the layout against the soft glass backgrounds.

## Layout & Spacing

The layout follows a **Fluid Grid** model with high-margin "safe zones" to allow the background gradients to frame the content.

## Elevation & Depth

Depth is not communicated through traditional shadows, but through **cumulative backdrop blurring** and **border luminosity**.
```

The question: could the agent generate a UI just from the spec file, without seeing the actual Stitch screens?

Short answer: kind of.

The agent respected the design system. Colors, spacing, and typography all came through correctly. But it didn't reproduce the screens. It's like handing someone the bricks from your house and expecting them to rebuild the house without ever having seen it.

There's a gap between "follows the rules" and "knows what the layout looks like." DESIGN.md tells the agent how things should look, not what things should exist.

*Gemini CLI's attempt at recreation with DESIGN.md alone*

So if you're hoping the file alone bridges design and code, it gets you maybe 60% of the way there. The tokens are right. The vibe is right. The actual layout structure? That's not in the file.

Part 2: Adding the MCP

To close that gap, I connected Stitch directly to Gemini CLI via MCP. This is the difference between handing the agent a style guide and giving it actual eyes on your project.

Step 1: Get a Stitch API key

Go to stitch.withgoogle.com, sign in, open profile settings, and create a key under the API Keys section.

Step 2: Add the MCP server

```shell
gemini mcp add stitch --transport http https://stitch.googleapis.com/mcp \
  --header "X-Goog-Api-Key: YOUR_API_KEY"
```

Step 3: Verify it connected

Restart Gemini CLI. Inside your session, run:

```
/mcp list
```

You should see the Stitch server listed with its available tools. From there, you can prompt it like:

```
Stitch, extract the design context from my 'Lumina' project into DESIGN.md.
```

```
Stitch, give me the React code for the sidebar component.
```

Once the MCP was connected, the agent wasn't just following rules from a file. It could query actual layout data from my Stitch project. That's when the generated screens started matching what I'd designed in the Stitch canvas.

*Exactly matching the Stitch canvas*

What Actually Came Out of It

The refinement loop is where this got genuinely interesting. I changed primary in DESIGN.md from #ecb2ff to #00ffcc, told the agent to sync, and it updated tailwind.config.js and the components together. One instruction, consistent everywhere.
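Mechanically, that sync amounts to regenerating the theme from the spec instead of editing colors by hand. A hypothetical sketch - the output shape mirrors a `tailwind.config` theme fragment, but none of this is Stitch's actual sync code:

```python
# Token values as parsed from DESIGN.md after changing primary
# from #ecb2ff to #00ffcc.
spec_colors = {"primary": "#00ffcc", "surface": "#12121d"}

def tailwind_theme(colors):
    """Build the theme.extend.colors fragment of a Tailwind config from spec tokens."""
    return {"theme": {"extend": {"colors": dict(colors)}}}

config = tailwind_theme(spec_colors)
print(config["theme"]["extend"]["colors"]["primary"])  # #00ffcc
```

Because components reference the token (`bg-primary`, `text-primary`) rather than a hard-coded hex value, one spec edit really does propagate everywhere on the next build.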

The result of adding a Wrapped-like page that still convincingly respects the design system of the app:

The outputs were coherent in a way that AI-generated UI usually isn't, because there was an actual spec anchoring everything. I prompted, it generated, I nudged, it regenerated. Nothing drifted. The agent always had something to check itself against.

That's the thing that's been missing from vibe designing. Not better generation, but something to keep the generation consistent.

A Couple of Honest Caveats

Stitch works best when you're incremental. "Make the primary button larger and use the brand blue" lands better than "redesign the login screen." One thing at a time, especially early in a project.

DESIGN.md is also still in a sort of public beta. The spec and token schema are under active development, so things might change.

Why I Think This Is Interesting Anyway

Vibe coding caught on because it lowered the floor for building. You didn't need to know every pattern, you could describe what you wanted and iterate towards it.

Vibe designing always had the same promise but kept stumbling on the same problem: consistency. One-off mockups are easy. A coherent design system, maintained across an entire app, updated by prompts without breaking things, is hard.

DESIGN.md is a direct answer to that problem. It gives the AI something to stay consistent against, not just vibes in, vibes out. And because the spec is agent-agnostic, it's not tied to any one tool in your workflow.

The static export gets you the spec. The MCP gives the agent eyes on your actual design. Together they're the most credible version of vibe designing I've seen so far.

Fair warning: I'm not a designer or a senior dev, I'm still figuring this out too. If you've tried Stitch, I'd genuinely love to hear how you're using it and what else you're pairing it with!
