
고광웅

Posted on • Originally published at ernham.substack.com

Claude Design Looks Like AI Magic. Reading the Source, It's Four Engineering Patterns.

Before the Hype Settles

Anthropic shipped Claude Design on April 17, and most of the discussion has framed it as a Figma-challenging AI design tool. I used it differently. Instead of treating it as a design tool, I treated it as a specimen — generated a handful of skill bundles, exported the output, then spent more time reading the source than tweaking the design. I was more interested in how the product works than in what it can design.

What I found is more interesting than "AI can design now." The product appears to be mostly four discrete engineering patterns that happen to have a model at the entry point. The model doesn't feel like the magic — it's writing into a carefully structured runtime that almost any team could build.

This post walks through those four patterns, what they actually do, and what I'm taking into my own stack. Caveat up front: these are observations from a source read of a small number of bundles, not a rigorous evaluation of the product's full behavior. I'll flag assumptions as I go.

What I Inspected

One of the Claude Design skill bundles I generated contained:

  • Wireframes.html — a 72KB single-file wireframe document with five navigation variations across three screens each, plus a live Tweaks engine.
  • IR Deck - Hi-fi.html and IR Deck - Wireframes.html — 1920×1080 slide decks wrapped in a custom Web Component.
  • deck-stage.js — a 621-line Web Component that provides the slide runtime.
  • colors_and_type.css — a 160-line design token sheet organized into seven categories.
  • SKILL.md — a 20-line skill manifest with frontmatter.
  • README.md — a 223-line brand and voice guide.
  • preview/ — twelve single-file "at-a-glance" cards, one per token category.
  • ui_kits/web/ — a React 19 UMD clickable prototype.

The total footprint is small. What's striking is how the pieces fit together — and how few moving parts there actually are.

Pattern 1 — Tweaks: data-* Attributes with CSS Variables

The Tweaks panel is what makes the output feel interactive: click "dusk" and the whole design shifts to a warm dark palette. Click "compact" and the layout tightens. No regeneration. No API round-trip.

The mechanism is mundane. Every theme, accent, layout, and density option is a :root CSS variable override keyed to a data-* attribute on the root element:

```css
:root, [data-theme="paper"] { --ink:#1a1a1a; --paper:#f4efe6; --accent:#c53b1e; }
[data-theme="dusk"]         { --ink:#eae3d2; --paper:#2a2620; --accent:#e77c5f; }
[data-theme="midnight"]     { --ink:#f0ebde; --paper:#14110d; --accent:#ff8a6a; }
[data-accent="gold"]        { --accent:#d4a017; }
[data-layout="stack"] .flow { grid-template-columns: 1fr !important; }
[data-density="compact"] main { padding: 14px 18px; }
```

A click handler sets document.documentElement.dataset.theme = "dusk", persists to localStorage, and postMessages the host window so it can save the selection against the artifact. That's the entire switching layer.
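A minimal sketch of that switching layer, assuming the mechanism reads as described — the function name and the `designTweaks` storage key are my placeholders, not names from the actual bundle:

```javascript
// Sketch of the Tweaks switching layer. The CSS variable overrides keyed
// to data-* attributes do all the visual work; this just flips the key.
function applyTweak(axis, value) {
  // 1. Flip the data-* attribute on the root element.
  document.documentElement.dataset[axis] = value;

  // 2. Persist so a refresh restores the selection.
  const saved = JSON.parse(localStorage.getItem("designTweaks") || "{}");
  saved[axis] = value;
  localStorage.setItem("designTweaks", JSON.stringify(saved));

  // 3. Notify the host window so it can save the choice against the artifact.
  window.parent?.postMessage({ type: "tweak", axis, value }, "*");
}

// e.g. applyTweak("theme", "dusk") — the whole palette shifts, no regeneration.
```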

Four axes with three to four options each: roughly 144 combinations available without regenerating anything. The design-system work is done at token definition time, not at runtime.

The takeaway I'm sitting with: what feels like "AI variant generation" in interactive design tools may be mostly static CSS token switching. The AI wrote the tokens once. The switching is attribute swapping.

Pattern 2 — deck-stage.js: A 621-Line Web Component That Replaces a Slide Tool

The decks in the output aren't Reveal.js or a bespoke React app. They're a custom Web Component, <deck-stage>, containing plain <section> children as slides.

What the component does:

  • Fits a design-size canvas (default 1920×1080) to whatever viewport it's rendered in, using transform: scale().
  • Handles keyboard navigation (arrows, PgUp/PgDn, Space, Home, End, 0–9, R).
  • Adds mobile tap zones (left third / right third).
  • Persists the current slide to localStorage, keyed by document path, so a refresh restores position.
  • Renders a floating overlay with previous/next/reset controls and a slide counter.
  • Injects a <style> tag into document.head with @page { size: 1920px 1080px; margin: 0; } so the browser's native "Print to PDF" produces one page per slide.
  • Emits a slidechange CustomEvent with bubbles: true, composed: true and a reason field (init/keyboard/click/tap/api) — listenable cleanly from outside the shadow DOM.
  • Reads a <script type="application/json"> block of speaker notes and posts them to the host window.
  • Honors a noscale attribute for PPTX export cases where the CSS transform is undesirable.
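The fit-to-viewport behavior in the first bullet reduces to one ratio. This is my reconstruction of the idea, not code from `deck-stage.js` itself:

```javascript
// Uniform scale-to-fit: the smaller of the two ratios wins, so the
// 1920×1080 design canvas never overflows the viewport in either axis.
function fitScale(designW, designH, viewportW, viewportH) {
  return Math.min(viewportW / designW, viewportH / designH);
}

// Applying it is a single CSS transform on the stage element.
function applyFit(stage) {
  const s = fitScale(1920, 1080, window.innerWidth, window.innerHeight);
  stage.style.transform = `scale(${s})`;
  stage.style.transformOrigin = "top left";
}
```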

Two implementation details stood out:

The @page rule has to be injected into the outer document because shadow DOM ignores @page. So the component walks up and writes into document.head during connectedCallback. This is the kind of detail that gets no documentation credit but separates "works in print" from "falls apart at export time."
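Here's how I'd sketch that fix — a plain function rather than the component's actual code, with the function name and style `id` as my own placeholders:

```javascript
// @page is ignored inside shadow DOM, so the print rule has to be written
// into the outer document's <head>. Called from connectedCallback().
function injectPrintStyle(doc) {
  // Guard against double-injection when several decks share one page.
  if (doc.getElementById("deck-stage-print")) return;
  const style = doc.createElement("style");
  style.id = "deck-stage-print";
  style.textContent = "@page { size: 1920px 1080px; margin: 0; }";
  doc.head.appendChild(style);
}
```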

Slides are hidden, not unmounted, with visibility: hidden; opacity: 0. That preserves the state of videos, iframes, form inputs, and React subtrees across navigation. If you're building a slide system in React with conditional rendering, you're quietly discarding state every time the user hits the arrow key. A cheap fix with meaningful UX consequences.
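The hide-don't-unmount idea is small enough to show. This CSS is my sketch of it; the `is-current` class name is a placeholder, not taken from the component:

```css
/* Non-current slides stay in the DOM, just invisible — so video playback
   position, iframe state, and form inputs survive navigation. */
deck-stage > section { visibility: hidden; opacity: 0; }
deck-stage > section.is-current { visibility: visible; opacity: 1; }
```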

Pattern 3 — SKILL.md: A Manifest Format, Not a System Prompt

The skill manifest is smaller than I expected. Three frontmatter fields:

```yaml
---
name: <skill-kebab-case>
description: "Use this skill to generate well-branded interfaces and assets for <domain>. Contains <key files> for <style context>."
user-invocable: true
---
```

The body reads like a protocol, not a persona:

  • "Read README.md within this skill first."
  • "Then look at colors_and_type.css."
  • "If creating visual artifacts (slides, mocks): copy assets out, produce static HTML."
  • "If working on production code: treat <path>/frontend/app/ as canonical."
  • "If invoked with no other guidance: ask 3–5 questions about scope and audience, then act as an expert designer."
  • "Always flag: font substitutions, chart color choices, and any deviation from the documented color contract."

Three things about this format stood out:

Description is the routing signal. The orchestrator decides when to invoke the skill by reading the description alone. So the description has to encode domain, output type, and stylistic signal in one paragraph — different from how most agent frameworks define a role.

The body is a branched protocol. "If A, do X. If B, do Y." Not a soft persona, not a goal statement. Concrete execution paths keyed to invocation context.

"Always flag" is mandatory self-reporting at the manifest level. Fonts were substituted? Flag it. Deviated from the color contract? Flag it. It's an anti-hallucination pattern written into the skill definition rather than left to the model to remember.

I don't think the manifest format itself is novel — it's structurally close to how Claude Code's existing SKILL.md works. But its use as an agent interface for a consumer-facing design product is a concrete shape I haven't seen written down this cleanly before.

Pattern 4 — Output = Self-Contained HTML Bundle

The artifact isn't stored in a proprietary database. It's a folder:

```text
IR/
├── assets/
│   └── colors_and_type.css
├── IR Deck - Hi-fi.html
└── deck-stage.js
```

The HTML references the CSS by relative path and the JS by relative path. Everything is physically co-located.

Zip it, upload to any static host, it works. No build step. No framework runtime. No server-side rendering required.

There's a small interesting trick: the same colors_and_type.css file appears as copies in multiple subfolders — one for the deck, one for the UI kit, one for the preview cards. The bundle is optimized for survival, not deduplication. If a user downloads just the deck folder, they don't lose styling.

More bytes, no broken links. For a consumer product where users will definitely cut-and-paste the wrong subset of files, that tradeoff probably earns itself back quickly.

Why This Shape Is Interesting

Going in, my mental model was roughly: "Claude Design is a big AI system that generates design output."

What I came away with is closer to: "It looks like a thin AI orchestration layer over a carefully engineered runtime and manifest format, and the interesting work is in the runtime."

The model writes HTML, CSS, and SKILL.md files into this system. The system is what makes the output interactive, exportable, and robust across environments. If Anthropic swapped the model tomorrow for a comparable one, my guess is the user experience would barely change — because the experience is mostly the runtime.

That reframes the build-vs-buy question for anyone working in this space. You may not need a design-specialized model to get most of the user-facing value. What seems to matter more is:

  • A token CSS file with tight, opinionated choices.
  • A data-* attribute theming layer.
  • A Web Component (or equivalent) that handles presentation concerns: scale, navigation, print.
  • A manifest format for the skills that generate into the runtime.
  • A bundle format that survives being zipped and sent around.

Build that scaffold, then any capable LLM plausibly becomes your design engine. If this read is right, the hard work isn't the AI — it's the runtime the AI writes into. I'm hedging because one bundle inspection isn't enough to generalize.

What I'm Taking

Four patterns I'm adapting into our own stack:

  • Four-axis Tweaks (theme / accent / layout / density). Roughly fifty lines of CSS and JavaScript for a meaningful UX upgrade. Low risk, high visibility.
  • @page dynamic injection for PDF export. Potentially removes the need for a separate PDF library in slide-style outputs.
  • SKILL.md manifest format for our agents. Three-field frontmatter, branched body, mandatory "Always flag" section. Structural improvement on how we currently define agent behavior.
  • Self-contained HTML bundles as the default artifact. No server dependency, zippable, survives cut-and-paste. Lowers the support surface dramatically for client-facing deliverables.

I'm leaving aside porting the slide Web Component for now, because we already have a working runtime and the license review cost isn't obviously worth the marginal gain. The patterns above are portable with a day or two of work each.

The Open Question

The specific shape I'm describing — four patterns around a model — isn't obviously unique to design. It's roughly the shape of a lot of AI-labeled products right now. The product does something useful and the model is the visible new thing, but most of what makes it work seems to be conventional engineering inside a well-thought-out structure.

If that holds, a lot of "AI products" are really platform products where the AI is the entry point rather than the engine. The scarce skill in that world isn't prompting. It's designing the runtime the prompts write into.

I'm not sure whether that's a durable observation or a snapshot of where we are in this early phase of AI product maturity. But it's the one I came away from this source read with, and it's shifted how I'm thinking about our own architecture.

If you're building in this space, reading the bundles your tools produce might teach you more than reading the marketing. That's the part I'd put the most confidence on.
