Petri Lahdelma

From Template to Tested Product: Launching the LLM Component Schema Standard

If you lead a design system, frontend platform, or design engineering team and you are trying to make AI-generated UI predictable in production, this is for you.

After a lot of positive feedback on the original LLM Component Schema Template, I expanded it into a fully tested and versioned standard with npm packages, CI enforcement, migration tooling, and release artifacts.

The core idea (short version)

Most AI UI output fails because component intent is implicit.

This project makes intent explicit and machine-readable:

  1. Tokens define visual possibilities.
  2. Component schema defines structure, behavior, accessibility, and constraints.
  3. Generative rules define context-aware adaptation.
  4. Storybook + CI + Code Connect enforce contract fidelity in delivery.

That shifts teams from "AI draws screens" to "AI follows a system."
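To make "explicit and machine-readable" concrete, here is a hypothetical, abridged Button contract expressed as a TypeScript object. The field names are illustrative, not the exact contract shape — see the repo's templates for the real one:

```typescript
// Hypothetical, abridged component contract for a Button.
// Field names are illustrative; the repo's contract templates define the real shape.
const buttonSchema = {
  version: "v2",
  componentName: "Button",
  props: [
    { name: "label", type: "string", required: true },
    { name: "variant", type: '"primary" | "secondary"', required: false },
  ],
  states: ["default", "hover", "focus", "disabled"],
  accessibility: {
    role: "button",
    keyboard: ["Enter", "Space"], // activation keys the implementation must honor
  },
  generativeRules: [
    "Follow design tokens for color and spacing",
    "Never rename props when generating variants",
  ],
};

// Once intent is data, a generator or linter can check it mechanically:
const requiredProps = buttonSchema.props
  .filter((p) => p.required)
  .map((p) => p.name);
console.log(requiredProps); // ["label"]
```

The point is not this particular shape but that every constraint an LLM must respect exists as data, not as tribal knowledge.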

What is now production-grade

This repository is no longer just templates. It now includes:

  • Reference implementations (examples/) with real components + schema.json pairs.
  • Golden test corpus (fixtures/) for valid and invalid schema behavior.
  • Schema drift detection across component props, schema fields, and Storybook args.
  • Versioned contracts + migration path (v1 -> v2).
  • Agent eval harness with pass/fail scenarios.
  • Failing examples library to train quality instincts.
  • Release automation with immutable artifacts and checksums.
  • Governance defaults (CODEOWNERS, PR template, schema issue templates).
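As a rough illustration of what schema drift detection means here — not the repo's actual implementation — comparing the prop names a schema declares against the prop names the component actually exposes might look like:

```typescript
// Sketch of the idea behind drift detection: diff schema-declared prop names
// against implemented / Storybook-exposed prop names. Illustrative only.
type Drift = { missingInComponent: string[]; missingInSchema: string[] };

function detectPropDrift(schemaProps: string[], componentProps: string[]): Drift {
  const schemaSet = new Set(schemaProps);
  const componentSet = new Set(componentProps);
  return {
    // declared in schema.json but never implemented
    missingInComponent: schemaProps.filter((p) => !componentSet.has(p)),
    // implemented but undeclared — silent contract drift
    missingInSchema: componentProps.filter((p) => !schemaSet.has(p)),
  };
}

const report = detectPropDrift(
  ["label", "variant", "disabled"],
  ["label", "variant", "size"],
);
console.log(report);
// { missingInComponent: ["disabled"], missingInSchema: ["size"] }
```

Either direction of drift is a bug: the first breaks generation, the second breaks the contract.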

Screenshots

Canonical repo structure · tagged release pipeline output · npm packages (contracts + CLI)
Get started in minutes

Install contracts + CLI:

npm install @petritapanilahdelma/llm-component-contracts
npm install -D @petritapanilahdelma/llm-component-cli

Validate a schema, check drift, migrate contract versions, and scaffold a new component starter:

npx llm-component-schema validate examples/base/components/Button/schema.json
npx llm-component-schema drift-check --root . --out dist/schema-drift-report.json
npx llm-component-schema migrate --from v1 --to v2 --input v1.json --output v2.json
npx llm-component-schema init StatusChip --style tailwind --target examples/generated/components

Run the full quality gate locally:

scripts/schema-lint.sh
scripts/run-fixture-tests.sh
scripts/check-schema-drift.sh
scripts/run-agent-evals.sh

Use the contracts directly in code

import { schemas, schemaPaths, migrateV1ToV2 } from "@petritapanilahdelma/llm-component-contracts";

console.log("Current contract:", schemas.contracts.v2.title);
console.log("Base template path:", schemaPaths.templates.base);

const upgraded = migrateV1ToV2({
  version: "v1",
  componentName: "Button",
  props: [{ name: "label", type: "string", required: true }],
  variants: [],
  states: [],
  accessibility: { role: "button", keyboard: [] },
  behaviorRules: [],
  generativeRules: ["Follow design tokens"]
});

console.log("Upgraded version:", upgraded.version);
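The real migrateV1ToV2 ships in the contracts package; as a mental model only (not the actual implementation), a v1 -> v2 migration typically bumps the version and backfills fields the new contract requires:

```typescript
// Simplified sketch of what a v1 -> v2 contract migration does — illustrative only;
// the real migrateV1ToV2 lives in @petritapanilahdelma/llm-component-contracts,
// and "tokensRef" here is a hypothetical example of a v2-only field.
type V1Schema = { version: "v1"; componentName: string; [key: string]: unknown };
type V2Schema = V1Schema & { version: "v2"; tokensRef: string } | {
  version: "v2";
  componentName: string;
  tokensRef: string;
  [key: string]: unknown;
};

function migrateV1ToV2Sketch(input: V1Schema): V2Schema {
  return {
    ...input,          // carry all v1 fields forward unchanged
    version: "v2",     // bump the contract version
    tokensRef: "tokens/default.json", // backfill a new required field with a default
  };
}

const upgradedSketch = migrateV1ToV2Sketch({ version: "v1", componentName: "Button" });
console.log(upgradedSketch.version); // "v2"
```

Keeping migrations pure and lossless like this is what makes a versioned contract safe to roll out incrementally.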

Example CI check (GitHub Actions)

name: Quality Gates

on:
  pull_request:
  push:
    branches: [main]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: scripts/schema-lint.sh
      - run: scripts/run-fixture-tests.sh
      - run: scripts/check-schema-drift.sh
      - run: scripts/run-agent-evals.sh

Where Claude + Figma Code Connect fit in

The practical production loop:

  1. Define component contract and generative rules.
  2. Implement component + Storybook + schema in one folder.
  3. Attach Code Connect stubs so design and implementation stay aligned.
  4. Use Claude prompts from the playbook to generate/iterate safely.
  5. Gate every change through drift checks + evals before merge.

This prevents "looks right, behaves wrong" output from entering your system.
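To make step 5 concrete, a pass/fail eval scenario can be as simple as asserting contract properties over generated output. A minimal sketch, with hypothetical scenario names and checks (the repo's harness is richer than this):

```typescript
// Minimal sketch of a pass/fail eval gate over generated component source.
// Scenario names and string checks are hypothetical, not the repo's eval harness.
type Scenario = { name: string; check: (generated: string) => boolean };

const scenarios: Scenario[] = [
  { name: "keeps required label prop", check: (src) => src.includes("label") },
  { name: "declares role=button", check: (src) => src.includes('role="button"') },
];

function runEvals(generated: string): { name: string; pass: boolean }[] {
  return scenarios.map((s) => ({ name: s.name, pass: s.check(generated) }));
}

const results = runEvals('<button role="button">{label}</button>');
const failed = results.filter((r) => !r.pass);
console.log(failed.length === 0 ? "PASS" : "FAIL"); // prints "PASS"
```

In CI, a non-empty failed list exits non-zero, so "looks right, behaves wrong" output never merges.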

Why this matters

For teams scaling AI-assisted UI delivery, the hard part is not generation speed. The hard part is contract fidelity across design, code, and runtime behavior.

This project is designed to make that fidelity enforceable.

If your team has drift incidents, inconsistent generated components, or repeated accessibility regressions, this gives you a concrete operating model to stop that at the system level.

What to do next

  1. Clone the repo and run the full quality gates.
  2. Start with one high-volume component (Button, Alert, Input) and model it end-to-end.
  3. Add drift + eval checks as required CI gates.
  4. Adopt versioned contracts and migration policy before broad rollout.

If useful, I can publish a follow-up with a real migration walkthrough (v1 -> v2) and a complete CI/CD setup for multi-package releases.
