If you lead a design system, frontend platform, or design engineering team and you are trying to make AI-generated UI predictable in production, this is for you.
After a lot of positive feedback on the original LLM Component Schema Template, I expanded it into a fully tested and versioned standard with npm packages, CI enforcement, migration tooling, and release artifacts.
Quick links
- Canonical repository: github.com/PetriLahdelma/llm-component-schema-template
- Contracts package: npmjs.com/package/@petritapanilahdelma/llm-component-contracts
- CLI package: npmjs.com/package/@petritapanilahdelma/llm-component-cli
- Agentic design system playbook: WORLD_CLASS_AGENTIC_DESIGN_SYSTEM_PLAYBOOK.md
- Claude + Figma Code Connect instructions: CLAUDE_FIGMA_CODE_CONNECT_INSTRUCTIONS.md
The core idea (short version)
Most AI UI output fails because component intent is implicit.
This project makes intent explicit and machine-readable:
- Tokens define visual possibilities.
- Component schema defines structure, behavior, accessibility, and constraints.
- Generative rules define context-aware adaptation.
- Storybook + CI + Code Connect enforce contract fidelity in delivery.
That shifts teams from "AI draws screens" to "AI follows a system."
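As an illustration, a contract that makes intent explicit might pair structural props with accessibility and generative constraints. The field names below mirror the v1 contract shape used in the migration example later in this post; treat this as a sketch, not the canonical schema:

```typescript
// Illustrative contract shape (field names follow the v1 example later in this post).
interface ComponentContract {
  version: string;
  componentName: string;
  props: { name: string; type: string; required: boolean }[];
  variants: string[];
  states: string[];
  accessibility: { role: string; keyboard: string[] };
  behaviorRules: string[];
  generativeRules: string[];
}

const button: ComponentContract = {
  version: "v1",
  componentName: "Button",
  props: [{ name: "label", type: "string", required: true }],
  variants: ["primary", "secondary"],
  states: ["default", "hover", "disabled"],
  accessibility: { role: "button", keyboard: ["Enter", "Space"] },
  behaviorRules: ["Disabled buttons never fire onClick"],
  generativeRules: ["Follow design tokens"],
};

// A generator that reads this contract knows exactly what it may and may not emit.
console.log(button.componentName, button.props.length);
```

Because the contract is plain data, it can be validated in CI, diffed across versions, and handed to an agent verbatim.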
What is now production-grade
This repository is no longer just templates. It now includes:
- Reference implementations (`examples/`) with real component + `schema.json` pairs.
- Golden test corpus (`fixtures/`) covering valid and invalid schema behavior.
- Schema drift detection across component props, schema fields, and Storybook args.
- Versioned contracts with a migration path (v1 -> v2).
- Agent eval harness with pass/fail scenarios.
- Failing-examples library to train quality instincts.
- Release automation with immutable artifacts and checksums.
- Governance defaults (CODEOWNERS, PR template, schema issue templates).
Screenshots
- Canonical repo structure
- Tagged release pipeline output
- npm packages (contracts + CLI)
Get started in minutes
Install contracts + CLI:
```shell
npm install @petritapanilahdelma/llm-component-contracts
npm install -D @petritapanilahdelma/llm-component-cli
```
Validate a schema, check drift, migrate contract versions, and scaffold a new component starter:
```shell
npx llm-component-schema validate examples/base/components/Button/schema.json
npx llm-component-schema drift-check --root . --out dist/schema-drift-report.json
npx llm-component-schema migrate --from v1 --to v2 --input v1.json --output v2.json
npx llm-component-schema init StatusChip --style tailwind --target examples/generated/components
```
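Conceptually, a drift check compares the names a component's schema declares against what the implementation and its Storybook stories actually expose. This toy sketch is not the CLI's actual algorithm (the real tool inspects files on disk); `driftBetween` and its inputs are illustrative:

```typescript
// Toy drift check: report props present in one source but not the other.
function driftBetween(schemaProps: string[], storyArgs: string[]): string[] {
  const schema = new Set(schemaProps);
  const story = new Set(storyArgs);
  const drift: string[] = [];
  for (const p of schemaProps) if (!story.has(p)) drift.push(`missing in stories: ${p}`);
  for (const a of storyArgs) if (!schema.has(a)) drift.push(`missing in schema: ${a}`);
  return drift;
}

// Example: "disabled" is declared but never exercised; "onClick" is undeclared.
const report = driftBetween(["label", "disabled"], ["label", "onClick"]);
console.log(report); // → ["missing in stories: disabled", "missing in schema: onClick"]
```

A non-empty report is exactly the kind of signal you want to fail a pull request on, rather than discover in production.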
Run the full quality gate locally:
```shell
scripts/schema-lint.sh
scripts/run-fixture-tests.sh
scripts/check-schema-drift.sh
scripts/run-agent-evals.sh
```
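To run the gate as a single command, the four scripts can be chained in `package.json`. The script name `quality:gate` is my own choice, not part of the repo; adjust the paths to your layout:

```json
{
  "scripts": {
    "quality:gate": "scripts/schema-lint.sh && scripts/run-fixture-tests.sh && scripts/check-schema-drift.sh && scripts/run-agent-evals.sh"
  }
}
```

Using `&&` means the chain stops at the first failing gate, which keeps local runs fast and CI output unambiguous.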
Use the contracts directly in code
```typescript
import { schemas, schemaPaths, migrateV1ToV2 } from "@petritapanilahdelma/llm-component-contracts";

console.log("Current contract:", schemas.contracts.v2.title);
console.log("Base template path:", schemaPaths.templates.base);

const upgraded = migrateV1ToV2({
  version: "v1",
  componentName: "Button",
  props: [{ name: "label", type: "string", required: true }],
  variants: [],
  states: [],
  accessibility: { role: "button", keyboard: [] },
  behaviorRules: [],
  generativeRules: ["Follow design tokens"]
});

console.log("Upgraded version:", upgraded.version);
```
Example CI check (GitHub Actions)
```yaml
name: Quality Gates

on:
  pull_request:
  push:
    branches: [main]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: scripts/schema-lint.sh
      - run: scripts/run-fixture-tests.sh
      - run: scripts/check-schema-drift.sh
      - run: scripts/run-agent-evals.sh
```
Where Claude + Figma Code Connect fits
The practical production loop:
- Define component contract and generative rules.
- Implement component + Storybook + schema in one folder.
- Attach Code Connect stubs so design and implementation stay aligned.
- Use Claude prompts from the playbook to generate/iterate safely.
- Gate every change through drift checks + evals before merge.
This prevents "looks right, behaves wrong" output from entering your system.
Why this matters
For teams scaling AI-assisted UI delivery, the hard part is not generation speed. The hard part is contract fidelity across design, code, and runtime behavior.
This project is designed to make that fidelity enforceable.
If your team has drift incidents, inconsistent generated components, or repeated accessibility regressions, this gives you a concrete operating model to stop that at the system level.
What to do next
- Clone the repo and run the full quality gates.
- Start with one high-volume component (Button, Alert, Input) and model it end-to-end.
- Add drift + eval checks as required CI gates.
- Adopt versioned contracts and migration policy before broad rollout.
If useful, I can publish a follow-up with a real migration walkthrough (v1 -> v2) and a complete CI/CD setup for multi-package releases.