CSS architecture has long been built around human-centered workflows. The naming conventions of BEM/SMACSS and the design philosophy of utility-first both assume authors who memorize rules, make judgment calls, and maintain consistency through review.
But that assumption is changing. AI coding agents like Claude Code, Cursor, and Windsurf are becoming a normal part of UI implementation. In the Tailwind ecosystem, llms.txt — a file that feeds project rules and usage patterns to AI — has emerged as one approach to guide AI output.
llms.txt represents a "teach the rules to AI" approach. But the real question is: can we design an architecture that doesn't break down even without that teaching?
For tech leads and technical directors responsible for maintaining CSS quality across a team — and for developers ready to move beyond "CSS that just works" toward professional-grade architecture — this is an unavoidable question.
## The Limits of Convention-Based Design
Utility-first is often said to pair well with AI. Since class names directly represent styles, AI can reproduce designs more easily. But whether the output is "production quality" is a separate question. For example, should a card component's margin belong to the parent or the child? Should visually identical elements be shared or duplicated? These structural decisions remain even with utility-first.
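The margin-ownership question can be made concrete with two placements that render identically but differ architecturally. This is an illustrative sketch only (the class names are hypothetical, and neither option is presented here as SpiraCSS's rule):

```css
/* Option A: each child owns its own outer spacing */
.card {
  margin-bottom: 16px;
}

/* Option B: the parent owns the spacing between its children,
   using the "owl" selector (every child that follows a sibling) */
.card-list > * + * {
  margin-top: 16px;
}
```

Utility-first class names express *what* the spacing is, but not *which* of these two structures it should take — that decision still falls to the author, human or AI.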
Convention-based designs like BEM face the same challenge. The model of maintaining quality through conventions + reviews shares the same weaknesses whether the author is human or AI:
- There's a cost to memorizing conventions (or teaching them to AI)
- Decisions are subjective, and review standards tend to drift
- Consistency erodes as teams grow
Even if you feed AI a long instruction document to enforce rules, accuracy drops as context grows. The same problem humans face reappears in a different form.
## From "Rules to Follow" to "Feedback Systems"
CSS architecture should shift from "rules to memorize and follow" to "feedback systems."
TypeScript's type system is a useful reference. You don't need to memorize type rules — just write code and the compiler tells you "this is wrong" and "fix it like this." Developers simply follow the feedback, and the code converges to type safety.
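The TypeScript feedback loop fits in a few lines. This is an illustrative example (not code from the series): the misuse is left as a comment, with the compiler's actual diagnostic alongside it.

```typescript
// The type checker reports what's wrong and suggests the fix;
// nobody has to memorize the rule in advance.
function area(width: number, height: number): number {
  return width * height;
}

// area("10", 20);
// ^ tsc: Argument of type 'string' is not assignable to
//   parameter of type 'number'.  The error names the fix.

console.log(area(10, 20)); // 200
```

The author (or AI agent) reads the diagnostic, applies the stated fix, and the code converges to type safety without anyone consulting a rulebook.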
CSS architecture needs the same structure. Instead of teaching rules upfront, the system should respond to what's written with "what's wrong" and "how to fix it." Humans read error messages and fix issues; AI agents parse lint output and self-correct — a feedback loop where the design converges regardless of who writes the code.
## About This Series
This series uses SpiraCSS — a CSS architecture methodology I created and use in production — as a concrete example to explore what CSS architecture should look like in the AI era.
Existing CSS methodologies assume humans memorize and follow naming conventions and file structures. SpiraCSS takes a different approach. It makes component structure and property placement mechanically verifiable through Stylelint, with error messages that tell you exactly what to fix and how — going well beyond conventional CSS lint, which typically covers only naming and property duplication.
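As a minimal sketch of rule-as-feedback using generic Stylelint (not SpiraCSS's actual ruleset — its custom rules go further than this), a configuration can attach a fix-oriented message to a rule, so the lint output tells the author what to change rather than only that something is wrong:

```json
{
  "rules": {
    "declaration-block-no-duplicate-properties": [true, {
      "message": "Duplicate property — keep one declaration and delete the rest."
    }],
    "selector-class-pattern": ["^[a-z][a-z0-9-]*$", {
      "message": "Class names must be lowercase kebab-case — rename the selector."
    }]
  }
}
```

Both a human reading the terminal and an AI agent parsing the lint output receive the same actionable instruction, which is the feedback-loop structure the series builds on.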
- Part 2: CSS Drift in the AI Era — Why Conventions Break Down and How Machine-Verifiable Rules Fix It
- Part 3: CSS Architecture Lint in the AI Era — TypeScript-Style Errors That Tell You How to Fix It
- Part 4: Parent Owns Layout — A New CSS Architecture for the AI Era — Drift-Resistant by Design
- Part 5: Same Lint, Same Result — A Stylelint Toolchain for Humans and AI Agents
If your AI agents keep producing inconsistent CSS, or your team keeps raising the same issues in review — this series may offer a useful perspective.
Next up, we start by concretely defining what resilience (drift resistance) means.
How does your team maintain CSS design consistency? Guidelines, reviews, lint — what's working and what's breaking down?
SpiraCSS's design specs, tools, and source code are all open source.
- SpiraCSS: https://spiracss.jp
- GitHub: https://github.com/zetsubo-dev/spiracss