DEV Community

zetsubo
CSS Drift in the AI Era — Why Conventions Break Down and How Machine-Verifiable Rules Fix It

In Part 1, I argued that CSS architecture should shift from "rules to memorize and follow" to a "feedback system." But what exactly does "resilient" (drift-resistant) design mean? This part defines the phenomenon of design degradation as "drift" and introduces the concept of "invariants" to prevent it.

Drift — How Design Breaks Down

CSS design doesn't collapse all at once. Small decision inconsistencies accumulate, and by the time you notice, coherence is lost. I call this "drift."

In this series, "breaking down" refers specifically to three things:

  • Component boundary ambiguity: Where one component ends and another begins differs from person to person
  • Layout responsibility confusion: Whether margin belongs to parent or child varies file by file
  • Naming and structural inconsistency: Naming conventions and file organization vary across the project

For example, here are common forms of drift in BEM-based projects:

  • One team member writes margin on the child component itself; another specifies it from the parent
  • "Is this element an independent component or part of its parent?" — answers differ by person
  • Modifier granularity varies from file to file

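The first bullet above can be sketched in CSS (class names are hypothetical):

```css
/* Author A: the child component carries its own spacing */
.card__title {
  margin-top: 16px;
}

/* Author B: the parent positions its direct children */
.card > .card__title {
  margin-top: 16px;
}
```

Both snippets are valid BEM, and a typical style guide forbids neither — the spacing decision simply lands in a different place depending on who wrote it.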
None of these are "rule violations." The interpretive latitude in guidelines allows decision drift. And that latitude widens as teams grow.

Introducing AI coding agents doesn't change this structure. Even with guidelines packed into the context window, output varies wherever judgment is required. Human drift simply becomes AI drift.

Why Conventions Can't Prevent Drift

Convention-based designs share a structural weakness:

  1. Memorization burden: The longer the guidelines, the harder it is for everyone to accurately remember and apply them
  2. Subjective decisions: Questions like "where does one component end?" have no objective answer
  3. Verification difficulty: Whether something violates a convention can't be determined mechanically — you can only rely on reviews

When all three of these are present, drift is inevitable. Reviews maintain quality only while the reviewer's judgment stays consistent. And reviewers are human — their judgment drifts too.

Invariants — Eliminating Interpretive Ambiguity

Programming has the concept of "invariants" — conditions that must hold at every point in a program. An array index staying within bounds, an account balance never going negative — these conditions are mechanically enforced through types and assertions.
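As a minimal illustration of that idea (a hypothetical `Account` class, not from the article), an invariant checked by an assertion rather than remembered by reviewers:

```javascript
// Hypothetical example: the invariant "balance never goes negative"
// is verified mechanically on every mutation.
class Account {
  #balance = 0;

  #assertInvariant() {
    if (this.#balance < 0) {
      throw new Error(`Invariant violated: balance is ${this.#balance}`);
    }
  }

  deposit(amount) {
    this.#balance += amount;
    this.#assertInvariant();
    return this.#balance;
  }

  withdraw(amount) {
    this.#balance -= amount;
    this.#assertInvariant(); // fails loudly instead of drifting silently
    return this.#balance;
  }
}
```

The point is not the class itself but where the check lives: in code that runs, not in a document someone must recall.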

The same idea applies to CSS architecture. Eliminate rules whose answers change based on interpretation, and adopt only conditions that can be mechanically evaluated as true or false.

Concrete examples:

| Convention-based (requires interpretation) | Invariant (mechanically verifiable) |
| --- | --- |
| "Decompose components into appropriate granularity" | "A Block (component) can only contain Blocks or Elements as direct children" (details in Part 4) |
| "Parents manage layout" | "margin-top can only be specified from the parent's `> .child` selector" |
| "Keep naming consistent" | "Blocks use two-word kebab-case; Elements use a single word" |
| "Distinguish between state and variation" | "Variants use `data-variant`; States use `data-state`" |

The left column requires human interpretation to judge correctness. The right column can be judged by pattern matching on class names and selector structure, which makes automated lint verification possible.
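For instance, the `margin-top` invariant from the table picks one of two equally defensible styles and fixes it (selectors below are illustrative):

```css
/* Violates the invariant: spacing declared on the child itself */
.card-title {
  margin-top: 16px;
}

/* Satisfies the invariant: the parent spaces its direct children */
.product-card > .title {
  margin-top: 16px;
}
```

A linter can flag the first form purely from the selector shape — no judgment about "who owns the layout" is required.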

The examples in the right column are actual invariants used in SpiraCSS, which I've developed and use in production. Part 4 covers them in detail.

Design Criteria for Invariants

Not every rule can become an invariant. To function as one, a rule needs these properties:

  1. Binary evaluation: Violation or compliance is determined unambiguously (no gray zones)
  2. Syntax-verifiable: Can be determined through source code syntax analysis alone (no runtime needed)
  3. Locally verifiable: Can be determined by looking at the target file alone (no project-wide scanning needed)

If the design is composed entirely of rules meeting these criteria, a lint tool can serve as the "gatekeeper." Humans don't need to make judgment calls — just follow the tool's verdict. The same goes for AI agents.
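As a sketch of such a gatekeeper (a toy checker, not SpiraCSS's actual tooling), a few lines of regex can verify the `margin-top` invariant against a single file — binary, syntax-only, and local:

```javascript
// Toy lint check: margin-top may only appear in a rule whose selector
// targets a direct child (contains ">"). Deliberately naive and
// regex-based — a real tool would use a proper CSS parser.
function checkMarginInvariant(cssSource) {
  const violations = [];
  // Match each "selector { body }" rule.
  const rulePattern = /([^{}]+)\{([^{}]*)\}/g;
  let match;
  while ((match = rulePattern.exec(cssSource)) !== null) {
    const selector = match[1].trim();
    const body = match[2];
    if (/margin-top\s*:/.test(body) && !selector.includes('>')) {
      violations.push(
        `"${selector}": margin-top must be set from the parent's > .child selector`
      );
    }
  }
  return violations; // empty array ⇒ the file satisfies the invariant
}
```

Note that the check needs nothing beyond the file's own text: no runtime, no project-wide scan, and no gray zone — exactly the three criteria above.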

"Isn't this just freezing preferences rather than defining 'correctness'?" That's a fair objection. "Write margin from the parent" isn't the only correct approach. But there's value in fixing design preferences to make all authors converge on the same output — especially in an era where the pool of authors includes both humans and AI. Having an unwavering standard itself supports team productivity.

Invariants Alone Aren't Enough

However, defining invariants alone won't preserve the design. Without detecting violations and communicating "what's wrong and how to fix it," corrections don't happen. Defining invariants, detecting violations, and presenting fix instructions — this concrete sequence is the "feedback loop" that turns the feedback system from Part 1 into something actionable.

The next part dives deeper into the design of this feedback loop. When lint returns not just "what's wrong" but "how to fix it," how should those error messages be designed?


SpiraCSS's design specs, tools, and source code are all open source.
