In the previous article, I talked about how our team moved from chaos to consistency.
This time, let's go deeper - into the engineering side of our design system: how we chose the right tools, why we made some trade-offs, and how these decisions shaped the foundation for everything that came later.
1. High-level architecture overview
Before diving into the details, it's worth mentioning that the actual technology stack or tool isn't what truly matters - it's why you pick one over another.
Tools come and go, but the reasoning behind your choices defines how maintainable and scalable your system will be in the long run.
Below, I'll walk through each technology we used and the reasoning behind those choices. Sometimes, the logic was the result of long discussions; other times, it was simply obvious (and that's perfectly fine - not every decision has to be over-engineered).
To give a quick overview, here's our tech-stack snapshot:
| Area | Tool / Approach |
|---|---|
| Packaging | Monorepo (Turborepo) |
| Styles | PostCSS Modules + Design Tokens |
| UI Components | Radix Primitives + Custom components |
| Documentation | Storybook |
| Testing | Chromatic visual diffs + Unit tests |
| Developer Experience | Scaffolding CLI, ESLint rules |
2. Consistency by Design: Mirroring Figma
We have a strong design team in the company, and Figma is our single place for creating and maintaining visual consistency - from colors and typography to component states and prototypes.
Designers use variables for colors, font styles, spacing, and shared components like Button, Input, or Dropdown to ensure reusability and alignment across all product teams.
When it comes to implementation, developers receive the final designs for a feature and translate them into code.
That translation, however, can easily become a bottleneck if both sides speak different "languages".
So before we even started building the design system, we defined one guiding principle:
No translation layer between Figma and code.
That meant - if a developer sees variant="secondary" on a Button in Figma, they should expect the exact same variant="secondary" prop in the UI kit component.
This approach brought two key benefits:
- A single source of truth - Figma. The design system in code is not a copy of Figma; it's an extension of it. We don't reinvent naming conventions or variants - we mirror them.
- Reduced cognitive overhead. Developers no longer need to mentally translate "what does Primary / Neutral / Subtle mean in code?" Instead, the codebase becomes a 1:1 reflection of the design decisions.
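As a sketch of what this mirroring looks like in practice (the variant and size names below are illustrative, not our actual API), the component's prop types simply restate the names a designer sees in Figma:

```typescript
// Sketch: Button props mirror the Figma component's variant names 1:1.
// The specific variant/size names here are illustrative assumptions.
type ButtonVariant = 'primary' | 'secondary' | 'subtle';
type ButtonSize = 'sm' | 'md' | 'lg';

interface ButtonProps {
  variant: ButtonVariant; // the exact token a designer sees in Figma
  size: ButtonSize;
  disabled?: boolean;
}

// Resolving a variant to a CSS Modules class stays mechanical:
// no renaming, no translation layer between design and code.
function buttonClassName(props: ButtonProps): string {
  return ['button', `variant-${props.variant}`, `size-${props.size}`].join(' ');
}
```

Because the names never diverge, a rename in Figma maps to a single type change in code, which the compiler then propagates to every usage.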
Of course, this approach wasn't free of trade-offs:
- Designers now need to be careful when renaming or restructuring Figma components, since the code directly relies on those definitions.
- Complex components (like comboboxes or date pickers) sometimes don't map cleanly to Figma due to differences in interaction logic or platform constraints.
Still, the result was worth it - this alignment made handoffs almost frictionless, and helped both designers and developers think in terms of system design, not just pixels or props.
3. Technology choices & rationale
3.1. ⚙️ Base Components
This choice was one of the most critical ones - because once we picked the approach, it would define the foundation for years. Changing the architecture later would require enormous effort, so we took this decision seriously.
For the first iteration, I created an ADR (Architecture Decision Record) describing all possible options, along with their pros and cons:
- Use an existing third-party design system.
- Build a fully custom system from scratch.
- Follow a hybrid approach - adopt a base library and build our own components on top of it.
After gathering feedback from fellow developers, we quickly ruled out the first option. Given the nature of our product, we wanted much more control over component behavior and appearance - our designers often push beyond standard UI patterns, and a pre-built system would become a limitation rather than a shortcut.
That left us with two viable options:
- building our own design system from scratch, or
- adopting Radix Primitives, a well-known component library among our developers.
For context: Radix Primitives provides fully unstyled, accessible components that offer full control over visual implementation, while still handling all logic and accessibility behind the scenes.
To make the comparison more objective, I used a simple "traffic light" approach: I defined several critical criteria - such as time-to-market, team expertise, scalability & future-proofing, maintainability & long-term viability, etc. - and evaluated both options accordingly.
In the end, both options proved to be viable - the evaluation showed that neither clearly outperformed the other. However, we decided to go with Radix Primitives, as the time-to-market (and therefore budget) criteria were the most critical for us as a fast-moving product company. We needed to move fast, ship reliably, and avoid reinventing every accessibility behavior.
However, we established a few important internal rules to keep our implementation consistent and maintain control:
- We never expose Radix components directly. All components are wrapped and exported only through our internal `ui-kit` package.
- Each component exposes only a minimal set of props. This enforces consistency and prevents uncontrolled API sprawl across teams.
- Custom components follow the same composition pattern as Radix. This keeps the implementation predictable and cohesive throughout the system.
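A minimal sketch of the wrapping rule (the primitive and its props are stand-ins here, not the real Radix API surface): the internal component re-exposes only a deliberate whitelist of the underlying primitive's props.

```typescript
// Sketch of the wrapping pattern, with a stand-in for a Radix primitive.
// In a real ui-kit this would wrap e.g. a @radix-ui/react-* component.
type PrimitiveSwitchProps = {
  checked: boolean;
  onCheckedChange: (checked: boolean) => void;
  asChild?: boolean; // internal composition detail
  forceMount?: boolean; // internal detail teams shouldn't touch
};

// Expose only a minimal, deliberate subset of props.
type SwitchProps = Pick<PrimitiveSwitchProps, 'checked' | 'onCheckedChange'>;

function createSwitchProps(props: SwitchProps): PrimitiveSwitchProps {
  // Internal details are fixed here, not configurable by consumers.
  return { ...props, asChild: false, forceMount: false };
}
```

The `Pick` type makes the whitelist explicit: if a team needs an extra prop, that becomes a design-system discussion rather than a one-off escape hatch.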
3.2. 🧩 CSS Architecture
Before starting the design system, we were using a CSS-in-JS approach. It worked well at the beginning, but by 2024 it had already started showing several limitations:
- `styled-components` relies on React Context, which makes server-side rendering inefficient - and even incompatible with React Server Components. We wanted to take full advantage of both Next.js SSR and React Server Components, so this became a blocker.
- Runtime styling overhead - no built-in CSS caching or minification, and a visible delay in style application. This could be improved with time investment, but it wasn't worth the complexity at that stage.
Since we were building the new design system from scratch, it was a great opportunity to rethink our CSS architecture and choose a more scalable, modern, and compatible solution.
To make the decision structured, we again used the traffic light approach, comparing four main options:
- CSS Modules - simple, widely supported, with preprocessors like SASS, PostCSS, or LESS.
- StyleX (Meta) - a compile-time CSS-in-JS approach with the benefits of static extraction, but also with migration complexity.
- Tailwind CSS - utility-first, scalable, and mature ecosystem.
- Our current CSS-in-JS - used as a baseline for comparison.
As a result, we had two promising candidates: CSS Modules and Tailwind CSS.
StyleX, while technically appealing, would have added unnecessary infrastructure overhead - especially if we had to support two CSS-in-JS systems simultaneously during migration. It also came with a learning curve, since it isn't a one-to-one replacement for styled-components.
While Tailwind is scalable, well-known, and future-proof, it came with a significant drawback for our setup. We wanted to keep Figma as the single source of truth and ensure a smooth handoff between design and code. The utility-first nature of Tailwind, however, introduced an additional translation layer: developers had to interpret design specs into utility classes, which added friction and made it harder to maintain a direct connection between the design system in Figma and its implementation in code.
That was the main reason why we decided to go with CSS Modules - they allowed us to keep naming conventions and structure aligned with Figma, making the implementation process more natural for both designers and developers.
When selecting a preprocessor, we picked PostCSS, because:
- It works on top of `.module.css` as a lightweight transpiler - no custom syntax or runtime overhead.
- It's highly extensible, with a huge ecosystem of plugins, yet doesn't introduce unnecessary abstraction.
- It's fully aligned with the official CSS spec, including modern features like CSS Nesting, giving it long-term stability.
- By 2024, SASS and LESS were already considered legacy, while PostCSS continued evolving with modern tooling and browser support.
Ultimately, we went with a combination of CSS Modules + PostCSS - a simple, fast, and standards-based approach that gave us flexibility without sacrificing performance or future compatibility.
3.3. 🎨 Design Tokens as the Source of Truth
As you may have already noticed, the design system project became a pivot point for us - an opportunity to introduce modern technologies, workflows, and best practices, while gradually updating our existing codebase to align with them.
Before that (as I mentioned in the first article of the series), we were using abstract design tokens implemented as CSS variables to style UI components.
While this approach served us reasonably well, it revealed clear limitations once we started aiming for a more scalable and maintainable design system.
First, the same token was often serving multiple purposes.
For example, inkMain might be used both as a text color in one case and as a background color in another.
This made updates risky: whenever we changed a token's value, we had to search for all its usages and verify that the new color didn't cause contrast ratio or readability issues.
Second, it was hard to establish a consistent color scale - for example, from light gray to dark gray - because tokens were used inconsistently, and there was no clear hierarchy between base (primitive) and contextual (semantic) values.
To address this, we introduced a three-layer token model:
Primitive, Semantic, and Component-specific tokens.
This approach didn't require much debate - it was warmly welcomed by both designers and developers, since it created a shared vocabulary and made color updates safer and more predictable.
You can find similar models described in Figma Learn, Atlassian Design Tokens, or Adobe Spectrum.
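To make the three layers concrete, here is a small sketch in CSS (the token names and values are illustrative, not our actual palette):

```css
/* 1. Primitive tokens: raw values, never used directly in components. */
:root {
  --color-gray-95: #f2f2f4;
  --color-gray-10: #17171a;
}

/* 2. Semantic tokens: purpose-driven aliases of primitives. */
:root {
  --color-background-panel: var(--color-gray-95);
  --color-background-interactive: var(--color-gray-95);
  --color-text-primary: var(--color-gray-10);
}

/* 3. Component-specific tokens: scoped to a single component. */
.button {
  --button-background: var(--color-background-interactive);
  background: var(--button-background);
}
```

Note that two semantic tokens may share a primitive value today but can diverge later without touching any component code - that is exactly what makes updates safe.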
Despite the obvious benefits, this structure also introduced a few new challenges:
- Designers had to learn the new naming conventions and meanings of tokens. For example, both `--color-background-panel` and `--color-background-interactive` might reference the same primitive token `--color-gray-95`, but they serve different purposes: one for static surfaces (panels, containers), and the other for interactive elements (buttons, inputs, etc.).
- Developers had to adapt as well - using only semantic tokens instead of falling back to primitive ones, even when the design itself seemed to suggest otherwise. This required some muscle memory and discipline, but it paid off in long-term maintainability and consistency.
3.4. 📚 Storybook + Chromatic
So, the foundation was ready: Design Tokens, CSS Modules with PostCSS, Radix Primitives for base components (and custom UI components where needed), and the existing infrastructure for the Design System package.
The bare minimum was in place - now it was time to make it both clear for its users (developers) and trustworthy in terms of quality and stability.
We were already using Storybook in our main applications to showcase user stories, and it had proven to be a great fit for our workflow.
So there was no debate - we simply adopted it for the design system as well.
Since the design system can also be valuable for non-developers (e.g., designers or PMs who want to check if a certain pattern or interaction already exists in code), we decided to publish our Storybook using Chromatic.
It made the stories easily accessible to anyone in the company, with versioned previews for every change.
I won't go deep into Storybook configuration or story implementation here - there are already plenty of excellent resources on that.
Design system components are visually rich but behaviorally simple - they mostly render UI with the correct appearance and delegate event handlers to underlying DOM elements.
Because of that, the reliability of our system depends heavily on visual accuracy rather than complex business logic.
To ensure quality, we adopted a three-layer testing strategy:
- Static analysis, such as type checking and linting (enabled by default).
- Unit tests, covering isolated logic - e.g., disabled states, keyboard interactions, or component hooks.
- Visual regression tests, to catch unintended UI changes.
Let's skip static and unit tests - those are well-covered by industry practices - and focus on the visual regression part.
For that, we used Chromatic's visual snapshot testing, integrated directly with our Storybook setup (docs).
It's not worth creating visual snapshots for every single story - that would be costly and redundant.
Instead, we focused only on representative visual states for each component.
To keep this consistent, we disabled automatic snapshots by default and introduced a rule:
Each component must have a dedicated `Snapshot` story that renders all relevant visual variations for regression testing.
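As a sketch of that convention (assuming CSF3-style stories and Chromatic's `disableSnapshot` parameter; render functions and component wiring are omitted for brevity):

```typescript
// Sketch of the Snapshot-story convention (CSF3-like shape, simplified).
const meta = {
  title: 'Button',
  // Disable Chromatic snapshots for all stories in this file by default.
  parameters: { chromatic: { disableSnapshot: true } },
};
export default meta;

// Regular stories exist for documentation, not for visual regression.
export const Primary = { args: { variant: 'primary' } };

// The dedicated Snapshot story opts back in and is expected to render
// every relevant visual variation of the component in one frame.
export const Snapshot = {
  parameters: { chromatic: { disableSnapshot: false } },
  args: { /* render all variant/size combinations here */ },
};
```

One snapshot per component keeps the Chromatic bill predictable while still covering every visual state that matters.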
It's also worth mentioning that Chromatic provides other testing capabilities - visual, accessibility, and interaction tests.
However, in our case, those felt like overhead, as the same coverage could be achieved through unit tests - and, importantly, free of charge.
Finally, to ensure no UI change goes unnoticed, we added a Chromatic CI step (docs) into our CI/CD pipeline.
It highlights visual diffs for every pull request and requires a manual review before merging - giving us confidence that no visual regressions slip into production.
3.5. 🔧 Developer Experience
Any technological change or new initiative inevitably meets some friction - people need time to learn, adapt, and develop new habits.
To make this process smoother, more reliable, and to improve the overall maintainability of the system, we introduced several developer experience (DX) practices.
First, we created a simple scaffolding script to generate the boilerplate for new UI components - React file, styles file, tests, stories, and the barrel export.
This ensured a consistent folder structure and prevented developers from accidentally skipping any required files.
```
/Button
├── Button.tsx
├── Button.module.css
├── Button.test.tsx
├── Button.stories.tsx
└── index.ts
```
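A minimal sketch of such a scaffolding script (following the layout above; the function name and file contents are ours for illustration, and actual disk writes are left out):

```typescript
// Sketch: compute the canonical file set for a new UI component.
// A real script would write these entries to disk (e.g. via fs.writeFileSync).
function scaffoldFiles(name: string): Record<string, string> {
  return {
    [`${name}/${name}.tsx`]: `export function ${name}() {\n  return null;\n}\n`,
    [`${name}/${name}.module.css`]: `.root {\n}\n`,
    [`${name}/${name}.test.tsx`]: `// tests for ${name}\n`,
    [`${name}/${name}.stories.tsx`]: `// stories for ${name}\n`,
    [`${name}/index.ts`]: `export * from './${name}';\n`,
  };
}
```

Deriving every filename from a single `name` argument is what guarantees the folder structure never drifts between components.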
Second, we built a flow to export design tokens from Figma and convert them into CSS variables.
There are plenty of Figma plugins for exporting variables (for example, Export/Import Variables) into a JSON file.
From there, a simple script can generate themed CSS files for both primitive and semantic tokens.
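A sketch of that conversion step (the JSON shape depends on the plugin you use; this assumes a flat name-to-value map, while real exports may be nested and need flattening first):

```typescript
// Sketch: turn exported Figma variables into a CSS custom-property block.
// Assumes the plugin produced a flat map like { "color/gray/95": "#f2f2f4" }.
function tokensToCss(tokens: Record<string, string>): string {
  const lines = Object.entries(tokens).map(
    ([name, value]) => `  --${name.replace(/\//g, '-')}: ${value};`,
  );
  return [':root {', ...lines, '}'].join('\n');
}
```

Running this in CI (rather than by hand) keeps the generated CSS from silently drifting away from the Figma variables.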
To boost IDE productivity, we used the CSS Var IntelliSense plugin for VS Code-based IDEs.
It improves autocomplete for CSS variables and allows defining a custom source file, preventing suggestions from unrelated local component tokens.
However, since Figma variables can be renamed, removed, or overwritten (by accident or intentionally), we wanted to make sure no invalid tokens could slip into the codebase.
To address that, we created a custom ESLint rule that allows only variables either declared locally or listed in the auto-generated source file. This provided a basic but effective safety layer.
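The core check behind such a rule can be sketched as a standalone function (the real implementation lives in an ESLint plugin and works on the AST; this only illustrates the validation logic):

```typescript
// Sketch: every var(--x) reference must be either declared in the same
// file or present in the auto-generated token list.
function findUnknownTokens(css: string, generated: Set<string>): string[] {
  // Tokens declared locally, e.g. "--local: 1px;"
  const declared = new Set(
    [...css.matchAll(/(--[\w-]+)\s*:/g)].map((m) => m[1]),
  );
  // Tokens referenced via var(...)
  const used = [...css.matchAll(/var\(\s*(--[\w-]+)/g)].map((m) => m[1]);
  return used.filter((t) => !declared.has(t) && !generated.has(t));
}
```

Anything this returns is a token that exists nowhere - exactly the kind of typo or stale Figma rename we wanted the linter to catch.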
Sometimes you may also need to use a CSS variable inside TypeScript code - for example, setting the gap property in a Grid component.
To ensure both type safety and token consistency, we extended the generation script to produce a tokens.ts file that exports an object of all available variables, e.g.:
```typescript
export const tokens = {
  colorText: 'var(--color-text)',
  colorBackground: 'var(--color-background)',
};
```
Developers can then import these tokens directly into components, while the linter and TypeScript will catch any non-existent references during CI.
Finally, we prepared thorough documentation, explaining how to use the design system, how to contribute, and how to migrate from legacy components to the new ones.
Storybook's .mdx syntax worked perfectly for that, allowing us to deploy the docs via Chromatic - keeping everything in a single source of truth.
The migration guide itself was generated using Cursor, and we plan to use the same AI tool to automate the migration of old components to the new system.
4. Outcome and lessons learned
These decisions weren't just about tools - they defined how we work. We moved faster, shipped safer, and established a technical culture around clarity and ownership. More importantly, the process helped us align engineering and design thinking - something that shaped how we approach UI development as a team.
However, the adoption of the design system was not entirely frictionless - it raised many organizational and technical challenges, which I'll dive deeper into in the next article.
Next: In Part 3, I'll share how we rolled out the design system across multiple teams - how we handled adoption challenges, established governance, measured success, and made the system a natural part of our daily development workflow.