From Figma to Tailwind Without Writing a Single px — A Real Automation Case in Production

Giovanni Lima

"The design is ready. Now just grab the values and drop them into the code."
Every front-end developer has heard that line. And every front-end developer knows it lies.

Behind that "just", there's a silent workload: opening Figma, inspecting frame by frame, transcribing padding values, font sizes, background colors, spacing — and repeating that for every state variation of the component. Mobile open. Mobile closed. Desktop open. Desktop closed. And hoping you don't mistype a single px along the way.

That's exactly the work we automated on a themed banner in production. And the process taught me more about the limits of Figma — and what AI actually means in this context — than any documentation ever could.

The Tool That Made It All Possible

Before getting into the architecture, it's worth setting the context that gave rise to all of this.

The company made a direct bet on AI adoption across its teams — not as a trend to be observed from a distance, but as a concrete investment in accelerating, streamlining, and improving the work of those on the front lines. That encouragement has been essential, and it's being put to very good use.
It was within that context that Claude Code proved to be an excellent fit.

Claude Code is an AI agent developed by Anthropic that operates directly in the terminal, with access to the file system, the project's codebase, and external tools via MCP. It's not an assistant that answers questions — it acts. It reads files, understands codebase context, makes decisions, and executes changes.

It was inside Claude Code that the skill was created, the flow was designed, and the mutations to the project files happened. Without the ability to act directly on code — and not just suggest — the automation I'm about to describe simply wouldn't exist in the form it does.

The Architecture That Made Automation Possible

Before talking about what the AI did, I need to talk about what was already in place.

The themed banner follows a simple and deliberate architecture: each theme is identified by a mnemonic — a unique key received via API that determines which banner should be displayed. From that mnemonic, the system loads the corresponding .tsx file, which holds the full component structure. That component, in turn, consumes a separate .ts file — responsible exclusively for Tailwind styles, organized by state variation.

And here it's worth pausing to understand what that styles file actually controls — because it goes well beyond color and typography.

The banner is a living component. It exists in four distinct states: mobile closed, mobile open, desktop closed, and desktop open. In each of those states, practically everything can change. Font sizes and weights vary. The logo scales accordingly. But what makes the system genuinely complex is that the spacing between every element also recalculates — the gap between logo and title, the internal container padding, the vertical distribution of elements when the banner is expanded.

There's no fixed layout that simply "stretches" from mobile to desktop. Each state has its own spatial distribution logic, explicitly defined in the .ts file. One object for mobile closed, another for mobile open, another for desktop — each one describing not just appearance, but composition.
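For illustration, a styles file in this spirit might look like the sketch below. Every name and value is invented, not the project's actual code; the point is the shape: each state owns its own composition, with mobile values as the base and desktop values behind the md: prefix (a convention covered later).

```typescript
// banner-theme.styles.ts, a hypothetical sketch of the shape such a file
// can take. The .tsx component selected via the mnemonic consumes this.
export const bannerStyles = {
  closed: {
    container: "flex items-center gap-2 px-4 py-2 md:gap-4 md:px-8 md:py-3",
    logo: "h-8 w-8 md:h-10 md:w-10",
    title: "text-sm font-semibold md:text-base",
  },
  open: {
    container: "flex flex-col gap-4 px-4 py-6 md:gap-6 md:px-8 md:py-10",
    logo: "h-12 w-12 md:h-16 md:w-16",
    title: "text-lg font-bold md:text-xl",
  },
} as const;
```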

That separation between structure (.tsx) and style (.ts) wasn't designed with automation in mind. It was designed for readability and maintainability. But it created something valuable: a clear target. An exact place where any design data needed to land.
Without that structure, the automation would have had no path to follow — making the whole task far more costly.

How the Figma Integration Works in Practice

To connect Figma to this flow, a dedicated skill was developed — a structured instruction that the AI agent recognizes and executes as a coherent sequence of steps.
The command is simple and straightforward:

/figma-to-tailwind-banner {mnemonic} {figma_link}

With those two parameters, the agent already knows everything it needs: which .ts file to update (via mnemonic) and where to pull the design data from (via the Figma link). The banner images are already in the project — the skill doesn't need to deal with assets, only styles.
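As a hypothetical example (both arguments are invented):

/figma-to-tailwind-banner summer-sale https://www.figma.com/design/<file-id>/themed-banner?node-id=<node-id>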

This approach turns what would otherwise be an ad hoc process into something repeatable and predictable. Any new themed banner that follows the same architecture can be styled with the same command, with no adaptation required. The mnemonic changes. The Figma link changes. The process stays the same.

What Figma Delivers — and What It Hides

The Figma integration happens via MCP (Model Context Protocol), which allows the AI agent to connect directly to Figma as if it were a collaborator — not just consuming an API, but navigating the design structure contextually.

The first step is always get_metadata: the agent discovers which frames exist, how they're named, and what the hierarchy looks like. From there, get_design_context extracts the styles for each variation.

What Figma delivers well: frame structure and layer hierarchy, spacing, paddings and margins, font sizes, weights, line-heights, and the widths and heights of individual elements. For a component that varies as much as this banner, that data is the heart of the extraction — every gap, every internal padding, every logo size per state is there, accessible and structured.
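As a rough illustration, the extracted data the agent works with per variation can be pictured like this. The shape below is invented for this article; the real MCP payloads are richer and structured differently.

```typescript
// Invented shape for illustration only; not the actual MCP response format.
interface ExtractedVariation {
  frameName: string;                                   // e.g. "banner/mobile/open"
  padding: { top: number; right: number; bottom: number; left: number };
  itemSpacing: number;                                 // auto-layout gap between children
  typography: { fontSize: number; fontWeight: number; lineHeight: number };
  logo: { width: number; height: number };
}

const mobileOpen: ExtractedVariation = {
  frameName: "banner/mobile/open",
  padding: { top: 24, right: 16, bottom: 24, left: 16 },
  itemSpacing: 16,
  typography: { fontSize: 18, fontWeight: 700, lineHeight: 24 },
  logo: { width: 48, height: 48 },
};
```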

What Figma hides:
When a frame's background is an image or a vector — as was the case with this themed banner, which uses an illustration as its backdrop — Figma doesn't expose the color. There's no #hexcode value available. The design is there visually, but programmatically, there's no data.

The automation had to infer. Based on the available context — the component state, the banner theme, the colors already known to the Tailwind config — the agent made a fallback decision. It didn't invent a random value, but it also didn't stall: it made a reasoned, explicit choice that the developer would review afterward.
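In spirit, the fallback behaves like the sketch below. The helper, the token map, and the theme name are all invented for illustration; only the decision order reflects what was described above.

```typescript
// Hypothetical sketch of the fallback decision. None of these names come
// from the project; they only illustrate the reasoning.
const knownThemeTokens: Record<string, string> = {
  "summer-sale": "bg-brand-summer", // token assumed to exist in tailwind.config.ts
};

function resolveBackgroundClass(theme: string, figmaFill: string | null): string {
  // Happy path: Figma exposed a solid fill we can use directly.
  if (figmaFill) return `bg-[${figmaFill}]`;

  // Fallback: the backdrop is an image or vector, so no color is exposed.
  // Prefer a token already known for this theme, flag the choice for human
  // review, and never stall the run.
  const fallback = knownThemeTokens[theme] ?? "bg-neutral-900";
  console.warn(`[figma-to-tailwind] no fill exposed for "${theme}", using ${fallback}; review visually`);
  return fallback;
}
```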

Another limitation: the states exist in Figma, but the logic of how they combine lives in the code. The design shows the open and closed banner separately. But the decision that "mobile closed is the base and desktop uses the md: prefix" is the project's convention, not Figma's. The automation had to pick that up by reading the React code, not the design.

The Flow Under the Hood

Simplified, the automation runs five steps in sequence:

  1. Discovery — get_metadata maps the available frames and identifies the four relevant variations.

  2. Extraction — get_design_context pulls the styles for each variation: spacing between elements, typography, logo sizes, and container dimensions — everything that changes between states.

  3. Validation — before generating any class, the agent cross-references the values against the project's tailwind.config.ts. Classes that don't exist in the config can be purged in production — so known tokens take priority over raw values.

  4. Mobile-first merge — the four variations are merged into a single object: mobile as the base, desktop with the md: prefix. The agent applies the project's convention, not a generic one (see the sketch after this list).

  5. Direct mutation — the .ts file is updated. Only the class strings are replaced. No component logic is touched.

Not a single visual value was written manually. No px, no color, no font-size, no gap.
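To make steps 3 and 4 concrete, here's a minimal sketch of the validate-and-merge logic. The token set, the spacing math, and the function names are all invented for illustration; they are not the project's code.

```typescript
// Hypothetical sketch of steps 3 and 4. The token set and merge rules are
// invented for illustration, not lifted from the project.
const configTokens = new Set(["gap-2", "gap-4", "px-4", "px-8", "py-2", "py-3"]);

// Step 3: prefer a token that exists in tailwind.config.ts; otherwise fall
// back to an arbitrary value, which Tailwind's scanner still picks up.
function toClass(prefix: string, px: number): string {
  const token = `${prefix}-${px / 4}`; // assumes Tailwind's default 4px spacing scale
  return configTokens.has(token) ? token : `${prefix}-[${px}px]`;
}

// Step 4: mobile-first merge. Mobile classes form the base; desktop classes
// get the md: prefix. A real implementation would also drop md: classes
// that are identical to their mobile counterpart.
function mergeMobileFirst(mobile: string[], desktop: string[]): string {
  return [...mobile, ...desktop.map((cls) => `md:${cls}`)].join(" ");
}

// Example: an 8px gap on mobile and 16px on desktop become "gap-2 md:gap-4".
const containerGap = mergeMobileFirst([toClass("gap", 8)], [toClass("gap", 16)]);
console.log(containerGap); // "gap-2 md:gap-4"
```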

What Still Requires a Developer's Eye

The automation delivers a starting point for review, not a final deliverable. A few things always require human attention:

The color fallback. When Figma doesn't provide the data, the inference needs to be validated visually. The agent flags when this happens, but the final call belongs to the developer.

The closed banner height. This property lives in an inline style on the React component — not in Tailwind. The automation knows the height exists, but can't generate it via classes. It's a conscious architectural constraint, not a flaw.
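For context, the constraint looks roughly like the sketch below; the component name and the height value are invented.

```tsx
// Hypothetical sketch; component name and height value are invented.
import type { CSSProperties, ReactNode } from "react";

const CLOSED_HEIGHT_PX = 56; // assumed value, lives outside the Tailwind styles file

export function BannerShell({ isOpen, children }: { isOpen: boolean; children: ReactNode }) {
  // The closed-state height is an inline style, not a Tailwind class,
  // so the automation flags it instead of trying to generate it.
  const style: CSSProperties = isOpen ? {} : { height: CLOSED_HEIGHT_PX };
  return <section style={style}>{children}</section>;
}
```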

Decisions that look visual but are architectural. The logo's positioning within the banner, for example, involves choices that depend on product context — not just what Figma shows. The spatial distribution between elements can be correct in absolute values and still need adjustment once real content is inserted. In those cases, the agent flags the question and waits for a human decision.

Automation as an Accelerator, Not a Replacement

What this experience made clear is that the AI worked well here because the environment was ready to receive it. The clear separation between .tsx and .ts, the mnemonic as a unique identifier, the explicit organization of styles by state, the consistent naming of frames in Figma — all of that created the conditions for the agent to have enough context to act with precision.

And the real complexity of the component — four states, independent spacing, logo and typography that adapt together — is exactly what makes the automation most valuable. The more manual variations a process requires, the higher the risk of inconsistency. And the greater the gain from eliminating it.

The automation eliminated the mechanical work. The work that requires judgment — visual review, edge case validation, architectural decisions — remains with the developer.

And perhaps that's exactly how it should be.

This case is part of an ongoing effort to integrate AI tools into the real front-end development workflow — not as a replacement for the process, but as part of it.
