Front-end teams now work with more moving parts than ever. Modern applications depend on design systems, component libraries, accessibility constraints, responsive behaviour, routing, state management, and increasingly detailed UI patterns. Every feature requires visual precision and code that aligns with the project’s existing structure.
Design teams mirror this complexity inside Figma. They create layouts using auto-layout, tokens, variants, constraints, and nested components. These files describe what the interface should look like with a high degree of structure, even though they do not describe behaviour or code semantics.
Despite this structure on both sides, the path from Figma to a working application remains primarily manual. Engineers inspect spacing, measure alignment, recreate components, wire interactions, and rewrite everything so it fits a project’s conventions. The workflow is dependable, but slow and repetitive.
This gap has pushed teams to look for ways to automate parts of the handoff. Modern AI models can analyse codebases, understand project patterns, and reason about UI logic.
At the same time, design tools like Figma expose structured information such as hierarchy, constraints, and layout rules. Together, these capabilities create an opportunity to automate more than just static HTML scaffolds.
To explore how far current tools have progressed, this article evaluates three approaches on the same design and the same stack:
- Figma MCP: extracts structured visual information directly from the Figma file and provides that data to a coding agent for generation.
- Codex CLI: generates UI code by reasoning within an existing repository, understanding files, dependencies, and established conventions.
- Kombai: interprets both the Figma design and the surrounding codebase together, producing context-aware components that align with real project structure.
The goal is not to generate a screenshot-accurate mockup, but to understand whether these tools can produce code that is reusable, consistent with the framework, and ready for production.
Why Turning Designs into Code Remains a Hard Problem
Figma describes the visual structure of an interface, while a codebase defines how that interface should function. Both represent the same UI, but they capture very different types of information. Figma focuses on layout and appearance, whereas production code must express behaviour, data flow, interaction, and the architectural patterns of the project.
This gap becomes clearer when looking at the layout. Auto Layout provides order and alignment inside Figma, but it does not map directly to Flexbox, CSS Grid, or responsive breakpoints. A layout that appears precise inside Figma often behaves differently once rendered in a browser. Many generators try to preserve the artboard by hard-coding fixed values. The output looks correct at first, but it fails when the text grows longer, the content becomes dynamic, or the viewport changes.
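To make the mismatch concrete, here is a minimal TypeScript sketch of the kind of translation a generator has to perform. The property names (`direction`, `itemSpacing`, `padding`) are simplified stand-ins for Figma's actual Auto Layout fields, not a real API:

```typescript
// Illustrative only: a simplified mapper from Figma-style Auto Layout
// properties to CSS flexbox declarations. Field names are hypothetical
// approximations of what a generator might receive.
interface AutoLayout {
  direction: "HORIZONTAL" | "VERTICAL";
  itemSpacing: number; // gap between children, in px
  padding: number;     // uniform padding, in px
}

function autoLayoutToCss(layout: AutoLayout): Record<string, string> {
  return {
    display: "flex",
    flexDirection: layout.direction === "HORIZONTAL" ? "row" : "column",
    gap: `${layout.itemSpacing}px`,
    padding: `${layout.padding}px`,
  };
}

// A horizontal Auto Layout frame maps cleanly to a flex row...
const css = autoLayoutToCss({ direction: "HORIZONTAL", itemSpacing: 16, padding: 24 });
console.log(css.flexDirection); // "row"
// ...but Figma encodes no breakpoints, so the responsive behaviour
// (e.g. collapsing to a column on narrow viewports) must still be decided
// by the generator or written by hand.
```

The one-to-one part of the mapping is easy; what the design file never supplies is the conditional part, which is exactly where hard-coded output breaks down.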
The difficulties increase when the generated UI meets a real codebase. Most projects already contain reusable components, naming conventions, design tokens, routing structures, and state management patterns. Generic generators cannot infer these rules. They often recreate components the team already has, producing code that runs but does not align with the existing system.
Behaviour adds another layer of complexity. Figma shows static visual states but does not encode their logic. Icons may imply dropdowns, date pickers, or navigation, yet none of this behaviour is present in the design file.
These gaps explain why most tools struggle to produce code that is ready for real applications.
Benchmark Design and Evaluation Environment
To compare the three tools fairly, a single benchmark design was used across all evaluations. The homepage of a publicly available Figma template served as the test case. It includes navigation, a hero section, property cards, search and filtering elements, and a detailed footer, offering enough layout variety and visual complexity to assess how each tool handles a realistic interface rather than a minimal example.
Figma Access: This evaluation used the Desktop MCP server, which requires the Figma Desktop application with Dev Mode enabled. The Desktop server provides full access to the design structure and metadata. (Figma also offers a Remote server that does not require Dev Mode, but it was not used here.)
Local Setup: Codex CLI and Kombai both run locally. Their setup processes differ, and the required steps are explained separately for each tool.
Evaluation Focus: The goal is to observe whether each system can generate code that:
- Uses a logical component structure
- Separates data from UI
- Handles assets cleanly
- Produces working interactions
- Matches the visual design accurately
Running all three tools against the same design ensures that differences in output reflect the tools’ capabilities rather than variations in input.
Figma MCP
Figma MCP brings an interesting approach to turning designs into code. Instead of exporting images or relying on visual screenshots, MCP exposes the actual structure of a Figma file: frames, auto-layout rules, typography, spacing, and component metadata. An external agent can request this information and generate code based on what it learns from the design.
In theory, this should produce code that is more accurate than a pixel-based interpretation. Since the agent receives hierarchy and layout data directly from Figma, it has the information needed to understand how elements are positioned and how they behave.
Setup
- For this evaluation, Figma MCP was accessed through the local server exposed by the Figma Desktop app with Dev Mode enabled. This environment gives external tools full access to the file’s structural data, including hierarchy, auto-layout behaviour, constraints, spacing, typography, and component metadata.
- Since the server streams structured information rather than screenshots, the connected agent receives an accurate representation of how the interface is organised inside Figma.
- Cursor was used as the MCP-compatible client during this test, but any IDE or agent that supports MCP could be used in the same way. The local server URL was added to the client’s configuration so it could request design metadata directly from the active file.
- Once connected, selecting a frame or sharing its link allowed the agent to fetch its underlying properties and generate React code based on the actual layout and component structure defined in Figma, without requiring exports or manual inspection.
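For reference, connecting an MCP client to the local server typically amounts to a single configuration entry. The snippet below is an illustrative Cursor-style `mcp.json`; the exact port and path depend on the Figma Desktop version, so treat the URL as an assumption and check the value shown in Figma's Dev Mode settings:

```json
{
  "mcpServers": {
    "figma": {
      "url": "http://127.0.0.1:3845/mcp"
    }
  }
}
```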
Outcome
Here is a screenshot of the output (see the full site at https://estatein-website-bice.vercel.app/).
The MCP-based generation produced a functional React page, and the overall page structure closely resembled the original homepage hierarchy. However, many sections were rebuilt in a simplified form. Icons were replaced with solid placeholder shapes, background images were dropped, and card components lost their visual details, such as shadows, rounded corners, and overlays.
Sections that relied on nested Auto Layouts in Figma were reconstructed using broad container divs with fixed spacing, which resulted in uneven gaps and typography scales that did not match the Figma values.
From a code perspective, MCP generated a single large component for most of the page instead of splitting it into smaller reusable units. The output did not separate data from UI, and mock values were embedded directly inside JSX. Asset handling was incomplete because only a small subset of vector information was extracted, and no images or SVGs appeared in the repository.
The page was responsive, but the responsiveness came from generic flex layouts rather than breakpoints modeled after the design.
Interactive elements such as buttons and inputs were present as basic HTML, but no behaviour was inferred from the design. Most importantly, the visual accuracy was low: spacing, colors, and component proportions differed significantly from the Figma file, so the result served more as a rough scaffold than a close match to the original design.
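As a point of contrast, a minimal sketch of the data/UI separation the MCP output lacked might look like the following. All names here (`Property`, `PROPERTIES`, `formatPrice`) are hypothetical illustrations, not taken from any generated code:

```typescript
// Illustrative sketch: mock data kept in a typed module instead of being
// embedded inside JSX, so the UI layer can simply map over it.
interface Property {
  title: string;
  price: number;
  imageUrl: string;
}

const PROPERTIES: Property[] = [
  { title: "Seaside Villa", price: 550000, imageUrl: "/assets/villa.jpg" },
  { title: "Metropolitan Haven", price: 320000, imageUrl: "/assets/haven.jpg" },
];

// A card component would receive a Property via props rather than
// hard-coding each value; helpers like this keep formatting out of markup.
function formatPrice(p: Property): string {
  return `$${p.price.toLocaleString("en-US")}`;
}

console.log(PROPERTIES.map(formatPrice));
```

With this split, swapping mock data for an API response touches one module instead of every component, which is the property the monolithic MCP output gave up.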
Codex CLI
Codex CLI works as a local AI coding agent. Instead of relying on a design-specific API like Figma MCP, Codex operates directly inside the developer’s workspace. It can create new files, install dependencies, modify existing code, run shell commands, and build projects through natural language instructions.
The CLI supports common frontend stacks such as React and Next.js, which makes it useful for quickly scaffolding projects from a visual reference or developer prompt.
Setup
- Codex CLI was evaluated as a local AI coding agent that works directly inside the developer’s workspace. It requires an active OpenAI plan (Plus, Pro, Business, Edu, or Enterprise) and operates by analyzing the project folder, understanding existing files, and generating new code through natural language instructions.
- Once installed and initialized in a fresh directory, Codex could scaffold a full React or Next.js project, install packages, and run commands without needing any additional configuration.
- Because Codex cannot read Figma files or query design metadata, the homepage design used in this comparison was exported as a PNG and added to the project. The agent was instructed to recreate the UI based on the screenshot and generate reusable components.
- Codex produced the initial layout, set up the project structure, and launched the local development server. Any refinements, such as adjusting spacing, splitting components, or revising styles, were done through follow-up prompts, which Codex translated into file edits.
Outcome
Here is a screenshot of the output (see the full site at https://codex-cli-1fqs.vercel.app/).
Codex reproduced the overall page layout from the screenshot and generated a functional Next.js project. The structure of the hero section, navigation, cards, and footer followed the same broad ordering as the Figma file, and the page behaved responsively because Codex relied heavily on Tailwind’s default flex and grid utilities.
However, the generated UI diverged noticeably from the real design. Spacing between elements was uneven, column widths did not follow the grid proportions from Figma, and the typography scale drifted from the intended hierarchy, especially in headings and card titles.
Because Codex works from an exported PNG, none of the original assets appeared in the output. All icons, property images, background artwork, and decorative UI elements were omitted, leading to multiple sections looking empty or structurally incomplete. For example, the property cards contained text but lacked images; the navigation relied entirely on plain text; and the hero section was rendered without its main visual anchor.
The underlying code showed partial componentisation. Some sections were placed in their own files, while others remained inside the page component, and the layout logic was tightly coupled to JSX rather than abstracted into reusable components. Data and UI were mixed within the same files, and no interaction logic was generated because Codex cannot infer behaviour from a static screenshot.
The final output resembled the design only at a high level. It served as a usable draft for layout scaffolding, but it would require significant manual work to restore assets, correct grids, refine typography, and rebuild components to align closely with the original Figma design.
Kombai
Kombai is built specifically for frontend development, and Figma-to-code is one part of what it supports. It is designed to generate production-ready UI across 30+ modern frontend libraries, including React, TypeScript, Next.js, Vue, Svelte, Mantine, MUI, and more.
When reading a Figma design, Kombai does not treat it as a screenshot or a raw JSON export. Instead, it interprets the layout, structure, and UI patterns the way a frontend engineer would. It identifies elements such as grids, cards, navigation bars, inputs, buttons, and form layouts and converts them into clean, reusable components that fit the conventions of the selected framework.
A key difference from MCP and Codex is Kombai’s understanding of project context. It can generate code for a new repository or integrate directly into an existing one, reusing components, hooks, styles, and design tokens already present. This allows the generated output to align with the project’s architecture, making it suitable for real production use rather than serving as a draft that requires extensive cleanup.
Setup
- Kombai had the simplest setup of all three tools. There were no servers to configure and no API tokens to manage. The extension installs from the IDE’s marketplace and appears as a panel in supported editors such as VS Code, Cursor, Windsurf, and Trae.
- After signing in, the extension prompts the user to connect their Figma account through the standard Figma login flow. Once connected, Kombai can read design files directly without requiring exported PNGs or manually prepared assets.
- To bring the homepage design into Kombai, the Figma frame link was copied and added inside the extension. After the selection was confirmed, Kombai analysed the design and completed a planning stage to understand the layout, components, and UI patterns. Code generation typically takes a few minutes.
- When it finishes, Kombai writes React components, pages, assets, and routing files directly into the repository. The project runs immediately inside the IDE, allowing refinement or extension without any additional setup.
Outcome
Here is a screenshot of the output (see the full site at https://kombai-agent-roan.vercel.app/).
Kombai produced a React project that closely followed the original Figma design. The layout matched the intended grid, spacing rules, and visual hierarchy, and the project ran immediately without missing dependencies or broken imports. Instead of generating a large monolithic page, Kombai split the UI into reusable React components with clear props and separated data structures.
All icons, images, and background graphics were extracted directly from the Figma file and placed in the correct asset folders. The generated components referenced these assets cleanly through typed imports.
The visual fidelity was high. Typography scales, color values, and spacing patterns aligned with the original design rather than an approximated interpretation. Interactive elements such as buttons, navigation links, and cards behaved as functioning UI components. The output required only one correction during testing: a minor logo color mismatch.
Post-generation debugging:
- After code generation, the project was opened in Kombai Browser, which allows visual inspection and element-level refinement. The logo color issue was fixed by selecting the logo via the Reference Selection tool and prompting Kombai to apply the original Figma color.
- The system regenerated only the relevant part of the code within a few seconds, and the correction applied cleanly without manual edits. This level of targeted debugging made refinement faster and more controlled compared to re-prompting or editing files manually.
Evaluation of the tools
All three tools received the same Figma homepage and the same prompt: generate a working React version of the design with no manual help.
- Figma MCP generated a functional page but missed many visual details, resulting in a scaffold rather than an accurate translation of the design.
- Codex CLI recreated the general layout from the screenshot, but spacing, typography, and all major assets were off, making the output only loosely aligned with the original.
- Kombai produced the closest and most complete match. It accurately replicated layout, styling, components, and assets, and ran without fixes, with visual debugging available for targeted refinements.
Here is the comparison table:
Kombai was the only tool that reproduced the homepage almost exactly as seen in Figma. The layout, spacing, type scale, cards, navigation, and assets came through correctly, and the project ran immediately with no dependency issues or missing files.
The generated code followed a clean structure. Pages were split into reusable components, styling was organized, and assets were extracted instead of replaced with placeholders. Buttons, dropdowns and inputs worked as real interactive elements, not just static UI.
Post-generation refinement made a noticeable difference. After the first run, the project could be opened inside Kombai Browser, which shows a live preview of the generated UI. If something looked off, the element could be selected visually and Kombai regenerated that part of the code. In this test, the logo appeared white instead of dark. Selecting it in the browser and asking Kombai to apply the original Figma color fixed the issue in seconds.
Kombai also works inside existing codebases and reuses components already present in a repo. This made it the only tool that produced a high-fidelity result and integrated cleanly into real projects.
Conclusion
This comparison highlights that Figma MCP, Codex CLI, and Kombai each approach design-to-code generation differently. MCP provides reliable access to structured Figma metadata, and Codex performs well as a general-purpose coding assistant inside a project. Both can bootstrap a layout, but neither maintains enough visual or structural accuracy to move beyond an early scaffold.
Their limitations become clearer when applied to a real design. MCP preserves hierarchy but loses assets and key visual details. Codex reconstructs the layout from a screenshot but cannot recover typography, spacing, or images. In both cases, substantial manual work is required before the output resembles the intended design or matches a project’s coding patterns.
Kombai closes this gap by interpreting both the Figma file and the codebase. It generates reusable components, extracts assets correctly, and recreates the layout with high fidelity. For teams looking for production-ready design-to-code automation, it delivered the most complete result in this evaluation.