<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: echo-ae</title>
    <description>The latest articles on DEV Community by echo-ae (@echoae).</description>
    <link>https://dev.to/echoae</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3906714%2Ff1359d6e-385d-44ca-a029-af94ca911a3b.png</url>
      <title>DEV Community: echo-ae</title>
      <link>https://dev.to/echoae</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/echoae"/>
    <language>en</language>
    <item>
      <title>Why AI coding agents struggle with Figma (and what actually worked)</title>
      <dc:creator>echo-ae</dc:creator>
      <pubDate>Thu, 30 Apr 2026 21:21:59 +0000</pubDate>
      <link>https://dev.to/echoae/why-ai-coding-agents-struggle-with-figma-and-what-actually-worked-3opi</link>
      <guid>https://dev.to/echoae/why-ai-coding-agents-struggle-with-figma-and-what-actually-worked-3opi</guid>
      <description>&lt;p&gt;AI coding agents have become surprisingly capable at generating frontend code. In isolated tasks they often perform well, but once you try to plug them into a real design workflow — especially one based on Figma — the results tend to degrade.&lt;/p&gt;

&lt;p&gt;What I kept running into wasn’t really a limitation of the models themselves. It was the way design data is passed into them.&lt;/p&gt;

&lt;p&gt;A common instinct is to give the agent as much context as possible: the entire Figma file, or at least a large portion of it. On paper that sounds reasonable. In practice, it makes things worse.&lt;/p&gt;

&lt;p&gt;Figma files contain a lot more than just the structure of the UI. They include deeply nested trees, inconsistent naming, layout artifacts, and a significant amount of visual metadata that isn’t particularly meaningful for code generation. What looks like a clean design to a human turns into a fairly noisy representation when you inspect the underlying data.&lt;/p&gt;
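
&lt;p&gt;To make that concrete, here is a heavily abridged sketch of what a single node in a Figma file payload can look like. The field names below follow the public Figma REST API, and real nodes carry far more than this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Abridged sketch of one node from a Figma file payload.
// Field names follow the public Figma REST API; real nodes have many more.
interface RawFigmaNode {
  id: string;                 // e.g. "123:456"
  name: string;               // designer-chosen, often inconsistent
  type: string;               // "FRAME", "GROUP", "TEXT", "VECTOR", ...
  characters?: string;        // text content, TEXT nodes only
  absoluteBoundingBox?: { x: number; y: number; width: number; height: number };
  constraints?: { vertical: string; horizontal: string };
  blendMode?: string;         // visual metadata, rarely useful for codegen
  fills?: unknown[];          // paints, gradients, image references
  effects?: unknown[];        // shadows, blurs
  children?: RawFigmaNode[];  // deeply nested in real files
}
&lt;/code&gt;&lt;/pre&gt;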

&lt;p&gt;Once that data is fed into a model, the signal-to-noise ratio drops quickly. The agent starts making weaker assumptions about hierarchy and intent, and the output becomes less predictable. You get code that is technically plausible, but often needs a lot of correction.&lt;/p&gt;

&lt;p&gt;I also tried going through the usual paths — using Dev Mode or APIs to extract design information. While that helps with access, it doesn’t really solve the core issue. You still end up working with raw data that wasn’t designed for machine interpretation, and you introduce additional dependencies along the way.&lt;/p&gt;

&lt;p&gt;What ended up working better for me was not adding more data, but reducing and reshaping it.&lt;/p&gt;

&lt;p&gt;Instead of treating the design file as something to pass through directly, I started thinking of it as something that needs to be interpreted first. The idea was to extract only the part of the design that actually matters for the task at hand, clean it up, and present it in a way that is easier for an agent to reason about.&lt;/p&gt;

&lt;p&gt;Here’s what the workflow looks like in practice:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Select → export → normalize → agent generates UI&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;A local pipeline&lt;/h2&gt;

&lt;p&gt;That led me to build a small local pipeline.&lt;/p&gt;

&lt;p&gt;On the Figma side, I export a selected fragment of the design — usually a frame or a component — without relying on Dev Mode. That data is then processed locally, where the structure is simplified and normalized. The goal isn’t to perfectly preserve every detail, but to make the hierarchy and relationships between elements clearer.&lt;/p&gt;
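
&lt;p&gt;As a rough sketch of what “simplified and normalized” can mean here (the target shape and the function are my own illustration, not the tool’s actual code), reusing the RawFigmaNode shape from above:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Hypothetical target shape: hierarchy and labels only, no visual metadata.
interface UiNode {
  role: string;       // coarse role derived from the node type: "frame", "text", ...
  name: string;       // cleaned-up layer name
  text?: string;      // text content, where present
  children: UiNode[];
}

// Illustrative normalizer: keep structure and relationships, drop the rest.
function normalize(node: RawFigmaNode): UiNode {
  return {
    role: node.type.toLowerCase(),
    name: node.name.trim().replace(/\s+/g, " "),
    text: node.characters,
    children: (node.children ?? [])
      .filter((child) =&gt; child.type !== "VECTOR") // skip decorative artifacts
      .map(normalize),
  };
}
&lt;/code&gt;&lt;/pre&gt;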

&lt;p&gt;The processed result is exposed through a local MCP server, which allows an AI agent to query it as structured context rather than consume raw JSON. At that point, the model is no longer trying to interpret Figma directly. It’s working with something that is closer to a logical representation of a UI.&lt;/p&gt;
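
&lt;p&gt;To give a sense of the MCP side, here is a minimal sketch built on the official TypeScript MCP SDK. The tool name, the fragmentId parameter, and the loadNormalizedTree helper are placeholders for illustration, not the real server’s interface:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Placeholder loader: in the real pipeline this reads the locally
// processed design data from disk.
async function loadNormalizedTree(fragmentId: string) {
  return { role: "frame", name: fragmentId, children: [] };
}

const server = new McpServer({ name: "figma-context", version: "0.1.0" });

// Expose the normalized tree as a queryable tool instead of raw JSON.
server.tool(
  "get_ui_tree",
  "Return the normalized UI structure for an exported Figma fragment",
  { fragmentId: z.string() },
  async ({ fragmentId }) =&gt; {
    const tree = await loadNormalizedTree(fragmentId);
    return {
      content: [{ type: "text" as const, text: JSON.stringify(tree, null, 2) }],
    };
  }
);

await server.connect(new StdioServerTransport());
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From the agent’s point of view this is just another tool call; all of the cleanup has already happened before the model sees anything.&lt;/p&gt;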

&lt;h2&gt;Why this helps&lt;/h2&gt;

&lt;p&gt;An interesting side effect of this approach is that it naturally limits the scope of what the agent sees: not by manually trimming input, but by providing only a clean, relevant slice of the design. That alone makes a noticeable difference.&lt;/p&gt;

&lt;p&gt;In practice, the outputs became more consistent. The model made fewer strange layout decisions, and the generated code required less manual fixing. The overall workflow felt less like “prompting and hoping” and more like something you can rely on.&lt;/p&gt;

&lt;h2&gt;Local-first advantages&lt;/h2&gt;

&lt;p&gt;Keeping everything local turned out to be another advantage. There’s no dependency on external APIs, no usage-based cost, and no requirement to enable specific Figma features. It also makes the setup easier to control and reason about.&lt;/p&gt;

&lt;h2&gt;Open source&lt;/h2&gt;

&lt;p&gt;I ended up open-sourcing the tool I built around this approach:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/echo-ae/local_figma_port" rel="noopener noreferrer"&gt;https://github.com/echo-ae/local_figma_port&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The implementation itself is still evolving, but the underlying idea has been surprisingly stable: giving the model less, but better-structured context leads to better results.&lt;/p&gt;

&lt;h2&gt;Discussion&lt;/h2&gt;

&lt;p&gt;I’m curious how others are approaching this problem — especially when it comes to bridging design tools and AI-driven development.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>frontend</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
