The Problem We All Know
"Make it like that site" β every frontend developer has heard this phrase. A client sends a link to a competitor's landing page. A PM points at an Amazon product card. A designer nods toward Dribbble. And then the guessing game begins.
Here's what we usually do:
Option 1: Screenshot → LLM
Drop an image into Claude or ChatGPT asking it to "recreate this." The result? The LLM guesses styles. Is that padding 16px or 24px? Is that blue #3b82f6 or #2563eb? Is the shadow 0 2px 8px or 0 4px 12px? Pure hallucination.
Option 2: DevTools → Copy Styles
Open inspector, right-click, "Copy styles." You get 200 properties including -webkit-tap-highlight-color and orphans: 2. Good luck extracting what actually matters.
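The "extract what matters" step can be automated. Here is a minimal sketch: a function that filters a full computed-styles dump down to a whitelist of design-relevant properties. The whitelist is an illustrative assumption; the real set depends on what you are recreating.

```javascript
// Hypothetical whitelist of properties that usually matter when
// recreating a design. Extend as needed for your use case.
const DESIGN_PROPS = [
  'padding', 'margin', 'border-radius', 'box-shadow',
  'background-color', 'color', 'font-family', 'font-size',
  'font-weight', 'line-height', 'display', 'gap',
];

// Keep only design-relevant entries from the ~200-property object
// you get from DevTools' "Copy styles".
function filterComputedStyles(allStyles) {
  return Object.fromEntries(
    Object.entries(allStyles).filter(([prop]) => DESIGN_PROPS.includes(prop))
  );
}
```

Run on a dump containing `orphans: 2` and `-webkit-tap-highlight-color`, only the properties you would actually hand to an LLM survive.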
Option 3: Figma Inspect
Great if you have the original Figma file. But Figma shows designer intent, not browser reality. That auto layout with 16px gap might render completely differently in CSS.
None of these give you what you actually need: the exact computed styles as the browser sees them.
The Design Loop
Here's a workflow that actually works:
┌───────────────────────────────────────────────────┐
│                                                   │
│   Reference UI ──► Snapshot ──► LLM ──► Code      │
│        ▲                                 │        │
│        │                                 ▼        │
│        └─────── Review in Browser ◄──────┘        │
│                                                   │
│                      REPEAT                       │
└───────────────────────────────────────────────────┘
Iteration 1: Capture the reference → LLM generates first version
Iteration 2: Capture your result + reference → "find the differences"
Iteration N: Refine until pixel-perfect
The key insight: instead of screenshots, we capture structured snapshots of computed styles. Tools like E2LLM can extract the actual CSS the browser computed: not what the developer wrote, not what Figma intended, but what the browser actually rendered. I covered the underlying schema in Runtime Snapshots #7, Inside SiFR: The Schema That Makes LLMs See Web UIs.
Why Structured Snapshots Beat Screenshots
| Screenshot | Structured Snapshot |
|---|---|
| LLM guesses: padding 16px? | Exact: `padding: 24px 32px` |
| "Looks kinda blue" | `#3b82f6` with `opacity: 0.9` |
| Can't see hover states | `actionType: "clickable"` |
| No hierarchy info | Full DOM tree with nesting |
| Font looks bold-ish | `font-weight: 600`, `font-family: Inter` |
When you give an LLM structured data instead of pixels, it stops guessing and starts implementing. This is the same principle behind why static code analysis falls short: runtime state matters.
Practical Example: Cloning a Product Card
Scenario: a client points at an Amazon product card and says, "I want exactly this."
Step 1: Capture the reference
Using the E2LLM browser extension, click on the element to get:
{
"tag": "div",
"computedStyles": {
"padding": "16px",
"borderRadius": "8px",
"boxShadow": "0 2px 8px rgba(0,0,0,0.1)",
"backgroundColor": "#ffffff"
},
"children": [
{
"tag": "img",
"computedStyles": {
"width": "100%",
"aspectRatio": "1",
"objectFit": "contain"
}
},
{
"tag": "h3",
"text": "Product Title",
"computedStyles": {
"fontSize": "14px",
"fontWeight": "400",
"lineHeight": "1.4",
"color": "#0f1111"
}
}
]
}
Step 2: Prompt the LLM
"Create a React component ProductCard with these exact styles. Use Tailwind where classes match exactly, inline styles for precise values that don't map to Tailwind."
Step 3: LLM generates code
function ProductCard({ image, title, price }) {
return (
<div
className="p-4 rounded-lg bg-white"
style={{ boxShadow: '0 2px 8px rgba(0,0,0,0.1)' }}
>
      <img
        src={image}
        alt={title}
        className="w-full aspect-square object-contain"
      />
<h3
className="text-sm font-normal leading-snug"
style={{ color: '#0f1111' }}
>
{title}
</h3>
</div>
);
}
Step 4: Capture your result
Open your component in the browser, capture it the same way.
Step 5: Diff prompt
"Compare these two snapshots. First is the reference, second is my implementation. List style differences and suggest fixes."
LLM responds:
Differences found:
- lineHeight: reference 1.4 vs yours 1.375 (leading-snug)
- Consider using style={{ lineHeight: '1.4' }} for exact match
Repeat until the diff is empty.
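You do not have to rely on the LLM alone for the diff step. Because both snapshots are structured JSON, a deterministic comparison is a short recursive function. This is a minimal sketch assuming the snapshot shape shown in Step 1 (`tag`, `computedStyles`, `children`):

```javascript
// Recursively compare two snapshots' computedStyles and collect
// property-level differences, keyed by a simple path of tag names.
function diffSnapshots(ref, impl, path = ref.tag || 'root') {
  const diffs = [];
  const refStyles = ref.computedStyles || {};
  const implStyles = impl.computedStyles || {};
  const props = new Set([...Object.keys(refStyles), ...Object.keys(implStyles)]);
  for (const prop of props) {
    if (refStyles[prop] !== implStyles[prop]) {
      diffs.push(`${path}: ${prop}: reference ${refStyles[prop]} vs yours ${implStyles[prop]}`);
    }
  }
  // Compare children pairwise; unmatched extras are left to the LLM pass.
  const refKids = ref.children || [];
  const implKids = impl.children || [];
  for (let i = 0; i < Math.min(refKids.length, implKids.length); i++) {
    diffs.push(...diffSnapshots(refKids[i], implKids[i], `${path} > ${refKids[i].tag}`));
  }
  return diffs;
}
```

Feeding the machine-generated diff list to the LLM instead of the raw snapshots keeps the prompt small and the comparison exact.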
The Diff Mode: Two Pages Side by Side
For complex pages, you can capture both simultaneously:
Reference Site          Your Implementation
      │                         │
      ▼                         ▼
  [Capture]                 [Capture]
      │                         │
      └─────────► LLM ◄─────────┘
                   │
                   ▼
          "Differences:
           - padding: 24px vs 16px
           - font-weight: 600 vs 400
           - border-radius: 12px vs 8px"
This turns subjective "looks close enough" into objective "here are the 3 remaining differences." Similar to how we use snapshots for semantic regression detection, but for design fidelity instead of functionality.
Why This Loop Works
- Measurable progress: each iteration shrinks the diff
- Objectivity: no more "it looks similar," just concrete numbers
- Documentation: snapshots save as artifacts for future reference
- In-context learning: the LLM learns from its mistakes within the session
This is the same loop pattern I described in Runtime Snapshots #10 (The Loop), applied to design workflow instead of QA.
When This Is Especially Useful
- Agency work: Client shows competitor's site as reference
- Redesigns: Updating UI while preserving the "feel"
- Design systems: Extracting tokens from existing UI
- Responsive work: Capture at different viewports → LLM adapts breakpoints
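The design-systems case deserves a concrete sketch: once a page is captured as a snapshot tree, extracting a draft token set is just a tree walk. This assumes the snapshot shape from the product-card example (`computedStyles`, `children`); the token categories are an illustrative choice:

```javascript
// Walk a snapshot tree and collect unique values for token-worthy
// properties (here: colors and font sizes) into a draft token map.
function extractTokens(snapshot, tokens = { colors: new Set(), fontSizes: new Set() }) {
  const styles = snapshot.computedStyles || {};
  if (styles.color) tokens.colors.add(styles.color);
  if (styles.backgroundColor) tokens.colors.add(styles.backgroundColor);
  if (styles.fontSize) tokens.fontSizes.add(styles.fontSize);
  for (const child of snapshot.children || []) extractTokens(child, tokens);
  return tokens;
}
```

Running this over a whole captured page gives you the de-duplicated palette and type scale an existing UI actually uses, ready to be named as design tokens.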
Try It Yourself
- Install E2LLM for Chrome (also available for Firefox)
- Open any site you want to recreate
- Click the extension icon → Select Element → click a component
- Paste the JSON into Claude or ChatGPT: "Recreate this as a React component"
- Render your result, capture it
- Send both snapshots: "Find the differences"
- Fix and repeat
The gap between "reference" and "implementation" becomes a number that shrinks to zero. That's the Design Loop.
This is part of the Runtime Snapshots series, exploring how structured browser data changes the way we work with LLMs for frontend development. Check out E2LLM to try it yourself.