I had a Framer site (nocodetalks.co) running for 2 years. $10/month for hosting a static website. No dynamic content, no CMS, no forms. Just HTML on Framer's servers.
When I looked into moving it to Vercel (free tier, same performance), I hit a wall: Framer has no code export. Their help center says it flat out. You can build on Framer, but you can't take your files and leave.
I talked to a few friends who were in the same spot. Same frustration. They all wanted to move to Vercel or Cloudflare Pages but had no way to get their code out.
So I built a tool to do it. What started as a script for my own site turned into a product. Here's what I learned about how Framer works under the hood, and why "just saving the HTML" doesn't work.
Why View Source doesn't work
Framer sites look like normal HTML when you right-click and View Source. But they're React apps. The server sends pre-rendered HTML, then the client loads a JavaScript bundle that calls hydrateRoot() to take over the DOM.
If you save the HTML file and open it locally:
- The React bundle tries to load from Framer's CDN
- Hydration runs, but the API calls fail because you're not on Framer's domain
- React either throws errors or wipes the DOM
- You get a blank page or broken layout
The HTML you see in View Source is just the initial server render. The real site lives in React's runtime.
The 5 problems I had to solve
1. Stripping React without breaking the page
The first thing the crawler does after capturing each page's HTML: remove every <script> tag related to React hydration. That includes the main entry point, any modulepreload hints, and inline scripts that call hydrateRoot.
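That stripping pass can be sketched roughly like this. It's regex-based here for brevity (the real pipeline parses the HTML with Cheerio), and the CDN-hostname check is an assumption about what Framer's entry bundle looks like:

```javascript
// Simplified sketch of the script-stripping pass. Regex shown for brevity;
// the real crawler parses the HTML properly with Cheerio.
function stripHydrationScripts(html) {
  return html
    // External React/Framer entry bundles served from Framer's CDN
    .replace(/<script[^>]*\bsrc="[^"]*framerusercontent[^"]*"[^>]*>\s*<\/script>/g, '')
    // Inline scripts that call hydrateRoot
    .replace(/<script(?![^>]*\bsrc=)[^>]*>[\s\S]*?hydrateRoot[\s\S]*?<\/script>/g, '')
    // modulepreload hints for the hydration bundle
    .replace(/<link[^>]*rel="modulepreload"[^>]*>/g, '');
}
```

Note the negative lookahead on the inline-script pattern: it only matches scripts without a `src`, so external scripts that merely mention hydration are left alone.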
But you can't just delete all JavaScript. Framer injects scripts for interactive components too: FAQ accordions, mobile nav menus, tab switchers. Removing those makes the page static in the wrong way. Accordions don't open, menus don't toggle.
My fix: strip the React/hydration layer, then inject small vanilla JS scripts that replicate the interactive behavior. A FAQ accordion is about 20 lines of JS. A mobile menu toggle is 15. These replace hundreds of kilobytes of React runtime.
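For a sense of scale, here's what one of those injected replacements looks like. The class names (`.faq-item`, `.faq-question`, `.faq-answer`) are illustrative; the real injected script targets whatever selectors Framer generated for that component:

```javascript
// Minimal sketch of an injected vanilla-JS FAQ accordion.
function initAccordion(items) {
  for (const item of items) {
    const question = item.querySelector('.faq-question');
    const answer = item.querySelector('.faq-answer');
    question.addEventListener('click', () => {
      const open = item.classList.toggle('open');
      // Animate via max-height so a CSS transition can handle the easing
      answer.style.maxHeight = open ? answer.scrollHeight + 'px' : '0px';
    });
  }
}

// Only wire up the DOM when running in a browser
if (typeof document !== 'undefined') {
  initAccordion(document.querySelectorAll('.faq-item'));
}
```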
2. Invisible content (scroll animations)
This one took a while to figure out. Exported pages loaded without errors in the preview, but entire sections were missing.
Framer uses opacity: 0 on elements that are supposed to animate in when you scroll to them. On the live site, Framer's JavaScript detects scroll position and fades them in. In the exported version, those scripts are gone, so the elements stay invisible.
The crawler now detects elements with opacity values below 0.1 and tags them with a CSS class (.framer-reveal). The injected stylesheet sets them to opacity: 1 by default, with a CSS animation that fades them in. No JavaScript needed.
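The injected stylesheet amounts to a few lines of CSS. The `.framer-reveal` class name is from the tagging step above; the animation name and timing here are illustrative:

```css
/* Visible by default, with a one-shot fade to mimic the scroll-in animation */
.framer-reveal {
  opacity: 1 !important;
  animation: reveal-fade 0.6s ease-out;
}

@keyframes reveal-fade {
  from { opacity: 0; }
  to   { opacity: 1; }
}
```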
3. Hover effects applied through JavaScript
Most websites use CSS :hover for hover effects. Framer does it differently. Their whileHover prop triggers JavaScript that calls element.style.setProperty() to apply inline styles on mouseenter.
You can't replicate this by inspecting the CSS. The hover styles don't exist in any stylesheet. They're generated at runtime.
To capture these, I use Chrome DevTools Protocol to:
- Move the virtual mouse over each element
- Watch for DOM mutations using MutationObserver
- Record which CSS properties change
- Move the mouse away and verify the styles revert (to confirm it's a hover effect, not a click handler)
- Convert the captured properties into real CSS :hover rules
This is probably the most involved part of the whole system. About 200 lines of JS just for hover detection.
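The last step, turning a captured property diff into a :hover rule, can be sketched as a pure function. The diff shape here is an assumption; the capture itself goes through CDP:

```javascript
// Turn a recorded before/after style diff into a :hover rule.
// `before` and `after` map CSS property names to computed values.
function toHoverRule(selector, before, after) {
  const decls = [];
  for (const [prop, value] of Object.entries(after)) {
    // Only properties that actually changed on mouseenter become hover styles
    if (before[prop] !== value) decls.push(`${prop}: ${value};`);
  }
  return decls.length ? `${selector}:hover { ${decls.join(' ')} }` : '';
}
```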
4. Lazy-loaded images
Framer lazy-loads everything below the fold. If you capture the HTML on initial page load, most <img> tags have placeholder src attributes or no src at all.
The crawler auto-scrolls each page from top to bottom, pausing at intervals to let images load. After scrolling, it waits for the DOM to settle (no new mutations for 500ms) before capturing the final HTML.
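The "settle" wait boils down to a debounced promise. In this sketch, `subscribe` stands in for wiring a MutationObserver up inside the page; the 500ms quiet window is the one mentioned above:

```javascript
// Resolve once no activity has been reported for `quietMs` milliseconds.
// In the crawler, `subscribe` registers a callback fired on each DOM mutation.
function waitForQuiet(subscribe, quietMs = 500) {
  return new Promise((resolve) => {
    let timer = setTimeout(resolve, quietMs);
    subscribe(() => {
      // Every mutation resets the countdown
      clearTimeout(timer);
      timer = setTimeout(resolve, quietMs);
    });
  });
}
```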
5. CSS url() references
Framer stylesheets reference fonts and background images using absolute URLs that point to Framer's CDN (like https://framerusercontent.com/...). The crawler downloads every asset referenced in CSS url() declarations, saves them locally, and rewrites the URLs to relative paths.
This happens recursively. Some CSS files @import other CSS files, which reference fonts, which reference other assets. The crawler follows the chain until everything is local.
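The rewrite itself is a regex pass over the stylesheet text. In this sketch, `toLocalPath` is a placeholder for the real download-and-save step:

```javascript
// Rewrite every absolute url(...) reference in a stylesheet to a local path.
// `toLocalPath` is assumed to download the asset and return its new location.
function rewriteCssUrls(css, toLocalPath) {
  return css.replace(
    /url\(\s*(['"]?)(https?:\/\/[^'")\s]+)\1\s*\)/g,
    (_, quote, url) => `url(${quote}${toLocalPath(url)}${quote})`
  );
}
```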
The architecture
Nothing exotic:
- Puppeteer (headless Chrome) for rendering pages and executing JavaScript
- Cheerio for parsing and rewriting HTML after capture
- Regex for rewriting CSS url() paths (Cheerio doesn't parse CSS)
- Express for the API and serving the preview
- p-queue for concurrency control (max 2 browser instances at a time)
- archiver for streaming ZIP creation
The crawling is BFS (breadth-first). Start at the homepage, extract all internal links, visit each one, repeat. Capped at 50 pages per export.
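The crawl loop is a textbook BFS. Here `getLinks` stands in for rendering a page in Puppeteer and extracting its internal links; the 50-page cap is the one mentioned above:

```javascript
// BFS crawl: visit pages level by level, starting from the homepage.
async function crawl(startUrl, getLinks, maxPages = 50) {
  const visited = new Set();
  const queue = [startUrl];
  while (queue.length > 0 && visited.size < maxPages) {
    const url = queue.shift();
    if (visited.has(url)) continue; // a page may be queued more than once
    visited.add(url);
    for (const link of await getLinks(url)) {
      if (!visited.has(link)) queue.push(link);
    }
  }
  return [...visited];
}
```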
Asset downloads run 8 at a time using plain https.get. I tried using Puppeteer for asset downloads early on but it was 10x slower. Direct HTTP requests with no browser overhead made the biggest performance difference.
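The download pool itself is only a few lines. This sketch runs an array of task functions with a fixed number in flight; in the real crawler each task would wrap an https.get call:

```javascript
// Run `tasks` (functions returning promises) with at most `limit` in flight.
async function runLimited(tasks, limit = 8) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (safe: single-threaded)
      results[i] = await tasks[i]();
    }
  }
  // Spin up `limit` workers that drain the shared task list
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}
```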
The whole thing is about 1,100 lines of JS for the crawler, 400 for the URL rewriter, and 200 for the ZIP packager.
What the output looks like
A typical exported Framer site gives you:
index.html
about.html
pricing.html
contact.html
assets/
  css/
    a3f2c1_styles.css
    b7e9d4_chunk.css
  js/
    menu-toggle.js
    faq-accordion.js
    scroll-reveal.js
  images/
    hero.webp
    team-photo.jpg
    logo.svg
  fonts/
    inter-var.woff2
    playfair-display.woff2
manifest.json
Each HTML file is self-contained with relative paths. No CDN dependencies, no API calls, no framework. Open index.html in a browser and it works.
File sizes drop a lot. A typical Framer site loads 800KB+ of JavaScript (React runtime, Framer library, hydration bundle). The exported version is usually under 100KB of JS total (just the small interaction scripts).
Pricing and why it's one-time
$10.99 per Framer URL. You pay once, download the ZIP, and that's it.
I went with one-time because the use case is transactional. You export a site once. Maybe you come back months later to re-export after making changes in Framer. But it's not something you do daily or weekly.
For context: Framer's Pro plan is $30/month per site. If you export and move to Vercel's free tier, you save $30/month going forward. The $10.99 pays for itself in about 11 days.
What I'd tell other devs building export tools
Don't trust the initial HTML. Any site that hydrates on the client (React, Vue, Svelte, etc.) gives you a snapshot that doesn't represent the real page. You need to render it in a real browser and wait for JavaScript to finish.
Scroll the page. Lazy loading is everywhere now. If you don't scroll, you miss half the content.
Watch for JS-driven styles. More and more sites apply visual states through JavaScript instead of CSS. Hover effects, scroll triggers, intersection observers. If you want to capture the visual behavior, you need to simulate user interaction and observe DOM changes.
Test on 20+ real sites before you ship. Every Framer site uses a slightly different combination of components. The edge cases are endless. Carousels, tabs, nested accordions, sticky headers, video backgrounds. Each one needs specific handling.
If you want to try it: letaiworkforme.com. The export and live preview are free. You pay to download the ZIP.
The code runs on Node.js + Puppeteer. Happy to answer questions about any of the technical details.