DEV Community

nareshipme

How We Ditched Backend Rendering and Went Full Client-Side with framewebworker

The Problem with Server-Side Video Rendering

If you've ever built a video editing app, you know the pain: users create clips, hit "render," and then... wait. The video gets shipped to a server, processed with ffmpeg or Remotion, and eventually comes back. It's slow, expensive, and creates a terrible user experience.

At ClipCrafter, we lived with this architecture for months. Our rendering pipeline involved Inngest background functions, six dedicated API routes, a beefy Docker image with Chromium and Remotion dependencies, and a Deno runtime layer. Every render meant a round-trip to the server, and our infrastructure costs were climbing.

Last week, we ripped it all out.

Enter framewebworker

The key insight was simple: modern browsers are powerful enough to handle video rendering directly. We replaced our entire backend rendering stack with framewebworker, a lightweight library that processes video frames in a Web Worker.

The migration touched almost every layer of the app. Here's what changed in our biggest PR of the quarter (#214):

What We Deleted

  • All Inngest background render functions
  • Six API routes dedicated to render orchestration
  • Remotion bundler and renderer packages
  • Chromium and system-level dependencies from our Docker image
  • Deno runtime from the worker container
  • Render-related database columns via a new migration

That last point is worth emphasizing. We didn't just swap libraries — we removed an entire category of server state. No more tracking render jobs, polling for completion, or handling failed renders on the backend.

What We Added

The replacement is a single React hook: useClientStitch. It calls framewebworker's API directly in the browser, passing video segments with caption overlays and timing data. The rendering happens in a Web Worker, so the main thread stays responsive.

// Before: complex server orchestration
const response = await fetch('/api/render', {
  method: 'POST',
  body: JSON.stringify({ clipIds, options })
});
// Then poll for completion...

// After: direct client-side rendering
const { render } = useClientStitch();
await render(segments, {
  onProgress: (progress) => updateUI(progress)
});

The difference in developer experience is night and day.
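The post doesn't show framewebworker's internal message protocol, but the general pattern behind a responsive main thread is worth sketching: the worker posts small, serializable progress messages, and the page folds them into UI state. A hypothetical sketch — all type and field names here are ours, not framewebworker's actual API:

```typescript
// Hypothetical shape of messages a render worker might post back to the page.
type RenderMessage =
  | { type: "progress"; clipIndex: number; fraction: number } // 0..1 per clip
  | { type: "done"; clipIndex: number }
  | { type: "error"; clipIndex: number; message: string };

// Pure aggregation step: track the latest per-clip fraction and derive the
// overall progress, so the UI re-renders from cheap, serializable updates
// while the heavy frame work stays inside the worker.
function applyMessage(
  perClip: number[],
  msg: RenderMessage
): { perClip: number[]; overall: number } {
  const next = perClip.slice();
  if (msg.type === "progress") next[msg.clipIndex] = msg.fraction;
  if (msg.type === "done") next[msg.clipIndex] = 1;
  const overall = next.reduce((a, b) => a + b, 0) / next.length;
  return { perClip: next, overall };
}
```

On the main thread this would run inside a `worker.onmessage` handler, and a hook can then surface `overall` through an `onProgress` callback.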

The Follow-Up: Rich Progress UI

Once rendering moved client-side, we could do something that was nearly impossible before: show real, granular progress. In the server model, the best we could offer was "processing..." with maybe a percentage that updated every few seconds.

With framewebworker running locally, we built a RenderStatusPanel (PR #216) that shows per-clip progress with spinner and check icons, an overall progress bar with calculated ETA based on elapsed time, and real RichProgress data with per-clip metrics.

We went from a vague loading spinner to a detailed breakdown of exactly what's happening and when it will finish.
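The post doesn't include the ETA math, but "calculated ETA based on elapsed time" reduces to a few lines: assume the throughput so far is representative, project the total duration, and subtract what has already elapsed. A minimal sketch (the function name is ours):

```typescript
// Estimate remaining render time from overall progress (0..1) and the
// milliseconds elapsed since the render started. Assumes roughly constant
// throughput, which holds well enough for similar-sized clips.
function estimateEtaMs(progress: number, elapsedMs: number): number | null {
  if (progress <= 0) return null;        // nothing rendered yet: no estimate
  if (progress >= 1) return 0;           // finished
  const totalMs = elapsedMs / progress;  // projected total duration
  return Math.round(totalMs - elapsedMs);
}
```

With per-clip progress arriving from the worker, this gives a live countdown instead of a guess.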

Iterating on the API

The migration wasn't a single PR. After the initial swap, we refined the integration across several follow-up changes.

PR #217 bumped framewebworker to v0.1.1 and wired up the real RichProgress types, removing our local progress simulation. Then PR #218 migrated from the stitch() API to a new render() function that loads a single video once instead of re-loading per segment. This introduced buildVideoSegments() to map clips to framewebworker segments with captions and timing.
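The post doesn't reproduce buildVideoSegments(), but its job is described: map app-level clips to framewebworker segments carrying captions and timing. A hypothetical sketch of that mapping — the Clip and VideoSegment shapes below are guesses for illustration, not framewebworker's actual types:

```typescript
// Hypothetical app-side clip model.
interface Clip {
  startSec: number;   // in-point in the source video
  endSec: number;     // out-point
  caption?: string;   // optional overlay text
}

// Hypothetical segment shape consumed by a render() call that loads the
// source video once and cuts every segment from it.
interface VideoSegment {
  start: number;
  duration: number;
  overlays: { text: string }[];
}

function buildVideoSegments(clips: Clip[]): VideoSegment[] {
  return clips.map((clip) => ({
    start: clip.startSec,
    duration: clip.endSec - clip.startSec,
    overlays: clip.caption ? [{ text: clip.caption }] : [],
  }));
}
```

Loading the source once and describing segments declaratively is what lets render() avoid the per-segment reload cost that stitch() paid.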

Results

Docker Image Size

Removing Chromium, Remotion, and Deno shaved roughly 200 MB off our worker Docker image (PR #215). Faster deploys, lower storage costs.

Infrastructure

Six API routes and all background job infrastructure: gone. Our server is simpler, cheaper to run, and has fewer failure modes.

User Experience

Renders start instantly — no upload, no queue, no polling. Users see exactly what's happening with per-clip status and a real ETA.

Takeaways for Your Projects

If you're building a media processing app, consider whether the browser can handle more than you think. Web Workers and modern APIs like OffscreenCanvas and WebCodecs are closing the gap with server-side processing for many use cases.
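A quick way to answer "can this browser handle it" is plain feature detection before offering client-side export. A minimal sketch; each check simply reports false where the API is missing:

```typescript
// Check for the browser APIs that client-side video rendering leans on.
function detectRenderSupport(): {
  webWorker: boolean;
  offscreenCanvas: boolean;
  webCodecs: boolean;
  sharedArrayBuffer: boolean;
} {
  const g = globalThis as Record<string, unknown>;
  return {
    webWorker: typeof g.Worker === "function",
    offscreenCanvas: typeof g.OffscreenCanvas === "function",
    // WebCodecs: VideoEncoder gives access to hardware-assisted codecs.
    webCodecs: typeof g.VideoEncoder === "function",
    // Only exposed when the page is cross-origin isolated (COOP/COEP headers),
    // which multithreaded ffmpeg.wasm builds require.
    sharedArrayBuffer: typeof g.SharedArrayBuffer === "function",
  };
}
```

Falling back to a server render path (or disabling export) when a check fails keeps the feature safe to ship broadly.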

The key questions to ask:

  • Does every user need to render, or just a few? If most users render, client-side saves massive server costs.
  • Are your renders CPU-bound or GPU-bound? Browsers have GPU access.
  • Can you tolerate slightly longer render times in exchange for zero infrastructure?

For ClipCrafter, the answer to all three made client-side rendering a clear win.


ClipCrafter is an open-source video clipping tool. Check out the repo at github.com/clipcrafterapp/clipcrafter-app and give it a star if this kind of engineering interests you.

Top comments (3)

Harsh

This is a refreshingly bold take.

Most teams default to SSR without thinking twice. Moving full client-side with Web Workers is definitely swimming against the current.

I'm curious about a few things:

  • SEO: how are you handling crawlers that don't execute JS well?
  • framewebworker: is this something you built? Open source plans?
  • Initial bundle size: did moving rendering logic to the client increase it significantly?

The main thread benefits make sense. Just wondering about the trade-offs you had to accept.

Thanks for sharing; always good to see people challenging best practices. 🙌

nareshipme

Hey Harsh, great questions!

SEO — framewebworker handles video export, not page content rendering. The exported video is generated client-side (OffscreenCanvas + ffmpeg.wasm) so there's nothing for crawlers to index. ClipCrafter's actual page content is still SSR'd normally — no SEO impact.

framewebworker — Yes, we built it and just open-sourced it! npm: npmjs.com/package/framewebworker | GitHub: github.com/nareshipme/framewebworker. v0.2.0 is live with exportClips(), mergeClips(), React hooks, and per-clip render metrics.

Bundle size — the library is just 474 kB on its own, and I plan to reduce that. The ffmpeg.wasm WebAssembly build is ~30 MB but loaded lazily — only when the user triggers an export. Initial page load is unaffected. First export has a ~2-3s cold start, then it stays in memory. The real tradeoff: it requires COOP/COEP headers. Worth it for us, but worth knowing before adopting.
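For anyone weighing that tradeoff: cross-origin isolation comes down to two response headers on the pages that run the worker. Illustrative values (how you set them depends on your server or framework):

```typescript
// The two response headers that opt a page into cross-origin isolation,
// required before the browser will expose SharedArrayBuffer (which
// multithreaded ffmpeg.wasm builds depend on).
const crossOriginIsolationHeaders: Record<string, string> = {
  "Cross-Origin-Opener-Policy": "same-origin",
  "Cross-Origin-Embedder-Policy": "require-corp",
};
```

Note that `require-corp` also constrains which cross-origin resources (scripts, images, iframes) the page may embed, so it can affect third-party content.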

Thanks for the thoughtful comment! 🙌
