<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bojan Josifoski</title>
    <description>The latest articles on DEV Community by Bojan Josifoski (@bojan_josifoski_76e9fd65d).</description>
    <link>https://dev.to/bojan_josifoski_76e9fd65d</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3285408%2F8945c1e8-8a0e-4379-adae-287cba8b2b94.jpeg</url>
      <title>DEV Community: Bojan Josifoski</title>
      <link>https://dev.to/bojan_josifoski_76e9fd65d</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bojan_josifoski_76e9fd65d"/>
    <language>en</language>
    <item>
      <title>How I Built a Programmatic Video Engine for SaaS Product Demos</title>
      <dc:creator>Bojan Josifoski</dc:creator>
      <pubDate>Fri, 15 May 2026 12:39:56 +0000</pubDate>
      <link>https://dev.to/bojan_josifoski_76e9fd65d/how-i-built-a-programmatic-video-engine-for-saas-product-demos-5292</link>
      <guid>https://dev.to/bojan_josifoski_76e9fd65d/how-i-built-a-programmatic-video-engine-for-saas-product-demos-5292</guid>
      <description>&lt;p&gt;Every SaaS product needs a demo video. The standard options are screen recording, which looks amateur and breaks every time the UI changes, or hiring a motion designer, which costs thousands of dollars and takes weeks. Both produce a static artifact that is outdated the moment you ship a new feature.&lt;/p&gt;

&lt;p&gt;I needed a product walkthrough for &lt;a href="https://samplehq.io" rel="noopener noreferrer"&gt;SampleHQ&lt;/a&gt;. Instead of choosing between those two bad options, I built a third: a React-based framework that generates cinematic demo videos programmatically. The result is &lt;a href="https://github.com/codeverbojan/remotion-cinematic" rel="noopener noreferrer"&gt;remotion-cinematic&lt;/a&gt;, an open-source template built on &lt;a href="https://remotion.dev" rel="noopener noreferrer"&gt;Remotion&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here is the video it produced:&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/1185215935" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;Why Code Instead of a Screen Recording&lt;/h2&gt;

&lt;p&gt;A screen recording captures pixels. When you redesign a page, rename a feature, or change the navigation structure, the recording is wrong. You re-record, re-edit, re-export. Every time.&lt;/p&gt;

&lt;p&gt;A programmatic video captures intent. The video is a React composition that renders your actual UI components. When the UI changes, you update the component and re-render. The choreography, camera movement, cursor interactions, and transitions stay intact. The whole process takes minutes, not days.&lt;/p&gt;

&lt;p&gt;There are other advantages. Version control. Branching. Code review. The same workflow you use for your product applies to your marketing video. You can diff two versions of a demo video the way you diff two versions of a feature.&lt;/p&gt;

&lt;h2&gt;What Remotion Cinematic Does&lt;/h2&gt;

&lt;p&gt;Remotion is a framework for making videos with React. You write components, they render frames, and Remotion stitches those frames into an MP4. It is excellent infrastructure, but it does not solve the choreography problem. Moving a cursor smoothly across the screen, timing window entrances, coordinating camera movement with scene transitions: that is a lot of custom code.&lt;/p&gt;

&lt;p&gt;Remotion Cinematic is the layer on top. It provides:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prop-driven choreography.&lt;/strong&gt; Every visual parameter, including window position, size, entrance animation, rotation, and z-index, is defined as an input prop. You edit values in Remotion Studio's right panel and see the result immediately. No code changes for layout tweaks.&lt;/p&gt;
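&lt;p&gt;A minimal sketch of what prop-driven layout data can look like. The field names here are illustrative, not the template's exact schema:&lt;/p&gt;

```typescript
// Hypothetical window layout props: everything the renderer needs to
// place a window is plain data, editable in Remotion Studio's panel.
type WindowLayout = {
  id: string;
  x: number;          // top-left position in composition pixels
  y: number;
  width: number;
  height: number;
  rotation: number;   // degrees
  zIndex: number;
  entrance: "fade" | "slide-up" | "pop";
  entranceStart: number; // frame on which the entrance animation begins
};

const layout: WindowLayout[] = [
  { id: "dashboard", x: 120, y: 80, width: 960, height: 600,
    rotation: 0, zIndex: 2, entrance: "slide-up", entranceStart: 12 },
  { id: "chat", x: 1180, y: 200, width: 520, height: 420,
    rotation: -2, zIndex: 1, entrance: "fade", entranceStart: 30 },
];

// A tiny resolver: entrance progress for a window at a given frame,
// assuming a fixed 20-frame entrance duration, clamped to [0, 1].
function entranceProgress(w: WindowLayout, frame: number): number {
  const t = (frame - w.entranceStart) / 20;
  return Math.min(1, Math.max(0, t));
}
```

&lt;p&gt;Because layout is data rather than code, a tweak in the Studio panel is just a new value for one of these fields.&lt;/p&gt;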

&lt;p&gt;&lt;strong&gt;Geometry-aware cursor.&lt;/strong&gt; The cursor targets real elements by their DOM ID. It follows arc, linear, or eased curves between waypoints. When it clicks a button, the button actually responds. When it drags a window, the window moves. The cursor changes shape based on what it is interacting with.&lt;/p&gt;
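&lt;p&gt;The core of waypoint-based cursor movement can be sketched in a few lines. The waypoint shape and element IDs below are made up for illustration; only the interpolation idea is the point:&lt;/p&gt;

```typescript
// Hypothetical cursor waypoints: each names a target element and the
// frame on which the cursor arrives there.
type Waypoint = { targetId: string; x: number; y: number; frame: number };

const path: Waypoint[] = [
  { targetId: "sidebar-item-1", x: 140, y: 260, frame: 0 },
  { targetId: "tab-orders",     x: 620, y: 120, frame: 30 },
  { targetId: "row-3",          x: 700, y: 380, frame: 55 },
];

// Cubic ease-in-out, written to avoid a piecewise lookup table.
const easeInOut = (t: number) =>
  t >= 0.5 ? 1 - Math.pow(-2 * t + 2, 3) / 2 : 4 * t * t * t;

// Find the leg that brackets the current frame, then blend between
// its endpoints with the easing curve.
function cursorAt(frame: number): { x: number; y: number } {
  let a = path[0], b = path[0];
  for (let i = 0; path.length - 1 > i; i++) {
    if (frame >= path[i].frame) { a = path[i]; b = path[i + 1]; }
  }
  if (frame >= b.frame) return { x: b.x, y: b.y };
  const t = easeInOut((frame - a.frame) / (b.frame - a.frame));
  return { x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t };
}
```

&lt;p&gt;In the real framework the x/y coordinates come from the target element's geometry rather than hand-typed numbers, which is what keeps clicks landing on real buttons.&lt;/p&gt;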

&lt;p&gt;&lt;strong&gt;Scene-relative camera.&lt;/strong&gt; Camera keyframes reference scene names, not absolute frame numbers. If you change a scene's duration or reorder scenes, the camera timeline adjusts automatically. The camera supports zoom, pan, and rotation with per-keyframe easing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A full visual editor.&lt;/strong&gt; Click any element in the preview to select it. Drag to reposition. Use handles to resize. Snap guides appear when elements align. Double-click text to edit inline. A floating property panel shows context-aware controls for whatever is selected. There is a visual cursor path editor with SVG overlays for drawing and adjusting cursor movement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;App UI from JSON.&lt;/strong&gt; A descriptor format defines entire app interfaces: sidebars, navigation bars, data tables, stat cards, chat panels. Drop in a JSON object and the framework renders a full application interface with interactive elements. The descriptor can be generated from Figma files or screenshots.&lt;/p&gt;
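&lt;p&gt;To make the descriptor idea concrete, here is a toy version of the shape. The real schema is defined with Zod in the template; this node structure and these component names are illustrative only:&lt;/p&gt;

```typescript
// Hypothetical descriptor node: a type, optional props, optional children.
type Node = {
  type: string;
  props?: { [k: string]: unknown };
  children?: Node[];
};

const descriptor: Node = {
  type: "app-shell",
  children: [
    { type: "sidebar", props: { items: ["Dashboard", "Orders", "Samples"] } },
    { type: "main", children: [
      { type: "stat-card", props: { label: "Open samples", value: 42 } },
      { type: "data-table", props: { columns: ["Account", "Status"] } },
    ]},
  ],
};

// Walk the tree the way a renderer would, collecting component types
// in render order.
function componentTypes(n: Node): string[] {
  const out = [n.type];
  for (const c of n.children ?? []) out.push(...componentTypes(c));
  return out;
}
```

&lt;p&gt;A renderer maps each node type to a React component; the JSON is the single source of truth for the on-screen app.&lt;/p&gt;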

&lt;p&gt;&lt;strong&gt;Figma and screenshot import.&lt;/strong&gt; A CLI tool converts Figma frames into app descriptors via the Figma REST API. Alternatively, point it at a screenshot and Claude vision generates the descriptor. Either way, you go from a design to a rendered app UI in the video without manually coding components.&lt;/p&gt;

&lt;h2&gt;The Architecture&lt;/h2&gt;

&lt;p&gt;The framework is organized into three layers.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;engine&lt;/strong&gt; handles motion. It includes the choreography system (resolving window positions at any frame), the cursor system (interpolation, anchoring, shape switching), the camera rig (global transforms applied to the composition), and the audio manager (music bed with auto-fade, sound effects with volume ducking).&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;primitives&lt;/strong&gt; are reusable visual components. macOS-style windows with traffic lights. Serif headlines with word-stream animation. An end card with logo and call to action. Push transitions that slide scenes continuously instead of cutting. 17 app-ui building blocks that compose into realistic application interfaces.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;editor&lt;/strong&gt; is a separate layer that reads and writes input props. The composition itself is pure: it receives props and renders frames. The editor overlays selection boxes, drag handles, snap guides, and property panels on top. This separation means the editor can be disabled for production renders without touching composition code.&lt;/p&gt;

&lt;p&gt;The interaction layer ties everything together. A UIKeyframe system defines state changes on a timeline: at frame 24 the sidebar highlights item 1, at frame 40 the tab switches to "Orders," at frame 56 a table row highlights. These keyframes sync with cursor actions. When the cursor clicks a sidebar item, the sidebar responds. No manual wiring required.&lt;/p&gt;
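&lt;p&gt;The keyframe fold described above can be sketched as follows. The field names are illustrative, not the actual UIKeyframe API:&lt;/p&gt;

```typescript
// Hypothetical UIKeyframe: at a given frame, set one piece of UI state.
type UIKeyframe = { frame: number; key: string; value: string };

const keyframes: UIKeyframe[] = [
  { frame: 24, key: "sidebar.active",  value: "item-1" },
  { frame: 40, key: "tabs.selected",   value: "Orders" },
  { frame: 56, key: "table.highlight", value: "row-3" },
];

// UI state at any frame is the fold of every keyframe at or before it
// (keyframes sorted by frame), so scrubbing the timeline is
// deterministic: the same frame always produces the same state.
function uiStateAt(frame: number): { [k: string]: string } {
  const state: { [k: string]: string } = {};
  for (const k of keyframes) {
    if (frame >= k.frame) state[k.key] = k.value;
  }
  return state;
}
```

&lt;p&gt;Syncing a cursor click with a keyframe then reduces to giving both the same frame number.&lt;/p&gt;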

&lt;h2&gt;Claude Integration&lt;/h2&gt;

&lt;p&gt;The template includes a Claude skill file that teaches &lt;a href="https://bojanjosifoski.com/ai-agent-queries-operational-data/" rel="noopener noreferrer"&gt;Claude&lt;/a&gt; the full API. When you open the project in Claude Code, Claude can add new scenes from a description, wire up cursor choreography, build app UI from screenshots, and adjust timing and easing. The same &lt;a href="https://bojanjosifoski.com/ai-in-sample-operations/" rel="noopener noreferrer"&gt;AI-assisted approach&lt;/a&gt; that works for operational software works for video production.&lt;/p&gt;

&lt;p&gt;This is not a gimmick. Scene creation involves writing a React component that uses specific engine APIs, registering it in the composition, adding window layout entries, defining cursor waypoints, and setting camera keyframes. Having Claude handle the boilerplate while you focus on creative direction is a genuine productivity multiplier.&lt;/p&gt;

&lt;h2&gt;The Numbers&lt;/h2&gt;

&lt;p&gt;The framework has 509 tests covering the engine, primitives, editor, CLI, schema validation, and end-to-end wiring. It ships with 5 example scenes: a chaos desktop, product reveal, feature showcase, headline, and end card. The tech stack is Remotion 4.x, React 19, TypeScript 5.9, and Zod for schema validation. Output is 1920x1080 at 30fps.&lt;/p&gt;

&lt;p&gt;The entire thing is MIT licensed. Clone it, swap in your screenshots and brand colors, and you have a cinematic product demo.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx degit codeverbojan/remotion-cinematic my-video
&lt;span class="nb"&gt;cd &lt;/span&gt;my-video
npm &lt;span class="nb"&gt;install
&lt;/span&gt;npm run studio
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;What I Learned&lt;/h2&gt;

&lt;p&gt;Three things surprised me during the build.&lt;/p&gt;

&lt;p&gt;First, the visual editor was harder than the engine. Getting drag-to-move, resize handles, snap guides, and inline text editing to work correctly inside Remotion Studio required a careful separation between the editor overlay and the composition iframe. The postMessage bridge between them is where most of the complexity lives.&lt;/p&gt;
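&lt;p&gt;The essence of that bridge is a small message protocol: the overlay posts patches, the composition applies them to its input props. This is a simplified sketch with invented message names, not the template's actual protocol:&lt;/p&gt;

```typescript
// Hypothetical bridge messages: the editor overlay posts prop patches,
// the composition iframe applies them to its input props.
type BridgeMessage =
  | { type: "SET_PROP"; path: string; value: unknown }
  | { type: "SELECT"; id: string };

// Pure message handler: returns the next props object without mutating
// the old one, which keeps undo and preview cheap.
function applyMessage(
  props: { [k: string]: unknown },
  msg: BridgeMessage
): { [k: string]: unknown } {
  if (msg.type === "SET_PROP") {
    // Shallow patch; a real bridge would resolve nested paths.
    return { ...props, [msg.path]: msg.value };
  }
  return props; // SELECT only affects editor overlay state
}

// Inside the iframe, the wiring would look roughly like:
// window.addEventListener("message", (e) =>
//   setProps((p) => applyMessage(p, e.data)));
```

&lt;p&gt;Keeping the handler pure is what lets the overlay and the composition evolve independently on either side of the postMessage boundary.&lt;/p&gt;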

&lt;p&gt;Second, scene-relative timing is essential. Early versions used absolute frame numbers for everything. Changing one scene's duration broke every cursor waypoint and camera keyframe that followed. Referencing scene names instead of frame numbers eliminated an entire class of bugs.&lt;/p&gt;

&lt;p&gt;Third, the prop-driven approach pays for itself immediately. Being able to hand the Remotion Studio interface to someone who does not write code and let them adjust copy, colors, timing, and layout without touching a source file changes who can iterate on the video. The feedback loop goes from "file a request and wait" to "change it and preview."&lt;/p&gt;

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;The repository is at &lt;a href="https://github.com/codeverbojan/remotion-cinematic" rel="noopener noreferrer"&gt;github.com/codeverbojan/remotion-cinematic&lt;/a&gt;. The README has a quickstart, customization guide, and full project structure. The docs folder covers the engine API, scene creation, and advanced customization.&lt;/p&gt;

&lt;p&gt;If you are building a SaaS product and need a demo video that does not go stale, this is the approach. Fork it, make it yours, render it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://samplehq.io" rel="noopener noreferrer"&gt;SampleHQ&lt;/a&gt; is the product this framework was built to demo. It manages samples, tracks fulfillment, and connects sample activity to pipeline data.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>react</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
