Most developers share code as screenshots. Carbon, ray.so, a quick crop from VS Code. They look fine, but they sit still in a feed full of motion, and most people scroll past.
A short video of the same code, typing itself out, behaves very differently. It catches the eye for the first second, which is usually all you need.
You can build this with React. The library is Remotion. Once you understand two things, that a frame is just a function of time, and that React already gives you everything else, the rest is small.
This post walks through a minimal version: a typing animation with syntax highlighting, rendered to MP4. Around 100 lines.
Why React for video
Remotion treats a video as a React tree. You write components. The library renders them once per frame and stitches the frames into a video file. There is no timeline panel, no keyframes, no After Effects.
Two hooks do most of the work:
- useCurrentFrame() returns the current frame number.
- interpolate(frame, [from, to], [a, b]) maps that frame to any animated value.
Everything else is React you already know.
Setting up
npm create video@latest -- --template hello-world
cd my-code-video
npm run dev
This opens Remotion Studio at localhost:3000. The studio is just a preview; the source of truth is src/Composition.tsx. Replace its contents as we go.
The entry point registers a composition (width, height, fps, duration) and points it at a component:
// src/Root.tsx
import { Composition } from "remotion";
import { CodeVideo } from "./CodeVideo";

export const Root = () => (
  <Composition
    id="CodeVideo"
    component={CodeVideo}
    durationInFrames={120}
    fps={30}
    width={1080}
    height={1920}
  />
);
That is the whole config for a 4-second 1080×1920 vertical video at 30 fps.
Syntax highlighting without the async trap
Remotion renders frames synchronously. Shiki, the highlighter, loads its themes and grammars asynchronously. If you call it inside a component, the first frame renders with no colors.
The fix is delayRender / continueRender. You tell Remotion to pause until your data is ready:
import { useEffect, useState } from "react";
import { delayRender, continueRender } from "remotion";
import { codeToHtml } from "shiki";

function useHighlightedCode(code: string, lang: string, theme: string) {
  const [html, setHtml] = useState<string | null>(null);
  const [handle] = useState(() => delayRender("Highlighting code"));

  useEffect(() => {
    codeToHtml(code, { lang, theme }).then((out) => {
      setHtml(out);
      continueRender(handle);
    });
  }, [code, lang, theme, handle]);

  return html;
}
Now the first frame is colored, and Remotion does not start rendering until Shiki has loaded.
The typing animation
The trick is that the visible substring of the code is a function of the frame:
import { useCurrentFrame, interpolate } from "remotion";

const code = `function hello(name) {
  return \`Hi, \${name}\`;
}`;

export const CodeVideo = () => {
  const frame = useCurrentFrame();
  const charsVisible = Math.floor(
    interpolate(frame, [0, 90], [0, code.length], {
      extrapolateRight: "clamp",
    }),
  );
  const visible = code.slice(0, charsVisible);
  const html = useHighlightedCode(visible, "javascript", "github-dark");

  return (
    <div
      style={{
        flex: 1,
        background: "#0d1117",
        fontFamily: "JetBrains Mono, monospace",
        fontSize: 48,
        padding: 80,
        whiteSpace: "pre",
      }}
      dangerouslySetInnerHTML={{ __html: html ?? "" }}
    />
  );
};
Frame 0 shows nothing. Frame 90 shows the full snippet. Frames in between show a growing prefix. The clamp option pins the value at the end of the range, so the last 30 frames hold a steady "finished" state, which gives the eye a moment to land and read.
interpolate does linear easing by default. Pass easing: Easing.out(Easing.cubic) if you want it to slow down at the end, which feels more like real typing.
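To make the mapping concrete, here is a plain-TypeScript sketch of the frame-to-characters function with clamping and an ease-out curve. This is a hypothetical re-implementation for illustration only; Remotion's real interpolate() has more options and edge-case handling.

```typescript
// Hypothetical re-implementation of the mapping we use, for illustration.
const easeOutCubic = (t: number): number => 1 - Math.pow(1 - t, 3);

function charsAtFrame(
  frame: number,
  durationFrames: number,
  codeLength: number,
  easing: (t: number) => number = (t) => t, // linear by default
): number {
  // Normalize the frame to 0..1 and clamp (extrapolateRight: "clamp").
  const t = Math.min(Math.max(frame / durationFrames, 0), 1);
  return Math.floor(easing(t) * codeLength);
}

// Linear: halfway through the animation, half the code is visible.
charsAtFrame(45, 90, 100); // 50
// Eased: typing races ahead early, then slows toward the end.
charsAtFrame(45, 90, 100, easeOutCubic); // 87
// Past the end of the range, the value stays pinned at the full length.
charsAtFrame(120, 90, 100); // 100
```

The eased version front-loads the motion, which reads as a fast typist settling on the last few characters.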
Loading a real font
System monospace varies across platforms, and on AWS Lambda or any headless renderer you cannot rely on it at all. Load a font explicitly:
import { loadFont } from "@remotion/google-fonts/JetBrainsMono";
const { fontFamily } = loadFont();
// then: fontFamily in your style
@remotion/google-fonts is build-aware. It registers the font with the renderer so it shows up correctly during headless rendering.
A blinking cursor (optional, a few lines)
const cursorVisible = Math.floor(frame / 15) % 2 === 0;

// in your JSX, after the highlighted code:
{cursorVisible && (
  <span
    style={{
      display: "inline-block", // a plain inline span ignores width/height
      background: "#fff",
      width: 20,
      height: 56,
    }}
  />
)}

That is all. The cursor is just a span whose visibility flips every 15 frames, half a second at 30 fps.
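The blink is just a square wave over frames. As a standalone function:

```typescript
// Cursor visibility as a pure function of the frame:
// on for `period` frames, off for `period` frames.
// At period = 15 and 30 fps, that is one full blink per second.
const cursorVisibleAt = (frame: number, period = 15): boolean =>
  Math.floor(frame / period) % 2 === 0;

cursorVisibleAt(0);  // true
cursorVisibleAt(15); // false
cursorVisibleAt(30); // true
```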
Rendering to MP4
npx remotion render src/index.ts CodeVideo out/video.mp4
The first render is slow because Chromium boots once. After that, frames render in parallel across CPU cores. A 4-second 1080×1920 video on a laptop takes 10 to 20 seconds.
For vertical platforms (TikTok, Reels, Shorts), keep the 1080×1920 composition. For Twitter or a blog header, change the composition to 1920×1080 and you are done. Same component, different framing.
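Because the component knows nothing about the canvas size, you can register both framings side by side in Root.tsx and render whichever you need. A sketch; the second id ("CodeVideoWide") is my own naming, not anything the template provides:

```typescript
// src/Root.tsx - one component, two framings, registered as two compositions.
import { Composition } from "remotion";
import { CodeVideo } from "./CodeVideo";

export const Root = () => (
  <>
    {/* Vertical, for TikTok / Reels / Shorts */}
    <Composition
      id="CodeVideo"
      component={CodeVideo}
      durationInFrames={120}
      fps={30}
      width={1080}
      height={1920}
    />
    {/* Horizontal, for Twitter or a blog header; id is a hypothetical name */}
    <Composition
      id="CodeVideoWide"
      component={CodeVideo}
      durationInFrames={120}
      fps={30}
      width={1920}
      height={1080}
    />
  </>
);
```

Then render by id: npx remotion render src/index.ts CodeVideoWide out/wide.mp4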
Where to take this
The piece above is the floor. Things you can layer on, each maybe a weekend:
- Line-by-line reveal instead of character-by-character. Feels less frantic for longer snippets.
- Diff mode. Two versions of the code; the deletions fade out, the additions fade in. The diff npm package gives you the line ops; the rest is interpolate.
- Token burst. Shiki returns tokens with positions; you can animate each one independently.
- Keystroke audio. An <Audio /> component with a short click sample, played on every character increment.
- Backgrounds. Gradients, particles, a subtle drift. Just CSS or SVG, animated with interpolate.
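The first idea on the list, line-by-line reveal, is the same substring trick at a coarser granularity. A sketch in plain TypeScript; the helper name and the 10-frames-per-line pacing are my own choices, not anything from Remotion:

```typescript
// Reveal whole lines instead of characters: one new line every
// `framesPerLine` frames, capped so the finished state holds at the end.
function visibleLines(code: string, frame: number, framesPerLine = 10): string {
  const lines = code.split("\n");
  const shown = Math.min(lines.length, Math.floor(frame / framesPerLine) + 1);
  return lines.slice(0, shown).join("\n");
}

const snippet = "const a = 1;\nconst b = 2;\nconst c = a + b;";
visibleLines(snippet, 0);  // first line only
visibleLines(snippet, 10); // first two lines
visibleLines(snippet, 99); // the whole snippet
```

Inside the component you would feed the result to the same useHighlightedCode hook, with a fade on the newest line if you want to soften the pop-in.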
Each of these is its own post. The core never changes: a frame is a function of time, and React handles the rest.
A short aside
I built all of the above for myself, then realized I did not want to copy-paste Remotion configs every time I wanted to post a snippet. So I packaged it as a small web app called code2clip: paste code, pick a theme and aspect ratio, get an MP4. If this tutorial reads like more weekend than you have, that exists. If it does not, the Remotion approach above is the whole picture; you do not need a tool.
Closing
Code screenshots are not going away, and they should not. But for the specific case of a developer feed (Twitter, LinkedIn, TikTok, Shorts), a 10-second video of code typing itself out behaves measurably differently from a static image. The infrastructure to build one is small, and you already know most of it.
What format are you using for code right now: screenshots, gifs, video? Curious whether anyone else has tested the same snippet in two formats.