Adding subtitles to a video shouldn't require handing your file over to some cloud service. Whether it's a personal vlog, a client interview, or a clip you don't want floating around the internet, uploading feels like overkill for something this simple.
We thought the same thing. So we built a subtitle editor that runs entirely in your browser. You pick your video, type your subtitles, tweak the look, and hit go. The output video gets rendered locally — your footage never touches a server.
Here's how we pulled it off.
Why Keep Subtitle Editing in the Browser?
Server-side subtitle tools are everywhere, but they all share the same annoying traits:
- You have to upload first: A 2-minute 1080p video can easily be 200MB. That's a lot of waiting on a mediocre connection.
- Privacy is murky: Once your file is on someone else's server, who knows where it ends up?
- They nickel-and-dime you: Free tier gives you 3 videos a month, then suddenly it's $15.
- No control over styling: Pick one of three presets and hope for the best.
Doing it in the browser fixes all of that:
- Zero uploads: Everything stays on your machine
- Instant processing: No queue, no "please wait while we process your file" emails
- Full styling control: Font size, color, position — you decide
- Actually free: You're using your own CPU, so we don't need to charge you
The catch? You're limited by your device's memory. But for short to medium-length clips, modern hardware handles it just fine.
How the Whole Thing Flows
From the moment you drop a video to the moment you download the subtitled version, the pipeline looks like this:

1. You pick a video; we create an object URL so the native player can preview it immediately.
2. You type and time your subtitles in the editor and tweak the styling.
3. We generate an SRT file from your entries.
4. We lazy-load FFmpeg.wasm and write the video and the SRT into its virtual filesystem.
5. FFmpeg burns the subtitles into the frames, and you download the result as a blob — all without leaving the browser.

Let's walk through the interesting parts.
The Data Model
We keep the state intentionally flat. A video is represented by one VideoFile object, and subtitles are just an array of entries:
```typescript
interface VideoFile {
  id: string;
  file: File;
  previewUrl: string;
  outputUrl?: string;
  outputFileName?: string;
  error?: string;
  processing?: boolean;
  progress?: number;
  videoWidth?: number;
  videoHeight?: number;
  duration?: number;
}

interface SubtitleEntry {
  id: string;
  startTime: number;
  endTime: number;
  text: string;
}
```
The `duration` field on `VideoFile` is useful because it lets us validate subtitle timings — you can't have a subtitle that starts after the video ends. `previewUrl` comes from a standard `URL.createObjectURL()` call on the raw file, so the browser's native `<video>` element can play it immediately.
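The post doesn't show the timing check itself; here is a minimal sketch of how it might look (`validateSubtitles` is a hypothetical helper name, not from the app's code):

```typescript
// Hypothetical validation helper: returns human-readable errors for
// entries that are out of order or fall outside the video's duration.
function validateSubtitles(
  subs: { id: string; startTime: number; endTime: number }[],
  duration: number,
): string[] {
  const errors: string[] = [];
  for (const sub of subs) {
    if (sub.endTime <= sub.startTime) {
      errors.push(`Subtitle ${sub.id}: end time must be after start time`);
    }
    if (sub.startTime >= duration) {
      errors.push(`Subtitle ${sub.id}: starts after the video ends`);
    }
  }
  return errors;
}
```

Running this before processing lets the UI flag bad entries instead of handing FFmpeg a nonsensical SRT.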
Loading FFmpeg on Demand
FFmpeg.wasm is heavy. We're talking several megabytes of JavaScript and WebAssembly. We definitely don't want to make users download that just to look at our landing page.
Our loader fetches it lazily and caches the instance:
```typescript
// utils/ffmpegLoader.ts
import { toBlobURL, fetchFile } from "@ffmpeg/util";

// Cached at module level so repeat conversions skip the startup cost.
let ffmpeg: any = null;

export async function loadFFmpeg() {
  if (ffmpeg) return { ffmpeg, fetchFile };

  // Dynamic import keeps the heavy module out of the initial bundle.
  const { FFmpeg } = await import("@ffmpeg/ffmpeg");
  ffmpeg = new FFmpeg();

  const baseURL = "https://cdn.jsdelivr.net/npm/@ffmpeg/core@0.12.6/dist/umd";
  await ffmpeg.load({
    coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, "text/javascript"),
    wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, "application/wasm"),
  });

  return { ffmpeg, fetchFile };
}
```
The dynamic import (`await import("@ffmpeg/ffmpeg")`) keeps the module out of the initial bundle. `toBlobURL` fetches the core files from jsdelivr and turns them into blob URLs, sidestepping CORS issues. And because we cache the `ffmpeg` instance at the module level, subsequent conversions don't need to pay the startup cost again.
From User Input to SRT
The heart of any subtitle tool is the SRT format. It's dead simple, which is why it's still the universal standard after 20+ years:
```
1
00:00:00,000 --> 00:00:03,500
Hello, this is the first subtitle

2
00:00:03,500 --> 00:00:07,200
And here's the second one
```
Each entry has an index, a time range in HH:MM:SS,mmm format, the text itself, and a blank line separator.
Our formatTime function converts raw seconds into that format:
```typescript
const formatTime = (seconds: number): string => {
  const hrs = Math.floor(seconds / 3600);
  const mins = Math.floor((seconds % 3600) / 60);
  const secs = Math.floor(seconds % 60);
  const ms = Math.floor((seconds % 1) * 1000);
  return `${hrs.toString().padStart(2, "0")}:${mins.toString().padStart(2, "0")}:${secs.toString().padStart(2, "0")},${ms.toString().padStart(3, "0")}`;
};
```
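Going the other way — from a typed timestamp back to seconds — is handy if the time fields are editable as text. A hypothetical inverse (`parseTime` is an assumed name, not part of the snippet above):

```typescript
// Hypothetical counterpart to formatTime: "HH:MM:SS,mmm" -> seconds.
const parseTime = (value: string): number => {
  const match = /^(\d{2}):(\d{2}):(\d{2}),(\d{3})$/.exec(value);
  if (!match) throw new Error(`Invalid SRT timestamp: ${value}`);
  const [, hrs, mins, secs, ms] = match;
  return Number(hrs) * 3600 + Number(mins) * 60 + Number(secs) + Number(ms) / 1000;
};
```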
And generateSRT stitches the entries together:
```typescript
const generateSRT = (): string => {
  return subtitles.map((sub, index) => {
    return `${index + 1}\n${formatTime(sub.startTime)} --> ${formatTime(sub.endTime)}\n${sub.text}\n`;
  }).join("\n");
};
```
One subtle UX decision: when you add a new subtitle, it automatically starts at the end of the previous one. No one wants to manually type 00:00:15.000 for the fifth time.
```typescript
const addSubtitle = () => {
  const lastSub = subtitles[subtitles.length - 1];
  const newStart = lastSub ? lastSub.endTime : 0;
  setSubtitles([...subtitles, {
    id: Date.now().toString(),
    startTime: newStart,
    endTime: newStart + 3,
    text: ""
  }]);
};
```
Burning the Subtitles with FFmpeg
This is where the magic happens. FFmpeg has a `subtitles` video filter that can read an SRT file and burn the text directly into the video frames. Combined with `force_style`, we get full control over how it looks.
```typescript
const processVideo = async () => {
  if (!selectedFile || !ffmpegRef.current) return;

  const validSubtitles = subtitles.filter(s => s.text.trim() !== "");
  if (validSubtitles.length === 0) {
    setError("Please add at least one subtitle with text");
    return;
  }

  setIsProcessing(true);
  setSelectedFile(prev => prev ? { ...prev, processing: true, progress: 0 } : null);

  try {
    const ffmpeg = ffmpegRef.current;
    const inputName = "input.mp4";
    const srtName = "subtitles.srt";
    const outputName = "output.mp4";

    // Write the source video and generated SRT into the virtual filesystem.
    const fileArrayBuffer = await selectedFile.file.arrayBuffer();
    await ffmpeg.writeFile(inputName, new Uint8Array(fileArrayBuffer));

    const srtContent = generateSRT();
    await ffmpeg.writeFile(srtName, new TextEncoder().encode(srtContent));

    await ffmpeg.exec([
      "-i", inputName,
      "-vf", `subtitles=${srtName}:force_style='FontSize=${settings.fontSize},PrimaryColour=${settings.fontColor === "yellow" ? "&H0000FFFF" : "&H00FFFFFF"},OutlineColour=&H00000000,Outline=1,Shadow=0,MarginV=20,Alignment=${settings.position === "top" ? "6" : "2"}'`,
      "-c:a", "copy",
      "-y",
      outputName
    ]);

    const data = await ffmpeg.readFile(outputName);
    // readFile returns a Uint8Array for binary files; copy it into a fresh
    // ArrayBuffer so the blob doesn't keep a view into WASM memory alive.
    const uint8Data = data instanceof Uint8Array ? data : new TextEncoder().encode(data);
    const buffer = new ArrayBuffer(uint8Data.length);
    new Uint8Array(buffer).set(uint8Data);
    const blob = new Blob([buffer], { type: "video/mp4" });
    const outputUrl = URL.createObjectURL(blob);

    const baseName = selectedFile.file.name.replace(/\.[^/.]+$/, "");
    setSelectedFile(prev => prev ? {
      ...prev,
      outputUrl,
      outputFileName: `${baseName}_with_subtitles.mp4`,
      processing: false,
      progress: 100,
    } : null);

    await ffmpeg.deleteFile(inputName);
    await ffmpeg.deleteFile(srtName);
    await ffmpeg.deleteFile(outputName);
  } catch (err) {
    console.error(err);
    setError("Failed to add subtitles to video");
    setSelectedFile(prev => prev ? { ...prev, error: "Processing failed", processing: false } : null);
  } finally {
    setIsProcessing(false);
  }
};
```
Breaking down that FFmpeg command:

- `-i input.mp4` — the source video
- `-vf subtitles=subtitles.srt:force_style=...` — the subtitle filter with custom styling
- `-c:a copy` — copy the audio stream as-is, no re-encoding
- `-y` — overwrite output if it exists
The `force_style` parameters deserve a closer look:

| Parameter | What it does |
|---|---|
| `FontSize` | Self-explanatory — text size in pixels |
| `PrimaryColour` | Text color in BGR hex (`&H00FFFFFF` = white) |
| `OutlineColour` | Outline color (`&H00000000` = black) |
| `Outline` | Outline thickness — 1 gives a nice crisp edge |
| `Shadow` | Drop shadow depth — we keep it at 0 for cleanliness |
| `MarginV` | Vertical margin from the edge |
| `Alignment` | 2 = bottom center, 6 = top center |
Using `-c:a copy` is a deliberate optimization. Re-encoding audio is slow and unnecessary for subtitle burning. By copying the audio stream, we save CPU cycles and preserve the original audio quality.
The Styling Controls
Users get three knobs to turn:
Font size (12px to 48px): A slider that lets you balance readability with not-blocking-half-the-video.
Color (white or yellow): White is the safe default. Yellow stands out better on bright scenes.
Position (top or bottom): Bottom is standard for subtitles. Top is useful if you need to avoid existing on-screen text or lower-third graphics.
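ASS colors read as `&HAABBGGRR` hex (alpha, then blue, green, red), so the two color choices map to `force_style` values like this — `colorToAss` is a hypothetical helper name for illustration:

```typescript
// Maps the UI's color choice to a libass BGR hex string (&HAABBGGRR).
// White is RGB FFFFFF -> BGR FFFFFF; yellow is RGB FFFF00 -> BGR 00FFFF.
const colorToAss = (color: "white" | "yellow"): string =>
  color === "yellow" ? "&H0000FFFF" : "&H00FFFFFF";
```

The result slots straight into the `PrimaryColour=` field of the filter string.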
We also track bgOpacity in the settings state, though the current implementation leans on FFmpeg's built-in outline for readability rather than a background box:
```typescript
const [settings, setSettings] = useState({
  fontSize: 24,
  fontColor: "white",
  position: "bottom",
  bgOpacity: 0.5,
});
```
The bgOpacity is there as a foundation — if we ever want to add a semi-transparent background behind the text, the state is ready.
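If we do wire it up, one plausible route (a sketch, not the current implementation) is libass's box border style: `BorderStyle=3` draws a filled box behind the text, and the leading alpha byte of `BackColour` runs from `00` (opaque) to `FF` (fully transparent). `bgStyle` is an assumed helper name:

```typescript
// Sketch: turn a 0..1 opacity into an ASS force_style fragment.
// ASS alpha is inverted relative to CSS opacity, so we flip it.
const bgStyle = (opacity: number): string => {
  const alpha = Math.round((1 - opacity) * 255)
    .toString(16)
    .padStart(2, "0")
    .toUpperCase();
  return `BorderStyle=3,BackColour=&H${alpha}000000`;
};
```

The returned fragment could simply be appended to the existing `force_style` string.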
Progress Tracking
Nobody likes staring at a frozen spinner. FFmpeg.wasm emits progress events during processing:
```typescript
ffmpeg.on("progress", ({ progress }: { progress: number }) => {
  setSelectedFile(prev => prev ? { ...prev, progress: Math.round(progress * 100) } : null);
});
```
This feeds a progress bar that actually moves, which goes a surprisingly long way toward making the wait feel acceptable.
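One wrinkle worth guarding against: for some inputs ffmpeg.wasm can report progress values outside the 0–1 range, so clamping before display is cheap insurance (a sketch; `toPercent` is an assumed helper name):

```typescript
// Sketch: clamp a possibly out-of-range progress value to 0..100.
const toPercent = (progress: number): number =>
  Math.min(100, Math.max(0, Math.round(progress * 100)));
```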
Cleanup Is Not Optional
Object URLs and virtual filesystem entries both leak memory if you forget about them. We clean up on two occasions:
After processing:
```typescript
await ffmpeg.deleteFile(inputName);
await ffmpeg.deleteFile(srtName);
await ffmpeg.deleteFile(outputName);
```
When the user starts over:
```typescript
const reset = useCallback(() => {
  if (selectedFile) {
    URL.revokeObjectURL(selectedFile.previewUrl);
    if (selectedFile.outputUrl) URL.revokeObjectURL(selectedFile.outputUrl);
  }
  setSelectedFile(null);
  setError(null);
  setSubtitles([{ id: "1", startTime: 0, endTime: 3, text: "" }]);
}, [selectedFile]);
```
Without this, repeated use of the tool would eventually exhaust the browser's memory limit.
What We Learned Building This
A few things surprised us along the way:
- SRT is surprisingly forgiving: As long as the index and timecode format are correct, FFmpeg doesn't care if there are extra blank lines or weird indentation.
- The `subtitles` filter is picky about paths: It expects the SRT filename relative to the working directory. Since we're using FFmpeg's virtual filesystem, the filename alone is enough.
- `-c:a copy` saves way more time than you'd think: On a 2-minute test clip, re-encoding audio added 40% to the total processing time. Copying it made the tool feel snappy.
- Users immediately want to adjust timing after seeing the preview: We originally had a single "process" button with no preview. Adding the video player before processing turned out to be essential — people need to see where their subtitles land.
Try It Out
If you've got a video that needs subtitles, you can burn them in right now. No upload, no account, no waiting in a queue.
Drag your video in, type your subtitles, pick your style, and download the result. Everything happens on your machine — your video is yours from start to finish.
