When you are building a video editing SaaS, one of the trickiest UX problems is keeping users informed about long-running server-side tasks. Transcoding a clip, generating thumbnails, running AI-based scene detection — these jobs can take anywhere from a few seconds to several minutes. Early on at ClipCrafter, we leaned on the simplest approach: polling.
The Polling Problem
Our first implementation looked something like this:
// hooks/useJobStatus.ts — the naive version
import { useState, useEffect } from "react";

// JobStatus is defined elsewhere in the app,
// e.g. "pending" | "processing" | "complete" | "failed"

export function useJobStatus(jobId: string) {
  const [status, setStatus] = useState<JobStatus>("pending");

  useEffect(() => {
    const interval = setInterval(async () => {
      const res = await fetch(`/api/jobs/${jobId}`);
      const data = await res.json();
      setStatus(data.status);
      // Stop polling once the job reaches a terminal state
      if (data.status === "complete" || data.status === "failed") {
        clearInterval(interval);
      }
    }, 2000);
    return () => clearInterval(interval);
  }, [jobId]);

  return status;
}
This worked, but it came with real costs. Every two seconds, every user with an active job was hammering our API. During peak hours we were seeing thousands of unnecessary requests per minute. Our database was fielding repetitive SELECT queries for jobs that had not changed since the last poll. Latency was also mediocre — users could wait up to two seconds after a job finished before the UI updated.
Why We Chose Server-Sent Events Over WebSockets
We evaluated two alternatives: WebSockets and Server-Sent Events (SSE). WebSockets are powerful, but they introduce bidirectional complexity we did not need. Our updates flow in one direction: server to client. SSE gave us exactly that with a simpler protocol, automatic reconnection built into the browser EventSource API, and straightforward compatibility with our Next.js API routes.
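For reference, SSE is just a long-lived plain-text HTTP response: each event is a `data:` line followed by a blank line, and the browser's EventSource parses them as they arrive. A stream of job updates looks roughly like this on the wire (the payload shape here is illustrative):

```
data: {"status":"processing","progress":40}

data: {"status":"processing","progress":85}

data: {"status":"complete","progress":100}
```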
The SSE Implementation
On the server side, we created a Next.js Route Handler that streams job updates:
// app/api/jobs/[jobId]/stream/route.ts
import { NextRequest } from "next/server";
import { getJobUpdates, onJobUpdate } from "@/lib/job-events";

export async function GET(
  req: NextRequest,
  { params }: { params: { jobId: string } }
) {
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    async start(controller) {
      let closed = false;
      let unsubscribe: (() => void) | undefined;

      // Close exactly once: unsubscribe first, then end the stream.
      const close = () => {
        if (closed) return;
        closed = true;
        unsubscribe?.();
        controller.close();
      };

      const send = (data: Record<string, unknown>) => {
        if (closed) return;
        controller.enqueue(
          encoder.encode(`data: ${JSON.stringify(data)}\n\n`)
        );
      };

      // Send current state immediately
      const current = await getJobUpdates(params.jobId);
      send(current);

      // Subscribe to future updates
      unsubscribe = onJobUpdate(params.jobId, (update) => {
        send(update);
        if (update.status === "complete" || update.status === "failed") {
          close();
        }
      });

      // Clean up when the client disconnects
      req.signal.addEventListener("abort", close);
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
The onJobUpdate function subscribes to a lightweight Redis Pub/Sub channel. When our background worker finishes a processing step, it publishes an event and every connected client receives the update within milliseconds.
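We haven't shown the job-events module itself, so here is a minimal sketch of what it could look like. For simplicity this version uses an in-process EventEmitter; in production the emitter would be fed by a Redis Pub/Sub subscription so updates fan out across server instances. The `JobUpdate` shape and the `publishJobUpdate` name are our assumptions for illustration, not the actual ClipCrafter code.

```typescript
// lib/job-events.ts — hypothetical sketch. In-process only: a real
// multi-instance deployment would bridge this emitter to Redis Pub/Sub.
import { EventEmitter } from "node:events";

export interface JobUpdate {
  status: "pending" | "processing" | "complete" | "failed";
  progress: number;
}

const emitter = new EventEmitter();

// Called by the background worker after each processing step.
export function publishJobUpdate(jobId: string, update: JobUpdate): void {
  emitter.emit(`job:${jobId}`, update);
}

// Called by the SSE route handler; returns an unsubscribe function.
export function onJobUpdate(
  jobId: string,
  handler: (update: JobUpdate) => void
): () => void {
  emitter.on(`job:${jobId}`, handler);
  return () => emitter.off(`job:${jobId}`, handler);
}
```

The unsubscribe-function return value is what lets the route handler clean up on client disconnect without tracking listeners itself.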
On the client side, the hook became cleaner and more responsive:
// hooks/useJobStatus.ts — SSE version
import { useState, useEffect } from "react";

export function useJobStatus(jobId: string) {
  const [status, setStatus] = useState<JobStatus>("pending");
  const [progress, setProgress] = useState(0);

  useEffect(() => {
    const source = new EventSource(`/api/jobs/${jobId}/stream`);

    source.onmessage = (event) => {
      const data = JSON.parse(event.data);
      setStatus(data.status);
      setProgress(data.progress ?? 0);
      if (data.status === "complete" || data.status === "failed") {
        source.close();
      }
    };

    source.onerror = () => {
      // EventSource reconnects automatically on transient errors.
      // We only close on terminal failures.
      console.warn("SSE connection interrupted — retrying");
    };

    return () => source.close();
  }, [jobId]);

  return { status, progress };
}
Results After the Switch
The numbers spoke for themselves. API requests related to job status dropped by over 90 percent. Average time-to-UI-update went from around one second down to under 100 milliseconds. Server CPU utilization on our API tier dropped noticeably during peak processing windows, and users started commenting that processing felt faster even though the actual transcoding time had not changed.
Lessons Learned
A few things we picked up along the way. First, always send the current state as the first event in the stream. If a client connects after a job has already progressed, they should not have to wait for the next update to see where things stand. Second, set a reasonable server-side timeout so you do not hold connections open indefinitely for abandoned tabs. We close idle streams after five minutes. Third, use Redis Pub/Sub or a similar lightweight broker rather than polling the database from inside your SSE handler — otherwise you have just moved the polling problem from client to server.
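The idle-timeout lesson boils down to a resettable timer inside the stream handler: reset it on every event sent, and close the stream if it ever fires. A minimal sketch of such a helper (the name, API, and five-minute default are ours for illustration, not a Next.js or ClipCrafter API):

```typescript
// Hypothetical helper: invokes onIdle if touch() is not called within idleMs.
// In the SSE route, call touch() on every send() and cancel() when the
// stream closes for any other reason (job finished, client disconnected).
function createIdleTimer(onIdle: () => void, idleMs = 5 * 60_000) {
  let timer = setTimeout(onIdle, idleMs);
  return {
    // Reset the countdown — call whenever an event is sent to the client.
    touch() {
      clearTimeout(timer);
      timer = setTimeout(onIdle, idleMs);
    },
    // Stop the timer entirely — call once the stream is closed.
    cancel() {
      clearTimeout(timer);
    },
  };
}
```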
Try It Yourself
If you are building any kind of async processing pipeline in a Next.js app, SSE is a pragmatic middle ground between polling and full WebSocket infrastructure. It took us about a day to migrate and the payoff was immediate.
Want to see this in action? ClipCrafter uses this pattern across all of our video processing workflows. Give it a spin and watch your progress bars update in real time.
Have questions about SSE patterns in Next.js? Drop a comment — happy to dig into edge cases.