We shipped a file upload feature for ClipCrafter — paste a video URL or upload a file — and it looked like everything was working. The S3-compatible PUT request to Cloudflare R2 returned a 200. No errors in the console. No red Vercel logs.
Yet every uploaded video sat in limbo, never processed. Here's the bug, the fix, and what we learned.
## The Architecture in 30 Seconds
ClipCrafter processes videos using Inngest background functions. The flow looks like this:
- User uploads a file → we generate a presigned R2 URL via an `/api/upload` route
- Client does a direct `PUT` to that presigned URL (R2 / Cloudflare)
- Client calls `/api/projects/[id]` to update the project with metadata
- A separate action triggers `inngest.send({ name: "video/process", data: { projectId } })`
- Inngest picks up the event and runs the `process-video` function
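The one piece of state tying these steps together is the R2 object key, so it pays to derive it deterministically in a single shared helper. A minimal sketch — the helper name and sanitization are our convention, not anything R2 requires:

```ts
// Hypothetical helper: derive the R2 object key for an upload.
// Keeping this in one shared module means the client-side PUT, the
// PATCH payload, and the Inngest job all agree on the same key.
function buildR2Key(projectId: string, fileName: string): string {
  // Strip path separators so a crafted filename can't escape the prefix
  const safeName = fileName.replace(/[\/\\]/g, "_");
  return `uploads/${projectId}/${safeName}`;
}
```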
Simple enough. So what broke?
## The Bug: A Missing PATCH
After a successful `PUT` to R2, our frontend was calling the project `PATCH` route — but only to save the video title. We never sent the `r2_key` back.
```ts
// ❌ BEFORE — only patching the title
await fetch(`/api/projects/${projectId}`, {
  method: "PATCH",
  body: JSON.stringify({ title: fileName }),
});
```
Meanwhile, in our Inngest `process-video` function:
```ts
const project = await supabase
  .from("projects")
  .select("r2_key, source_url")
  .eq("id", event.data.projectId)
  .single();

if (!project.data?.r2_key && !project.data?.source_url) {
  throw new Error("No video source found"); // 💥 always hit for uploads
}
```
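One way to make that failure mode diagnosable is to resolve the source in a small pure helper that names exactly what's missing instead of throwing a generic error. A sketch, assuming our project row shape — the helper and its return shape are our own, not an Inngest API:

```ts
interface ProjectSource {
  r2_key: string | null;
  source_url: string | null;
}

// Hypothetical helper: pick the video source, or explain why there isn't one.
function resolveVideoSource(project: ProjectSource | null):
  | { kind: "upload"; key: string }
  | { kind: "url"; url: string }
  | { kind: "missing"; reason: string } {
  if (!project) return { kind: "missing", reason: "project row not found" };
  if (project.r2_key) return { kind: "upload", key: project.r2_key };
  if (project.source_url) return { kind: "url", url: project.source_url };
  return {
    kind: "missing",
    reason: "project has neither r2_key nor source_url; did the post-upload PATCH run?",
  };
}
```

The `"missing"` branch's `reason` ends up in the job's error message, which would have pointed us straight at the absent `PATCH`.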
The job failed on the very first step, every single time. And because Inngest retries silently in the background, nothing surfaced as an obvious user-facing error — we just never saw the video get processed.
## The Fix: Extend the PATCH Route and Send the Key
Two small changes:
1. Extend `/api/projects/[id]/route.ts` to accept `r2_key`:
```ts
// ✅ AFTER
const { title, r2_key } = await req.json();

const updatePayload: Record<string, string> = {};
if (title) updatePayload.title = title;
if (r2_key) updatePayload.r2_key = r2_key;

await supabase.from("projects").update(updatePayload).eq("id", projectId);
```
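Since both fields are now optional, the route can receive a body with neither — and an empty update object is at best a no-op. Extracting the payload-building into a pure function makes that case explicit and easy to test; the function name and early-return convention here are our own sketch, not the actual route code:

```ts
// Hypothetical validator: build the update payload from an untrusted body,
// or return null when there is nothing to update (so the route can 400 early).
function buildUpdatePayload(body: { title?: unknown; r2_key?: unknown }):
  Record<string, string> | null {
  const payload: Record<string, string> = {};
  if (typeof body.title === "string" && body.title) payload.title = body.title;
  if (typeof body.r2_key === "string" && body.r2_key) payload.r2_key = body.r2_key;
  return Object.keys(payload).length > 0 ? payload : null;
}
```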
2. Send `r2_key` from the client after the `PUT` succeeds:
```ts
// ✅ AFTER — send both title and r2_key
const r2Key = `uploads/${projectId}/${fileName}`;

// Direct PUT to presigned URL
await fetch(presignedUrl, {
  method: "PUT",
  body: file,
  headers: { "Content-Type": file.type },
});

// Save the key back to the project row
await fetch(`/api/projects/${projectId}`, {
  method: "PATCH",
  body: JSON.stringify({ title: fileName, r2_key: r2Key }),
});
```
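One subtlety worth guarding against: `fetch` resolves (does not throw) on HTTP 4xx/5xx, so a 403 from an expired presigned URL would still fall through to the `PATCH` and record a key for a file that never landed in R2. A tiny guard between the two calls closes that gap — the helper is our own sketch:

```ts
// Hypothetical guard: fetch() resolves on 4xx/5xx, so check `ok` explicitly
// before recording the key, or we'd mark an upload complete that never landed.
function assertUploadOk(res: { ok: boolean; status: number }): void {
  if (!res.ok) {
    throw new Error(`R2 upload failed with status ${res.status}`);
  }
}
```

Usage: `const res = await fetch(presignedUrl, ...); assertUploadOk(res);` before the `PATCH`.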
That's it. One missing field. One extra line in the payload. Everything downstream just works.
## The Lesson: Presigned URLs Create Invisible Gaps
With direct-upload patterns (presigned S3/R2 URLs), your backend is deliberately out of the loop during the actual transfer. That's the point — it offloads bandwidth from your API server. But it also means:
- Your API never sees the bytes
- Your API never confirms the upload succeeded
- Your API's database row stays stale until you explicitly update it
If any step between "PUT succeeds" and "database updated" fails silently, you end up with orphaned uploads. The file is in R2. The DB says it isn't. Downstream jobs trust the DB.
A defensive pattern we're now adopting: after the presigned `PUT`, always `PATCH` back an `upload_completed_at` timestamp alongside the key. If that timestamp is null when the Inngest job runs, skip the run or surface an error with a meaningful message instead of throwing cryptically.
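That gate reduces to one small predicate at the top of the job. A sketch under the same assumptions — the column name `upload_completed_at` and the helper are our proposal, not existing schema:

```ts
interface UploadedProject {
  r2_key: string | null;
  source_url: string | null;
  upload_completed_at: string | null; // ISO timestamp set by the post-PUT PATCH
}

// Hypothetical gate: a direct upload is only processable once both the key
// and the completion timestamp exist; pasted URLs skip the upload handshake.
function isProcessable(p: UploadedProject): boolean {
  if (p.source_url) return true; // URL source: no upload handshake needed
  return Boolean(p.r2_key && p.upload_completed_at);
}
```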
## What Else Landed: Billing Scaffolding (Not Live Yet)
Alongside this fix, we scaffolded the billing layer for ClipCrafter — Stripe + Razorpay support, Supabase subscriptions and usage tables, plan limits enforced inside the Inngest pipeline itself:
```ts
import { NonRetriableError } from "inngest";

// Inside the process-video Inngest function
await step.run("check-usage", async () => {
  const allowed = await isUsageAllowed(projectOwnerId);
  if (!allowed) throw new NonRetriableError("Usage limit reached");
});

// ... process video ...

await step.run("increment-usage", async () => {
  await incrementUsage(projectOwnerId);
});
```
Putting the usage gate inside Inngest rather than at the API layer means even background retries respect plan limits. More on the full billing design in a future post.
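The core of that `isUsageAllowed` check can be a pure comparison of a usage counter against the plan's cap, which is what makes it cheap to re-run on every retry. A sketch — the plan names and limits here are illustrative, not ClipCrafter's real pricing:

```ts
// Illustrative plan limits, not real pricing tiers.
const PLAN_LIMITS: Record<string, number> = {
  free: 3,
  pro: 100,
};

// Hypothetical pure core of isUsageAllowed: compare current usage to the cap.
function usageAllowed(plan: string, clipsThisMonth: number): boolean {
  const limit = PLAN_LIMITS[plan] ?? 0; // unknown plan: deny by default
  return clipsThisMonth < limit;
}
```

Keeping this pure means the billing tables supply two numbers and the gate itself never touches the network.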
If you're building a video tool and want to see this pipeline in action, give ClipCrafter a try — paste a YouTube link or upload a file and get shareable clips in minutes.
Have you been burned by a presigned URL silent-failure bug? Drop a comment — I'd love to compare notes.