One of the sneakier bugs you can hit with Inngest step functions: your job works perfectly on the first run, but fails in a confusing way on retry. The culprit is almost always stale event payload data.
The Problem
When you trigger an Inngest function, the event payload is snapshotted at that moment:
```typescript
await inngest.send({
  name: "video/process",
  data: {
    projectId: "abc123",
    r2Key: job.r2Key, // ← this might be null or truncated at trigger time
  },
});
```
Now, inside your function, if you read `event.data.r2Key` and it was null at trigger time, every retry will see null, forever, no matter what happens to the DB record afterwards.
```typescript
inngest.createFunction(
  { id: "process-video" },
  { event: "video/process" },
  async ({ event, step }) => {
    // ❌ This reads the value from TRIGGER TIME, not retry time
    const r2Key = event.data.r2Key;

    await step.run("transcribe", async () => {
      // If r2Key was null at trigger → this fails every retry
      const transcript = await transcribe(r2Key);
    });
  }
);
```
Why It Happens
Inngest is designed for idempotency. The event payload is immutable — same data every run. That's a feature, not a bug: it ensures deterministic replay. But it means you can't rely on the payload carrying data that might have been set after the trigger.
Common scenarios where this bites:
- Upload finishes async, then triggers processing — but the R2 key is written to DB after the event was sent
- A separate job updates a record, then triggers a dependent job — the dependent job reads stale payload data
- String fields get truncated in the payload (e.g., a long URL or key string) but the full value is in the DB
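The first scenario is easy to reproduce in isolation. Here is a minimal sketch of the race, using a hypothetical in-memory `Map` as a stand-in for your real database (the record name and key values are made up for illustration):

```typescript
// Hypothetical in-memory stand-in for the real database.
const db = new Map<string, { r2Key: string | null }>();

// 1. The project row exists before the upload has finished.
db.set("abc123", { r2Key: null });

// 2. The event is sent — its payload is snapshotted *right now*.
const event = {
  name: "video/process",
  data: { projectId: "abc123", r2Key: db.get("abc123")!.r2Key },
};

// 3. The upload completes and the DB row is updated *after* the send.
db.set("abc123", { r2Key: "uploads/abc123.mp4" });

// Every run, including every retry, replays the same immutable payload:
const staleKey = event.data.r2Key; // still null
// ...while a fresh read sees the value that was written later:
const freshKey = db.get(event.data.projectId)!.r2Key; // "uploads/abc123.mp4"
```

The payload froze the moment `send` was called; only a fresh read can observe the later write.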
The Fix: Always Read Fresh from Your Source of Truth
The fix is straightforward: re-fetch from the database inside your step, not from the event payload.
```typescript
inngest.createFunction(
  { id: "process-video" },
  { event: "video/process" },
  async ({ event, step }) => {
    const { projectId } = event.data; // ✅ IDs are safe — they don't change

    await step.run("transcribe", async () => {
      // ✅ Read fresh data at step execution time
      const project = await db
        .from("projects")
        .select("r2_key")
        .eq("id", projectId)
        .single();

      const transcript = await transcribe(project.r2_key);
    });
  }
);
```
Pass only stable identifiers in the event payload (IDs, enum values, config flags). For anything that might change or be set asynchronously — read it fresh inside the step.
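One way to enforce this discipline is at the type level. The sketch below is a hypothetical event schema (the `Events` type and field names are illustrative, not an Inngest API) whose payloads carry only stable identifiers and flags:

```typescript
// Hypothetical event map: payloads carry only stable identifiers
// and routing flags, never mutable state.
type Events = {
  "video/process": {
    data: {
      projectId: string;           // stable ID: safe to snapshot
      priority?: "fast" | "batch"; // config flag: safe to snapshot
      // no r2Key, no URLs, no derived state: read those fresh in a step
    };
  };
};

// The compiler now rejects payloads that try to smuggle in mutable fields.
const payload: Events["video/process"]["data"] = {
  projectId: "abc123",
  priority: "fast",
};
```

If someone later adds `r2Key` to a `send` call, the type error surfaces the mistake at compile time instead of on the first retry.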
Bonus: This Makes Your Functions More Resilient
Reading fresh data inside steps gives you two more benefits:
1. Your function handles late data naturally. If the R2 key isn't set yet when the step first runs, you can retry/sleep until it is:
```typescript
await step.run("wait-for-upload", async () => {
  const project = await db
    .from("projects")
    .select("r2_key")
    .eq("id", projectId)
    .single();

  if (!project.r2_key) throw new Error("r2_key not ready yet"); // Inngest will retry
  return project.r2_key;
});
```
2. Retries pick up DB fixes. If something went wrong and you fix the data in the DB, a retry will pick up the corrected value — instead of replaying the same broken payload.
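Point 2 is worth seeing concretely. This sketch simulates a retrying step with a plain function and a hypothetical in-memory `Map` as the database; the retry succeeds purely because the step reads fresh data, with no stale payload involved:

```typescript
// Hypothetical stand-in for the database, starting in the broken state.
const db = new Map<string, { r2Key: string | null }>([
  ["abc123", { r2Key: null }],
]);

// The step body: always reads fresh, throws when the data isn't ready.
const runStep = (projectId: string): string => {
  const row = db.get(projectId);
  if (!row?.r2Key) throw new Error("r2_key not ready yet"); // queue retries
  return row.r2Key;
};

// Attempt 1: the key is missing, so the step throws and would be retried.
let firstAttemptFailed = false;
try {
  runStep("abc123");
} catch {
  firstAttemptFailed = true;
}

// Meanwhile, someone fixes the data in the DB...
db.set("abc123", { r2Key: "uploads/abc123.mp4" });

// Attempt 2 reads fresh and succeeds.
const result = runStep("abc123");
```

Had the step read the key from an event payload snapshotted before the fix, every attempt would have failed identically.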
The Rule of Thumb
Pass IDs in event payloads. Read everything else from the DB.
Event payloads are for routing and identification — not for carrying the full state of your data at a point in time. Treat them like a notification ("hey, process project abc123"), not a data transfer object.
This pattern works across all job queue systems, not just Inngest — the same principle applies to BullMQ, Temporal, Trigger.dev, etc.
Have you hit this bug before? What other Inngest footguns have you run into? Share in the comments.