A signup form POSTing to Google Sheets is the most common "backend" on the indie web.
It works for your landing page demo. It works when you test it with 5 friends. It works right up until the moment you don't want it to fail.
Here's the part nobody tells you: Google's own values.append endpoint silently drops rows under concurrent writes. Two simultaneous POSTs can resolve to the same target row and one of them gets overwritten. No error in your logs. No error in your client. Just rows that silently didn't land.
Every "Sheets as a backend" wrapper you've heard of — SheetDB, Sheety, SheetBest, NoCodeAPI — forwards your request straight to values.append. They inherit the bug.
I spent the last few weeks building a fix. It's called SheetForge, it's MIT-licensed, and this post is about the actual bug and why the fix isn't as simple as "throw a mutex on it."
The bug, reproduced in four calls
// four concurrent appends; spreadsheetId, range, and valueInputOption elided for brevity
await Promise.all([
  sheets.spreadsheets.values.append({ ...rowA }),
  sheets.spreadsheets.values.append({ ...rowB }),
  sheets.spreadsheets.values.append({ ...rowC }),
  sheets.spreadsheets.values.append({ ...rowD }),
]);
Four POSTs. You expect four rows. Under load you often get three. Sometimes two.
The reason is inside values.append. The operation reads the current last row, then writes to the position after it. When two calls race, they can read the same "current last row" and write to the same target cell range. One value wins. The other silently disappears.
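In pseudocode, the race looks something like this. This is a sketch of the mechanism with made-up helper names (getLastDataRow, writeCells), not Google's actual internals:

// Hypothetical helpers standing in for the real read/write RPCs.
declare function getLastDataRow(sheetId: string): Promise<number>
declare function writeCells(sheetId: string, range: string, values: string[][]): Promise<void>

// Conceptual sketch of the read-then-write race inside append.
async function conceptualAppend(sheetId: string, row: string[]) {
  const lastRow = await getLastDataRow(sheetId)       // writers A and B both read 10
  await writeCells(sheetId, `A${lastRow + 1}`, [row]) // both target row 11
  // The second write overwrites the first. Neither caller sees an error.
}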
Google has documented this. The official workaround is: "Don't write concurrently." That's a fine rule when your launch gets 3 signups. It's actively destructive when you hit HN's front page.
Why your form loses a third of its rows during a traffic spike
Your form looks like this:
// /api/signup (Next.js route handler)
export async function POST(req: Request) {
  const { email } = await req.json()
  await sheets.spreadsheets.values.append({
    spreadsheetId: SHEET_ID,
    range: 'Sheet1!A:B',
    valueInputOption: 'RAW',
    requestBody: { values: [[email, new Date().toISOString()]] },
  })
  return Response.json({ ok: true })
}
This is deployed on Vercel. Each request gets its own isolated serverless invocation. They run truly in parallel. There is zero coordination between them.
Now Product Hunt puts you on the daily leaderboard. Forty people land on your page in the same ten seconds. Twelve of them hit submit at almost exactly the same moment.
Twelve concurrent POSTs to values.append.
You won't end up with twelve rows. You'll end up with something closer to eight, maybe nine. The exact number depends on how Google's backend happens to interleave the writes, and it doesn't interleave them deterministically. That's the whole problem. The lost rows produce no error. The users who submitted them see a green checkmark. Their email is gone.
This isn't theoretical. Levels.fyi wrote a long engineering post about running their entire site on Sheets until exactly this class of problem forced them to migrate.
Why a mutex doesn't fix it
The naive fix is obvious: put a lock in front of the Sheets API call. One write at a time per sheet. Problem solved.
In practice, this is where it breaks (the naive lock is sketched after the list):
1. Your API runs on serverless. A lock in Node process memory doesn't work when requests are spread across 12 cold starts. You need a distributed lock.
2. A distributed lock has its own bugs. Redis SETNX with a TTL is the standard answer. But the classic failure mode: process A acquires the lock, gets paused by GC, TTL expires, process B grabs the lock, process A wakes up and releases "its" lock — which is now B's lock. Now two processes think they hold the lock. You're back to silent data loss.
3. Retries break everything. Your client retries a POST that timed out, but the first attempt actually landed. The lock-protected write succeeds twice, and now you have duplicate rows. The fix for dropped rows has created duplicated ones.
4. Crashes strand the lock. If your process dies while holding the lock, the next writer has to wait for the TTL to expire. For TTLs long enough to be safe (30+ seconds), that stalls your whole write throughput.
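For completeness, here is the naive lock, sketched with ioredis. Don't ship this; each comment points at one of the failure modes above:

import Redis from 'ioredis'

const redis = new Redis()

// The naive fix: SET NX + TTL around every Sheets write.
async function withSheetLock<T>(sheetId: string, fn: () => Promise<T>): Promise<T> {
  const key = `sheet-lock:${sheetId}`
  // Spin until we win the lock; 30s TTL "to be safe" (failure mode 4).
  while ((await redis.set(key, '1', 'EX', 30, 'NX')) !== 'OK') {
    await new Promise(resolve => setTimeout(resolve, 50))
  }
  try {
    // If a pause here outlives the TTL, a second process acquires
    // the lock and we're racing again (failure mode 2).
    return await fn()
  } finally {
    await redis.del(key) // may delete that other process's lock (failure mode 2)
  }
}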
Every one of these is a real bug I hit while prototyping. The fix isn't a mutex. The fix is a proper queue.
How SheetForge actually fixes it
The architecture, in one sentence: every write goes into a per-sheet queue, one worker per sheet pulls from that queue inside a Postgres transaction, and an idempotency key dedupes retries.
Here's the flow:
submitWrite(row)
  └─ INSERT INTO write_ledger (sheet_id, idempotency_key, payload)
       (partial unique index dedupes retries before we even touch Sheets)
  └─ XADD to Redis Stream for this sheet
  └─ return { writeId, status: 'pending' }

processNext()   ← runs in a loop on the worker
  └─ XREADGROUP from the sheet's stream
  └─ BEGIN transaction
  └─ SELECT pg_advisory_xact_lock(hashtextextended(streamKey, 0))
  └─ call sheets.values.append(payload)
  └─ UPDATE write_ledger SET status = 'committed'
  └─ COMMIT
  └─ XACK only after commit
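In code, the submit path might look something like this. A sketch assuming node-postgres and ioredis; the table, columns, and stream naming follow the flow above, not SheetForge's exact source:

import { Pool } from 'pg'
import Redis from 'ioredis'

const pool = new Pool()
const redis = new Redis()

// Sketch of submitWrite: durable ledger insert first, then enqueue.
async function submitWrite(sheetId: string, idempotencyKey: string, payload: unknown) {
  const { rows } = await pool.query(
    `INSERT INTO write_ledger (sheet_id, idempotency_key, payload)
     VALUES ($1, $2, $3)
     ON CONFLICT DO NOTHING  -- the partial unique index rejects retries
     RETURNING id`,
    [sheetId, idempotencyKey, JSON.stringify(payload)],
  )
  if (rows.length === 0) {
    // Duplicate idempotency key: the original write already exists.
    return { status: 'duplicate' as const }
  }
  await redis.xadd(`sheet:${sheetId}`, '*', 'writeId', String(rows[0].id))
  return { writeId: rows[0].id, status: 'pending' as const }
}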
Four things matter here:
The advisory lock is the fence. pg_advisory_xact_lock acquires a lock inside the transaction. If the transaction commits or aborts, the lock is released automatically by Postgres. There is no TTL. No lease clock. No split-brain. If your process dies mid-handler, the transaction rolls back, the lock releases, and the message redelivers.
The idempotency key is the deduper. Every write comes in with an Idempotency-Key header. A partial unique index on (sheet_id, idempotency_key) WHERE status IN ('pending', 'committed') means the database rejects duplicates before the worker even sees them. Retry the same request 100 times, you get 1 row.
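Spelled out as DDL, that index looks something like this (a sketch; column names as described above):

-- Dedupe at the database layer: at most one live write per
-- (sheet, idempotency key). Rows in other statuses fall outside the
-- index, so a genuine retry after a hard failure is allowed back in.
CREATE UNIQUE INDEX write_ledger_dedupe_idx
  ON write_ledger (sheet_id, idempotency_key)
  WHERE status IN ('pending', 'committed');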
The ledger is the truth. The write is durable in Postgres the moment you get the writeId back. Sheets is downstream. If Google's API is down, your writes queue up and flush when it recovers. Your users see a green checkmark and it means something.
XACK happens post-commit. Redis Streams' PEL (pending entries list) redelivers messages if they're not acked. If the worker crashes mid-transaction, Postgres rolls back, Redis redelivers, the idempotency key catches the replay. Exactly-once semantics, for real.
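Put together, the worker's critical section is small. A sketch, again assuming node-postgres and ioredis; the Sheets call and the consumer-group name ('workers') are stand-ins, not SheetForge's exact code:

import { Pool } from 'pg'
import Redis from 'ioredis'

const pool = new Pool()
const redis = new Redis()

// Hypothetical wrapper around sheets.values.append for the ledger payload.
declare function appendToSheet(payload: unknown): Promise<void>

// Sketch of the worker's critical section for one stream message.
async function handleMessage(streamKey: string, messageId: string, writeId: string, payload: unknown) {
  const client = await pool.connect()
  try {
    await client.query('BEGIN')
    // Transaction-scoped advisory lock. Postgres releases it on COMMIT,
    // ROLLBACK, or connection death: no TTL, no lease clock, no split-brain.
    await client.query('SELECT pg_advisory_xact_lock(hashtextextended($1, 0))', [streamKey])
    await appendToSheet(payload)
    await client.query(`UPDATE write_ledger SET status = 'committed' WHERE id = $1`, [writeId])
    await client.query('COMMIT')
    await redis.xack(streamKey, 'workers', messageId) // ack only after commit
  } catch (err) {
    await client.query('ROLLBACK') // lock released; the PEL redelivers the message
    throw err
  } finally {
    client.release()
  }
}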
The concurrency test
Every change to the write-queue slice requires a concurrency test. This is the one I do not break:
it('50 parallel writes land 50 rows in order', async () => {
  const sheet = await createTestSheet()
  const writes = Array.from({ length: 50 }, (_, i) => ({
    email: `user-${i}@test.com`,
    idempotencyKey: `key-${i}`,
  }))

  await Promise.all(
    writes.map(w =>
      sheetforge.rows.insert({ email: w.email }, { idempotencyKey: w.idempotencyKey }),
    ),
  )

  await waitForQueueDrain(sheet.id)

  const rows = await readSheet(sheet.id)
  expect(rows).toHaveLength(50)
  expect(rows.map(r => r.email)).toEqual(writes.map(w => w.email))
})
50 parallel POSTs. 50 rows. In order. Retry-safe.
This same test against raw values.append reliably fails. It fails under SheetDB, Sheety, and SheetBest too — I checked.
The typed SDK is the bonus
Once you have a proper queue, the rest gets interesting. Since SheetForge knows your sheet's header row, it can generate a typed TypeScript client with literal union types inferred from your sample cells.
Header row: email | plan | created_at
Sample cells: hi@example.com | free | 2026-04-15
Generated SDK:
import { createClient } from './sheetforge-client'

const sheet = createClient({
  apiKey: process.env.SHEETFORGE_API_KEY!,
  sheetId: 'sht_abc123',
})

await sheet.rows.insert(
  {
    email: 'hi@example.com',
    plan: 'free', // 'free' | 'pro' — inferred from sample cells
    created_at: new Date().toISOString(),
  },
  { idempotencyKey: crypto.randomUUID() },
)
The compiler catches header drift. Rename a column in the sheet and regenerate the client — TypeScript tells you every call site that needs updating.
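For illustration, the generated row type for the sheet above might look something like this (shape assumed, not SheetForge's exact output):

// Assumed shape of the generated row type for the sheet above.
type SignupRow = {
  email: string
  plan: 'free' | 'pro' // literal union inferred from sample cells
  created_at: string   // ISO 8601 timestamp
}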
Getting started
One-click hosted:
https://getsheetforge.vercel.app
Sign in with Google. Connect a sheet. Copy your API key. Done.
Self-host:
git clone https://github.com/Devansh-365/sheetforge.git
cd sheetforge
pnpm install
cp .env.example .env # Google OAuth + DATABASE_URL + Redis
pnpm db:push
pnpm dev # web :3000, api :3001
Prereqs: Node 20+, pnpm 9+, Postgres 14+, Redis 6+ (or Upstash REST).
The OSS core (packages/queue, packages/codegen, packages/sdk-ts) is MIT and stays free forever. The hosted SaaS runs the same code.
What SheetForge is not
It is not a Postgres replacement. If you need complex queries, indices, or relational integrity, use a real database.
It is not a high-throughput pipe. Google caps you at ~60 writes/minute per sheet regardless of what sits in front. SheetForge makes sure those writes land; it doesn't make them faster.
It is not a reason to keep Sheets as your backend forever. It's the right tool for landing pages, waitlists, internal forms, ops tools, and MVPs where you need rows to actually land. If your app outgrows Sheets, it outgrows SheetForge too.
The one takeaway
Every Sheets-as-backend wrapper you've been using has a silent data-loss bug that only shows up when traffic spikes. That's exactly the moment you most care about losing rows.
If you've ever shipped a form on Sheets and watched rows vanish mid-launch, give SheetForge a try. If it saves you one bug, star the repo.
GitHub: github.com/Devansh-365/sheetforge
Hosted: getsheetforge.vercel.app
License: MIT