Most developers building their first SaaS make the same mistake I almost made — they reach for sessionStorage because it works in the demo, then discover it breaks the moment a real user opens a second tab. This is the post I wish I'd had before Week 5.
The problem with sessionStorage
Resume Tailor's pipeline works like this: upload a PDF, paste a job description, get AI-rewritten bullets, download a tailored resume. In the demo, sessionStorage holds everything together — the parsed resume, the analysis, the rewritten bullets. It works perfectly. Until a user closes the tab by accident. Or opens a second tab and finds it empty. Or opens the app on their phone after signing up on their laptop.

sessionStorage is scoped to a single browser tab. It survives a reload, but it's cleared the moment the tab closes, it's invisible to every other tab, and it never syncs across devices. It's fine for prototyping — it's not a database.

The fix is obvious in hindsight: persist to Postgres, load from the database on every session. But getting there requires a set of decisions that aren't obvious at all.
Why Postgres over MongoDB
The first decision was the database. MongoDB is the default choice for a lot of Node.js developers — flexible schema, JSON-native, easy to get started. For Resume Tailor I chose Postgres, and the reason comes down to one concept: referential integrity.
In MongoDB, if you delete a user, their resume documents don't go anywhere. They sit in the collection, orphaned, pointing at a user ID that no longer exists. You have to remember to clean them up in application code. You will forget.
In Postgres, you enforce the relationship at the schema level:
```typescript
userId: uuid('user_id')
  .notNull()
  .references(() => users.id, { onDelete: 'cascade' }),
```
onDelete: 'cascade' means when a user row is deleted, every resume row that references it is deleted automatically. The database guarantees clean data — not the application code, not a cron job, not a developer remembering to write the right query.
Most developers don't think about what happens to related data when a user deletes their account. That one line is the answer.
The schema
Clerk owns identity. Postgres owns product data. The clerkId field is the bridge between the two systems.
```typescript
import {
  pgTable, uuid, text, integer, timestamp, jsonb, boolean,
} from 'drizzle-orm/pg-core';

export const users = pgTable('users', {
  id: uuid('id').defaultRandom().primaryKey(),
  clerkId: text('clerk_id').notNull().unique(),
  email: text('email').notNull(),
  plan: text('plan').notNull().default('free'),
  usageCount: integer('usage_count').notNull().default(0),
  usageLimit: integer('usage_limit').notNull().default(5),
  createdAt: timestamp('created_at').defaultNow(),
});

export const resumes = pgTable('resumes', {
  id: uuid('id').defaultRandom().primaryKey(),
  userId: uuid('user_id')
    .notNull()
    .references(() => users.id, { onDelete: 'cascade' }),
  jobTitle: text('job_title'),
  jobDescription: text('job_description'),
  originalBullets: jsonb('original_bullets').$type<string[]>(),
  rewrittenBullets: jsonb('rewritten_bullets').$type<string[]>(),
  keywords: jsonb('keywords'),
  pdfGenerated: boolean('pdf_generated').default(false),
  createdAt: timestamp('created_at').defaultNow(),
});
```
jsonb for bullets and keywords — not text with JSON.stringify. jsonb is queryable, indexed, and type-safe with Drizzle's .$type(). The difference matters when you want to query resumes by keyword later.
Usage tracking — why atomic increment matters
Free tier means 5 rewrites. Enforcing that limit sounds simple — read the count, check it, increment it. Here's the implementation most developers write:
```typescript
// Read — select() returns an array, so destructure the first row
const [user] = await db.select().from(users).where(eq(users.clerkId, clerkId));
// Check
if (user.usageCount >= user.usageLimit) throw new Error('Usage limit reached');
// Write
await db
  .update(users)
  .set({ usageCount: sql`${users.usageCount} + 1` })
  .where(eq(users.clerkId, clerkId));
```
This has a race condition. If two requests arrive simultaneously, both read usageCount: 4, both pass the check, both increment. The user ends up at 6 when the limit is 5. I verified this with a Promise.all test — two simultaneous fetch calls to /api/rewrite. Both returned 200. Both incremented.
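The race is easy to reproduce without a database. Here's a minimal in-memory simulation (hypothetical names, not the real route code): two concurrent "requests" both read the counter before either writes, so both pass the check.

```typescript
// In-memory stand-in for the users row.
let usageCount = 4;
const usageLimit = 5;

async function naiveRewrite(): Promise<boolean> {
  const current = usageCount;                        // 1. read
  await new Promise<void>((r) => setTimeout(r, 10)); // simulate DB round-trip latency
  if (current >= usageLimit) return false;           // 2. check (against the stale read)
  usageCount += 1;                                   // 3. write
  return true;
}

// Two "simultaneous" requests: both read 4, both pass the check.
const results = await Promise.all([naiveRewrite(), naiveRewrite()]);
console.log(results, usageCount); // both succeed and the counter lands at 6
```

The 10ms delay stands in for the network hop to Postgres: anything that can interleave between the read and the write makes the check meaningless.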
The fix is a single atomic query:
```typescript
const result = await db
  .update(users)
  .set({ usageCount: sql`${users.usageCount} + 1` })
  .where(
    sql`${users.clerkId} = ${clerkId} AND ${users.usageCount} < ${users.usageLimit}`
  )
  .returning();

if (result.length === 0) throw new Error('Usage limit reached');
```
The WHERE clause does the check and the increment in one shot. If two requests arrive simultaneously, only one matches the condition. The other gets no rows back and throws. No race condition possible.
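The shape of the fix is visible even outside SQL: do the check and the increment in one uninterruptible step and report whether it applied, mirroring how Postgres evaluates the WHERE and the SET within a single statement. A minimal in-memory analog (illustrative names, not the production code):

```typescript
// In-memory analog of UPDATE ... SET usage_count = usage_count + 1
//                     WHERE usage_count < usage_limit RETURNING ...
let usageCount = 4;
const usageLimit = 5;

// Check and increment in one synchronous step: nothing can interleave
// between them, so only one caller can consume the last slot.
function tryConsume(): boolean {
  if (usageCount >= usageLimit) return false; // no row matched
  usageCount += 1;
  return true; // analogous to .returning() yielding a row
}

const outcomes = [tryConsume(), tryConsume()];
console.log(outcomes, usageCount); // [true, false], counter capped at 5
```

The first call consumes the last free rewrite; the second finds the condition false and fails cleanly, exactly like the query that returns zero rows.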
The ownership check — the one line that prevents a data breach
Every resume has a UUID as its ID. UUIDs are hard to guess but not secret — they appear in API responses, URLs, and network logs. Any authenticated user could find a UUID and call DELETE /api/resumes/:id.
Without an ownership check, they can delete anyone's data.
```typescript
if (result[0].userId !== user.id) {
  return NextResponse.json({ error: 'Forbidden' }, { status: 403 });
}
```
This line is not optional. Fetch the row, compare the userId to the authenticated user's ID, return 403 if they don't match. Every delete, every update, every read of sensitive data needs this check.
I tested it deliberately — signed in as User A, called DELETE with a resume ID belonging to User B. Without the check: 200, data gone. With the check: 403, data safe.
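The whole handler pattern fits in a few lines. Here's a simplified in-memory sketch of the DELETE logic (`resumeStore` and `deleteResume` are illustrative stand-ins; the real route queries Postgres and returns a NextResponse):

```typescript
type Resume = { id: string; userId: string };

// Hypothetical stand-in for the resumes table.
const resumeStore = new Map<string, Resume>([
  ['res-1', { id: 'res-1', userId: 'user-a' }],
  ['res-2', { id: 'res-2', userId: 'user-b' }],
]);

// Mirrors the route logic: fetch the row, verify ownership, then mutate.
function deleteResume(resumeId: string, authedUserId: string): number {
  const row = resumeStore.get(resumeId);
  if (!row) return 404;                        // no such resume
  if (row.userId !== authedUserId) return 403; // the ownership check
  resumeStore.delete(resumeId);
  return 200;
}

console.log(deleteResume('res-2', 'user-a')); // 403: User A cannot touch User B's data
console.log(deleteResume('res-1', 'user-a')); // 200: the owner's delete succeeds
```

Knowing a UUID gets an attacker past the URL, but never past the ownership comparison.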
What the webhook solves
Clerk handles authentication — sign up, sign in, social providers, session management. But when a user signs up through Clerk, your Postgres database doesn't know about it.
The webhook bridges the two systems:
User signs up on Clerk
→ Clerk fires POST to /api/webhooks/clerk
→ Route verifies the svix signature
→ Inserts a row into the users table
→ clerkId links Clerk and Postgres from this point forward
The same pattern handles account deletion — user.deleted event fires, the route deletes the user row, the cascade foreign key cleans up every resume automatically.
Without the webhook, your database has no record of who signed up. With it, every Clerk event that matters is reflected in Postgres within seconds.
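If you're curious what "verifies the svix signature" actually does under the hood, the scheme svix documents for manual verification is HMAC-SHA256 over `{id}.{timestamp}.{payload}` keyed with the base64 portion of the signing secret. A sketch with Node's crypto module (the secret and message values here are made up; in the real route you'd just call the svix package's `Webhook.verify`):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Hypothetical secret for illustration; Clerk gives you a real whsec_... value.
const secret = 'whsec_' + Buffer.from('my-test-signing-key').toString('base64');

function sign(msgId: string, timestamp: string, payload: string): string {
  const key = Buffer.from(secret.split('_')[1], 'base64');
  const signedContent = `${msgId}.${timestamp}.${payload}`;
  return createHmac('sha256', key).update(signedContent).digest('base64');
}

// What the webhook route does before trusting the event body.
function verify(msgId: string, timestamp: string, payload: string, header: string): boolean {
  const expected = sign(msgId, timestamp, payload);
  // svix-signature headers look like "v1,<base64>"; several may be space-separated.
  return header.split(' ').some((part) => {
    const [, sig] = part.split(',');
    if (!sig) return false;
    const a = Buffer.from(sig, 'base64');
    const b = Buffer.from(expected, 'base64');
    return a.length === b.length && timingSafeEqual(a, b);
  });
}

const payload = JSON.stringify({ type: 'user.created', data: { id: 'user_123' } });
const header = 'v1,' + sign('msg_1', '1700000000', payload);
console.log(verify('msg_1', '1700000000', payload, header));       // true
console.log(verify('msg_1', '1700000000', payload + 'x', header)); // false
```

A tampered payload, a replayed message ID, or a header signed with a different secret all fail the comparison, which is why the route rejects anything that doesn't verify before it touches the users table.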
What I'd tell myself before Week 5
Pick Postgres when relationships matter. Use onDelete: cascade — don't leave orphan cleanup to application code. Make usage checks atomic. Always verify ownership before mutating data. Wire the webhook before you build anything that depends on user rows existing.
These aren't advanced concepts. They're the decisions that separate a demo from a product.
Resume Tailor is live at resume-ai-one-lac.vercel.app. Source on GitHub: github.com/Azeez1314/resume-ai.