teamVibestrap
5 boring decisions that shipped my AI SaaS in a weekend

I've been building Next.js + AI SaaS apps for a while. The interesting part of every app — the thing the marketing copy sells — is the AI. The boring part is everything else: auth, billing, webhooks, tests, i18n, the database schema you'll hate yourself for in six months.

I used to spend two weekends on the boring part and then run out of energy before the AI part shipped.

After enough rebuilds, I started noticing the same five decisions kept coming up, and the same five answers kept winning. None of them are clever. None of them will get me retweeted. But they're what let me ship a real AI SaaS in a weekend instead of two months.

1. Drizzle over Prisma (for SaaS specifically)

Prisma's DX feels great for the first two weeks. Then your migration history grows, you need a complex query the client doesn't support, and you're either dropping into raw SQL through $queryRaw or fighting the abstraction.

For SaaS apps with multi-tenant rows, audit logs, soft deletes, and the kind of joins that don't fit ORMs neatly, Drizzle wins:

  • Migrations are real SQL files you can read, diff, and roll back. No mystery shadow database.
  • The query builder is thin enough that complex joins are still readable.
  • Bundle size matters when you're cold-starting on Vercel.

The tradeoff: Drizzle is less hand-holdy. You write db.select().from(...) instead of prisma.user.findMany(). Worth it.
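To make that concrete, here's a minimal sketch of the kind of tenant-scoped query I mean. The schema (`projects`, `tenantId`, `deletedAt`) is hypothetical, but the shape is typical multi-tenant SaaS:

```typescript
import { pgTable, uuid, text, timestamp } from "drizzle-orm/pg-core";
import { and, eq, isNull } from "drizzle-orm";
import type { NodePgDatabase } from "drizzle-orm/node-postgres";

// Hypothetical multi-tenant table with soft deletes
export const projects = pgTable("projects", {
  id: uuid("id").primaryKey().defaultRandom(),
  tenantId: uuid("tenant_id").notNull(),
  name: text("name").notNull(),
  deletedAt: timestamp("deleted_at"), // null = live row
});

// Every query is scoped to a tenant and skips soft-deleted rows.
// It reads like the SQL it compiles to -- no hidden query planner.
export function listProjects(db: NodePgDatabase, tenantId: string) {
  return db
    .select()
    .from(projects)
    .where(and(eq(projects.tenantId, tenantId), isNull(projects.deletedAt)));
}
```

The migration Drizzle generates for that table is a plain `.sql` file you can read in review, which is the whole point.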

2. Better Auth over Clerk

Clerk is genuinely great for the first 1,000 users. Then you hit one of two ceilings: pricing (per-MAU billing scales linearly with your success) or platform lock-in (your user records live in Clerk's database, not yours).

Better Auth is open source, your DB owns the users, and the implementation is short enough that you can read it end-to-end. Email/password + Google + GitHub OAuth + sessions + admin role takes about 200 lines of config.

The tradeoff: you maintain it. For most indie SaaS, that's a feature, not a bug — you understand exactly what's authenticating your users.
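For a sense of scale, the core of that config is roughly this. Option names follow Better Auth's documented shape; the environment variable names are my assumptions:

```typescript
import { betterAuth } from "better-auth";

// Roughly the whole auth surface: email/password plus two OAuth
// providers, with sessions and user rows stored in your own database.
export const auth = betterAuth({
  emailAndPassword: { enabled: true },
  socialProviders: {
    google: {
      clientId: process.env.GOOGLE_CLIENT_ID!,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
    },
    github: {
      clientId: process.env.GITHUB_CLIENT_ID!,
      clientSecret: process.env.GITHUB_CLIENT_SECRET!,
    },
  },
});
```

Add the database adapter for your ORM and an admin plugin and you're near the 200-line mark, all of it readable.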

3. Stripe webhooks need idempotency keys, not just signatures

Every Stripe tutorial covers signature verification. Most skip the part that actually breaks in production: Stripe will retry webhooks. If your handler isn't idempotent, you'll double-charge credits, double-send emails, or create duplicate subscription rows.

The fix is small but you have to do it on day one:

import { eq } from "drizzle-orm";

// On every webhook handler, after verifying the signature
const eventId = stripeEvent.id; // evt_1234...

// Fast path: we've already processed this event
const existing = await db.query.processedWebhooks.findFirst({
  where: eq(processedWebhooks.eventId, eventId),
});
if (existing) return new Response("ok", { status: 200 });

// Process and record atomically. Give eventId a unique constraint so a
// concurrent retry that slips past the lookup above fails on this insert
// instead of double-processing.
await db.transaction(async (tx) => {
  await handleEvent(stripeEvent, tx);
  await tx.insert(processedWebhooks).values({ eventId, at: new Date() });
});

Yes, you need a processed_webhooks table. Yes, that's a tiny extra schema. No, you can't skip it and "be careful" — Stripe retries are non-negotiable and they happen at the worst possible time.
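The table itself is tiny. A Drizzle sketch (column names are my choice; the part that matters is that the event ID is the primary key, so it's unique by construction):

```typescript
import { pgTable, text, timestamp } from "drizzle-orm/pg-core";

export const processedWebhooks = pgTable("processed_webhooks", {
  // Stripe event IDs ("evt_...") are stable across retries of the
  // same event, so a retry maps to the same primary key
  eventId: text("event_id").primaryKey(),
  at: timestamp("at").notNull().defaultNow(),
});
```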

4. Mock LLM providers in CI

The first time my CI ran integration tests against the live OpenAI API, my monthly bill went from $30 to $75 because of one flaky network week.

The fix: a MockChatProvider that implements the same interface as your real provider but returns deterministic strings. Every test, every Storybook story, every local dev run uses it. Real keys only get loaded in production and a single nightly e2e job.

class MockChatProvider implements ChatProvider {
  // Same interface as the real provider, but deterministic:
  // no network calls, no API spend, no flaky tests
  async chat(input) {
    return { text: "mock response", usage: { inputTokens: 10, outputTokens: 5 } };
  }
}

This requires you to depend on a ChatProvider interface rather than import openai directly. Which is fine — you should be doing that anyway. (I'll write a longer post on the full provider abstraction pattern next week.)
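Filling in the interface side of that snippet, with hypothetical type names (adapt the shapes to whatever your real provider returns):

```typescript
// Hypothetical shapes for the provider seam
interface ChatInput {
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

interface ChatResult {
  text: string;
  usage: { inputTokens: number; outputTokens: number };
}

interface ChatProvider {
  chat(input: ChatInput): Promise<ChatResult>;
}

// App code depends on the interface, never on the openai package directly
async function summarize(provider: ChatProvider, doc: string): Promise<string> {
  const { text } = await provider.chat({
    messages: [{ role: "user", content: `Summarize: ${doc}` }],
  });
  return text;
}

// Deterministic stand-in for tests, Storybook, and local dev
class MockChatProvider implements ChatProvider {
  async chat(_input: ChatInput): Promise<ChatResult> {
    return { text: "mock response", usage: { inputTokens: 10, outputTokens: 5 } };
  }
}
```

A small factory then decides which implementation to hand out: the mock everywhere, the real one only when a production API key is present.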

CI cost dropped to zero. Tests stopped flaking. Recommended.

5. i18n on day one, not in month six

If you ever plan to support a second language, set up next-intl (or your equivalent) the day you scaffold the project. Even if your only locale is en.

Why: locale-aware routing changes your URL structure (/about vs /en/about), which changes your metadataBase, which changes your sitemap, which changes your canonical URLs. Retrofitting this in month six means rewriting half your routes and re-running SEO indexing from scratch.

The day-one cost is small: a JSON file with English strings, a middleware that detects locale, and one config file. The month-six cost is a week of work and a temporary SEO traffic dip.
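For next-intl specifically, the middleware half of that day-one setup looks roughly like this (single `en` locale; the matcher pattern follows next-intl's App Router convention):

```typescript
// middleware.ts -- locale-aware routing from day one
import createMiddleware from "next-intl/middleware";

export default createMiddleware({
  locales: ["en"], // add "de", "fr", ... later without touching routes
  defaultLocale: "en",
});

export const config = {
  // Skip API routes, Next internals, and static files
  matcher: ["/((?!api|_next|.*\\..*).*)"],
};
```

Adding a second locale later is a one-line change here plus a new JSON file, instead of a routing rewrite.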

I learned this the hard way. Twice.


What this looked like in practice

I encoded all five of these (plus auth flows, email templates, and the AI provider abstraction I mentioned in #4) into Vibestrap, the starter I now use as my baseline for new AI SaaS projects. Most of the "shipped in a weekend" claim is just removing the time I would otherwise spend re-deciding these five things.

None of these decisions are universally right. If you're shipping a B2B SaaS with deep enterprise integrations, Clerk's SSO might be worth the cost. If you only have one locale and never will, skip i18n. If your AI calls are cheap, mock providers might not be worth the indirection.

But for the typical "indie hacker shipping their first AI SaaS" path, getting these five out of the way upfront is what made the difference between "weekend project" and "two-month rebuild loop."

What boring decisions have saved you the most time? Genuinely curious — drop them below.
