How I Structured a Production-Ready Next.js AI SaaS Boilerplate (And What I Learned)
Every new project started the same way. Three weeks of setup before a single line of real product code. I finally got tired of it.
The Problem Every Developer Knows
You have an idea. A real one. An AI-powered SaaS that could genuinely help people.
So you open your terminal and start a new Next.js project.
And then it hits you.
You need auth. Then billing. Then a database with proper Row Level Security. Then AI integration with streaming and token limits. Then transactional emails. Then a deploy pipeline.
Three weeks later you're still configuring — and your motivation is half of what it was on day one.
I've been a senior freelance frontend developer for years. I've built dozens of projects. And I kept repeating the same setup, project after project, client after client.
So I stopped. And I built LaunchKit AI.
What Is LaunchKit AI?
LaunchKit AI is a production-ready Next.js 14 boilerplate built specifically for AI-powered SaaS products.
It's opinionated by design. Every tool in the stack was chosen deliberately. You clone it, fill in your .env file, and you're shipping — not configuring.
Here's the full stack:
Next.js 14 (App Router + React Server Components)
TypeScript — fully typed end to end
Tailwind CSS + shadcn/ui
Supabase — auth, database, storage, RLS
Stripe — subscriptions, one-time payments, webhooks
OpenAI SDK — streaming responses, token counting, per-plan limits
Resend + React Email — transactional email templates
Vercel — one-click deployment with CI/CD
How I Structured It
Here's the folder structure and the thinking behind each decision.
launchkit-ai/
├── app/
│ ├── (auth)/
│ │ ├── login/
│ │ └── signup/
│ ├── (dashboard)/
│ │ ├── layout.tsx ← protected layout with auth check
│ │ ├── dashboard/
│ │ └── settings/
│ ├── api/
│ │ ├── ai/
│ │ │ └── stream/ ← OpenAI streaming endpoint
│ │ ├── stripe/
│ │ │ └── webhook/ ← Stripe webhook handler
│ │ └── auth/
│ └── layout.tsx
├── components/
│ ├── ui/ ← shadcn/ui base components
│ ├── ai/ ← AI-specific components
│ └── billing/ ← Stripe portal + plan display
├── lib/
│ ├── supabase/
│ │ ├── client.ts ← browser client
│ │ └── server.ts ← server client with cookies
│ ├── stripe.ts
│ ├── openai.ts
│ └── resend.ts
├── hooks/
│ ├── useAI.ts ← reusable AI streaming hook
│ ├── useSubscription.ts
│ └── useUser.ts
├── types/
│ └── index.ts ← global TypeScript types
├── prisma/
│ └── schema.prisma ← User, Subscription, UsageLog models
├── emails/
│ ├── WelcomeEmail.tsx
│ ├── ResetPasswordEmail.tsx
│ └── InvoiceEmail.tsx
└── .env.example
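The tree ends with .env.example. The variable names below are an educated guess from the stack list — the standard keys each of these services documents — not a copy of the kit's actual file:

```shell
# Supabase
NEXT_PUBLIC_SUPABASE_URL=
NEXT_PUBLIC_SUPABASE_ANON_KEY=
SUPABASE_SERVICE_ROLE_KEY=

# Stripe
STRIPE_SECRET_KEY=
STRIPE_WEBHOOK_SECRET=
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=

# OpenAI
OPENAI_API_KEY=

# Resend
RESEND_API_KEY=
```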
The 3 Hardest Parts to Get Right
- Supabase Auth + Middleware

Getting Supabase auth to work cleanly with the Next.js App Router took the most iteration. The key is using two separate Supabase clients — one for the browser, one for the server — and handling session refresh in middleware.

// middleware.ts
import { createMiddlewareClient } from '@supabase/auth-helpers-nextjs'
import { NextResponse, type NextRequest } from 'next/server'

export async function middleware(req: NextRequest) {
  const res = NextResponse.next()
  const supabase = createMiddlewareClient({ req, res })

  // Refresh the session on every request so server components see it
  const { data: { session } } = await supabase.auth.getSession()

  // Protect dashboard routes
  if (!session && req.nextUrl.pathname.startsWith('/dashboard')) {
    return NextResponse.redirect(new URL('/login', req.url))
  }

  return res
}
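One detail the middleware leaves implicit is a route matcher, so the auth check only runs where a session matters and static assets skip the Supabase round trip. Next.js reads this from an exported config in middleware.ts; a minimal sketch (the exact route list here is an assumption, not copied from the kit):

```typescript
// middleware.ts (continued) — limit the middleware to routes that
// actually need a session check
export const config = {
  matcher: ['/dashboard/:path*', '/settings/:path*'],
}
```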
- OpenAI Streaming with Per-Plan Limits

The AI streaming endpoint needed to do three things at once: stream the response to the client, count tokens in real time, and enforce usage limits based on the user's Stripe plan.

// app/api/ai/stream/route.ts
import { OpenAIStream, StreamingTextResponse } from 'ai'
import { openai } from '@/lib/openai'

export async function POST(req: Request) {
  const { messages, userId } = await req.json()

  // Check plan limits before calling OpenAI
  // (getUserUsage / getPlanLimit are helpers from the boilerplate)
  const usage = await getUserUsage(userId)
  const limit = await getPlanLimit(userId)
  if (usage >= limit) {
    return new Response('Usage limit reached', { status: 429 })
  }

  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
    stream: true,
  })

  // Wrap the raw OpenAI stream so it can be returned as a streaming
  // response while tokens are counted chunk by chunk
  const stream = OpenAIStream(completion)
  return new StreamingTextResponse(stream)
}
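The enforcement logic behind those helpers can be reduced to a pure function. This is an illustration only — the plan names and token allowances below are hypothetical, not the kit's actual numbers:

```typescript
// Hypothetical per-plan monthly token allowances — illustrative values
const PLAN_LIMITS: Record<string, number> = {
  free: 10_000,
  starter: 250_000,
  pro: 1_000_000,
}

// Returns true when a request should be rejected with a 429,
// i.e. recorded usage has reached the plan's allowance
function isOverLimit(tokensUsed: number, plan: string): boolean {
  const limit = PLAN_LIMITS[plan] ?? 0 // unknown plan → no allowance
  return tokensUsed >= limit
}
```

Keeping the check pure like this makes the 429 path trivially unit-testable, independent of the database queries that feed it.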
- Stripe Webhook Reliability

Webhooks fail silently if you're not careful. The boilerplate includes a dedicated webhook handler that verifies the Stripe signature, handles all critical events, and updates Supabase in a single transaction.

// app/api/stripe/webhook/route.ts
import Stripe from 'stripe'
import { stripe } from '@/lib/stripe'

const relevantEvents = new Set([
  'checkout.session.completed',
  'customer.subscription.updated',
  'customer.subscription.deleted',
  'invoice.payment_succeeded',
  'invoice.payment_failed',
])

export async function POST(req: Request) {
  const body = await req.text()
  const sig = req.headers.get('stripe-signature')!

  let event: Stripe.Event
  try {
    event = stripe.webhooks.constructEvent(
      body, sig, process.env.STRIPE_WEBHOOK_SECRET!
    )
  } catch (err) {
    return new Response('Webhook signature verification failed', { status: 400 })
  }

  if (relevantEvents.has(event.type)) {
    await handleStripeEvent(event)
  }

  // Acknowledge quickly so Stripe doesn't retry verified events
  return new Response(null, { status: 200 })
}
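handleStripeEvent is the boilerplate's own helper, so its body isn't shown above. One plausible shape for it is a dispatch that maps each event in the relevantEvents set to a subscription status before the Supabase write — the status strings and stub types here are my assumptions, sketched as plain TypeScript so the routing logic stands on its own:

```typescript
// Minimal stand-in for the Stripe event, so the dispatch is visible
// without the real SDK types
type StripeEventLike = { type: string; data: { object: { customer: string } } }

// Map a webhook event type to the subscription status it implies;
// null means the event needs no database update
function statusForEvent(type: string): string | null {
  switch (type) {
    case 'checkout.session.completed':
    case 'customer.subscription.updated':
    case 'invoice.payment_succeeded':
      return 'active'
    case 'invoice.payment_failed':
      return 'past_due'
    case 'customer.subscription.deleted':
      return 'canceled'
    default:
      return null
  }
}

async function handleStripeEvent(event: StripeEventLike): Promise<void> {
  const status = statusForEvent(event.type)
  if (status === null) return
  // In the real handler: update the Supabase subscription row for
  // event.data.object.customer to `status`, inside one transaction
}
```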
What I Learned Building This
Ship opinionated products. The most tempting thing was to make everything configurable. A flag for every choice. Support for every database. That's how you build a product nobody can understand. I picked the best tools and committed to them.
Documentation is the product. Half of the value is the README and setup guide. A brilliant codebase with no explanation is useless to someone trying to ship fast.
Your experience is worth packaging. I've spent years learning what breaks, what scales, and what's a waste of time. That knowledge lives in this boilerplate — and developers will pay for it.
What's Next
LaunchKit AI is now live on Gumroad. If you're tired of rebuilding the same foundation for every project, it might be exactly what you need.
👉 Get LaunchKit AI on Gumroad
https://sirsamdev.gumroad.com/l/eqgix
Starter — $197 (core stack)
Pro — $297 (full kit + lifetime updates + Discord community)
Pay once. Build unlimited projects. 7-day refund guarantee.
Built by a senior freelance dev, for devs who want to stop configuring and start shipping.
Drop a comment if you have questions about the architecture — happy to go deeper on any part of the stack.