A walkthrough of building a complete SaaS application with auth, payments, AI chat, and background jobs, starting from a pre-wired foundation instead of an empty directory.
I've started SaaS projects from scratch enough times to know how the first weekend usually goes. Friday evening, you're excited. Saturday morning, you're configuring OAuth. Saturday afternoon, you're debugging Stripe webhooks. Sunday, you're wiring up email templates and wondering where the weekend went.
You haven't built any product yet.
This walkthrough shows the alternative: starting from a foundation that already has the infrastructure working, then spending your weekend on the thing that actually matters. Your product.
Friday Evening: From Clone to Running App
The starting point is a working application with auth, payments, database, background jobs, and AI features pre-wired. Instead of scaffolding an empty project, you clone a complete system and start removing or adapting what you don't need. Check out Eden Stack for more information on the template.
First, set up the local database:
make docker-up # Starts Postgres + Neon proxy
make db-push # Pushes the schema
Then start the dev server:
make dev
You have a running application with a landing page, login flow, dashboard, settings, and a pricing page. Auth works. The database has tables. The API is type-safe.
Total time: about 10 minutes.
Friday Night: Make It Yours
The first real task is configuring the external services. This is where traditional projects eat hours: switching between dashboards, copying API keys, debugging connection strings.
Eden Stack has MCP servers for this and a great scaffolding script. You describe what you need, and Claude provisions the services:
Set up my project. I need:
- A Neon database called "my-saas-prod"
- Stripe products: a free tier, a Pro tier at $29/month, and a Premium tier at $79/month
- A Resend domain for transactional email
- PostHog for analytics
- Sentry for error tracking
Claude calls the MCP servers, creates the resources, and populates your .env file. What used to be 60+ minutes of context-switching between dashboards takes about 5 minutes.
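After provisioning, the resulting `.env` might look roughly like this (the variable names here are illustrative; the exact keys depend on the template's configuration):

```
# Illustrative .env after provisioning — actual key names come from the template
DATABASE_URL=postgres://user:pass@ep-example.neon.tech/my-saas-prod
STRIPE_SECRET_KEY=sk_live_...
STRIPE_WEBHOOK_SECRET=whsec_...
RESEND_API_KEY=re_...
POSTHOG_API_KEY=phc_...
SENTRY_DSN=https://examplePublicKey@o0.ingest.sentry.io/0
```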
Now configure your brand. Update the constants in src/lib/brand/:
export const brand = {
  name: "My SaaS",
  tagline: "The thing that does the thing",
  url: "https://my-saas.com",
  // ...
};
Update the landing page copy, swap the colors in your Tailwind config, and you have a branded application.
End of Friday: Running app, services configured, brand applied. You haven't written any infrastructure code.
Saturday Morning: Your First Feature
This is where it starts getting fun. You're building product now, not infrastructure.
Let's say your SaaS is a project management tool with AI-powered task summaries. You need a projects table, CRUD API, React hooks, and UI.
Every feature in Eden Stack follows the same four-layer pattern:
Layer 1: Schema
// src/lib/db/schema.ts
export const projects = pgTable("projects", {
  id: text("id").primaryKey().$defaultFn(() => crypto.randomUUID()),
  name: text("name").notNull(),
  description: text("description"),
  userId: text("user_id").notNull().references(() => users.id),
  createdAt: timestamp("created_at").defaultNow(),
});
make db-push
Layer 2: API
// src/server/routes/projects.ts
export const projectRoutes = new Elysia({ prefix: "/projects" })
  .get("/", async ({ request }) => {
    const session = await auth.api.getSession({ headers: request.headers });
    if (!session) throw new Error("Unauthorized");

    const results = await db
      .select()
      .from(projects)
      .where(eq(projects.userId, session.user.id));

    return { data: results };
  })
  .post("/", async ({ body, request }) => {
    const session = await auth.api.getSession({ headers: request.headers });
    if (!session) throw new Error("Unauthorized");

    const project = await db
      .insert(projects)
      .values({
        name: body.name,
        description: body.description,
        userId: session.user.id,
      })
      .returning();

    return { data: project[0] };
  });
Register it in src/server/api.ts, and Eden Treaty auto-generates the typed client.
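The registration step is small. A sketch of what it might look like (the exact file contents depend on the template; the key part is exporting the app type, which is what Eden Treaty uses to derive the typed client):

```typescript
// src/server/api.ts (sketch — exact contents depend on the template)
import { Elysia } from "elysia";
import { projectRoutes } from "./routes/projects";

export const api = new Elysia({ prefix: "/api" }).use(projectRoutes);

// Exporting the app type is what lets Eden Treaty generate a fully typed client.
export type Api = typeof api;
```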
Layer 3: Hooks
// src/hooks/use-projects.ts
export const useProjects = () =>
  useQuery({
    queryKey: ["projects"],
    queryFn: () => getTreaty().projects.get().then((r) => r.data),
  });

export const useCreateProject = () =>
  useMutation({
    mutationFn: (data: { name: string; description?: string }) =>
      getTreaty().projects.post(data),
    onSuccess: () => queryClient.invalidateQueries({ queryKey: ["projects"] }),
  });
Layer 4: UI
Build the component using the hooks. Standard React with TanStack Query state management.
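A sketch of what that component might look like (component and field names are illustrative; the hooks from Layer 3 carry all the server state, so the component stays thin):

```tsx
// src/components/project-list.tsx (sketch — names are illustrative)
import { useState } from "react";
import { useProjects, useCreateProject } from "@/hooks/use-projects";

export function ProjectList() {
  const [name, setName] = useState("");
  const { data: projects, isPending } = useProjects();
  const createProject = useCreateProject();

  if (isPending) return <p>Loading…</p>;

  return (
    <div>
      <ul>
        {projects?.data.map((p) => (
          <li key={p.id}>{p.name}</li>
        ))}
      </ul>
      <input value={name} onChange={(e) => setName(e.target.value)} />
      <button
        disabled={createProject.isPending}
        onClick={() => createProject.mutate({ name })}
      >
        Create
      </button>
    </div>
  );
}
```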
The pattern is identical for every feature you add: schema, API, hooks, UI. You don't have to write it all by hand, either. Claude knows the four-layer pattern through curated Claude Skills: describe "add a projects feature with CRUD" and it generates all four layers following the codebase conventions.
Saturday Afternoon: The AI Part
Your SaaS needs AI-powered task summaries. The chatbot infrastructure is already wired. You're extending it, not building from scratch.
The agentic chatbot uses Vercel AI SDK with tool calling. Claude decides when to invoke tools based on user intent. You add a new tool:
const summarizeProjectTool = tool({
  description: "Summarize all tasks in a project",
  parameters: z.object({
    projectId: z.string().describe("The project to summarize"),
  }),
  execute: async ({ projectId }) => {
    const tasks = await db
      .select()
      .from(projectTasks)
      .where(eq(projectTasks.projectId, projectId));

    return { tasks: tasks.map((t) => ({ title: t.title, status: t.status })) };
  },
});
Register the tool, and now when a user asks "summarize my project," Claude calls this tool, gets the tasks, and generates a summary. No routing logic needed. The model handles intent classification.
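Conceptually, this reduces to a named registry of tools that the model selects from. Here's a toy model of the dispatch (this is illustrative only, not the Vercel AI SDK's actual API — in the real flow the model emits the tool call from user intent):

```typescript
// Toy model of model-driven tool calling (illustrative, not the Vercel AI SDK):
// tools are a named registry of { description, execute }. The model chooses a
// tool name and arguments based on intent; the runtime just dispatches.
type Tool = {
  description: string;
  execute: (args: Record<string, string>) => unknown;
};

const tools: Record<string, Tool> = {
  summarizeProject: {
    description: "Summarize all tasks in a project",
    execute: ({ projectId }) => ({ summary: `3 tasks open in ${projectId}` }),
  },
};

// In the real flow, the model emits this tool call; here we hard-code it.
const toolCall = { name: "summarizeProject", args: { projectId: "proj_1" } };
const output = tools[toolCall.name].execute(toolCall.args);
console.log(output); // { summary: "3 tasks open in proj_1" }
```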
For longer operations, like analyzing a project's velocity over time, you use Inngest for durable background execution:
export const analyzeVelocity = inngest.createFunction(
  { id: "analyze-velocity" },
  { event: "project/analyze" },
  async ({ event, step }) => {
    const tasks = await step.run("fetch-tasks", () =>
      getCompletedTasks(event.data.projectId)
    );

    const analysis = await step.run("generate-analysis", () =>
      generateVelocityReport(tasks)
    );

    await step.run("save-report", () =>
      saveReport(event.data.projectId, analysis)
    );

    return analysis;
  }
);
Each step is checkpointed. If the LLM call in generate-analysis times out, Inngest retries that step, not the entire function.
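The checkpointing semantics are worth internalizing. Here's a toy model of the idea in plain TypeScript (not Inngest's actual implementation): each step's result is persisted by id, so a retry replays completed steps from the checkpoint store instead of re-executing them.

```typescript
// Toy model of Inngest-style step checkpointing (illustrative only).
type Checkpoints = Map<string, unknown>;

function makeStep(checkpoints: Checkpoints) {
  return {
    run<T>(id: string, fn: () => T): T {
      if (checkpoints.has(id)) return checkpoints.get(id) as T; // replay
      const result = fn(); // execute once
      checkpoints.set(id, result); // checkpoint before moving on
      return result;
    },
  };
}

// Three steps, where the second fails on its first attempt.
function runAnalysis(checkpoints: Checkpoints, state: { failed: boolean }) {
  const step = makeStep(checkpoints);
  const tasks = step.run("fetch-tasks", () => ["a", "b", "c"]);
  const analysis = step.run("generate-analysis", () => {
    if (!state.failed) {
      state.failed = true;
      throw new Error("LLM timeout"); // simulated transient failure
    }
    return `analyzed ${tasks.length} tasks`;
  });
  step.run("save-report", () => analysis);
  return analysis;
}

const checkpoints: Checkpoints = new Map();
const state = { failed: false };
let result: string;
try {
  result = runAnalysis(checkpoints, state); // first attempt: step 2 throws
} catch {
  result = runAnalysis(checkpoints, state); // retry: step 1 replays from checkpoint
}
console.log(result); // "analyzed 3 tasks"
```

On the retry, `fetch-tasks` never runs again; its result comes back from the checkpoint map, which is exactly why a mid-function timeout doesn't restart the whole pipeline.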
Sunday: Deploy
The application goes to Vercel, or your preferred deployment provider. The database is already on Neon, or any database service you prefer. Background jobs run on Inngest's cloud, or an equivalent. Each piece is composable: you can swap any one provider without touching the rest of the stack.
# Push to GitHub, connect to Vercel
git push origin main
Set your environment variables in Vercel's dashboard (copy them from your .env), and you're live.
For Stripe webhooks in production, point the webhook URL to your Vercel deployment and update the webhook secret. For Inngest, add the production signing key. Both are one-line configuration changes.
Total deployment time: about 15 minutes.
What You Skipped
Here's the infrastructure you didn't build this weekend:
- OAuth with session management and token refresh
- Stripe checkout, webhook handling, and subscription lifecycle
- Email templates with React Email + Resend
- Background job queue with retries and checkpointing
- Error tracking and source maps with Sentry
- Analytics with event tracking and feature flags
- Type-safe API client generation
- Database migrations and schema management
- Protected route middleware
- Multi-tenant workspace management
Each of these works. Each handles its edge cases. You didn't think about any of them because they were already solved.
The Tradeoff
Let's be honest about what this approach costs.
You're inheriting complexity. Even though every integration is isolated in src/lib/, it's still code you need to understand when something breaks. If you've never used Inngest or Elysia, there's a learning curve. Or you could just point Claude to the documentation for the relevant tool.
You might not need everything. If your SaaS doesn't need background jobs or AI features, some of the infrastructure is dead weight until you delete it. The architecture makes removal clean, but it's still a step you need to take.
You're buying into a set of opinions. TanStack Start over Next.js. Elysia over Express. Drizzle over Prisma. These are good opinions (I think), but they're opinions nonetheless.
The alternative is starting from scratch and making every decision yourself. That's a valid choice. It just takes longer, and most of those decisions will land you in the same place anyway.
The Math
A weekend with Eden Stack: clone, configure services, build your core feature, add AI, deploy. You ship a production application with auth, payments, and AI features.
A weekend from scratch: you've got OAuth working and maybe a database connected. Payments and AI are next weekend's problem.
The gap compounds. Every feature you add follows the same four-layer pattern and takes minutes instead of hours. After a month, the difference isn't incremental. It's categorical.
Magnus Rødseth is a developer and consultant based in Oslo, and the creator of Eden Stack, a production-ready starter kit for AI-native SaaS applications.