xiu

An AI-Ready NestJS + Next.js Boilerplate for 2026

I built a NestJS + Next.js monorepo I'd happily start any new SaaS on today. It's on GitHub. The README covers what's in it. This post is about three decisions that shaped how it's organized, and why I think they'll hold up as AI assistants keep getting better at writing code.

1. Drizzle, Base UI, oxlint

The project uses Drizzle instead of Prisma, Base UI instead of Radix, and oxlint instead of ESLint. These aren't adventurous picks. They share one property: they reduce the number of things that can go wrong.

Drizzle's schema is its type definition. You write pgTable(...), call $inferSelect, and you have a typed row object. No code generation step, no separate schema format, no runtime engine. When a column changes, I rebuild the database package and every consumer sees the new type. Prisma is excellent, but it asks you to maintain a parallel schema language. I'd rather have one.
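
A minimal sketch of what that looks like (the table and column names here are invented for illustration, not taken from the repo):

```typescript
import { pgTable, serial, text, boolean } from "drizzle-orm/pg-core";

// The table definition *is* the schema -- no separate .prisma file,
// no codegen step.
export const todos = pgTable("todos", {
  id: serial("id").primaryKey(),
  title: text("title").notNull(),
  completed: boolean("completed").notNull().default(false),
});

// A typed row object, inferred straight from the table definition:
// { id: number; title: string; completed: boolean }
export type Todo = typeof todos.$inferSelect;
```

Change a column above and every file importing `Todo` fails to compile until it's updated.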

Base UI replaces Radix's asChild with an explicit render prop. asChild is powerful, but forgetting it silently renders nested <button><button> and nothing warns you. Base UI's approach is impossible to forget because the element you're rendering is right there in the prop. Less magic, fewer ways to be wrong.

oxlint is a Rust-based linter that runs roughly an order of magnitude faster than ESLint on TypeScript. In a monorepo where turbo lint runs on every branch, that's the difference between a CI step you notice and one you don't. The rule catalog is still catching up — for a few niche plugins I might keep ESLint around — but the direction of the toolchain is clear.

These are the kinds of choices that matter more the longer you're going to live with the project. For a weekend prototype, picking the popular thing is fine. For a codebase I expect to maintain for years, I'd rather pay a small learning cost upfront for tools that stay out of my way.

2. Start simple, grow when the problem demands it

When you start a feature, you rarely know which parts will matter. You don't know if the thing will have concurrent edits, or a state machine, or invariants that span fields. You find out by building it and watching what hurts.

Most boilerplates force you to commit before you know. They scaffold repository + service + domain + event bus for every module and expect you to fill them in whether you need them or not. If you later figure out that this module is just CRUD, it's too late — the layers are there, deleting them feels like regression, and they stay. The code runs through abstractions that solve problems it doesn't have.

I wanted a project that let me start cheap and add structure only when the problem demanded it. The repo has three example modules along that path.

The todo module is a plain service over a repository. Three files, no layers, no events. A todo has a title and a boolean; if the problem ever grew beyond that, this module would be easy to throw away or rewrite. It hasn't grown, so it stayed simple.
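
A sketch of that simple stage, with in-memory stand-ins for the real NestJS pieces (all names invented for illustration):

```typescript
// The "simple stage": one service over one repository.
// No domain layer, no value objects, no events.
interface Todo {
  id: number;
  title: string;
  completed: boolean;
}

class TodoRepository {
  private rows = new Map<number, Todo>();
  private nextId = 1;

  insert(title: string): Todo {
    const todo = { id: this.nextId++, title, completed: false };
    this.rows.set(todo.id, todo);
    return todo;
  }

  find(id: number): Todo | undefined {
    return this.rows.get(id);
  }
}

class TodoService {
  constructor(private readonly repo: TodoRepository) {}

  // A title and a boolean genuinely don't need more than this.
  create(title: string): Todo {
    return this.repo.insert(title);
  }

  complete(id: number): Todo {
    const todo = this.repo.find(id);
    if (!todo) throw new Error(`todo ${id} not found`);
    todo.completed = true;
    return todo;
  }
}
```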

The article module is what happens when a CRUD module starts having rules — titles that can't be empty, slugs that need to match a pattern. A light business layer with value objects appears, not because I planned it upfront but because those rules kept leaking into the service. Extracting them was the cheapest way to keep the service honest.
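
The kind of value object that shows up at this stage might look like this (a sketch; the actual rules in the repo may differ):

```typescript
// Each rule lives in one value object instead of leaking into the service.
class Title {
  private constructor(readonly value: string) {}

  static create(raw: string): Title {
    const trimmed = raw.trim();
    if (trimmed.length === 0) throw new Error("title cannot be empty");
    return new Title(trimmed);
  }
}

class Slug {
  private constructor(readonly value: string) {}

  static create(raw: string): Slug {
    // Lowercase words separated by single hyphens, e.g. "my-first-post".
    if (!/^[a-z0-9]+(-[a-z0-9]+)*$/.test(raw)) {
      throw new Error(`invalid slug: "${raw}"`);
    }
    return new Slug(raw);
  }
}
```

The service now receives a `Title` and a `Slug` it can trust, instead of re-checking raw strings at every call site.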

The order module is what happens when the problem actually calls for the full toolkit. Orders have state transitions that must be validated (draft → paid → shipped → cancelled). They have invariants that span fields — you can't pay twice with different amounts. They have concurrent edits that need optimistic locking. At that point an aggregate with value objects, domain events, and a version field isn't ceremony; each piece is solving something real. Take any of them away and something breaks.
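
The transition check inside such an aggregate can be sketched as data plus one guard (the exact transition table here is my assumption, not necessarily the repo's):

```typescript
type OrderStatus = "draft" | "paid" | "shipped" | "cancelled";

// Valid next states are data, so illegal moves fail in one place.
const transitions: Record<OrderStatus, OrderStatus[]> = {
  draft: ["paid", "cancelled"],
  paid: ["shipped", "cancelled"],
  shipped: [],
  cancelled: [],
};

class Order {
  // The version field backs optimistic locking: every successful
  // mutation bumps it, so a write against a stale version is rejected.
  constructor(
    public status: OrderStatus = "draft",
    public version = 0,
  ) {}

  transitionTo(next: OrderStatus): void {
    if (!transitions[this.status].includes(next)) {
      throw new Error(`cannot go from ${this.status} to ${next}`);
    }
    this.status = next;
    this.version++;
  }
}
```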

The three modules aren't three competing styles. They're the same module type at three stages of the same path: start with the smallest thing that works, grow the structure when the problem forces you to. A new contributor reading the repo sees this shape. When they notice a domain/ directory, it tells them this module has real invariants and to read the aggregate first. When they don't, it tells them the module is still in the simple stage, and the right move is to keep it that way as long as they can.

This is the pattern I want a boilerplate to teach: not every pattern you might need, but when each one earns its weight.

3. The rules live with the code

AI coding tools got good fast. On a typical day I let Claude or Cursor implement whole features while I watch, guiding the parts that need judgment. The collaboration works well.

But any single session only sees the immediate problem. It doesn't know the long-term shape I'm committed to — which patterns are load-bearing, which boundaries I'm protecting, which trade-offs I made three weeks ago for reasons I haven't re-explained today. Retyping that context every conversation is exhausting, and I'd forget half of it under pressure anyway.

So the project has a .claude/rules/ directory:

.claude/rules/
├── constitution.md         # project-wide principles
├── api.md                  # paths: apps/api/**
├── admin-shadcn.md         # paths: apps/admin-shadcn/**
├── database.md             # paths: packages/database/**
└── api-test.md             # paths: apps/api/**/*.spec.ts

Each file has a paths: frontmatter declaration. When I edit something under apps/api/, Claude Code auto-loads api.md. Schema change? database.md loads. Mechanically these are AI context files — the content gets injected into the conversation so the assistant has the project's long-term view without me pasting it in each time.
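
A rule file is ordinary markdown with a frontmatter block on top; a minimal, invented example:

```markdown
---
paths: apps/api/**
---

# API rules

- Controllers stay thin: parse, validate, delegate to a service.
- Every error response uses RFC 9457 problem details.
- New modules start at the simple stage; add a domain/ layer
  only when real invariants appear.
```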

What makes them worth their weight is that they're written so humans can read them too. Layer responsibilities, context boundaries, naming conventions — the things I'd explain to a new contributor on day one live in the same files the AI consumes. One source of truth, two audiences, nothing to keep in sync.

Without this setup I'd still use AI tools, and code would still get written. But the project's shape would drift toward the AI's defaults — reasonable choices in isolation, stacked into something I wouldn't have designed. The rules are how my long-term thinking stays present in a project that's being written partly by someone else.

What you actually get

Cloning the repo gets you:

  • Auth & RBAC — JWT with Google and GitHub OAuth, role-based guards on the backend, role-aware components (<RequireRole>, <ShowForRole>) on the frontend.
  • Durable writes — Idempotency-Key support with SHA-256 body hashing, ETag and If-Match for optimistic locking, RFC 9457 problem details for every error.
  • End-to-end types — Drizzle drives database types; OpenAPI drives frontend hooks via openapi-react-query. Change a column, rebuild the package, TypeScript flags every broken consumer.
  • Audit logging — every sensitive write emits a domain event the audit-log module picks up. No decorators sprinkled across controllers.
  • Admin frontend — Next.js App Router, TanStack Query, React Hook Form with Zod, shadcn/ui on Base UI, Tailwind v4.
  • Architectural rules — five markdown files documenting the shape of the project, readable by humans and by the AI editing it.
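
The idempotency piece, for instance, reduces to hashing the request body and comparing it on replay. A dependency-free sketch (an in-memory stand-in for whatever store the repo actually uses):

```typescript
import { createHash } from "node:crypto";

// Replay store: Idempotency-Key -> { bodyHash, response }.
// A real implementation would persist this with a TTL.
const seen = new Map<string, { bodyHash: string; response: string }>();

function sha256(body: string): string {
  return createHash("sha256").update(body).digest("hex");
}

// Returns the cached response on a true retry, throws if the same
// key is reused with a different body, and runs the handler otherwise.
function withIdempotency(
  key: string,
  body: string,
  handler: (body: string) => string,
): string {
  const bodyHash = sha256(body);
  const prior = seen.get(key);
  if (prior) {
    if (prior.bodyHash !== bodyHash) {
      throw new Error("Idempotency-Key reused with a different body");
    }
    return prior.response; // replay: same key, same body
  }
  const response = handler(body);
  seen.set(key, { bodyHash, response });
  return response;
}
```

A retried `POST` with the same key and body gets the original response back; the same key with a different body is a client error.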

It's a working skeleton, not a tutorial.

Full source is on GitHub, with deeper write-ups on architecture, API conventions, and technology choices. If any of this lands wrong — or if you've tried something similar and it went sideways — I'd like to hear about it.
