The Problem Every AI-Assisted Developer Knows
Last month I asked Cursor to build a dashboard. Simple request: “Build me an analytics dashboard with auth.”
What I got:
“What framework would you prefer?”
“Should I include dark mode?”
“What authentication provider?”
“Do you want SSO support?”
Four questions before a single line of code. By the time I finished the interrogation, I could have scaffolded the project myself.
This isn’t a Cursor problem. It’s an LLM problem. Models are trained to be helpful, and “helpful” often means clarifying requirements before committing to implementation. In a chatbot context, that’s reasonable. In a code editor where you’re paying per request and context window is precious, it’s friction.
I spent a week building a Cursor skill that flips this behavior. The result: describe what you want, get working code. No clarifying questions unless genuinely ambiguous.
What Cursor 2.4 Actually Enables
Cursor’s January 2025 release introduced two features that make this possible:
**Subagents**: Independent agents that handle discrete subtasks in parallel. Each gets its own context window, custom prompts, and tool access. The main agent delegates to specialists instead of trying to do everything in one thread.

**Image generation**: Generate mockups and assets directly from prompts, saved to your project's assets folder. Useful for visualizing before building.
The subagent architecture is the key innovation. Instead of one overloaded context trying to be a UI expert, database architect, and test writer simultaneously, you get specialists that excel at their domain.
The Skill Architecture
The skill lives in .cursor/rules/ and .cursor/agents/:
```
.cursor/
├── rules/
│   └── imagine-builder.mdc    # Core behavior rules
└── agents/
    ├── ui-designer.md         # Tailwind, shadcn, animations
    ├── schema-architect.md    # Prisma, indexes, relations
    ├── api-builder.md         # Routes, validation, auth
    └── test-writer.md         # Vitest, RTL, Playwright
```
The main rule file (imagine-builder.mdc) establishes the core principle: build first, ask later.
```markdown
---
description: AI Product Builder - Build functional products through conversation
globs: "**/*"
alwaysApply: true
---

# AI Product Builder

## Core Principles

1. **Build real products, not prototypes** - Every output should be deployable
2. **Infer intent aggressively** - Don't ask unnecessary clarifying questions
3. **Ship fast** - Prioritize working code over perfect code
```
The “infer intent aggressively” directive is doing the heavy lifting. It tells the model to make reasonable assumptions based on context rather than seeking explicit confirmation for every decision.
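For context, here is the kind of inference guidance that pairs with that directive. These rules are illustrative, written in the same `.mdc` style, not a verbatim excerpt from the skill:

```markdown
## Intent Inference

When the request names a product category, assume its table stakes:

- "dashboard" → charts, a data table, date-range filtering
- "with auth" → email/password plus one OAuth provider via NextAuth.js
- "SaaS" → organizations, roles, and a billing placeholder
```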
Why Subagents Matter
Consider a request like “Build a SaaS dashboard with org management and billing.”
Without subagents, the main agent tries to:

1. Design the database schema
2. Create API routes
3. Build React components
4. Wire up auth
5. Add Stripe integration
By step 3, the context window is polluted with schema definitions the model no longer needs to reference. Quality degrades.
With subagents, the main agent:

1. Plans the architecture
2. Delegates schema design to `@schema-architect`
3. Delegates API routes to `@api-builder` (in parallel)
4. Delegates UI components to `@ui-designer` (in parallel)
5. Aggregates results
Each specialist operates with clean context focused on its domain. The schema architect isn’t thinking about button hover states. The UI designer isn’t worrying about database indexes.
Here’s what the schema architect subagent looks like:
```markdown
# Schema Architect Subagent

You are a specialized database architect focused on designing
scalable, efficient data models.

## Schema Design Principles

### 1. Always Include Audit Fields

model BaseModel {
  id        String   @id @default(cuid())
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}

### 2. Use Proper ID Strategies

- cuid() for distributed systems (default)
- uuid() for high-security requirements
- autoincrement() only for analytics tables

### 3. Soft Deletes for User Data

model User {
  deletedAt DateTime?
  isActive  Boolean   @default(true)
}
```
The subagent includes complete code patterns, not abstract principles. When it generates a schema, it follows these patterns automatically.
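The soft-delete pattern has a query-side consequence worth noting: every read must filter out deleted rows. A minimal sketch in plain TypeScript (my illustration, not part of the skill) of a helper that merges that filter into any Prisma-style `where` clause:

```typescript
// Sketch: merge the soft-delete filter into an arbitrary `where` clause
// so reads never return rows that have been soft-deleted.
function excludeDeleted<T extends object>(where: T): T & { deletedAt: null } {
  return { ...where, deletedAt: null };
}

// Usage with a hypothetical query filter:
const where = excludeDeleted({ userId: 'user_123', isActive: true });
console.log(where);
// { userId: 'user_123', isActive: true, deletedAt: null }
```

Wrapping this into a shared helper (or a Prisma client extension) keeps the filter from being forgotten on any single query.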
The Default Stack Decision
One of the hardest choices: what tech stack to default to when the user doesn’t specify?
I went with:
| Layer | Choice | Rationale |
| --- | --- | --- |
| Framework | Next.js 14 (App Router) | Server components, great DX, wide adoption |
| Language | TypeScript (strict) | Type safety catches bugs early |
| Styling | Tailwind + shadcn/ui | Fast iteration, good defaults |
| Database | Prisma + PostgreSQL | Type-safe queries, easy migrations |
| Auth | NextAuth.js | Flexible, well-documented |
| Validation | Zod | Runtime + compile-time safety |
These aren’t universally “best” choices. They’re sensible defaults that work for most projects. The skill explicitly says: override these if you specify otherwise.
```markdown
## Tech Stack Defaults

Unless user specifies otherwise:

- **Frontend**: React 18+ with TypeScript, Tailwind CSS, shadcn/ui
- **Backend**: Next.js App Router with Server Actions
- **Database**: Prisma with SQLite (dev) / PostgreSQL (prod)
```
The “(dev) / PostgreSQL (prod)” detail matters. SQLite for local development means zero setup. The schema is portable to Postgres when you deploy.
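Concretely, the swap is an edit to the datasource block. A sketch (note that Prisma requires the `provider` to be changed in the schema file itself; it cannot be switched via an environment variable):

```prisma
datasource db {
  provider = "sqlite"         // dev: zero setup
  url      = "file:./dev.db"
  // prod: change provider to "postgresql" and use
  // url = env("DATABASE_URL")
}
```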
What the Output Actually Looks Like
Request: “Build an expense tracking app with receipt uploads”
Output (abbreviated):
```prisma
// prisma/schema.prisma
model Expense {
  id          String   @id @default(cuid())
  amount      Decimal  @db.Decimal(10, 2)
  description String
  category    Category @relation(fields: [categoryId], references: [id])
  categoryId  String
  receipt     String?  // S3 URL
  userId      String
  user        User     @relation(fields: [userId], references: [id])
  createdAt   DateTime @default(now())

  @@index([userId, createdAt(sort: Desc)])
}
```
```typescript
// app/api/expenses/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { auth } from '@/auth';
import { createExpenseSchema } from '@/lib/validations';

export async function POST(request: NextRequest) {
  const session = await auth();
  if (!session?.user) {
    return NextResponse.json(
      { success: false, error: { code: 'UNAUTHORIZED' } },
      { status: 401 }
    );
  }

  const formData = await request.formData();
  const result = createExpenseSchema.safeParse({
    amount: formData.get('amount'),
    description: formData.get('description'),
    categoryId: formData.get('categoryId'),
  });

  if (!result.success) {
    return NextResponse.json(
      {
        success: false,
        error: {
          code: 'VALIDATION_ERROR',
          details: result.error.flatten().fieldErrors,
        },
      },
      { status: 400 }
    );
  }

  // Handle receipt upload...
  // Create expense...
}
```
Notice what’s not there: no comments explaining what `@id` does, no placeholder `// TODO: implement auth`, no skeleton code. It’s complete, runnable, and it follows the established patterns.
The Trade-offs
This approach works when:
You want a working prototype fast
The requirements are reasonably common (CRUD apps, dashboards, landing pages)
You’re comfortable reviewing and adjusting generated code
You trust the default stack choices
This approach fails when:
Requirements are genuinely ambiguous ("build something cool")
You need an unusual stack (Remix, Hono, Drizzle)
The project has existing patterns the skill doesn’t know about
You want educational explanations alongside code
The skill doesn’t replace understanding. It replaces boilerplate. If you don’t understand the code it generates, you’ll struggle to debug it.
Making It Your Own
The skill is MIT licensed and designed to be forked. Common customizations:
Change the stack:
```markdown
## Tech Stack Defaults

- **Frontend**: Vue 3 + Nuxt
- **Database**: Drizzle ORM + SQLite
```
Add domain-specific subagents:
```markdown
# E-commerce Specialist Subagent

You are an expert in building e-commerce systems.

## Patterns

- Always use optimistic UI for cart operations
- Implement idempotency keys for payment endpoints
- Use database transactions for inventory decrements
```
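The idempotency-key rule in that subagent is worth unpacking: the same key must always yield the same response, so client retries never double-charge. A minimal sketch (in-memory map for illustration; production would persist keys in the database):

```typescript
// Cache the first response per idempotency key; replay it on retries.
const responses = new Map<string, { status: number; chargeId: string }>();
let chargeCount = 0;

function chargeOnce(idempotencyKey: string) {
  const prior = responses.get(idempotencyKey);
  if (prior) return prior; // retry: return the original response, no new charge

  chargeCount += 1; // stand-in for calling the payment provider
  const result = { status: 201, chargeId: `ch_${chargeCount}` };
  responses.set(idempotencyKey, result);
  return result;
}

const first = chargeOnce('key-abc');
const retry = chargeOnce('key-abc');
console.log(first.chargeId === retry.chargeId, chargeCount);
// true 1
```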
Adjust the aggression level:
If you want some clarification, add conditions:
```markdown
## Response Protocol

Ask for clarification ONLY when:

- Request involves external services not mentioned (Stripe, AWS, etc.)
- Security implications are unclear (public vs. private data)
- Multiple valid interpretations exist with different costs
```
The Broader Point
AI coding assistants are converging on a question: should the model be a collaborator (asks questions, explains trade-offs, teaches) or an executor (takes requirements, ships code)?
The answer is probably “both, depending on context.” But right now, most defaults lean heavily toward collaborator. That’s appropriate for learning, frustrating for shipping.
Cursor’s subagent architecture lets you build specialized executors that know your patterns, your stack, your preferences. The main agent collaborates at the architecture level; the subagents execute at the implementation level.
This is where AI coding tools are heading: not one general-purpose assistant, but orchestrated specialists that handle different phases of the development workflow.
Try It
The skill is on GitHub: cursor-product-builder
Clone it, copy .cursor/ to your project, and ask it to build something. Start with something concrete: “Build a todo app with categories and due dates” rather than “build me something.”
If you improve it, PRs welcome. Particularly interested in:
Alternative stack presets (Vue, Svelte, HTMX)
Domain-specific subagents (e-commerce, fintech, devtools)
Better test generation patterns
This is part of my series on distributed systems and developer tooling. Subscribe for more deep dives on the infrastructure that actually runs production systems.
Question for readers : What’s your experience with AI code assistants? Too many questions? Not enough? Have you built custom rules or prompts that changed the behavior significantly? Drop a comment - curious what patterns others have found.
