Your AI coding assistant is only as good as the codebase it navigates. I've watched Claude Code, Cursor, and Copilot struggle with the same project structures that trip up junior developers — and excel in codebases designed with clear boundaries.
After restructuring 8 TypeScript projects specifically to work better with AI agents, here's what actually moves the needle.
## Why Project Structure Matters More Now
When you ask an AI agent to "add a new endpoint for user notifications," it needs to:
- Find where endpoints live
- Understand the existing patterns
- Locate related code (models, services, types)
- Follow the conventions already established
In a well-structured project, the agent finds all of this in seconds. In a messy one, it hallucinates paths, invents patterns that don't match your codebase, and produces code you'll spend 20 minutes fixing.
The difference isn't the AI model — it's the signal-to-noise ratio in your file tree.
## The Structure That Works
Here's the folder structure I use across all my TypeScript projects (NestJS, Next.js, Express):
```
src/
├── domain/                  # Pure business logic, zero dependencies
│   ├── user/
│   │   ├── user.entity.ts
│   │   ├── user.value-objects.ts
│   │   └── user.errors.ts
│   └── order/
│       ├── order.entity.ts
│       └── order.errors.ts
├── application/             # Use cases + port interfaces
│   ├── user/
│   │   ├── use-cases/
│   │   │   ├── create-user.ts
│   │   │   └── update-user.ts
│   │   └── ports/
│   │       ├── user.repository.ts
│   │       └── email.port.ts
│   └── order/
│       ├── use-cases/
│       └── ports/
├── infrastructure/          # Framework + external implementations
│   ├── database/
│   │   ├── prisma-user.repository.ts
│   │   └── prisma-order.repository.ts
│   ├── http/
│   │   ├── controllers/
│   │   │   ├── user.controller.ts
│   │   │   └── order.controller.ts
│   │   ├── middleware/
│   │   └── dto/
│   │       ├── create-user.dto.ts
│   │       └── update-user.dto.ts
│   └── external/
│       ├── email.service.ts
│       └── payment.gateway.ts
├── shared/                  # Cross-cutting utilities
│   ├── types/
│   ├── utils/
│   └── constants/
└── CLAUDE.md                # AI agent instructions
```
**Why this works for AI agents:** every folder name is a clear signal. When the agent sees `application/user/use-cases/`, it knows exactly what goes there — and more importantly, what doesn't.
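To make the layer boundaries concrete, here's a minimal sketch of what the user module's files contain, collapsed into one listing. The shapes mirror the tree above, but the bodies (and the naive id generation) are my illustration, not the article's actual code, and are simplified to synchronous methods for brevity:

```typescript
// domain/user/user.entity.ts — pure data, no framework imports
export interface User {
  id: string;
  email: string;
}

// application/user/ports/user.repository.ts — a port is just an interface
export interface UserRepository {
  findByEmail(email: string): User | null;
  save(user: User): void;
}

// application/user/use-cases/create-user.ts — depends only on the port,
// never on Prisma, Express, or any other infrastructure detail
let nextId = 1; // naive id generation, just for the sketch

export class CreateUser {
  constructor(private readonly users: UserRepository) {}

  execute(email: string): User {
    if (this.users.findByEmail(email)) {
      throw new Error(`User ${email} already exists`);
    }
    const user: User = { id: String(nextId++), email };
    this.users.save(user);
    return user;
  }
}
```

Only the Prisma repository in `infrastructure/database/` knows about the database; the use case can be wired to any implementation of the port, including an in-memory fake in tests.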
## Rule 1: One Concept Per File
This is the single biggest improvement you can make for AI navigation.
```typescript
// ❌ BAD: user.types.ts — 200 lines of mixed concerns
export interface User { ... }
export interface UserProfile { ... }
export interface CreateUserDto { ... }
export interface UpdateUserDto { ... }
export type UserRole = 'admin' | 'editor' | 'viewer';
export type UserPermission = 'read' | 'write' | 'delete';
export interface UserFilters { ... }
export interface PaginatedUsers { ... }
```
```typescript
// ✅ GOOD: split by concept

// domain/user/user.entity.ts
export interface User {
  id: string;
  email: string;
  role: UserRole;
  profile: UserProfile | null;
  createdAt: Date;
}

export type UserRole = 'admin' | 'editor' | 'viewer';

// domain/user/user-profile.entity.ts
export interface UserProfile {
  displayName: string;
  avatarUrl: string | null;
  bio: string;
}

// infrastructure/http/dto/create-user.dto.ts
export class CreateUserDto {
  @IsEmail()
  email: string;

  @MinLength(8)
  password: string;

  @IsOptional()
  displayName?: string;
}
```
When an AI agent searches for "User entity," it finds user.entity.ts — not a 200-line grab bag where it has to parse which interface is the domain entity vs. the DTO vs. the filter type.
## Rule 2: Predictable Naming Conventions
AI agents learn patterns from your existing files. If your naming is consistent, the agent extrapolates correctly. If it's inconsistent, every new file is a coin flip.
**✅ Consistent pattern:**

```
create-user.ts  → CreateUser class
update-user.ts  → UpdateUser class
delete-user.ts  → DeleteUser class
create-order.ts → CreateOrder class
```

**❌ Inconsistent naming:**

```
createUser.ts        → CreateUserUseCase class
update_user.ts       → UpdateUser class
deleteUserHandler.ts → UserDeleteHandler class
newOrder.ts          → OrderCreation class
```
My naming rules (defined in CLAUDE.md):

- Files: kebab-case, suffix indicates type — `.entity.ts`, `.repository.ts`, `.controller.ts`, `.dto.ts`, `.port.ts`
- Classes: PascalCase, no suffix redundancy — `CreateUser`, not `CreateUserUseCase`
- Interfaces: PascalCase, prefix with purpose — `UserRepository`, `EmailPort`
- Test files: same name + `.spec.ts` — `create-user.spec.ts`
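The file-name half of these rules is mechanical enough to check in CI. A minimal sketch — the suffix list and regex are my own illustration, not part of the article's tooling:

```typescript
// Type suffixes allowed by the naming rules above
const SUFFIXES = [
  'entity', 'value-objects', 'errors',
  'repository', 'controller', 'dto', 'port', 'spec',
];

// kebab-case base name, then an optional ".{suffix}", then ".ts"
const FILE_NAME = new RegExp(
  `^[a-z0-9]+(-[a-z0-9]+)*(\\.(${SUFFIXES.join('|')}))?\\.ts$`,
);

export function isValidFileName(name: string): boolean {
  return FILE_NAME.test(name);
}
```

Wire this into a pre-commit hook or a test that walks `src/` and the convention is enforced instead of merely documented.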
## Rule 3: Explicit Dependency Direction
This is where most projects fall apart for AI agents. When imports go in every direction, the agent can't predict where to add new code.
```typescript
// ❌ Infrastructure importing from other infrastructure
// infrastructure/database/prisma-user.repository.ts
import { EmailService } from '../external/email.service'; // Wrong layer!
import { UserController } from '../http/controllers/user.controller'; // Circular!
```

```typescript
// ✅ Clean dependency direction: domain ← application ← infrastructure
// infrastructure/database/prisma-user.repository.ts
import { User } from '../../domain/user/user.entity';
import { UserRepository } from '../../application/user/ports/user.repository';
```
The rule is simple: imports only point inward. Infrastructure → Application → Domain. Never the reverse.
I make the layers explicit with TypeScript path aliases in tsconfig.json:
```json
{
  "compilerOptions": {
    "paths": {
      "@domain/*": ["src/domain/*"],
      "@application/*": ["src/application/*"],
      "@infrastructure/*": ["src/infrastructure/*"],
      "@shared/*": ["src/shared/*"]
    }
  }
}
```
When the AI sees `@domain/user/user.entity`, it immediately knows the layer. No ambiguous relative paths like `../../../models/user`.
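Aliases signal the layer but don't stop a wrong-direction import on their own. If you want the rule machine-checked, `eslint-plugin-import`'s `no-restricted-paths` rule can do it. A sketch, assuming that plugin is installed and ESLint flat config (each zone says "files in `target` may not import from `from`"; verify the exact shape against the rule's docs):

```typescript
// eslint.config.ts fragment — enforce domain ← application ← infrastructure
import importPlugin from 'eslint-plugin-import';

export default [
  {
    plugins: { import: importPlugin },
    rules: {
      'import/no-restricted-paths': ['error', {
        zones: [
          // domain imports nothing from the outer layers
          { target: './src/domain', from: './src/application' },
          { target: './src/domain', from: './src/infrastructure' },
          // application may not reach into infrastructure
          { target: './src/application', from: './src/infrastructure' },
        ],
      }],
    },
  },
];
```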
## Rule 4: CLAUDE.md at the Root
Every project gets a CLAUDE.md that tells the agent how to navigate:
```markdown
## Project Structure

- `src/domain/` — pure entities, value objects, domain errors. ZERO external imports.
- `src/application/` — use cases and port interfaces. Only imports from domain.
- `src/infrastructure/` — framework code, DB, HTTP, external services.
- `src/shared/` — cross-cutting utilities used by all layers.

## Conventions

- One class/interface per file
- File names: kebab-case with type suffix (.entity.ts, .repository.ts)
- Use cases: one per file in `application/{module}/use-cases/`
- New endpoint = controller method + DTO + use case + port (if needed)

## Adding a New Feature

1. Define entity in `domain/{module}/`
2. Create use case in `application/{module}/use-cases/`
3. Define ports in `application/{module}/ports/`
4. Implement infrastructure in `infrastructure/`
5. Wire up in module file
6. Run `npm run typecheck && npm run test`
```
This isn't just documentation — it's a navigation map. The AI reads this first and knows exactly where to put new code, what patterns to follow, and what commands to run for verification.
## Rule 5: Kill Barrel Exports
This one surprised me. Barrel exports (index.ts files that re-export everything) actually hurt AI agent performance:
```typescript
// ❌ src/domain/user/index.ts
export * from './user.entity';
export * from './user-profile.entity';
export * from './user.errors';
export * from './user.value-objects';
```
The problem: when the AI encounters import { User } from '@domain/user', it doesn't know which file contains User. It has to open the barrel, scan all re-exports, then find the source file. With large barrels (20+ exports), the agent frequently picks the wrong source when it needs to modify the definition.
```typescript
// ✅ Direct imports — AI knows exactly where to look
import { User } from '@domain/user/user.entity';
import { UserNotFoundError } from '@domain/user/user.errors';
```
Direct imports are longer, but they're unambiguous. The AI can jump straight to the right file. The trade-off is worth it.
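If you'd rather enforce this than rely on convention, ESLint's core `no-restricted-imports` rule can reject barrel-style specifiers while letting deep imports through. A sketch (my config, not the article's): the patterns use minimatch-style globs, and because `*` doesn't cross a `/`, `@domain/*` matches the barrel `@domain/user` but not `@domain/user/user.entity`:

```typescript
// eslint.config.ts fragment — ban one-segment (barrel) imports per layer
export default [
  {
    rules: {
      'no-restricted-imports': ['error', {
        patterns: [{
          // '*' does not match '/', so deep imports stay allowed
          group: ['@domain/*', '@application/*', '@infrastructure/*'],
          message: 'Import from the concrete file, not the barrel.',
        }],
      }],
    },
  },
];
```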
## Rule 6: Co-locate Tests
**❌ Separate test directory:**

```
src/
  application/user/use-cases/create-user.ts
tests/
  unit/
    application/
      user/
        use-cases/
          create-user.spec.ts   ← 5 directories deep, mirrors src
```

**✅ Co-located tests:**

```
src/
  application/user/use-cases/
    create-user.ts
    create-user.spec.ts         ← right next to the source
```
When the AI modifies create-user.ts, it naturally finds create-user.spec.ts in the same directory. No searching through a mirrored test tree. It updates both files in one pass.
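Co-location needs nothing special from the runner; you just point the include pattern at `src/` instead of a `tests/` tree. A sketch assuming Vitest (Jest's `testMatch` option works the same way):

```typescript
// vitest.config.ts — pick up specs wherever they sit next to their source
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    include: ['src/**/*.spec.ts'],
  },
});
```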
## The Proof: Before vs. After
I restructured a 40K-line NestJS project using these rules. Here's what changed in my AI-assisted workflow:
| Metric | Before (flat structure) | After (layered + conventions) |
|---|---|---|
| AI finds correct file on first try | ~60% | ~95% |
| Generated code follows project patterns | ~40% | ~85% |
| "Fix the imports" follow-up prompts | 3-4 per feature | 0-1 per feature |
| Time from prompt to working code | 8-12 min | 2-4 min |
The biggest win wasn't any single rule — it was the combination. When naming is predictable AND dependencies flow one direction AND CLAUDE.md explains the patterns, the AI connects the dots.
## What This Doesn't Solve
To be clear: structure alone doesn't make the AI write correct business logic. The AI still:
- Misses edge cases in your domain rules
- Oversimplifies error handling (as I covered in my previous article)
- Doesn't understand your specific performance requirements
- Can't infer undocumented business constraints
Structure makes the AI a better navigator. Your job is still being the architect.
## Key Takeaways
- **One concept per file** — the single biggest improvement for AI navigation
- **Predictable naming** — consistent conventions let the AI extrapolate patterns correctly
- **Explicit dependency direction** — imports only point inward (infrastructure → application → domain)
- **CLAUDE.md at the root** — a navigation map the AI reads first before touching any code
- **Kill barrel exports** — direct imports give the AI unambiguous file locations
- **Co-locate tests** — the AI updates source and tests in one pass
Your codebase is the AI's context window. Make it scannable, predictable, and unambiguous — and the AI goes from "occasionally useful" to "consistently reliable."
I share more practical guides on AI-augmented architecture on Twitter/X; connect on LinkedIn for the discussion.
Originally published on my Hashnode blog.
---

## Top comments
Most of this is good software engineering advice that predates AI agents by a decade. One concept per file, consistent naming, unidirectional dependencies, co-located tests. These are all things you should already be doing because they make the codebase navigable for humans. Framing them as "how to structure your project for AI" undersells the actual argument, which is that clean architecture pays compound interest now that agents are a second consumer of your code.
The metrics are where I want to push. "~60% to ~95% accuracy in finding correct files" across 8 projects is a strong claim. How was that measured? What counts as "correct file on first attempt"? Were you tracking agent tool calls, or is this a gut estimate after the fact? The numbers read like vibes with a tilde in front of them. If the improvement is real, and I believe the direction is real, it deserves actual methodology. Log the agent's file reads before and after, count the hits and misses, publish the raw data. That would make this article a reference people cite instead of a listicle people bookmark and forget.
The barrel exports point is the most interesting and the most debatable. Barrel files exist because they simplify the public API surface for humans. import { User } from '@domain/user' is cleaner to read and write than import { User } from '@domain/user/user.entity'. Removing them optimizes for the agent's navigation at the cost of the developer's ergonomics. Whether that tradeoff is worth it probably depends on how much of your code the agent is writing. If it's most of it, sure. If you're still writing the majority yourself, you just made your imports uglier for a tool that could have followed the re-export in two hops anyway.
Also just my opinion but the CLAUDE.md section is underserved here. Everything else on this list is a convention the agent has to infer from patterns. The CLAUDE.md is the one place you can just tell it directly.
You're right that these are good engineering fundamentals — and I'd push back slightly on the framing that I'm "underselling" it. The article is deliberately aimed at people who aren't already doing this. The ones who have a flat `src/` with 200 files and wonder why Copilot keeps hallucinating imports. For them, "do it because AI works better" is a more compelling hook than "do it because Uncle Bob said so in 2012." But yes — the real argument is exactly what you said: clean architecture now has two consumers instead of one, and that changes the ROI calculation.

On the metrics — fair hit. The ~60% to ~95% numbers are from my own observation across sessions, not from instrumented logs. I tracked it informally: after each "add a feature" prompt, I noted whether the agent's first file read/write was the correct target. Tilde is doing honest work there — it signals "this is directional, not p<0.05." Your suggestion to log agent tool calls systematically is good and I might actually do that for a follow-up. Would make a much stronger case.
The barrel exports tradeoff — you nailed the key variable: what percentage of code is the agent writing? In my workflow it's 70-80%, so the tradeoff is clear. For someone at 20-30%, keeping barrels and accepting the occasional agent miss is probably fine. I should have framed it as a spectrum rather than a blanket "kill them."
And yes, CLAUDE.md deserved more space. I wrote a whole separate article on that one — 5 Things I Put in Every CLAUDE.md — but you're right that this piece would benefit from showing how the instruction file ties all the structural rules together. Noted for a revision.
Genuinely one of the best comments I've gotten. Thanks for pushing on the weak spots.