DEV Community

MATKARIM MATKARIMOV

How I Turned Claude Code Into a Multi-Agent Dev Team


A practical guide to building a multi-agent development workflow with Claude Code's custom agents — from architecture to implementation, with real configs.

I use Claude Code (Anthropic's CLI tool) as my primary coding assistant. Out of the box, it's a single general-purpose agent. But I've split it into 12 specialized sub-agents, each with a focused role, specific tools, and deep knowledge of my project's conventions.

The result: instead of one agent that kind of knows everything, I have a team of specialists that deeply know their domain.

The Problem With a Single AI Agent

When you use a single AI agent for everything — writing code, reviewing it, debugging, creating types, managing git — you run into predictable issues:

  1. Context dilution: The agent tries to be everything and becomes mediocre at each task
  2. Inconsistent patterns: Without strict enforcement, the same pattern gets implemented differently across features
  3. No separation of concerns: The same agent writes code and reviews it — there's no second opinion
  4. Wasted context window: Every prompt carries the full weight of all instructions, even when you only need a specific capability

The fix was obvious: apply the same separation of concerns principle I use in code to my AI workflow.

Architecture: How Sub-Agents Work

Claude Code supports custom agents via markdown files in ~/.claude/agents/. Each agent runs with its own context window and system prompt; the main session delegates a task to it and gets back only the result.

```text
Main Claude Session (orchestrator)
  |
  |-- delegates "build this feature" --> senior-dev agent
  |-- delegates "review the code" --> code-reviewer agent
  |-- delegates "create types" --> schema-type-generator agent
  |-- delegates "commit changes" --> git-workflow agent
  |
  (agents cannot talk to each other — only through main session)
```

Key constraints:

  • Agents communicate only with the main session, never directly with each other
  • Each agent has its own system prompt, tool access, and model selection
  • The main session acts as an orchestrator — it decides which agent handles which task
  • Agents can run in parallel for independent work

The 12 Agents

Here's the full roster with roles, models, and tool access (the 12 task agents plus the shared coding-rules agent, covered later):

| Agent | Model | Color | Role |
| --- | --- | --- | --- |
| senior-dev | inherit (Opus) | green | Primary code writer — builds features end-to-end |
| component-architect | Sonnet | cyan | Designs component hierarchy before implementation |
| code-reviewer | Sonnet | red | Reviews code quality, catches bugs, enforces patterns |
| debugger | inherit (Opus) | yellow | Diagnoses runtime errors and build failures |
| api-integrator | inherit (Opus) | blue | Connects frontend to backend APIs |
| schema-type-generator | Sonnet | magenta | Creates TypeScript types and Zod schemas |
| test-writer | inherit (Opus) | green | Writes Vitest + Testing Library tests |
| git-workflow | Haiku | blue | Handles commits, branches, PRs |
| performance-optimizer | Sonnet | yellow | Diagnoses and fixes performance issues |
| security-auditor | Sonnet | red | Security reviews and vulnerability scanning |
| refactorer | inherit (Opus) | magenta | Restructures code while preserving behavior |
| documentation-creator | Sonnet | cyan | Writes technical docs, JSDoc, README files |
| coding-rules | inherit (Opus) | purple | Enforces all coding standards from CLAUDE.md |

Why Different Models?

Not every task needs the most powerful (and expensive) model:

  • Opus for tasks that need deep reasoning: writing complex features, debugging, refactoring
  • Sonnet for tasks that need pattern matching and consistency: reviewing code, generating types, auditing security
  • Haiku for simple, repetitive tasks: git operations (commit messages, branch creation)

This keeps costs down without sacrificing quality where it matters.

Agent Anatomy: What's Inside Each Agent File

Every agent is a markdown file with YAML frontmatter and a detailed system prompt. Here's the structure:

```markdown
---
name: agent-name
description: >
  When to use this agent. Claude reads this to decide
  which agent to delegate to.
model: inherit | sonnet | haiku
color: green
tools: Read, Write, Grep, Glob, Bash
memory: user
---

[Detailed system prompt with project-specific patterns,
 step-by-step workflows, code examples, and quality checklists]
```

The description field is critical — it tells the main Claude session when to use this agent. Think of it as a routing rule.

Deep Dive: Key Agents

1. senior-dev — The Implementation Workhorse

This is the agent that writes production code. Its system prompt encodes my exact stack and patterns:

```markdown
## Workflow

### Phase 1: Understand
Before writing a single line of code:
1. Read the requirement carefully
2. If ambiguous, ask clarifying questions

### Phase 2: Explore
3. Use Glob to find similar components in the codebase
4. Read 2-3 existing files that follow the same pattern
5. Check for shared utilities you can reuse

### Phase 3: Plan
6. List all files you will create or modify
7. Present this plan to the user before proceeding

### Phase 4: Build
8. Create types and enums first (data contract)
9. Create Zod schemas with i18n validation
10. Build the service layer (ENDPOINTS + featureService)
11. Build TanStack Query hooks (key factory + hooks)
12. Build the UI components last

### Phase 5: Verify
13. Read back every file to catch errors
14. Run tsc --noEmit for type checking
15. Run Biome for formatting
```

The key insight: the agent builds bottom-up — types first, then schemas, then services, then hooks, then UI. Each layer depends only on the layers below it. This order prevents the cascading type errors you get when building top-down.

Every pattern includes concrete code examples. Here's what the service layer pattern looks like inside the agent prompt:

```typescript
const ENDPOINTS = {
  LIST: '/users/list',
  CREATE: '/users/create',
  UPDATE: (id: number) => `/users/${id}/update`,
  DELETE: (id: number) => `/users/${id}/delete`,
  DETAIL: (id: number) => `/users/${id}`,
} as const;

export const userService = {
  getList: async (filter: UserFilter): Promise<UserListResponse> => {
    const { data } = await axiosClient.post(ENDPOINTS.LIST, filter);
    return data;
  },
  // ... other methods
};
```

When the agent sees a "build user management" request, it doesn't improvise — it follows this exact pattern because the examples are embedded in its instructions.
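The hook layer that sits on top of this service follows the same templated style. Here's a sketch of the key-factory half of that pattern — the names `userKeys` and `UserFilter` are illustrative, not lifted from the real codebase — with the query hook that would consume it only hinted at in a comment:

```typescript
// Hypothetical key factory for a users feature. The hierarchical keys let
// TanStack Query invalidate at any granularity: all user queries, all list
// queries, or a single detail query.
interface UserFilter {
  page: number;
  search?: string;
}

export const userKeys = {
  all: ['users'] as const,
  lists: () => [...userKeys.all, 'list'] as const,
  list: (filter: UserFilter) => [...userKeys.lists(), filter] as const,
  details: () => [...userKeys.all, 'detail'] as const,
  detail: (id: number) => [...userKeys.details(), id] as const,
};

// A hook then pairs a key with the service call, e.g.:
// useQuery({ queryKey: userKeys.list(filter), queryFn: () => userService.getList(filter) })
```

Invalidating `userKeys.lists()` refetches every cached list query while leaving detail queries untouched, which is why the factory is hierarchical rather than flat.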

2. code-reviewer — The Second Pair of Eyes

This agent runs on Sonnet (not Opus) intentionally. Code review is pattern-matching: does this code follow the rules? Sonnet is faster, cheaper, and excellent at this.

The review process is systematic:

```markdown
## Review Process

### Step 2: Bug Analysis
Systematically check for:

**Null/Undefined Risks:**
- Optional chaining missing on potentially undefined values
- TanStack Query `data` accessed without checking `isLoading`

**Logic Errors:**
- Stale closures in useEffect or TanStack Query queryFn
- Incorrect skipToken conditions

**TypeScript Issues:**
- `any` type usage (should be `unknown` or specific)
- Missing `satisfies z.ZodType<Type>` on Zod schemas

### Step 5: Pattern Compliance
Verify each pattern systematically:
- Service layer: ENDPOINTS as const + featureService + axiosClient
- TanStack Query: key factories + skipToken + keepPreviousData
- Zod: createSchema(t: TFunction) + satisfies
- CVA: cva() + VariantProps + cn()
- Enums: const as const + type extraction
```

Its output is structured with severity levels:

```markdown
## Critical Issues
- **[BUG]** `user.service.ts:45` - Missing null check on API response

## Pattern Violations
- **[QUERY]** `use-users.ts:12` - Missing keepPreviousData on paginated query

## Verdict
**REQUEST CHANGES** - 2 critical bugs found
```

3. debugger — The Error Detective

When something breaks, this agent follows a strict diagnostic process. Its prompt includes every common error pattern specific to my stack:

```markdown
## Management-System Specific Error Patterns

### Axios Interceptor Errors
**Token Refresh Failure (infinite 401 loop):**
- Root cause: Token refresh request returns 401, triggering another refresh
- Check: Read the Axios interceptor in src/lib/axios.ts
- Fix: Add isRefreshing flag to prevent concurrent refresh calls

### TanStack Query Errors
**skipToken Misuse (query fires when it should not):**
- Root cause: queryFn receives undefined param because skipToken condition is wrong
- Check: The skipToken condition must match exactly when param is unavailable

### React Hook Form + Zod Errors
**Schema Validation Not Triggering:**
- Root cause: createSchema called without t function
- Fix: Create schema inside component where t is available
```

Instead of guessing, the agent matches the error against known patterns and traces the root cause with evidence.
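The isRefreshing fix the debugger prescribes boils down to a single-flight guard: concurrent 401s share one in-flight refresh promise instead of each firing their own request. A framework-free sketch, where `refreshToken` is a hypothetical stand-in for the real refresh call:

```typescript
// Single-flight guard: while one refresh is in flight, every caller gets the
// same promise, so the refresh endpoint is hit exactly once per token expiry.
let refreshPromise: Promise<string> | null = null;

export function refreshOnce(refreshToken: () => Promise<string>): Promise<string> {
  if (!refreshPromise) {
    refreshPromise = refreshToken().finally(() => {
      refreshPromise = null; // allow a new refresh after this one settles
    });
  }
  return refreshPromise;
}
```

In an Axios response interceptor, the 401 handler would await `refreshOnce(...)` before retrying the original request. This also breaks the infinite loop: if the refresh call itself returns 401, the shared promise rejects once for all waiters instead of spawning another refresh.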

4. schema-type-generator — Types and Validation

This agent specializes in one thing: creating TypeScript types and Zod schemas that follow the project's exact conventions.

```typescript
// It knows the enum pattern
export const DepartmentStatus = {
  Active: 'ACTIVE',
  Inactive: 'INACTIVE',
  Archived: 'ARCHIVED',
} as const;

export type DepartmentStatus = (typeof DepartmentStatus)[keyof typeof DepartmentStatus];
export const DepartmentStatusValues = Object.values(DepartmentStatus);

// It knows the schema factory pattern
export const createDepartmentSchema = (t: TFunction) =>
  z.object({
    name: z.string({ message: t('validation.required') })
      .min(2, t('validation.minLength', { min: 2 })),
    status: z.enum(DepartmentStatusValues as [string, ...string[]], {
      message: t('validation.required'),
    }),
  }) satisfies z.ZodType<DepartmentInput>;

export type DepartmentFormValues = z.infer<ReturnType<typeof createDepartmentSchema>>;
```

Why a separate agent for types? Because type definitions are the contract between layers. If the types are wrong, everything downstream breaks. Having a specialist that deeply understands the project's type patterns (especially the `as const` object-enum pattern and the `createSchema(t)` factory pattern) reduces type-related bugs significantly.

5. git-workflow — The Safe Committer

This runs on Haiku — the cheapest, fastest model. Git operations are formulaic: check status, stage files, write a conventional commit message, push.

```markdown
## Safety Rules (NEVER break these)
- NEVER force push to main/master
- NEVER use git reset --hard without explicit user confirmation
- NEVER stage .env files or credentials
- ALWAYS review staged changes before committing
- ALWAYS use specific file paths in git add (not git add .)
```

The commit message format is enforced:

```text
feat(departments): add department creation form with validation
fix(table): correct column resize handle position
refactor(api): extract shared error handling to Axios interceptor
```
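The format is easy to enforce mechanically, too. A minimal checker implied by the git-workflow agent's rules — the accepted type list here is my assumption, so extend it to match your own conventions:

```typescript
// Conventional commit shape: type(scope): subject
// The type whitelist is an assumption -- adjust to your project's rules.
const CONVENTIONAL = /^(feat|fix|refactor|docs|test|chore|perf)\([a-z0-9-]+\): \S.*$/;

// Only the first line (the subject) is checked; the body is free-form.
export const isConventional = (message: string): boolean =>
  CONVENTIONAL.test(message.split('\n')[0]);
```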

The Orchestration Flow

Here's how a typical feature request flows through the agents:

```text
User: "Add a departments management page with CRUD"

1. Main session analyzes the request
2. Delegates to component-architect:
   "Design the component hierarchy for departments feature"
   -> Returns: component tree, props, state ownership map

3. Delegates to schema-type-generator:
   "Create types and Zod schemas for departments"
   -> Returns: types.ts + department.schema.ts

4. Delegates to senior-dev:
   "Implement the departments feature following the architecture plan"
   -> Returns: service, hooks, components, page

5. Delegates to code-reviewer:
   "Review all new departments feature files"
   -> Returns: review with issues

6. If issues found, delegates back to senior-dev:
   "Fix the issues from the code review"

7. Delegates to git-workflow:
   "Commit the departments feature"
   -> Returns: conventional commit created
```

Steps 2 and 3 can run in parallel since they're independent. The main session handles the sequencing.
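The sequencing is easy to picture as code. This is an illustrative sketch only: `delegate` is a hypothetical stand-in for the main session's task dispatch, which in reality is Claude's internal tool call, not a function you write:

```typescript
// Illustrative only: models the orchestration order, not Claude Code's API.
type AgentName =
  | 'component-architect'
  | 'schema-type-generator'
  | 'senior-dev'
  | 'code-reviewer'
  | 'git-workflow';

async function delegate(agent: AgentName, task: string): Promise<string> {
  return `[${agent}] ${task}`; // placeholder for the real sub-agent result
}

export async function shipFeature(feature: string): Promise<string> {
  // Steps 2 and 3 are independent, so they run in parallel.
  const [architecture, types] = await Promise.all([
    delegate('component-architect', `design hierarchy for ${feature}`),
    delegate('schema-type-generator', `types and schemas for ${feature}`),
  ]);
  // Steps 4-7 are sequential: each step depends on the previous result.
  await delegate('senior-dev', `implement ${feature} per ${architecture} with ${types}`);
  await delegate('code-reviewer', `review ${feature} files`);
  return delegate('git-workflow', `commit ${feature}`);
}
```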

The Shared Knowledge Layer: coding-rules Agent

One special agent ties everything together: coding-rules. It's a 2700+ line document that encodes every coding standard from my project:

```markdown
## Agent Identity

You are a coding standards enforcement specialist.
You never invent new patterns — you strictly enforce existing ones.

Your philosophy:
- Consistency over preference
- Pattern first, implementation second
- No improvisation
- Reference agents when unsure
```

This agent serves as the "source of truth" that other agents reference. When the senior-dev builds a component, it follows the same CVA pattern that code-reviewer checks against, which matches the examples in coding-rules.

Setting It Up: Configuration

Agent Files

Each agent lives in ~/.claude/agents/:

```text
~/.claude/agents/
  senior-dev.md
  code-reviewer.md
  component-architect.md
  debugger.md
  api-integrator.md
  schema-type-generator.md
  test-writer.md
  git-workflow.md
  performance-optimizer.md
  security-auditor.md
  refactorer.md
  documentation-creator.md
  coding-rules.md
```

CLAUDE.md — The Project Context

The ~/.claude/CLAUDE.md file provides project-level context that all agents inherit:

```markdown
# Tech Stack
- React 19 + TypeScript 5.9 + Vite 7
- TanStack Query v5, TanStack Table v8
- React Hook Form + Zod (i18n validation)
- Axios (auth interceptors, token refresh)
- i18next (en, ru, uz, uz-Cyrl)
- Biome (linter + formatter)

# Quick Reference
- TypeScript: strict mode, interface for objects, type for unions
- Service Layer: ENDPOINTS as const + featureService + axiosClient
- TanStack Query: key factories, skipToken, keepPreviousData
- Zod: createSchema(t: TFunction), satisfies z.ZodType<T>
- Files: feature-based src/features/[name]/
```

Permissions

The ~/.claude/settings.json controls what agents can do:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm:*)", "Bash(pnpm:*)", "Bash(git:*)",
      "Bash(biome:*)", "Bash(vitest:*)", "Bash(tsc:*)",
      "WebSearch", "WebFetch"
    ],
    "defaultMode": "acceptEdits"
  }
}
```

Agents inherit these permissions. The code-reviewer only has Read, Grep, Glob, Bash — it can analyze but not write. The senior-dev has full tool access.

What I've Learned

1. Agent descriptions are routing rules

The description field in the YAML frontmatter is how Claude decides which agent to use. Write it like a function signature: clear inputs, clear use cases.

```yaml
# Bad
description: Helps with code

# Good
description: >
  Expert code review agent. Reviews code quality, catches bugs,
  verifies TypeScript type safety, and checks adherence to project
  patterns (service layer, TanStack Query, Zod i18n, CVA, Biome).
  Use proactively after code changes or before commits.
```

2. Concrete examples beat abstract rules

Instead of "use proper TypeScript patterns", embed the exact code:

```typescript
// This is inside the agent prompt:
export const Role = {
  Admin: 'ADMIN',
  User: 'USER',
} as const;

export type Role = (typeof Role)[keyof typeof Role];
```

The agent will reproduce this pattern exactly.

3. Model selection matters for cost

My git-workflow agent runs hundreds of times a month. Using Haiku instead of Opus for simple commit messages saves significant cost with zero quality loss.

4. Read-only tools for reviewers

The code-reviewer agent only has Read, Grep, Glob, Bash — no Write or Edit. This is intentional. A reviewer should analyze and report, not fix. The fix goes back to senior-dev.

5. Memory enables cross-session learning

Every agent has memory: user, which means they can remember patterns and preferences across sessions. The debugger agent remembers which error patterns it's seen before; the senior-dev remembers your preferred component structure.

Before and After

Before (single agent):

  • "Build a users page" → Generic implementation, missing project patterns
  • Manual review needed to catch pattern violations
  • Inconsistent code across features
  • Every prompt required repeating conventions

After (12 specialized agents):

  • "Build a users page" → senior-dev follows exact patterns, builds bottom-up
  • code-reviewer catches violations automatically
  • Every feature follows identical patterns
  • Conventions are embedded in agent prompts, never repeated

Try It Yourself

  1. Identify your project's core patterns (3-5 patterns you enforce most often)
  2. Create your first agent: start with senior-dev — the code writer
  3. Add a code-reviewer that checks against your patterns
  4. Gradually add specialists as you notice which tasks benefit from focused prompts
  5. Keep a shared coding-rules agent as the source of truth

You don't need 12 agents on day one. Start with 2-3 and grow as your patterns solidify.

This setup is a work in progress — I'm still refining agent prompts, adjusting model choices, and discovering new ways to split responsibilities. If you have suggestions, spot any gaps, or have built something similar with a different approach, I'd love to hear about it in the comments.

Website: https://matkarim.uz
