
Reza Rezvani

Originally published at Medium

4 Claude Code Subagent Mistakes That Kill Your Workflow (And The Fixes)

TL;DR

| Mistake | Symptom | Fix |
|---------|---------|-----|
| All tools → all agents | Token waste, slow responses | Explicit allowlists per agent role |
| No context: fork | Main conversation polluted | Isolated execution with forked context |
| Vague descriptions | 50% activation rate | Action keywords + CLAUDE.md rules |
| Missing activation hooks | Random agent selection | PreToolUse hooks for forced evaluation |

The Setup Problem Nobody Talks About

You've created your first Claude Code subagent. It has a clever name, a detailed system prompt, and you're excited to see it work.

Then nothing happens. Or worse — the wrong agent activates. Or your main conversation fills with garbage output from a background task.

I've set up 40+ subagents across multiple projects. Here's what actually breaks and how to fix it.


Mistake 1: Allowing All Tools to All Agents

The problem: Your code-reviewer agent has access to Bash, Write, and WebFetch. It doesn't need any of them. Every granted tool adds its definition to the agent's context, so you're burning tokens on capabilities it should never use.

The fix: Explicit tool allowlists in frontmatter.

```yaml
---
name: code-reviewer
description: Reviews code for bugs, security issues, and best practices
model: sonnet
tools:
  - Read
  - Glob
  - Grep
  - Task
---
```

The catch: You need to know which tools each agent actually needs. Start restrictive, add as needed.


Mistake 2: Skipping context: fork

The problem: Your research agent dumps 3,000 tokens of analysis into your main conversation. That's main-thread context burned on output you don't need inline.

The fix: Isolate agent execution.

```yaml
---
name: research-agent
context: fork
---
```

With context: fork, the subagent runs in isolation. Results come back summarized, not dumped raw.

The catch: Forked context is one-way. You can't continue the thread. Design for discrete tasks.
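
A quick illustration of what "discrete" means in practice (the prompts below are just example phrasings, not special syntax):

```
Works well with a forked context (one self-contained task, one summary back):
  "Use research-agent to compare Playwright and Cypress for our monorepo
   and return a one-page recommendation with trade-offs."

Fights the fork (assumes a follow-up thread that no longer exists):
  "Use research-agent to start looking into test frameworks; I'll ask
   follow-up questions as it goes."
```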


Mistake 3: Vague Descriptions

The problem:

```yaml
description: Helps with code stuff
```

Claude doesn't know when to invoke this. Your activation rate tanks.

The fix: Action-oriented keywords that match how you'll actually prompt.

```yaml
description: |
  Triggers on: review code, check for bugs, security audit, code quality
  Action: Analyzes code for bugs, security vulnerabilities, and style issues
  Output: Structured report with severity levels and fix suggestions
```

Then reinforce in CLAUDE.md:

```markdown
## Subagent Routing

When user mentions "review", "audit", or "check code" → invoke @code-reviewer
When user mentions "research", "find out", or "investigate" → invoke @research-agent
```

Mistake 4: No Activation Hooks

The problem: Even with good descriptions, Claude sometimes picks the wrong agent or skips agents entirely.

The fix: PreToolUse hooks that force evaluation.

```yaml
# In .claude/hooks.yaml
hooks:
  PreToolUse:
    - pattern: "Task"
      script: |
        echo "Evaluating subagent selection..."
        # Log which agent was selected and why
```

This creates an audit trail: you can see which agent Claude picked for each Task call and tighten descriptions when the routing goes wrong.
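
To make the audit trail concrete, here is a minimal sketch of that logging, assuming the hook script receives the Task tool call as JSON on stdin; the payload field names (tool_input.subagent_type, tool_input.description) and the log path are assumptions to verify against your Claude Code version:

```yaml
# Sketch only: extends the hooks example above.
# Assumes the hook receives the tool call as JSON on stdin and that the
# Task tool's input carries subagent_type/description fields -- verify locally.
hooks:
  PreToolUse:
    - pattern: "Task"
      script: |
        # Append timestamp, selected agent, and task summary to an audit log
        jq -r '"\(now | todate)  \(.tool_input.subagent_type): \(.tool_input.description)"' \
          >> .claude/subagent-audit.log
```

After a few sessions, grep that log for agents that never fire; those are the descriptions to rewrite first.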


Production Pattern: Sequential Pipeline

Here's how PubNub structures their agent workflow:

```
pm-spec → architect-review → implementer → tester
```

Each agent has (a sample stage is sketched after this list):

  • Explicit tool permissions (pm-spec: Read only, implementer: Read + Write + Bash)
  • Forked context (no pollution between stages)
  • Handoff hooks (output of one triggers input of next)
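
For illustration, one stage in a pipeline like that might be defined as follows. The agent name, description, and tool list are hypothetical, and the frontmatter fields simply follow the examples earlier in this post:

```markdown
---
name: architect-review
description: |
  Triggers on: review spec, architecture review, design check
  Action: Reviews the pm-spec output for feasibility, risks, and gaps
  Output: Approve/revise verdict plus a task breakdown for the implementer
model: sonnet
context: fork
tools:
  - Read
  - Glob
  - Grep
---

You are the architecture reviewer in a pm-spec → architect-review → implementer → tester pipeline.
Read the spec you are handed, flag risks and missing requirements, and end every review with a
"Handoff to implementer" section listing concrete tasks.
```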

Production Pattern: Parallel Specialists

Zach Wills runs /add-linear-ticket with three agents in parallel:

  • PM Agent: Scopes requirements
  • UX-Designer Agent: Creates interface specs
  • Software-Engineer Agent: Estimates complexity

Each gets its own dedicated 200k-token context. Results merge at the end.
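
One way to wire up that kind of fan-out is a custom slash command whose prompt launches the three agents explicitly. This is a hypothetical sketch based on the description above, not Zach's actual command:

```markdown
<!-- .claude/commands/add-linear-ticket.md (hypothetical sketch) -->
Create a Linear ticket for: $ARGUMENTS

Launch these three subagents in parallel; none of them depends on the others' output:

1. @pm-agent: scope the requirements and acceptance criteria
2. @ux-designer-agent: draft the interface spec
3. @software-engineer-agent: estimate complexity and flag risks

When all three finish, merge their summaries into one ticket draft with sections:
Requirements, UX Notes, Engineering Estimate.
```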

Key insight: Parallel only works when agents don't depend on each other's output.


Quick Setup Checklist

```markdown
## Before Creating Any Subagent

- [ ] Tool list: Only what this agent actually needs
- [ ] Context strategy: Fork for background tasks, inline for interactive
- [ ] Description: Action keywords that match your prompting style
- [ ] CLAUDE.md routing: Explicit rules for when to invoke
- [ ] Activation hook: Logging to verify correct selection
```

Complete Example: Code Reviewer

```markdown
---
name: code-reviewer
description: |
  Triggers on: review code, check for bugs, security audit, PR review
  Action: Analyzes code for bugs, security issues, and style violations
  Output: Structured report with severity and fix suggestions
model: sonnet
context: fork
tools:
  - Read
  - Glob
  - Grep
---

You are a senior code reviewer. When given code to review:

1. Check for bugs and logic errors
2. Identify security vulnerabilities
3. Flag style inconsistencies
4. Suggest specific fixes

Output format:
## Summary
[One paragraph overview]

## Issues Found
| Severity | Location | Issue | Fix |
|----------|----------|-------|-----|
| HIGH/MED/LOW | file:line | description | suggestion |

## Recommendation
[Ship / Revise / Block]
```

What's your subagent activation rate? I'm curious what setups are working for others.


AI tools supported the research phase. Configuration patterns and production examples are from my own projects.

About the Author
Alireza Rezvani — Building AI-augmented development workflows
Website | LinkedIn

Detailed breakdown with more enterprise patterns: Read on Medium
