👉 Full implementation available at [shinpr/sub-agents-mcp](https://github.com/shinpr/sub-agents-mcp)
I wanted to try Cursor and other emerging AI coding tools. But I kept hitting the same walls without sub-agents — context pollution, inconsistent outputs, and the dreaded mid-task context exhaustion.
Claude Code has this feature called Sub-agents — specialized AI assistants with separate contexts that handle specific tasks. It solves context exhaustion and dramatically improves accuracy. But other AI coding tools don't have it.
So I built an MCP server that makes it happen.
## What Sub-agents Actually Do
Sub-agents are specialized AI assistants in Claude Code that handle specific tasks with efficient problem-solving and context management.
### Why They Matter

**Isolated Contexts**

Each sub-agent gets its own context window. No more context pollution. No more running out of tokens mid-task.

**Task-Specific Precision**

A code reviewer needs different context than an implementer. Give each agent exactly what it needs.

**Reproducible Results**

Define once in Markdown, get consistent quality every time.
## Building an MCP Bridge
I built an MCP server that brings sub-agents to any tool that supports Model Context Protocol.
https://github.com/shinpr/sub-agents-mcp
### About MCP
Model Context Protocol (MCP) is a standardized protocol that enables host applications (like Cursor and Claude Desktop) to communicate with servers (data sources and tools).
By implementing sub-agents as an MCP server, I made this Claude Code-exclusive feature available to any MCP-compatible tool.
### How It Works

Just tell your AI to use a sub-agent:

> "Use the document-reviewer agent to check docs/PRD/spec.md"

Your specialized agent takes over with its own fresh context.
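Under the hood, your tool turns that instruction into an MCP tool call to the server. A simplified sketch of that request, using the `run_agent` tool described later in this post (the JSON-RPC envelope is omitted):

```json
{
  "name": "run_agent",
  "arguments": {
    "agent": "document-reviewer",
    "prompt": "Check docs/PRD/spec.md"
  }
}
```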
### Example Run in Cursor

In a typical run, the document-reviewer agent produces a structured report rather than freeform text. The output includes a summary of strengths, key issues with severity levels, and actionable suggestions for improvement. This makes it easy to spot gaps and apply consistent review standards across different documents.
## Quick Setup

### Step 1: Configure Your Tool

Add this to your tool's MCP config (e.g., `~/.cursor/mcp.json`):
```json
{
  "mcpServers": {
    "sub-agents": {
      "command": "npx",
      "args": ["-y", "sub-agents-mcp"],
      "env": {
        "AGENTS_DIR": "/Users/username/projects/my-app/.cursor/agents",
        "AGENT_TYPE": "cursor"
      }
    }
  }
}
```
Note: Use absolute paths for `AGENTS_DIR`. I recommend creating an `agents` folder in your project.
### Step 2: Create Your First Agent

Create a Markdown file in your agents directory:

`code-reviewer.md`:
```markdown
# Code Reviewer

You are an AI assistant specialized in code review.

Please review with the following perspectives:
- Finding bugs and potential issues
- Suggesting performance and readability improvements
- Checking compliance with best practices
```
Your `code-reviewer` sub-agent is ready to use.
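To invoke it, phrase the request just like before (the file path here is only an illustration):

> "Use the code-reviewer agent to review src/utils.ts"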
## Creating Effective Sub-agents
Claude Code's official documentation explains the concept:
> Custom sub agents in Claude Code are specialized AI assistants that can be invoked to handle specific types of tasks. They enable more efficient problem-solving by providing task-specific configurations with customized system prompts, tools and a separate context window.
sub-agents-mcp follows this pattern. It interprets Markdown files in the `AGENTS_DIR` directory as agent definitions and passes the entire file content as system context to Cursor CLI or Claude Code (see the sketch after the tree below).

```
.claude
┗ agents
  ┣ code-reviewer.md      # Works as "code-reviewer" agent
  ┗ document-reviewer.md  # Works as "document-reviewer" agent
```
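Here's a minimal sketch of that discovery idea, assuming plain Node.js file APIs; the function name and structure are hypothetical, not the server's actual code:

```ts
import { readdir, readFile } from 'node:fs/promises'
import { basename, join } from 'node:path'

// Hypothetical sketch: each Markdown file in AGENTS_DIR becomes an agent
// whose name is the filename and whose system context is the file content.
async function loadAgents(agentsDir: string): Promise<Map<string, string>> {
  const agents = new Map<string, string>()
  for (const entry of await readdir(agentsDir)) {
    if (!entry.endsWith('.md')) continue
    const name = basename(entry, '.md') // "code-reviewer.md" -> "code-reviewer"
    agents.set(name, await readFile(join(agentsDir, entry), 'utf8'))
  }
  return agents
}
```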
Key tips for creating sub-agent definitions:
- Define sub-agents with single responsibility
- Provide necessary information while excluding unnecessary details
- Break down tasks to fit within one context window
For example, I recommend separating generation tasks from review tasks. Implementation gathers lots of context that becomes unnecessary during review. Trying to do both in one task usually exhausts the context before review, resulting in poor quality.
You can find definition samples in my boilerplate repository.
### Example: Document Reviewer

Here's a real agent definition I use:

`document-reviewer.md`:
```markdown
You are a technical document reviewer.

## Primary Objective
Evaluate document completeness and consistency. Output structured review with actionable improvements.

## Input
- **target**: Absolute path to document file

## Review Process

### Phase 1: Document Analysis
1. Load document from target path
2. Extract all technical claims and requirements
3. Identify document type and expected sections

### Phase 2: Validation Checks
Execute ALL checks in order:

**Consistency Check**
- Find contradictions between sections
- Identify ambiguous statements
- Flag: If found, mark as CRITICAL severity

**Completeness Check**
- Verify mandatory sections exist
- Check technical details depth
- Flag: Missing sections = CRITICAL, insufficient detail = IMPORTANT

**Clarity Check**
- Assess technical terminology usage
- Evaluate logical flow
- Flag: Unclear sections = RECOMMENDED

## Output Requirements

### Mandatory Structure
[METADATA]
document: <filename>
review_date: <ISO-8601>
reviewer: document-reviewer

[SCORES]
consistency: <0-100>
completeness: <0-100>
clarity: <0-100>

[ISSUES]
<For each issue>
id: <ISSUE-001 format>
severity: <critical|important|recommended>
category: <consistency|completeness|clarity>
location: <section/line reference>
description: <what is wrong>
suggestion: <how to fix>

[VERDICT]
decision: <APPROVED|REJECTED|CONDITIONAL>
reason: <one sentence explanation>

### Severity Rules
- CRITICAL: Blocks approval. Contradictions, missing mandatory sections
- IMPORTANT: Should fix. Incomplete technical details, unclear requirements
- RECOMMENDED: Nice to fix. Style, minor clarity improvements

### Decision Logic
- APPROVED: No CRITICAL issues AND <3 IMPORTANT issues
- REJECTED: Any CRITICAL issue exists
- CONDITIONAL: No CRITICAL but ≥3 IMPORTANT issues

## Constraints
- Never skip mandatory sections in output
- Always provide specific line/section references
- Each suggestion must be actionable (not "improve clarity" but "replace X with Y")
- Use exact severity definitions above
```
## Implementation Challenges

### CLI Authentication and Timeouts

Cursor CLI can take a long time to respond depending on task complexity. The default timeout is 5 minutes. For complex tasks, extend it in your MCP config:

```json
"EXECUTION_TIMEOUT_MS": "600000"
```

That sets it to 600,000 ms (10 minutes, the maximum).
When I ran the document-reviewer on a 14,000-character document, Cursor CLI sometimes took over 10 minutes. For quick testing, use smaller files or simplify your agent definitions.
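The key belongs alongside the other settings in the `env` block of the earlier `mcp.json`; a sketch reusing the Step 1 values:

```json
"env": {
  "AGENTS_DIR": "/Users/username/projects/my-app/.cursor/agents",
  "AGENT_TYPE": "cursor",
  "EXECUTION_TIMEOUT_MS": "600000"
}
```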
You can specify `AGENT_TYPE` as either `cursor` (for Cursor CLI) or `claude` (for Claude Code).
To install Cursor CLI:

```bash
curl https://cursor.com/install -fsS | bash
```
Cursor CLI requires authentication before use. Sessions expire periodically, so if the MCP stops responding, try logging in again:

```bash
cursor-agent login
```
## Technical Implementation Details

### MCP Server Structure
```ts
import {
  CallToolRequestSchema,
  ListResourcesRequestSchema,
  type CallToolResult,
  type ListResourcesResult,
} from '@modelcontextprotocol/sdk/types.js'

// Excerpt: `server`, `runAgentTool`, and `agentResources` are initialized
// elsewhere in the class; `ValidationError` is a project-defined error type.
export class McpServer {
  private setupHandlers(): void {
    // Implementing the run_agent tool
    this.server.setRequestHandler(
      CallToolRequestSchema,
      async (request): Promise<CallToolResult> => {
        if (request.params.name === 'run_agent') {
          const result = await this.runAgentTool.execute(request.params.arguments)
          return result as CallToolResult
        }
        throw new ValidationError(`Unknown tool: ${request.params.name}`)
      }
    )

    // Publishing resources (exposing agent definitions as MCP resources)
    this.server.setRequestHandler(
      ListResourcesRequestSchema,
      async (): Promise<ListResourcesResult> => {
        const resources = await this.agentResources.listResources()
        return { resources }
      }
    )
  }
}
```
## Building This MCP with Agentic Coding
When implementing this MCP server, I used my Agentic Coding boilerplate, which comes with several sub-agents that helped ensure code quality throughout development:
- **Adaptive Rule Selection**: The `rule-advisor` sub-agent analyzed each task and selected only the necessary coding rules, keeping contexts lean
- **Staged Quality Assurance**: The `quality-fixer` sub-agent automatically ran type checks and tests, fixing errors before they accumulated
- **Pre-implementation Approval**: The `TodoWrite` pattern required my approval before code changes, preventing the AI from going off-track
These boilerplate sub-agents made it possible to build this MCP server reliably — they're examples of the very patterns this MCP now enables for everyone.
### run_agent Tool Specification

The `run_agent` tool accepts:

- `agent`: Agent name to execute (required)
- `prompt`: Instructions for the agent (required)
- `cwd`: Working directory (optional)
- `extra_args`: Additional command-line arguments (optional)

`sub-agents-mcp` passes the entire agent definition file content as system context to Cursor CLI or Claude Code.
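Conceptually, that composition step can be pictured like this. A deliberately simplified sketch, not the server's actual code; the function name and separator are made up:

```ts
// Hypothetical sketch: the agent's Markdown definition acts as the system
// context, and the caller's prompt is appended as the concrete task.
function buildCliInput(agentDefinition: string, prompt: string): string {
  return `${agentDefinition}\n\n---\n\nTask:\n${prompt}`
}
```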
## What I Learned

I initially thought it would be simple: just pass prompts to CLI tools and return responses. But I needed streaming interactions, and response formats differ between LLMs, requiring careful abstraction, along the lines sketched below.
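To give a flavor of that abstraction, one way to hide per-CLI differences is a small adapter interface; the names here are hypothetical, not the actual design:

```ts
// Hypothetical adapter sketch: one implementation per backend (cursor, claude),
// each turning that CLI's streaming output into a single normalized string.
interface CliAdapter {
  run(input: string, cwd?: string): Promise<string>
}
```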
Since this was a hobby project for setting up my Agentic Coding environment, it took longer than expected, with significant refactoring mid-project. Still, being able to quickly build exactly what I need is satisfying.
Through this development, I realized that effective AI tool usage requires proper context management and task decomposition.
## Future Plans

While it's already usable, I'm looking forward to JSON output support once Cursor CLI adds it. The server currently receives streaming text; structured output would open up more possibilities.
Got ideas for new agents or improvements? Issues and PRs are welcome.
## Repository

[shinpr/sub-agents-mcp](https://github.com/shinpr/sub-agents-mcp): MCP server that enables AI CLI tools to invoke specialized agents.