Chaining MCP Tools: Search → Read → Analyze → Write in TypeScript
Building sophisticated AI agents requires more than simple prompt-response interactions. Real-world automation demands multi-step workflows where AI performs sequential operations—gathering information, processing it, and taking action. This is where NeuroLink's Model Context Protocol (MCP) tool chaining capabilities transform what's possible with TypeScript.
The Anatomy of a Tool Chain
Consider a typical developer workflow: searching GitHub for relevant code, reading files, analyzing patterns, and creating issues. Previously, this required orchestrating multiple API calls, managing state, and handling errors across different services. NeuroLink unifies these operations into a single, coherent AI workflow.
Let's build a code review agent that chains MCP tools:
import { NeuroLink } from "@juspay/neurolink";
const neurolink = new NeuroLink();
// Connect GitHub MCP server
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_PERSONAL_ACCESS_TOKEN: process.env.GITHUB_TOKEN },
});
// Connect code analysis MCP server
await neurolink.addExternalMCPServer("code-analyzer", {
  transport: "http",
  url: "https://api.codeanalysis.tools/mcp",
  headers: { Authorization: "Bearer YOUR_API_KEY" },
});
// Execute a multi-step workflow
const result = await neurolink.generate({
  input: {
    text: `Search the "acme-corp/payments" repo for all files using the "processPayment()"
      function, analyze the code for security vulnerabilities, and create a GitHub
      issue titled "Security Review: processPayment() usage" with findings.`,
  },
  model: "claude-4-sonnet",
  provider: "anthropic",
});
// NeuroLink orchestrates the chain:
// 1. Calls github.search_code()
// 2. Calls github.read_file() for each result
// 3. Calls code-analyzer.analyze() on content
// 4. Calls github.create_issue() with findings
Understanding the Chain
The beauty of NeuroLink's approach is that the LLM decides the optimal sequence of tool calls. When given a complex task, the AI breaks it down into discrete steps:
Step 1: Search
└─ MCP Tool: github.search_code
└─ Query: "processPayment repo:acme-corp/payments"
└─ Result: 5 files found
Step 2: Read
├─ MCP Tool: github.read_file (file 1)
├─ MCP Tool: github.read_file (file 2)
└─ ... (parallel execution)
Step 3: Analyze
└─ MCP Tool: code-analyzer.analyze
└─ Input: Combined file contents
└─ Result: 3 security findings
Step 4: Write
└─ MCP Tool: github.create_issue
└─ Title: "Security Review: processPayment() usage"
└─ Body: Formatted analysis with code snippets
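Conceptually, the orchestrated chain threads each step's output into the next one's input. Here is a framework-agnostic sketch of that loop; the `ChainStep` type, the `runChain` helper, and the stubbed steps are illustrative only, not NeuroLink internals:

```typescript
// A chain step takes the previous step's output and returns its own.
type ChainStep = (input: unknown) => Promise<unknown>;

// Run steps in order, threading each result into the next step.
async function runChain(steps: ChainStep[], initial: unknown): Promise<unknown> {
  let value = initial;
  for (const step of steps) {
    value = await step(value);
  }
  return value;
}

// Stub versions of the four tools in the chain above.
const search = async (_query: unknown) => ["file1.ts", "file2.ts"]; // github.search_code
const read = async (files: unknown) =>
  (files as string[]).map((f) => `contents of ${f}`); // github.read_file
const analyze = async (contents: unknown) =>
  ({ findings: (contents as string[]).length }); // code-analyzer.analyze
const write = async (report: unknown) =>
  `issue created with ${(report as { findings: number }).findings} findings`; // github.create_issue

const outcome = await runChain([search, read, analyze, write], "processPayment");
// outcome: "issue created with 2 findings"
```

The difference in practice is that the LLM, not your code, decides which steps to run and in what order; the loop above is what that decision ultimately executes.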
Building Intelligent Routers
For production applications, you need more than basic tool execution. NeuroLink provides a ToolRouter for intelligent routing across multiple MCP servers:
import { ToolRouter, ToolCache } from "@juspay/neurolink";
// Route calls based on capability matching
const router = new ToolRouter({
  strategy: "capability-based",
  servers: [
    {
      name: "primary-db",
      url: "https://mcp-primary.db.internal/mcp",
      capabilities: ["query", "transaction"],
    },
    {
      name: "analytics-db",
      url: "https://mcp-analytics.db.internal/mcp",
      capabilities: ["aggregate", "report"],
    },
    {
      name: "github",
      url: "https://mcp-github.tools.internal/mcp",
      capabilities: ["repo", "issue", "pr"],
    },
  ],
});
// Cache expensive operations
const cache = new ToolCache({
  strategy: "lru",
  maxSize: 500,
  ttl: 60_000,
});
// Use in your NeuroLink instance
await neurolink.configureMCP({
  router,
  cache,
  batcher: { maxBatchSize: 10, maxWaitMs: 50 },
});
Human-in-the-Loop for Critical Actions
Not all tool chains should execute autonomously. NeuroLink's HITL (Human-in-the-Loop) system ensures human approval for sensitive operations:
const neurolink = new NeuroLink({
  hitl: {
    enabled: true,
    requireApproval: ["github.create_issue", "github.merge_pr", "db.execute"],
    confidenceThreshold: 0.85,
    reviewCallback: async (action, context) => {
      // Send to Slack for approval
      const approval = await slack.sendApprovalRequest({
        action: action.name,
        parameters: action.params,
        context: context.conversationSummary,
      });
      return approval.approved;
    },
  },
});
// The AI will pause before creating issues
// and wait for human approval
const result = await neurolink.generate({
  input: { text: "Create a critical bug report for the auth module" },
});
Real-World Example: Documentation Sync Agent
Here's a complete implementation that keeps documentation synchronized with code:
import { NeuroLink } from "@juspay/neurolink";

async function createDocSyncAgent() {
  const neurolink = new NeuroLink();
  // Add required MCP servers
  await neurolink.addExternalMCPServer("github", {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-github"],
    transport: "stdio",
    env: { GITHUB_PERSONAL_ACCESS_TOKEN: process.env.GITHUB_TOKEN },
  });
  await neurolink.addExternalMCPServer("notion", {
    transport: "http",
    url: "https://api.notion.com/mcp",
    headers: { Authorization: `Bearer ${process.env.NOTION_TOKEN}` },
  });
  // The agent workflow
  const syncDocumentation = async (repo: string, notionPage: string) => {
    return await neurolink.generate({
      input: {
        text: `Sync API documentation between GitHub repo "${repo}" and Notion page "${notionPage}".
          Steps:
          1. Search ${repo} for all files ending in "API.ts" or containing "@api" JSDoc
          2. Read each API file and extract endpoint definitions
          3. Compare with existing content in Notion page ${notionPage}
          4. Update Notion with any new endpoints or changes
          5. Create a summary of updates made`,
      },
      model: "claude-4-sonnet",
      provider: "anthropic",
      hitl: {
        enabled: true,
        requireApproval: ["notion.update_page"],
      },
    });
  };
  return { syncDocumentation };
}

// Usage
const agent = await createDocSyncAgent();
await agent.syncDocumentation(
  "acme-corp/payments-api",
  "api-documentation-2024",
);
Best Practices for Tool Chaining
- Design for composability: Build small, focused MCP servers that do one thing well
- Use result caching: Expensive operations like code analysis should be cached
- Implement HITL for mutations: Always require approval for state-changing operations
- Handle failures gracefully: Design chains that can retry or skip failed steps
- Monitor execution: Use NeuroLink's telemetry to track tool usage and latency
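"Handle failures gracefully" can be as simple as wrapping each step in a bounded retry with an optional fallback, so one flaky tool call does not sink the whole chain. A small sketch (the `withRetry` helper is illustrative, not a NeuroLink API):

```typescript
// Retry an async step a bounded number of times, optionally
// returning a fallback instead of failing the whole chain.
async function withRetry<T>(
  step: () => Promise<T>,
  attempts = 3,
  fallback?: T,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await step();
    } catch (err) {
      lastError = err;
    }
  }
  if (fallback !== undefined) return fallback; // skip the failed step
  throw lastError;
}

// Example: a flaky step succeeds on the second attempt.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 2) throw new Error("transient");
  return "ok";
};
await withRetry(flaky); // "ok"
```

In production you would add backoff between attempts; the shape of the wrapper stays the same.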
Conclusion
MCP tool chaining with NeuroLink transforms AI agents from chatbots into autonomous workers capable of complex, multi-step workflows. By combining search, read, analyze, and write operations through a unified interface, you can automate sophisticated developer workflows while maintaining control and visibility.
The future of AI isn't just smarter models—it's smarter orchestration of tools that work together seamlessly.
NeuroLink — The Universal AI SDK for TypeScript
- GitHub: github.com/juspay/neurolink
- Install: npm install @juspay/neurolink
- Docs: docs.neurolink.ink
- Blog: blog.neurolink.ink — 150+ technical articles