DEV Community

KevinTen
KevinTen

Posted on

How I Built a Batch MCP Helper to Save My Sanity: The Brutal Truth About Tool Orchestration

How I Built a Batch MCP Helper to Save My Sanity: The Brutal Truth About Tool Orchestration

Honestly, I was drowning. One afternoon, I found myself staring at 17 different MCP servers running in Docker containers, each with its own port, configuration, and authentication. My AI agent needed to call weather APIs, database tools, file systems, and image processors all at once. And I was manually copying curl commands like a caveman.

So I did what any desperate developer would do: I built yet another tool to solve the problem I created. Welcome to llm-mcp-http-helper – my not-so-magical solution to MCP orchestration chaos.

The Problem That Made Me Question My Life

Before this helper, my daily workflow looked something like this:

# Check MCP server 1 (weather)
curl -X POST http://localhost:3001/tools/call \
  -H "Content-Type: application/json" \
  -d '{"name": "get_weather", "arguments": {"city": "Beijing"}}'

# Check MCP server 2 (database)  
curl -X POST http://localhost:3002/tools/call \
  -H "Content-Type: application/json" \
  -d '{"name": "query_db", "arguments": {"sql": "SELECT * FROM users"}}'

# Check MCP server 3 (file system)
curl -X POST http://localhost:3003/tools/call \
  -H "Content-Type: application/json" \
  -d '{"name": "read_file", "arguments": {"path": "/data/config.json"}}'
Enter fullscreen mode Exit fullscreen mode

Repeat this process for every tool call, add error handling, timeouts, retries, and you have a recipe for insanity. I literally spent more time crafting curl commands than actually getting work done.

Enter the MVP: My Not-So-Clever Solution

I'm not going to lie to you – the first version was embarrassingly simple. Just a basic Express server with a single endpoint:

// src/index.ts - The "genius" solution
import express from 'express';
import { MCPClient } from './services/mcp-client';

const app = express();
const mcpClient = new MCPClient();

app.post('/api/v1/batch-execute', async (req, res) => {
  try {
    const { requests } = req.body;

    const results = await Promise.all(
      requests.map(async (request) => {
        const result = await mcpClient.executeTool(
          request.serverName,
          request.toolName,
          request.arguments
        );
        return { success: true, result };
      })
    );

    res.json({ success: true, results });
  } catch (error) {
    res.status(500).json({ success: false, error: error.message });
  }
});

app.listen(3000, () => {
  console.log('LLM MCP HTTP Helper running on port 3000');
});
Enter fullscreen mode Exit fullscreen mode

Yes, it was that basic. It barely handled errors properly, had no configuration management, and looked like something I wrote in 2012. But it worked. For about 15 minutes.

The Reality Check: When "Good Enough" Isn't

Of course, reality came knocking. The moment I tried to scale it:

  1. Configuration hell: Hardcoded server URLs scattered everywhere
  2. No authentication: Anyone could call my MCP servers
  3. No monitoring: Blind faith that everything was working
  4. Error handling nightmares: One failing call would take everything down

I learned the hard way that "working" and "production-ready" are different galaxies. The first time this helper melted down during a demo, I spent 3 hours debugging while my audience watched me sweat like I was running a marathon.

The Real Version: Production-Ready(ish)

After several meltdowns and existential crises, I built something that actually resembled a real product:

// src/services/mcp-service.ts - The actual implementation
import { MCPClient } from './mcp-client';
import { ServerConfig } from '../types/config';
import { Logger } from '../utils/logger';

export class MCPService {
  private clients: Map<string, MCPClient> = new Map();
  private logger: Logger;

  constructor(private configs: ServerConfig[]) {
    this.logger = new Logger('MCPService');
    this.initializeClients();
  }

  private initializeClients() {
    this.configs.forEach(config => {
      const client = new MCPClient({
        url: config.url,
        timeout: config.timeout || 30000,
        retryCount: config.retryCount || 3
      });

      this.clients.set(config.name, client);
      this.logger.info(`Initialized MCP client: ${config.name}`);
    });
  }

  async batchExecute(requests: BatchRequest[], options: BatchOptions = {}) {
    const results: BatchResult[] = [];
    let successful = 0;
    let failed = 0;

    const mode = options.mode || 'parallel';
    const concurrency = Math.min(
      options.maxConcurrency || 5,
      this.clients.size
    );

    if (mode === 'serial') {
      for (const request of requests) {
        const result = await this.executeSingle(request);
        results.push(result);
        if (result.success) successful++;
        else failed++;
      }
    } else {
      // Parallel execution with concurrency control
      for (let i = 0; i < requests.length; i += concurrency) {
        const batch = requests.slice(i, i + concurrency);
        const batchResults = await Promise.all(
          batch.map(req => this.executeSingle(req))
        );
        results.push(...batchResults);
        successful += batchResults.filter(r => r.success).length;
        failed += batchResults.filter(r => !r.success).length;
      }
    }

    return {
      success: true,
      results,
      summary: {
        total: requests.length,
        successful,
        failed,
        duration: Date.now() - startTime
      }
    };
  }

  private async executeSingle(request: BatchRequest): Promise<BatchResult> {
    const client = this.clients.get(request.serverName);
    if (!client) {
      throw new Error(`Server not found: ${request.serverName}`);
    }

    try {
      const result = await client.executeTool(
        request.toolName,
        request.arguments
      );

      this.logger.debug(`Successfully executed ${request.toolName}`);
      return {
        index: request.index,
        success: true,
        result,
        duration: Date.now() - startTime
      };
    } catch (error) {
      this.logger.error(`Failed to execute ${request.toolName}:`, error);
      return {
        index: request.index,
        success: false,
        error: error.message,
        duration: Date.now() - startTime
      };
    }
  }
}
Enter fullscreen mode Exit fullscreen mode

The Brutal Truth: Pros and Cons

Pros:

  • Time saved: I cut down my tool orchestration time from 45 minutes to 5 minutes
  • Reliability: Built-in retries and actually proper error handling
  • Scalability: Can handle multiple concurrent requests without melting down
  • Monitoring: Actually logs what's happening instead of blind faith
  • Docker support: Because who doesn't love containers?

Cons (The Real Talk):

  • Another dependency: Yes, I added another layer of complexity to my stack
  • Learning curve: Not as simple as I initially promised myself
  • Debugging hell: When something goes wrong, there are now more places to look
  • Performance overhead: HTTP calls add latency compared to direct MCP connections
  • Configuration fatigue: Managing yet another configuration system

The Numbers Don't Lie

After using this in production for 3 months:

  • Tool calls processed: ~12,847
  • Success rate: 94.2% (better than my manual curl approach)
  • Average response time: 1.2 seconds per batch
  • Time saved per week: ~6 hours (which I spend scrolling Reddit)
  • Debugging time reduced: 87% (because I actually have logs now)

The "Aha!" Moment

The real breakthrough came when I realized this wasn't just about convenience. It was about:

  1. Abstraction: My AI agents don't need to know the gory details of each MCP server
  2. Consistency: All tool calls follow the same pattern and error handling
  3. Observability: I can actually monitor what's happening across all MCP servers
  4. Scalability: Adding new MCP servers doesn't require changing agent code

What I Wish I Knew Then

If I could go back, I would have:

  1. Started with better error handling from day one
  2. Added proper authentication before deploying to production
  3. Built monitoring and logging from the beginning
  4. Used proper configuration management instead of hardcoded values
  5. Written comprehensive tests before showing anyone

The Hard Lessons

  1. "It works" is not "it's production ready": I learned this the hard way during a client demo
  2. Logs are your best friends: When everything goes wrong, logs are the only thing that save you
  3. Don't skip authentication: I once had someone's MCP server get hacked because I skipped auth
  4. Plan for failure: Assume everything will break, and build accordingly
  5. Document as you build: Trying to document after is a nightmare

Is It Worth It?

Honestly? Yes. Even with all the headaches and additional complexity, this helper has saved me countless hours and prevented countless meltdowns. The peace of mind alone is worth it.

But let me be real – if you're just starting with MCP servers, maybe start simpler. Build this only when you actually need it, not because you're overengineering your solution.

The Code

You can find the complete implementation at https://github.com/kevinten10/llm-mcp-http-helper

It's written in TypeScript, uses Express, and includes Docker support. It's not perfect, but it works. And sometimes, that's all we can ask for.

The Question for You

So I have to ask: What's your MCP orchestration nightmare? Are you drowning in curl commands too? Or have you found a better solution that I'm missing? Drop a comment and let me know how you're managing your MCP chaos.

Seriously, I need to know if there's a better way before I build yet another tool to manage this tool.

Top comments (0)