DEV Community

extebarrri

How I Monitor MCP Servers in Production — Tools and Lessons Learned

Three months ago, one of my MCP servers crashed at 2 AM on a Friday. I didn't find out until Monday morning when a customer opened a support ticket. By then, 60+ API calls had failed silently, and I'd lost two days of data.

That's when I realized: MCP servers have no built-in observability. They fail quietly. There's no error dashboard, no alerts, no uptime tracking.

I spent 8 weeks building a monitoring stack for MCP servers. Here's what I learned.

The Problem

Unlike traditional SaaS APIs, MCP servers often:

  • Run on a VPS with minimal logging
  • Don't have native error tracking
  • Degrade silently — they don't crash, they return garbage responses
  • Have no built-in health check endpoints
  • Don't expose metrics in a standardized format
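To close that last two gaps myself, I bolt a tiny health endpoint onto each server process. Here's a minimal sketch — the port, path, and payload shape are my own choices, not anything MCP defines:

```python
# Minimal sidecar health endpoint for an MCP server process (illustrative).
import http.server
import json
import threading
import time

START = time.time()

def health_payload():
    # Liveness plus uptime; extend with real dependency checks (see Lesson 3).
    return {"status": "ok", "uptime_s": round(time.time() - START, 1)}

class HealthHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/healthz":
            self.send_error(404)
            return
        body = json.dumps(health_payload()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep request noise out of stdout.
        pass

def serve_health(port=9900):
    # Run the health server on a daemon thread beside the MCP server loop.
    server = http.server.HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

An external uptime checker (UptimeRobot, a cron + curl, whatever) can then poll `/healthz` every minute.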

What to Monitor

After my 2 AM incident, I identified five critical metrics:

  1. Uptime & Availability — Is the server actually handling requests?
  2. Error Rates — What percentage of requests fail? I set a 2% threshold.
  3. Response Times — p50, p95, p99 latency
  4. Token Usage — MCP servers burn tokens fast
  5. Resource Utilization — Memory, CPU, disk
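For the first three, you don't need Prometheus on day one — a small in-process tracker gets you error rates and latency percentiles immediately. A sketch (the class and method names are mine, not a real library):

```python
# Tiny in-process tracker for error rate and latency percentiles (illustrative).
import statistics

class RequestMetrics:
    def __init__(self):
        self.latencies_ms = []
        self.errors = 0
        self.total = 0

    def record(self, latency_ms, ok=True):
        # Call once per handled request.
        self.total += 1
        self.latencies_ms.append(latency_ms)
        if not ok:
            self.errors += 1

    def error_rate(self):
        return self.errors / self.total if self.total else 0.0

    def percentile(self, p):
        # statistics.quantiles with n=100 yields cut points for p1..p99.
        cuts = statistics.quantiles(self.latencies_ms, n=100)
        return cuts[p - 1]
```

In production you'd swap this for a real metrics client, but the shape — record every request, expose rate and p50/p95/p99 — stays the same.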

Real Lessons Learned

Lesson 1: Server crashes aren't your biggest problem.
Crashes page you; degradation doesn't. My real problem was a server that sat at 500ms latency for 3 days. It never went down, so nothing alerted, and customers quietly switched to competitors.

Lesson 2: Logs are not metrics.
Grepping free-text logs is fine for debugging after the fact, but you can't reliably alert on them. Emit structured metrics you can threshold instead.
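Concretely: instead of logging `"request failed"` as free text, increment a labeled counter that an alerting rule can threshold. A minimal sketch — the metric name and label scheme here are illustrative, not a standard:

```python
# Labeled counters instead of free-text log lines (illustrative).
import json
import time
from collections import Counter

counters = Counter()

def inc(metric, **labels):
    # Flatten labels into a stable key so each series can be thresholded.
    key = (metric, tuple(sorted(labels.items())))
    counters[key] += 1

def snapshot():
    # One JSON-serializable record per series — machine-readable, alertable.
    return [
        {"metric": m, "labels": dict(l), "value": v, "ts": int(time.time())}
        for (m, l), v in counters.items()
    ]

# Usage: instead of logger.error("search tool timed out"), do:
# inc("mcp_tool_errors", tool="search", code="timeout")
```

Dump `snapshot()` to a file or endpoint and any scraper can alert on `mcp_tool_errors` per tool.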

Lesson 3: Test your health checks.
My health check always returned green, even when the core service behind it had failed. A static 200 proves the process is up, not that it works.
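The fix was a "deep" health check that exercises the real tool path instead of returning a canned response. A sketch, assuming `call_tool` is a cheap no-op invocation of your actual MCP handler (the function and signature are hypothetical):

```python
# Deep health check: exercise the real tool path, don't just return 200.
import time

def deep_health_check(call_tool, slow_s=2.0):
    """call_tool: a zero-argument callable that hits the real service.

    Returns a status dict: "ok", "degraded" (slow), or "down" (raised).
    """
    start = time.time()
    try:
        call_tool()  # e.g. a cheap no-op or echo tool call
        latency = time.time() - start
        if latency > slow_s:
            return {"status": "degraded", "latency_s": latency}
        return {"status": "ok", "latency_s": latency}
    except Exception as exc:
        return {"status": "down", "error": str(exc)}
```

Wire this into whatever `/healthz` endpoint you expose, and a hung downstream dependency finally shows up as red instead of green.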

Lesson 4: Set realistic thresholds.
My initial 1% error threshold paged me 40 times a day. I've since raised it to 5%.
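Beyond raising the threshold, requiring the breach to be sustained across several consecutive checks killed most of the noise — a single bad minute no longer pages. A sketch of that idea (class and parameter names are mine):

```python
# Rolling-window error-rate alert with a sustained-breach requirement
# (illustrative; a Prometheus "for:" clause does the same job).
from collections import deque

class ErrorRateAlert:
    def __init__(self, threshold=0.05, window=100, sustain=3):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # True = success, False = error
        self.sustain = sustain  # consecutive breaching checks before paging
        self.breaches = 0

    def observe(self, ok):
        self.outcomes.append(ok)

    def check(self):
        # Returns True only after `sustain` consecutive breaching checks.
        if not self.outcomes:
            return False
        rate = self.outcomes.count(False) / len(self.outcomes)
        self.breaches = self.breaches + 1 if rate > self.threshold else 0
        return self.breaches >= self.sustain
```

Run `check()` on a timer (say, once a minute); a blip breaches once and resets, while a real incident stays breached and pages.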

Conclusion

Monitoring isn't sexy. But it's the difference between customers discovering outages and you discovering them first.

Cost: about 2 hours of setup plus $50-100/month in infrastructure. Upside: not being woken at 2 AM.

What's your monitoring approach?
