When building MCP servers, we often focus on the happy path: what happens when tools execute successfully. But what about when things go wrong? The quality of your error responses can make the difference between a frustrated user and an AI that recovers gracefully on its own.
Understanding MCP Error Types: Protocol vs Tool Errors
Before diving into error response strategies, it's crucial to understand the distinction between two types of errors in MCP:
MCP Protocol-Level Errors
These are errors in the MCP communication itself:
- Connection closed or request timeout
- Tool not found
- Malformed requests or protocol violations
- Internal server errors
These errors trigger standard JSON-RPC error responses and typically indicate something is fundamentally broken with the request or the server.
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32001,
    "message": "Request Timeout"
  }
}
```
Tools/call Errors (The Focus of This Article)
These are errors that occur during tool execution. The tool was found and called, but something went wrong during processing. These should not be returned as MCP protocol errors, but as successful JSON-RPC responses with isError: true in the result payload.
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "An error occurred."
      }
    ],
    "isError": true
  }
}
```
Tools/call Error Responses Are Context, Not Dead Ends
Why bother with so much detail about the difference between these two error formats? They're both still errors, right? Surely nothing that needs special attention.
Wrong! MCP protocol-level errors are captured by the MCP client, eventually surfaced in the UI (like a notification in Claude), and discarded. On the other hand, tools/call errors are injected back into the LLM context window, just like successful responses. Smart error messages can be leveraged by the model as much as any other prompt, giving it a chance to recover from the error without human intervention.
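To make this concrete, here is a minimal sketch of the pattern on the server side, assuming the official TypeScript MCP SDK (the McpServer.tool() helper with Zod parameter schemas); the delete_report and list_reports tools and the deleteReport() helper are made up for illustration. The point is that failures inside the handler are returned as a tool result with isError: true, not thrown, so they land in the model's context instead of surfacing as opaque protocol errors.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical business logic; replace with your own implementation.
async function deleteReport(reportId: string): Promise<void> {
  // e.g. a database or API call that may throw
}

const server = new McpServer({ name: "reports", version: "1.0.0" });

server.tool("delete_report", { reportId: z.string() }, async ({ reportId }) => {
  try {
    await deleteReport(reportId);
    return {
      content: [{ type: "text", text: `Report ${reportId} deleted.` }],
    };
  } catch (err) {
    // Return the failure as a tool result so it lands in the model's context,
    // instead of throwing and producing an opaque protocol-level error.
    const reason = err instanceof Error ? err.message : String(err);
    return {
      content: [
        {
          type: "text",
          text: `Could not delete report ${reportId}: ${reason}. Check that the report exists with the list_reports tool, then retry.`,
        },
      ],
      isError: true,
    };
  }
});

await server.connect(new StdioServerTransport());
```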
Most open-source MCP implementations I've seen return generic tool error messages that leave the AI (and users) in the dark. Let's look at what it takes to rework error messages and increase your server's overall quality.
3 Use Cases for Better Error Responses
Here are examples of improved error messages that raise the model's task completion rate (the north-star metric for evaluating MCP server quality).
Tool Ordering Guidance
If the application's state prevents the model from using a tool for a given resource, provide instructions on how to update that state to make the tool usable. For example, if you're a famous three-letter infrastructure company exposing a tool to terminate an instance, but this tool can only be called when the instance is in a stopped state, say so in the error message.
```json
{
  "content": [
    {
      "type": "text",
      "text": "You can't terminate an instance in the running state. Use the stop_instance tool first on this instance."
    }
  ],
  "isError": true
}
```
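Server-side, this is just a precondition check before the destructive call. Here is a sketch, with hypothetical getInstanceState() and terminateInstance() helpers standing in for the real infrastructure API, returning the same result shape as the payload above:

```typescript
// Hypothetical infrastructure helpers; swap in your provider's real API calls.
async function getInstanceState(
  instanceId: string
): Promise<"running" | "stopping" | "stopped"> {
  return "running";
}
async function terminateInstance(instanceId: string): Promise<void> {}

async function handleTerminateInstance(instanceId: string) {
  const state = await getInstanceState(instanceId);

  if (state !== "stopped") {
    // Tell the model which tool unblocks this one, not just that the call failed.
    return {
      content: [
        {
          type: "text",
          text: `You can't terminate an instance in the ${state} state. Use the stop_instance tool first on this instance.`,
        },
      ],
      isError: true,
    };
  }

  await terminateInstance(instanceId);
  return {
    content: [{ type: "text", text: `Instance ${instanceId} terminated.` }],
  };
}
```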
Refined Validation Messages
When tool input validation criteria aren't fully representable in JSON Schema, use tool error messages to give the model additional context. If you're a travel company exposing a booking tool on your MCP server and the model mistakenly uses the wrong year in a booking request, you can correct it:
```json
{
  "content": [
    {
      "type": "text",
      "text": "The requested travel date cannot be set in the past. You requested travel on July 31st, 2024, but the current date is July 25th, 2025. Did you mean to plan for travel on July 31st, 2025 instead?"
    }
  ],
  "isError": true
}
```
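The validation itself is unremarkable; the value is in the message you send back. Here is a sketch of the date check as a hypothetical validateTravelDate() helper that a booking tool handler would call before proceeding:

```typescript
// Returns an instructive tool error result if the requested date is in the past,
// or null if the date is valid and the booking can proceed.
function validateTravelDate(requestedDate: string, now: Date = new Date()) {
  const requested = new Date(requestedDate);

  if (requested.getTime() >= now.getTime()) {
    return null; // valid: nothing to report
  }

  // Suggest the most likely correction: the same day and month in the current year.
  const suggestion = new Date(requested);
  suggestion.setFullYear(now.getFullYear());
  if (suggestion.getTime() < now.getTime()) {
    suggestion.setFullYear(now.getFullYear() + 1);
  }

  return {
    content: [
      {
        type: "text",
        text:
          `The requested travel date cannot be set in the past. You requested travel on ` +
          `${requested.toDateString()}, but the current date is ${now.toDateString()}. ` +
          `Did you mean to plan for travel on ${suggestion.toDateString()} instead?`,
      },
    ],
    isError: true,
  };
}
```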
Smart Unknown Error Handling
Even when you can't provide precise details about an error, give the model instructions on when to retry and which fallback actions to direct the user toward:
```json
{
  "content": [
    {
      "type": "text",
      "text": "An unknown error occurred. Try again immediately. If this is the third time you've hit this issue, provide the user with a link to https://mydashboard.example.com/manual-task so they can perform the task manually."
    }
  ],
  "isError": true
}
```
Conclusion
Error handling in MCP isn't just about failing gracefully. It's about creating collaborative experiences where the AI can self-correct and recover. By treating error responses as contextual guidance rather than terminal states, you transform frustrating dead ends into stepping stones toward success.
Remember: every error response is an opportunity to teach the AI how to do better next time.
What patterns have you found effective for MCP error handling? Share your experiences in the comments below.