John Napiorkowski


Building Effective MCP Servers: Advanced Patterns for Production-Ready AI Collaboration

After developing several Model Context Protocol (MCP) servers and observing successful implementations in the community, I've discovered that creating truly useful tools goes far beyond just exposing API endpoints. The key insight is that MCP servers aren't just data pipes—they're AI collaboration partners. Here are the essential patterns I've learned for building MCP servers that work seamlessly with LLMs in production environments.

Essential Pattern #1: Always Include a Help Tool

One of the most valuable additions to any MCP server is a comprehensive help tool. This serves both human users and the LLM itself, acting as a "manual" that the AI can reference to understand capabilities and guide users effectively.

async function handleHelp(): Promise<CallToolResult> {
  return {
    content: [
      {
        type: "text",
        text: `# User Management MCP Server

This server provides comprehensive user management capabilities. Here's how to use it effectively:

## Available Tools:
- **find_users**: Search for users by name, email, or other criteria
- **get_user_details**: Get comprehensive information about a specific user
- **update_user**: Modify user properties and settings

## Recommended Workflow:
1. Start with find_users to locate the user you're interested in
2. Use get_user_details to get complete information about a specific user
3. Use update_user to make changes if needed

## Search Best Practices:
- Use partial names or email addresses for broader searches
- Be more specific if you get too many results
- Use email searches for exact matches

When helping users, always guide them through this workflow and explain what each tool does.`
      }
    ]
  };
}
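A help tool like this can drift out of sync with the tools you actually expose. One way to avoid that is to generate the help text from the same tool metadata you register. A sketch (the `ToolSummary` shape and helper names are hypothetical, not part of the MCP SDK):

```typescript
// Hypothetical helper: build the help tool's markdown from tool metadata,
// so the "manual" always matches the tools actually registered.
interface ToolSummary {
  name: string;
  summary: string;
}

function buildHelpText(serverName: string, tools: ToolSummary[], workflow: string[]): string {
  const toolLines = tools.map(t => `- **${t.name}**: ${t.summary}`).join("\n");
  const workflowLines = workflow.map((step, i) => `${i + 1}. ${step}`).join("\n");
  return `# ${serverName}\n\n## Available Tools:\n${toolLines}\n\n## Recommended Workflow:\n${workflowLines}`;
}

// Example: regenerate the help text shown above from a tool list.
const helpText = buildHelpText(
  "User Management MCP Server",
  [
    { name: "find_users", summary: "Search for users by name, email, or other criteria" },
    { name: "get_user_details", summary: "Get comprehensive information about a specific user" },
    { name: "update_user", summary: "Modify user properties and settings" },
  ],
  [
    "Start with find_users to locate the user you're interested in",
    "Use get_user_details to get complete information about a specific user",
    "Use update_user to make changes if needed",
  ]
);
```

The generated string can then be returned from `handleHelp` in place of the hand-written literal.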

Essential Pattern #2: JSON Data + Guidance Pattern

This is the most important pattern: return structured JSON data in the first content block using proper MCP format, followed by separate guidance blocks that tell the LLM how to interpret and present the information. This separation makes responses both machine-readable and human-friendly.

In this example we return guidance to the client LLM depending on whether the search API returned too many users, none, or a "just right" number, including a clear description of what the response means as well as instructions the LLM should convey to the end user about what to do next.

async function findUsers(searchTerm: string): Promise<CallToolResult> {
  const users = await searchUsersInDatabase(searchTerm);

  if (users.length === 0) {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            success: false,
            message: "No users found",
            searchTerm: searchTerm,
            resultCount: 0,
            users: []
          }, null, 2),
          mimeType: "application/json"
        },
        {
          type: "text", 
          text: `**Instructions for LLM**: No users were found for "${searchTerm}". Tell the user and suggest they:
1. Try a broader search term
2. Check for typos in the search
3. Use partial names or email addresses
4. Ask if they'd like to search using different criteria`
        }
      ]
    };
  }

  if (users.length > 50) {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            success: false,
            message: "Too many results",
            searchTerm: searchTerm,
            resultCount: users.length,
            users: [],
            suggestion: "refine_search"
          }, null, 2),
          mimeType: "application/json"
        },
        {
          type: "text",
          text: `**Instructions for LLM**: Too many results were found (${users.length} users). Tell the user to:
1. Use a more specific search term
2. Include more characters in their search
3. Try searching by full email address instead of partial name
4. Consider what specific user they're looking for and provide more context`
        }
      ]
    };
  }

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({
          success: true,
          message: "Users found successfully",
          searchTerm: searchTerm,
          resultCount: users.length,
          users: users.map(u => ({
            id: u.id,
            name: u.name,
            email: u.email,
            department: u.department,
            role: u.role,
            status: u.status
          }))
        }, null, 2),
        mimeType: "application/json"
      },
      {
        type: "text",
        text: `**Instructions for LLM**: Present these ${users.length} users to the human in a clear, readable format. Let them know they can:
1. Get more detailed information about any user by asking for their details using the user ID
2. Update any user's information if needed
3. Search again with different criteria if the user they want isn't listed

Always ask which user they'd like to work with next, or if they need help with anything specific.`
      }
    ]
  };
}
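The three-way branch above (none / too many / just right) generalizes to any search tool; keeping the thresholds in one small classifier makes them easy to tune. A sketch with hypothetical names:

```typescript
// Hypothetical classifier for search result counts, mirroring the
// none / too-many / ok branches in findUsers above.
type SearchOutcome = "none" | "too_many" | "ok";

function classifyResultCount(count: number, maxResults = 50): SearchOutcome {
  if (count === 0) return "none";
  if (count > maxResults) return "too_many";
  return "ok";
}
```

Each outcome then maps to one JSON-plus-guidance response shape, so every search tool in the server degrades the same way.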

Essential Pattern #3: Detailed Schema Descriptions

Your tool descriptions and parameter schemas are critical for LLM understanding. Be verbose and specific—the extra detail pays dividends in correct tool usage.

Your schemas follow JSON Schema, much as OpenAPI does, so take advantage of descriptions and examples to teach the client about the meaning of each parameter and response.

{
  name: "find_users",
  description: `
    Search for users in the system using various criteria. This tool helps locate users 
    before performing other operations on them.

    Use this tool when:
    - A user asks to find someone by name or email
    - You need to locate a user before getting their details or updating them
    - Someone provides partial information about a user

    Returns structured JSON data with user summaries. For complete user information, 
    use get_user_details with a specific user ID.

    Error handling: Returns guidance for no results or too many results.
    See the help tool for complete workflow guidance.
  `,
  inputSchema: {
    type: "object",
    properties: {
      searchTerm: {
        type: "string",
        description: `Search term to find users. Can be:
        - Full or partial first/last name
        - Full or partial email address  
        - Employee ID or username

        Examples: "john", "smith", "john.doe@company.com", "@company.com"`
      },
      limit: {
        type: "number",
        description: "Maximum number of results to return (default: 50, max: 100)",
        default: 50
      }
    },
    required: ["searchTerm"]
  }
}
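One caveat: annotations like "default: 50, max: 100" document intent but don't enforce it — JSON Schema's `default` is advisory, and an LLM can still pass an out-of-range value. A small server-side clamp (hypothetical helper) keeps runtime behavior consistent with what the schema promises:

```typescript
// Hypothetical guard: enforce the documented default and bounds for `limit`
// regardless of what the client actually sends.
function clampLimit(limit?: number, fallback = 50, max = 100): number {
  if (limit === undefined || Number.isNaN(limit)) return fallback;
  return Math.min(Math.max(1, Math.floor(limit)), max);
}
```

Call it at the top of the tool handler, e.g. `const limit = clampLimit(params.limit);`, so a request with `limit: 250` is capped at 100 and a missing limit falls back to 50.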

Essential Pattern #4: Design Tool Chains with Structured Responses

Create tools that naturally flow into each other, with clear guidance on the progression and structured data that can be easily processed.

async function getUserDetails(userId: string): Promise<CallToolResult> {
  const user = await fetchUserFromDatabase(userId);

  if (!user) {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            success: false,
            error: "user_not_found",
            message: "User not found",
            userId: userId,
            user: null
          }, null, 2),
          mimeType: "application/json"
        },
        {
          type: "text",
          text: `**Instructions for LLM**: The user ID ${userId} doesn't exist. Tell the user:
1. The user might have been deleted or the ID is incorrect
2. They can search for users again using find_users
3. Ask them to double-check the user ID or provide the user's name/email to search again`
        }
      ]
    };
  }

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({
          success: true,
          message: "User details retrieved successfully",
          userId: userId,
          user: {
            id: user.id,
            name: user.name,
            email: user.email,
            department: user.department,
            role: user.role,
            status: user.status,
            lastLogin: user.lastLogin,
            permissions: user.permissions,
            recentActivity: user.recentActivity.slice(0, 5).map(a => ({
              date: a.date,
              action: a.action,
              details: a.details
            })),
            metadata: {
              createdDate: user.createdDate,
              lastUpdated: user.lastUpdated,
              accountType: user.accountType
            }
          }
        }, null, 2),
        mimeType: "application/json"
      },
      {
        type: "text",
        text: `**Instructions for LLM**: Present this user information clearly to the human in a well-formatted, readable way. Let them know they can:

1. **Update this user**: Ask if they want to modify any information (name, email, department, role, status, permissions)
2. **Search for other users**: If they need to find additional users
3. **Get help**: If they're unsure what they can do next

If they want to make changes, guide them to specify exactly what they want to update.`
      }
    ]
  };
}

Advanced Pattern #1: Strategic Data Format Choices

Choose your response format based on how the data will be used and consumed, but always lead with structured JSON for data responses:

Structured Data for Processing

async function getAnalyticsReport(timeframe: string): Promise<CallToolResult> {
  const analytics = await generateAnalytics(timeframe);

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({
          success: true,
          timeframe: timeframe,
          generatedAt: new Date().toISOString(),
          metrics: {
            totalUsers: analytics.totalUsers,
            activeUsers: analytics.activeUsers,
            newRegistrations: analytics.newRegistrations,
            loginFrequency: analytics.loginFrequency
          },
          trends: {
            userGrowth: analytics.trends.userGrowth,
            engagementChange: analytics.trends.engagementChange,
            topDepartments: analytics.trends.topDepartments
          },
          charts: {
            dailyActivity: analytics.charts.dailyActivity,
            departmentBreakdown: analytics.charts.departmentBreakdown
          }
        }, null, 2),
        mimeType: "application/json"
      },
      {
        type: "text",
        text: `**Instructions for LLM**: This analytics data covers ${timeframe}. Present the key metrics in a digestible summary format. Highlight:
1. The most significant trends (growth/decline)
2. Notable patterns in user behavior  
3. Any concerning metrics that need attention
4. Actionable insights from the data

Ask if they want to drill down into specific metrics or see data for a different time period.`
      }
    ]
  };
}

HTML for Rich Interaction with JSON Data

async function getInteractiveDashboard(userId: string): Promise<CallToolResult> {
  const userData = await getUserDashboardData(userId);

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({
          success: true,
          userId: userId,
          dashboardData: userData,
          availableActions: ["update_profile", "change_password", "view_activity", "manage_permissions"]
        }, null, 2),
        mimeType: "application/json"
      },
      {
        type: "text",
        text: `
<div class="user-dashboard" data-user-id="${userId}">
  <h3>Dashboard for ${userData.name}</h3>

  <div class="quick-stats">
    <div class="stat">
      <span class="label">Last Login:</span>
      <span class="value">${userData.lastLogin}</span>
    </div>
    <div class="stat">
      <span class="label">Total Sessions:</span>
      <span class="value">${userData.totalSessions}</span>
    </div>
  </div>

  <div class="actions">
    <button onclick="showUserActions('${userId}')">Available Actions</button>
    <button onclick="refreshDashboard('${userId}')">Refresh Data</button>
  </div>
</div>

<script>
  // Keep this list in sync with availableActions in the JSON block above
  const actions = ${JSON.stringify(["update_profile", "change_password", "view_activity", "manage_permissions"])};

  function showUserActions(userId) {
    console.log('Available actions for user:', userId, actions);
  }

  function refreshDashboard(userId) {
    // Placeholder: in a real client this would re-invoke the dashboard tool
    console.log('Refreshing dashboard for user:', userId);
  }
</script>`,
        mimeType: "text/html"
      },
      {
        type: "text",
        text: `**Instructions for LLM**: This interactive dashboard shows key information for the user. The JSON data above contains all the raw information, while the HTML provides a user-friendly interface. Guide the user on what actions they can take based on the availableActions in the JSON response.`
      }
    ]
  };
}
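One caution with the HTML approach: values like `userData.name` are interpolated directly into markup, so untrusted data should be escaped before it lands in an HTML content block. A minimal escaping helper:

```typescript
// Minimal HTML escaping for values interpolated into HTML content blocks.
// Order matters: '&' must be replaced first.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

In the dashboard above you would write `${escapeHtml(userData.name)}` rather than `${userData.name}`, so a name containing markup can't break or inject into the rendered page.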

Advanced Pattern #2: State Management with Structured Context

For complex workflows, maintain context through structured JSON responses. In this example we assign an auditId at the start of the workflow. This ID needs to be linked, through your session-management layer, to the MCP client session id that was created when the client first connected to your MCP server.

async function beginUserAudit(departmentId: string): Promise<CallToolResult> {
  const auditId = generateAuditId();
  const users = await getUsersByDepartment(departmentId);

  const auditState = {
    auditId,
    departmentId,
    status: 'initialized',
    progress: {
      totalUsers: users.length,
      processedUsers: 0,
      currentStep: 'initialization'
    },
    userIds: users.map(u => u.id),
    startTime: new Date().toISOString(),
    findings: []
  };

  // Store audit state
  await storeAuditState(auditId, auditState);

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({
          success: true,
          message: "Audit initialized successfully",
          audit: auditState,
          nextActions: ["continue_audit", "get_audit_status", "cancel_audit"]
        }, null, 2),
        mimeType: "application/json"
      },
      {
        type: "text", 
        text: `**Instructions for LLM**: An audit has been started with ID ${auditId} for ${users.length} users. 
        Tell the user the audit is ready and explain they can:
        1. Continue with the audit process using continue_user_audit
        2. Check audit progress anytime with get_audit_status  
        3. Cancel if needed with cancel_audit

        Ask if they want to proceed with the audit.`
      }
    ]
  };
}
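The session linkage described above can start out as a simple per-session map. A minimal in-memory sketch (a real server would use durable storage, and the session id would come from your transport layer; the names mirror the example but key by session as well):

```typescript
// Minimal in-memory audit-state store, keyed by MCP session id then audit id.
// Assumption: you resolve sessionId from your transport layer per request.
type AuditState = { auditId: string; status: string; [key: string]: unknown };

const auditsBySession = new Map<string, Map<string, AuditState>>();

function generateAuditId(): string {
  return `audit-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
}

function storeAuditState(sessionId: string, state: AuditState): void {
  if (!auditsBySession.has(sessionId)) {
    auditsBySession.set(sessionId, new Map());
  }
  auditsBySession.get(sessionId)!.set(state.auditId, state);
}

function getAuditState(sessionId: string, auditId: string): AuditState | undefined {
  return auditsBySession.get(sessionId)?.get(auditId);
}
```

Keying by session first means two clients can run independent audits without seeing each other's state, and cleanup is a single `auditsBySession.delete(sessionId)` when the session closes.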

Advanced Pattern #3: Performance and Pagination with Structured Responses

Handle large datasets gracefully with smart pagination and structured metadata:

interface SearchOptions {
  limit?: number;
  offset?: number;
}

async function searchLargeDataset(query: string, options: SearchOptions = {}): Promise<CallToolResult> {
  const { limit = 25, offset = 0 } = options;

  const totalCount = await getSearchResultCount(query);

  if (totalCount === 0) {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            success: false,
            message: "No results found",
            query: query,
            totalCount: 0,
            results: [],
            suggestions: ["broaden_search", "check_spelling", "try_partial_terms"]
          }, null, 2),
          mimeType: "application/json"
        },
        {
          type: "text",
          text: `**Instructions for LLM**: No results found for "${query}". Suggest the user try broader search terms or check for typos.`
        }
      ]
    };
  }

  if (totalCount > 1000) {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            success: false,
            message: "Too many results",
            query: query,
            totalCount: totalCount,
            results: [],
            recommendation: "refine_search",
            suggestions: ["add_filters", "use_specific_terms", "narrow_scope"]
          }, null, 2),
          mimeType: "application/json"
        },
        {
          type: "text",
          text: `**Instructions for LLM**: Search returned ${totalCount} results - too many to process. Ask the user to refine their search with more specific terms or filters.`
        }
      ]
    };
  }

  const results = await searchWithPagination(query, limit, offset);
  const hasMore = totalCount > (offset + limit);

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({
          success: true,
          message: "Search completed successfully",
          query: query,
          pagination: {
            totalCount: totalCount,
            currentPage: Math.floor(offset / limit) + 1,
            totalPages: Math.ceil(totalCount / limit),
            limit: limit,
            offset: offset,
            hasMore: hasMore,
            showing: {
              from: offset + 1,
              to: Math.min(offset + limit, totalCount)
            }
          },
          results: results,
          nextActions: hasMore ? ["get_next_page", "refine_search"] : ["refine_search", "get_details"]
        }, null, 2),
        mimeType: "application/json"
      },
      {
        type: "text",
        text: `**Instructions for LLM**: Show results ${offset + 1}-${Math.min(offset + limit, totalCount)} of ${totalCount} in a clear format.
        ${hasMore ? `Let the user know there are more results available and they can ask for the next page.` : 'All results shown.'}
        Guide them to get more details on specific items or refine their search.`
      }
    ]
  };
}
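The pagination block above has a fixed shape; factoring it into a helper keeps every paginated tool in the server consistent. A sketch with hypothetical names:

```typescript
// Hypothetical helper: compute the pagination metadata block used in
// paginated tool responses from totalCount, limit, and offset.
function buildPagination(totalCount: number, limit: number, offset: number) {
  return {
    totalCount,
    currentPage: Math.floor(offset / limit) + 1,
    totalPages: Math.ceil(totalCount / limit),
    limit,
    offset,
    hasMore: totalCount > offset + limit,
    showing: {
      from: offset + 1,
      to: Math.min(offset + limit, totalCount),
    },
  };
}
```

The handler then becomes `pagination: buildPagination(totalCount, limit, offset)`, and the guidance block can interpolate `showing.from`/`showing.to` instead of recomputing them inline.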

Production Considerations

Error Recovery with Structured Responses

async function resilientToolCall(operation: () => Promise<any>): Promise<CallToolResult> {
  try {
    const result = await operation();
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            success: true,
            data: result,
            timestamp: new Date().toISOString()
          }, null, 2),
          mimeType: "application/json"
        }
      ]
    };
  } catch (error: any) {
    const errorResponse: { success: boolean; error: Record<string, unknown> } = {
      success: false,
      error: {
        code: error.code || 'UNKNOWN_ERROR',
        message: error.message,
        timestamp: new Date().toISOString()
      }
    };

    let guidance = '';

    if (error.code === 'RATE_LIMITED') {
      errorResponse.error.retryAfter = error.retryAfter;
      guidance = `**Instructions for LLM**: The system is experiencing high load. Tell the user to try again in ${error.retryAfter} seconds.`;
    } else if (error.code === 'PERMISSION_DENIED') {
      errorResponse.error.requiredPermissions = error.requiredPermissions;
      guidance = `**Instructions for LLM**: The user lacks required permissions: ${error.requiredPermissions.join(', ')}. Guide them to contact their administrator.`;
    } else {
      guidance = `**Instructions for LLM**: An unexpected error occurred. Apologize and suggest they try again or contact support with error code: ${error.code}`;
    }

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(errorResponse, null, 2),
          mimeType: "application/json"
        },
        {
          type: "text",
          text: guidance
        }
      ]
    };
  }
}

Monitoring and Observability

Include telemetry in your tools to understand usage patterns:

async function instrumentedToolCall(toolName: string, params: any, operation: () => Promise<CallToolResult>): Promise<CallToolResult> {
  const startTime = Date.now();

  try {
    const result = await operation();

    // Log successful operation
    logger.info('tool_call_success', {
      tool: toolName,
      duration: Date.now() - startTime,
      params: sanitizeParams(params)
    });

    return result;
  } catch (error) {
    // Log error with context
    logger.error('tool_call_error', {
      tool: toolName,
      duration: Date.now() - startTime,
      error: error.message,
      params: sanitizeParams(params)
    });

    throw error;
  }
}
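The `sanitizeParams` call above is assumed rather than shown; a minimal sketch that redacts common sensitive keys before they reach your logs might look like this (the key list is illustrative — extend it for your own parameter names):

```typescript
// Hypothetical sanitizer: redact sensitive fields before logging tool params.
const SENSITIVE_KEYS = new Set(["password", "token", "apiKey", "secret"]);

function sanitizeParams(params: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(params)) {
    clean[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : value;
  }
  return clean;
}
```

This keeps the telemetry useful (you still see which parameters were supplied) without persisting credentials in log storage.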

Example Workflow Implementation

Here's how these patterns work together in a real interaction:

User: "What can this tool do?"

LLM Response (after calling help tool):

This user management server can help you find, view, and update user information. Here's how to get started:

  1. Search for users - I can find users by name, email, or other criteria
  2. Get detailed information - Once we find a user, I can show you their complete profile
  3. Update user information - I can help you modify user details, permissions, and settings

Would you like to search for a specific user to get started?

User: "Find users named john"

LLM (calls find_users with "john", receives structured JSON response):

I found 5 users with "john" in their name:

John Smith - john.smith@company.com (ID: 1234)

  • Department: Engineering, Role: Senior Developer

John Doe - j.doe@company.com (ID: 1235)

  • Department: Marketing, Role: Marketing Manager

Johnson Miller - johnson.miller@company.com (ID: 1236)

  • Department: Sales, Role: Sales Representative

Johnny Wilson - johnny.w@company.com (ID: 1237)

  • Department: IT, Role: System Administrator

John Anderson - john.anderson@company.com (ID: 1238)

  • Department: HR, Role: HR Specialist

Which user would you like to get more details about? I can show you their complete profile, recent activity, and permissions.

User: "Get details for John Smith"

LLM (calls get_user_details with ID 1234, receives structured JSON):

Here are the complete details for John Smith:

Basic Information:

  • Email: john.smith@company.com
  • Department: Engineering
  • Role: Senior Developer
  • Status: Active
  • Last Login: 2024-01-15 09:30 AM

Permissions:

  • Code Repository Access
  • Development Environment
  • Project Management Tools

Recent Activity:

  • 2024-01-15: Logged into development environment
  • 2024-01-14: Updated project documentation
  • 2024-01-13: Submitted code review

Is there anything you'd like to update about John's account? I can help you modify his role, department, permissions, or other information.

Key Takeaways for Production MCP Servers

  1. Structured JSON with mimeType - Use type: "text" with mimeType: "application/json" for all structured data responses
  2. Separate guidance blocks - Use additional text content blocks for LLM instructions and context
  3. Help tools are essential - Both for users and LLMs to understand capabilities
  4. Detailed schemas matter - Verbose descriptions help LLMs use tools correctly
  5. Design tool chains - Make tools flow naturally with structured responses
  6. Handle errors gracefully - Return structured error data with guidance
  7. Plan for scale - Include pagination metadata in structured responses
  8. Manage state explicitly - Use structured context objects for complex workflows
  9. Always include next actions - Tell the LLM what the user can do next
  10. Build for resilience - Structure error responses consistently
  11. Add observability - Include logging and monitoring for production use

The goal is to create responses that are both machine-readable (proper JSON content with mimeType) and human-friendly (through LLM guidance). This dual approach ensures your MCP servers can handle real-world complexity while maintaining excellent user experience.

Remember: the JSON content tells the LLM what happened, while the guidance blocks tell it how to help the user with that information. Together, they create a powerful AI collaboration experience that feels natural and intuitive to users.
