Seenivasa Ramadurai

Bridging the Gap Between LLMs and Enterprise APIs using FastMCP: How to Auto-Generate AI Tools from OpenAPI Specs

Introduction

In enterprise environments, the challenge isn't just connecting Large Language Models to systems; it's doing so at scale without drowning in manual configuration. Most organizations have dozens of microservices and APIs, each requiring custom tool definitions for LLM integration. Today, we'll explore how to bridge this gap by building a system that automatically transforms any OpenAPI specification into LLM-ready tools, making enterprise API integration both scalable and maintainable.

What We're Building

The breakthrough is that any REST API with an OpenAPI specification can be automatically converted into an MCP server and connected to LLMs using the FastMCP SDK, whether it's your CRM, inventory system, payment gateway, or user management service. No manual tool definitions, no custom integrations: just instant LLM connectivity.

In this blog, we'll demonstrate this universal approach using a FastAPI example, but the same FastMCP technique works with any OpenAPI-compliant REST API from any framework or vendor.

Our complete system has three components:

  1. A REST API - We'll use FastAPI with a friends database as our example (but this could be any OpenAPI-compliant API)
  2. An Auto-Generated MCP Server - Built with FastMCP, automatically created from any OpenAPI specification
  3. An LLM-Powered Client - Using OpenAI's GPT models to interact with the API through natural language

This architecture demonstrates how enterprises can instantly make any existing REST API accessible to LLMs without rebuilding or manual tool creation—exactly what's needed for real-world AI adoption.

Part 1: The Example API - FastAPI REST Service

For our demonstration, we'll use FastAPI to create a simple friends database management system. However, remember that this same approach works with any REST API that provides an OpenAPI specification—including Express.js, Django REST Framework, Spring Boot, .NET Web API, or even legacy systems with OpenAPI wrappers.

Let's start with our example FastAPI service:

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import Optional
import sqlite3
import uvicorn
from contextlib import contextmanager

app = FastAPI(title="Friends API", description="API for managing Sreeni's friends database")

# Database setup
DATABASE = "friends.db"

@contextmanager
def get_db_connection():
    conn = sqlite3.connect(DATABASE)
    try:
        yield conn
    finally:
        conn.close()

def init_db():
    with get_db_connection() as conn:
        cursor = conn.cursor()
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS friends (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                name TEXT NOT NULL,
                email TEXT UNIQUE,
                phone TEXT,
                age INTEGER,
                city TEXT,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        ''')
        conn.commit()

init_db()

# Pydantic models used by the endpoint below (fields mirror the table schema)
class FriendCreate(BaseModel):
    name: str
    email: Optional[str] = None
    phone: Optional[str] = None
    age: Optional[int] = None
    city: Optional[str] = None

class Friend(FriendCreate):
    id: int
    created_at: Optional[str] = None

# Sample API endpoint
@app.post("/friends/", response_model=Friend, summary="Add a new friend")
async def add_friend(friend: FriendCreate):
    """Add a new friend to Sreeni's friends database"""
    with get_db_connection() as conn:
        cursor = conn.cursor()
        try:
            cursor.execute('''
                INSERT INTO friends (name, email, phone, age, city)
                VALUES (?, ?, ?, ?, ?)
            ''', (friend.name, friend.email, friend.phone, friend.age, friend.city))
            conn.commit()

            # Return the row we just inserted
            cursor.execute('SELECT * FROM friends WHERE id = ?', (cursor.lastrowid,))
            row = cursor.fetchone()
            return Friend(
                id=row[0],
                name=row[1],
                email=row[2],
                phone=row[3],
                age=row[4],
                city=row[5],
                created_at=row[6]
            )
        except sqlite3.IntegrityError:
            raise HTTPException(status_code=400, detail="Email already exists")

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)

This API provides CRUD operations for managing friends with proper error handling and data validation. The key point is that FastAPI automatically generates an OpenAPI specification (served at /openapi.json), which we'll leverage in the next step.

Important: While we're using FastAPI as our example, any REST API that provides an OpenAPI spec can be integrated this way—whether it's built with Node.js, Python Django, Java Spring, C# .NET, Ruby on Rails, or any other technology.
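
Before building anything on top of the spec, you can sanity-check it directly. A minimal sketch, assuming the example FastAPI server above is running locally:

import httpx

# Fetch the spec and list every endpoint it documents
spec = httpx.get("http://127.0.0.1:8000/openapi.json").json()
for path, operations in spec["paths"].items():
    print(path, "->", ", ".join(operations.keys()))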

Key Features of Any Compatible REST API

  • OpenAPI Specification: The API must provide an OpenAPI/Swagger spec (most modern APIs do)
  • HTTP Methods: Standard GET, POST, PUT, DELETE operations
  • Structured Responses: JSON responses that can be parsed and understood
  • Error Handling: Proper HTTP status codes for different scenarios

Part 2: The Universal Solution - Auto-Generated MCP Server with FastMCP

Here's where the magic happens. Using FastMCP, our MCP server can work with any REST API that provides an OpenAPI specification. Whether your API is built with FastAPI, Express.js, Django, Spring Boot, or any other framework, the FastMCP process is identical:

import httpx

def get_openapi_spec():
    """Fetch the OpenAPI specification from ANY REST API server"""
    try:
        with httpx.Client() as client:
            # This URL could be any API's OpenAPI endpoint:
            # - FastAPI: http://api.example.com/openapi.json
            # - Express: http://api.example.com/api-docs.json
            # - Spring: http://api.example.com/v3/api-docs
            # - Django: http://api.example.com/schema/
            response = client.get("http://127.0.0.1:8000/openapi.json")
            response.raise_for_status()
            return response.json()
    except httpx.ConnectError:
        print("❌ Could not connect to API server.")
        return None
    except Exception as e:
        print(f"❌ Error fetching OpenAPI spec: {e}")
        return None

The Revolutionary FastMCP Approach

Instead of manually defining each tool, our FastMCP-powered MCP server:

  1. Fetches the OpenAPI specification from any running REST API server
  2. Analyzes the endpoints and their parameters automatically
  3. Generates MCP server tools for each API endpoint
  4. Maps all operations to the MCP protocol for LLM consumption

import httpx
from fastmcp import FastMCP
from fastmcp.server.openapi import RouteMap, MCPType  # import paths as of FastMCP 2.x

# The HTTP client FastMCP will use to call the underlying REST API
client = httpx.AsyncClient(base_url="http://127.0.0.1:8000")

# The spec fetched by get_openapi_spec() above
openapi_spec = get_openapi_spec()

# Create the MCP server using FastMCP and the actual OpenAPI spec
mcp = FastMCP.from_openapi(
    openapi_spec=openapi_spec,
    client=client,
    name="Enhanced Swagger-Based MCP Server",
    route_maps=[
        # Map ALL operations to Tools for OpenAI compatibility
        RouteMap(methods=["GET"], mcp_type=MCPType.TOOL),
        RouteMap(methods=["POST"], mcp_type=MCPType.TOOL),
        RouteMap(methods=["PUT"], mcp_type=MCPType.TOOL),
        RouteMap(methods=["DELETE"], mcp_type=MCPType.TOOL),
    ]
)

This FastMCP approach is revolutionary because it eliminates the manual work of defining tools for each API endpoint. Any OpenAPI-compliant service can instantly become available to LLMs through an automatically generated MCP server.
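
To make the generated tools reachable, the server needs to run over an HTTP transport. A minimal sketch, assuming a recent FastMCP release (the transport name is "http" in current versions, "streamable-http" in some earlier ones):

# Serve the MCP server on port 8001; clients connect to the default
# /mcp path, which is exactly what the client in Part 3 does
if __name__ == "__main__":
    mcp.run(transport="http", host="127.0.0.1", port=8001)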

Part 3: The AI Interface - Natural Language to API Calls

The final piece connects everything together with an intelligent client that uses OpenAI's models to understand natural language and translate it into appropriate API calls:

import asyncio
import json
from fastmcp import Client
from openai import AsyncOpenAI

async def main():
    async with Client("http://127.0.0.1:8001/mcp") as mcp_client:
        # Get available tools from MCP server
        tools = await mcp_client.list_tools()

        # Convert MCP tools to OpenAI format
        openai_tools = []
        for tool in tools:
            tool_format = {
                "type": "function",
                "function": {
                    "name": tool.name,
                    "description": tool.description,
                    "parameters": tool.inputSchema
                }
            }
            openai_tools.append(tool_format)

        # Let GPT pick a tool for a natural-language request
        openai_client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
        response = await openai_client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Show me all my friends"}],
            tools=openai_tools,
        )

        # Execute any tool calls the model requested via the MCP server
        for tool_call in response.choices[0].message.tool_calls or []:
            arguments = json.loads(tool_call.function.arguments)
            result = await mcp_client.call_tool(tool_call.function.name, arguments)
            print(result)

if __name__ == "__main__":
    asyncio.run(main())

Natural Language Processing

The AI client can understand queries like:

  • "Show me all my friends"
  • "Add a new friend named Chandar Dhall with email chandar.dhall@company.com"
  • "Find friends who live in Boston"
  • "Update Krishna phone number to 555-9876"

Each query gets processed by GPT, which determines the appropriate API endpoint and parameters needed.
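
Under the hood, the model replies with a structured tool call rather than prose. For the phone-number update above, the payload looks roughly like the following Python dict (the tool name here is hypothetical; FastMCP derives real names from each endpoint's OpenAPI operation ID):

# Illustrative tool call only; actual names and IDs depend on your spec
{
    "type": "function",
    "function": {
        "name": "update_friend_friends__friend_id__put",
        "arguments": '{"friend_id": 3, "phone": "555-9876"}'
    }
}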

The Complete Workflow

Here's how the entire system works together:

  1. User Input: "Add a new friend named Aaron with email aaron@example.com and age 25"

  2. AI Processing: OpenAI's GPT analyzes the request and determines it needs to call the POST /friends/ endpoint

  3. MCP Translation: The MCP client calls the appropriate tool with the extracted parameters

  4. API Execution: The FastAPI server processes the request and updates the database

  5. Response Processing: The result flows back through the MCP server to the AI, which formats it for the user

Benefits of This Architecture

1. Zero Manual Tool Definition

Perfect for enterprises with hundreds of API endpoints. No need to manually write tool schemas for each endpoint—the OpenAPI specification serves as the single source of truth.

2. Automatic Updates

When your enterprise APIs evolve, the LLM integration automatically adapts. No more maintenance overhead for AI tool definitions.

3. Universal Enterprise Compatibility

Any service with an OpenAPI specification (most modern enterprise APIs) can be instantly made available to LLMs.

4. Type Safety

Leverages existing API validation and type checking without duplication.

5. Scalability

Add new endpoints to your API, and they automatically become available to AI agents.

Real-World Applications

This pattern opens up massive enterprise possibilities:

  • Enterprise Integration at Scale: Connect LLMs to dozens of internal microservices simultaneously
  • Third-Party API Integration: Make external enterprise services accessible through natural language
  • Legacy System Modernization: Wrap older enterprise systems with OpenAPI and instant LLM access
  • Rapid AI Prototyping: Quickly test LLM interactions with any enterprise API

Getting Started

To implement this system:

  1. Set up your FastAPI server with proper OpenAPI documentation
  2. Deploy the MCP server that reads your OpenAPI spec
  3. Configure the AI client with your OpenAI API key
  4. Start conversing with your API through natural language
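
All four steps use ordinary Python packages; a typical environment is `pip install fastapi uvicorn fastmcp openai httpx`, plus an `OPENAI_API_KEY` environment variable for the AI client (package names current as of writing; confirm against each project's docs).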

Best Practices

API Design

  • Use clear, descriptive endpoint names and documentation
  • Implement proper error handling with meaningful messages
  • Follow RESTful conventions for consistency

MCP Server Configuration

  • Handle connection errors gracefully
  • Implement proper logging for debugging
  • Consider caching OpenAPI specs for performance (see the sketch below)
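
For the caching point, a minimal sketch that reuses the get_openapi_spec() helper from Part 2 (the cache filename is arbitrary):

import json
import pathlib

SPEC_CACHE = pathlib.Path("openapi_spec_cache.json")

def get_openapi_spec_cached():
    # Prefer a live spec, but fall back to the last known-good copy
    spec = get_openapi_spec()
    if spec is not None:
        SPEC_CACHE.write_text(json.dumps(spec))
        return spec
    if SPEC_CACHE.exists():
        print("⚠️ API unreachable; using cached OpenAPI spec")
        return json.loads(SPEC_CACHE.read_text())
    return None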

AI Client Implementation

  • Use appropriate system prompts for context
  • Handle tool call errors elegantly
  • Implement rate limiting for API protection (a simple client-side throttle is sketched below)
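
For the rate-limiting point, one client-side option is a small throttle around tool calls; a rough sketch with illustrative limits:

import asyncio

class ToolCallThrottle:
    def __init__(self, max_concurrent: int = 2, min_interval: float = 0.5):
        # Cap concurrent tool calls and enforce minimum spacing between them
        self._semaphore = asyncio.Semaphore(max_concurrent)
        self._min_interval = min_interval

    async def call(self, mcp_client, name, arguments):
        async with self._semaphore:
            result = await mcp_client.call_tool(name, arguments)
            await asyncio.sleep(self._min_interval)  # simple pacing
            return result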

Enterprise-Grade Security, Logging, and Auditing

One of the most powerful aspects of this FastMCP approach is that security, logging, and auditing can be configured as MCP server middleware, handling these cross-cutting concerns at the infrastructure level rather than in each individual API.

MCP Server Middleware for Cross-Cutting Concerns

FastMCP middleware allows you to add cross-cutting functionality to your server that can inspect, modify, and respond to all MCP requests and responses. This is perfect for enterprise requirements:

# Import path as of FastMCP 2.x (earlier releases may differ)
from fastmcp.server.middleware import Middleware

class SecurityAuditMiddleware(Middleware):
    async def on_call_tool(self, context, call_next):
        # Log the request with user context
        # (audit_logger, check_permissions, and the user/IP fields below are
        # application-specific pieces you supply, not FastMCP built-ins)
        self.audit_logger.info({
            "user_id": context.user_id,
            "tool_name": context.method,
            "timestamp": context.timestamp,
            "source_ip": context.client_ip,
            "api_endpoint": context.message.get("method")
        })

        # Check permissions before the tool runs
        if not self.check_permissions(context.user_id, context.method):
            raise PermissionDeniedError("Insufficient permissions")

        # Execute the tool
        result = await call_next(context)

        # Log the response
        self.audit_logger.info({
            "user_id": context.user_id,
            "tool_name": context.method,
            "status": "completed",
            "response_size": len(str(result))
        })

        return result

# Add middleware to your MCP server (the Authentication and RateLimiting
# classes follow the same pattern)
mcp.add_middleware(SecurityAuditMiddleware())
mcp.add_middleware(AuthenticationMiddleware())
mcp.add_middleware(RateLimitingMiddleware())

Enterprise Security Features

MCP servers can implement OAuth 2.0 authentication and role-based access control (RBAC), enforcing permissions at the middleware level. Security best practices include real-time alerts for critical security events, centralized log management, and ensuring log integrity to prevent tampering.

Key Security Capabilities:

  • Authentication: OAuth 2.0, JWT tokens, API keys
  • Authorization: Role-based access control (RBAC), with a minimal permission check sketched after this list
  • Audit Trails: Maintain visibility into what permissions have been granted and when
  • Rate Limiting: Control request frequency per client
  • Input Validation: Sanitize all LLM inputs before API calls
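
To make the RBAC point concrete, here's a minimal permission table a middleware's permission check could consult (roles and tool names are hypothetical):

# Hypothetical role-to-tool permission table for the friends API
ROLE_PERMISSIONS = {
    "reader": {"list_friends", "get_friend"},
    "editor": {"list_friends", "get_friend", "add_friend", "update_friend"},
    "admin": {"*"},  # wildcard: every tool
}

def check_permissions(role: str, tool_name: str) -> bool:
    allowed = ROLE_PERMISSIONS.get(role, set())
    return "*" in allowed or tool_name in allowed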

Comprehensive Logging and Monitoring

Common middleware use cases include logging and monitoring to track usage patterns and performance metrics. For enterprise compliance:

class ComplianceLoggingMiddleware(Middleware):
    async def on_message(self, context, call_next):
        # Build a comprehensive audit trail entry (sanitize_pii,
        # calculate_response_time, extract_data_categories, and audit_system
        # are application-specific helpers you provide)
        audit_entry = {
            "request_id": context.request_id,
            "timestamp": context.timestamp,
            "user_identity": context.user_id,
            "api_method": context.method,
            "source_system": context.source,
            "request_data": self.sanitize_pii(context.message)
        }

        try:
            result = await call_next(context)
            audit_entry.update({
                "status": "success",
                "response_time_ms": self.calculate_response_time(context),
                "data_accessed": self.extract_data_categories(result)
            })
            return result
        except Exception as e:
            audit_entry.update({
                "status": "error",
                "error_type": type(e).__name__,
                "error_message": str(e)
            })
            raise
        finally:
            # Send to centralized audit system
            await self.audit_system.log(audit_entry)

Regulatory Compliance Support

Enterprise deployments require detailed auditing and incident investigation capabilities. The middleware approach supports:

  • GDPR Compliance: Data access logging and PII handling
  • SOX Compliance: Financial data access trails
  • HIPAA Compliance: Healthcare data protection and audit logs
  • SOC 2: Security controls and operational monitoring

Resource Isolation and Sandboxing

MCP servers should enforce limits on what the AI can do, e.g., the maximum file size it can read or the CPU time allowed for an execution tool. Enterprise deployments ensure operational isolation among multiple MCP servers, with distinct permission sets for tools in different domains.

import asyncio

class ResourceLimitExceeded(Exception):
    """Raised when a tool response exceeds the configured size limit."""

class ResourceIsolationMiddleware(Middleware):
    def __init__(self, max_response_size=1024*1024, timeout_seconds=30):
        self.max_response_size = max_response_size
        self.timeout_seconds = timeout_seconds

    async def on_call_tool(self, context, call_next):
        # Apply a wall-clock limit to the tool call (asyncio.timeout requires
        # Python 3.11+); a real sandbox would add process- or container-level
        # isolation around this
        async with asyncio.timeout(self.timeout_seconds):
            result = await call_next(context)

        if len(str(result)) > self.max_response_size:
            raise ResourceLimitExceeded("Response too large")

        return result

Result

With all three pieces running, we get:

  1. The FastAPI server on port 8000 (Swagger docs at /docs)
  2. The auto-generated MCP server, built from the running FastAPI server's OpenAPI spec
  3. The chatbot console app talking to both

Benefits of Middleware-Based Security

  1. Centralized Control: All security policies managed in one place
  2. Consistent Application: Same security rules across all API endpoints
  3. Easy Updates: Modify security policies without changing individual APIs
  4. Comprehensive Coverage: Middleware can inspect and modify all MCP requests and responses
  5. Defense in Depth: Multiple layers of security even if one layer fails

Future Possibilities

This architecture sets the foundation for more advanced capabilities:

  • Multi-API Orchestration: Coordinate between multiple APIs in a single conversation
  • Workflow Automation: Chain API calls based on conditional logic
  • Real-time Updates: Stream API responses for better user experience
  • Advanced Security: Zero-trust architecture with dynamic policy enforcement

Conclusion

By automatically generating MCP servers from OpenAPI specifications, we've created a powerful bridge between traditional REST APIs and modern AI agents. This approach eliminates manual tool definition, ensures consistency, and makes any OpenAPI-compliant service instantly accessible to AI systems.

The combination of FastAPI's automatic documentation generation, FastMCP's OpenAPI integration, and OpenAI's function calling creates a seamless experience where natural language becomes the interface to any web service.

This pattern represents the future of human-AI-system interaction: natural, intuitive, and automatically adapting to changes in your underlying systems. Whether you're building internal tools, integrating with third-party services, or creating customer-facing AI applications, this architecture provides a solid foundation for scalable, maintainable AI-powered integrations.

Begin building your own OpenAPI-powered MCP servers today. The future of AI integration is automatic, and it starts with your existing APIs.

Thanks
Sreeni Ramadorai
