DEV Community

Dextra Labs
Claude MCP Explained: Building Enterprise AI Integrations That Actually Scale

What the Model Context Protocol actually is, why it changes enterprise AI architecture and how to wire Claude into Postgres, Jira and Slack with working code.

There's a problem that every enterprise AI project hits eventually.

You've built something that works in isolation: Claude answering questions, summarising documents, generating code. It's impressive in demos. Then someone asks the obvious next question: can it also look at our actual data? Can it create a Jira ticket when it finds a problem? Can it post the summary to the team Slack channel instead of a chat interface nobody checks?

And suddenly you're writing custom integration code. Lots of it. API wrappers, authentication handlers, context formatters, response parsers. Every new tool your agent needs is another bespoke integration. The agent that was simple in week one is a maintenance burden by month three.

This is the problem the Model Context Protocol was designed to solve. And if you're building enterprise AI systems that need to talk to real business tools, understanding MCP properly is one of the more valuable hours you'll spend this year.

What MCP Actually Is

The Model Context Protocol is an open standard developed by Anthropic that defines how AI models communicate with external tools, data sources and services. Think of it as the USB-C port for AI integrations, a standardised connector that works regardless of what's on either end.

Before MCP, connecting an LLM to an external tool meant:

  • Writing a custom function or tool definition in whatever format your LLM expected
  • Building the integration logic yourself
  • Handling authentication, error cases and response formatting manually
  • Repeating all of that for every new tool

With MCP, external tools expose themselves as MCP servers following a standard protocol. Your AI application connects to those servers through an MCP client. The protocol handles the communication layer. You write the business logic, not the plumbing.

The architecture has three components: hosts, clients and servers. The host is the AI application itself (Claude Desktop, your agent, an IDE assistant). The host runs one MCP client per connection, and each client talks to a single MCP server.

The MCP servers are the interesting part. They're lightweight services that wrap your existing APIs and databases, expose their capabilities in a standardised format and handle the translation between the MCP protocol and whatever the underlying system expects.

Why This Matters for Enterprise Architecture

The reason the Model Context Protocol changes enterprise AI architecture isn't just developer convenience. It's about three properties that enterprise systems actually need.

Composability: Once you've built an MCP server for Jira, every AI application in your organisation can use it. You're not rebuilding the Jira integration for every new agent; you're reusing a tested, maintained server. The integration work amortises across every use case that needs it.

Security isolation: MCP servers are separate processes. Your PostgreSQL MCP server has exactly the database permissions you configure for it, not more. The Claude model doesn't have direct database access. It calls the MCP server, which enforces its own access controls. This is a significantly better security model than giving your AI agent broad API credentials.

Auditability: Every tool call goes through the MCP protocol. You can log, monitor and audit at the MCP layer without instrumenting each individual integration. For enterprise compliance requirements, this is meaningful.
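
To make that concrete, the audit layer can be a thin wrapper around the MCP client session. A minimal sketch: the AuditedSession class and its log format are our own, not part of the MCP SDK, and it only assumes the wrapped session exposes call_tool the way ClientSession does:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("mcp.audit")

class AuditedSession:
    """Wraps an MCP client session and logs every tool call as one
    structured JSON line: server, tool, arguments, status, duration."""

    def __init__(self, session, server_name: str):
        self._session = session
        self._server_name = server_name

    async def call_tool(self, name: str, arguments: dict):
        start = time.monotonic()
        try:
            result = await self._session.call_tool(name, arguments=arguments)
            status = "ok"
            return result
        except Exception:
            status = "error"
            raise
        finally:
            # One audit record per call, success or failure
            audit_log.info(json.dumps({
                "server": self._server_name,
                "tool": name,
                "arguments": arguments,
                "status": status,
                "duration_ms": round((time.monotonic() - start) * 1000, 1),
            }, default=str))
```

Because every tool call already flows through the session object, wrapping it once covers every integration behind that server.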

Building the Integration: Setup

We're going to build an agent that can query a PostgreSQL database, create Jira tickets and post to Slack. Three MCP servers, one Claude agent, working together.

```bash
# asyncio ships with the standard library; no separate install needed
pip install anthropic mcp psycopg2-binary jira slack-sdk
```

Start with the MCP client setup:

```python
import anthropic
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

client = anthropic.Anthropic()

# MCP server configurations
POSTGRES_SERVER = StdioServerParameters(
    command="python",
    args=["servers/postgres_server.py"],
    env={
        "DB_HOST": "your-db-host",
        "DB_NAME": "your-database",
        "DB_USER": "your-user",
        "DB_PASSWORD": "your-password"
    }
)

JIRA_SERVER = StdioServerParameters(
    command="python",
    args=["servers/jira_server.py"],
    env={
        "JIRA_URL": "https://your-domain.atlassian.net",
        "JIRA_EMAIL": "your-email@company.com",
        "JIRA_TOKEN": "your-api-token"
    }
)

SLACK_SERVER = StdioServerParameters(
    command="python",
    args=["servers/slack_server.py"],
    env={
        "SLACK_BOT_TOKEN": "xoxb-your-token"
    }
)
```

Building the PostgreSQL MCP Server

Each MCP server is a Python script that implements the MCP protocol and exposes tools to the host:

```python
# servers/postgres_server.py
import asyncio
import psycopg2
import os
import json
from mcp.server import Server
from mcp.server.models import InitializationOptions
from mcp.types import Tool, TextContent
import mcp.server.stdio

app = Server("postgres-server")

def get_db_connection():
    return psycopg2.connect(
        host=os.environ['DB_HOST'],
        database=os.environ['DB_NAME'],
        user=os.environ['DB_USER'],
        password=os.environ['DB_PASSWORD']
    )

@app.list_tools()
async def list_tools() -> list[Tool]:
    """Declare the tools this server exposes."""
    return [
        Tool(
            name="query_database",
            description=(
                "Execute a read-only SQL query against the database. "
                "Use this to retrieve data, counts, aggregations. "
                "Never use for INSERT, UPDATE, or DELETE operations."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The SQL SELECT query to execute"
                    },
                    "limit": {
                        "type": "integer",
                        "description": "Maximum rows to return (default 100)",
                        "default": 100
                    }
                },
                "required": ["query"]
            }
        ),
        Tool(
            name="get_table_schema",
            description="Get the schema for a specific database table",
            inputSchema={
                "type": "object",
                "properties": {
                    "table_name": {
                        "type": "string",
                        "description": "Name of the table to inspect"
                    }
                },
                "required": ["table_name"]
            }
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    """Handle tool calls from the MCP host."""

    if name == "query_database":
        query = arguments["query"]
        limit = arguments.get("limit", 100)

        # Safety check: enforce read-only. Word-boundary matching avoids
        # false positives on names like "created_at"; real protection
        # should still come from a read-only database role.
        import re
        query_lower = query.lower().strip()
        if any(re.search(rf"\b{keyword}\b", query_lower)
               for keyword in ['insert', 'update', 'delete', 'drop', 'create', 'alter']):
            return [TextContent(
                type="text",
                text="Error: Only SELECT queries are permitted"
            )]

        # Add limit if not present
        if 'limit' not in query_lower:
            query = f"{query.rstrip(';')} LIMIT {limit}"

        try:
            conn = get_db_connection()
            cursor = conn.cursor()
            cursor.execute(query)

            columns = [desc[0] for desc in cursor.description]
            rows = cursor.fetchall()

            result = {
                "columns": columns,
                "rows": [dict(zip(columns, row)) for row in rows],
                "row_count": len(rows)
            }

            cursor.close()
            conn.close()

            return [TextContent(
                type="text",
                text=json.dumps(result, indent=2, default=str)
            )]

        except Exception as e:
            return [TextContent(
                type="text",
                text=f"Query error: {str(e)}"
            )]

    elif name == "get_table_schema":
        table_name = arguments["table_name"]

        try:
            conn = get_db_connection()
            cursor = conn.cursor()
            cursor.execute("""
                SELECT column_name, data_type, is_nullable, column_default
                FROM information_schema.columns
                WHERE table_name = %s
                ORDER BY ordinal_position
            """, (table_name,))

            columns = cursor.fetchall()
            cursor.close()
            conn.close()

            schema = [{
                "column": col[0],
                "type": col[1],
                "nullable": col[2],
                "default": col[3]
            } for col in columns]

            return [TextContent(
                type="text",
                text=json.dumps(schema, indent=2)
            )]

        except Exception as e:
            return [TextContent(
                type="text",
                text=f"Schema error: {str(e)}"
            )]

async def main():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="postgres-server",
                server_version="1.0.0"
            )
        )

if __name__ == "__main__":
    asyncio.run(main())
```

Jira MCP Server

```python
# servers/jira_server.py
import asyncio
import os
import json
from jira import JIRA
from mcp.server import Server
from mcp.server.models import InitializationOptions
from mcp.types import Tool, TextContent
import mcp.server.stdio

app = Server("jira-server")

def get_jira_client():
    return JIRA(
        server=os.environ['JIRA_URL'],
        basic_auth=(os.environ['JIRA_EMAIL'], os.environ['JIRA_TOKEN'])
    )

@app.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="create_jira_ticket",
            description=(
                "Create a new Jira issue. Use when a problem, "
                "bug, or task needs to be tracked in Jira."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "project_key": {
                        "type": "string",
                        "description": "Jira project key (e.g. 'ENG', 'OPS')"
                    },
                    "summary": {
                        "type": "string",
                        "description": "Issue title/summary"
                    },
                    "description": {
                        "type": "string",
                        "description": "Detailed description of the issue"
                    },
                    "issue_type": {
                        "type": "string",
                        "enum": ["Bug", "Task", "Story"],
                        "description": "Type of issue to create"
                    },
                    "priority": {
                        "type": "string",
                        "enum": ["Highest", "High", "Medium", "Low"],
                        "description": "Issue priority"
                    }
                },
                "required": ["project_key", "summary", "issue_type"]
            }
        ),
        Tool(
            name="search_jira_issues",
            description="Search for existing Jira issues using JQL",
            inputSchema={
                "type": "object",
                "properties": {
                    "jql": {
                        "type": "string",
                        "description": "JQL query string"
                    },
                    "max_results": {
                        "type": "integer",
                        "default": 10
                    }
                },
                "required": ["jql"]
            }
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    jira = get_jira_client()

    if name == "create_jira_ticket":
        try:
            issue_dict = {
                'project': {'key': arguments['project_key']},
                'summary': arguments['summary'],
                'description': arguments.get('description', ''),
                'issuetype': {'name': arguments['issue_type']},
            }

            if 'priority' in arguments:
                issue_dict['priority'] = {'name': arguments['priority']}

            issue = jira.create_issue(fields=issue_dict)

            return [TextContent(
                type="text",
                text=json.dumps({
                    "success": True,
                    "issue_key": issue.key,
                    "issue_url": f"{os.environ['JIRA_URL']}/browse/{issue.key}",
                    "summary": arguments['summary']
                })
            )]

        except Exception as e:
            return [TextContent(
                type="text",
                text=f"Jira error: {str(e)}"
            )]

    elif name == "search_jira_issues":
        try:
            issues = jira.search_issues(
                arguments['jql'],
                maxResults=arguments.get('max_results', 10)
            )

            results = [{
                "key": issue.key,
                "summary": issue.fields.summary,
                "status": issue.fields.status.name,
                "priority": issue.fields.priority.name 
                          if issue.fields.priority else None
            } for issue in issues]

            return [TextContent(
                type="text",
                text=json.dumps(results, indent=2)
            )]

        except Exception as e:
            return [TextContent(
                type="text",
                text=f"Search error: {str(e)}"
            )]

async def main():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="jira-server",
                server_version="1.0.0"
            )
        )

if __name__ == "__main__":
    asyncio.run(main())
```

The Agent That Connects Everything

Now the interesting part, the agent that uses all three servers together:

```python
# agent.py
import asyncio
import json
import anthropic
from mcp import ClientSession
from mcp.client.stdio import stdio_client

# Assumes POSTGRES_SERVER, JIRA_SERVER and SLACK_SERVER (the
# StdioServerParameters from the setup section) are defined in this file

async def run_enterprise_agent(user_query: str):
    """Run an agent with access to Postgres, Jira and Slack."""

    client = anthropic.Anthropic()

    # Connect to all MCP servers
    async with stdio_client(POSTGRES_SERVER) as (pg_read, pg_write), \
               stdio_client(JIRA_SERVER) as (jira_read, jira_write), \
               stdio_client(SLACK_SERVER) as (slack_read, slack_write):

        async with ClientSession(pg_read, pg_write) as pg_session, \
                   ClientSession(jira_read, jira_write) as jira_session, \
                   ClientSession(slack_read, slack_write) as slack_session:

            # Initialise all sessions
            await pg_session.initialize()
            await jira_session.initialize()
            await slack_session.initialize()

            # Collect all available tools from all servers
            pg_tools = await pg_session.list_tools()
            jira_tools = await jira_session.list_tools()
            slack_tools = await slack_session.list_tools()

            # Convert MCP tools to Anthropic tool format
            all_tools = []
            tool_session_map = {}

            for tool in pg_tools.tools:
                all_tools.append({
                    "name": tool.name,
                    "description": tool.description,
                    "input_schema": tool.inputSchema
                })
                tool_session_map[tool.name] = pg_session

            for tool in jira_tools.tools:
                all_tools.append({
                    "name": tool.name,
                    "description": tool.description,
                    "input_schema": tool.inputSchema
                })
                tool_session_map[tool.name] = jira_session

            for tool in slack_tools.tools:
                all_tools.append({
                    "name": tool.name,
                    "description": tool.description,
                    "input_schema": tool.inputSchema
                })
                tool_session_map[tool.name] = slack_session

            # Run the agent loop
            messages = [{"role": "user", "content": user_query}]

            system_prompt = """You are an enterprise AI assistant with access to 
            the company database, Jira project management and Slack messaging.

            When you find issues in data, create Jira tickets to track them.
            When you complete analysis, post summaries to the appropriate Slack channel.
            Always explain what you're doing and why."""

            while True:
                response = client.messages.create(
                    model="claude-sonnet-4-5",
                    max_tokens=4096,
                    system=system_prompt,
                    tools=all_tools,
                    messages=messages
                )

                if response.stop_reason == "end_turn":
                    # Extract final text response
                    for block in response.content:
                        if hasattr(block, 'text'):
                            print(f"\nAgent: {block.text}")
                    break

                if response.stop_reason == "tool_use":
                    messages.append({
                        "role": "assistant",
                        "content": response.content
                    })

                    tool_results = []

                    for block in response.content:
                        if block.type == "tool_use":
                            print(f"\n→ Calling tool: {block.name}")
                            print(f"  Input: {json.dumps(block.input, indent=2)}")

                            # Route to correct MCP session
                            session = tool_session_map[block.name]
                            result = await session.call_tool(
                                block.name,
                                arguments=block.input
                            )

                            result_text = result.content[0].text \
                                         if result.content else "No result"
                            print(f"  Result: {result_text[:200]}...")

                            tool_results.append({
                                "type": "tool_result",
                                "tool_use_id": block.id,
                                "content": result_text
                            })

                    messages.append({
                        "role": "user",
                        "content": tool_results
                    })

# Run it
asyncio.run(run_enterprise_agent(
    "Check the orders table for any orders with status 'failed' in the last 24 hours. "
    "If you find more than 5, create a high-priority Jira bug in the ENG project "
    "and post a summary to the #operations Slack channel."
))

```

What Happens When You Run This

The agent receives the query. It calls get_table_schema to understand the orders table structure. It queries the database for failed orders in the last 24 hours. If the count exceeds five, it creates a Jira ticket with the relevant details. It posts to Slack with a formatted summary. All of this happens in a single agent session, with each step using the appropriate MCP server.

The tool routing is automatic: Claude reads the tool descriptions and decides which tool to use for each step. The MCP protocol handles the communication. You wrote the business logic, not the integration plumbing.

Enterprise Considerations

Three things that matter when you move from demo to production.

Server lifecycle management: MCP servers are processes. In production, you need process supervision, health checks and restart policies. Consider running MCP servers as containerised services with proper orchestration rather than spawning them as subprocesses.

Authentication and secrets: The environment variable approach above works for development. In production, pull credentials from a secrets manager (AWS Secrets Manager, HashiCorp Vault) rather than environment variables. Your MCP servers should never have credentials baked in.
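
One low-friction pattern is to have the secrets manager write credentials to a mounted file (a Vault Agent file sink, or a Kubernetes secret volume) and read them when you build the server parameters. A sketch; the file path, JSON layout and key names here are all assumptions:

```python
import json

def load_server_env(secret_file: str, keys: list[str]) -> dict:
    """Read credentials from a file written by a secrets-manager
    sidecar, instead of hard-coding them into the environment."""
    with open(secret_file) as f:
        secrets = json.load(f)
    # Fail fast if the secret payload is incomplete
    missing = [k for k in keys if k not in secrets]
    if missing:
        raise KeyError(f"missing credentials: {missing}")
    return {k: secrets[k] for k in keys}
```

You would then build the server config with something like env=load_server_env("/run/secrets/postgres.json", ["DB_HOST", "DB_NAME", "DB_USER", "DB_PASSWORD"]) rather than inlining credentials.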

Rate limiting and quotas: Your Jira and Slack MCP servers are calling external APIs. Implement rate limiting at the MCP server level to prevent an aggressive agent from exhausting your API quotas. This is significantly cleaner than rate limiting inside the agent.
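
At the MCP server level, that can be as simple as a token bucket in front of each external API call. A minimal sketch (the limits are illustrative, and a production version would want proper backpressure rather than a flat error):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter for MCP tool handlers: allows short
    bursts up to `capacity`, sustained at `rate` calls per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Inside call_tool, before hitting the external API:
# if not jira_bucket.allow():
#     return [TextContent(type="text",
#                         text="Rate limit exceeded, try again shortly")]
```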

MCP as Infrastructure

The real value proposition of MCP isn't any individual integration. It's the accumulation of tested, reusable MCP servers that your organisation builds over time. The Postgres server you build for one agent is available to every subsequent agent. The Jira server your DevOps agent uses can be reused by your customer success agent. You build the integration library once and amortise it across every AI use case that follows.

This is why MCP matters for enterprise AI at scale: it transforms integration work from a per-project cost to a shared infrastructure investment.

MCP is the plumbing that makes enterprise agents work. For teams building production agent systems, where server architecture, authentication patterns, monitoring and multi-tool orchestration go beyond a single tutorial, Dextra Labs designs and deploys these integrations end-to-end.

Published by Dextra Labs | AI Consulting & Enterprise Agent Development
