lucas noah
Building an MCP Server for a Niche Local Service: Philadelphia Restoration


How we built a Model Context Protocol server to make Philadelphia's water and fire damage restoration expertise available to every AI agent.


Why Build an MCP Server for a Local Service?

Most MCP servers expose general-purpose tools — file systems, databases, code interpreters. We built one for a very specific use case: helping Philadelphia homeowners deal with water and fire damage.

Philadelphia Restoration started as a concierge service connecting homeowners with vetted restoration professionals. We had a REST API with 7 tools covering damage assessment, insurance coverage analysis, cost estimation, emergency guidance, neighborhood-specific risk profiles, a deep knowledge base, and professional callback scheduling.

The question was: how do we make this expertise accessible to AI agents — Claude, ChatGPT, LangChain agents, custom assistants — so they can help homeowners directly during conversations?

The answer: MCP.

Architecture: Go + mcp-go, Dual-Port, Single Binary

We chose Go for the MCP server because our REST API was already written in Go. The mark3labs/mcp-go library made it straightforward.

Single Binary, Two Ports

Our API binary serves both REST and MCP:

  • Port 8080: REST API (existing)
  • Port 8081: MCP server (new)

Both share the same store, embedder, and notification services. This keeps deployment simple — one container, one process, two protocols.

```go
// Create MCP server sharing the same dependencies
mcpSrv := mcpserver.NewMCPServer(store, embedder, discordNotifier)
httpSrv := server.NewStreamableHTTPServer(mcpSrv, server.WithStateLess(true))
```

We use stateless mode (WithStateLess(true)) because our tools are request/response — no session state needed.

The Proxy Pattern: MCP Tools Delegate to REST Handlers

The key architectural decision was how MCP tool handlers relate to REST handlers. We had two options:

  1. Duplicate logic — write separate MCP handlers with the same business logic
  2. Proxy pattern — MCP handlers call REST handlers internally

We chose the proxy pattern. Each MCP tool handler marshals its arguments as a JSON body, builds an in-memory request with httptest.NewRequest, calls the existing REST handler directly (no network hop), and returns the recorded response as MCP text content:

```go
// makeHandler adapts an existing REST handler into an MCP tool handler.
func makeHandler(h http.HandlerFunc) server.ToolHandlerFunc {
    return func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
        // Re-encode the MCP tool arguments as the JSON body the REST handler expects.
        body, err := json.Marshal(req.Params.Arguments)
        if err != nil {
            return nil, err
        }
        httpReq := httptest.NewRequest(http.MethodPost, "/", bytes.NewReader(body))
        httpReq.Header.Set("Content-Type", "application/json")
        // Invoke the handler in-process and capture its response.
        w := httptest.NewRecorder()
        h.ServeHTTP(w, httpReq)
        return mcp.NewToolResultText(w.Body.String()), nil
    }
}
```

This means we maintain business logic in exactly one place. When we update a REST handler, the MCP tool automatically gets the same update.

Tool Design: Optimized for Registry Search

MCP tool descriptions matter — they're how registries and agents decide whether to use your tools. We optimized ours to be specific and searchable:

Before (generic):

"Assess damage and return results"

After (registry-optimized):

"Returns structured damage assessment for water and fire damage in Philadelphia residential properties. Classifies 13 damage types (burst pipe, roof leak, sewage backup, flooding, appliance leak, structural fire, kitchen fire, electrical fire, smoke damage, arson, and more) with severity grading, immediate safety steps, timeline risks, cost estimates, and Philadelphia-specific context including rowhouse and pre-1978 home considerations."

Key principles for tool descriptions:

  • Be specific about data returned — not just "assess damage" but what the assessment includes
  • State geographic focus — "Philadelphia residential properties"
  • List concrete values — "13 damage types", "$64–$183/hr"
  • Include keywords — the damage types, standards (IICRC), policy types (HO-3)

Reasoning Kits: Structured Data for AI Agents

Every tool returns a "reasoning kit" — structured data designed for an AI agent to reason over, not just pass through:

```json
{
  "result": { ... },
  "_meta": {
    "reasoning_instructions": "Present severity assessment first...",
    "related_tools": ["check_insurance_coverage", "estimate_cost"]
  },
  "_sources": ["IICRC S500 §12.3"],
  "cta": {
    "text": "Connect with a vetted restoration professional",
    "action": "request_callback"
  }
}
```

The _meta.reasoning_instructions field tells the agent how to interpret and present the data. The related_tools array creates natural workflow chains. Sources provide attribution for IICRC standards and other references.

Workflow Specification: Arazzo 1.0.1

We publish an Arazzo workflow specification describing optimal tool sequences:

```yaml
workflows:
  - workflowId: primary-damage-assessment
    steps:
      - stepId: assess
        operationId: assessDamage
      - stepId: insurance
        operationId: checkInsuranceCoverage
        requestBody:
          payload:
            damage_type: $steps.assess.outputs.damage_type
      - stepId: costs
        operationId: estimateCost
      - stepId: knowledge
        operationId: searchRestorationKnowledge
      - stepId: callback
        operationId: requestCallback
```

Data flows between steps via runtime expressions — the damage type from the assessment step feeds into insurance, cost, and knowledge searches automatically.
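A minimal resolver for that expression shape might look like the sketch below. It handles only the `$steps.<stepId>.outputs.<name>` form used here; it is an illustration, not a full Arazzo runtime-expression engine.

```go
package main

import (
	"fmt"
	"strings"
)

// resolve looks up an Arazzo-style runtime expression such as
// "$steps.assess.outputs.damage_type" in a map of collected step outputs.
func resolve(expr string, steps map[string]map[string]any) (any, bool) {
	parts := strings.Split(strings.TrimPrefix(expr, "$"), ".")
	// Expect exactly: steps.<stepId>.outputs.<name>
	if len(parts) != 4 || parts[0] != "steps" || parts[2] != "outputs" {
		return nil, false
	}
	out, ok := steps[parts[1]]
	if !ok {
		return nil, false
	}
	v, ok := out[parts[3]]
	return v, ok
}

func main() {
	steps := map[string]map[string]any{
		"assess": {"damage_type": "burst_pipe"},
	}
	v, _ := resolve("$steps.assess.outputs.damage_type", steps)
	fmt.Println(v)
}
```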

Agent Discovery: Meeting Agents Where They Are

We publish discovery files so agents can find us:

  • /.well-known/mcp.json — MCP server manifest
  • /.well-known/agent.json — Agent discovery (v2.0.0 with transports)
  • /.well-known/ai-plugin.json — ChatGPT plugin manifest
  • /openapi.yaml — OpenAPI 3.1 specification
  • /arazzo.yaml — Workflow specification
  • /llms.txt — LLM-optimized site summary

Each file serves a different discovery channel, but they all point to the same 7 tools.

Monitoring: Understanding Agent Behavior

We track tool usage with Prometheus metrics and a custom dashboard:

  • Tool calls by type and transport (REST vs MCP)
  • Agent identification from User-Agent strings (Claude, ChatGPT, LangChain, etc.)
  • Knowledge search analytics (query patterns, zero-result rate)
  • Conversion tracking (tool calls → callback requests)

This lets us understand how agents actually use our tools and where to improve.
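The agent-identification step can be as simple as substring matching on the User-Agent header before incrementing a labeled metric. The substrings below are assumptions about common agent clients, not our exact rules:

```go
package main

import (
	"fmt"
	"strings"
)

// agentFromUserAgent classifies a User-Agent string into a coarse label
// suitable for a metrics dimension.
func agentFromUserAgent(ua string) string {
	ua = strings.ToLower(ua)
	switch {
	case strings.Contains(ua, "claude"):
		return "claude"
	case strings.Contains(ua, "chatgpt") || strings.Contains(ua, "openai"):
		return "chatgpt"
	case strings.Contains(ua, "langchain"):
		return "langchain"
	default:
		return "other" // unknown clients still get counted
	}
}

func main() {
	fmt.Println(agentFromUserAgent("Claude-User/1.0"))
}
```

Keeping the label set small matters: each distinct label value becomes its own metric series.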

Lessons Learned

  1. The proxy pattern works well for adding MCP to existing REST services. Zero business logic duplication.

  2. Tool descriptions are marketing copy for AI registries. Spend time on them.

  3. Stateless transport simplifies everything — no session management, no reconnection logic, each request stands alone.

  4. Reasoning kits beat raw data: _meta.reasoning_instructions and related_tools help agents chain tools effectively without hardcoded workflows.

  5. Multiple discovery channels matter — different agents discover tools through different mechanisms. Publish everywhere.

Try It

Connect to our MCP server — no auth, no signup:

```
Server: mcp.philadelphiarestoration.org
Transport: Streamable HTTP
```

Philadelphia Restoration is a free concierge service for Philadelphia homeowners dealing with water and fire damage. Our knowledge base is grounded in IICRC S500/S520/S540 standards and real restoration company experience.
