Hatago (旅籠) - Traditional Japanese inn from the Edo period that provided lodging for travelers. A relay point connecting modern AI tools with MCP servers.
Introduction
🏮 Hatago MCP Hub is a lightweight MCP hub that consolidates multiple MCP servers into a single endpoint consumable by multiple AI clients such as Claude Code, Cursor, Windsurf, and Codex CLI. Built as middleware on top of Hono, Hatago integrates and orchestrates multiple MCP servers for flexible, centralized operation. This post walks through the background, architecture, setup, operations, and current limitations.
GitHub: himorishige/hatago-mcp-hub
Why I built Hatago MCP Hub
As the number of MCP servers grows, keeping per-client configurations in sync becomes painful. You update .mcp.json but forget to update another tool (e.g., the TOML-based Codex CLI). There are similar projects, like the Docker Hub MCP Server, and they may work well for many teams. But some teams can’t use Docker Desktop due to constraints, or they simply want the freedom to choose MCP servers independently. As a result, per-client custom configurations continue to proliferate.
To calm this configuration sprawl, I took the approach: consolidate everything behind a single MCP server (a hub), and point all clients at Hatago. It started as a small learning project for MCP, but after validating practical needs—name collision avoidance, progress relay, hot reload, etc.—the design evolved into what it is today.
High-level architecture
Hatago consists of three layers: the core hub, an MCP registry (Tools/Resources/Prompts), and the transport layer. It sits between AI clients and multiple MCP servers, brokering JSON-RPC/MCP traffic. A key design choice is transport independence: whether you connect via STDIO or Streamable HTTP/SSE/WebSocket, the upper logic behaves identically.
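At a glance, the topology described above looks roughly like this:

AI clients (Claude Code / Codex CLI / Cursor / Windsurf / ...)
        |  JSON-RPC via STDIO / Streamable HTTP / SSE / WebSocket
        v
Hatago MCP Hub (core hub + Tools/Resources/Prompts registry)
        |  delegates calls using each server's native tool name
        v
MCP servers (local via npx/node, remote via HTTP/SSE)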
Internally, the hub aggregates tools from each MCP server to form a unified catalog. The critical part here is avoiding tool name collisions. Hatago uses the serverId_toolName format as the public name for clients, while delegating execution to the original MCP server using its native tool name. From the client’s perspective, names are unique and their server affiliation is visible at a glance.
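For illustration, if a server registered as filesystem exposes a read_file tool and one registered as deepwiki exposes ask_question, the hub’s unified tools/list would contain entries like these (names and descriptions here are illustrative):

{
  "tools": [
    { "name": "filesystem_read_file", "description": "Read a file (via the filesystem server)" },
    { "name": "deepwiki_ask_question", "description": "Ask a question about a repository (via the deepwiki server)" }
  ]
}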
Another key capability is transparent relay of progress notifications (notifications/progress). When a long-running operation emits progress from a downstream server, Hatago forwards it to the client as-is, preserving the UX. The same goes for sampling (sampling/createMessage): when a downstream server requests LLM generation, Hatago safely hands the request to the upstream client and returns the result. In testing many servers, few currently emit notifications/progress, and not many tools can leverage sampling yet, but Hatago supports both in anticipation of wider adoption.
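For reference, the relayed progress notification follows the standard MCP shape; the progressToken below is whatever token the client attached to its original request, which is what keeps the relay transparent:

{
  "jsonrpc": "2.0",
  "method": "notifications/progress",
  "params": { "progressToken": "req-42", "progress": 50, "total": 100 }
}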
Quick start
Hatago ships with a CLI and can be tried either inside an app repo or in a dedicated repo. Generate a config, then start in either STDIO or Streamable HTTP mode.
# Initialize (interactive or quick defaults)
npx @himorishige/hatago-mcp-hub init
# Explicit mode selection
npx @himorishige/hatago-mcp-hub init --mode stdio
npx @himorishige/hatago-mcp-hub init --mode http
List the MCP servers you want to connect in the generated hatago.config.json. You can manage both local MCP servers (run via npx or node) and remote Streamable HTTP/SSE servers in a single file. Environment variable expansion like ${VAR} or ${VAR:-default} is supported, letting you reuse one config across dev and prod.
{
"$schema": "https://raw.githubusercontent.com/himorishige/hatago-mcp-hub/main/schemas/config.schema.json",
"version": 1,
"logLevel": "info",
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
"tags": ["dev", "local"]
},
"deepwiki": {
"url": "https://mcp.deepwiki.com/sse",
"type": "sse",
"tags": ["dev", "production", "documentation"]
},
"api": {
"url": "${API_BASE_URL:-https://api.example.com}/mcp",
"headers": { "Authorization": "Bearer ${API_KEY}" },
"tags": ["production", "api"]
}
}
}
Starting is straightforward. Use STDIO for a single client; use Streamable HTTP when sharing across multiple clients. A --watch mode reloads on config changes.
# STDIO mode (great for Claude Code)
hatago serve --stdio --config ./hatago.config.json
# HTTP mode (share across multiple clients)
hatago serve --http --config ./hatago.config.json
# Hot reload on config changes
hatago serve --stdio --watch
Tip: Built-in internal tools make operations convenient. Use _internal_hatago_status to check connections, _internal_hatago_reload to manually reload, and _internal_hatago_list_servers to list child servers. Invoke them like regular MCP tools.
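Since these are ordinary MCP tools, any client can invoke them with a standard tools/call request; for example, checking hub status looks roughly like this:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": { "name": "_internal_hatago_status", "arguments": {} }
}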
Profile management with tags
With Hatago’s tag feature, you can group MCP servers into profiles—dev, prod, test—and select only what you need at startup. One config file can manage multiple environments, simplifying team and personal setup switches.
Adding tags
Add a tags field to each server entry in hatago.config.json. Non-English tags are supported as well, if you prefer more intuitive names.
{
"mcpServers": {
"filesystem-dev": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "./src"],
"tags": ["dev", "local", "開発"]
},
"github-api": {
"url": "https://api.github.com/mcp",
"headers": { "Authorization": "Bearer ${GITHUB_TOKEN}" },
"tags": ["dev", "production", "github"]
},
"database-prod": {
"command": "mcp-server-postgres",
"env": { "DATABASE_URL": "${PROD_DB_URL}" },
"tags": ["production", "本番", "database"]
},
"test-mock": {
"command": "node",
"args": ["./mocks/test-server.js"],
"tags": ["test", "テスト"]
}
}
}
Filter by tags at startup
Use --tags to filter which servers start. If you specify multiple tags, servers matching any of them will start.
# Start only dev servers
hatago serve --tags dev
# Start only production servers
hatago serve --tags production
# Start both dev and test servers
hatago serve --tags dev,test
# Non-English tags also work
hatago serve --tags 開発,テスト
Real-world scenarios
- Personal dev: Work with a lightweight dev set in the morning, then include prod-equivalent servers before deployment.
# Local dev only
hatago serve --tags local --watch
# Pre-deploy checks (include some prod servers)
hatago serve --tags local,production
- Team roles: Frontend vs backend engineers can share the same config while starting different server subsets.
# Frontend developers
hatago serve --tags frontend,mock
# Backend developers
hatago serve --tags backend,database
# Full-stack developers
hatago serve --tags frontend,backend,database
Client-specific setup
Claude Code
In Claude Code, just register hatago as a single MCP server. All MCP tools under Hatago appear as a unified list. Examples:
STDIO
{
"mcpServers": {
"hatago": {
"command": "npx",
"args": [
"@himorishige/hatago-mcp-hub", "serve", "--stdio",
"--config", "/ABS/PATH/hatago.config.json"
]
}
}
}
Streamable HTTP
{
"mcpServers": {
"hatago": {
"url": "http://localhost:3535/mcp"
}
}
}
Hatago handles name collisions by namespacing public names and delegates execution with original names. With --watch, updates to the config trigger notifications/tools/list_changed so clients can auto-refresh their tool lists.
Note: many clients still don’t support notifications/tools/list_changed. You may need to refresh manually depending on the client.
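For reference, the notification itself is a bare JSON-RPC message with no parameters; clients that support it respond by re-fetching the tool list:

{ "jsonrpc": "2.0", "method": "notifications/tools/list_changed" }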
Codex CLI
Because Codex uses TOML, centralizing via Hatago can be even more impactful. STDIO example:
STDIO
[mcp_servers.hatago]
command = "npx"
args = [
"-y", "@himorishige/hatago-mcp-hub", "serve",
"--stdio", "--config", "/ABS/PATH/hatago.config.json"
]
Streamable HTTP
[mcp_servers.hatago]
command = "npx"
args = ["mcp-remote", "http://localhost:3535/mcp"]
As of 2025-09-01, native HTTP is not supported in Codex CLI, so you need mcp-remote.
Multi-client sharing (HTTP mode)
For teams accessing a common Hatago instance, Streamable HTTP mode is ideal. Multiple clients—Claude Code, Codex CLI, Cursor, Windsurf—connect to the same URL. Each only needs to configure Hatago, and all MCP servers under it are shared. Operational updates happen in a single place: hatago.config.json.
Even better with tags
You can run multiple Hatago instances with different tag filters to provide purpose-specific endpoints—for example, dev servers for the dev team, monitoring tools for ops.
This structure lets each team see only what they need, keeping tool lists clean.
# Dev Hatago (port 3535)
hatago serve --http --port 3535 --tags dev,test
# Ops Hatago (port 3536)
hatago serve --http --port 3536 --tags production,monitoring
Profile switching flow
You can also combine config file edits with --watch to switch profiles dynamically.
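A minimal sketch of that flow, assuming you edit the config file in place:

# Terminal 1: start with watch enabled
hatago serve --http --watch --config ./hatago.config.json

# Terminal 2: edit hatago.config.json (add/remove servers or change tags);
# Hatago reconnects affected servers and emits notifications/tools/list_changed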
Node vs Workers
In the Node runtime, Hatago can connect to all server types, including local MCP servers launched with npx or node. In serverless environments like Cloudflare Workers, where process spawning is unavailable, you typically attach remote MCPs exposed via HTTP/SSE. Regardless of placement, clients interact with Hatago the same way.
Because Hatago is built with Hono, it’s easy to swap surrounding middleware in both Workers and Node. You can implement logging, header injection, auth checks, and more in your app code naturally.
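As a minimal sketch of that idea (assuming the hub is embedded in your own Hono app, as described in the library section below; the /mcp mount path is an assumption):

import { Hono } from 'hono'
import { logger } from 'hono/logger'

const app = new Hono()

// Hono's built-in request logger for all traffic through the app
app.use('*', logger())

// Simple timing middleware in front of the (assumed) hub endpoint at /mcp
app.use('/mcp/*', async (c, next) => {
  const start = Date.now()
  await next()
  console.log(`${c.req.method} ${c.req.path} -> ${c.res.status} (${Date.now() - start}ms)`)
})

// ...mount the Hatago hub under /mcp here (see "Using Hatago as a library" below)

export default app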
Design details
- Name collision avoidance: the default strategy is namespace (serverId_toolName). Public names are unique; execution delegates to original names.
- Progress relay: forwards notifications/progress upstream. When progressToken is present, Hatago avoids double delivery.
- Sampling bridge: bridges downstream sampling/createMessage to the upstream client and reflects responses and progress.
- Hot reload: detects config changes, safely reconnects, and emits notifications/tools/list_changed after applying updates.
All of this adheres closely to MCP’s spec. Keeping routing/registry/event flow transparent directly impacts usability.
Operational tips
Use meaningful server IDs in the config (e.g., github-api, filesystem-tmp) so you can infer server affiliation from public tool names. With env var expansion, you can centralize API endpoints and tokens.
If requests to an MCP server time out:
Even if a server supports notifications/progress, timeouts can still happen depending on the implementation. You can extend timeouts in Hatago’s config (the default request timeout is 30000 ms).
{
"mcpServers": {
"api": {
"url": "${API_BASE_URL:-https://api.example.com}/mcp",
"headers": { "Authorization": "Bearer ${API_KEY}" },
"tags": ["production", "api"],
"timeouts": { "requestMs": 60000 } // default is 30000ms
}
}
}
Current limitations
Hatago does not provide built-in authentication. OAuth is not supported. Use Hono middleware or an upstream layer like Cloudflare Zero Trust for authorization with bearer tokens or cookies. If a downstream MCP requires OAuth, you’ll need environment-specific extensions. Given the variability across deployments, Hatago keeps the core simple.
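For example, Hono’s built-in bearer-auth middleware can gate requests in front of the hub. A minimal sketch (the /mcp path and HATAGO_TOKEN variable are assumptions, and this is not a substitute for a real authorization layer):

import { Hono } from 'hono'
import { bearerAuth } from 'hono/bearer-auth'

const app = new Hono()

// Reject requests without the expected bearer token before they reach the hub
// (process.env assumes the Node runtime; use bindings on Workers)
app.use('/mcp/*', bearerAuth({ token: process.env.HATAGO_TOKEN ?? 'change-me' }))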
Using Hatago as a library
The hub logic is also available as packages (@himorishige/hatago-mcp-hub/node, @himorishige/hatago-mcp-hub/workers). You can embed Hatago’s hub capability into existing HTTP apps (such as Hono apps). This enables a lightweight MCP aggregation point in an API gateway or internal portal. See the examples directory for details.
Conclusion
As MCP adoption grows, configuration sprawl and operational overhead quietly scale up. Hatago MCP Hub centralizes that growth behind a single hub to preserve day-to-day developer experience. Name collision avoidance, transparent progress relay, hot reload, and internal operational tools—none are flashy, but they collectively improve the “feel” of working with many MCP servers.
Start small: attach 2–3 MCP servers in hatago.config.json and point your client at a single Hatago entry. Later, as needs arise (more than five MCPs to manage, duplicate tool names, or isolating slow clients), iterate by adding or removing servers.
I’d love your feedback after you give Hatago MCP Hub a spin.