The MCP Hub Is the New Backend: Why Tool-First Architecture Changes Everything
February 2026
We've been building web applications the same way for fifteen years. A frontend talks to a REST API (or GraphQL, if you're feeling modern). Behind that API sits business logic, data access layers, auth middleware, caching, and a thousand lines of glue code that exist solely to shuttle data between systems that don't know about each other.
Then we added AI. And we bolted on another layer — tool definitions, function schemas, orchestrator loops — so the LLM could call the same business logic through a completely separate interface.
Then we added agent-to-agent communication. Another protocol. Another set of endpoints. Another translation layer.
Now the W3C is standardizing WebMCP — a browser-native API that lets AI agents running inside the browser discover and call tools on the page. That's a fourth consumer of the same underlying capabilities.
Four consumers. Four separate integration surfaces. All calling the same business logic through four different plumbing systems.
This is insane. And it doesn't have to be this way.
The Insight: MCP Tools Already Are Your API
The Model Context Protocol defines a tool as a function with a name, a description, an input schema, and a handler. Sound familiar? That's exactly what a REST endpoint is — minus the HTTP ceremony.
```
Tool: get_calendar_events
Input: { date: "2026-02-13", limit: 10 }
Output: [{ title: "Standup", time: "9:00 AM" }, ...]
```
There is no structural difference between this and GET /api/calendar?date=2026-02-13&limit=10. The tool is the API. The schema is the contract. The handler is the business logic.
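To make the parallel concrete, here's a minimal sketch of that tool as plain data. The ToolDefinition shape and fetchEventsFromDb are illustrative stand-ins, not the API of any particular MCP SDK:

```typescript
// Minimal sketch — hand-rolled types, not any specific MCP SDK's API.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema describing the arguments
  handler: (args: Record<string, unknown>) => Promise<unknown>;
}

// Hypothetical data-access function standing in for your service layer.
declare function fetchEventsFromDb(
  date: string,
  limit: number,
): Promise<{ title: string; time: string }[]>;

const getCalendarEvents: ToolDefinition = {
  name: "get_calendar_events",
  description: "List calendar events for a given date",
  inputSchema: {
    type: "object",
    properties: {
      date: { type: "string", format: "date" },
      limit: { type: "number", default: 10 },
    },
    required: ["date"],
  },
  // The handler is the same business logic a REST route would call.
  handler: async (args) =>
    fetchEventsFromDb(String(args.date), Number(args.limit ?? 10)),
};
```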
So why are we maintaining both?
The answer, until recently, was that MCP tools lived inside AI orchestration loops and nothing else could reach them. But that constraint is dissolving — fast.
Four Consumers, One Hub
Here's what the landscape looks like in February 2026:
| Consumer | How It Reaches Your Tools | Status |
|---|---|---|
| LLM Orchestrator | MCP client → tools/call | Mature. Every major AI framework supports this. |
| Agent-to-Agent (A2A) | Google's A2A protocol — POST /a2a/message:send routes through the orchestrator, which calls the same tools | Shipping. Google published the spec; implementations exist. |
| UI Widgets | A thin client SDK that calls tools directly or proxies through the server | Buildable today. This is the missing piece most teams haven't realized they can build. |
| Browser AI Agents | W3C WebMCP — navigator.modelContext.registerTool() exposes tools to any agent visiting the page | Draft spec, behind a Chrome feature flag, with editors from Microsoft and Google. Coming. |
Four consumers. Zero additional API layers required — if your tools are the source of truth.
This is the multiplier effect: every tool you write serves four surfaces simultaneously. Every new capability you add to your MCP hub is instantly available to your AI, your UI, visiting browser agents, and external agent systems. No REST endpoints to maintain. No GraphQL resolvers to keep in sync. No duplicate data-fetching logic.
The MCP tool IS the API.
What Changes Architecturally
Before: The Layer Cake
```
Frontend         → REST API          → Service Layer → Database
LLM Orchestrator → Tool Definitions  → (reimplements service layer)
A2A Endpoint     → Translation Layer → (reimplements it again)
```
Every consumer gets its own integration path. Business logic is duplicated or awkwardly shared through internal libraries. Adding a new consumer means building another translation layer.
After: The Hub
```
          ┌──────────────────────────────┐
          │         MCP TOOL HUB         │
          │ (tools, resources, schemas)  │
          └───┬───────┬───────┬───────┬──┘
              │       │       │       │
              ▼       ▼       ▼       ▼
             LLM      UI     WebMCP   A2A
            Agent   Widgets  Agents  Agents
```
One set of tools. One set of schemas. One authorization model. Four consumers reading from the same source of truth.
The Two-Tier Visibility Model
Not every tool should be exposed to every consumer. Your internal admin tools shouldn't appear in the WebMCP catalog for visiting browser agents. Your authenticated calendar tool shouldn't run client-side where there's no session context.
The solution is a simple category-based visibility model:
Public tools are safe for unauthenticated, client-side execution. They're read-only, require no secrets, and expose no private data. Think: search a public knowledge base, get product information, check business hours. These tools:
- Run directly in the browser (no server round-trip)
- Register with WebMCP for browser agents to discover
- Power public-facing widgets
- Appear in A2A agent cards for anonymous callers
Private tools require authentication, access sensitive data, or perform mutations. Think: read my calendar, send an email, update a record. These tools:
- Always execute server-side, proxied through the hub
- Are available to the LLM orchestrator (which has session context)
- Power authenticated widgets (the UI calls the server, which calls the tool)
- Can be selectively advertised to authenticated A2A partners
The hub decides what's public and what's private. The frontend doesn't make that call — it asks the hub for a catalog and gets back only what it's allowed to see.
This is security by architecture, not by convention.
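As a sketch, the whole model can be one flag on the tool plus one filter in the hub. This builds on the hypothetical ToolDefinition from the earlier sketch:

```typescript
// Sketch of the two-tier model — illustrative types, not a real SDK.
type Visibility = "public" | "private";

interface HubTool extends ToolDefinition {
  visibility: Visibility;
}

// The hub hands each consumer only the catalog it is allowed to see.
function catalogFor(
  caller: { authenticated: boolean },
  tools: HubTool[],
): HubTool[] {
  return tools.filter(
    (t) => t.visibility === "public" || caller.authenticated,
  );
}
```

Everything downstream, from the widget SDK to the WebMCP registration, asks catalogFor() instead of deciding for itself.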
Widgets Are Just Tool Calls With a Render Function
Here's the conceptual leap that unlocks the most value: a widget is not a component that fetches its own data. A widget is a tool call paired with a renderer.
```
Widget = ToolCall(name, args) + Renderer(result)
```
Your calendar widget doesn't need its own API route, its own fetch logic, its own error handling, its own caching strategy. It calls get_calendar_events — the same tool the LLM uses — and renders the result.
This means:
- The LLM and the widget always agree on the data. They're calling the same function with the same schema.
- Adding a widget costs near-zero backend work. The tool already exists. You're just building a visual layer.
- Tools become composable UI primitives. A dashboard is just a grid of tool calls, each with a renderer.
- The tool's schema drives the widget's interface. Input schema → form fields. Output schema → display template. You can generate basic widget UIs from the schema alone.
For public tools, the widget calls the hub directly from the browser. For private tools, the widget routes through the server, which adds session context and proxies the call. Same widget code, different execution path, decided by the tool's visibility category.
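Here's a minimal sketch of that pairing. The callTool parameter is assumed to pick the execution path, browser or server proxy, based on the tool's visibility:

```typescript
// Widget = tool call + renderer. Types are illustrative.
interface Widget<TArgs, TResult> {
  tool: string;
  args: TArgs;
  render: (result: TResult) => HTMLElement;
}

// `callTool` is assumed to route public tools to the browser-side hub
// client and private tools through the server proxy.
async function mountWidget<TArgs, TResult>(
  host: HTMLElement,
  widget: Widget<TArgs, TResult>,
  callTool: (name: string, args: TArgs) => Promise<TResult>,
): Promise<void> {
  const result = await callTool(widget.tool, widget.args);
  host.replaceChildren(widget.render(result));
}
```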
WebMCP: The Browser Becomes a Client
The W3C Web Machine Learning Community Group is drafting an API called WebMCP (editors from Microsoft and Google). It adds navigator.modelContext to the browser — a native surface for pages to:
- registerTool() — expose a tool to any AI agent visiting the page
- provideContext() — give agents structured context about the page
- client.requestUserInteraction() — let a tool pause for user confirmation before doing something destructive
This is behind a Chrome feature flag today. When it ships, any page can declare: "I have these tools. Here are their schemas. Here's how to call them."
If your MCP hub already has tools with schemas and handlers, registering them with WebMCP is mechanical. Your public tools become browser-agent-discoverable automatically. A user's AI assistant — whether it's built into the browser, running as an extension, or operating as a cloud agent — can see what your page offers and interact with it programmatically.
And here's what makes this click: the WebMCP registration isn't a separate process. It's not a second system you wire up after the fact. The tool is already in the hub catalog. The widget is already calling it to render UI. The hub already knows it's marked for public consumption. So registering it with navigator.modelContext is just: "also advertise this, since it's public anyway." The search widget is rendering results from search_available_products and, because that tool is flagged as public, it's also being advertised to visiting browser agents. No extra plumbing. No registration ceremony. The visibility category you already set is doing double duty.
The page is simultaneously consuming its own tools for UI and publishing them for external agents — not as two processes, but as one. One catalog, one visibility flag, two roles.
The page isn't just a visual surface anymore. It's a tool catalog — and it's already eating its own cooking.
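Here's a sketch of that double duty: one pass over the public catalog at page load, reusing the HubTool type and catalogFor() filter from earlier. The exact registerTool() argument shape is still a moving target in the draft spec, so treat it as an assumption:

```typescript
// Sketch: advertise every public hub tool to visiting browser agents.
// The registerTool() argument shape is assumed; the spec is a draft.
declare const hubTools: HubTool[];

const modelContext = (navigator as any).modelContext;
if (modelContext) {
  for (const tool of catalogFor({ authenticated: false }, hubTools)) {
    modelContext.registerTool({
      name: tool.name,
      description: tool.description,
      inputSchema: tool.inputSchema,
      // Public tools are safe to execute client-side.
      execute: (args: Record<string, unknown>) => tool.handler(args),
    });
  }
}
```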
A2A: The Server-to-Server Surface
Google's Agent-to-Agent protocol gives you the same multiplier on the server side. An A2A implementation exposes your hub's capabilities to external agent systems over HTTP:
- Discovery: GET /.well-known/agent-card.json — a machine-readable manifest of what your agent can do
- Execution: POST /a2a/message:send — send a task, get a result
- Streaming: POST /a2a/message:stream — same thing, but with real-time progress via SSE
- Task management: get status, list tasks, cancel in-flight work
The A2A layer doesn't reimplement your business logic. It routes incoming requests through your orchestrator, which calls your tools, which are the same tools powering your LLM, your widgets, and your WebMCP surface.
External agents don't need to understand your internal architecture. They see an agent card, they send a message, they get a result. Your hub handles everything behind the curtain.
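Sketched as framework-agnostic handlers, the two core endpoints reduce to thin functions over the hub. The orchestrator and hubTools bindings are hypothetical stand-ins:

```typescript
// Sketch: the A2A surface as thin routing, not reimplementation.
declare const hubTools: HubTool[];
declare const orchestrator: { handle(message: unknown): Promise<unknown> };

// GET /.well-known/agent-card.json — derived from the hub catalog.
function agentCard() {
  return {
    name: "my-hub-agent",
    skills: hubTools
      .filter((t) => t.visibility === "public")
      .map((t) => ({ id: t.name, description: t.description })),
  };
}

// POST /a2a/message:send — hand off to the orchestrator,
// which calls the same tools as every other consumer.
async function handleMessageSend(body: unknown): Promise<unknown> {
  return orchestrator.handle(body);
}
```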
The Compounding Effect of Bridged Tools
Most MCP hubs don't implement every tool natively. They bridge external MCP servers — connecting to third-party providers that expose their own tools through the protocol. A single hub might bridge a dozen external servers, each contributing their tools to the unified catalog.
Here's where the architecture compounds: bridged tools get the same four-consumer treatment as native tools. When you bridge a new MCP server into your hub, those tools immediately become:
- Available to your LLM
- Callable from widgets
- Registerable with WebMCP
- Discoverable via A2A
You didn't write those tools. You didn't design their schemas. You just connected a bridge. And now they're available everywhere.
This turns the hub into an integration platform. Every MCP server in the ecosystem becomes a potential source of capabilities that flow through your hub to all four consumer surfaces.
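A sketch of what "just connected a bridge" can mean in practice: wrap each remote tool so it joins the local catalog with a hub-assigned visibility flag. The two bridge helpers are hypothetical:

```typescript
// Sketch: bridged tools join the same catalog as native ones.
// These two bridge helpers are hypothetical.
declare function listRemoteTools(serverUrl: string): Promise<ToolDefinition[]>;
declare function callRemoteTool(
  serverUrl: string,
  name: string,
  args: Record<string, unknown>,
): Promise<unknown>;

async function bridge(
  serverUrl: string,
  visibility: Visibility,
): Promise<HubTool[]> {
  const remote = await listRemoteTools(serverUrl);
  return remote.map((t) => ({
    ...t,
    visibility, // the hub, not the remote server, decides exposure
    handler: (args) => callRemoteTool(serverUrl, t.name, args),
  }));
}
```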
What This Means for How We Build
If you take this architecture seriously, several things follow:
1. Stop building REST APIs for things that are already MCP tools.
If your LLM can call search_products, your frontend can too. Don't build GET /api/products/search as a separate thing. Route the widget through the tool.
2. Design tools as the primitive, not endpoints.
When speccing a new feature, start with the tool definition: name, schema, handler. The REST endpoint, the widget, the WebMCP registration, and the A2A skill all derive from that.
3. Let the hub own visibility.
Don't scatter access control across API gateways, frontend guards, and LLM system prompts. Put a public or private category on the tool in the hub. Everything downstream respects it.
4. Think in surfaces, not integrations.
"How do I expose this to the AI?" and "How do I show this in the UI?" and "How do I let external agents use this?" are the same question: "Which tool, and what visibility?"
5. Resources are the next frontier.
MCP defines resources as read-only data surfaces — URIs that return structured content. If tools are the new API endpoints, resources are the new database views. A dashboard widget backed by internal://analytics/daily-summary. A documentation browser backed by internal://docs/api-reference. Same hub, same visibility model, same four consumers. And here's the kicker: tools that piggyback resource creation — tools that produce resources as a side effect of execution — give you dynamic resources. A tool that generates a report doesn't just return a result; it mints a resource URI that any consumer can read later. The hub's resource surface grows organically as tools run. Static resources for reference data, dynamic resources for everything tools produce. The catalog writes itself.
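Here's a sketch of that piggybacking, with an in-memory map standing in for the hub's resource registry and a hypothetical report generator:

```typescript
// Sketch: a tool that mints a resource as a side effect of running.
// The in-memory store is a stand-in for the hub's resource registry.
const resources = new Map<string, unknown>();

// Hypothetical report generator standing in for real business logic.
declare function buildDailySummary(date: string): Promise<unknown>;

const generateReport: HubTool = {
  name: "generate_report",
  description: "Generate a daily summary and publish it as a resource",
  inputSchema: {
    type: "object",
    properties: { date: { type: "string", format: "date" } },
    required: ["date"],
  },
  visibility: "private",
  handler: async (args) => {
    const date = String(args.date);
    const report = await buildDailySummary(date);
    const uri = `internal://reports/${date}`;
    resources.set(uri, report); // the resource surface grows as tools run
    return { uri }; // any consumer can read the resource later
  },
};
```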
The Conclusion
There's a new participant in the conversation. When the consumer is an LLM, a browser agent, or an external AI system, the handshake needs to change. Not because the old patterns failed us before, but because, faced with these new consumers, they fail often.
MCP gives us that abstraction. A tool is a universal unit of capability: named, schemaed, executable, discoverable.
WebMCP puts it in the browser. A2A puts it on the network. The LLM orchestrator already had it. And a thin SDK turns it into a widget.
The MCP hub isn't a sidecar for your AI features. It's the center of gravity for your entire application. The sooner we start treating it that way, the less code we'll need to write — and the more surfaces we'll be able to serve.
Build the tool once. Let everything else derive from it.