I’ve been building OpZero — an AI-native deployment platform that started as a way to get vibe-coded apps onto the internet without friction. Deploy from conversation, no git push, no CI/CD pipeline. It works. Ship an app to Cloudflare Pages in 2 seconds from a chat message.
But the thing we got working last night changes what OpZero actually is.
An MCP tool that starts Claude Code sessions
We deployed a claude_session_start MCP tool on Railway. It connects to a repo, spins up a Claude Code session, executes a task, and returns the result. A fully autonomous coding agent, callable as a tool by another agent.
The first successful run pointed at our own repo. Claude Code responded with "Hello! Ready to help with your OpZero project." and exited cleanly with code 0.
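As a rough sketch, a tool like this might shell out to Claude Code's headless mode (the `-p`/`--print` flag) with the target repo as the working directory. The function and result names here are hypothetical, not OpZero's actual implementation:

```python
import subprocess
from dataclasses import dataclass

@dataclass
class SessionResult:
    exit_code: int
    output: str
    succeeded: bool

def build_command(task: str) -> list[str]:
    # Headless, non-interactive run: execute one task and print the result.
    return ["claude", "-p", task]

def run_session(repo_dir: str, task: str, timeout: int = 600) -> SessionResult:
    # Run with the repo as the working directory so the agent
    # sees the project it is supposed to work on.
    proc = subprocess.run(
        build_command(task), cwd=repo_dir,
        capture_output=True, text=True, timeout=timeout,
    )
    return SessionResult(proc.returncode, proc.stdout, proc.returncode == 0)
```

An exit code of 0, as in that first run, maps to `succeeded=True`, which is what the calling agent actually checks.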
That's agents spawning agents. And it works.
Why this isn't just a party trick
The AI tooling ecosystem right now has a composition problem. You've got MCP servers that give agents hands — they can deploy, search, query databases, manage infrastructure. But each tool is isolated. The agent calls one, gets a result, calls the next. It's orchestration, but it's flat.
When one of those tools can spin up another agent, the topology changes. You go from a hub-and-spoke model (agent calls tools) to a graph (agents delegate to agents who call tools). The parent agent doesn't need to know how to do everything. It needs to know how to decompose a problem and delegate the pieces.
This is how human engineering teams actually work. A tech lead doesn't write every line of code. They break down the work, assign it, and integrate the results. We just gave AI agents the same capability.
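The tech-lead pattern can be sketched in a few lines: a parent decomposes a task and routes each piece to a child agent (plain callables here), then collects the results. A toy illustration, not OpZero code; a real resolver would match on declared capabilities rather than substring matching:

```python
from typing import Callable

Agent = Callable[[str], str]

def delegate(task: str, decompose: Callable[[str], list[str]],
             workers: dict[str, Agent]) -> list[str]:
    """Break `task` into subtasks and route each to a capable worker."""
    results = []
    for subtask in decompose(task):
        # Pick the first worker whose name appears in the subtask.
        worker = next(w for name, w in workers.items() if name in subtask)
        results.append(worker(subtask))
    return results
```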
The bigger picture: a composable agent nervous system
We've been developing what we call the Module Contract — a universal spec that lets any capability (MCP server, API, skill, memory store, or now agent session) participate in a unified system. The key ideas:
Modules don't know about each other. A deploy module doesn't import a preview module. They both declare what they can do and what context they need. A resolver in the middle handles the matching and composition.
Discovery is automatic. An agent doesn't browse a marketplace and install plugins. It expresses intent — "deploy this to production" — and the resolver finds the right module, checks governance policies, and executes. No installation step.
Governance is built in, not bolted on. Every invocation passes through a policy layer. Cost controls, access controls, data classification, audit logging. This is what makes dynamic discovery safe enough for enterprise use.
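The three ideas above can be sketched together: modules declare a capability and the context keys they need, the resolver matches intent to a module, and every invocation passes a policy hook first. The class and field names are invented for illustration, not the actual spec:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Module:
    capability: str                    # e.g. "deploy"
    requires: set[str]                 # context keys the module needs
    run: Callable[[dict], dict]

class Resolver:
    def __init__(self, policy: Callable[[str, dict], bool]):
        self.modules: list[Module] = []
        self.policy = policy           # governance: allow/deny hook

    def register(self, module: Module) -> None:
        self.modules.append(module)

    def resolve(self, intent: str, context: dict) -> dict:
        for m in self.modules:
            # Match on declared capability and available context,
            # then gate the invocation through the policy layer.
            if m.capability in intent and m.requires.issubset(context):
                if not self.policy(m.capability, context):
                    raise PermissionError(f"policy denied {m.capability}")
                return m.run(context)
        raise LookupError(f"no module satisfies: {intent}")
```

Note that no module imports another and nothing is "installed": registration makes a capability discoverable, and the policy check runs on every call.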
The claude_session_start tool fits into this contract like any other module. It declares its capabilities (start a coding session in a repo), its context requirements (repo URL, task description, auth), and its outputs (session result, exit code). The resolver can compose it with other modules — spin up a session to fix a bug, then deploy the result, then run tests against the deployment.
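That fix-then-deploy-then-test chain can be pictured as a simple composition where each module's outputs are merged into the shared context for the next one. The step names and keys here are hypothetical stand-ins for real modules:

```python
def compose(steps, context):
    """Run steps in order, threading each step's outputs forward."""
    for step in steps:
        context = {**context, **step(context)}
    return context

# Stand-ins for real modules; each reads from and writes to the context.
fix_bug   = lambda ctx: {"patched": True}                   # claude_session_start
deploy    = lambda ctx: {"url": "https://app.example.dev"}  # deploy module
run_tests = lambda ctx: {"tests_passed": ctx["patched"]}    # test module
```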
What we're actually building
OpZero started as a deployment platform. It's becoming an enablement platform for agents — a provider-agnostic infrastructure layer that makes agents more capable regardless of which LLM is driving them.
The key insight: the mega LLM providers (Anthropic, OpenAI, Google) are all building their own tool ecosystems and plugin marketplaces. Each one wants to be the full vertical stack. But enterprises know from the cloud wars how that plays out — you go deep with one vendor, and five years later you're spending millions on migration.
OpZero doesn't compete with any of them. We make their agents more capable. We're the neutral layer that handles the messy infrastructure composition so that any agent, from any provider, can discover capabilities, respect governance policies, and get things done.
Your AI builds it. We put it on the internet. And now we help it build other things too.
What's next
We're finalizing the Module Contract spec (which defines how any capability becomes agent-discoverable and composable), building the resolver runtime (the intelligence layer between "agent needs something" and "agent has something"), and expanding the first-party module set beyond deployment.
If you're building MCP servers, agent tooling, or enterprise AI infrastructure — we should talk. The nervous system needs neurons.
OpZero is an open, provider-agnostic platform for agent infrastructure. No vendor lock-in. Your tools, your agents, your rules.
Top comments (1)
Agents spawning agents is the right direction — and you've identified the composition problem correctly. Flat orchestration (call A, get result, call B) breaks down once the chain has more than 3-4 steps because context degrades.
What's interesting about your approach is that the MCP tool itself becomes an agent boundary. The result that `claude_session_start` returns needs to be structured so the calling agent knows what happened, what succeeded, what failed, and what to do next — without burning 5k tokens parsing raw output.

This is the agent-to-agent communication problem that mcp-fusion (github.com/vinkius-labs/mcp-fusion) tries to address at the framework level. When a tool response includes affordances — "next you can do X or Y" — the orchestrating agent doesn't need to reason as hard about what to do next. It just reads the options. For agent-spawning-agent patterns like yours, this could significantly reduce the reasoning overhead at each step.
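One way to picture that affordance idea, as a hedged sketch (the schema here is invented, not mcp-fusion's actual format): the tool returns a structured result whose `next` field lists the caller's options, so the orchestrator picks from them instead of parsing raw output.

```python
import json

def session_response(exit_code: int, summary: str) -> str:
    # Offer different next actions depending on how the session ended.
    affordances = (
        [{"action": "deploy", "reason": "session succeeded"},
         {"action": "run_tests", "reason": "verify the change"}]
        if exit_code == 0 else
        [{"action": "retry", "reason": "session failed"},
         {"action": "escalate", "reason": "needs a human"}]
    )
    return json.dumps({
        "succeeded": exit_code == 0,
        "summary": summary,
        "next": affordances,   # the caller just reads these options
    })
```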