The Model Context Protocol has created a growing ecosystem of tool servers -- filesystem operations, git integration, database access, API connectors. Most of these servers are written in TypeScript and communicate over stdio or SSE.
If you are building agent systems on the JVM, you face a choice: rewrite every tool in Java, or find a way to use what already exists. The useful answer is usually both -- and the bridge between them needs to be clean enough that the rest of your system does not care which approach a particular tool uses.
## The integration problem
MCP servers expose tools through a well-defined protocol. LangChain4j (which AgentEnsemble builds on) already has MCP client support via `McpClient` and `McpToolProvider`. But there is a gap: LangChain4j's MCP integration produces tools for its `AiServices` abstraction, not for AgentEnsemble's `AgentTool` interface.
The bridge needs to:
- Connect to any MCP server (stdio or SSE transport)
- Discover available tools from the server
- Adapt each MCP tool to the `AgentTool` interface
- Manage the server subprocess lifecycle
- Allow MCP tools and Java-native tools to coexist in the same agent's tool list
## McpToolFactory
The `agentensemble-mcp` module provides `McpToolFactory` as the primary entry point. Connect to any MCP-compatible server and get back standard `AgentTool` instances:
```java
try (StdioMcpTransport transport = new StdioMcpTransport.Builder()
        .command(List.of("npx", "--yes",
                "@modelcontextprotocol/server-filesystem", "/workspace"))
        .build()) {

    List<AgentTool> tools = McpToolFactory.fromServer(transport);

    Agent agent = Agent.builder()
            .role("File analyst")
            .goal("Analyze project structure")
            .tools(tools)
            .llm(model)
            .build();
}
```
The factory connects to the server, enumerates its tools, and wraps each one as an `McpAgentTool`. Because MCP tools already carry typed parameter schemas, the wrapper passes those schemas through to LangChain4j's `ToolSpecification` directly -- no intermediate Java record needed.
You can also filter to specific tools:
```java
List<AgentTool> tools = McpToolFactory.fromServer(transport,
        "read_file", "search_files", "directory_tree");
```
This is useful when a server exposes tools you do not want the agent to have access to -- write operations, for instance, when the agent should only read.
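The factory's varargs filter is an allowlist. If you instead want to drop a few tools and keep everything else, a denylist helper is easy to sketch. A minimal version, using a stand-in `AgentTool` shape for illustration (the real interface lives in AgentEnsemble):

```java
import java.util.List;
import java.util.Set;

public class ToolFilter {
    // Minimal stand-in for AgentEnsemble's AgentTool (assumed shape).
    public interface AgentTool {
        String name();
    }

    // Keep every tool whose name is NOT in the denied set,
    // e.g. stripping write operations from a read-only agent.
    public static List<AgentTool> excluding(List<AgentTool> tools, Set<String> denied) {
        return tools.stream()
                .filter(t -> !denied.contains(t.name()))
                .toList();
    }
}
```

The design choice is the same either way: the agent only ever sees the tool names you let through, so the LLM cannot call what is not in the list.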
## Convenience factories for common servers
The two most common MCP servers for coding workflows are the filesystem and git reference servers. `McpToolFactory` provides convenience methods that handle the subprocess setup:
```java
try (McpServerLifecycle fs = McpToolFactory.filesystem(projectDir);
     McpServerLifecycle git = McpToolFactory.git(projectDir)) {

    fs.start();
    git.start();

    List<AgentTool> allTools = new ArrayList<>();
    allTools.addAll(fs.tools());
    allTools.addAll(git.tools());
    // Use allTools in any agent
}
```
The filesystem server provides: `read_file`, `write_file`, `edit_file`, `search_files`, `list_directory`, `directory_tree`, `get_file_info`.

The git server provides: `git_status`, `git_diff_unstaged`, `git_diff_staged`, `git_diff`, `git_commit`, `git_add`, `git_log`, `git_branch`, `git_create_branch`, `git_checkout`, `git_show`, `git_reset`.
## Lifecycle management
MCP servers run as subprocesses. If you do not shut them down, you leak processes. `McpServerLifecycle` implements `AutoCloseable`, so try-with-resources handles cleanup:
```java
try (McpServerLifecycle server = McpToolFactory.filesystem(dir)) {
    server.start();
    // Use server.tools() ...
} // server is shut down here, subprocess is killed
```
For long-running ensembles, `McpServerLifecycle` also integrates with the ensemble's lifecycle listener. When the ensemble stops, any attached MCP servers are shut down automatically.
The lifecycle object exposes health checks:
```java
if (server.isAlive()) {
    List<AgentTool> tools = server.tools();
}
```
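Outside an ensemble -- say, a standalone daemon where try-with-resources does not fit the process lifetime -- a JVM shutdown hook gives a similar safety net. A minimal sketch, using plain `AutoCloseable` to stand in for any server handle (the registry class and its names are this example's invention, not part of the bridge):

```java
import java.util.ArrayList;
import java.util.List;

public class McpShutdown {
    private static final List<AutoCloseable> SERVERS = new ArrayList<>();

    static {
        // Close every registered server when the JVM exits,
        // so crashed or interrupted runs do not leak subprocesses.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            for (AutoCloseable server : SERVERS) {
                try {
                    server.close();
                } catch (Exception e) {
                    // Best effort during shutdown; report and continue.
                    System.err.println("Failed to close server: " + e);
                }
            }
        }));
    }

    public static <T extends AutoCloseable> T register(T server) {
        SERVERS.add(server);
        return server;
    }
}
```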
## Mixing MCP and Java-native tools
The most practical pattern is combining MCP tools with Java-native tools in the same agent. MCP provides the filesystem and git operations; Java-native tools handle domain-specific logic, calculations, or API calls:
```java
try (McpServerLifecycle fs = McpToolFactory.filesystem(projectDir)) {
    fs.start();

    Agent agent = Agent.builder()
            .role("Code reviewer")
            .goal("Review code changes and check style compliance")
            .tools(fs.tools())        // MCP filesystem tools
            .tools(List.of(           // Java-native tools
                    new StyleCheckerTool(),
                    new MetricsCalculatorTool()))
            .llm(model)
            .build();
}
```
Both tool types implement the same `AgentTool` interface. The agent sees a flat list of tools with names and descriptions. It does not know or care which ones are backed by an MCP subprocess and which are pure Java.
This composability is the point. You can start with MCP servers for rapid capability acquisition, then replace individual tools with Java implementations when you need more control, better performance, or fewer runtime dependencies -- without changing the agent configuration.
## The adapter pattern
Under the hood, each MCP tool is wrapped in an `McpAgentTool`:
```java
public final class McpAgentTool implements AgentTool {

    private final McpClient client;
    private final String toolName;
    private final String toolDescription;
    private final JsonObjectSchema parameters;

    @Override
    public String name() { return toolName; }

    @Override
    public String description() { return toolDescription; }

    @Override
    public ToolResult execute(String input) {
        // Parse input JSON, call client.executeTool(), wrap result
    }
}
```
The adapter preserves the MCP tool's name, description, and parameter schema. The parameter schema flows through to the LLM's function-calling interface, so the model sees the same tool signature regardless of whether the tool is MCP-backed or Java-native.
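For comparison, a Java-native tool implements the same interface directly, with no subprocess behind it. A minimal sketch, with `AgentTool` and `ToolResult` shapes assumed from the adapter above (the real types live in AgentEnsemble; the line-counting tool is invented for illustration):

```java
public class NativeToolExample {
    // Assumed shapes mirroring the adapter above; not the real AgentEnsemble types.
    public interface AgentTool {
        String name();
        String description();
        ToolResult execute(String input);
    }

    public record ToolResult(boolean success, String output) {}

    // A pure-Java tool: counts the lines in the text it is given.
    public static final class LineCountTool implements AgentTool {
        @Override
        public String name() { return "count_lines"; }

        @Override
        public String description() { return "Counts the lines in the given text"; }

        @Override
        public ToolResult execute(String input) {
            long lines = input.lines().count();
            return new ToolResult(true, Long.toString(lines));
        }
    }
}
```

Because both sides meet at the same interface, swapping an MCP-backed tool for a native one like this changes nothing in the agent configuration.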
## Connecting to custom MCP servers
Any MCP-compatible server works -- not just the reference implementations. If you have a custom server that exposes domain-specific tools (database queries, API operations, internal services), connect it the same way:
```java
// Custom MCP server over stdio
try (StdioMcpTransport transport = new StdioMcpTransport.Builder()
        .command(List.of("python", "-m", "my_custom_mcp_server"))
        .build()) {
    List<AgentTool> tools = McpToolFactory.fromServer(transport);
    // Use tools...
}
```
SSE transport works similarly for remote servers:
```java
SseMcpTransport transport = new SseMcpTransport.Builder()
        .sseUrl("http://mcp-server:8080/sse")
        .build();

List<AgentTool> tools = McpToolFactory.fromServer(transport);
```
## Tradeoffs
**Subprocess overhead.** Each MCP server is a separate process. For the reference servers, this means Node.js must be installed. The startup cost is measurable (typically 1-2 seconds for `npx` to resolve and start the server). For long-running agents, this is negligible; for one-shot scripts, it adds latency.
**Debugging across process boundaries.** When an MCP tool fails, the error comes back as a string from the subprocess. You lose Java stack traces and structured exception types. The bridge logs tool inputs and outputs at DEBUG level, but cross-process debugging is inherently harder.
**Schema fidelity.** MCP tool schemas are JSON Schema. The bridge passes these through as-is, which works well with LangChain4j's function-calling support. But if you need to validate inputs in Java before sending them to the server, you have to add that validation layer yourself.
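If you do want a pre-flight check on the Java side, a decorator over the tool is one way to sketch it. This version takes the validation rule as a caller-supplied predicate rather than implementing a JSON Schema validator; the `AgentTool` and `ToolResult` shapes are assumed for illustration:

```java
import java.util.function.Predicate;

public class ValidatedTool {
    // Assumed minimal tool shapes for illustration.
    public interface AgentTool {
        String name();
        ToolResult execute(String input);
    }

    public record ToolResult(boolean success, String output) {}

    // Wraps a tool so that invalid inputs are rejected before
    // they ever reach the MCP subprocess.
    public static AgentTool withValidation(AgentTool inner, Predicate<String> isValid) {
        return new AgentTool() {
            @Override
            public String name() { return inner.name(); }

            @Override
            public ToolResult execute(String input) {
                if (!isValid.test(input)) {
                    return new ToolResult(false,
                            "Invalid input for " + inner.name() + ": " + input);
                }
                return inner.execute(input);
            }
        };
    }
}
```

Returning a failed `ToolResult` rather than throwing keeps the error on the same path the LLM already handles, so the model can retry with corrected arguments.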
**No hot-reload.** If the MCP server crashes, its tools become unavailable. The bridge does not automatically restart servers. For production deployments, you would want health-check and restart logic around the lifecycle objects.
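A minimal restart loop around the lifecycle object might look like this. The `isAlive()`/`start()` surface comes from the article; the supervisor class itself is an assumption, and a production version would add backoff between attempts:

```java
public class McpSupervisor {
    // Minimal surface assumed from the article's McpServerLifecycle.
    public interface ServerLifecycle {
        boolean isAlive();
        void start();
    }

    // Ensure the server is running, restarting it up to maxRestarts times.
    // Returns true if the server is alive afterwards.
    public static boolean ensureAlive(ServerLifecycle server, int maxRestarts) {
        int attempts = 0;
        while (!server.isAlive() && attempts < maxRestarts) {
            server.start();   // Re-spawn the subprocess.
            attempts++;
        }
        return server.isAlive();
    }
}
```

Calling this before each batch of tool invocations (rather than per call) keeps the health-check overhead negligible.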
## When to use MCP vs. Java-native tools
| Consideration | MCP | Java-native |
|---|---|---|
| Ecosystem breadth | Large and growing | You build what you need |
| Runtime dependency | Node.js (for reference servers) | Pure JVM |
| Startup latency | 1-2s per server | Instant |
| Debugging | Cross-process | Same-process stack traces |
| Customization | Limited to server's API | Full control |
| Integration with Java types | String-based | Native records, type safety |
The practical pattern: start with MCP for rapid capability bootstrapping, move to Java-native tools for anything performance-sensitive or deeply integrated with your domain model.
The MCP bridge ships as part of AgentEnsemble; the bridge guide covers the full API and transport options.
Curious whether others are mixing MCP and native tools in their agent systems, and where the boundary between the two tends to settle in practice.