Raju Dandigam
MCP Without the Setup Pain: Using Docker MCP Toolkit with TypeScript Agents

Introduction

Model Context Protocol, usually called MCP, has quickly become one of the most important ideas in AI application development. It gives AI tools and agents a standard way to connect to external systems such as filesystems, GitHub, databases, browsers, documentation, and internal APIs.

The protocol is useful because it gives agents a common tool interface. Instead of every AI application inventing its own way to call tools, MCP creates a shared pattern for exposing capabilities.

However, the protocol is only one part of the story. The real pain starts when developers need to run multiple MCP servers locally. One server may need Node.js, another may need Python, another may need browser dependencies, and another may need OAuth or API keys. Suddenly, your agent is not just an AI workflow. It is a small distributed system running on your laptop.

Docker MCP Toolkit tries to solve that operational problem. It does not replace MCP, and it does not make your agent intelligent by itself. Its value is simpler and more practical: it helps you discover, configure, run, and manage MCP servers as containerized tools through Docker Desktop and the Docker MCP Gateway.

The Real MCP Problem Is Setup

A TypeScript agent may look simple at first. It receives a user request, asks an LLM what to do next, and then calls tools. But those tools need to run somewhere.

Imagine a code-review agent that needs three capabilities. It needs GitHub access to read pull request metadata. It needs filesystem access to inspect local files. It needs Playwright access to open a preview deployment and check whether the application still works.

Without Docker, each tool may come with a different setup process. You may need to install Node.js packages for one server, Python packages for another server, browser dependencies for Playwright, and local credentials for each integration. That might be acceptable for one developer on one machine. It becomes painful when a second developer joins, when the setup moves to CI, or when the team needs consistent tool versions.

This is the same problem Docker has always been good at solving. A tool should bring its runtime and dependencies with it. Developers should not need to manually reproduce a long setup document just to run the same agent workflow.

Docker's MCP documentation describes the Toolkit as a Docker Desktop management interface for setting up, managing, and running containerized MCP servers in profiles and connecting them to AI agents. It also highlights profile-based organization, integrated tool discovery, and zero manual setup as key features.

What Docker MCP Toolkit Actually Does

Docker MCP Toolkit sits between your AI client and your MCP servers. The AI client might be Claude Desktop, Cursor, VS Code, Docker AI Agent, or your own local TypeScript agent. The MCP servers are the tools that perform actions.

The Toolkit helps with the operational layer. It lets you browse MCP servers from Docker's MCP Catalog, add servers to profiles, connect clients, and run those servers through the Docker MCP Gateway. Docker's MCP Catalog documentation says the catalog contains more than 300 verified MCP servers packaged as container images with versioning, provenance, and security updates.

That packaging matters. A containerized MCP server can include the runtime it needs, the dependencies it needs, and a more predictable execution environment. The Docker MCP Gateway then manages the server lifecycle. Docker's gateway documentation explains that when an AI application needs a tool, the gateway identifies the correct server, starts it as a Docker container if needed, injects required credentials, applies security restrictions, forwards the request, and returns the result.
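That request lifecycle can be sketched in a few lines of TypeScript. This is an illustrative model of the gateway's behavior as described above, not its actual implementation; the server registry, image names, and the `startContainer` stub are hypothetical stand-ins.

```typescript
// Illustrative sketch of the gateway request lifecycle: identify the
// server, start its container if needed, then forward the request.
// Registry contents and image names are hypothetical.

type GatewayRequest = { tool: string; args: Record<string, unknown> };

type ServerEntry = {
  image: string;        // container image for the MCP server
  running: boolean;     // lifecycle state managed by the gateway
  credential?: string;  // injected at start, never stored in the profile
};

const registry = new Map<string, ServerEntry>([
  ["github", { image: "mcp/github", running: false, credential: "GITHUB_TOKEN" }],
  ["filesystem", { image: "mcp/filesystem", running: true }],
]);

function startContainer(entry: ServerEntry): void {
  // Placeholder for "docker run" plus credential injection and
  // security restrictions (scoped mounts, network policy, and so on).
  entry.running = true;
}

function handle(request: GatewayRequest): string {
  // 1. Identify the correct server from the tool name prefix.
  const serverName = request.tool.split(".")[0];
  const entry = registry.get(serverName);
  if (!entry) throw new Error(`No MCP server for tool ${request.tool}`);

  // 2. Start the container if it is not already running.
  if (!entry.running) startContainer(entry);

  // 3. Forward the request and return the result (simulated here).
  return `routed ${request.tool} to ${entry.image}`;
}

console.log(handle({ tool: "github.get_pull_request", args: {} }));
```

The agent never sees any of this. It sends a tool call and gets a result back; the container bookkeeping stays on the gateway side.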

The important point is that your agent does not need to know how every MCP server is installed. It only needs to connect through the gateway.

Architecture Overview

Here is the architecture in one view: the AI client talks to the Docker MCP Gateway, and the gateway routes requests to containerized MCP servers, which are organized into profiles.

The profile defines which servers are available for a workflow. For example, a frontend development profile might include GitHub, filesystem, Playwright, and documentation search. A backend profile might include GitHub, PostgreSQL, Redis, and observability tools.

Docker's profile documentation says profiles organize servers into named collections for different projects, and different AI applications can connect to different profiles. It also notes that profiles can be shared with teams through OCI-compliant registries, while credentials are not included in the shared profile for security reasons.

That gives teams a cleaner model. The profile defines the approved toolset. Each developer configures their own credentials. The agent connects to the profile instead of a random collection of local scripts.

Getting Started with Docker MCP Toolkit

The easiest path is through Docker Desktop. Docker's current documentation recommends the MCP Toolkit interface in Docker Desktop, especially for discovery and profile management. The get-started guide explains that you can create a profile from the Profiles tab, browse servers from the Catalog tab, add them to the profile, and connect supported clients from the Clients tab.

A simple setup flow looks like this:

  1. Open Docker Desktop
  2. Select MCP Toolkit
  3. Create a profile named frontend-agent
  4. Add GitHub, filesystem, and Playwright servers from the Catalog tab
  5. Configure required credentials or OAuth permissions
  6. Connect your AI client to the profile

For clients that are not directly listed in Docker Desktop, Docker documents a manual stdio configuration using the gateway command:

{
  "servers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": ["mcp", "gateway", "run", "--profile", "frontend-agent"],
      "type": "stdio"
    }
  }
}

This is a useful pattern because many MCP clients support launching a local MCP server process over stdio. In this case, the process is the Docker MCP Gateway, and the gateway manages the actual MCP server containers behind it.
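Under that stdio configuration, the client and the gateway exchange newline-delimited JSON-RPC 2.0 messages. The sketch below builds the two most important request shapes, the initialize handshake and a tools/call, as plain objects. The method names follow the MCP specification; the client name, protocol version string, and tool arguments are placeholder values.

```typescript
// JSON-RPC 2.0 request shapes an MCP client sends to the gateway over
// stdio. Client name, protocol version, and arguments are placeholders.

type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
};

let nextId = 0;

function makeRequest(method: string, params: Record<string, unknown>): JsonRpcRequest {
  return { jsonrpc: "2.0", id: ++nextId, method, params };
}

// First message: the initialize handshake.
const initialize = makeRequest("initialize", {
  protocolVersion: "2025-03-26", // negotiated with the server
  capabilities: {},
  clientInfo: { name: "frontend-agent", version: "1.0.0" },
});

// Later: a tool invocation that the gateway routes to the right container.
const toolCall = makeRequest("tools/call", {
  name: "github.get_pull_request",
  arguments: { url: "https://github.com/example/app/pull/42" },
});

// Each message is written to the gateway process's stdin as one line.
console.log(JSON.stringify(initialize));
console.log(JSON.stringify(toolCall));
```

In practice an MCP client SDK builds and frames these messages for you; the point is that "connect to the gateway" is just a normal stdio JSON-RPC session.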

A Simple TypeScript Agent Example

MCP client SDK APIs vary by transport and package version, so the example below is intentionally simple. The goal is to show the shape of the application, not to bury the article in SDK boilerplate.

A TypeScript agent using MCP tools usually follows this pattern:

type ToolCall = {
  name: string;
  arguments: Record<string, unknown>;
};

type ToolResult = {
  content: string;
};

async function callMcpTool(tool: ToolCall): Promise<ToolResult> {
  // In a real application, this call goes through your MCP client transport.
  // Docker MCP Gateway handles routing to the correct containerized server.
  console.log(`Calling MCP tool: ${tool.name}`);

  return {
    content: `Result from ${tool.name}`
  };
}

async function reviewPullRequest(prUrl: string) {
  const prDetails = await callMcpTool({
    name: "github.get_pull_request",
    arguments: { url: prUrl }
  });

  const changedFiles = await callMcpTool({
    name: "github.list_changed_files",
    arguments: { url: prUrl }
  });

  const packageJson = await callMcpTool({
    name: "filesystem.read_file",
    arguments: { path: "/workspace/package.json" }
  });

  return {
    prDetails: prDetails.content,
    changedFiles: changedFiles.content,
    packageJson: packageJson.content
  };
}

reviewPullRequest("https://github.com/example/app/pull/42")
  .then(console.log)
  .catch(console.error);

In a real implementation, callMcpTool would use an MCP client transport connected to the Docker MCP Gateway. The gateway would route github.* calls to the GitHub MCP server container and filesystem.* calls to the filesystem MCP server container.
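A wired-up version of callMcpTool might look like the sketch below. The McpClient interface is a stand-in for whichever SDK client you use; its callTool method mirrors the MCP tools/call operation, and the fake client here exists only so the agent logic can be exercised without a running gateway.

```typescript
// Sketch of callMcpTool backed by a client transport. McpClient is a
// hypothetical stand-in interface for your SDK's client.

type ToolCall = { name: string; arguments: Record<string, unknown> };
type ToolResult = { content: string };

interface McpClient {
  callTool(name: string, args: Record<string, unknown>): Promise<{ text: string }>;
}

function makeCallMcpTool(client: McpClient) {
  return async function callMcpTool(tool: ToolCall): Promise<ToolResult> {
    // The gateway behind the client routes github.* and filesystem.*
    // calls to the right containerized server.
    const result = await client.callTool(tool.name, tool.arguments);
    return { content: result.text };
  };
}

// A fake client is enough to exercise the agent logic in tests.
const fakeClient: McpClient = {
  async callTool(name) {
    return { text: `Result from ${name}` };
  },
};

const callMcpTool = makeCallMcpTool(fakeClient);

callMcpTool({
  name: "github.get_pull_request",
  arguments: { url: "https://github.com/example/app/pull/42" },
}).then((r) => console.log(r.content));
```

Injecting the client this way also means the same reviewPullRequest workflow runs unchanged against the fake client in unit tests and against the real gateway in development.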

The agent itself stays clean. It is not installing GitHub dependencies. It is not launching Playwright. It is not managing Python or Node runtimes for individual tool servers. It is asking for tools by name, and Docker handles the operational boundary.

Why This Matters for TypeScript Agents

TypeScript is a strong fit for agent applications because it helps define tool contracts, workflow state, structured outputs, and runtime validation. But TypeScript alone does not solve the environment problem. A typed tool call still fails if the MCP server is not installed correctly, if the browser dependency is missing, or if a credential is configured differently across machines.
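Runtime validation is worth making concrete. The sketch below uses a plain type guard to reject malformed tool results before they reach the rest of the workflow; in practice you might reach for a schema library such as zod, and the PullRequestDetails shape here is an invented example, not a real server's output format.

```typescript
// Runtime validation for tool results: a typed contract plus a guard
// that checks the shape before the agent trusts the data.
// PullRequestDetails is a hypothetical example shape.

type PullRequestDetails = {
  title: string;
  author: string;
  changedFileCount: number;
};

function isPullRequestDetails(value: unknown): value is PullRequestDetails {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.title === "string" &&
    typeof v.author === "string" &&
    typeof v.changedFileCount === "number"
  );
}

// Parse a tool result and fail loudly instead of propagating bad data.
function parseToolResult(raw: string): PullRequestDetails {
  const parsed: unknown = JSON.parse(raw);
  if (!isPullRequestDetails(parsed)) {
    throw new Error("Tool returned an unexpected shape");
  }
  return parsed;
}

const ok = parseToolResult('{"title":"Fix login","author":"raju","changedFileCount":3}');
console.log(ok.title); // "Fix login"
```

The guard turns "a tool somewhere returned garbage" into an error at the boundary, which is exactly where an agent workflow wants to catch it.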

Docker MCP Toolkit makes the tool layer more repeatable. A team can agree that a specific profile is the standard development toolset. One developer can use it from Cursor. Another can connect it to Claude Desktop. A third can connect a custom TypeScript agent. The server collection stays consistent.

This becomes more important as agents move beyond simple demos. A real code assistant may need repository access, issue tracker access, local file access, test execution, browser automation, and documentation search. Without a management layer, MCP server sprawl becomes a real problem.

Where Docker Helps Most

Docker helps most when your agent needs more than one or two tools. If you are only testing a single local MCP server, manual setup may be fine. But if your workflow depends on several MCP servers, different runtimes, and credentials, the Docker approach becomes much more useful.

It also helps when teams need consistency. A new developer should not need to install five runtimes and follow a long checklist before trying an agent workflow. The closer the setup gets to "pull the profile, configure credentials, connect the client," the easier it becomes to share.

Docker also helps with security boundaries. MCP servers are powerful because they can touch real systems. That also makes them risky. A filesystem tool should not automatically access your entire machine. A browser tool should not have unlimited permissions. A GitHub tool should use scoped credentials. Running tools through a gateway and containerized servers does not remove all risk, but it gives teams a better place to apply isolation and control.

The Docker MCP Gateway repository describes this gateway pattern as AI Client → MCP Gateway → MCP Servers, with servers running as Docker containers and the gateway providing a unified interface, secrets handling, OAuth integration, and dynamic discovery.

What This Does Not Solve

Docker MCP Toolkit is not magic. It does not make a weak agent design reliable. It does not decide which tool should be called. It does not validate every tool result for you. It does not remove the need for approval gates when an agent can modify files, open pull requests, deploy code, or touch production-like systems.

It also does not mean every MCP server is automatically safe. You still need to choose trusted servers, limit permissions, review tool access, and avoid giving broad credentials to experimental workflows. Docker's catalog and container packaging improve the operational story, but security still depends on how the tools are configured and what the agent is allowed to do.

There is also a learning curve. Developers still need to understand MCP concepts such as clients, servers, tools, transports, and permissions. Docker simplifies the runtime and setup problem. It does not eliminate the need to design the agent workflow carefully.

A Practical Use Case

A good first use case is a local code review assistant. Keep it simple. Give it access to GitHub for pull request metadata, filesystem access to the local repository, and Playwright access to a preview URL.

The agent flow can be straightforward:

  1. Fetch pull request metadata through the GitHub server
  2. Read the changed files through the filesystem server
  3. Open the preview URL with Playwright and confirm the application loads
  4. Produce a review summary for a human to act on

This is useful because it is realistic but still safe enough for a first experiment. The agent is not deploying anything. It is not merging code. It is gathering context and producing a review summary.

When to Use Docker MCP Toolkit

Use Docker MCP Toolkit when you are building agents that need multiple external tools, when you want repeatable local setup across a team, or when you want MCP servers to run in isolated containers instead of directly on every developer machine.

It is especially useful for TypeScript agent projects that combine GitHub, filesystem, browser automation, documentation search, databases, or cloud service tools. It is also useful when you want the same profile available across multiple AI clients.

Skip it for very small experiments. If you are testing one MCP server for an hour, manual setup may be faster. Bring in Docker MCP Toolkit when the setup starts becoming part of the problem.

Conclusion

MCP standardizes how agents talk to tools. Docker MCP Toolkit standardizes how those tools are discovered, configured, run, and shared.

That distinction matters. The future of agent development is not only about better prompts or smarter models. It is also about safer and more repeatable tool execution. Agents become more useful when they can access real systems, but they become harder to manage when every tool brings its own runtime, secrets, permissions, and setup instructions.

Docker MCP Toolkit gives TypeScript developers a practical way to manage that complexity. It lets teams create profiles, run MCP servers as containers, connect clients through a gateway, and reduce the dependency chaos that comes with multi-tool agents.

For a small prototype, you may not need it. For a real agent workflow that depends on GitHub, files, browsers, databases, or internal tools, Docker MCP Toolkit can make MCP feel less like a pile of scripts and more like a manageable development platform.
