TL;DR: MCP works fine remotely until tools need to read or write files. Then the shared-filesystem assumption breaks: remote servers cannot access client files, and generated artifacts get stranded on the server. I built remote-mcp-adapter to stage client files, capture tool outputs, and make remote MCP servers behave more like local ones.
Most MCP demos look clean when everything runs on one machine.
A tool reads a PDF, writes a screenshot, exports a file, and hands something useful back. No drama. No one thinks too hard about where files are coming from or where they end up.
Then you move the MCP server somewhere remote and the illusion dies.
I ran into this while working on a project to centralize MCP servers for an organization. On paper, MCP over HTTP feels straightforward. Put servers in containers or on a remote box, connect a client, and done.
Except not really.
Because a lot of useful MCP tools quietly assume one thing: the machine calling the tool and the machine executing the tool can both see the same filesystem.
The moment that assumption breaks, a whole category of tools starts falling apart.
The problem is not the protocol
MCP itself is not the issue.
The issue is that many real tools do more than return neat little JSON blobs. They read files, write files, or both. PDF processors, screenshot tools, document converters, browser automation, image generation, exporters, report builders — all of these end up touching files somewhere.
If the server is local, fine.
If the server is remote, you suddenly get two very stupid problems.
Problem 1: the server cannot see client files
A tool expects a path like this:
```
/Users/me/report.pdf
```
That path is valid on the client machine. It means nothing inside a remote container or VM.
So now the invocation looks valid, but the remote server cannot actually open the file. At that point people start doing ugly workarounds: manual uploads, weird path assumptions, base64 blobs where proper file handling should have existed.
It works just enough to be annoying.
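A toy sketch of what goes wrong, assuming a remote tool that does nothing but open the path it is given (`read_pdf_tool` is hypothetical, not a real MCP tool):

```python
from pathlib import Path

def read_pdf_tool(path: str) -> bytes:
    """A remote tool that trusts the path it receives."""
    p = Path(path)
    if not p.exists():
        # The invocation looked valid, but the path only exists on the client.
        raise FileNotFoundError(f"{path} does not exist on this server")
    return p.read_bytes()

# Perfectly valid on the caller's laptop, resolves to nothing in the container:
try:
    read_pdf_tool("/Users/me/report.pdf")
except FileNotFoundError as e:
    print(e)
```

Nothing in the protocol stops this call; it just fails at the filesystem.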
Problem 2: output files get stranded remotely
Now go the other direction.
A tool runs remotely and writes this:
```
/tmp/screenshot.png
```
Nice. The screenshot exists.
Just not where the client needs it.
The tool succeeded, but the useful artifact is sitting on the remote machine. So from the user’s point of view, the tool kind of worked and kind of did not.
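The failure is the mirror image of problem 1: the tool writes somewhere server-local and hands back a path as its "result" (`screenshot_tool` below is a hypothetical stand-in):

```python
import os
import tempfile

def screenshot_tool() -> str:
    """A remote tool that 'succeeds' by writing a server-local file."""
    fd, path = tempfile.mkstemp(prefix="screenshot-", suffix=".png")
    os.write(fd, b"\x89PNG placeholder, not a real image")
    os.close(fd)
    return path  # valid on the server, meaningless to the client

remote_path = screenshot_tool()
print(remote_path)  # the file exists here; the client only receives a string
```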
That, to me, is the real gap. Remote MCP is not only about forwarding requests to a tool server. It also needs a sane way to move artifacts across the client-server boundary.
The actual missing piece: artifact handling
Once MCP servers are remote, you need a layer that can do three things without being awkward about it:
- take files from the client side and make them available to remote tools
- capture files produced by those tools
- return those artifacts back to the client in a usable way
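Those three responsibilities could be sketched as a per-session workspace on the server side. To be clear, `SessionWorkspace` and its method names are my illustration here, not the adapter's actual API:

```python
from dataclasses import dataclass, field
from pathlib import Path
import tempfile

@dataclass
class SessionWorkspace:
    """Per-session directory holding staged inputs and captured outputs."""
    root: Path = field(
        default_factory=lambda: Path(tempfile.mkdtemp(prefix="mcp-session-"))
    )

    def stage_input(self, name: str, data: bytes) -> Path:
        """1) Take a client file and make it available to remote tools."""
        dest = self.root / "inputs" / name
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_bytes(data)
        return dest

    def capture_outputs(self) -> list[Path]:
        """2) Find files the tools produced in the session's output dir."""
        out = self.root / "outputs"
        return sorted(p for p in out.rglob("*") if p.is_file()) if out.exists() else []

    def fetch_artifact(self, path: Path) -> bytes:
        """3) Hand an artifact's bytes back so the client can download it."""
        return path.read_bytes()
```

Keeping everything under one session root also makes cleanup and isolation between clients straightforward.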
Without that, remote MCP works mostly for tools that never touch files.
That is fine for toy examples. It is not fine for the kind of tools people actually want to use.
What I built
I ended up building remote-mcp-adapter to deal with exactly this.
It sits between the MCP client and upstream MCP servers. The goal was not to replace MCP or wrap everything in a pile of custom behavior. The goal was simpler: stay transparent for normal MCP traffic, but handle the file boundary properly when tools need to read or write artifacts.
At a high level, it looks like this:
```
Client
  ↓
Remote MCP Adapter
  ↓
Upstream MCP Server(s)
```
The adapter adds the parts that remote setups seem to need but local demos tend to hide.
Session-safe file staging
If the client has files that a remote tool needs, the adapter stages them so the upstream server can access them safely.
That means the tool no longer has to pretend the client’s local file path magically exists on the server.
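One way to make staging concrete, sketched rather than taken from the adapter's real implementation: ship the file bytes alongside the call and rewrite path arguments before forwarding (`rewrite_args` and the `uploads` shape are assumptions of this sketch):

```python
import tempfile
from pathlib import Path

def rewrite_args(args: dict, uploads: dict, staging_dir: Path) -> dict:
    """uploads maps a client-local path -> its bytes, shipped with the call."""
    rewritten = dict(args)
    for key, value in args.items():
        if isinstance(value, str) and value in uploads:
            staged = staging_dir / Path(value).name
            staged.write_bytes(uploads[value])
            rewritten[key] = str(staged)  # the tool now gets a path that exists here
    return rewritten

staging = Path(tempfile.mkdtemp(prefix="staged-"))
call = rewrite_args(
    {"path": "/Users/me/report.pdf"},
    uploads={"/Users/me/report.pdf": b"%PDF placeholder bytes"},
    staging_dir=staging,
)
print(call["path"])  # now points inside the server-side staging dir
```

The upstream tool never learns the client path existed; it just sees a file it can open.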
Artifact capture and return
If a remote tool writes screenshots, exports, generated documents, or other output files, the adapter captures them and makes them retrievable by the client.
That closes the loop that usually breaks in remote MCP setups.
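One simple capture strategy, again as a sketch under assumptions rather than the adapter's exact mechanism: diff the tool's working directory before and after the call, and treat anything new as a returnable artifact:

```python
import tempfile
from pathlib import Path

def run_with_capture(tool, workdir: Path):
    """Run a tool and report any files it created under workdir."""
    before = set(workdir.rglob("*"))
    result = tool(workdir)
    new_files = [p for p in workdir.rglob("*") if p not in before and p.is_file()]
    return result, new_files

def fake_screenshot_tool(workdir: Path) -> str:
    # Stand-in for a real tool that writes an output file.
    (workdir / "screenshot.png").write_bytes(b"\x89PNG placeholder")
    return "ok"

work = Path(tempfile.mkdtemp(prefix="mcp-work-"))
result, artifacts = run_with_capture(fake_screenshot_tool, work)
print(result, [p.name for p in artifacts])
```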
Transparent forwarding
Everything else is just proxied through. I did not want to turn this into a whole alternative protocol. If a request does not need file or artifact handling, it should pass through normally.
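The forwarding rule fits in a few lines. `needs_file_handling` here is an assumed heuristic of my own, not anything defined by MCP or by the adapter:

```python
def needs_file_handling(request: dict) -> bool:
    # Assumed heuristic: the call carries uploaded file bytes.
    return bool(request.get("uploads"))

def forward(request: dict, upstream):
    if needs_file_handling(request):
        # Staging and artifact capture happen on this branch (out of scope here).
        raise NotImplementedError("file/artifact path handled separately")
    return upstream(request)  # plain pass-through for everything else

echo = lambda req: {"forwarded": req["method"]}
print(forward({"method": "tools/list"}, echo))
```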
Why I think this matters
Right now, a lot of MCP usage is still local, so this problem is easy to miss.
But if MCP keeps growing, more real deployments are going to look like this:
- containerized MCP servers
- shared internal MCP hubs
- Kubernetes-hosted MCP infrastructure
- remote enterprise tool servers
- centralized platforms exposing multiple MCP backends
In those setups, the shared-filesystem assumption is gone.
So this stops being a small inconvenience and starts becoming infrastructure. If remote MCP is going to feel normal, artifact movement has to be part of the story.
That is why I do not see this as just a proxy. It is closer to a missing transport layer for files and generated outputs.
The project
I open-sourced it here:
The whole point is to make remote MCP servers feel more local when tools need to consume client-side files or return generated artifacts.
If you are building in this space, I would genuinely like to know how you are solving this today. Maybe there is a cleaner pattern. Maybe this becomes part of how remote MCP systems are expected to work. Either way, the problem feels real enough that I did not want to keep papering over it with hacks.