
Wojciech Wentland

Posted on • Originally published at blog.wentland.io

Why I only build read-only MCP servers

Every MCP server I build is read-only. List, search, get, read. No create, update, delete, activate, purge.
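To make this concrete, here's a minimal sketch of what "read-only by construction" can look like. This is not code from any of my servers; the registry class, verbs, and tool names are all hypothetical. The point is that the write restriction lives at registration time, not in a prompt or a policy document:

```python
# Hypothetical sketch: enforce a read-only tool surface at registration time.
# The class, verb list, and tool names are illustrative, not from a real server.
READ_VERBS = ("list", "search", "get", "read")

class ReadOnlyToolRegistry:
    def __init__(self):
        self.tools = {}

    def register(self, name, handler):
        # Reject anything that isn't named like an inspection tool.
        if not name.startswith(READ_VERBS):
            raise ValueError(f"write-capable tool rejected: {name}")
        self.tools[name] = handler

registry = ReadOnlyToolRegistry()
registry.register("list_instances", lambda: ["i-123"])  # fine

try:
    registry.register("delete_instance", lambda i: None)  # blocked by design
except ValueError as e:
    print(e)  # write-capable tool rejected: delete_instance
```

A name-prefix check is a crude stand-in for a real review of what each handler does, but it captures the design intent: a write tool can't exist in the server, so the model can't call one.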

I've been running Claude Code with `--dangerously-skip-permissions` in environments where the agent has no write-capable MCP tools and no direct path to mutate production systems. I haven't had a single unwanted action against a production system in months. Not because I trust the model to never hallucinate. Because the tools it has access to can't turn a hallucinated action into a real API write.

Read-only doesn't make an agent safe. It removes an entire class of failures.

The failure mode isn't hypothetical

There's a post on r/ClaudeCode where Claude suggested tearing down a GPU instance, then executed it. The user never confirmed. The model said "tear down the H100 too," treated its own suggestion as user confirmation, and destroyed a running instance with hours of cached build artifacts and compiled kernels on it.

Claude hallucinated user confirmation and destroyed a running GPU instance. Source: r/ClaudeCode

The model later admitted: "I hallucinated you saying that. You never said those words. I said it, then executed it as if you'd agreed."

If that agent had read-only tools, it would have read the instance list, maybe suggested tearing something down, and then... nothing. The suggestion dies as text. No one loses a machine.

How I actually use agents

My workflow with Claude Code looks like this: I ask it to investigate something. It reads logs, searches code, pulls data from MCP servers, and comes back with an analysis. If the analysis leads to an action — creating a Jira ticket, updating a config, deploying a change — Claude drafts it. I review the draft, then I do the action myself.

The agent reads and analyzes. I act.

I trust the model's judgment on what to write in a ticket. The problem is it sometimes hallucinates that I asked it to do something I didn't. If the tool is read-only, the worst that happens is it reads data it was going to read anyway. If the tool has write access, the worst that happens is the Reddit post above.

Approval fatigue is the real problem

"But there's a confirmation prompt before destructive actions." Sure. Claude Code asks before running commands. The problem is approval fatigue. After confirming 50 read operations, you stop reading the prompts. You click yes. And then the 51st one is `vastai destroy instance 34122719`.

Anthropic wrote about this in their sandboxing post. They found that constant permission prompts paradoxically reduce security because users stop paying attention. Their solution was sandboxing: restrict what the agent can access so you don't need to ask as often. They reduced permission prompts by 84% while maintaining security.

Read-only MCP servers follow the same logic. If the server can't write, you don't need to confirm writes. The agent operates freely within the read boundary. No fatigue, no missed confirmation on a destructive action.
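The dispatch logic this implies is almost trivially small. A hedged sketch (the function and tool names are hypothetical, not Claude Code internals): reads run without a prompt, and anything else is refused outright rather than queued for a human to rubber-stamp.

```python
# Hypothetical runtime guard: auto-approve inspection tools, refuse the rest.
# With an all-read toolkit, the refusal branch is unreachable, so no prompts
# ever fire and there is nothing for a human to fatigue on.
READ_PREFIXES = ("list_", "search_", "get_", "read_")

def dispatch(tool_name, handler, *args):
    if tool_name.startswith(READ_PREFIXES):
        return handler(*args)  # inside the read boundary: run freely
    raise PermissionError(f"refused without human approval: {tool_name}")

assert dispatch("get_config", lambda: {"cache": "on"}) == {"cache": "on"}
```

The interesting property is the inverse of a confirmation prompt: instead of asking more often as risk goes up, the system is shaped so risky calls can't be issued at all.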

That's why I run `--dangerously-skip-permissions`. It sounds reckless until you realize the agent's entire toolkit is read-only. There's nothing dangerous to skip permission for.

What this doesn't cover

Read-only MCP servers are one boundary, not a complete agent security model. If you also give the agent bash access, cloud CLIs, `kubectl`, or production credentials through other channels, this design won't save you. Claude Code with `--dangerously-skip-permissions` can still run shell commands, edit files, and interact with whatever's reachable from the host. Anthropic's own documentation recommends using isolated environments when running in bypass mode, and their sandboxing approach combines filesystem isolation, network restrictions, and permission controls — not just tool-level restrictions.

This article is about the MCP boundary specifically. For me, that boundary matters because my agents talk to external systems almost exclusively through MCP. But it's one layer, not the whole stack.

Beyond the IDE

There's another reason I care about read-only MCP servers: they're portable. My workflow is Claude Code today, but the same servers work in any agent system that speaks MCP.

In a headless agent system — one where there's no human in the loop and no bash shell — the MCP boundary isn't just one layer. It's the only interface the agent has to external systems. If every MCP server it can reach is read-only, the agent literally cannot mutate production state. No sandboxing needed, no permission prompts, no approval fatigue. The tools themselves are the guardrail.

This matters if you're building agent systems for other users. Giving all users read access to your CDN config, build logs, or DNS records is usually fine. Giving all users write access is a different conversation entirely. Read-only MCP servers let you expose data to agents at scale without worrying about what happens when one of them hallucinates an action.

What read-only servers are good for

I run MCP servers for CDN management, CI/CD, log aggregation, DNS, and incident management. All read-only. The questions I ask look like: "What's the current CDN config for checkout?" "Which build failed last night?" "Compare caching rules between production and staging." "Draft a Jira ticket for the DNS change we discussed."

Claude produces the draft text. I copy it into Jira or GitHub myself. Nothing in this workflow needs the agent to write to the target system.

The credential argument

Getting a read-only API credential approved is a conversation. "I need read access to the CDN config API for an AI assistant that helps engineers investigate issues." Most teams say yes.

Getting a write credential is different. "I need an AI agent to be able to modify CDN configurations." That's a meeting, a security review, a discussion about rollback procedures, and probably a "no" or a "let's revisit in Q3."

Read-only credentials have a smaller blast radius and a simpler approval process. They also happen to cover every use case I actually have.

What this means for MCP servers

Every MCP server I publish follows this principle: read-only by design. The MCP security best practices describe scope minimization as a core principle: start with the minimum privileges, elevate only when required. My servers don't elevate.
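You can also express scope minimization on the client side. As a sketch only (the server and tool names here are hypothetical, and you should check Claude Code's current permission-rule syntax against its docs), a settings file can pin an agent to the read tools of a server even if that server ever grew write tools:

```json
{
  "permissions": {
    "allow": [
      "mcp__cdn__list_configs",
      "mcp__cdn__get_config",
      "mcp__cdn__search_logs"
    ],
    "deny": [
      "mcp__cdn__update_config",
      "mcp__cdn__purge_cache"
    ]
  }
}
```

Belt and suspenders: the server refuses to ship write tools, and the client refuses to call them.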

If someone opens a GitHub issue asking for write tools, the answer is: "This server is intentionally read-only. Fork it if you need write operations." That's not laziness. It's a design decision about what I want an AI agent to be able to do when it hallucinates an action at 3am.

I'm planning a series of production-ready read-only MCP servers for various platforms. More on that soon.

Top comments (2)

Rhumb

Strong framing.

I like the distinction here because read-only is not a complete safety model, but it is a real trust class. A hallucinated action can still show up in the plan, but if the visible tools are inspection-only it dies as text instead of turning into an external write.

The part I think teams still miss is that the boundary has to be whole.

If the same agent still has shell, write-capable filesystem access, or open network egress through another path, the system is not really operating in a read-only class. It just has one read-only surface inside a broader write-capable one.

So the useful operator question becomes:

  • which visible tools are inspect vs write vs execute vs egress
  • whether those authority classes are separated at discovery time
  • whether denied escalation attempts are logged as signals, not just blocked

That is where read-only stops being a nice permission setting and becomes actual containment design.

UPinar

Completely agree. The read-only constraint eliminates an entire failure class. For security/OSINT tools especially, there's zero reason to need write access — query, enrich, report. That's it.