
Ely

Posted on • Originally published at medium.branie.it

Building an MCP server: teaching AI assistants about backups

My journey into the Model Context Protocol

Today I built a Model Context Protocol (MCP) server to connect AI assistants with Bareos backup infrastructure. Join me as I walk through building this integration and share what I learned along the way.

Or if you just want to see the result, check it out at https://github.com/edeckers/bareos-mcp-server

Photo by János Venczák on Unsplash

What is MCP?

The Model Context Protocol is Anthropic's solution for giving AI assistants like Claude access to external tools and data. It's a JSON-RPC based protocol that, in its simplest transport, runs over stdin/stdout.

The concept is straightforward: instead of manually running bconsole commands, copying output, and pasting it into a chat, you ask "show me the last 10 backup jobs" and the AI fetches it directly from your Bareos Director through the MCP server.

Why Bareos?

I've been managing my backups with Bareos for a while now. And while bconsole is powerful, it requires remembering specific command syntax for listing jobs, checking storage pools, or viewing client status.

Being able to query backup infrastructure conversationally, like "why did last night's backup fail?" or "how much storage is left in the Full pool?" seemed like a practical use case for MCP.

The journey

Understanding the Protocol

MCP is a JSON-RPC protocol with three main operations:

  1. Initialize: Server announces capabilities
  2. List tools: Server describes available tools
  3. Call tool: Client executes a tool with arguments

And that's pretty much it. The rest is tool implementation and error handling.
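Stripped to its essentials, that request/response loop can be sketched in Python. To be clear, `handle_request` and the single `list_jobs` tool below are my own illustration of the protocol shape, not code from the actual server:

```python
import json
import sys

def handle_request(req: dict) -> dict:
    """Dispatch the three core MCP operations (heavily simplified)."""
    method = req.get("method")
    if method == "initialize":
        result = {"capabilities": {"tools": {}}}
    elif method == "tools/list":
        result = {"tools": [{"name": "list_jobs",
                             "description": "List recent backup jobs."}]}
    elif method == "tools/call":
        # A real server would look up the tool and run it with req["params"]
        result = {"content": [{"type": "text", "text": "..."}]}
    else:
        return {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": req.get("id"), "result": result}

if __name__ == "__main__":
    # One JSON-RPC message per line on stdin, one response per line on stdout
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle_request(json.loads(line))), flush=True)
```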

Read-Only by design for now

I deliberately kept everything read-only initially, so no starting jobs, deleting volumes, or pruning backups. Just queries. This was a practical decision: I wanted to understand the protocol and tool design without worrying about accidentally breaking production backups. Mutable actions are planned for later versions.
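One cheap way to enforce that constraint is to validate every command against an allowlist of query verbs before it ever reaches bconsole. A minimal sketch, assuming a hypothetical `assert_read_only` guard; the prefixes are illustrative, not the repo's actual list:

```python
# Hypothetical read-only guard: only query-style bconsole commands pass.
READ_ONLY_PREFIXES = ("list", "llist", "status", "show")

def assert_read_only(command: str) -> str:
    """Raise if the command could mutate state (run, delete, prune, ...)."""
    if not command.strip().lower().startswith(READ_ONLY_PREFIXES):
        raise PermissionError(f"refusing non-read-only command: {command!r}")
    return command

assert_read_only("list jobs limit=10")       # passes through unchanged
# assert_read_only("delete volume=Full-0001")  # would raise PermissionError
```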

Emergent behavior

The interesting part came when I realized Claude could answer questions I hadn't explicitly coded for, and which would require multiple bconsole queries. Ask it "Which clients haven't backed up in the last 24 hours?" and it will:

  1. Call list_jobs to get recent jobs
  2. Call list_clients to get all clients
  3. Compare the two
  4. Provide an answer

I built primitive tools; the AI combines them to answer complex queries. That's the real value of MCP: you're building composable primitives, not a comprehensive API.
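That chain is easy to reproduce outside the AI: given the outputs of list_jobs and list_clients, the "missing backups" answer is just a set difference. A sketch with made-up data (the field names are assumptions, not the server's actual schema):

```python
from datetime import datetime, timedelta

# Illustrative data shaped roughly like list_jobs / list_clients output.
jobs = [
    {"client": "web01", "endtime": datetime(2024, 5, 2, 3, 0)},
    {"client": "db01",  "endtime": datetime(2024, 4, 28, 3, 0)},
]
clients = ["web01", "db01", "mail01"]

def stale_clients(jobs, clients, now, window=timedelta(hours=24)):
    """Clients with no job finishing inside the window: the answer the
    AI derives by combining list_jobs and list_clients."""
    recent = {j["client"] for j in jobs if now - j["endtime"] <= window}
    return sorted(set(clients) - recent)

print(stale_clients(jobs, clients, now=datetime(2024, 5, 2, 9, 0)))
# → ['db01', 'mail01']
```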

What I learned

Tool design matters

The quality of tool descriptions directly impacts how well the AI uses them. Vague descriptions lead to mistakes. Clear, specific descriptions with parameter explanations work reliably.

For example:

  • Don't do this: "List jobs"
  • Do this: "List recent backup jobs. Use the limit parameter to control how many jobs are returned (default: 50)"
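In MCP, that description lives in the tool's declaration alongside a JSON Schema for its parameters. A hedged sketch of what the better list_jobs declaration might look like (the exact fields in the real server may differ):

```python
# Hypothetical tools/list entry: the description and parameter docs
# are the only guidance the model gets about when and how to call it.
LIST_JOBS_TOOL = {
    "name": "list_jobs",
    "description": ("List recent backup jobs. Use the limit parameter to "
                    "control how many jobs are returned (default: 50)."),
    "inputSchema": {
        "type": "object",
        "properties": {
            "limit": {
                "type": "integer",
                "description": "Maximum number of jobs to return",
                "default": 50,
            },
        },
    },
}
```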

Testing without an AI assistant

You can test MCP servers with just echo and pipes:

echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | \
./bareos-mcp-server

No need to configure Claude Desktop or spin up a full client: just send JSON-RPC over stdin. This is exactly how Claude interacts with the server, which makes it a quick way to debug.

Build for composition

Don't try to predict every query users might want. Build small, focused tools that can be combined. The AI is surprisingly good at figuring out how to compose them.

Recommendations for building MCP servers

  • Start with a tool you already use. Pick something you interact with regularly, you'll understand the use cases and catch issues faster.
  • Begin with one tool. Get one tool working end-to-end before adding more. I started with list_jobs and expanded from there.
  • Write clear descriptions. Tool descriptions are the AI's only guide: be specific about parameters, formats, and return values.
  • Test with raw JSON-RPC first. Verify your tools work with manual calls before involving an AI assistant. Faster iteration, clearer debugging.
  • Start read-only if safety matters. Read-only operations let you learn the protocol without risking production systems. Add mutations once you're confident.

Try it yourself

The code is open source at https://github.com/edeckers/bareos-mcp-server with installation instructions for both Claude Code and Claude Desktop.

Useful resources

Building your own MCP server? These resources helped me:


Built something with MCP or considering it? Let me know in the comments!

If the Dutch language doesn't scare you, and you'd like to know more about what keeps me busy aside from my pet projects, check my company website https://branie.it! Maybe we can work together on something someday :)
