Our team builds a lot of MCP servers — for ourselves and for external users. Over time, recurring patterns have emerged. Here are the key use cases we see over and over again, organized by complexity.
Level 0. Give the agent access to APIs
The simplest and most obvious use case. You ask the agent: "analyze the Telegram channel @llm_under_hood, identify topics and popular posts" — it calls the Telegram API, fetches posts, calculates metrics, and returns the analysis.
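At this level the MCP tool is just a thin wrapper over the API. As a minimal sketch, the analysis step might look like this — the post shape and the engagement score are illustrative assumptions, not the actual Telegram API response:

```typescript
// Hypothetical shape of a fetched channel post (illustrative, not the real Telegram API).
interface ChannelPost {
  id: number;
  text: string;
  views: number;
  forwards: number;
}

// Tool handler sketch: rank posts by a simple engagement score.
// A real MCP tool would first fetch the posts via the Telegram API.
function topPosts(posts: ChannelPost[], limit: number): ChannelPost[] {
  return [...posts]
    .sort((a, b) => (b.views + 10 * b.forwards) - (a.views + 10 * a.forwards))
    .slice(0, limit);
}
```

The point is not the scoring formula — it is that the agent gets back ready metrics instead of raw API pagination.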
Level 1. Automate routine by raising abstraction
AI agents frequently make mistakes — they forget where servers and data live and make syntax errors, even when everything is spelled out in context. An MCP server solves this by raising the abstraction level.
For example, I have three MCP servers written for a specific project. Each is 200–300 lines of TypeScript:
infra — vm_health generates a health report (12+ threshold alerts), container_logs returns logs, redis_query runs queries.
Sure, the agent can compose a long SSH command on its own, but it fails every other time. With MCP we remove the cognitive load:
// Without MCP: agent composes this and often gets it wrong
ssh user@server "docker exec redis redis-cli -a $PASS INFO memory | grep used_memory_human"
// With MCP: one tool call
redis_query({ server: "audioserver", command: "INFO memory" })
deps — dep_versions across 5 repositories, tag_api_types, update_consumer. Checking dependency versions, syncing API types between services — scripted and automatic.
s3 — S3 navigation: s3_org_tree, s3_device_files, s3_cat. Instead of aws s3 ls with endless paths — "show files for device X from yesterday".
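As a sketch, the redis_query tool from the infra server above could hide the SSH plumbing like this — the host map and the password handling are illustrative assumptions, not the actual implementation:

```typescript
// Illustrative host map; in the real server this would come from config.
const HOSTS: Record<string, string> = {
  audioserver: "user@10.0.0.5",
};

// Compose the SSH command the agent would otherwise have to write by hand.
// The Redis password is read from the remote environment, never from the model's context.
function buildRedisCommand(server: string, command: string): string {
  const host = HOSTS[server];
  if (!host) throw new Error(`unknown server: ${server}`);
  return `ssh ${host} "docker exec redis redis-cli -a $REDIS_PASS ${command}"`;
}
```

The agent only supplies a logical server name and a Redis command; hostnames, credentials, and quoting stay inside the tool.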
Level 2. Semantic layer for data
An MCP server can wrap not just an API, but a semantic layer. Data is already prepared and labeled — the agent doesn't need to know the database schema, it operates with business concepts.
Yes, you can connect an MCP server for GA4. But how do you account for all the custom tagging rules and the complex logic of merging data from different sources?
That's what ETL is for — it handles the processing. The MCP server wraps the result as a semantic layer, and then anyone in the company can ask:
- "show traffic insights for yesterday"
- "which ASNs should we block?"
- "which users generated the most revenue?"
The agent doesn't need to know table names, join logic, or filtering rules. The MCP server encapsulates all of that.
This changes who can use the tool. An analyst builds the semantic layer once — then the entire team uses it, including managers who don't know SQL.
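One way to sketch such a layer: each business concept maps to a prepared, parameterized query, so the tool surface speaks business language while the SQL stays inside. The table and column names below are invented for illustration:

```typescript
// Hypothetical prepared queries; real ones would encode the team's join and filtering rules.
const SEMANTIC_QUERIES: Record<string, string> = {
  traffic_insights:
    "SELECT source, SUM(sessions) AS sessions FROM fact_traffic WHERE day = $1 GROUP BY source",
  top_revenue_users:
    "SELECT user_id, SUM(revenue) AS revenue FROM fact_orders WHERE day = $1 " +
    "GROUP BY user_id ORDER BY revenue DESC LIMIT $2",
};

// The tool exposes business concepts; the agent never sees table names or join logic.
function resolveQuery(concept: string): string {
  const sql = SEMANTIC_QUERIES[concept];
  if (!sql) throw new Error(`unknown concept: ${concept}`);
  return sql;
}
```

An analyst maintains the query map; everyone else just asks for "traffic insights for yesterday".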
Level 3. Shared authorization and access control
One MCP server can serve the entire company.
Example: Google Search Console. Instead of handing out credentials to everyone — one internal OAuth. Connect to the MCP server, authenticate via corporate SSO, get access based on your role.
Or an MCP server that gives some people access to yesterday's revenue figures while withholding them from others. Role-based access at the tool level.
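A minimal sketch of that gating, checked once per tool call after the user is identified via SSO — the roles and tool names here are illustrative assumptions:

```typescript
// Illustrative role-to-tool permission map; a real server would load this from config.
const PERMISSIONS: Record<string, Set<string>> = {
  analyst: new Set(["traffic_insights", "revenue_yesterday"]),
  manager: new Set(["traffic_insights"]),
};

// Deny by default: unknown roles and unlisted tools get no access.
function canCall(role: string, tool: string): boolean {
  return PERMISSIONS[role]?.has(tool) ?? false;
}
```

The same check can also filter the tool list itself, so users never see tools they cannot call.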
This is already the industry standard. Sentry, Stripe, GitHub, Atlassian — all offer remote MCP servers with OAuth. Zero-config for the user: add a URL, log in via browser, start working.
Building MCP servers: a skill with best practices
We analyzed the source code and documentation of 50 production MCP servers from Stripe, Sentry, GitHub, Cloudflare, Supabase, Linear, Grafana, Playwright, AWS, Terraform, MongoDB, and others.
We packaged it as a Claude Code skill — 23 sections covering:
- Architecture: transport choice (STDIO vs StreamableHTTP), deployment models, OAuth 2.1
- Tool design: naming conventions, writing descriptions for LLMs, managing tool count (1 to 1400+)
- Implementation: error handling, security, prompt injection protection, token optimization
- Operations: debugging with MCP Inspector, LLM-based eval testing, Docker deployment
- Industry patterns: top 35 patterns from production, pre-release checklist
Drop it into your .claude/skills/ directory and run /mcp-guide:
MCP Building Guide Skill on GitLab
The agent will use these best practices automatically when planning, developing, or reviewing MCP servers.