DEV Community

Marcel-Felix Krause

MCP Has a Product Analytics Problem

If you build a web app today, you have dozens of tools to understand how people use it. PostHog, Amplitude, Mixpanel. Page views, funnels, retention, cohorts. You can answer almost any product question within minutes.

Now try doing the same for an MCP server.

My friend Johann and I have been building MCP servers and ChatGPT integrations for the past year. We shipped tools, deployed them, and then realized: we had error logs and basic monitoring, but zero insight into how people were actually using our product.

We could see when things broke. We couldn't see what was working.

The observability layer exists. The product analytics layer doesn't.

Let's be fair: MCP developers aren't completely blind. You can hook up Sentry for error tracking. You can pipe logs to Datadog. You can build basic request counting yourself. The engineering observability side of MCP is solvable with existing tools.

But product analytics is a completely different thing. Observability tells you "is my server healthy?" Product analytics tells you "are people getting value from my product?"

In a web app, PostHog or Amplitude answers questions like: which features do users love? Where do they drop off? What does retention look like? What drives conversion? These tools were built for a world where humans click buttons and visit pages.

MCP works differently. There are no pages. There are no buttons. The user talks to an LLM, and the LLM decides which tools to call, in what order, and with what parameters. The user never directly interacts with your code. The model is the intermediary.

PostHog doesn't know what an MCP tool call is. Neither does Amplitude. You can track that your server is up and responding. But you can't answer: which of my 15 tools do people actually find useful? What sequences of tool calls lead to successful outcomes? Are users coming back after the first session? What usage patterns correlate with paying customers?

That's the gap.

We felt this ourselves

We had an MCP server with 12 tools. Error monitoring was set up, logs were clean, everything looked healthy. But after a week in production, we manually grepped through logs and discovered that 3 of our tools were never called by any LLM. Not once. Our server was "working perfectly" from an observability perspective. From a product perspective, 25% of our features were dead weight, and we had no idea.
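Here's roughly what that log grep boiled down to, as a Python sketch. The log lines, tool names, and log format are invented for illustration; a real MCP server's logs will be shaped differently:

```python
from collections import Counter

# Hypothetical log lines; a real MCP server's logs will look different.
log_lines = [
    "2024-05-01T10:00:00Z tool_call name=search_docs",
    "2024-05-01T10:01:12Z tool_call name=search_docs",
    "2024-05-01T10:02:30Z tool_call name=create_ticket",
]

# Every tool the server registers, including the ones we suspect are dead.
registered_tools = {"search_docs", "create_ticket", "export_csv", "get_weather"}

def dead_tools(lines, tools):
    """Return registered tools that never appear in the call log."""
    called = Counter()
    for line in lines:
        if "tool_call" in line:
            called[line.split("name=", 1)[1].strip()] += 1
    return sorted(tools - called.keys())

print(dead_tools(log_lines, registered_tools))
# → ['export_csv', 'get_weather']
```

Trivial code, but the point is that nobody should have to write it by hand against raw logs a week after launch.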

Another time, users reported that "something felt off." No errors in Sentry. Server metrics looked fine. We spent two days digging through raw logs before we found that a specific parameter combination was causing slow responses for about 15% of calls. Not errors, just bad user experience. A product analytics dashboard with per-tool latency breakdowns would have shown this instantly.
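A per-tool latency breakdown is a few lines of Python once you have (tool, latency) samples; the numbers below are invented to show the shape of the problem, with one tool hiding a slow tail:

```python
import statistics
from collections import defaultdict

# (tool_name, latency_ms) samples from request logs; numbers are illustrative.
samples = [
    ("search_docs", 120), ("search_docs", 140), ("search_docs", 4200),
    ("create_ticket", 90), ("create_ticket", 95),
]

def latency_by_tool(samples):
    """Summarize latency per tool so slow outliers stand out immediately."""
    by_tool = defaultdict(list)
    for tool, ms in samples:
        by_tool[tool].append(ms)
    return {
        tool: {"median_ms": statistics.median(v), "max_ms": max(v)}
        for tool, v in by_tool.items()
    }

report = latency_by_tool(samples)
# search_docs: healthy median, ugly max — exactly the "no errors,
# just bad user experience" case that server metrics average away.
```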

The pattern kept repeating: our monitoring told us the system was healthy, but couldn't tell us whether the product was actually good.

Why this matters right now

MCP is growing fast. Claude, ChatGPT, Cursor, Windsurf, and more clients are adopting the protocol. The ecosystem is exploding, and developers are moving beyond hobby projects into real products.

Some are starting to charge money. Whether you're building a commercial MCP server, an internal tool for your company, or an open-source project you want people to actually use, error tracking alone isn't enough. You need product analytics:

  • Which tools do people actually use? If you have 15 tools and only 4 get called regularly, that tells you where to invest your time.
  • What does retention look like? Are users coming back after the first session, or do they try it once and never return?
  • What are the funnels? Which sequences of tool calls lead to successful outcomes? Where do sessions drop off?
  • What drives value? Which tool call patterns correlate with engaged users or paying customers?

You can have perfect uptime and zero errors, and still have a product that nobody finds useful. Without product analytics, you'd never know.

What MCP product analytics actually looks like

The good news: it's solvable. It just requires thinking about analytics at the protocol level instead of the UI level.

Tool-level usage analytics. Not just "how many requests did my server handle," but which tools are hot, which are cold, and how usage changes over time. This is the MCP equivalent of feature adoption tracking.
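As a sketch, feature-adoption-style tracking can be as simple as bucketing call events by ISO week (the dates and tool names below are made up):

```python
from collections import Counter
from datetime import date

# (call_date, tool) events; dates and names are illustrative.
events = [
    (date(2024, 5, 1), "search_docs"),
    (date(2024, 5, 2), "search_docs"),
    (date(2024, 5, 9), "search_docs"),
    (date(2024, 5, 9), "create_ticket"),
]

def weekly_usage(events):
    """Count calls per (ISO year, ISO week, tool) to see which tools
    heat up or cool off over time."""
    counts = Counter()
    for day, tool in events:
        year, week, _ = day.isocalendar()
        counts[(year, week, tool)] += 1
    return counts
```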

Session-level funnels. In traditional analytics, a funnel is "user visits landing page, signs up, completes onboarding." In MCP, a funnel is a sequence of tool calls. Which sequences lead to successful outcomes? Where do sessions end prematurely?
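A tool-call funnel can be computed by walking each session's ordered calls against an expected sequence. Minimal Python, hypothetical tool names:

```python
# Each session is the ordered list of tools the LLM called.
sessions = [
    ["search_docs", "read_doc", "create_ticket"],
    ["search_docs", "read_doc"],
    ["search_docs"],
]
funnel = ["search_docs", "read_doc", "create_ticket"]

def funnel_counts(sessions, funnel):
    """For each funnel step, count sessions that reached it in order."""
    counts = [0] * len(funnel)
    for calls in sessions:
        step = 0
        for call in calls:
            if step < len(funnel) and call == funnel[step]:
                counts[step] += 1
                step += 1
    return dict(zip(funnel, counts))

print(funnel_counts(sessions, funnel))
# → {'search_docs': 3, 'read_doc': 2, 'create_ticket': 1}
```

Reading the drop-off is the same skill as reading a web funnel: two of three sessions that read a doc never created a ticket.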

Retention curves. Are unique users coming back? How does day-1 retention compare to day-7? This is fundamental for understanding whether your MCP server is actually useful, but almost nobody in the ecosystem is tracking it.
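A minimal retention calculation, assuming you can map calls to unique users and their active days (user IDs and dates here are invented):

```python
from datetime import date, timedelta

# user_id -> set of days the user made at least one tool call.
activity = {
    "u1": {date(2024, 5, 1), date(2024, 5, 2), date(2024, 5, 8)},
    "u2": {date(2024, 5, 1)},
}

def retained(activity, offset_days):
    """Fraction of users active again exactly offset_days after first seen."""
    hits = 0
    for days in activity.values():
        first = min(days)
        if first + timedelta(days=offset_days) in days:
            hits += 1
    return hits / len(activity)

day1 = retained(activity, 1)  # came back the next day
day7 = retained(activity, 7)  # came back a week later
```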

Segmentation by client and model. Claude calls your tools differently than ChatGPT. Usage patterns vary by client. You need that granularity to understand your actual user base.
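A sketch of that segmentation, assuming each call event is tagged with the client that made it (for example, from the `clientInfo` the client sends during the MCP initialize handshake; the names and counts below are invented):

```python
from collections import Counter, defaultdict

# (client, tool) call events; values are illustrative.
calls = [
    ("claude", "search_docs"),
    ("claude", "search_docs"),
    ("claude", "create_ticket"),
    ("chatgpt", "create_ticket"),
]

def usage_by_client(calls):
    """Per-client tool counts, so you can see how clients differ."""
    seg = defaultdict(Counter)
    for client, tool in calls:
        seg[client][tool] += 1
    return seg

seg = usage_by_client(calls)
# seg["claude"].most_common(1) gives that client's favorite tool
```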

Revenue attribution. If you're monetizing, you need to connect tool usage patterns to revenue. Which tools are your money-makers? Which user segments generate the most value?
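One simple form of revenue attribution: compare the tool-call mix of paying users against everyone else. The user IDs, tools, and the paying set below are all invented:

```python
from collections import Counter

# (user_id, tool) call events plus the set of paying users; illustrative.
calls = [
    ("u1", "export_csv"), ("u1", "export_csv"), ("u1", "search_docs"),
    ("u2", "search_docs"),
    ("u3", "export_csv"),
]
paying = {"u1", "u3"}

def tool_mix(calls, users):
    """Tool-call counts restricted to a user segment."""
    return Counter(tool for user, tool in calls if user in users)

paying_mix = tool_mix(calls, paying)
free_mix = tool_mix(calls, {u for u, _ in calls} - paying)
# If export_csv dominates the paying mix, that's a candidate money-maker.
```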

None of this is exotic. These are the same questions every product team asks about their web app. The difference is that for web apps, you install PostHog and get answers in 10 minutes. For MCP servers, this product analytics layer simply didn't exist.

So we built it

This is exactly why Johann and I built Yavio, an open-source product analytics layer for MCP servers and MCP apps. Not another monitoring tool. Not another error tracker. Product analytics: funnels, retention, tool usage, revenue attribution.

It's MIT licensed, you can self-host it with Docker, and it integrates with one function call wrapping your server. A free cloud version is coming soon at yavio.ai.
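To make "wrapping your server" concrete: the idea is to intercept every tool call and emit an analytics event. This Python sketch is not Yavio's actual API, just the general shape of that kind of wrapper:

```python
import time

def with_analytics(handler, emit):
    """Hypothetical wrapper: call the real tool handler, and emit one
    analytics event per call with tool name, success flag, and latency."""
    def wrapped(tool_name, **params):
        start = time.monotonic()
        ok = True
        try:
            return handler(tool_name, **params)
        except Exception:
            ok = False
            raise
        finally:
            emit({
                "tool": tool_name,
                "ok": ok,
                "latency_ms": (time.monotonic() - start) * 1000,
            })
    return wrapped

# Toy handler and in-memory sink for demonstration.
events = []
def echo_handler(tool_name, **params):
    return params

handler = with_analytics(echo_handler, events.append)
handler("search_docs", query="mcp")
# events now holds one record for the search_docs call
```

Everything downstream (funnels, retention, segmentation) can be built from a stream of events like these.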

If you're building MCP servers, I'd genuinely love to hear how you're handling this today. Do you know which tools people actually use? Do you know your retention? Or do you just have error logs and hope for the best?

Drop a comment or find us on GitHub. We'd love to hear what product questions you wish you could answer about your MCP server!
