43 unread Discord notifications. Five backup confirmations, all green. A content bot that published 3 articles to Shopify overnight. A credit audit flagging a user at 10 credits instead of 600. And an RSS feed that broke 3 hours ago but nobody noticed because the alert got buried under 12 success messages.
TL;DR: When you run multiple products on different stacks, your monitoring fragments across Discord channels, dashboards, and cron reports until you become the human aggregation layer. I built a custom MCP server, a private tool that lets Claude query all my apps and tell me what's broken. 16 commits, 4 hours of debugging, and one OAuth spec that nearly broke me. Below: what it took, what changed, and a framework for deciding when you need this vs. when you're overengineering.
Everything is fine AND something is burning. I just can't tell which without reading all 43, mentally sorting them by severity, and then cross-referencing across channels.
Five SaaS products. Convex here, Supabase there, n8n crons everywhere. Each product reports to its own Discord channel in its own format. Each channel mixes routine confirmations with actual emergencies. The noise-to-signal ratio is roughly 15 to 1.
The monitoring works. Every app reports. Backups confirm. Credit audits flag. Content pipelines log. The data exists.
But I became the aggregation layer.
The guy who opens 4 channels, 2 dashboards, and an n8n execution log to mentally reconstruct a picture of "how is everything going." App 1 is fine. App 2 has a minor issue. App 3 has a problem that needed attention 3 hours ago. I just didn't know because the alert was notification 37 out of 43.
If you're like me, you probably tried to fix this by building more tools 🤓. A custom dashboard here. A Slack bot there. A weekly summary script. Congrats, you now have 5 SaaS products AND 3 monitoring tools to monitor. You didn't solve the fragmentation. You added a layer to it.
A unified dashboard crossed my mind. Then I realized that's a sixth product to maintain. And it still requires me to look at it, interpret it, decide what matters. What I actually needed was something that could query all 5 sources and answer a question, not another screen.
So I gave Claude a custom tool to access all my apps at once. Built it as an MCP server on Next.js, deployed on Vercel, connected to Convex, Supabase, my content API, and a Discord adapter. The idea was simple. The execution almost killed the project.

Before the build: when you actually need this
I'm putting this section first because I almost built something I didn't need 6 months ago, and I don't want you to make the same mistake. A personal MCP server is the right tool at a specific point. Before that point, it's resume-driven development.
If you run one project on one stack, Discord notifications and a health check endpoint cover it. Maybe Grafana. You see everything from one place. If you're here and thinking about MCP, stop. Go build features instead.
Two or three projects on similar stacks? A shared dashboard covers it. CLI scripts handle ad-hoc checks. I still maintain that CLIs beat MCP for single-purpose agent tasks and nothing about this project changed that opinion. If your agent needs to execute one action against one system, a CLI is simpler and more predictable. Every time.
Three to five projects on mixed stacks is where it starts hurting. Convex here, Supabase there, custom REST APIs, different auth systems, different data shapes. No single tool queries all of them. Your morning routine involves 3 tabs and a Discord scroll. You start missing things. Not because you don't care, but because aggregating 4 different formats before coffee exceeds what a human brain should be doing at 7 AM.
Five plus, you realize you're the middleware. Each source works fine individually. The problem is synthesis. "Which of my apps needs attention" requires querying 4 databases, comparing results, ranking by severity. A dashboard can't do that. An LLM with access to your data can.
I was solidly in that last stage before I admitted it.
The threshold I'd suggest: if you spend more than 15 minutes a day context-switching between monitoring sources, and the data is accessible via API or database, this pays for itself in the first week.
The 16-commit disaster
I figured this would take 10 minutes 😬. An MCP server is a few route handlers, a couple of tool definitions, deploy to Vercel, done. I was already thinking about what to cook for dinner when I started.
Four hours later I was still debugging OAuth. It took 4 hours, roughly 40 back-and-forth exchanges with Claude Code, an estimated $8–12 in API tokens, and 16 commits to get a working prototype. Looking at the git log the next morning was humbling.
The MCP server itself is a single Next.js app. Six route files, one utility module. You declare tools that Claude can call, each tool queries a data source, Claude decides which tools to use based on your question.
app/
├── .well-known/
│ ├── oauth-authorization-server/route.ts
│ └── oauth-protected-resource/route.ts
├── authorize/route.ts
├── token/route.ts
└── mcp/route.ts
lib/
└── oauth.ts
Declaring the tools took maybe 30 minutes. Each one is a function with a name, a description, a parameter schema, and logic that queries Convex or Supabase or an API. That's the easy part. The git log tells you exactly where the time went.
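To make "declaring a tool" concrete, here's a simplified, self-contained sketch. The names and stubbed data are illustrative; the real server uses the MCP SDK and queries live sources instead of returning constants.

```typescript
// Simplified sketch of a tool: name, description, schema, handler.
// Illustrative only; the real server uses @modelcontextprotocol/sdk
// and queries Convex/Supabase instead of returning stubbed data.
type Tool = {
  name: string;
  description: string; // what Claude reads when deciding which tool to call
  inputSchema: Record<string, unknown>;
  handler: (args: Record<string, unknown>) => Promise<string>;
};

const tools: Tool[] = [
  {
    name: "check_backup_health",
    description:
      "Check if any scheduled backups failed or are overdue in the last " +
      "24 hours. Flag missing backups by app name.",
    inputSchema: { type: "object", properties: {}, required: [] },
    // Stub standing in for a real Supabase query.
    handler: async () =>
      JSON.stringify({ completed: 3, expected: 3, overdue: [] }),
  },
];

// When Claude sends a tools/call request, dispatch by tool name.
async function callTool(name: string, args: Record<string, unknown> = {}) {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler(args);
}
```

That's the whole mental model: a registry of named functions, and Claude picks from it based on the descriptions.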
First wall: MCP sessions. I started with stateful sessions because that's what the SDK examples show. Deployed to Vercel. Nothing worked. Serverless functions don't keep state between invocations. Two commits to rip that out and go stateless. Should have known. Did not.
Second wall: Next.js routing. Three commits to figure out that the App Router dynamic route [transport] was causing path resolution issues and I needed a fixed /mcp route instead. Then the basePath was wrong. Then a favicon was somehow interfering. Tâtonnement, French for fumbling by trial and error, is the polite word.
Then OAuth.
Seven commits. This is where I nearly abandoned the project.
Claude.ai requires OAuth 2.1 for MCP connections. The spec defines discovery endpoints, authorization flows, token exchange, PKCE. On paper it's clean. In practice, the constraints aren't documented clearly enough to implement without trial and error. And each trial requires a full Vercel deployment because you can't test the Claude.ai OAuth flow locally. Deploy, wait 30 seconds, click connect in Claude, watch it fail, read the cryptic error, fix, redeploy. Repeat.
What I learned by burning through those 7 commits: Claude ignores the endpoint URLs you declare in your OAuth metadata. You can set authorization_endpoint to /oauth/authorize all day long. Claude reads the issuer field and constructs the paths itself. Your endpoints must sit at /authorize and /token at the root, period. I found this out after deploying 3 times with different path configurations.
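Concretely, the discovery document ends up looking like this (the domain is a placeholder). You still declare the endpoint fields for spec compliance, but the lesson from those 7 commits is baked into the structure: everything hangs off issuer, and /authorize and /token must exist at that origin's root.

```typescript
// Discovery metadata served at /.well-known/oauth-authorization-server.
// Placeholder domain. Claude derives the /authorize and /token paths
// from `issuer`, so those routes must live at the root regardless of
// what the *_endpoint fields declare.
const issuer = "https://my-mcp-server.vercel.app";

const metadata = {
  issuer,
  authorization_endpoint: `${issuer}/authorize`,
  token_endpoint: `${issuer}/token`,
  response_types_supported: ["code"],
  grant_types_supported: ["authorization_code", "refresh_token"],
  code_challenge_methods_supported: ["S256"], // PKCE is mandatory in OAuth 2.1
  token_endpoint_auth_methods_supported: ["none"],
};
```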
A different kind of failure: my route handlers returned Response.json(). Standard Web API. Works locally. Works in tests. On Vercel in production, it returns an empty response with no error, no log, no indication anything went wrong. Just a blank 200 and a broken OAuth flow. The fix is NextResponse from next/server. I found this in a GitHub issue with 3 upvotes buried in a thread about something else entirely.
And the one that cost me an evening: I set my JWT secret using echo "value" | vercel env add. The echo command appends a newline. The newline becomes part of the secret. Every JWT signature silently fails. Every token exchange returns "invalid_grant." The logs say nothing useful. I ran printf instead of echo, redeployed, and everything worked. I stared at the ceiling for a while after that one 💀
That was commit 14 of 16. The last two were cleanup.
The whole thing would have been 3 or 4 clean commits if the constraints had been documented. But that's always the game when you're integrating with systems whose exact behavior you discover by deploying and failing. The MCP spec is still young. Claude.ai's implementation has known bugs, especially around token refresh. Treat any MCP integration as a production prototype for now. Stable enough for daily use. Not something I'd ship to paying customers without more hardening.
For the broader picture of how these services connect, I documented the full architecture with cron jobs, memory, and dashboard in a previous article. The MCP server plugs into that same pattern.
What it actually looks like now
After the 16-commit baptism, here's where things stand.
I open Claude and type:
Anything need my attention this morning?
Claude hits the MCP server. The server queries Convex for user and credit data, Supabase for backup and cron health, the content API for publishing status, and a Discord adapter for recent alerts filtered by severity. The first time I ran this, the response took 11 seconds and the credit data came back as a raw JSON blob that Claude cheerfully described as "some financial information." Not exactly the polished cockpit experience.
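The fan-out behind that query is simple in shape. This is an illustrative sketch with stubbed fetchers standing in for the Convex, Supabase, and API calls; the one property that matters is that a single dead source gets reported per-source instead of crashing the whole answer.

```typescript
// Illustrative fan-out: query every source in parallel, report
// failures per source. Fetchers are stubs for real Convex/Supabase/API calls.
type Fetcher = () => Promise<unknown>;

async function gatherStatus(sources: Record<string, Fetcher>) {
  const entries = await Promise.all(
    Object.entries(sources).map(async ([name, fetch]) => {
      try {
        return [name, { ok: true, data: await fetch() }] as const;
      } catch (err) {
        return [name, { ok: false, error: String(err) }] as const;
      }
    }),
  );
  return Object.fromEntries(entries);
}

// Stubbed usage: one healthy source, one failing.
gatherStatus({
  convex: async () => ({ activeUsers: 16, creditAnomalies: 1 }),
  contentApi: async () => { throw new Error("RSS feed down"); },
}).then((status) => {
  // status.convex    -> { ok: true, data: {...} }
  // status.contentApi -> { ok: false, error: "Error: RSS feed down" }
});
```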
After tuning the tool descriptions and the response format, it returns something actually useful:
2 items need attention:
1. Credit anomaly on [ClientApp]: user sa57*** shows 10 credits,
expected 600. Drift: -590. Flagged 14 hours ago, unresolved.
2. RSS feed error on ContentForge: "Stories by [author]" down
for 3h. No new content since 1:00 AM.
Everything else green:
- 3/3 backups completed
- Content pipeline: 6 scraped, 3 published, social scheduled
through Feb 27
- No other anomalies across 16 active users
Here's what I didn't expect: the tool descriptions matter more than the code. Claude decides which tools to call based on the descriptions you write. And it interprets the results based on what you told it the tool does.
Actually wait, let me rephrase that. It's not that descriptions "matter more" in some abstract sense. It's that I spent 2 hours writing TypeScript and 45 minutes rewriting descriptions, and the descriptions moved the needle ten times more.
My first attempt at the credit monitoring tool:
Name: "get_credits"
Description: "Query credit data from Convex"
Claude called it. Got back raw JSON. Described it to me as "some financial information that appears to contain user balances." Technically correct. Completely useless at 6 AM.
Second version:
Name: "check_credit_anomalies"
Description: "Find users whose current credit balance differs
from their expected balance by more than 10%. Returns user ID
(anonymized), current balance, expected balance, and drift.
Flag any drift over 50 credits as urgent."
Same underlying data. Same Convex query. But now Claude returns a prioritized list with severity flags and tells me which ones need immediate attention. The description is a prompt. Treat it like one. Write it like you're briefing a junior dev who needs to know what matters, not just what the function returns.
This applies to every tool. A backup checker described as "get backup status" gives you timestamps and file sizes. One described as "check if any scheduled backups failed or are overdue in the last 24 hours, flag missing backups by app name" gives you actionable answers.
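For completeness, here's a sketch of the query logic a description like check_credit_anomalies implies. The types and function are hypothetical; the point is that the 10% and 50-credit thresholds live in the code AND in the description, so Claude interprets the output the way the code means it.

```typescript
// Hypothetical shape of the check behind check_credit_anomalies.
// Thresholds mirror the tool description: drift over 10% of the
// expected balance is an anomaly; drift over 50 credits is urgent.
interface CreditRow { userId: string; current: number; expected: number; }
interface Anomaly extends CreditRow { drift: number; urgent: boolean; }

function checkCreditAnomalies(rows: CreditRow[]): Anomaly[] {
  return rows
    .map((r) => ({ ...r, drift: r.current - r.expected }))
    .filter((r) => Math.abs(r.drift) > 0.1 * r.expected)
    .map((r) => ({ ...r, urgent: Math.abs(r.drift) > 50 }))
    .sort((a, b) => Math.abs(b.drift) - Math.abs(a.drift)); // worst first
}
```

The sa57*** case from the morning report is exactly this shape: current 10, expected 600, drift -590, urgent.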
I realized during this phase that the TypeScript is almost irrelevant. Any dev can write a function that queries Convex. The hard part is writing descriptions that make Claude do the right thing on the first try. If you've worked with prompt contracts, same principle. You don't share code anymore. You share intent, constraints, and expected behavior. The tool description IS the contract between you and Claude.
I can follow up naturally. "Show me that user's transaction log" sends Claude back to the MCP server with a targeted Convex query. "When did the RSS feed last work" hits a different tool. The conversation flows but the data is structured underneath.
The morning check went from 15 minutes of scrolling and context-switching to about 30 seconds and maybe a follow-up question.
On two occasions in the past week, it caught issues I would have missed until a user complained. Not because the alerts weren't there. Because they were buried.
What changes when you give Claude access to your actual data is not the reach. The data was always there. You stop being the router. My phone still buzzes with 43 Discord notifications every morning. I just don't read them first anymore.
The stack, for those who want to build one
A secure MCP server with OAuth 2.1, deployed and running, costs zero dollars. Vercel's free tier handles it. No database needed. No monthly bill. What I described above is the base, but it's extensible to whatever you need. Every new data source is one more tool function. Every new question you want to ask Claude is one more description to write. The architecture doesn't change.
The full setup is 6 route files in a Next.js app. Tokens are stateless JWTs signed with jose. The authorize endpoint auto-approves because it's a personal server.
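Stateless tokens are the part worth seeing. The real server signs with jose; this hand-rolled HS256 version (Node's built-in crypto, illustrative only, not hardened for anything public-facing) shows why no session store is needed, and incidentally why the trailing-newline secret from earlier broke everything.

```typescript
import { createHmac } from "node:crypto";

// Hand-rolled HS256 JWT, for illustration only. The real server uses
// `jose`, but the mechanics are the same: the token carries its own
// proof, so a stateless serverless function can verify it with nothing
// but the shared secret.
const b64url = (s: string) => Buffer.from(s).toString("base64url");

function signToken(payload: object, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  const sig = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${sig}`;
}

// A real implementation would compare with timingSafeEqual.
function verifyToken(token: string, secret: string): boolean {
  const [header, body, sig] = token.split(".");
  const expected = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return sig === expected;
}

// The echo-vs-printf bug in one line: a secret with a trailing "\n"
// is a different key, so every verification silently fails.
const token = signToken({ sub: "me" }, "value");
```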
What you need to figure out before you start:
Can you query your data sources from a serverless function? If yes, each one becomes a tool. If your data lives behind a VPN or requires a persistent connection, Vercel serverless won't work and you'll need a different deployment model.
Are you comfortable with OAuth 2.1? If you haven't implemented an OAuth server before, budget a full day for the auth layer alone. The MCP tools themselves are trivial. The OAuth dance is where projects die.
The setup takes about an hour if you know the exact constraints upfront. Since I already paid the 4-hour tax and documented the pitfalls above, you might get lucky.
The morning after

Small island north of Panama. A guy on the beach was selling artisan coffee for more than Starbucks charges. I bought one because I was half asleep and he was persuasive. I prefer instant coffee. Don't judge me.
I sat down, opened Claude on my phone, typed "anything broken," got the answer in 4 seconds, and went back to staring at the ocean. Discord had 38 unread notifications. I didn't open it.
The MCP server didn't give me new data. Everything it knows was already in my Discord channels, my databases, my cron logs. It just replaced me as the sorting algorithm. And honestly, I was a terrible sorting algorithm 🤷‍♂️
If your side projects multiply faster than your attention does, the answer probably isn't a better dashboard.
If you build things and occasionally want to read about them breaking in production, follow along.
* Yes, the cover image is AI-generated. My artistic skills peak at box diagrams in Excalidraw.