Two weeks ago I shipped ForgePoint Signal — a paid MCP server with x402 micropayments on Base mainnet. Four tools, $0.10 USDC per call, no signup, the payment is the auth. Federal Register and IRS Internal Revenue Bulletin data, parsed by Claude, served over MCP.
Two posts ago I wrote about the build pattern. One post ago I wrote about the positioning mistake I made along the way. This one's the messier one: the stuff that's actually hard once it's running.
This isn't a victory post. The Glama listing for Signal shows roughly zero usage in the last 30 days. Volume's still low, and that's part of what I want to be honest about. Most "I shipped a paid MCP" posts skip past this part. I think that's the part that matters.
The Easy Parts (Mostly)
The stack itself was the smallest piece of the work. If you have ever wired Express + Vercel + Supabase before, the MCP server layer is roughly an afternoon. Claude does the parsing reliably. The cron is a single GitHub Actions workflow. Wiring x402 on Base mainnet with the Coinbase facilitator took a few hours — the docs are clear enough, and the example code works.
A lot of people I've talked to expect the build to be the hard part. It is not. The build is roughly 20% of the actual work.
The Hard Parts
Middleware ordering will bite you.
The x402 verifier has to run before the MCP tool dispatcher. Not after. Run it after, and you end up either letting unauthenticated calls through on retries or double-charging the same agent for one call. I got this wrong on the first deploy and saw the symptom before I understood the cause. If you're wiring this yourself, draw the request lifecycle on paper before you trust the framework defaults. Whatever middleware library you use will quietly let you get the order wrong.
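Here's the ordering issue as a minimal, framework-free sketch. The names (`paymentVerifier`, `toolDispatcher`, `runChain`) are illustrative, not the real x402 middleware API; the point is that the verifier must short-circuit before the dispatcher ever runs.

```typescript
// Illustrative types standing in for a real request/response pair.
type Req = { paymentHeader?: string; tool: string };
type Res = { status: number; body: string };
type Middleware = (req: Req, next: () => Res) => Res;

// Runs middlewares in order; each can short-circuit before calling next().
function runChain(middlewares: Middleware[], req: Req): Res {
  const dispatch = (i: number): Res =>
    i < middlewares.length
      ? middlewares[i](req, () => dispatch(i + 1))
      : { status: 404, body: "no handler" };
  return dispatch(0);
}

// Payment verification rejects unpaid requests with 402...
const paymentVerifier: Middleware = (req, next) =>
  req.paymentHeader ? next() : { status: 402, body: "payment required" };

// ...so the tool dispatcher (the thing that does work and gets paid)
// only ever sees verified requests.
const toolDispatcher: Middleware = (req) =>
  ({ status: 200, body: `ran ${req.tool}` });

// Correct order: verify first, dispatch second. Swap these and retries
// either sail through unauthenticated or get charged twice.
const app = [paymentVerifier, toolDispatcher];
```

Reversing the array is the bug: the dispatcher runs (and the call is billable) before anyone checked the payment.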
The facilitator landscape is moving under your feet.
When I shipped, the Coinbase facilitator was the obvious choice. Two weeks later, PayAI has its own facilitator (with active bugs surfacing on x402-foundation/x402 as I write this), Amazon Bedrock launched AgentCore Payments on May 7, and Google's AP2 is sitting in the wings. The facilitator you pick this week is a different decision than the one you would have made last week, and different again from the one you'll face in two months.
I picked Coinbase because it was the most production-tested at the time. If I were shipping fresh today I'd probably still pick Coinbase but I'd write my facilitator integration thin enough to swap. Don't tightly couple to a specific facilitator's response shape. They're going to converge but they haven't converged yet.
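What "thin enough to swap" looks like, roughly: one narrow interface the rest of the server depends on, one adapter per provider. The Coinbase response fields below (`isValid`, `payerAddress`) are assumptions for illustration, not the real payload shape; the adapter's job is exactly to absorb whatever the real shape is.

```typescript
// Normalized, facilitator-agnostic result. The rest of the server
// only ever sees this shape.
interface VerifyResult {
  valid: boolean;
  payer?: string;
  reason?: string;
}

interface Facilitator {
  verify(paymentHeader: string): Promise<VerifyResult>;
}

// One adapter per provider. Swapping Coinbase for PayAI means writing
// another class like this, not touching tool code.
class CoinbaseFacilitator implements Facilitator {
  async verify(paymentHeader: string): Promise<VerifyResult> {
    // A real implementation would POST to the facilitator's verify
    // endpoint; the field names mapped here are hypothetical.
    const raw = await this.post(paymentHeader);
    return { valid: raw.isValid === true, payer: raw.payerAddress };
  }

  // Stubbed transport so this sketch is self-contained.
  protected async post(_header: string): Promise<any> {
    return { isValid: true, payerAddress: "0xabc" };
  }
}
```

The tool dispatcher takes a `Facilitator`, never a `CoinbaseFacilitator`, so a provider change is a one-line wiring change.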
Discoverability is the entire game after you ship.
This is the one I underestimated by the largest factor. The build took a couple of weekends. Getting the server discovered and called — that's been the entire two weeks since.
I have listings on Smithery, Glama, mcp.so, and mcp.directory. I have a .well-known/mcp/server-card.json per the spec. The Anthropic MCP registry knows about it. And the Glama listing — one of the more active MCP directories — shows agents calling my server zero times in the last 30 days, nothing I'd describe as meaningful traffic.
This is normal. Almost every newly-listed MCP server I've looked at on Glama shows similar zero-or-near-zero numbers in the first few weeks. The marketplace listings are necessary but not sufficient. They're the equivalent of being indexed on Google — being indexed is not the same as being clicked.
What actually moves traffic, as far as I can tell so far: writing about it (this kind of post), being mentioned in adjacent communities, getting picked up by aggregator newsletters, and direct integration with agent frameworks where someone explicitly configures your endpoint. Nothing automatic.
Pricing decisions are not obvious and have no good defaults.
I priced everything at $0.10 USDC per call. That was a guess. The published x402 examples mostly use $0.01–$0.10. So I rounded.
The actual right pricing — per call vs per query depth, flat vs tiered, repeat-read pricing vs one-shot deeper analysis — I don't have data to make those decisions. Anyone telling you "here's the optimal x402 pricing curve" is making it up. The market is too young; there isn't enough traffic across enough endpoints to know.
What I suspect, from the few thoughtful operators I've talked to: agents probably balk at high repeat-read costs but pay willingly for one-shot deeper analysis. So a flat per-call price under-monetizes deep queries and over-prices shallow ones. But that's a hypothesis, not data.
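To make the hypothesis concrete, here's a pricing function that encodes it: cheap shallow repeat reads, a premium on deep one-shot analysis. Every number in it is invented. This is the hypothesis written down as code, not a recommendation backed by data.

```typescript
// Hypothetical pricing curve. None of these constants come from real
// usage data; they just encode "cheap repeat reads, pricier depth".
function priceUSDC(queryDepth: number, callsThisHour: number): number {
  const base = 0.02; // shallow reads stay cheap
  // Deeper analysis pays a premium per level of depth.
  const depthPremium = 0.08 * Math.max(0, queryDepth - 1);
  // Agents in a read loop get a discount instead of a penalty.
  const repeatDiscount = callsThisHour > 10 ? 0.5 : 1.0;
  return Number(((base + depthPremium) * repeatDiscount).toFixed(2));
}
```

Under this sketch a shallow read costs $0.02, a depth-3 analysis $0.18, and a looping agent pays half price, which is roughly the opposite shape of my current flat $0.10.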
The Surprises
Cold outreach to other builders is a relationship game, not a sales game.
I sent fifteen-plus cold messages to other technical solo builders shipping MCP servers. Three replied substantively. Every single one of them treated me as a peer to compare notes with, not as a service provider to hire. That tells me something important about who the buyer for this kind of work actually is. The people who build this stack mostly want to ship their own — they don't outsource it. The people who don't build this stack are who you actually need to find.
That changes the whole shape of "who is this for." If you're considering shipping a paid MCP and you assume your audience is other developers — it probably isn't. It's whoever has the data and doesn't want to learn five new pieces of infrastructure to expose it.
Marketplace listings are necessary but not sufficient (and the listings themselves take real work).
Every directory has its own submission process, its own JSON schema, its own review timing. Glama wants one set of metadata, Smithery wants another, mcp.so wants a different schema, mcp.directory has yet another. Each takes 30 minutes to an hour to do well. You can do this part wrong and tank your discoverability before you even start.
There's a real opportunity for someone to build the "list-everywhere" tool for MCP servers and capture the friction. I'd buy it.
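The core of such a tool is small: one canonical metadata record, one mapper per directory. The per-directory field names below are invented for illustration — each real directory's schema is different and changes — but the shape of the solution is just this.

```typescript
// One canonical record you maintain once.
interface ServerMeta {
  name: string;
  description: string;
  endpoint: string;
  priceUSDC: number;
}

// One mapper per directory. Field names here are hypothetical stand-ins
// for each directory's real (and shifting) submission schema.
const mappers: Record<string, (m: ServerMeta) => object> = {
  glama: (m) => ({ name: m.name, url: m.endpoint, summary: m.description }),
  smithery: (m) => ({ id: m.name, mcpEndpoint: m.endpoint, about: m.description }),
};

// Produce every directory's payload from the single source of truth.
function buildListings(meta: ServerMeta): Record<string, object> {
  return Object.fromEntries(
    Object.entries(mappers).map(([dir, map]) => [dir, map(meta)])
  );
}
```

The hard part isn't this code, it's keeping the mappers current as each directory revises its schema — which is exactly the friction worth paying someone else to absorb.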
The technical buyer and the operational buyer are different people.
If you're shipping this stack, you're going to talk to two kinds of people in cold conversations: technical builders who want to compare notes (peers), and operational/data-owner people who want to know if their data could earn from agents (potential buyers). The conversation feels almost identical from the outside, but the buying intent is completely different. Learn to tell them apart early or you'll spend weeks confused about why "great conversations" aren't producing customers.
What I'd Do Differently
If I were shipping this stack fresh today, three changes:
- Wire x402 first on a stub tool, before the data ingest works at all. Get the payment loop end-to-end on a hello-world endpoint. See the receipt. Then layer the actual data work on top. Most of the painful debugging I did was in the payment loop, which I tried to wire last.
- Abstract the facilitator integration thin. Treat the response shape as a moving target. Don't depend on specific Coinbase or PayAI quirks.
- Treat distribution as part of "shipped," not "what I'll do next." Write the dev.to post the same day you push the v1. Submit to all four directories the same day. Don't let "I'll do the marketing tomorrow" become "I shipped two weeks ago and nobody knows."
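"Payment loop on a stub tool first" amounts to something like this: a hello-world handler behind the 402 flow, with verification stubbed and no data work anywhere. The `X-PAYMENT` header name follows the x402 spec; the body of the 402 response here is a simplified stand-in for the real payment-requirements payload.

```typescript
type Handler = (headers: Record<string, string>) => { status: number; body: string };

// A paid hello-world: the whole payment loop, none of the data work.
// `verify` is whatever facilitator check you've wired — stubbed in tests.
function paidStubTool(verify: (payment: string) => boolean): Handler {
  return (headers) => {
    const payment = headers["x-payment"];
    if (!payment || !verify(payment)) {
      // 402 carries the payment requirements the client needs to retry.
      // Simplified payload; the real x402 requirements object is richer.
      return {
        status: 402,
        body: JSON.stringify({ accepts: [{ asset: "USDC", amount: "0.10" }] }),
      };
    }
    return { status: 200, body: "hello world" }; // the stub "tool"
  };
}
```

Once you've seen a real receipt land against this, swapping "hello world" for the actual Federal Register parse is the easy, well-understood part.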
Who This Might Be For
I'm writing this post for two kinds of reader.
If you're a builder considering shipping a paid MCP — most of the above is the stuff I wish someone had told me. The build is the easy part; the post-ship grind is the work.
If you're a data owner — you have a useful dataset, you've thought about whether AI agents would pay to access it, but you don't want to learn five pieces of infrastructure to find out — I'd genuinely like to hear what you're sitting on. ForgePoint Signal is still live at forgepointsignal.com, the source is on GitHub, and the easiest way to reach me is here or on dev.to. I've spent the last two weeks watching the friction points up close. There's a real chance the pattern fits your data and you'd never have to learn what x402 means.
That's it. Two weeks in. Glama still shows zero. The work continues.