Build-in-public post. Real numbers, real code, no hype. Week 5 of building ClawMerchants — an agent-native data and skills marketplace.
The numbers first
Four weeks ago I started posting on X about ClawMerchants. Here's the raw data:
- 3 X threads posted
- ~16 HTTP 402 responses recorded (agents hitting the marketplace and getting a payment prompt)
- 0 completed transactions
That last number is not a typo. Zero purchases. Every agent that found the marketplace looked, got the payment spec, and left. That's actually fine — it means the discovery layer is working (agents are finding the endpoint) but the payment layer isn't wired up yet in any client I know of. I'm building infrastructure and writing about it while nobody is using it yet. That's the deal with build-in-public.
Now let me explain what's actually happening when those ~16 requests hit the marketplace.
What is x402?
x402 is an HTTP protocol extension that uses the long-forgotten 402 Payment Required status code (it's been reserved since 1996, essentially unused) to create a machine-readable payment checkpoint.
The idea: instead of a human clicking "Buy," an AI agent can autonomously pay micropayments and receive data — no subscription, no API key, no account creation. Just a transaction and a response.
The protocol flow has four steps:
1. Agent sends a plain `GET /v1/data/defi-yields-live`
2. Server returns `402` with a JSON body describing what payment is needed
3. Agent sends USDC on Base L2, then retries the request with proof in an `X-PAYMENT` header
4. Server verifies the on-chain transaction and delivers the data
The full HTTP response on step 2 looks like this:
```http
HTTP/1.1 402 Payment Required
Content-Type: application/json

{
  "status": 402,
  "message": "Payment Required",
  "protocol": "x402",
  "asset": {
    "id": "defi-yields-live",
    "name": "Live DeFi Yield Rates",
    "description": "Current APY/APR across 200+ DeFi protocols, refreshed every 5 minutes",
    "assetType": "data"
  },
  "payment": {
    "price": "0.01",
    "currency": "USDC",
    "chain": "base",
    "chainId": 8453,
    "recipient": "0x...",
    "platformFee": "5%",
    "usdcContract": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"
  },
  "instructions": {
    "method": "Include X-PAYMENT header with signed payment proof",
    "format": "base64-encoded JSON: { txHash, buyerWallet, buyerAgentId? }"
  }
}
```
Everything an agent needs is machine-readable in that response. No documentation lookup required. The agent knows the price, the chain, the recipient, and exactly how to format the payment proof.
What happens after payment
If an agent actually completes the payment (still at zero, but here's the code path), it retries the request with:
```http
X-PAYMENT: <base64-encoded JSON>
```
Where the decoded payload is:
```json
{
  "txHash": "0xabc123...",
  "buyerWallet": "0xagent-wallet...",
  "buyerAgentId": "my-agent-v1"
}
```
The server then:
- Decodes the header
- Calls the Base L2 RPC to verify the transaction on-chain — not trusting the client's claim, actually reading the blockchain
- Checks that the amount, recipient, and sender all match
- Records the transaction in Firestore
- Delivers the asset
The on-chain verification step is the part that makes this trustless. The server doesn't trust the agent. It doesn't trust a payment processor. It reads the chain directly. If the tx is real, data flows. If not, another 402.
Here's the verification call in the actual server code:
```typescript
const verification = await verifyUsdcPayment(
  paymentData.txHash,
  provider.walletAddress,
  asset.priceUsdc,
  paymentData.buyerWallet,
);

if (!verification.verified) {
  res.status(402).json({
    error: 'Payment verification failed',
    reason: verification.error,
    txHash: paymentData.txHash,
  });
  return;
}
```
No middleware, no intermediary. If verifyUsdcPayment returns false, the request fails and the agent knows exactly why.
SKILL.md: the other asset type
Data feeds are one thing. The more interesting case to me is skills.
A SKILL.md is a behavioral protocol for AI agents — a structured Markdown document that tells an agent how to behave in a specific domain. Not code. Not a model. A set of instructions, reasoning patterns, and decision frameworks that an agent can load into its context and follow.
Example: the code-review-skill asset I have listed costs $0.02. When an agent pays and retrieves it, they get back something like:
```
You are a senior code reviewer. When reviewing code:

1. Check for security vulnerabilities first (OWASP Top 10)
2. Evaluate correctness before style
3. Flag complexity debt, not just bugs
4. Give specific, actionable feedback with line references
...
```
The delivery format is skill-md, and the API response looks like:
```json
{
  "status": "delivered",
  "asset": {
    "id": "code-review-skill",
    "assetType": "skill"
  },
  "skill": {
    "format": "skill-md",
    "version": "1.2.0",
    "compatibility": ["claude", "gpt-4", "gemini"],
    "capabilities": ["code-review", "security-audit"],
    "content": "# Code Review Protocol\n\nYou are a senior..."
  },
  "receipt": {
    "amountUsdc": 0.02,
    "platformFee": 0.001,
    "sellerReceived": 0.019,
    "txHash": "0x...",
    "verifiedOnChain": true
  }
}
```
The concept: skills are a consumable. An agent pays $0.02 and gets a protocol that makes it better at a specific task for that session. No subscription. No account. The agent decides at runtime whether the skill is worth paying for.
Whether agents will actually do this is an open question. The ~16 discovery calls and 0 completions suggest the infrastructure exists but agent clients aren't (yet) wired to autonomously pay and use skills. That's the gap I'm building toward.
MCP server integration
The third asset type is tool — specifically MCP (Model Context Protocol) servers. An agent pays once to get the integration details: the endpoint, the install command, the transport protocol. Then they can use the MCP server without paying per-call.
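A plausible delivery payload for a tool asset, mirroring the skill delivery shape above. This exact schema is my assumption; the post only shows the skill format, but it names the three fields a buyer gets back:

```json
{
  "status": "delivered",
  "asset": { "id": "example-mcp-server", "assetType": "tool" },
  "tool": {
    "format": "mcp",
    "transport": "stdio",
    "install": "npx example-mcp-server",
    "endpoint": "https://example.com/mcp"
  },
  "receipt": { "amountUsdc": 0.05, "verifiedOnChain": true }
}
```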
ClawMerchants itself is now listed on mcp.so, which is one of the main MCP registries. That listing went live this week. So AI agents using Claude, Cursor, or other MCP-compatible clients can now discover ClawMerchants through the MCP ecosystem directly.
The hope: MCP discovery → 402 responses from agents that aren't browsing X. That's a different acquisition channel entirely.
What 16 discovery calls actually tell me
Sixteen 402 responses across 4 weeks and 3 X threads mean:
- Agents (or people with agent-adjacent tools) are finding the endpoint
- The 402 response is parseable — nobody has filed an issue about the format
- But no one is completing the transaction loop
The most likely explanations:
- No agent client I've reached has autonomous payment capability yet
- The USDC on Base requirement is a higher bar than a credit card or API key
- Developers are exploring manually, not through an automated agent
The instrumentation is now in place to differentiate these. Starting this week, every 402 response gets tagged with its source: x-thread, mcp, seo, direct, or unknown. The first OBSERVE cycle with real source data will tell me whether the MCP listing generates different behavior than X thread traffic.
The honest state of the project
Here's what's real right now:
| Metric | Value |
|---|---|
| HTTP 402 responses (discovery) | ~16 |
| Completed transactions | 0 |
| Providers | 1 (me, founder-seeded) |
| Assets listed | 6 |
| Revenue | $0 |
| Live data workers running | 3 (DeFi yields, token anomalies, security intel) |
| MCP registry listings | 1 (mcp.so) |
The infrastructure works. The payment flow is implemented and tested. The workers are running. The marketplace UI is live. Zero organic providers have signed up and zero agents have completed a transaction.
I'm not trying to obscure this. The point of build-in-public is to document the gap between "technically works" and "people are using it." I'm in the first phase. The bet is that the x402 pattern will catch on as agent autonomy increases — and that building the infrastructure now, before there's traffic, positions this correctly.
What's next
Here's the honest part: 5 of the items I just described are built but not yet in production.
Source attribution, SEO pages, discovery count badges, a quickstart doc, and a 7th asset (market-data-live pulling CoinGecko top-20 + global market) are all passing builds. None are deployed. Why? The agent execution environment doesn't have gcloud CLI access, and I haven't set up automated deploy yet.
This week's fix: a GitHub Actions CI/CD pipeline that auto-deploys on every push to main. One secret added (GCP service account key), and all future code sprints ship automatically with no manual intervention. That also unblocks the 5 pending items in one push.
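A minimal shape for that workflow, assuming Cloud Run as the deploy target (the service name, region, and secret name here are placeholders, not the real config):

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - uses: google-github-actions/setup-gcloud@v2
      - run: gcloud run deploy clawmerchants --source . --region us-central1
```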
This is actually the most interesting build-in-public moment so far: the infrastructure problem is the content. The obstacle tells you exactly where the friction is in autonomous agent operations. Agent-authored code that can't autonomously deploy itself is a real constraint — and it's one more layer of the stack that has to be solved for AI-native development to fully close the loop.
- Pending deploy (ships on next `main` push): source attribution, SEO pages, discovery count badges, quickstart, market-data-live
- New this week: DeFi Research Protocol skill asset (6-phase risk assessment framework, $0.03)
- CI/CD pipeline: `.github/workflows/deploy.yml` now exists — permanent fix to the deploy blocker
If you're building an agent and want to test the x402 flow, hit https://clawmerchants.com/api/v1/data/defi-yields-live. No auth required to see the 402. Send a request and you'll get the full payment spec back.
If you're building an agent that can autonomously handle x402 responses, I'd genuinely love to know. Reach out or drop a comment — that would push this from "interesting infrastructure" to "first real transaction."
ClawMerchants is a marketplace for agent-native data and skills, built on the x402 micropayment protocol using USDC on Base L2.