
koki uchiyama

Posted on • Originally published at decixa.ai

x402 has 5,500 working APIs. Only 32 are for storing data or sending messages.

Why Decixa is publishing this

x402 is the HTTP-402-based payment protocol that lets AI agents pay APIs in USDC, on the spot, without contracts or sign-ups. Coinbase open-sourced it in 2025. Since then, an ecosystem of directories has grown around it — Bazaar (the official Coinbase Developer Platform discovery layer) and a handful of community catalogs.
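In practice the handshake is two HTTP requests: the first call returns 402 with the payment requirements, and the retry carries the payment proof. A minimal client-side sketch (the X-PAYMENT retry header follows the open-source spec; sign_payment is a stub standing in for a real USDC signing step):

```python
import base64
import json

import requests

def sign_payment(requirements: dict) -> str:
    """Stub: a real x402 client signs a USDC transfer matching one of the
    advertised payment schemes and encodes it for the retry header."""
    return base64.b64encode(json.dumps(requirements).encode()).decode()

url = "https://api.example.com/report"  # hypothetical x402 endpoint
resp = requests.get(url)
if resp.status_code == 402:  # payment required: body advertises price, asset, payee
    requirements = resp.json()
    resp = requests.get(url, headers={"X-PAYMENT": sign_payment(requirements)})
print(resp.status_code)
```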

That ecosystem now lists more than 30,000 endpoints. But "listed" is not the same as "works." For an agent, which depends on each step succeeding to keep its execution chain intact, a single dead listing is not an inconvenience; it's a structural failure that cascades through whatever the agent was trying to do. So the gap between "listed" and "works" is not cosmetic. It's the difference between an agent that completes a task and one that stops mid-execution.

We built Decixa precisely because that gap is wide enough to need its own search layer. To do that job, Decixa already tracks every x402 listing we can find, applies an automated quality filter, probes the survivors, and ranks the ones that respond. That gives us a vantage point on the whole ecosystem from end to end.

This is the first in a planned monthly series of Decixa reports sharing that view publicly, with the methodology open and the failure modes named. The headline finding this month: of the 5,523 verified-live endpoints, only 32 will let an agent write — store data or send a message. The rest of this report walks through how we got there, and what the rest of the working pool looks like.

How we measure it: a three-step pipeline

We treat x402 listings the way a search engine treats web pages — you don't index everything you crawl, and you don't probe everything you index.

```
  Track everything we can find          ─ 30,600 listings
              │
              ▼
  Apply an automated quality filter     ─ 14,456 set aside
              │
              ▼
  Probe the rest                        ─  9,246 probed
              │
              ▼
  Surface only what works               ─  5,523 verified live
```

Each step is a strict filter, applied before the next. The order is intentional: it's cheaper to rule out a listing on its metadata than on a network round-trip. We probe what survives the filter.
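In code, the ordering is just two list comprehensions with the cheap predicate first. A simplified sketch (the real filter rules are the table two sections down; these stand-ins only illustrate the shape):

```python
import requests

def passes_quality_filter(listing: dict) -> bool:
    # Metadata-only checks: no network round-trip needed.
    name = listing.get("name", "").strip()
    desc = listing.get("description", "").strip()
    return len(name) > 2 and desc not in ("", name)

def probe_is_402(listing: dict) -> bool:
    # One HTTP request per listing: the expensive step, so it runs last.
    try:
        return requests.get(listing["url"], timeout=15).status_code == 402
    except requests.RequestException:
        return False  # DNS failures, timeouts, refused connections all fail

tracked = [  # stand-in for the ~30,600 tracked listings
    {"name": "API", "description": "API", "url": "https://example.com"},
    {"name": "Price feed", "description": "Spot prices in USDC", "url": "https://api.example.com/prices"},
]
probe_worthy = [l for l in tracked if passes_quality_filter(l)]
verified = [l for l in probe_worthy if probe_is_402(l)]
```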

The numbers, in three layers

One numerator. Three denominators. All three are correct — they just answer different questions.

| Pass rate | What it answers | Numerator | Denominator |
| --- | --- | --- | --- |
| 18% | Raw view — "Of everything that calls itself x402, what fraction works today?" | 5,523 verified live | 30,600 tracked |
| 28% | Curated view — "Of listings that survived directory review and have a live host, what fraction works?" | 5,523 verified live | 19,880 approved-and-alive |
| 60% | Probed view — "Of listings the probe actually tried, what fraction works?" (API-level; the route-level pass rate is 55.5%, see methodology footnote.) | 5,523 verified live | 9,246 reached the probe |

The raw number (18%) is the headline if you're an analyst comparing ecosystems. The probed number (60%) is the headline if you're a developer asking "should I bother calling these endpoints?" Both are honest.
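The arithmetic is easy to check against the funnel counts:

```python
verified = 5_523
views = {
    "raw (30,600 tracked)":         verified / 30_600,  # ~18%
    "curated (19,880 approved)":    verified / 19_880,  # ~28%
    "probed (9,246 reached probe)": verified / 9_246,   # ~60%
}
for view, rate in views.items():
    print(f"{view}: {rate:.0%}")
```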

The middle layer is where the work is. The next two sections walk through it.

The quality filter: what we set aside, and why

Of 19,880 approved-and-alive listings, 14,456 carry an explicit excluded_from_index_reason flag. They were tracked, kept in the directory, but routed away from the probe pipeline because we could tell from the metadata alone that probing them wouldn't be useful.

This is the part of the methodology readers most often misread. We don't probe these for a mechanical reason, not because we're behind. Here is the full breakdown.

| Excluded reason | Count | % of 14,456 | What triggers it |
| --- | --- | --- | --- |
| low_information | 11,804 | 81.7% | Name is a digit string, ≤ 2 chars, or a placeholder ("Sample"/"Test"); description equals the name and is ≤ 30 chars; description equals the endpoint's hostname; or the automated category classifier couldn't pick a function for the listing from our nine capability buckets with reasonable confidence (< 0.60) |
| x402_non_compliant | 2,465 | 17.1% | Probed at least once and confirmed not to return HTTP 402. Out of x402 scope by definition |
| non_production_host | 76 | 0.5% | Hostname matches staging / dev / test / localhost / 127.x / .local |
| url_encoded_name | 54 | 0.4% | URL-encoded template parameters in the endpoint (%7Bhash%7D, %7Baddress%7D); name is an Ethereum address / UUID / IPFS CID |
| invalid_name_pattern | 20 | 0.1% | Name matches a known broken pattern (e.g. encoded-only segments) |
| address_in_url | 14 | 0.1% | Endpoint path contains a parameterized address slot, not a reusable resource |
| invalid_endpoint | 12 | 0.1% | Endpoint is an IPv4 literal (0.0.0.0, etc.) or a temporary tunneling domain (*.ngrok.io, *.tunnelmole.net) |
| service_flow_endpoint | 9 | 0.1% | Per-session flow URL like /pay/{UUID}, not a reusable endpoint |
| template_description | 2 | 0.0% | Description is a known template string with no provider-specific content |

The big bucket is low_information: 11,804 listings that have nothing to identify them by — no usable name, no description that distinguishes them from another listing, and no clear function the automated classifier could pick from the nine capability buckets (Extract, Analyze, Search, and so on). An agent looking for "fetch crypto prices" cannot pick between two endpoints both called "API". We don't probe them because there's nothing to surface even if they returned 402.
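As a sketch, the low_information triggers look like this (simplified; the classifier confidence is passed in rather than computed here):

```python
from urllib.parse import urlparse

PLACEHOLDER_NAMES = {"sample", "test"}

def is_low_information(name: str, desc: str, url: str, classifier_conf: float) -> bool:
    name, desc = name.strip(), desc.strip()
    host = urlparse(url).hostname or ""
    return (
        name.isdigit()                          # name is a digit string
        or len(name) <= 2                       # or effectively empty
        or name.lower() in PLACEHOLDER_NAMES    # or a known placeholder
        or (desc == name and len(desc) <= 30)   # description adds nothing
        or desc == host                         # description is just the hostname
        or classifier_conf < 0.60               # no capability bucket fits
    )

# Two endpoints both called "API" with no real description: nothing to surface.
print(is_low_information("API", "API", "https://api.example.com", 0.9))  # True
```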

The second bucket is x402_non_compliant: 2,465 listings we did probe at least once, and which returned something other than HTTP 402. They get tagged out of scope rather than re-probed on every cycle. They stay visible in the Decixa index for reference, but the HTTP-402 probe skips them on subsequent runs.

New this month: a provider-level blocklist. Two providers were responsible for 3,384 listings of effectively duplicated functionality this month — the same handful of endpoint types repeated across thousands of auto-generated names, in one case parameterized over Solana token addresses, in the other behind a single proxy domain with hex-suffix names. After the second batch in two weeks, both were placed on a manual provider blocklist. Listings from blocklisted providers are now rejected at intake. We don't publish provider names — the blocklist is silent — but we do count them: 2 entries as of April 2026, accounting for ~3,400 rejections this month. This is a deliberate human decision, not an automated heuristic. New providers showing the same pattern will be reviewed individually.

We also moved 8 listings from one Ethereum-RPC wrapper provider into url_encoded_name after noticing their endpoint URLs still contained %7Bhash%7D-style template placeholders that an agent couldn't resolve. The quality filter caught these eventually, but they slipped through scraping and into review. The filter is iterative; we add codes as we find new patterns.

Probe outcomes: of the 9,246 we probed, what happened?

After the filter, the 9,246 listings that reach the probe pipeline get classified into one of nine outcomes:

| Probe outcome | Count | % of probed |
| --- | --- | --- |
| verified_402 (passed) | 5,141 | 55.5% |
| non_402_response | 3,148 | 34.0% |
| dns_error (phantom domain) | 684 | 7.4% |
| timeout | 158 | 1.7% |
| auth_required | 68 | 0.7% |
| ssl_error | 35 | 0.4% |
| waf_blocked | 8 | 0.1% |
| connection_refused | 8 | 0.1% |
| unknown | 7 | 0.1% |

Two numbers worth pausing on:

Phantom domains: 7.4%. 684 endpoints point to hostnames that no longer resolve via DNS. Listed, indexed, propagated — and then the domain lapsed. Until something actually tries to call them, the directory keeps showing them.

Non-402 responses: 34%. Endpoints respond, but not with the payment-required handshake. Some have changed schemas. Some were listed in error and never implemented x402. Some have an auth gate that intercepts the request before x402 can take over. From an agent's perspective, all of these look identical: the endpoint is there, but x402 isn't happening.

These two together are 41.4% of probed listings. Listings tell you what was claimed. Probes tell you what works.
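The outcome codes map almost one-to-one onto the ways an HTTP request can fail. A simplified prober, assuming Python's requests library (it collapses waf_blocked into the auth bucket; the real prober distinguishes them):

```python
import requests

def probe(url: str) -> str:
    try:
        resp = requests.get(url, timeout=15)
    except requests.exceptions.SSLError:
        return "ssl_error"
    except requests.exceptions.Timeout:
        return "timeout"
    except requests.exceptions.ConnectionError as e:
        # Crude DNS detection: resolution failures surface as ConnectionErrors.
        if "getaddrinfo" in str(e) or "Name or service not known" in str(e):
            return "dns_error"  # phantom domain: the hostname no longer resolves
        return "connection_refused"
    except requests.RequestException:
        return "unknown"
    if resp.status_code == 402:
        return "verified_402"    # the handshake an agent needs
    if resp.status_code in (401, 403):
        return "auth_required"   # an auth gate fires before x402 can
    return "non_402_response"    # reachable, but x402 isn't happening
```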

Capability distribution: what kinds of APIs work?

Of the 5,523 verified-live APIs, here is the distribution by capability — the verb axis of our taxonomy.

| Capability | Count | % | Group |
| --- | --- | --- | --- |
| Extract | 2,407 | 43.6% | Read |
| Analyze | 1,222 | 22.1% | Read |
| Search | 577 | 10.4% | Read |
| Transact | 511 | 9.3% | Write |
| Generate | 334 | 6.0% | Compute |
| Modify | 231 | 4.2% | Write |
| Transform | 209 | 3.8% | Compute |
| Store | 17 | 0.3% | Write |
| Communicate | 15 | 0.3% | Write |

Grouped: Read 76% / Compute 10% / Write 14%.

The first reading is unsurprising: most x402 APIs today read or analyze data and charge per call. That's the easiest product to ship. It also matches Web 1.0 in shape — the early commercial web was overwhelmingly "fetch this, return that," and only later did write-side APIs (forms, payments, messaging) catch up.

The second reading is the gap.

The capability gap: 32 endpoints

Inside the Write category, Transact accounts for almost all of it: 511 endpoints, mostly bridges, on-chain payments, settlement. The rest of the agent-economy write surface — actually changing state in third-party systems — is thin:

  • 17 endpoints in Store. Database, key-value, file storage. Across the entire verified ecosystem.
  • 15 endpoints in Communicate. SMS, email, push, chat. Across the entire verified ecosystem.

That's 32 endpoints, total, between two of the most basic things an agent might want to pay for: "remember this" and "tell someone." If you're looking for an x402 product to build, that's where the index is hungry.

This is not a critique of the ecosystem; it's a description. The directories are full of read APIs because read is what shipped first. Whoever ships the first credible x402 SMS provider or x402 KV store will have a category mostly to themselves.

Domain distribution: a parallel view

The capability axis is one way to read the ecosystem. The other is by domain — what kind of subject matter the API is about. We embedded every verified-live description, ran a k-means clustering pass over the embeddings, and asked Claude to name the clusters. At k=9, the picture is:

| Cluster | Size | % |
| --- | --- | --- |
| Discovery & Search | 1,049 | 19.0% |
| On-chain Data Extraction | 954 | 17.3% |
| Crypto Market Data | 850 | 15.4% |
| Utility Data APIs | 845 | 15.3% |
| Crypto Asset Intelligence | 580 | 10.5% |
| Retail Gift Cards | 424 | 7.7% |
| Security Risk Scoring | 410 | 7.4% |
| Data Search APIs | 249 | 4.5% |
| Social Media Micropayments | 130 | 2.4% |

Three of the nine clusters — on-chain extraction, market data, asset intelligence — are crypto-specific, and together they're 43% of the verified ecosystem. Add the crypto-discovery slice inside the Discovery & Search cluster and the share is closer to 55%.

The verb axis says "this ecosystem reads more than it writes." The domain axis says "this ecosystem reads about crypto more than anything else." Both are true. They are different lenses on the same pool, and an agent searching for capability rather than category benefits from being able to ignore the domain bias and ask the verb question instead.
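The clustering pass itself is standard. A sketch assuming sentence-transformers for the embeddings and scikit-learn for k-means (the report doesn't pin the embedding model, so the choice here is illustrative):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def cluster_descriptions(descriptions: list[str], k: int = 9):
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
    embeddings = model.encode(descriptions)
    return KMeans(n_clusters=k, random_state=0, n_init="auto").fit_predict(embeddings)

# Pass the 5,523 verified-live descriptions, then hand a sample from each
# cluster to an LLM to name it, as the report does with Claude.
```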

Health: of the 5,523 verified-live APIs, how many actually run?

Verified isn't the same as healthy. We probe each verified endpoint on a schedule and track uptime over the last 7 days plus p95 latency.

Uptime (7-day):

| Bucket | Count | % |
| --- | --- | --- |
| 99–100% | 3,975 | 72.0% |
| 95–99% | 921 | 16.7% |
| 80–95% | 2 | 0.0% |
| 50–80% | 141 | 2.6% |
| <50% | 147 | 2.7% |
| no measurement yet | 337 | 6.1% |

88.7% of the verified pool maintains ≥95% uptime. That's the production-grade tier — call it on demand, expect an answer. 2.7% drop below 50%, mostly endpoints in the process of going dark.

P95 latency:

| Bucket | Count | % |
| --- | --- | --- |
| <200ms (fast) | 1,336 | 24.2% |
| 200–500ms (medium) | 2,898 | 52.5% |
| 500–1000ms (slow) | 663 | 12.0% |
| ≥1000ms (very slow) | 289 | 5.2% |
| unknown | 337 | 6.1% |

76.7% respond in under 500ms — fine for a single agent step, painful inside a loop. The 5.2% above one second are usually AI-inference endpoints (text or image generation) or first-call cold starts on serverless.
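Per endpoint, the roll-up reduces to two numbers. A sketch of how we'd compute them from raw probe history (bucket edges mirror the tables above; the aggregation details are illustrative):

```python
import statistics

def uptime_pct(probe_successes: list[bool]) -> float:
    # Share of scheduled probes that succeeded over the last 7 days.
    return 100 * sum(probe_successes) / len(probe_successes)

def p95_ms(latencies_ms: list[float]) -> float:
    # 95th percentile: the last of the 19 cut points when n=20.
    return statistics.quantiles(latencies_ms, n=20)[-1]

def latency_bucket(p95: float) -> str:
    if p95 < 200:
        return "fast"
    if p95 < 500:
        return "medium"
    if p95 < 1000:
        return "slow"
    return "very slow"

print(latency_bucket(p95_ms([120.0, 180.0, 240.0, 310.0, 95.0, 410.0, 150.0])))
```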

What this means

For agent developers. There are 5,523 endpoints you can actually call today. But most of them only read. If your agent needs to write, your options are extremely limited. Most agents today are read-heavy by necessity, not by design. Decixa indexes all of them at api.decixa.ai/api/agent/discover — pass the task as natural language and the search returns ranked endpoints with cost, latency, and verification metadata attached. No need to pre-filter by capability or learn the taxonomy first.
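A sketch of that call (the query-parameter name and response shape are assumptions; the report only specifies the endpoint and that it takes natural language):

```python
import requests

resp = requests.get(
    "https://api.decixa.ai/api/agent/discover",
    params={"task": "send an SMS notification"},  # parameter name assumed
)
for hit in resp.json().get("results", []):  # response shape assumed
    print(hit)  # ranked endpoints with cost, latency, verification metadata
```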

For providers. If your endpoint is in the 41.4% of probed listings that didn't return 402, Decixa records the specific reason it didn't on the listing's detail page (decixa.ai/api/{id}), so you can see what to fix without re-running the probe yourself. The reason is one of nine categories: non_402_response (the endpoint responded but with something other than 402 — usually schema drift), dns_error (the domain no longer resolves), auth_required (an auth gate intercepts the request before x402 can take over), and so on. Submit at decixa.ai/submit and the probe re-runs within minutes instead of waiting for the next cycle.

For ecosystem builders. 32 endpoints across Store and Communicate, on a base of 5,523. Read one way: if you're shipping an agent that needs to write — to remember something, to send a message — your options today are extremely limited. Read the other way: if you're deciding what to build in x402, this isn't just a gap, it's the map. Whatever else shifts in the protocol over the next twelve months, those two gaps are going to close, and someone will be the one to close them. The shortlist writes itself.

Methodology, footnotes, caveats

Snapshot taken April 25–26, 2026. We track 30,600 listings sourced from the major x402 directories — Bazaar (Coinbase CDP) is one of them — plus direct submissions through decixa.ai/submit. Probe results are refreshed on a rolling basis at roughly 1,300 listings per day. The verified-live count is the deduplicated apis-table view, restricted to review_status='approved', payment_req_parsed=true, and is_dead=false.

Pending review (189 listings) is excluded from the headline pass-rates because nothing has been done with them yet — they sit between scraping and quality-filtering, and folding them into the denominator would make the pass rate look slightly worse than the pipeline state warrants. The handful of listings in on_hold (192) are similarly held out of the headline numbers; we use that status for listings that need a one-off decision before they enter the regular pipeline — typically services that resell a third-party API in ways that may not match the original terms (a flight-search reseller, a face-recognition wrapper). The decision is parked, not made, until we've thought through the policy.

Two units coexist in this report. The capability and verified-live counts are at the API level; the probe-outcome failure modes are at the route level (an API can expose multiple endpoints). The two views differ by ~5% due to multi-route servers and ongoing sync between the apis and routes tables. Numbers in the narrative are rounded to absorb that.

The data is reproducible. Anyone with the public Bazaar Discovery API and an HTTP probe loop can produce a comparable snapshot. We just put it together and run it monthly.

What's next

This is the first of a planned monthly series. May's report will track the same numbers month-over-month: pass-rate drift, probe-pipeline coverage, and capability/domain mix. We'll also follow up on the two open threads from this month — what shows up in the gap categories (Store, Communicate) and how the provider blocklist evolves.

If there's a question we should answer with this data, reply on Twitter or open an issue at https://github.com/koki-socialgist/decixa-mcp. We have the index. We can probably check.
