<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: koki uchiyama</title>
    <description>The latest articles on DEV Community by koki uchiyama (@ucchy).</description>
    <link>https://dev.to/ucchy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3876368%2Fe2b0cb30-62e1-4a19-b7c7-3b4094dc8a60.png</url>
      <title>DEV Community: koki uchiyama</title>
      <link>https://dev.to/ucchy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ucchy"/>
    <language>en</language>
    <item>
      <title>We're building Google for AI Agents</title>
      <dc:creator>koki uchiyama</dc:creator>
      <pubDate>Tue, 21 Apr 2026 17:01:07 +0000</pubDate>
      <link>https://dev.to/ucchy/were-building-google-for-ai-agents-2728</link>
      <guid>https://dev.to/ucchy/were-building-google-for-ai-agents-2728</guid>
      <description>&lt;p&gt;Humans needed Google. AI agents need the same.&lt;/p&gt;




&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;In the early days of the web, information existed — but finding it was chaos. Yahoo built a directory. People browsed categories. It worked, until it didn't.&lt;/p&gt;

&lt;p&gt;Then content exploded. Yahoo couldn't keep up. A new problem emerged: not just &lt;em&gt;finding&lt;/em&gt; pages, but knowing which ones were worth your time.&lt;/p&gt;

&lt;p&gt;Google solved it with PageRank — using the structure of links as a signal of trust. Suddenly, quality rose to the top.&lt;/p&gt;

&lt;p&gt;Now fast-forward to AI agents and APIs. The same story is playing out at a different layer.&lt;/p&gt;

&lt;p&gt;Thousands of x402-enabled APIs exist — APIs that AI agents can call autonomously, pay for instantly, and use without human intervention. But agents can't find them. They can't evaluate them. They can't choose between them.&lt;/p&gt;

&lt;p&gt;So developers do what they've always done: hardcode the endpoint. Pick one API, hope it works, move on.&lt;/p&gt;

&lt;p&gt;That's the Yahoo era of AI agent APIs. We're still there.&lt;/p&gt;




&lt;h2&gt;What Google figured out — and what it means for APIs&lt;/h2&gt;

&lt;p&gt;Google didn't just index more pages. It invented a way to judge quality: PageRank, which read the web's link structure as a signal of trust.&lt;/p&gt;

&lt;p&gt;The insight: &lt;strong&gt;quantity alone isn't enough. You need a quality signal.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For AI agents discovering APIs, the equivalent questions are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this API actually live?&lt;/li&gt;
&lt;li&gt;Does it really support x402 payments?&lt;/li&gt;
&lt;li&gt;Does it stay up consistently, day after day?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No one was answering these questions systematically. So we built Decixa.&lt;/p&gt;




&lt;h2&gt;What Decixa does&lt;/h2&gt;

&lt;p&gt;We collected 20,000+ endpoints from across the x402 ecosystem and probed each one. Only &lt;strong&gt;3,800+ actually return a valid 402 payment response&lt;/strong&gt;. Those are what Decixa indexes as &lt;em&gt;verified live&lt;/em&gt; — and the only ones we surface to agents.&lt;/p&gt;

&lt;p&gt;Two things at once:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The directory layer&lt;/strong&gt; (Yahoo): collect every x402 endpoint that exists&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The quality layer&lt;/strong&gt; (Google): probe each one, track uptime, and rank by reliability — only verified live APIs are exposed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The gap between 20k+ collected and 3,800+ verified is the point: most endpoints in the wild don't actually implement 402 correctly. Agents shouldn't see them.&lt;/p&gt;
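The probe itself can be tiny. Here's a minimal sketch, assuming only that "valid 402 payment response" means the endpoint answers HTTP status 402 (Payment Required) when called without payment; Decixa's real checks are presumably stricter than this:

```shell
# Sketch of a "verified live" probe for an x402 endpoint.
# Assumption (not Decixa's actual implementation): an endpoint counts as
# live if it returns HTTP 402 when called with no payment attached.
probe_x402() {
  # $1: endpoint URL; prints only the HTTP status code
  curl -s -o /dev/null -w '%{http_code}' "$1"
}

is_verified_live() {
  # $1: HTTP status code; succeeds only for 402
  [ "$1" = "402" ]
}

# Usage (hypothetical endpoint):
# status=$(probe_x402 "https://example.com/paid/search")
# if is_verified_live "$status"; then echo "verified live"; fi
```

Run across 20k+ endpoints, a check this shape is enough to separate the 3,800+ that speak the protocol from everything else.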

&lt;p&gt;When an AI agent asks "which API should I use to extract social media data?", Decixa returns ranked results — not just a list.&lt;/p&gt;




&lt;h2&gt;Try it now&lt;/h2&gt;

&lt;p&gt;One command in Claude Code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx decixa-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
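If you'd rather register it as a persistent MCP server instead of running it ad hoc, Claude Code's standard MCP registration command should work, assuming the `decixa-mcp` package exposes a stdio MCP server (the server name `decixa` here is just a label you choose):

```shell
# Register the Decixa MCP server with Claude Code (stdio transport)
claude mcp add decixa -- npx decixa-mcp
```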



&lt;p&gt;Or call the resolve endpoint directly — give it an intent, get back a ranked list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://api.decixa.ai/api/agent/resolve &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"intent": "extract social media posts by keyword"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each result comes back with capability, cost, latency, and a trust score: everything an agent needs to make a decision without human input.&lt;/p&gt;
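Consuming a ranked list from a script is then a one-liner. The API names and trust scores below are made up for illustration (not Decixa's actual schema); the sketch only shows the agent-side "take the most trusted result" step:

```shell
# Pick the most trusted result from a ranked list.
# Input (stdin): tab-separated "trust_score, api_name" lines.
# NOTE: scores and names are hypothetical, for illustration only.
pick_best() {
  sort -rn | head -n1 | cut -f2
}

printf '0.92\tsocial-scrape-api\n0.74\tpost-miner\n0.88\tkeyword-feed\n' | pick_best
# prints: social-scrape-api
```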




&lt;h2&gt;We're early — and that's the point&lt;/h2&gt;

&lt;p&gt;The x402 ecosystem is young. The number of endpoints will grow fast. The quality problem will get harder before it gets easier.&lt;/p&gt;

&lt;p&gt;We're building the infrastructure now, while the ecosystem is still small enough to probe completely.&lt;/p&gt;

&lt;p&gt;If you're building AI agents, we'd love your feedback.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Try it: &lt;code&gt;npx decixa-mcp&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Explore: &lt;a href="https://decixa.ai" rel="noopener noreferrer"&gt;decixa.ai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow along: &lt;a href="https://x.com/decixa_ai" rel="noopener noreferrer"&gt;@decixa_ai&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>agents</category>
      <category>x402</category>
      <category>mcp</category>
      <category>claudecode</category>
    </item>
  </channel>
</rss>
