As agentic systems mature, I keep getting pulled into the same discussion:
Should we build an “agent exchange”?
A marketplace.
A registry.
A platform where agents publish themselves and others can discover and invoke them.
It’s a reasonable instinct. Platforms are tangible, fundable, and easy to explain. But if we look at how the internet itself scaled, a different pattern emerges, one worth revisiting before we hard-code the future of agent ecosystems.
The most durable digital ecosystems have not been built on centralized exchanges. They have been built on open discovery standards, which then enabled many kinds of platforms to emerge naturally on top.
That distinction matters.
Discovery Comes Before Platforms
The early web did not begin with a global website directory. Instead, it began with a small set of boring but powerful primitives. DNS solved naming. HTTP solved retrieval. Well-known locations solved where machine-readable metadata should be published.
Search engines did not own the web. They indexed it.
Because discovery was decentralized and standardized, multiple kinds of intermediaries emerged: general-purpose search engines, academic crawlers, archives, analytics platforms, and vertical indices. None of them controlled participation. All of them benefited from the same shared signals.
Agent ecosystems are approaching the same fork in the road.
Agent Discovery eXchange as a Discovery Layer
Agent Discovery eXchange (AX) takes a deliberately narrow position. It does not try to be an exchange, a marketplace, or a registry. It defines a simple, internet-native way for agents to introduce themselves and for other agents or systems to discover and understand them before any interaction occurs.
AX focuses on discovery metadata: what an agent is capable of, which interaction protocols it supports, and how a caller should approach it. It explicitly does not define execution semantics, conversation formats, trust enforcement, or business models.
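To make that scope concrete, here is a rough, hypothetical sketch of the kind of document an agent might publish. The field names are illustrative assumptions, not the published AX schema; the repository linked below defines the actual specification.

```json
{
  "name": "invoice-summarizer",
  "description": "Summarizes PDF invoices and extracts line items.",
  "capabilities": ["document-summarization", "data-extraction"],
  "protocols": [
    { "type": "a2a",  "endpoint": "https://agents.example.com/a2a" },
    { "type": "rest", "endpoint": "https://agents.example.com/api/v1" }
  ]
}
```

Nothing in a document like this says how a task is executed, authenticated, or billed. It only helps a caller decide whether, and over which protocol, to start an interaction.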
That separation is intentional. Discovery is infrastructure. Execution is choice.
How AX Fits Conceptually

The diagram illustrates the role AX plays in the stack.
At the top, DNS provides a globally scalable naming system. AX builds on this by defining a well-known HTTPS endpoint where an agent can publish discovery metadata. This metadata is retrieved before any interaction begins and is used solely to inform decisions about how, or whether, to proceed.
Below that, agents remain peers. Once discovery and protocol selection are complete, execution proceeds using existing mechanisms such as A2A, MCP, GraphQL, or REST. AX is not in the execution path. It is a pre-flight signal, not a control plane.
This placement is critical. It allows discovery to scale independently of how agents communicate, reason, or evolve.
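As a sketch of that pre-flight pattern, the Python snippet below fetches an agent's discovery metadata and picks a mutually supported protocol before any interaction happens. The well-known path and field names are the same illustrative assumptions as above, not the published specification.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical well-known path; the real path is defined by the AX spec.
WELL_KNOWN_PATH = "/.well-known/agent-discovery.json"

def discover(agent_domain: str) -> dict:
    """Fetch an agent's discovery metadata before any interaction."""
    resp = requests.get(f"https://{agent_domain}{WELL_KNOWN_PATH}", timeout=5)
    resp.raise_for_status()
    return resp.json()

def choose_protocol(metadata: dict, preferred: list[str]) -> dict | None:
    """Pick the first advertised protocol we also support; None means 'do not proceed'."""
    advertised = metadata.get("protocols", [])  # field name is illustrative
    for name in preferred:
        for proto in advertised:
            if proto.get("type") == name:
                return proto
    return None

if __name__ == "__main__":
    metadata = discover("agents.example.com")
    proto = choose_protocol(metadata, preferred=["a2a", "mcp", "rest"])
    if proto is None:
        print("No mutually supported protocol; stopping before any interaction.")
    else:
        # Execution happens elsewhere, over the selected protocol. AX's job ends here.
        print(f"Proceeding over {proto['type']} at {proto['endpoint']}")
```

Discovery ends where the decision is made. Everything after that point belongs to the execution protocols the two agents already share.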
Sidebar: Discovery Signals the Internet Already Understands
The web has faced this problem before. Two familiar mechanisms illustrate why open discovery signals tend to scale better than centralized registries.
robots.txt
robots.txt provides a voluntary, machine-readable way for websites to signal how automated systems should interact with them. It does not enforce policy, authenticate crawlers, or define business relationships. Instead, it establishes a shared convention that allows many independent crawlers to behave responsibly at scale.
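For anyone who has not looked at one in a while, a robots.txt file is nothing more than a short plain-text document served from a site's root:

```
# Served at https://example.com/robots.txt
User-agent: *
Disallow: /private/

Sitemap: https://example.com/sitemap.xml
```

Nothing in it is enforced. Well-behaved crawlers simply read it and act accordingly, which is exactly the property AX borrows.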
OpenSearch
OpenSearch allows a website to describe its search interface in a standardized format so that browsers, aggregators, and tools can integrate search capabilities without bespoke agreements. It does not define how search results are ranked or monetized; it simply exposes how search can be performed.
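The description document itself is a small piece of XML. A minimal example using the OpenSearch 1.1 namespace looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example Search</ShortName>
  <Description>Search example.com</Description>
  <Url type="text/html"
       template="https://example.com/search?q={searchTerms}"/>
</OpenSearchDescription>
```

Any client that understands the template knows how to query the site, with no prior agreement between the two parties.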
Agent Discovery eXchange (AX)
AX applies the same principle to autonomous agents. It provides a standardized way for agents to advertise their existence, capabilities, and supported interaction protocols using existing internet infrastructure. Like robots.txt and OpenSearch, AX is advisory, decentralized, and non-enforcing. It enables indexing, aggregation, and exchange without requiring a central authority or prescribing execution behaviour.
Across all three, the pattern is consistent: discovery is standardized so that ecosystems, platforms, and intermediaries can emerge naturally on top.
Learn more here: https://github.com/sempfa/agent-discovery-exchange
Why Not Start With a Platform?
Centralized exchanges feel efficient early on, but they tend to conflate discovery with control. They decide who can participate, how agents are represented, and what rules apply by default. Over time, they encode business models, governance decisions, and technical assumptions into what should have been neutral infrastructure.
That does not mean platforms are bad. It means they are downstream.
AX does not compete with platforms. It enables them. Any organization can build a curated directory, a commercial marketplace, a trust broker, or a vertical index on top of AX. The difference is that no single platform becomes the gatekeeper for discovery itself.
This is the same reason the web has many search engines instead of one official index.
Crawlers, but for Agents
A useful mental model is to think about AX as enabling agent indexing in the same way early web metadata enabled crawling.
With AX, agents publish machine-readable descriptions of themselves at predictable locations. Automated systems can crawl these endpoints, index capabilities, categorize agents, and present them through different interfaces.
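To make the analogy concrete, here is a minimal indexing sketch in Python. It reuses the hypothetical well-known path and field names from earlier in this post; the real ones belong to the AX specification, not to this sketch.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical well-known path; the real one is defined by the AX spec.
WELL_KNOWN_PATH = "/.well-known/agent-discovery.json"

def build_index(domains: list[str]) -> dict[str, list[str]]:
    """Crawl discovery endpoints and index domains by advertised capability."""
    index: dict[str, list[str]] = {}
    for domain in domains:
        try:
            resp = requests.get(f"https://{domain}{WELL_KNOWN_PATH}", timeout=5)
            resp.raise_for_status()
            metadata = resp.json()
        except requests.RequestException:
            continue  # unreachable or non-participating domains are simply skipped
        for capability in metadata.get("capabilities", []):  # field name is illustrative
            index.setdefault(capability, []).append(domain)
    return index

if __name__ == "__main__":
    index = build_index(["agents.example.com", "agents.example.org"])
    print(index.get("document-summarization", []))
```

The indexer never hosts or controls the agents it finds. It only reads what they chose to publish.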
It is not unreasonable to imagine a future where discovery tools expose views like “all,” “documents,” “media,” and “agents.” In that world, agents are not hosted by the search provider. They are simply indexed.
That future only works if discovery is decentralized, standardized, and voluntary. AX is designed for that world.
Separation of Concerns Is the Core Design Choice
The most important thing about AX is what it refuses to do.
It does not define how agents authenticate.
It does not define how trust is enforced.
It does not define how tasks are executed.
It does not define how agents are monetized or ranked.
By keeping discovery separate from execution, AX allows innovation to happen independently at each layer. Interaction protocols can evolve. Trust frameworks can emerge. Platforms can differentiate. Discovery remains stable.
This is how the internet avoids collapsing under its own complexity.
Enabling Ecosystems, Not Owning Them
The goal of AX is not to prevent agent exchanges from existing. It is to ensure that they are built on shared, open signals rather than proprietary choke points.
Just as the web benefited from open discovery primitives before commercial platforms emerged, agent ecosystems benefit from agreeing on how agents introduce themselves before deciding how they are indexed, curated, or exchanged.
Discovery should be boring. That is its strength.
Closing Thought
The web did not scale because someone built “the website exchange.” It scaled because discovery was open, simple, and universal, and because platforms were free to emerge on top rather than being baked in from the start.
Agent ecosystems deserve the same foundation.
