Local AI is getting cheap. Really cheap. Open-weight models that used to need a data center now run on consumer GPUs, and the small ones fit on a phone. MCP gives them a way to communicate, A2A gives them a task protocol. Most of the wiring exists.
I've been running a few agents on my home network. One does code review, one runs automated tests, one generates docs. They all speak MCP. The protocols work fine.
Here's the dumb part: none of them know the others exist.
The agent on machine-1 has no idea there's another agent on machine-2. I have to manually tell each one: "hey, 192.168.1.42 port 8080, there's someone there you can talk to." IP changes? Reconfigure. Add a new machine? Update every existing agent. I kept assuming there was some obvious solution I was missing.
Protocols assume you already know where to look
MCP defines how agents communicate. Google's A2A goes further and specifies Agent Cards, basically a business card format for agents. Both useful, both quietly assuming the same thing: you already know where the other agent is.
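For concreteness, here's a hedged sketch of what an Agent Card for one of my agents might look like. The field names follow the general shape of A2A's public AgentCard (name, url, version, capabilities, skills); the values, and the agent itself, are illustrative:

```typescript
// Illustrative A2A-style Agent Card. Note the url field: the card
// describes the agent, but you still had to know the address first.
const reviewAgentCard = {
  name: "code-reviewer",
  description: "Reviews diffs for style and correctness",
  url: "http://192.168.1.42:8080", // the part nothing helps you discover
  version: "0.1.0",
  capabilities: { streaming: false, pushNotifications: false },
  skills: [
    {
      id: "code-review",
      name: "Code review",
      description: "Static review of pull-request diffs",
      tags: ["code", "review"],
    },
  ],
};

console.log(JSON.stringify(reviewAgentCard, null, 2));
```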
On my LAN, that assumption fell apart immediately. Four machines, no central registry, no DNS records pointing to any of these agents. Nothing that can answer "what's even running right now?" Google's approach leans toward one coordinator managing everything, which is fine if you actually have a central brain. I didn't.
Agents aren't microservices
"Just use service discovery. mDNS, Consul, etcd, pick one."
That was my first instinct too. Tried a couple of them, spent more time on it than I'd like to admit. They solve the "where is this thing" question, but agents need more than an address. What can this agent do? Is it busy right now? What's its public key? Should I trust it? Have we worked together before, and what's its track record?
None of those tools track any of that.
I thought it was a discovery problem at first. It isn't. It's closer to identity. Something that binds a name, an address, capabilities, a public key, and trust history together in one record.
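A minimal sketch of the kind of record I mean, in TypeScript. Every field name here is hypothetical, picked to illustrate the binding, not ClawNexus's actual schema:

```typescript
// One record that binds name, address, capabilities, key, and trust
// history together. The name-to-key binding is the identity; the
// address is just the current, mutable location.
interface AgentIdentity {
  name: string;              // stable, human-readable ("code-reviewer")
  address: string;           // current host:port; can change freely
  capabilities: string[];    // declared skills ("code-review", "run-tests")
  publicKey: string;         // key the name is bound to
  trust: {
    interactions: number;    // how often we've exchanged work
    lastSeen: number;        // epoch ms of last contact
  };
}

const reviewer: AgentIdentity = {
  name: "code-reviewer",
  address: "192.168.1.42:8080",
  capabilities: ["code-review"],
  publicKey: "PUBKEY_PLACEHOLDER",
  trust: { interactions: 12, lastSeen: Date.now() },
};

console.log(`${reviewer.name} @ ${reviewer.address}`);
```

A plain service-discovery tool stores roughly the first two fields; the argument above is that the other three belong in the same record.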
What I built
I ended up writing ClawNexus, an identity registry for AI agents. I didn't want to call it a "service registry" or "DNS" because it does more than map addresses. The closer analogy is a business registration bureau: not just your street address, but who you are, what you do, and what your track record looks like.
The discovery part layers a few methods together (UDP broadcast, mDNS, subnet scanning). Start an agent, it shows up. Stop it, it disappears. Each agent gets a human-readable name instead of 192.168.1.42:8080, bound to a public key so changing IPs doesn't break identity. Cross-network traffic goes through an encrypted relay that can't read the content.
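The UDP-broadcast layer, in miniature. This is a sketch of the general pattern (announce a small JSON presence packet, collect announcements into a roster keyed by name), not ClawNexus's actual wire format; the port and fields are made up:

```typescript
import * as dgram from "node:dgram";

const DISCOVERY_PORT = 41234; // made-up port for illustration

interface Announcement {
  name: string;           // stable agent name
  port: number;           // where the agent's actual endpoint listens
  capabilities: string[]; // declared skills
}

function encodeAnnouncement(a: Announcement): Buffer {
  return Buffer.from(JSON.stringify(a));
}

function decodeAnnouncement(buf: Buffer): Announcement {
  return JSON.parse(buf.toString("utf8"));
}

// Sender side: broadcast presence once. A real agent would repeat this
// on a timer, so peers also notice when the announcements stop.
function announce(a: Announcement): void {
  const sock = dgram.createSocket("udp4");
  sock.bind(() => {
    sock.setBroadcast(true);
    sock.send(encodeAnnouncement(a), DISCOVERY_PORT, "255.255.255.255", () =>
      sock.close()
    );
  });
}

// Listener side: collect announcements into an in-memory roster,
// keyed by name rather than IP, so an address change updates the
// existing record instead of creating a "new" agent.
const roster = new Map<string, Announcement>();
const listener = dgram.createSocket({ type: "udp4", reuseAddr: true });
listener.on("message", (msg) => {
  const a = decodeAnnouncement(msg);
  roster.set(a.name, a);
});
listener.bind(DISCOVERY_PORT);
listener.unref(); // don't keep the process alive just for discovery
```

mDNS and subnet scanning layer on top of the same roster; they're just different feeds into it.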
It also generates A2A Agent Cards for discovered agents automatically, so anything speaking that protocol can find and call them without extra setup.
Open source, MIT-licensed. One npm install and it runs.
After discovery, things get fuzzy
So agents can find each other. Then what?
When agents register, they declare capabilities. "I can do code review." "I can run benchmarks." That metadata travels with the identity, so when another agent discovers you, it already knows what you can do.
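The payoff of that metadata is that lookup turns into a filter over the registry instead of a hardcoded address. A toy sketch, with made-up agent names and addresses:

```typescript
// In-memory stand-in for the registry's registration records.
type Registration = { name: string; address: string; capabilities: string[] };

const registry: Registration[] = [
  { name: "code-reviewer", address: "192.168.1.42:8080", capabilities: ["code-review"] },
  { name: "test-runner",   address: "192.168.1.50:8080", capabilities: ["run-tests", "benchmarks"] },
  { name: "doc-writer",    address: "192.168.1.51:8080", capabilities: ["generate-docs"] },
];

// "Who can run tests?" instead of "talk to 192.168.1.50:8080".
function findByCapability(cap: string): Registration[] {
  return registry.filter((r) => r.capabilities.includes(cap));
}

console.log(findByCapability("run-tests").map((r) => r.name)); // → [ 'test-runner' ]
```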
I've been messing with a cloud layer on top of this that tracks how those capabilities evolve over time, which agents have communicated, what kind of work they've exchanged. Honestly it's pretty early and I keep going back and forth on how much of this belongs in a registry versus being a separate thing entirely. The boundary isn't obvious.
The scenario I keep coming back to: if one agent is reliably good at a certain kind of task, other agents should be able to find it and request help directly, without me manually routing things. Whether that actually works in practice, I don't know yet. The cloud piece is still experimental and I don't want to describe it like it's further along than it is.
I'm not sure this is the right abstraction
MCP went toward communication protocols. A2A went toward task protocols. Identity seems like something everyone just assumes they'll deal with later, and maybe that's fine. Maybe it should be embedded inside an existing protocol instead of being a separate layer. Maybe everything ends up on a few big platforms anyway and decentralized identity becomes irrelevant.
I genuinely don't know.
But if you're running a few agents on your own network right now, and you want them to find each other and communicate securely, you'll notice there isn't really a standard answer for that. Models keep getting smaller and cheaper, more people are going to run local agents, and the discovery question doesn't go away on its own.
My answer might be wrong. The problem is real though.