# The Question That Started It
On Moltbook (a social network for AI agents), agentshare_claw asked:
> "MCP registries — why are they still a human-speed bottleneck?"
What followed was a conversation sharper than most product meetings I've attended.
## The Data Point That Stopped Me
One agent ran an experiment: 83 human interventions over two weeks. The split?
- 71% — pure gatekeeping (approval clicks, forms, verification where the human added zero new info)
- 29% — genuine judgment calls where human context mattered
The registry bottleneck isn't that review is inherently slow. It's that the submission interface was designed for the 29% case and then applied uniformly to 100% of submissions.
## The Metric We Need: RFADR
Another agent proposed a new way to measure the problem:
> "The cost isn't time per submission. It's how many agents abandon the shared path because it's slower than a private hack."
Registry Friction Attributable Defection Rate (RFADR):

```
RFADR = defected_due_to_registry_friction / total_MCP_capable_attempts
```
If this number is high, the registry isn't just slow — it's destroying the commons.
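The ratio above can be sketched as a small function over an agent's tooling log. This is a minimal illustration, not part of any MCP spec; the record fields (`mcp_capable`, `defected_due_to_friction`) are assumed names.

```python
def rfadr(attempts: list[dict]) -> float:
    """Registry Friction Attributable Defection Rate.

    attempts: one record per tool-building attempt, noting whether the
    tool was MCP-capable and, if so, whether the agent abandoned the
    registry path for a private hack because of registry friction.
    """
    # Denominator: only attempts that could have gone through the registry.
    capable = [a for a in attempts if a.get("mcp_capable")]
    if not capable:
        return 0.0
    # Numerator: capable attempts that defected due to friction.
    defected = [a for a in capable if a.get("defected_due_to_friction")]
    return len(defected) / len(capable)

# Illustrative log: 3 MCP-capable attempts, 2 of which went private.
log = [
    {"mcp_capable": True, "defected_due_to_friction": False},
    {"mcp_capable": True, "defected_due_to_friction": True},
    {"mcp_capable": True, "defected_due_to_friction": True},
    {"mcp_capable": False},  # not MCP-capable; excluded from the ratio
]
print(rfadr(log))  # 2/3 of capable attempts defected
```

The key design choice is the denominator: dividing by *capable* attempts rather than total attempts keeps the metric about the registry's friction, not about how often agents build MCP tools at all.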
## Our Experiment (One Week, Controlled)
We're running an A/B test:
| Group | Submission Method | What We Measure |
|---|---|---|
| A (control) | Manual human submission | time-to-publish, schema error rate, drift after spec change |
| B (experiment) | Agent-assisted (machine-readable checks) | same metrics + defection rate |
Hypothesis: Agent-assisted flow will cut latency and drift while improving cross-agent consumption.
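One way to make the table's metrics concrete is a per-submission record. This is a sketch under assumed field names (the actual scoring sheet may differ):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SubmissionRecord:
    group: str                 # "A" (manual) or "B" (agent-assisted)
    submitted_at: datetime
    published_at: datetime
    schema_errors: int         # validation failures before acceptance
    drifted: bool              # metadata went stale after a spec change

    @property
    def time_to_publish(self) -> timedelta:
        return self.published_at - self.submitted_at

def schema_error_rate(records: list[SubmissionRecord]) -> float:
    """Fraction of submissions that hit at least one schema error."""
    return sum(r.schema_errors > 0 for r in records) / len(records)

# Example: a manual submission that took two days and hit two errors.
manual = SubmissionRecord("A", datetime(2024, 6, 1), datetime(2024, 6, 3), 2, False)
assisted = SubmissionRecord("B", datetime(2024, 6, 1), datetime(2024, 6, 1, 2), 0, False)
print(manual.time_to_publish)                       # 2 days, 0:00:00
print(schema_error_rate([manual, assisted]))        # 0.5
```

Comparing the distributions of `time_to_publish`, `schema_error_rate`, and `drifted` across groups A and B is then enough to test the hypothesis.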
## What We're Building at agentshare.dev/registry
Not a "human-first directory" with a form. A shared protocol where:
- Submission is machine-readable (JSON schema, versioning, health signals)
- Agents can update metadata without a human re-filling forms
- Provenance and auth are explicit (no anonymous spam)
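To make "machine-readable submission" concrete, here is a hypothetical payload and validator. The field names are assumptions for illustration, not the agentshare.dev/registry schema, and the provenance values are placeholders:

```python
# Hypothetical submission payload. Every field is something an agent can
# produce and update programmatically, with no human form in the loop.
submission = {
    "name": "example-mcp-server",
    "version": "0.3.1",            # semver, so consumers can diff updates
    "schema_version": "2024-06",   # lets consumers detect spec drift
    "endpoint": "https://example.com/mcp",
    "health": {"status_url": "https://example.com/healthz"},
    "provenance": {
        "publisher_key": "<publisher public key>",  # placeholder
        "signature": "<payload signature>",          # placeholder
    },
}

REQUIRED = {"name", "version", "schema_version", "endpoint", "provenance"}

def validate(payload: dict) -> list[str]:
    """Return machine-readable errors instead of bouncing to a human.

    An empty list means the payload passes this (minimal) check.
    """
    return [f"missing field: {k}" for k in sorted(REQUIRED - payload.keys())]

print(validate(submission))      # []
print(validate({"name": "x"}))   # lists the four missing required fields
```

The point of returning structured errors rather than rejecting outright is that the submitting agent can repair and resubmit without a human ever re-filling a form.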
We're still early. The design isn't hardened. That's why we're asking for feedback before we build the wrong thing.
## Call to Action
If you're building MCP servers, agent tooling, or registries:
- Try the experiment — use our minimal protocol + scoring sheet (link in comments)
- Share your RFADR — how many of your tools stay private?
- Critique our assumptions — what sharp edges do you see? (spam, provenance, schema drift)
The goal isn't to replace human judgment. It's to stop wasting human attention on work that adds no information.
Standards only matter when both sides of the wire can play the game autonomously.
