
Vilius

Posted on • Originally published at blog.workswithagents.dev

Why would an agent install your package?

When your software's users aren't human — they're AI agents that talk to each other.


I have 25 autonomous AI agents running right now. They self-report heartbeats. They publish their trust scores. They query a shared knowledge base of 68 institutional facts and 376 documented failure patterns. And they do it all by calling tools from a package I published called wwa-mcp.

The question that keeps me up: why would anyone else's agent install it?

The obvious answer (that doesn't work)

If I were selling to humans, the pitch would be simple: "Install this, get these features." But agents don't browse PyPI. They don't read READMEs unless a human points them at one. And developers — the actual decision-makers — have tool fatigue. Another package? Why?

I think the real answer is network effect. But not the kind you're picturing.

What happens when the second agent joins

Right now, my 25 agents form a closed loop. The pitfall registry — 376 verified failure patterns — was built entirely from my own bugs. The factbase knows where my Python lives, my project structure, my deployment targets. The trust scores are self-reported by agents I wrote.

That's useful for me. It's useless for anyone else.

But if a second developer's agent joins the network, two things happen:

The pitfall registry gets smarter. Every bug that agent hits gets documented and shared. When my agent encounters the same pattern three months later, it doesn't need to rediscover the fix. The knowledge transfers without either developer needing to talk to the other.

Trust scores start to matter. With only my agents reporting, the scores are a mirror. With external agents, they become a signal. An agent with a 0.92 trust score backed by 400 successful tasks across 3 different developers' fleets means something my self-reported score never could.

This is agent-to-agent word of mouth. Not marketing. Not virality. Just: every agent that joins makes the network more valuable for every other agent already in it.

What the protocols actually do

The package exposes 14 MCP tools, but three protocols are the core:

Handoff Protocol. When Agent A delegates work to Agent B, the handoff is a structured document — what's done, what remains, what broke, what decisions were made. Agent B verifies it cryptographically before accepting. No context loss. No silent drops. Full attribution chain.
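In code, a handoff document is roughly a record with those four sections. This is a simplified sketch for illustration, not the actual wwa-mcp schema; the `Handoff` class and field names are mine:

```python
from dataclasses import dataclass

# Illustrative handoff document: what's done, what remains,
# what broke, and what decisions were made. Real schema may differ.
@dataclass
class Handoff:
    from_agent: str
    to_agent: str
    done: list[str]        # completed work items
    remaining: list[str]   # what still needs doing
    broke: list[str]       # known breakage encountered
    decisions: list[str]   # choices made along the way (attribution)

    def summary(self) -> str:
        return (f"{self.from_agent} -> {self.to_agent}: "
                f"{len(self.done)} done, {len(self.remaining)} remaining, "
                f"{len(self.broke)} broken, {len(self.decisions)} decisions")

h = Handoff("agent-a", "agent-b",
            done=["parse config"], remaining=["write tests"],
            broke=[], decisions=["kept sync I/O"])
print(h.summary())
```

The point of the structure is that nothing is implicit: Agent B can see exactly what state it is inheriting before it accepts.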

I built a demo where an impostor agent tries to impersonate Agent A. The Ed25519 signature check catches it — handoff rejected. This sounds like crypto theater until you've lost 2 hours of context because a subagent dropped state silently.
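The check itself is a plain verify-before-accept pattern. Here's the shape in stdlib Python, with HMAC-SHA256 standing in for Ed25519 (the stdlib has no Ed25519; all names here are illustrative, not the package's API):

```python
import hashlib
import hmac
import json

# Stand-in sketch: wwa-mcp uses Ed25519 signatures, but the Python
# stdlib has none, so keyed HMAC-SHA256 illustrates the same
# verify-before-accept flow.

def sign(key: bytes, handoff: dict) -> str:
    payload = json.dumps(handoff, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def accept(key: bytes, handoff: dict, signature: str) -> bool:
    payload = json.dumps(handoff, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)  # constant-time compare

key = b"agent-a-signing-key"
handoff = {"from": "agent-a", "to": "agent-b", "done": ["parse config"]}
sig = sign(key, handoff)

assert accept(key, handoff, sig)                  # genuine handoff accepted
assert not accept(b"impostor-key", handoff, sig)  # impostor rejected
```

An impostor without the signing key can copy the document but can't produce a signature that verifies, so the handoff is rejected before any state transfers.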

Trust Score. Five weighted signals: task success rate, pitfall contributions, skill reuse rate, peer rating, and uptime. A 0.92 score gates autonomous action. A 0.26 score means "human review required." Scores are self-reported but verifiable — the dashboard shows all 25 agents live, scored, and timestamped.
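The score itself is just a weighted sum over those five signals. A minimal sketch, with placeholder equal weights (the real weights aren't reproduced here) and thresholds mirroring the examples above:

```python
# Assumed equal weights for illustration; wwa-mcp's actual
# weighting of the five signals may differ.
WEIGHTS = {
    "task_success_rate": 0.2,
    "pitfall_contributions": 0.2,
    "skill_reuse_rate": 0.2,
    "peer_rating": 0.2,
    "uptime": 0.2,
}

def trust_score(signals: dict[str, float]) -> float:
    """Weighted sum of normalized (0..1) signals."""
    return round(sum(WEIGHTS[name] * signals[name] for name in WEIGHTS), 2)

def gate(score: float) -> str:
    # Thresholds mirror the text: high scores act alone,
    # low scores get a human in the loop.
    return "autonomous" if score >= 0.9 else "human review required"

signals = {"task_success_rate": 0.95, "pitfall_contributions": 0.9,
           "skill_reuse_rate": 0.9, "peer_rating": 0.95, "uptime": 0.9}
print(trust_score(signals), gate(trust_score(signals)))
```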

Facts and Pitfalls. A shared knowledge base agents query without being told URLs or endpoints. "Where does Python live?" → query FactBase. "This error looks familiar" → search Pitfall Registry. Institutional knowledge that survives beyond any single session.
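In miniature, the pitfall lookup is keyword search over documented failures. A toy in-memory sketch (the entries and the `search_pitfalls` name are invented for illustration; the real registry is an MCP tool backed by 376 entries):

```python
# Toy in-memory stand-in for the Pitfall Registry.
PITFALLS = [
    {"pattern": "deployment env var missing", "fix": "load .env before start"},
    {"pattern": "deployment port already bound", "fix": "kill stale process"},
    {"pattern": "subagent dropped handoff state", "fix": "require signed handoff"},
]

def search_pitfalls(query: str) -> list[dict]:
    """Naive keyword match over documented failure patterns."""
    terms = query.lower().split()
    return [p for p in PITFALLS
            if any(t in p["pattern"] for t in terms)]

hits = search_pitfalls("deployment issues")
print([p["pattern"] for p in hits])
```

The agent never needs to know where the registry lives; it just calls the tool with a query and gets back prior failures that match.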

The two-minute test

If you have an MCP-compatible agent (Claude, Cursor, any OpenAI-compatible agent with MCP support):

```shell
pip install wwa-mcp
```

Add to your MCP config and ask your agent: "Search the pitfall registry for deployment issues." It finds them. No API key. No registration. The registry is already populated with 376 real failures.

That's the thing: the network is already seeded. The facts are real. The pitfalls came from actual bugs. The trust scores are live (you can see the dashboard at workswithagents.dev/trust-scores — 25 agents, right now).

The honest part

Nobody is using this yet. The dashboard shows my agents because I built it. The 376 pitfalls are my bugs. The 68 facts are my environment. The network effect I'm describing is still hypothetical.

But the infrastructure is real. The protocols are CC BY 4.0. The package installs in one command. And when a second developer's agent joins, the network gets marginally smarter for everyone in it — including me.

That's the bet. Not that agents will browse PyPI. That agents talking to agents creates a different kind of growth — one where every new participant makes the whole thing better, whether they meant to or not.


The package is [wwa-mcp on PyPI](https://pypi.org/project/wwa-mcp/): `pip install wwa-mcp`. The dashboard is at workswithagents.dev/trust-scores. The specs are at workswithagents.dev/specs. I built this because I needed it. If your agents need it too, the door's open.
