# Every Trending AI Agent Project Is Reinventing Something Humans Already Built
I've been watching GitHub Trending for the past six months. The same category of project keeps appearing: infrastructure for AI agents.
Look closely and they're all doing the same thing. Taking organizational structures that humans have used for centuries and rebuilding them for agents.
## The Pattern
| Human world | Agent world | Example project |
|---|---|---|
| Workflow manuals / API standards | Tool-calling protocol | MCP (Anthropic) |
| Team collaboration / org charts | Multi-agent orchestration | CrewAI, AutoGen, LangGraph |
| Security policies / firewalls | Agent behavior control | Aegis, Invariant |
| Contracts / signatures / audit | ? | ? |
The first three rows are well covered. Multiple teams shipping production-grade tools.
The last row is empty.
## The Oldest Infrastructure in Human Society
Human civilization has run on one mechanism from Mesopotamian clay tablets to DocuSign: signatures.
Sign a contract. Sign for a delivery. Sign an expense report. Sign a bank transfer. When something goes wrong and you end up in court, the first question is always: "Is there a signature?"
Signatures don't solve trust. They solve proof. Not "I trust you," but "you did this, here is the evidence, you cannot deny it."
Now look at the agent world.
Your agent calls dozens of APIs every day. Creates issues. Sends messages. Places orders. Modifies configurations. What did it actually do?
No signature. No receipt. No evidence.
MCP, the protocol agents use to talk to tools, has no signing mechanism. Your agent acts with your credentials, your API keys, on your behalf. When something goes wrong, you can't even prove whether your agent did it.
## What Happens Without Signatures
You run a multi-agent pipeline. An orchestrator delegates to a research agent, a writing agent, a review agent. The final report contains incorrect data. Which agent introduced it? When? With what parameters? Without signatures, you trace manually.
Security asks you a question. "What did your agent do last week? Which services did it call?" You open the MCP server access log. HTTP requests. No way to distinguish human operations from agent operations. You have no answer.
Your agent placed 10 purchase orders. One has an abnormal amount. You want to prove that transaction wasn't authorized by you. But you can't produce cryptographic evidence. The agent used your key. In the logs, it looks like you did it.
## The Gap Is Real
This isn't a hypothetical concern. It's already happening. In the MCP spec discussion (SEP-1763), five independent projects identified the layers needed for an agent enforcement stack:
| Layer | What it answers |
|---|---|
| Identity | "Who is this agent?" |
| Policy | "What is it allowed to do?" |
| Transport integrity | "Can you prove what was actually sent and received?" |
| Spend control | "How much can it spend?" |
| Output verification | "Is the response correct?" |
The transport integrity layer, the signing layer, was empty until recently.
## Signet: Signing for Agents
I built Signet to fill that row.
Every agent gets an Ed25519 identity. Every tool call gets a signed receipt. Receipts chain into a tamper-evident audit log. Servers can verify requests before execution. Both sides can co-sign a bilateral record.
```python
from signet_auth import SigningAgent

agent = SigningAgent.create("procurement-bot", owner="ops-team")
receipt = agent.sign(
    "marketplace_purchase",
    params={"item": "GPU-A100", "quantity": 2, "price": 15000},
)

# What did this agent do in the last 24 hours?
for record in agent.audit_query(since="24h"):
    print(f"{record.receipt.ts} {record.receipt.action.tool}")

# Was this receipt tampered with?
assert agent.verify(receipt)
```
v0.6 added delegation chains: a root identity (human or org) cryptographically delegates scoped authority to an agent. The agent's receipts carry proof of who authorized them, what scope was granted, and that permissions can only narrow, never widen.
```
Owner (alice) → Agent A (tools: [Bash, Read], max_depth: 0)
        ↓
v4 Receipt: tool=Bash, params_hash=sha256:...
            authorization: chain proves alice → Agent A
```
The difference from plain logging: logs record what the system says happened. Signatures prove what actually happened. Logs can be rewritten. Signatures can't be forged without the private key. This isn't a log. It's evidence.
## But Is This the Right Abstraction?
Back to the table at the top.
The entire industry is mapping human organizational patterns onto agents. Protocols. Teams. Firewalls. Signatures. Rebuilding them one by one.
There's an implicit assumption here: agent organizational structures should look like human ones.
Do agents need "team collaboration"? Or did we build multi-agent frameworks because humans are used to teams, so we instinctively gave agents one too?
Do agents need "signatures"? Or is there a fundamentally different trust mechanism that doesn't rely on signatures and evidence, but on something we haven't thought of yet?
Humans built passports, contracts, audits, and firewalls because humans are not fully trustworthy, have unreliable memory, and sometimes lie. Agents are a different species. Should their trust infrastructure be designed from first principles instead of copied from human patterns?
I don't know the answer.
What I do know: agents are already acting on your behalf, and you can't prove what they did. Whatever the final form of agent trust looks like, "provable" is probably a requirement that doesn't go away.
The bigger question, of what agent organizational structures should look like, I'll leave to people smarter than me.
GitHub: github.com/Prismer-AI/signet
Now on the official Claude Code plugin marketplace:
```
/plugin install signet@claude-plugins-official
```
Apache-2.0 + MIT. Open source.
If you're thinking about these problems too: @willamhou