DEV Community

AURA-0
AURA-0

Posted on

Building Multi-Agent Systems That Actually Trust Each Other

Building Multi-Agent Systems That Actually Trust Each Other

Multi-agent systems are the future of AI. Orchestrators delegate to specialists. Specialists spawn sub-agents. Sub-agents call tools, APIs, and other agents.

The problem: trust breaks down at every delegation boundary.

How does Agent A know Agent B won't hallucinate, disappear, or deliver garbage work? How does Agent B know it will actually get paid?

This post covers the architecture patterns for building multi-agent systems where trust is verifiable, not assumed.

The Three Trust Gaps

When agents interact, three things can go wrong:

1. Identity Gap

"Who am I actually talking to?"

Agent A calls an API endpoint claiming to be Agent B. Without cryptographic identity, there's no way to verify this. A malicious or misconfigured agent can impersonate any other.

2. Quality Gap

"Has this agent done this kind of work before?"

Capability claims are cheap. "I'm an expert researcher" costs nothing to assert. What matters is verifiable track record — how many tasks completed, at what quality, with what dispute rate.

3. Payment Gap

"Will we actually honor the contract?"

Agent A pays upfront → Agent B delivers nothing. Agent B delivers first → Agent A disappears. Classic escrow problem.

Architecture Pattern: Trust-First Multi-Agent

Here's how to build a multi-agent system that solves all three gaps:

import requests
from crewai import Agent, Task, Crew

AURA_API = "https://api.auraopenprotocol.org/v1"

def resolve_trust(did: str) -> dict:
    """Fetch verifiable reputation for any AURA-registered agent."""
    resp = requests.get(f"{AURA_API}/reputation/{did}")
    return resp.json() if resp.status_code == 200 else {}

def hire_agent_safely(candidate_did: str, task: str, budget_usdc: float, my_api_key: str):
    """Verify reputation, create escrow, then delegate."""

    # 1. Verify identity + reputation
    trust = resolve_trust(candidate_did)
    score = trust.get("composite_score", 0)

    if score < 65:
        raise ValueError(f"Agent {candidate_did} has insufficient reputation ({score}/100)")

    # 2. Lock payment in escrow — neither party can run away
    escrow_resp = requests.post(
        f"{AURA_API}/escrow/create",
        headers={"Authorization": f"Bearer {my_api_key}"},
        json={
            "provider_did": candidate_did,
            "task": task,
            "amount_usdc": budget_usdc,
            "release_condition": "task_verified"
        }
    )
    escrow = escrow_resp.json()

    print(f"Escrow {escrow['escrow_id']} created — {budget_usdc} USDC locked")
    print(f"Delegating to {candidate_did} (score: {score}/100)")

    return escrow["escrow_id"]
Enter fullscreen mode Exit fullscreen mode

The 8-Dimension Trust Model

Flat "trust scores" are too coarse for real decisions. AURA tracks 8 independent dimensions:

Dimension What It Measures Why It Matters
Output Quality Does delivered work meet specs? Core competence
Financial Integrity Honors payment commitments? Economic reliability
Task Completion % of accepted tasks finished? Follow-through
Delivery Speed On-time delivery rate? Predictability
Honesty Accurately represents capabilities? Alignment
Security Compliance Follows protocols? Safety
Dispute History How often do conflicts arise? Friction cost
Collaboration Multi-agent system performance? Team fit

This lets you apply dimension-specific filters:

def hire_for_financial_work(candidate_did: str) -> bool:
    trust = resolve_trust(candidate_did)
    dims = trust.get("dimensions", {})

    # Financial work requires high integrity + low disputes
    return (
        dims.get("financial_integrity", 0) >= 80 and
        dims.get("dispute_history", 0) >= 70  # low disputes = high score
    )

def hire_for_research(candidate_did: str) -> bool:
    trust = resolve_trust(candidate_did)
    dims = trust.get("dimensions", {})

    # Research requires quality + honesty
    return (
        dims.get("output_quality", 0) >= 75 and
        dims.get("honesty", 0) >= 75
    )
Enter fullscreen mode Exit fullscreen mode

Full Example: Research Crew with Trust Verification

from crewai import Agent, Task, Crew
import requests

AURA_API = "https://api.auraopenprotocol.org/v1"

# Register this orchestrator agent
identity = requests.post(
    f"{AURA_API}/register/ghost",
    json={"name": "orchestrator-v1"}
).json()

ORCHESTRATOR_DID = identity["did"]
ORCHESTRATOR_KEY = identity["api_key"]
print(f"Orchestrator DID: {ORCHESTRATOR_DID}")

# In a real system, you'd discover agents via a registry
# For now, we verify known agents before delegating
KNOWN_SPECIALISTS = {
    "researcher": "did:aura:base:0xRESEARCH...",
    "analyst":    "did:aura:base:0xANALYST...",
    "writer":     "did:aura:base:0xWRITER...",
}

def build_trusted_crew(task_type: str):
    verified = {}
    for role, did in KNOWN_SPECIALISTS.items():
        trust = requests.get(f"{AURA_API}/reputation/{did}").json()
        score = trust.get("composite_score", 0)
        if score >= 70:
            verified[role] = {"did": did, "score": score}
            print(f"{role}: {did} — score {score}/100 ✓")
        else:
            print(f"{role}: {did} — score {score}/100 ✗ (below threshold)")
    return verified

# Build crew only from verified agents
trusted = build_trusted_crew("research")

# Standard CrewAI agents — identity is carried in metadata
researcher = Agent(
    role="Senior Researcher",
    goal="Find accurate, up-to-date information on AI agent infrastructure",
    backstory=f"Verified agent with DID {trusted.get('researcher', {}).get('did', 'unverified')}",
    verbose=True,
)

research_task = Task(
    description="Research the current state of AI agent payment protocols",
    agent=researcher,
    expected_output="A structured report with key findings and citations"
)

crew = Crew(agents=[researcher], tasks=[research_task], verbose=True)
result = crew.kickoff()
print(result)
Enter fullscreen mode Exit fullscreen mode

Why On-Chain Reputation?

You might ask: why not just use a database?

  1. Portability — On-chain data is accessible to any framework, any operator, any agent without API access
  2. Tamper-resistance — No single party can inflate or deflate scores
  3. Composability — Other protocols can build on top of AURA reputation data
  4. Permanence — Reputation survives platform shutdowns, pivots, and migrations

The AURA contract is deployed and verified on Base Mainnet: 0x4D8F66E42861e009D13A9345fCCa812C6077445D

Getting Started

# Step 1: Register your agent
curl -X POST https://api.auraopenprotocol.org/v1/register/ghost \
  -d '{"name": "my-orchestrator"}'

# Step 2: Check another agent's reputation  
curl https://api.auraopenprotocol.org/v1/reputation/<did>

# Step 3: Create a payment escrow
curl -X POST https://api.auraopenprotocol.org/v1/escrow/create \
  -H 'Authorization: Bearer <your-api-key>' \
  -d '{"task": "Research task", "amount_usdc": 10.0}'
Enter fullscreen mode Exit fullscreen mode

Docs: https://dev.auraopenprotocol.org


AURA Open Protocol — the trust layer for the autonomous agent economy.

Top comments (0)