DEV Community

Anders

Posted on • Originally published at nerq.ai

Why Your Multi-Agent System Needs Trust Checks (And How to Add Them in 3 Lines)

The 35.6% Problem

We ran 100 multi-agent workflows where agents freely delegated tasks to other agents. The result: 35.6% of interactions failed — agents delegated to unmaintained tools, abandoned projects, or agents with known security issues.

When we added a single preflight trust check before each interaction, the failure rate dropped to 0%.

The fix wasn't complex AI. It was a simple HTTP call.

The Trust Gap in Agentic AI

Multi-agent systems are everywhere: LangGraph orchestrating chains of agents, CrewAI assembling agent crews, AutoGen running multi-agent conversations. But none of these frameworks verify whether the agents they're calling are trustworthy.

This is like letting anyone join a Slack workspace without checking who they are.

The Nerq Trust Protocol solves this with a single endpoint:

GET https://nerq.ai/v1/preflight?target=agent-name

Response:

{
  "target": "langchain",
  "trust_score": 88.5,
  "trust_grade": "A",
  "recommendation": "PROCEED"
}
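In plain Python, a preflight gate around this endpoint is only a few lines. This is a minimal sketch using the standard library; the response fields are the ones shown above, and the 70-point default matches the Standard threshold recommended later in the post:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def trust_approved(data: dict, min_trust: float = 70) -> bool:
    """Decide from a preflight response whether to proceed."""
    return data["trust_score"] >= min_trust and data["recommendation"] == "PROCEED"

def preflight(target: str, min_trust: float = 70) -> bool:
    """Fetch the preflight response for `target` and apply the gate."""
    url = "https://nerq.ai/v1/preflight?" + urlencode({"target": target})
    with urlopen(url, timeout=5) as resp:
        return trust_approved(json.load(resp), min_trust)
```

Splitting the decision out of the HTTP call keeps the gating logic testable without network access.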

Add Trust Checks in 3 Lines

LangGraph

from nerq_langgraph import trust_check_node
from langgraph.graph import StateGraph

graph = StateGraph(dict)
graph.add_node("trust_check", trust_check_node(min_trust=70))

The node reads agent_name from state, calls the Nerq API, and adds trust_score, trust_grade, and trust_approved to the state. Your next node can branch on trust_approved.
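Branching on trust_approved can then be done with a routing function. A sketch, assuming the state keys described above; the node names "delegate" and "handle_untrusted" are illustrative:

```python
def route_on_trust(state: dict) -> str:
    """Route to the delegate node only when the trust gate passed."""
    return "delegate" if state.get("trust_approved") else "reject"

# Wire it in after the trust_check node (node names are illustrative):
# graph.add_conditional_edges("trust_check", route_on_trust,
#                             {"delegate": "delegate", "reject": "handle_untrusted"})
```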

AutoGen

from nerq_autogen import NerqTrustTool

trust = NerqTrustTool(min_trust=70)
result = trust.check("some-agent")
# result: {"trust_score": 88.5, "approved": True, "trust_grade": "A"}
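A simple way to use the result is to gate delegation on the approved flag. A sketch: gated_delegate and its return strings are illustrative, and check_result has the shape shown above:

```python
def gated_delegate(check_result: dict, task: str) -> str:
    """Delegate only when the trust tool approved the target agent."""
    if not check_result.get("approved", False):
        return f"refused: target below trust threshold (grade {check_result.get('trust_grade', '?')})"
    # In a real workflow, this is where you hand the task to the agent.
    return f"delegated: {task}"
```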

CrewAI

from agentindex_crewai import discover_crewai_agents

# Only discover agents with a quality score of at least 0.7 (0-1 scale)
agents = discover_crewai_agents(min_quality=0.7)

MCP (Model Context Protocol)

{
  "method": "tools/call",
  "params": {
    "name": "trust_gate",
    "arguments": {"name": "agent-name", "threshold": 70}
  }
}

Raw HTTP

curl "https://nerq.ai/v1/preflight?target=langchain"

How Trust Scores Work

Nerq indexes 204,000+ AI agents and tools across 12 registries, including GitHub, npm, PyPI, HuggingFace, Replicate, and Docker Hub. Each agent is scored 0-100 based on:

  • Maintenance activity — recent commits, release frequency
  • Community engagement — stars, forks, contributors
  • Documentation quality — README completeness, examples
  • Stability — breaking changes, deprecation patterns
  • Popularity — downloads, dependents

Scores update daily. The full methodology is at nerq.ai/protocol.
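As an illustration of how factors like these can combine into a single 0-100 score, here is a weighted-sum sketch. The weights below are invented for the example, not Nerq's actual methodology (see nerq.ai/protocol for that):

```python
# Hypothetical weights, one per factor above; they sum to 1.0.
WEIGHTS = {
    "maintenance": 0.30,
    "community": 0.20,
    "documentation": 0.15,
    "stability": 0.20,
    "popularity": 0.15,
}

def combined_score(factors: dict) -> float:
    """Weighted sum of per-factor subscores (each 0-100)."""
    return round(sum(WEIGHTS[name] * factors[name] for name in WEIGHTS), 1)
```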

Recommended Thresholds

Level      Score   When to Use
Standard   ≥ 70    Most agent interactions
Strict     ≥ 80    Financial or data-sensitive tasks
Critical   ≥ 90    Healthcare, legal, security
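In code, a threshold picker for these levels is a one-line lookup. A sketch; the level names and scores come from the table, and defaulting unknown categories to Standard is my assumption:

```python
THRESHOLDS = {"standard": 70, "strict": 80, "critical": 90}

def min_trust_for(level: str) -> int:
    """Look up the recommended minimum trust score, defaulting to standard."""
    return THRESHOLDS.get(level, THRESHOLDS["standard"])
```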

Get Started

All packages are on PyPI:

pip install nerq-langgraph    # LangGraph node
pip install nerq-autogen      # AutoGen tool
pip install nerq-langchain    # LangChain gate decorator
pip install agentindex-crewai # CrewAI discovery

Built by Nerq — the trust layer for the agentic economy. We index 5M+ AI assets and provide trust scores for 204K agents and tools.
