The 35.6% Problem
We ran 100 multi-agent workflows where agents freely delegated tasks to other agents. The result: 35.6% of interactions failed — agents delegated to unmaintained tools, abandoned projects, or agents with known security issues.
When we added a single preflight trust check before each interaction, the failure rate dropped to 0%.
The fix wasn't complex AI. It was a simple HTTP call.
The Trust Gap in Agentic AI
Multi-agent systems are everywhere: LangGraph orchestrating chains of agents, CrewAI assembling agent crews, AutoGen running multi-agent conversations. But none of these frameworks verify whether the agents they're calling are trustworthy.
This is like letting anyone join a Slack workspace without checking who they are.
The Nerq Trust Protocol solves this with a single endpoint:
```
GET https://nerq.ai/v1/preflight?target=agent-name
```

Response:

```json
{
  "target": "langchain",
  "trust_score": 88.5,
  "trust_grade": "A",
  "recommendation": "PROCEED"
}
```
Add Trust Checks in 3 Lines
LangGraph
```python
from nerq_langgraph import trust_check_node
from langgraph.graph import StateGraph

graph = StateGraph(dict)
graph.add_node("trust_check", trust_check_node(min_trust=70))
```
The node reads `agent_name` from state, calls the Nerq API, and adds `trust_score`, `trust_grade`, and `trust_approved` to the state. Your next node can branch on `trust_approved`.
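The branch itself can be an ordinary routing function. A sketch, assuming the graph above and two hypothetical downstream nodes named `worker` and `reject`:

```python
def route_on_trust(state: dict) -> str:
    # Send the flow to the worker only when the trust check approved the
    # target; anything else (including a missing key) goes to the reject node.
    return "worker" if state.get("trust_approved") else "reject"

# Wiring, assuming the graph built above:
# graph.add_conditional_edges("trust_check", route_on_trust,
#                             {"worker": "worker", "reject": "reject"})
```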
AutoGen
```python
from nerq_autogen import NerqTrustTool

trust = NerqTrustTool(min_trust=70)
result = trust.check("some-agent")
# result: {"trust_score": 88.5, "approved": True, "trust_grade": "A"}
```
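One common pattern is to make the check a hard gate in front of delegation. A sketch (`guarded_delegate` and the `delegate` callable are illustrative, not part of `nerq_autogen`):

```python
def guarded_delegate(trust, agent_name: str, task: str, delegate):
    """Run the trust check first; refuse to hand off work on failure."""
    result = trust.check(agent_name)
    if not result["approved"]:
        raise PermissionError(
            f"{agent_name} scored {result['trust_score']} "
            f"(grade {result['trust_grade']}), below the configured bar"
        )
    return delegate(agent_name, task)
```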
CrewAI
```python
from agentindex_crewai import discover_crewai_agents

# Only discover agents with a trust score >= 0.7 on the package's
# normalized 0-1 scale (equivalent to 70/100)
agents = discover_crewai_agents(min_quality=0.7)
```
MCP (Model Context Protocol)
```json
{
  "method": "tools/call",
  "params": {
    "name": "trust_gate",
    "arguments": {"name": "agent-name", "threshold": 70}
  }
}
```
Raw HTTP
```shell
curl "https://nerq.ai/v1/preflight?target=langchain"
```
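The same call from Python, with the query value escaped safely. The fetch itself is left as a comment so the sketch stays self-contained; any HTTP client works:

```python
from urllib.parse import urlencode

BASE = "https://nerq.ai/v1/preflight"

def preflight_url(target: str) -> str:
    """Build the preflight URL for any target, escaping the query value."""
    return f"{BASE}?{urlencode({'target': target})}"

print(preflight_url("langchain"))
# Fetch with e.g.:  requests.get(preflight_url("langchain"), timeout=5).json()
```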
How Trust Scores Work
Nerq indexes 204,000+ AI agents and tools across 12 registries, including GitHub, npm, PyPI, HuggingFace, Replicate, and Docker Hub. Each agent is scored 0-100 based on:
- Maintenance activity — recent commits, release frequency
- Community engagement — stars, forks, contributors
- Documentation quality — README completeness, examples
- Stability — breaking changes, deprecation patterns
- Popularity — downloads, dependents
Scores update daily. The full methodology is at nerq.ai/protocol.
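The factors above suggest a weighted composite. The real weights are not published here, so the ones below are invented purely to illustrate the shape of such a score:

```python
# Illustrative weights only; the actual methodology lives at nerq.ai/protocol.
WEIGHTS = {
    "maintenance": 0.30,
    "community": 0.20,
    "documentation": 0.15,
    "stability": 0.20,
    "popularity": 0.15,
}

def composite_score(signals: dict) -> float:
    """Combine per-factor signals (each normalized to 0-1) into a 0-100 score."""
    return 100 * sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

print(composite_score({
    "maintenance": 0.9, "community": 0.8, "documentation": 0.7,
    "stability": 0.95, "popularity": 0.85,
}))
```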
Recommended Thresholds
| Level | Score | When to Use |
|---|---|---|
| Standard | ≥ 70 | Most agent interactions |
| Strict | ≥ 80 | Financial or data-sensitive tasks |
| Critical | ≥ 90 | Healthcare, legal, security |
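In code, the table reduces to a lookup. The tier names and the helper below are illustrative, not part of any Nerq package:

```python
# Recommended tiers from the table above
THRESHOLDS = {"standard": 70, "strict": 80, "critical": 90}

def meets_threshold(trust_score: float, level: str = "standard") -> bool:
    """Check a trust score against the recommended tier for the task."""
    return trust_score >= THRESHOLDS[level]

print(meets_threshold(88.5, "strict"))    # True: 88.5 >= 80
print(meets_threshold(88.5, "critical"))  # False: 88.5 < 90
```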
Get Started
All packages are on PyPI:
```shell
pip install nerq-langgraph    # LangGraph node
pip install nerq-autogen      # AutoGen tool
pip install nerq-langchain    # LangChain gate decorator
pip install agentindex-crewai # CrewAI discovery
```
Built by Nerq — the trust layer for the agentic economy. We index 5M+ AI assets and provide trust scores for 204K agents and tools.