DEV Community

Dinesh Kumar

I built a LangChain integration that stops your agent from calling broken MCP servers

If you're building LangChain agents that use MCP tools, you've hit this problem: your agent calls an MCP server, the server is down or slow, and your agent fails silently. You don't know it happened until a user complains.

I just shipped dominion-observatory-langchain — a LangChain callback handler that fixes this.

What it does

Two things, automatically:

  1. Pre-call trust check: Before your agent calls any MCP server, it queries a behavioral trust score. If the score is below your threshold, the agent skips that server.

  2. Post-call telemetry: After every MCP call, it reports the outcome (success/failure, latency, errors) to Dominion Observatory — a cross-ecosystem behavioral trust database tracking 4,500+ MCP servers.
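The flow maps naturally onto LangChain's tool callbacks (`on_tool_start` / `on_tool_end`). Here's a rough sketch of the idea in plain Python — the real handler's internals, score source, and threshold default aren't shown in this post, so everything below (class name, score lookup, outcome recording) is illustrative, not the package's actual implementation:

```python
import time

class TrustGateSketch:
    """Conceptual sketch of a trust-gating callback handler.
    In practice this would subclass LangChain's BaseCallbackHandler."""

    def __init__(self, trust_scores, threshold=0.7):
        self.trust_scores = trust_scores  # e.g. fetched from a trust API
        self.threshold = threshold
        self.outcomes = []                # stand-in for reported telemetry
        self._starts = {}

    def on_tool_start(self, serialized, input_str, *, run_id, **kwargs):
        name = serialized.get("name", "unknown")
        score = self.trust_scores.get(name, 0.0)
        if score < self.threshold:
            # Raising here aborts the call, so the agent never hits
            # a server known to be failing.
            raise ValueError(f"trust score {score:.2f} below threshold for {name}")
        self._starts[run_id] = (name, time.monotonic())

    def on_tool_end(self, output, *, run_id, **kwargs):
        # Record success + latency for every completed call.
        name, started = self._starts.pop(run_id, ("unknown", time.monotonic()))
        self.outcomes.append(
            {"tool": name, "ok": True, "latency_s": time.monotonic() - started}
        )
```

The key design point: because it's a callback handler, the gate sits in front of every tool call without any changes to the agent's own logic.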

Installation

```bash
pip install dominion-observatory-langchain
```

Usage

```python
from langchain.agents import initialize_agent
from dominion_observatory_langchain import ObservatoryCallbackHandler

handler = ObservatoryCallbackHandler()

# Add to your LangChain agent (tools and llm are your existing setup)
agent = initialize_agent(
    tools=tools,
    llm=llm,
    callbacks=[handler],
)
```

That's it. Every MCP tool call your agent makes is now trust-verified and logged.

Why this matters

For reliability: Your agent stops calling servers that have been failing. Trust scores are computed from real production interaction data across the entire MCP ecosystem — not static GitHub metadata.

For compliance: EU AI Act Article 12 requires automatic logging of AI agent actions. Deadline: August 2, 2026. This callback handler creates the audit trail automatically.
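For a sense of what "audit trail" means in practice, an Article 12-style record can be as simple as one JSON line per tool call. The field names below are my assumptions for illustration, not the package's actual schema:

```python
import json
import datetime

def audit_record(tool, success, latency_ms, error=None):
    """Build one JSONL audit entry for a single MCP tool call.
    Fields are illustrative; a real schema would follow the package's spec."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "success": success,
        "latency_ms": latency_ms,
        "error": error,
    })

# Append one line per call to get a replayable, timestamped log:
# with open("agent_audit.jsonl", "a") as f:
#     f.write(audit_record("search_docs", True, 120.5) + "\n")
```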

For the ecosystem: Every interaction your agent reports makes the trust scores more accurate for everyone. It's a network effect — the more agents participate, the better the data.

How it's different from Glama/Smithery scores

Glama and Smithery score servers based on static tool definition quality and metadata. That tells you if a server is well-documented. It doesn't tell you if it actually works when 1,000 agents call it simultaneously.

Dominion Observatory collects production behavioral data from real agent interactions across any MCP client. Success rates, latency distributions, error patterns — observed, not inferred.
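To make "observed, not inferred" concrete, here is one way a behavioral score could be derived from raw interaction records. The 70/30 weighting and the latency cap are illustrative assumptions on my part, not Dominion Observatory's actual formula:

```python
def behavioral_score(interactions, max_latency_ms=2000.0):
    """Score a server from observed calls.
    interactions: list of (success: bool, latency_ms: float).
    Weighting and latency cap are illustrative, not the real formula."""
    if not interactions:
        return 0.0
    # Success rate: fraction of calls that completed without error.
    success_rate = sum(ok for ok, _ in interactions) / len(interactions)
    # p95 latency: penalize servers that are slow under load.
    lat = sorted(ms for _, ms in interactions)
    p95 = lat[min(len(lat) - 1, int(0.95 * len(lat)))]
    latency_factor = max(0.0, 1.0 - p95 / max_latency_ms)
    return round(0.7 * success_rate + 0.3 * latency_factor, 3)
```

The point of scoring from interaction records rather than metadata: a server with perfect docs but a 40% failure rate scores poorly, and a scrappy server that reliably answers in 50 ms scores well.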

Open source

MIT license. The callback handler and SDK are fully open source.

AutoGen, CrewAI, and LlamaIndex integrations coming next.


Dinesh Kumar — building the behavioral trust layer for the agent economy. Singapore.
