Ronit Mehta

I Made 4 AI Agents Debate Each Other. Here's Why You Should Never Trust a Single LLM Answer Again.

GPT-4 gave me a confident answer last year.

Precise numbers. Named researchers. A specific clinical study with exact findings.

It was entirely fabricated.

Not partially wrong. Not slightly off. The study did not exist. The researchers were not real. Every single number was invented — delivered with the same calm, authoritative tone the model uses when it is reciting actual facts.

And that is the problem.


The Real Issue Is Not Hallucination

Every developer knows LLMs hallucinate. That is old news.

The real issue is there is no signal.

A system that is 100% correct and a system that is 100% wrong sound identical. Same confidence. Same tone. Same formatting. No uncertainty score. No source tracing. No audit trail showing how the model reached its conclusion.

You are asking a single system — trained to sound confident — to evaluate its own reliability.

That is like asking a witness to also be the judge, the jury, and the fact-checker.

I got tired of it. So I built something different.


What If AI Reasoned Like Science Does?

Science does not trust single sources. Peer review exists because even brilliant researchers need adversarial challenge before their conclusions are accepted.

A claim must survive scrutiny, not just sound convincing.

What if AI worked the same way?

Not one model. Not one answer. Four specialist agents with defined, conflicting roles — gathering evidence, challenging it, moderating the process, and only accepting a verdict when the probability math converges.

That is ARGUS — Agentic Research and Governance Unified System.

```shell
pip install argus-debate-ai
```

How It Works: The Four Agents

ARGUS makes four agents debate every claim before outputting a verdict.

🟦 The Moderator
Creates the debate agenda. Decides what needs investigating. Sets stopping criteria — convergence, round limits, or budget exhaustion.

🟩 The Specialist
The evidence gatherer. Runs hybrid retrieval across ingested documents and external sources. BM25 sparse search + FAISS dense vector search, fused via Reciprocal Rank Fusion. Finds the strongest supporting evidence and adds it to the debate graph with confidence scores.

🟥 The Refuter
Actively adversarial. Its only job is to break the proposition. Find counter-evidence. Expose methodological flaws. Add attack edges to the graph. It does not try to be balanced. This is intentional.

🟨 The Jury
Does not argue. Reads the final graph. Computes the Bayesian posterior. Applies calibration corrections. Only renders a verdict when the math converges — with a confidence score and structured reasoning you can audit.


The Core: Conceptual Debate Graph (C-DAG)

The underlying data structure is not a prompt chain.

It is a directed graph where every proposition, piece of evidence, and rebuttal is a node — and edges carry polarity.

```text
SUPPORTS edge = +1
ATTACKS edge  = -1
REBUTS edge   = challenges a prior attack
```

Every edge is weighted by three factors:

  • Confidence of the agent that added it
  • Relevance of the evidence to the claim
  • Quality of the source

Belief propagates through this graph in log-odds space for numerical stability:

```text
posterior = sigmoid( logit(prior) + Σ( w_i × log(LR_i) ) )

where w_i = polarity × confidence × relevance × quality
```

ARGUS does not count votes. It weights every piece of evidence by credibility and source quality before updating the posterior. One high-quality peer-reviewed study correctly outweighs five low-quality blog posts.
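The update rule above fits in a few lines of Python. This is a sketch of the formula, not ARGUS's implementation — the likelihood ratios, confidences, and quality scores below are invented for illustration.

```python
import math

def update_posterior(prior, evidence):
    """Propagate belief in log-odds space.

    evidence: list of (polarity, confidence, relevance, quality, likelihood_ratio)
    tuples, one per edge in the debate graph."""
    log_odds = math.log(prior / (1 - prior))      # logit(prior)
    for polarity, conf, rel, qual, lr in evidence:
        w = polarity * conf * rel * qual          # edge weight
        log_odds += w * math.log(lr)
    return 1 / (1 + math.exp(-log_odds))          # sigmoid back to probability

# One strong SUPPORTS edge vs. one weak ATTACKS edge (made-up numbers)
evidence = [
    (+1, 0.85, 0.9, 0.8, 3.0),  # high-quality study supporting the claim
    (-1, 0.60, 0.7, 0.5, 2.0),  # low-quality blog post attacking it
]
print(round(update_posterior(0.5, evidence), 3))
```

Note how the attack barely moves the needle: its low quality score (0.5) shrinks its weight before the log-likelihood ratio is applied, which is exactly the "one good study outweighs five bad posts" behavior.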


A Real Debate, Step by Step

Claim: "Caffeine improves long-term cognitive performance."
Prior: 0.5 (no initial bias)

Round 1 — Specialist
Finds three RCTs showing short-term attention and reaction time improvements.
Adds SUPPORTS edges. Confidence: 0.82, 0.79, 0.85.
Posterior → 0.67

Round 2 — Refuter
Finds two meta-analyses showing tolerance development and withdrawal deficits nullify long-term gains.
Adds ATTACKS edges. Confidence: 0.88, 0.91.
Posterior → 0.44

Round 3 — Specialist
Adds a 2023 longitudinal study (n=3,400) on reduced Alzheimer's risk in long-term moderate consumers.
Posterior → 0.58

Round 3 — Refuter
Rebuts: study conflates caffeine with other dietary factors. No control for socioeconomic variables. Rebuttal strength: 0.71.
Posterior → 0.52

Jury Verdict: UNCERTAIN
Posterior: 0.52
Reasoning: Short-term benefits are well-evidenced. Tolerance effects are equally documented. Long-term effects remain genuinely contested. Recommend domain-specific investigation.


That verdict took three rounds and cited six sources. Every step is recorded in a hash-chained PROV-O audit ledger. You can replay the entire debate and verify nothing was tampered with.
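A hash-chained ledger of the kind described can be sketched as follows. This is a generic illustration of the tamper-evidence property, not the actual PROV-O schema ARGUS uses; the event fields are made up.

```python
import hashlib, json

def append_entry(ledger, event):
    """Append a debate event, chaining it to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    ledger.append({"event": event, "prev": prev_hash,
                   "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(ledger):
    """Replay the chain; any edited entry breaks every hash after it."""
    prev_hash = "0" * 64
    for entry in ledger:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"round": 1, "agent": "specialist", "posterior": 0.67})
append_entry(ledger, {"round": 2, "agent": "refuter", "posterior": 0.44})
print(verify(ledger))                   # True
ledger[0]["event"]["posterior"] = 0.99  # rewrite history
print(verify(ledger))                   # False
```

Because each entry's hash covers the previous entry's hash, rewriting round 1 invalidates every entry that follows — that is what makes the replayed debate verifiable.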


The Part That Surprised Me Most

I assumed the multi-agent debate logic would be the hard part.

It was not.

Calibration was harder.

A confidence score only means something if it is accurate. A system that says "87% confident" should be right 87% of the time across many claims. Most LLM-based systems are wildly overconfident.

ARGUS addresses this with:

  • Temperature scaling on the jury's outputs
  • Expected Calibration Error (ECE) measurement
  • Brier Score tracking across debates

When you run ARGUS on a benchmark where ground truth is known, you can measure how calibrated the verdicts actually are and adjust until the confidence scores are meaningful.

A fact-checking system that says 91% confident when it should say 62% is worse than useless. It gives you false certainty.
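The two calibration metrics named above are easy to compute yourself. Here is a minimal sketch over made-up (stated confidence, actual outcome) pairs — not ARGUS's benchmark harness.

```python
def brier_score(preds):
    """Mean squared gap between stated confidence and outcome (0 = perfect)."""
    return sum((p - y) ** 2 for p, y in preds) / len(preds)

def expected_calibration_error(preds, n_bins=10):
    """Per-bin |accuracy - avg confidence|, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in preds:
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(p for p, _ in b) / len(b)
            accuracy = sum(y for _, y in b) / len(b)
            ece += len(b) / len(preds) * abs(accuracy - avg_conf)
    return ece

# Hypothetical verdicts: (stated confidence, 1 if the verdict was correct)
verdicts = [(0.91, 1), (0.88, 0), (0.62, 1), (0.55, 0), (0.95, 1), (0.70, 1)]
print(round(brier_score(verdicts), 3))
print(round(expected_calibration_error(verdicts), 3))
```

Temperature scaling then becomes an optimization problem: pick the single scalar that, applied to the jury's logits, minimizes one of these metrics on held-out claims with known ground truth.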


The CRUX Protocol: Epistemic State as a First-Class Primitive

Standard multi-agent systems pass messages.

ARGUS agents pass epistemic state.

The CRUX Protocol treats every claim as a bundle carrying a Beta distribution over confidence — not a point estimate, but a full distribution.

Beta(8, 2) and Beta(80, 20) both have a mean of 0.8. But the second agent has seen ten times more evidence. They should not be treated equally. CRUX does not treat them equally.

Agents also maintain a Credibility Ledger — a hash-chained record of past predictions versus actual outcomes, updated Elo-style. Historically well-calibrated agents get more weight in the final verdict.

When agents contradict each other, the Belief Reconciliation Protocol merges their Beta distributions via Bayesian parameter addition and issues a proof certificate showing exactly how the merge was performed.

Nothing is swept under the rug.
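The Beta(8, 2) versus Beta(80, 20) point can be made concrete. The merge below is plain pseudo-count addition — the simplest reading of "Bayesian parameter addition" — with my own function names; the exact merge rule CRUX uses may differ in details.

```python
def beta_mean(a, b):
    return a / (a + b)

def beta_variance(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Same mean of 0.8, but ten times the evidence -> far less uncertainty
print(beta_variance(8, 2))    # wide distribution: few observations
print(beta_variance(80, 20))  # narrow distribution: 10x the evidence

def reconcile(belief1, belief2):
    """Merge two Beta beliefs by adding their pseudo-counts."""
    (a1, b1), (a2, b2) = belief1, belief2
    return (a1 + a2, b1 + b2)

# Two agents disagree: ~0.8 on little evidence vs. ~0.2 on far more evidence
merged = reconcile((8, 2), (20, 80))
print(round(beta_mean(*merged), 3))  # the evidence-rich belief dominates
```

A point-estimate average would land at 0.5; pooling the pseudo-counts instead pulls the merged belief toward the agent that has actually seen more evidence.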


Try It in 10 Lines

```python
from argus import RDCOrchestrator, get_llm

llm = get_llm("openai", model="gpt-4o")

result = RDCOrchestrator(llm=llm, max_rounds=5).debate(
    "Does caffeine improve long-term cognitive performance?",
    prior=0.5
)

print(result.verdict.label)      # UNCERTAIN / SUPPORTED / REFUTED
print(result.verdict.posterior)  # e.g. 0.52
print(result.verdict.reasoning)  # Full structured reasoning
```

Works with GPT-4o, Claude, Gemini, and fully local via Ollama — no cloud required.

```shell
# For local use
pip install argus-debate-ai[ollama]
```

What ARGUS Is Not

It is not fast. A 5-round debate with hybrid retrieval takes 45–90 seconds.

For real-time applications — that is a problem.

For research, fact-checking, enterprise document analysis, legal review, medical decision support, or any domain where being confidently wrong has real consequences — the latency is worth it.


What It Supports

| Category | Details |
| --- | --- |
| LLM providers | 27+ including OpenAI, Anthropic, Gemini, Groq, Mistral, Ollama |
| Tools | 50+ including ArXiv, DuckDuckGo, Wikipedia, BigQuery, Pinecone |
| Retrieval | BM25 + FAISS + cross-encoder reranking via RRF |
| Document formats | PDF, TXT, HTML, Markdown, JSON |
| Interfaces | Python API, CLI, Streamlit sandbox, Bloomberg-style TUI |
| License | MIT |

The Honest Comparison

| Feature | Standard LLM | ARGUS |
| --- | --- | --- |
| Source tracing | ❌ | ✅ Full provenance |
| Uncertainty score | ❌ | ✅ Calibrated posterior |
| Adversarial challenge | ❌ | ✅ Dedicated Refuter agent |
| Audit trail | ❌ | ✅ Hash-chained ledger |
| Multi-model support | ❌ | ✅ 27+ providers |
| Speed | Fast | 45–90 seconds |

One Question For You

What claim would you want to put through an adversarial AI debate?

Drop it in the comments. I will run a live ARGUS debate on the most interesting one and post the full verdict — evidence nodes, posterior evolution, and jury reasoning — as a reply.

Law, medicine, finance, tech, anything. The more contested, the better.

