Benchmarks are useful, but they don't really tell me whether a prompt change or a cheaper model is good enough for my own workflow.
I kept running into that, so I ended up building a config-driven eval pipeline: run test cases, check format/schema, use a separate LLM as judge, then generate comparison reports.
What it does
3-stage pipeline:
- Inference — Run your test cases against candidate models (format and schema validation runs automatically)
- Judge — A separate LLM scores outputs on 9 metrics (accuracy, faithfulness, completeness, etc.)
- Compare — Aggregate scores into a comparison report (JSON + Markdown)
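To make the flow concrete, here's a heavily simplified sketch in plain Python. The helper names, the scoring prompt, and the report shape are all illustrative, not the actual implementation:

```python
# Illustrative sketch of the 3-stage flow; not the pipeline's real internals.
import json
import statistics
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def complete(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def run_pipeline(test_cases: list[dict], candidates: list[str], judge_model: str):
    # Stage 1: inference -- run every test case against every candidate.
    outputs = {m: [complete(m, c["prompt"]) for c in test_cases] for m in candidates}

    # Stage 2: judge -- a separate model scores each output.
    # Assumes the judge replies with a bare 1-5 number.
    scores = {
        m: [float(complete(judge_model,
                           f"Score this answer 1-5. Number only.\n\n"
                           f"Task: {c['prompt']}\n\nAnswer: {o}"))
            for c, o in zip(test_cases, outs)]
        for m, outs in outputs.items()
    }

    # Stage 3: compare -- aggregate per-model means into a report.
    report = {m: {"mean_score": statistics.mean(s)} for m, s in scores.items()}
    with open("comparison-report.json", "w") as f:
        json.dump(report, f, indent=2)
    return report
```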
Key design choices:
- 3-layer judge architecture — Format, content, and expression are evaluated in separate LLM calls with no shared context. This prevents a formatting issue from biasing content scores (first sketch after this list).
- Pairwise + absolute + hybrid modes — Compare two models head-to-head, score them independently, or both.
- Majority vote aggregation — Run the judge multiple times and take the majority verdict to reduce noise.
- Blinding — Candidate labels are randomized on every call to prevent position bias (blinding and majority voting are sketched together below).
- Consistency mode — Set `inference_repeats >= 2` and the pipeline automatically switches to measuring output stability instead of quality (sketch below).
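The layer separation is literally just independent calls. Simplified sketch; the real layer instructions live in the rubric file and are much more detailed:

```python
# Sketch: three isolated judge calls, one per layer, no shared context.
LAYERS = {
    "format":     "Score 1-5 how well the answer follows the required format.",
    "content":    "Score 1-5 the factual accuracy and completeness of the answer.",
    "expression": "Score 1-5 the clarity and tone of the writing.",
}

def judge_layers(judge, task: str, answer: str) -> dict[str, float]:
    # Each layer is a fresh call with its own prompt and no conversation
    # history, so a formatting flaw can't bleed into the content score.
    return {
        layer: float(judge(f"{instruction}\nReply with the number only.\n\n"
                           f"Task: {task}\n\nAnswer: {answer}"))
        for layer, instruction in LAYERS.items()
    }
```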
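And here's roughly what blinding plus majority vote looks like, boiled down. The `judge` callable and the prompt wording are placeholders:

```python
# Sketch: blinded pairwise judging with majority vote. Illustrative only.
import random
from collections import Counter
from typing import Callable

def pairwise_verdict(judge: Callable[[str], str], task: str,
                     out_x: str, out_y: str, n_votes: int = 3) -> str:
    votes = []
    for _ in range(n_votes):
        # Blinding: randomize which candidate is shown as "A" vs "B" on
        # every call, so position bias can't systematically favor one side.
        swapped = random.random() < 0.5
        a, b = (out_y, out_x) if swapped else (out_x, out_y)
        reply = judge(
            f"Task: {task}\n\nResponse A:\n{a}\n\nResponse B:\n{b}\n\n"
            "Which response is better? Answer with exactly 'A' or 'B'."
        )
        picked_a = reply.strip().upper().startswith("A")
        votes.append("y" if picked_a == swapped else "x")
    # Majority vote: repeated judgments damp single-call noise.
    return Counter(votes).most_common(1)[0][0]  # "x" or "y"
```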
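The intuition behind consistency mode: run the same prompt `inference_repeats` times and measure how often the runs agree. Simplified sketch; the pipeline's actual stability metric may differ:

```python
# Sketch: exact-match pairwise agreement across repeated runs of one prompt.
from itertools import combinations

def stability(outputs: list[str]) -> float:
    """Fraction of run pairs that match exactly; 1.0 means fully deterministic."""
    pairs = list(combinations(outputs, 2))
    return sum(a == b for a, b in pairs) / len(pairs) if pairs else 1.0

print(stability(["yes", "yes", "no"]))  # 1 of 3 pairs agree -> 0.333...
```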
Multi-vendor support:
- OpenAI, Azure OpenAI, Gemini (native REST), and any OpenAI-compatible endpoint (LM Studio, vLLM, etc.)
- Mix and match — e.g., judge with GPT, candidates on local models
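Mix-and-match works because everything speaks the OpenAI wire format. Simplified sketch using the official openai SDK; the endpoint URL and model names are placeholders (LM Studio defaults to port 1234):

```python
# Sketch: judge on OpenAI, candidate on a local OpenAI-compatible server.
from openai import OpenAI

judge_client = OpenAI()  # reads OPENAI_API_KEY from the environment
local_client = OpenAI(base_url="http://localhost:1234/v1", api_key="unused")

def complete(client: OpenAI, model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

candidate_out = complete(local_client, "qwen2.5-7b-instruct", "Summarize: ...")
score = complete(judge_client, "gpt-4o", f"Score 1-5, number only:\n{candidate_out}")
```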
What the output looks like
You get a `comparison-report.json` with win rates, per-metric mean scores, confidence intervals, and critical issue counts, plus a Markdown report for quick reading.
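Win rate plus a confidence interval is cheap to compute from pairwise verdicts. Here's the gist with a normal-approximation interval (simplified; not necessarily the exact method used):

```python
# Sketch: win rate with a ~95% normal-approximation confidence interval.
import math

def win_rate_ci(wins: int, total: int, z: float = 1.96) -> tuple[float, float, float]:
    p = wins / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

print(win_rate_ci(wins=37, total=50))  # ~ (0.74, 0.618, 0.862)
```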
The rubric is a standalone Markdown file with score anchors (1/3/5), bias guards, and critical issue rules. You can customize evaluation criteria by editing the rubric alone — no code changes needed.
What it's NOT
- Not a benchmark suite — you bring your own test cases
- Not a model training tool — it evaluates outputs, not weights
- Not an agent framework — it's a batch evaluation pipeline
Tech stack
Python >= 3.11, Pydantic, Typer CLI. Three steps to get running: `uv sync`, configure `.env`, then `uv run llm-judge run-all`.
Repo: archminor/llm-as-a-judge
Curious to hear how other people are handling production LLM evals.
