Prakhar Singh

Evaluating LLM code reviewers: an offline harness for precision, recall, and routing

If you cannot measure it, you cannot route it. Why offline evaluation is the difference between a code reviewer that improves over time and one the team dismisses within a sprint.

Chat evaluations are vibes-based: thumbs-up on "was this helpful?" measured against no particular ground truth. Code review needs something stricter. A reviewer that flags five real bugs and one bogus warning is useful; one that flags one real bug and five bogus warnings is dismissed within a sprint. Offline evaluation answers the question before the reviewer ships. It tells you which model to route a given change to, when to escalate, and whether the system is getting better or worse over time. Without it, every routing decision is a guess.

Building the evaluation set

Start with past pull requests that carry human accept/reject outcomes. This is your ground truth. Filter aggressively: drop comments the author dismissed within seconds, comments where the reviewer later admitted they were wrong, and comments on code that has since been deleted. What remains is a set of (diff, finding, accept/reject) triples where the human label is trustworthy enough to score against.
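A minimal sketch of the filtering step. The record fields (`dismissed_after_s`, `reviewer_retracted`, `code_still_exists`) and the ten-second dismissal cutoff are illustrative assumptions, not anything the article specifies:

```python
from dataclasses import dataclass

@dataclass
class ReviewComment:
    # Hypothetical record shape for a historical review comment.
    diff: str
    finding: str
    accepted: bool
    dismissed_after_s: float   # seconds until author dismissed (inf if never)
    reviewer_retracted: bool   # reviewer later admitted the finding was wrong
    code_still_exists: bool    # the commented-on code has not been deleted

def build_eval_set(comments):
    """Keep only (diff, finding, label) triples whose human label is trustworthy."""
    triples = []
    for c in comments:
        if c.dismissed_after_s < 10:   # reflexive dismissal: label untrustworthy
            continue
        if c.reviewer_retracted:       # reviewer was wrong, not the model
            continue
        if not c.code_still_exists:    # stale target, cannot re-score
            continue
        triples.append((c.diff, c.finding, "accept" if c.accepted else "reject"))
    return triples
```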

Three slices determine whether the set is useful. Change type: a model that catches null-safety regressions perfectly may miss concurrency bugs entirely, and vice versa. If your eval set is 90% style nits, the score tells you nothing about correctness. File ownership: different teams write different code, and an evaluator that scores well on backend services may crater on frontend components. Language: a Python reviewer handles types as optional annotations; a TypeScript reviewer treats them as structural contracts. A single aggregate score hides per-slice failures. Slice and score separately.
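Scoring per slice instead of in aggregate can be as simple as grouping results by the slice key. The dict shape of `results` is an assumption for illustration:

```python
from collections import defaultdict

def score_by_slice(results, key):
    """results: list of dicts, each with slice fields (e.g. 'change_type',
    'owner', 'language') and a boolean 'correct'. Returns accuracy per slice
    so per-slice failures are not hidden by the aggregate."""
    buckets = defaultdict(list)
    for r in results:
        buckets[r[key]].append(r["correct"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}
```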

Scoring: precision, recall, and the dimensions that matter

Precision and recall trade off against each other. In code review, precision matters more than recall. A missed real bug is an opportunity cost. A bogus flag is a trust cost, and trust collapses non-linearly: two or three bad comments in a single pull request are enough for a developer to start dismissing the bot reflexively, and once that habit forms, the reviewer's signal-to-noise ratio becomes irrelevant because nobody is reading it. Target recall above 0.7 and precision above 0.85 before any output reaches a developer.
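The gate above can be sketched directly; treating findings and ground-truth bugs as sets is a simplification (it assumes exact matching between a finding and a known bug):

```python
def precision_recall(findings, ground_truth):
    """findings: set of flagged issues; ground_truth: set of real bugs."""
    tp = len(findings & ground_truth)
    precision = tp / len(findings) if findings else 1.0
    recall = tp / len(ground_truth) if ground_truth else 1.0
    return precision, recall

def passes_gate(precision, recall):
    """The article's bar before any output reaches a developer."""
    return precision > 0.85 and recall > 0.7
```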

Multi-tier labeling. Not all findings are created equal, and collapsing everything into accept/reject loses signal. A three-tier scheme works better in practice: Hard Reject for factually wrong or harmful findings, Soft Reject for valid-but-low-value suggestions (style nits, marginal improvements, technically-correct-but-low-priority), and Accept for good catches. Three tiers let you compute precision at different strictness levels: Hard-Reject-only precision captures the rate of genuinely harmful false positives, while Soft+Hard Reject precision captures developer tolerance more broadly. The two numbers tell different stories, and both matter for calibration.
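The two precision variants fall out of the three labels directly. One modeling choice here (an assumption, not from the article): strict precision tolerates soft rejects, lenient precision counts only accepts as correct:

```python
def tiered_precision(labels):
    """labels: one of 'accept' | 'soft_reject' | 'hard_reject' per finding.
    Returns (hard-reject-only precision, soft+hard-reject precision)."""
    n = len(labels)
    accepts = labels.count("accept")
    hard = labels.count("hard_reject")
    strict = (n - hard) / n   # only genuinely harmful findings count as FPs
    lenient = accepts / n     # anything short of a good catch counts as an FP
    return strict, lenient
```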

Self-consistency over N samples. Run the same diff through the reviewer multiple times. If it produces different findings each time, the model is underspecified for the task. Low self-consistency correlates with high false-positive rate in production, and it is a cheaper signal to measure than full precision/recall against ground truth. Track it per model version and per slice.
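One way to quantify this (mean pairwise Jaccard similarity over finding sets is a choice of metric, not the article's prescription):

```python
from itertools import combinations

def self_consistency(runs):
    """runs: list of finding-sets from N passes over the same diff.
    Returns mean pairwise Jaccard similarity in [0, 1]; low values
    mean the model produces different findings each time."""
    pairs = list(combinations(runs, 2))
    if not pairs:
        return 1.0
    sims = []
    for a, b in pairs:
        union = a | b
        sims.append(len(a & b) / len(union) if union else 1.0)
    return sum(sims) / len(sims)
```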

Severity-aware precision. A bogus "use const instead of let" suggestion is an eye-roll. A bogus security or null-dereference claim is a trust-destroyer. Weighted precision, where false positives are scored by their potential impact rather than counted equally, tracks closer to actual developer tolerance than raw precision. Label severity on the evaluation set (critical, medium, low) and weight false positives accordingly: a false-critical costs 10x a false-low in the weighted score. The number that predicts whether your reviewer stays in the loop is almost never raw precision.
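A sketch of the weighted score. The article fixes only the 10x ratio between false-critical and false-low; the medium weight and the exact formula (penalizing false positives by severity weight in the denominator) are assumptions:

```python
SEVERITY_WEIGHT = {"critical": 10.0, "medium": 3.0, "low": 1.0}  # medium is illustrative

def weighted_precision(findings):
    """findings: list of (is_true_positive, severity) pairs.
    False positives are penalized by severity weight, so one bogus
    security claim hurts as much as ten bogus style nits."""
    tp = sum(1 for ok, _ in findings if ok)
    fp_penalty = sum(SEVERITY_WEIGHT[sev] for ok, sev in findings if not ok)
    total = tp + fp_penalty
    return tp / total if total else 1.0
```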

Confidence calibration. The reviewer should know when it does not know. A comment emitted with low confidence should be suppressed by the routing layer rather than surfaced with a disclaimer. Surfacing it anyway is tempting (more coverage) but the disclaimer carries no weight with a developer who already distrusts the tool. Calibrate a threshold on the offline eval set: what is the lowest confidence score at which precision stays above 0.85? Discard everything below it.
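Finding that threshold on the offline set is a small sweep. This sketch assumes each eval finding comes with a model confidence score and a true/false label:

```python
def calibrate_threshold(scored, target_precision=0.85):
    """scored: list of (confidence, is_true_positive) pairs from the eval set.
    Returns the lowest confidence cutoff at which precision on retained
    findings stays above target, or None if no cutoff achieves it."""
    for t in sorted({c for c, _ in scored}):  # lowest candidate cutoff first
        kept = [ok for c, ok in scored if c >= t]
        if kept and sum(kept) / len(kept) > target_precision:
            return t
    return None
```

Everything the production reviewer emits below the returned cutoff gets suppressed rather than surfaced with a disclaimer.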

From evaluation to routing

Offline evaluation is not a one-time gate. It is the mechanism that drives routing decisions in production. A classification router sends simple changes to a cheap fast model and complex changes to a frontier model, but the classification policy itself needs evaluation: what threshold defines "complex"? A fallback chain escalates from cheap to expensive when self-consistency drops, but the escalation threshold needs evaluation too. Both thresholds are hyperparameters, and offline eval is how you tune them.
Evaluation-driven A/B routing ties this together. Maintain an offline evaluation set, score every model variant against it on the relevant slices, and route production traffic to whichever variant scores highest per slice. When a new model ships, the evaluation set tells you whether it is an upgrade or a regression before any user sees it. When a slice degrades, traffic shifts back automatically. This is the only routing strategy that adapts to model updates without manual intervention.
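The per-slice routing table described above reduces to an argmax over variants. The nested-dict score shape is an assumption:

```python
def build_routing_table(slice_scores):
    """slice_scores: {variant_name: {slice_name: eval_score}}.
    Routes each slice to whichever variant scores highest on it,
    so a new model wins traffic only on slices where it is an upgrade."""
    slices = {s for scores in slice_scores.values() for s in scores}
    return {
        s: max(slice_scores, key=lambda v: slice_scores[v].get(s, 0.0))
        for s in slices
    }
```

Re-running this after every model release is what makes the routing adapt without manual intervention: a regressed slice simply stops winning the argmax.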
Ensemble disagreement is itself a routing signal. When a cheap model signs off on a change but a frontier model flags something, the disagreement is worth surfacing regardless of which model is "correct." Disagreement rate between model pairs, tracked over time on the eval set, often catches regressions faster than raw precision shifts: if two models that agreed 95% of the time last week now agree 80%, something changed, and the eval set alone may not tell you what.
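A simple version of the disagreement metric, counting a pair as agreeing on a diff when both models flag it or both stay silent (finer-grained matching of individual findings is left out of this sketch):

```python
def disagreement_rate(findings_a, findings_b):
    """findings_a, findings_b: per-diff finding sets from two models,
    aligned over the same eval set. Returns the fraction of diffs where
    one model flags something and the other does not."""
    assert len(findings_a) == len(findings_b)
    disagree = sum(
        1 for a, b in zip(findings_a, findings_b) if bool(a) != bool(b)
    )
    return disagree / len(findings_a)
```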

This evaluation harness feeds the routing decisions described in the companion article on agentic code review in production: offline evaluation scores drive the model router and fallback chain thresholds directly.

The closed feedback loop

The offline evaluation set decays. The codebase evolves, old patterns become obsolete, new patterns emerge. Every accepted or dismissed comment in production must feed back into the ground truth set. A dismissed comment becomes a negative example: same diff, same finding, but ground truth equals reject. Next time the model proposes something similar, the offline eval catches it before a developer sees it. An accepted comment becomes a positive example and reinforces the pattern.
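The feedback step itself is small: each production outcome overwrites or adds a labeled example, keyed by (diff, finding) so a repeated proposal picks up the latest human verdict. Representing the eval set as a dict here is an implementation choice:

```python
def feed_back(eval_set, diff, finding, accepted):
    """Fold a production outcome into the ground-truth set: a dismissal
    becomes a negative example, an acceptance a positive one.
    eval_set: dict mapping (diff, finding) -> 'accept' | 'reject'."""
    eval_set[(diff, finding)] = "accept" if accepted else "reject"
    return eval_set
```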

Retrieval-augmented generation over the repository's past review threads can surface similar past comments, making it easier to spot when the model is proposing a finding that a human reviewer already dismissed under slightly different wording.

This feedback loop is where most teams underinvest. They build the eval set once, ship the reviewer, and treat the eval as a static artifact. The reviewer plateaus, the team stops trusting it, and the project is shelved. The loop is what separates a reviewer that improves release over release from one that is disabled within a quarter. Without it, the false-positive rate is whatever the underlying model happens to produce. With it, the rate trends down per release.

Open challenges

Ground truth drift. An evaluation set built from last year's pull requests scores last year's patterns. As the codebase adds new modules, changes languages, or adopts new frameworks, the ground truth ages out. Periodic re-labeling, sampled from recent production dismissals and accepts, keeps the set relevant. Freshness weighting (recent examples count more than stale ones) is a lighter-weight alternative.
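Freshness weighting can be sketched as exponential decay over example age; the 90-day half-life is an illustrative parameter, not a recommendation from the article:

```python
def freshness_weighted_score(examples, half_life_days=90.0):
    """examples: list of (correct: bool, age_days: float).
    An example half_life_days old counts half as much as a fresh one,
    so stale patterns fade out of the score without re-labeling."""
    num = den = 0.0
    for correct, age in examples:
        w = 0.5 ** (age / half_life_days)
        num += w * correct
        den += w
    return num / den if den else 0.0
```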


This article was originally published on prakharsingh.github.io/notes/evaluating-llm-code-reviewers/
