<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Prakhar Singh</title>
    <description>The latest articles on DEV Community by Prakhar Singh (@prakharsingh_17).</description>
    <link>https://dev.to/prakharsingh_17</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3926778%2F3a92c02a-fd96-41a9-b097-c20d2501809f.jpeg</url>
      <title>DEV Community: Prakhar Singh</title>
      <link>https://dev.to/prakharsingh_17</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prakharsingh_17"/>
    <language>en</language>
    <item>
      <title>Agentic code review in production: orchestration, evaluation, and the cost of being wrong</title>
      <dc:creator>Prakhar Singh</dc:creator>
      <pubDate>Tue, 12 May 2026 09:29:37 +0000</pubDate>
      <link>https://dev.to/prakharsingh_17/agentic-code-review-in-production-orchestration-evaluation-and-the-cost-of-being-wrong-3090</link>
      <guid>https://dev.to/prakharsingh_17/agentic-code-review-in-production-orchestration-evaluation-and-the-cost-of-being-wrong-3090</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;What "agentic" actually buys you over a linter, why single-model approaches stall, and why false positives — not raw model capability — determine whether the system stays in the loop.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Agentic&lt;/em&gt; has become a marketing flag, but in code review it carries a precise technical meaning: the system, not the user, decides which tools to invoke against a change, in what order, and how to weight their findings. A linter runs a fixed pipeline. A single-pass language-model reviewer reads the diff and emits comments end-to-end. An agentic reviewer chooses between a compiler, a type checker, a test runner, a secret scanner, a static analyzer, and one or more language-model calls — then arbitrates their disagreements before surfacing a review comment.&lt;/p&gt;

&lt;p&gt;The model is one tool among several. The system's value is in the arbitration policy that decides which findings reach the developer.&lt;/p&gt;

&lt;h2&gt;The orchestration problem&lt;/h2&gt;

&lt;p&gt;Single-model approaches stall on three axes that pull in different directions: accuracy, latency, and cost. A frontier model gives the strongest multi-step reasoning on a non-trivial change but typically adds several seconds of latency and an order-of-magnitude cost premium per call; a small open-weights model returns in under a second but misses subtle invariants. Three routing strategies cover most of the production space:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task-classification routing.&lt;/strong&gt; A lightweight classifier (a smaller model or a rules layer) decides which downstream model handles a request. Style nits, dead-code removal, and import-order checks go to a fast cheap model; logic changes and concurrency reasoning go to a stronger one. This works as long as the classifier is calibrated; misclassification lands hard-reasoning prompts on under-powered models and produces confident-sounding nonsense.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fallback chains.&lt;/strong&gt; Try the cheap model first; if self-consistency across N samples is low — or a cheap verifier disagrees — escalate. This is robust against classifier drift but doubles cost on the long tail.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evaluation-driven A/B routing.&lt;/strong&gt; Maintain an offline evaluation set of past pull requests with human accept/reject outcomes; score model variants on precision and recall against that ground truth and route traffic to whichever variant scores highest on the relevant slice. This is the only strategy that adapts when a model is updated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, production systems combine all three: classify, fall back on low confidence, and let offline evaluations reshape routing weights every release cycle.&lt;/p&gt;
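&lt;p&gt;A minimal sketch of that combined policy, assuming a rules-layer classifier, two illustrative model endpoints, and self-consistency across samples as the escalation signal; the names and thresholds are placeholders, not any product's API:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Combined routing: classify first, sample the cheap model, escalate on
# low self-consistency. Model names and thresholds are illustrative.
from collections import Counter

CHEAP_MODEL = "small-reviewer"      # fast, low-cost endpoint (assumed)
STRONG_MODEL = "frontier-reviewer"  # slower, stronger endpoint (assumed)

def classify(diff: str) -&gt; str:
    """Rules layer standing in for a lightweight classifier model."""
    hard_markers = ("lock", "async", "transaction", "retry", "cache")
    return "hard" if any(m in diff.lower() for m in hard_markers) else "easy"

def self_consistency(findings: list[str]) -&gt; float:
    """Fraction of samples agreeing with the most common finding."""
    top_count = Counter(findings).most_common(1)[0][1]
    return top_count / len(findings)

def route_review(diff: str, call_model, n_samples: int = 3,
                 agreement_floor: float = 0.67) -&gt; str:
    # Task-classification routing: hard changes go straight to the strong model.
    if classify(diff) == "hard":
        return call_model(STRONG_MODEL, diff)
    findings = [call_model(CHEAP_MODEL, diff) for _ in range(n_samples)]
    if self_consistency(findings) &gt;= agreement_floor:
        return Counter(findings).most_common(1)[0][0]
    # Fallback chain: low self-consistency escalates to the strong model.
    return call_model(STRONG_MODEL, diff)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The offline evaluation layer sits outside this function: it is what adjusts &lt;code&gt;agreement_floor&lt;/code&gt; and the classifier's rules between releases.&lt;/p&gt;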

&lt;h2&gt;Grounding with static analysis and retrieval&lt;/h2&gt;

&lt;p&gt;A pure language-model review hallucinates fixes — proposing API calls that do not exist, citing version-specific behavior incorrectly, suggesting refactors that break other call sites the model never saw. Two anchors push the hallucination rate down.&lt;/p&gt;

&lt;p&gt;First, deterministic static analyzers run in parallel with the language model. Type errors, null dereferences, missing &lt;code&gt;await&lt;/code&gt;, unused imports — these are cheap, deterministic, and not worth a model call. The agent uses their output as ground truth and frames its review around facts the static analyzer surfaced, not facts the model invented.&lt;/p&gt;
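&lt;p&gt;A sketch of that anchoring step, using ruff as a stand-in analyzer and assuming its JSON finding shape; the prompt framing is illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Deterministic findings anchor the model's review. The analyzer choice and
# the prompt wording are assumptions, not a specific product's pipeline.
import json
import subprocess

def run_analyzer(paths: list[str]) -&gt; list[dict]:
    """Run ruff and parse its JSON findings (shape assumed from recent versions)."""
    out = subprocess.run(
        ["ruff", "check", "--output-format", "json", *paths],
        capture_output=True, text=True,
    )
    return json.loads(out.stdout or "[]")

def build_review_prompt(diff: str, findings: list[dict]) -&gt; str:
    facts = "\n".join(
        f"- {f['filename']}:{f['location']['row']} {f['code']} {f['message']}"
        for f in findings
    )
    return (
        "Confirmed findings from static analysis (treat as ground truth):\n"
        + (facts or "- none") + "\n\n"
        "Review the diff below. Do not restate the findings above; explain "
        "their impact, and flag only additional issues you can ground in the "
        "diff itself.\n\n" + diff
    )
&lt;/code&gt;&lt;/pre&gt;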

&lt;p&gt;Second, retrieval-augmented generation over the repository itself: prior review threads, commit messages, and the project's design documents. Most code review observations are not novel. The same patterns get flagged across files — null-safety regressions, missing index migrations, inconsistent error wrapping. Retrieving prior review comments scoped to the touched files, modules, or owners shifts the model from generic best-practice advice to comments that match the codebase's established conventions.&lt;/p&gt;
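&lt;p&gt;And a minimal retrieval sketch along those lines, assuming a pre-embedded store of prior review comments and an &lt;code&gt;embed&lt;/code&gt; function supplied by the caller:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Retrieve prior review comments scoped to the touched files, then rank by
# similarity to the diff. Store layout and embedding model are assumptions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -&gt; float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve_prior_comments(diff_text, touched_paths, comment_store, embed, k=5):
    """comment_store: iterable of dicts with 'path', 'body', 'embedding' keys."""
    # Hard scope first: only comments on files in the same directories
    # (modules) as the touched paths.
    touched_dirs = {p.rsplit("/", 1)[0] for p in touched_paths}
    scoped = [c for c in comment_store
              if any(c["path"].startswith(d) for d in touched_dirs)]
    query = embed(diff_text)
    ranked = sorted(scoped, key=lambda c: cosine(query, c["embedding"]),
                    reverse=True)
    return [c["body"] for c in ranked[:k]]
&lt;/code&gt;&lt;/pre&gt;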

&lt;h2&gt;False positives as the dominant cost&lt;/h2&gt;

&lt;p&gt;Developer trust in an automated reviewer collapses non-linearly: a handful of bad comments is usually enough for the team to start dismissing the bot reflexively. The arithmetic is unforgiving: a 5% false-positive rate at twenty review comments per pull request is one bogus flag per PR. Within a sprint, the team stops reading the bot's output.&lt;/p&gt;
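&lt;p&gt;Spelled out, with the sprint volume as an assumed figure:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# The 5% rate and 20 comments per PR come from the text; PR volume per
# sprint is an assumed team velocity.
false_positive_rate = 0.05
comments_per_pr = 20
prs_per_sprint = 15  # assumption

bogus_per_pr = false_positive_rate * comments_per_pr      # 1.0 per PR
bogus_per_sprint = bogus_per_pr * prs_per_sprint          # 15 per sprint
print(f"{bogus_per_pr:.1f} bogus flags per PR, {bogus_per_sprint:.0f} per sprint")
&lt;/code&gt;&lt;/pre&gt;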

&lt;p&gt;Three controls keep the rate manageable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Confidence thresholding&lt;/strong&gt; — never surface a comment below a calibrated threshold, even if the model is willing to speak.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dedup against historical dismissals&lt;/strong&gt; — if a reviewer dismissed an analogous comment six months ago, the same shape of comment on the same file is suspect this time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A closed feedback loop&lt;/strong&gt; — every dismissed or accepted comment becomes training signal for the next routing decision and threshold update.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The third is where most teams underinvest. Without the loop the false-positive rate is whatever the underlying model happens to produce. With it, the rate trends down per release.&lt;/p&gt;
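&lt;p&gt;One way to wire the first and third controls together is a per-category threshold that moves with reviewer outcomes. A minimal sketch; the categories, step size, and bounds are illustrative assumptions:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Confidence gating plus a closed feedback loop: each accept or dismissal
# nudges the surfacing bar for that comment category.
from collections import defaultdict

class ThresholdStore:
    def __init__(self, default: float = 0.70, step: float = 0.02,
                 floor: float = 0.50, ceiling: float = 0.95):
        self.thresholds = defaultdict(lambda: default)
        self.step, self.floor, self.ceiling = step, floor, ceiling

    def should_surface(self, category: str, confidence: float) -&gt; bool:
        return confidence &gt;= self.thresholds[category]

    def record_outcome(self, category: str, accepted: bool) -&gt; None:
        # Dismissals raise the bar for that category; accepts lower it.
        delta = -self.step if accepted else self.step
        updated = self.thresholds[category] + delta
        self.thresholds[category] = max(self.floor, min(self.ceiling, updated))

# Usage: gate before posting, then feed the reviewer's verdict back in.
store = ThresholdStore()
if store.should_surface("null-safety", confidence=0.74):
    pass  # post the comment
store.record_outcome("null-safety", accepted=False)  # dismissed: bar rises
&lt;/code&gt;&lt;/pre&gt;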

&lt;h2&gt;Compliance as a routing constraint&lt;/h2&gt;

&lt;p&gt;Compliance is not a bolt-on check. It belongs at the same layer as task classification — a first-class routing input, not a separate stage tacked on at the end.&lt;/p&gt;

&lt;p&gt;Code touching regulated data — protected health information, payment card numbers, EU resident identifiers — has to route differently. GDPR shapes both transfer (no diffs leaving the controller's processors without a Data Processing Agreement) and retention (logged prompts and completions are themselves processing activity). HIPAA obligations — Business Associate Agreements and minimum-necessary access — determine which model endpoints are eligible to process diffs containing PHI. PCI-DSS controls dictate cardholder-data redaction before model invocation. SOC 2 controls dictate operational guarantees on the reviewer service itself. Bolting any of this on after the fact produces gaps that surface during the first audit, not during development.&lt;/p&gt;
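&lt;p&gt;A sketch of compliance as a routing input, with the data classifications and endpoint attributes as illustrative assumptions rather than legal guidance:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Eligible endpoints are filtered by the data the diff touches, and
# cardholder data is redacted before any model call. All names are assumed.
import re
from dataclasses import dataclass

PAN_RE = re.compile(r"\b\d{13,19}\b")  # crude card-number pattern, illustrative

@dataclass
class Endpoint:
    name: str
    baa_signed: bool   # HIPAA Business Associate Agreement in place
    dpa_signed: bool   # GDPR Data Processing Agreement in place
    region: str

def eligible_endpoints(endpoints, data_tags, controller_region="eu"):
    candidates = list(endpoints)
    if "phi" in data_tags:
        candidates = [e for e in candidates if e.baa_signed]
    if "eu_personal_data" in data_tags:
        candidates = [e for e in candidates
                      if e.dpa_signed and e.region == controller_region]
    return candidates

def prepare_diff(diff: str, data_tags: set) -&gt; str:
    if "pan" in data_tags:  # cardholder data: redact before model invocation
        return PAN_RE.sub("[REDACTED-PAN]", diff)
    return diff
&lt;/code&gt;&lt;/pre&gt;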

&lt;h2&gt;Closing&lt;/h2&gt;

&lt;p&gt;Agentic code review is a coordination system with a language model embedded in it, not a language model with tools attached. The hard problems are not in the model — they are in the routing, the grounding, the evaluation, and the feedback loops that decide what the system does next time. Teams that treat the model as the product underinvest in everything that actually determines whether the product stays in use.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://prakharsingh.github.io/notes/agentic-code-review/" rel="noopener noreferrer"&gt;prakharsingh.github.io/notes/agentic-code-review&lt;/a&gt; on 12 May 2026. I'm &lt;a href="https://prakharsingh.github.io/" rel="noopener noreferrer"&gt;Prakhar Singh&lt;/a&gt;, Founding Engineer at Devzy AI building an agentic AI system for automated code review across CLI, PR, and IDE surfaces.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>codereview</category>
      <category>llm</category>
      <category>devtools</category>
    </item>
  </channel>
</rss>
