<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kyberkt-Labs</title>
    <description>The latest articles on DEV Community by Kyberkt-Labs (@rahul_sharan_ae40bfad2bf3).</description>
    <link>https://dev.to/rahul_sharan_ae40bfad2bf3</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3897793%2Fa6207134-c7a5-4afa-bad5-d98f326bfc57.png</url>
      <title>DEV Community: Kyberkt-Labs</title>
      <link>https://dev.to/rahul_sharan_ae40bfad2bf3</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rahul_sharan_ae40bfad2bf3"/>
    <language>en</language>
    <item>
      <title>Stop trying to replace code reviewers. Brief them.</title>
      <dc:creator>Kyberkt-Labs</dc:creator>
      <pubDate>Sat, 25 Apr 2026 16:49:31 +0000</pubDate>
      <link>https://dev.to/rahul_sharan_ae40bfad2bf3/stop-trying-to-replace-code-reviewers-brief-them-2lcm</link>
      <guid>https://dev.to/rahul_sharan_ae40bfad2bf3/stop-trying-to-replace-code-reviewers-brief-them-2lcm</guid>
      <description>&lt;h1&gt;
  
  
  Stop trying to replace code reviewers. Brief them.
&lt;/h1&gt;

&lt;p&gt;In the last 18 months, AI changed how code gets &lt;em&gt;written&lt;/em&gt;. It also broke how it gets &lt;em&gt;reviewed&lt;/em&gt; — and most of the industry is fixing the wrong thing.&lt;/p&gt;

&lt;p&gt;Pull requests today are bigger, more frequent, and harder to read. Engineers ship in hours what used to take days. The keyboard is no longer the bottleneck. &lt;strong&gt;The reviewer is.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The dev-tools industry's response has been to point AI at the same problem — but in the wrong direction.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "AI reviewer" trend, and why it stalls out
&lt;/h2&gt;

&lt;p&gt;Open any PR with an AI review bot installed and you'll see the same shape: a flurry of inline comments. &lt;em&gt;"Consider extracting this method." "Add a null check here." "Possible race condition."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Some are useful. Most are noise. A few are wrong. None of them tell you the thing you actually need to know on a 26-file PR:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;What is this change really doing? Where do I look first? What's likely to break?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The premise of these tools is that the AI is the reviewer. The human's job is to read the bot's comments and rubber-stamp.&lt;/p&gt;

&lt;p&gt;That premise is wrong.&lt;/p&gt;

&lt;p&gt;On any PR that genuinely matters — the migration, the perf overhaul, the security-sensitive endpoint — the reviewer is the one who has to make the judgment call. The AI's job isn't to take that call away. It's to &lt;strong&gt;prepare the reviewer to make it well.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What "prepared" actually means
&lt;/h2&gt;

&lt;p&gt;Imagine your most senior engineer is about to walk into a code review for a PR they've never seen. What do they want, in the first 60 seconds?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Intent&lt;/strong&gt; — what is this PR actually trying to do?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture&lt;/strong&gt; — what's the shape of the change?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reading order&lt;/strong&gt; — of these 26 files, which 3 explain everything else?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk surface&lt;/strong&gt; — what's the most likely place this breaks?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Questions for the author&lt;/strong&gt; — what's worth asking before approving?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's a briefing. It's what a good tech lead gives a reviewer who's about to go cold into someone else's diff. It's not opinions. It's &lt;em&gt;context&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;That's what we built. We call it &lt;strong&gt;Cicero&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cicero, in one PR
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/Kyberkt-Labs/Cicero" rel="noopener noreferrer"&gt;https://github.com/Kyberkt-Labs/Cicero&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We just ran Cicero on a real 26-file PR from a working repository. Two threads were bundled into one change: a major performance overhaul of an MVT velocity-compute pipeline, plus a fix to an anonymous-share-link viewer flow. The kind of PR that makes a senior reviewer sigh.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;65 seconds. $0.44. One briefing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's the actual structure it produced:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Intent&lt;/strong&gt; — one paragraph identifying that the PR was actually two independent threads bundled together, and flagging that as something to call out to the author.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture&lt;/strong&gt; — 6 bullets, each pointing at a specific module, naming the contract change, and explaining how the rest of the diff implements it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reading order&lt;/strong&gt; — 7 files in dependency order. The worker contract first, then helpers, then consumers, then the unrelated map-init fix last. Not alphabetical. Not file-tree order. The order a human should actually read them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk surface&lt;/strong&gt; — 7 specific concerns, each tied to actual code locations: a worker/TypeScript source-of-truth drift problem, a date-range cache short-circuit that assumes invariants the data producer might not honor, a referential-stability change that could cascade through downstream &lt;code&gt;useMemo&lt;/code&gt;s.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Questions for the author&lt;/strong&gt; — 6 questions ready to paste into the PR conversation, including the bundling concern, a missing-localStorage-key migration, and a parity check between two near-duplicate code paths.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A reviewer reads that in 2 minutes, then walks into the diff with a plan.&lt;/p&gt;

&lt;p&gt;In our experience, total review time drops by roughly 40%. And the actual review gets &lt;em&gt;better&lt;/em&gt; — because the reviewer isn't filtering through bot noise, they're inspecting against a checklist they can trust.&lt;/p&gt;
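&lt;p&gt;To make the shape concrete, here is a minimal sketch of how a briefing with those five sections could be represented and rendered. The field names are hypothetical illustrations, not Cicero's actual schema.&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Briefing:
    """One reviewer briefing for a single PR (hypothetical schema)."""
    intent: str                # what the PR is actually trying to do
    architecture: list[str]    # shape of the change, one bullet per module
    reading_order: list[str]   # file paths in dependency order, not tree order
    risk_surface: list[str]    # likely break points, tied to code locations
    questions: list[str]       # ready to paste into the PR conversation

    def render(self) -> str:
        """Render the briefing as plain text for the reviewer."""
        lines = ["## Intent", self.intent]
        lines += ["## Architecture"] + [f"- {a}" for a in self.architecture]
        lines += ["## Reading order"]
        lines += [f"{i}. {path}" for i, path in enumerate(self.reading_order, 1)]
        lines += ["## Risk surface"] + [f"- {r}" for r in self.risk_surface]
        lines += ["## Questions for the author"] + [f"- {q}" for q in self.questions]
        return "\n".join(lines)
```

&lt;p&gt;The point of the structure is the reading order: it is an ordered list, not a set, because the whole value is telling the reviewer what to open first.&lt;/p&gt;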

&lt;h2&gt;
  
  
  Why this is better than what's out there
&lt;/h2&gt;

&lt;p&gt;Three differences that matter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. We don't write in the PR.&lt;/strong&gt;&lt;br&gt;
Bot comments live forever in the conversation thread, even when they were wrong. Cicero's output is for the reviewer's eyes, on the side. Cleaner PR history. No "actually that's a false positive" replies clogging the conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. We don't pretend to know if the code is correct.&lt;/strong&gt;&lt;br&gt;
We tell you where to look. The judgment stays with the human — which is the only place it can responsibly stay until AI is good enough to be trusted on the high-stakes calls. (It isn't, yet.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. We're hallucination-disciplined.&lt;/strong&gt;&lt;br&gt;
The briefing comes with a path-existence guard: any file path the model mentions that doesn't exist in the actual diff gets stripped and replaced with &lt;code&gt;[unverified path]&lt;/code&gt;. You'll never click a Cicero suggestion and land on a file the model invented. If you've used AI tools that confidently cite functions and files that don't exist — you know how rare and load-bearing that property is.&lt;/p&gt;

&lt;p&gt;Other tools are betting that AI can replace the reviewer.&lt;/p&gt;

&lt;p&gt;We're betting that AI can make the reviewer faster, sharper, and more confident — without taking the call away from them.&lt;/p&gt;
&lt;h2&gt;
  
  
  The bigger pattern
&lt;/h2&gt;

&lt;p&gt;The "AI does the work for you" framing has a ceiling. It works for tasks where the cost of being wrong is low and the cost of verification is high (autocomplete, doc summarization, glue code). It breaks for tasks where the cost of being wrong is &lt;em&gt;high&lt;/em&gt; and the cost of verification is &lt;em&gt;also&lt;/em&gt; high — code review, system design, security audit, anything where the human is responsible for the outcome.&lt;/p&gt;

&lt;p&gt;For those tasks, the right framing is the opposite: &lt;strong&gt;AI as the prep-work specialist.&lt;/strong&gt; Do the reading, the orientation, the dependency-tracing, the surfacing of likely problems — then hand the &lt;em&gt;informed&lt;/em&gt; human the keyboard.&lt;/p&gt;

&lt;p&gt;Cicero is the first version of that idea applied to PR review. The same framing maps onto incident response, design review, and security triage — anywhere a senior person has to make a high-stakes call inside a context they don't already hold.&lt;/p&gt;
&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;Cicero V0 is the engine — works as a CLI today, with a Chrome extension landing next. Bring your own Anthropic API key and a GitHub token, point it at any PR you can read, and you'll have a briefing in under a minute.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cicero brief https://github.com/your-org/repo/pull/123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you find yourself asking &lt;em&gt;"where do I even start?"&lt;/em&gt; on a PR more than once a week — try it on the next one.&lt;/p&gt;

&lt;p&gt;Two minutes of reading the briefing should be enough to tell you whether your code reviews would benefit from a co-pilot that does the prep work and stays out of the way.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built for the AI-coding era, where the reviewer is the bottleneck — not the keyboard.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devbugsmash</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
