<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jamie</title>
    <description>The latest articles on DEV Community by Jamie (@tidusjar).</description>
    <link>https://dev.to/tidusjar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F894%2F6642220.jpeg</url>
      <title>DEV Community: Jamie</title>
      <link>https://dev.to/tidusjar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tidusjar"/>
    <language>en</language>
    <item>
      <title>We tried to measure AI's actual impact on our codebase. Here's why it's so hard.</title>
      <dc:creator>Jamie</dc:creator>
      <pubDate>Tue, 07 Apr 2026 10:11:54 +0000</pubDate>
      <link>https://dev.to/tidusjar/we-tried-to-measure-ais-actual-impact-on-our-codebase-heres-why-its-so-hard-5h97</link>
      <guid>https://dev.to/tidusjar/we-tried-to-measure-ais-actual-impact-on-our-codebase-heres-why-its-so-hard-5h97</guid>
      <description>&lt;p&gt;Everyone's seen the stats. "55% faster." "40% more code." "3x productivity." They end up in pitch decks and team retrospectives, and nobody really questions them because the conclusion &lt;em&gt;feels&lt;/em&gt; right -- AI tools do feel helpful when you're using them.&lt;/p&gt;

&lt;p&gt;But when we actually tried to find those numbers in real commit histories and PR patterns, we hit a wall.&lt;/p&gt;

&lt;p&gt;Not because AI isn't changing how people write code -- it clearly is. But because measuring &lt;em&gt;how much&lt;/em&gt;, and whether it's actually good, turns out to be a genuinely messy problem.&lt;/p&gt;

&lt;h2&gt;The obvious metric is the wrong one&lt;/h2&gt;

&lt;p&gt;The first instinct is output. More commits, more PRs, more lines of code. And yes, those numbers do go up.&lt;/p&gt;

&lt;p&gt;But so do some less flattering ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Average commit size grows -- more lines per commit, which correlates with harder-to-review changes&lt;/li&gt;
&lt;li&gt;PR cycle times don't improve, and sometimes get worse -- reviewers are spending longer on more code written in a less familiar style&lt;/li&gt;
&lt;li&gt;Commit message quality drops -- more "update logic", less actual context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this means AI is making things worse. It means raw output metrics don't capture what's actually happening.&lt;/p&gt;
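&lt;p&gt;As a rough illustration of the first point, here's a minimal sketch of how you could track average commit size from &lt;code&gt;git log --shortstat --oneline&lt;/code&gt; output. The function name and the regex-based parsing are my own, not from any particular tool, and real stat lines vary slightly between git versions:&lt;/p&gt;

```python
import re

def avg_commit_size(shortstat_log: str) -> float:
    """Average lines touched (insertions + deletions) per commit,
    parsed from `git log --shortstat --oneline` output."""
    sizes = []
    for line in shortstat_log.splitlines():
        ins = re.search(r"(\d+) insertion", line)
        dels = re.search(r"(\d+) deletion", line)
        if ins or dels:
            n_ins = int(ins.group(1)) if ins else 0
            n_del = int(dels.group(1)) if dels else 0
            sizes.append(n_ins + n_del)
    return sum(sizes) / len(sizes) if sizes else 0.0
```

&lt;p&gt;Run it over a pre-adoption window and a post-adoption window and compare: if the average jumps, your reviewers are absorbing that growth.&lt;/p&gt;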

&lt;h2&gt;The before/after problem&lt;/h2&gt;

&lt;p&gt;Comparing pre- and post-AI adoption sounds straightforward. In practice, you're also comparing different project phases, team compositions, architectural decisions, and a dozen other variables that moved at the same time. Almost every "AI made us X% faster" claim, when you look at the methodology, is comparing an enthusiastic adoption period against an uncontrolled baseline.&lt;/p&gt;

&lt;h2&gt;What you can actually observe&lt;/h2&gt;

&lt;p&gt;After looking at a lot of repositories, the measurable impacts fall into a few categories:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code homogeneity increases.&lt;/strong&gt; AI-assisted codebases tend to become more internally consistent -- the same patterns repeat. Good for cognitive load, but the same mistakes replicate everywhere too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review burden shifts.&lt;/strong&gt; The bottleneck doesn't disappear, it moves. Code output goes up, but someone still has to review it. AI-generated code tends to look correct even when it isn't, which makes subtle bugs harder to catch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test coverage quietly degrades.&lt;/strong&gt; AI-assisted PRs consistently ship proportionally less test code than production code. The feature lands fast, the tests get deferred.&lt;/p&gt;

&lt;h2&gt;The uncomfortable ones&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Knowledge erosion.&lt;/strong&gt; We've seen repositories where contributor breadth increases but contributor depth decreases -- bus factor metrics that look healthy, even though none of the contributors could confidently explain a module without re-reading it. The metric looks fine. The codebase is fragile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architectural drift.&lt;/strong&gt; AI-generated code is only as good as the context it had when generating. Over months, this creates "dialects" within the same repo -- different patterns for the same operations, because different sessions had different context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The productivity paradox.&lt;/strong&gt; More code, but not proportionally more capability. The codebase grows faster than the product does. AI accelerates accidental complexity because the cost of writing code drops while the cost of understanding and maintaining it stays constant.&lt;/p&gt;

&lt;h2&gt;What to actually track&lt;/h2&gt;

&lt;p&gt;If a single metric or simple before/after comparison won't cut it, here's what we think gives a more honest picture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PR cycle time (not PR volume)&lt;/li&gt;
&lt;li&gt;Review comment density (not approval speed)&lt;/li&gt;
&lt;li&gt;Revert and hotfix rates (not commit counts)&lt;/li&gt;
&lt;li&gt;Test-to-production code ratio over time&lt;/li&gt;
&lt;li&gt;Contributor depth per module, not just breadth&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And watch for: commit sizes trending upward without corresponding feature complexity, declining review thoroughness as volume increases, and the growing gap between code output and test coverage.&lt;/p&gt;




&lt;p&gt;AI coding tools are genuinely useful -- we use them ourselves. But the rush to quantify impact has produced a lot of misleading numbers, and optimising for the wrong metrics leads you somewhere you don't want to be.&lt;/p&gt;

&lt;p&gt;Curious whether others are tracking any of this, or whether most teams are still pointing at velocity and calling it done. What signals have you found actually tell you something meaningful?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>devtools</category>
      <category>codequality</category>
    </item>
    <item>
      <title>How to Spot a Burning-Out Codebase Before It Burns Out Your Team</title>
      <dc:creator>Jamie</dc:creator>
      <pubDate>Fri, 27 Mar 2026 11:45:38 +0000</pubDate>
      <link>https://dev.to/tidusjar/how-to-spot-a-burning-out-codebase-before-it-burns-out-your-team-5df3</link>
      <guid>https://dev.to/tidusjar/how-to-spot-a-burning-out-codebase-before-it-burns-out-your-team-5df3</guid>
      <description>&lt;h2&gt;The Codebase Knows First&lt;/h2&gt;

&lt;p&gt;It usually starts with a vague feeling. Standups feel heavier. Estimates keep slipping. Engineers who used to volunteer for hard problems are suddenly very interested in being assigned simple ones.&lt;/p&gt;

&lt;p&gt;By the time it surfaces as a people problem — someone handing in notice, a team missing a milestone — the warning signs were already sitting quietly in the commit history, often for months.&lt;/p&gt;

&lt;p&gt;Codebases don't burn out. But they accumulate the conditions that burn out the people working inside them. Learning to read those signals is one of the highest-leverage things an engineering manager can do.&lt;/p&gt;

&lt;h2&gt;The Codebase as a Team Health Mirror&lt;/h2&gt;

&lt;p&gt;A repository's activity patterns are a direct reflection of how a team is functioning. Commit cadence, PR review turnaround, contributor distribution — these aren't just engineering hygiene metrics. They're a record of how your team is spending its energy, who's carrying the load, and where friction is building.&lt;/p&gt;

&lt;p&gt;The advantage of looking here is that it's objective and early. People are often reluctant to surface burnout or frustration until it's already critical. The repo doesn't have that filter.&lt;/p&gt;

&lt;h2&gt;5 Warning Signs to Watch&lt;/h2&gt;

&lt;h3&gt;1. Sustained Commit Velocity Drop&lt;/h3&gt;

&lt;p&gt;A single slow week is noise. A four-week declining trend in commit frequency — especially when nothing obvious changed (no holidays, no planned focus sprint) — is a signal worth investigating.&lt;/p&gt;

&lt;p&gt;This often means the work has become harder, not lighter. Engineers are spending more time fighting existing complexity, debugging unclear ownership, or simply losing the motivation to push things forward.&lt;/p&gt;
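&lt;p&gt;Telling a four-week slide apart from a single slow week is simple to automate. A minimal sketch (the function and its "strictly declining" rule are my own illustration, not a RepoShark feature), taking weekly commit counts, oldest first:&lt;/p&gt;

```python
def sustained_decline(weekly_commits, weeks=4):
    """True if commit counts fell week-over-week for the last `weeks` weeks.
    One slow week is noise; a monotonic multi-week slide is a signal."""
    if len(weekly_commits) < weeks + 1:
        return False
    tail = weekly_commits[-(weeks + 1):]
    return all(b < a for a, b in zip(tail, tail[1:]))
```

&lt;p&gt;You'd want to exclude weeks with planned holidays or freezes before feeding them in, since those are exactly the "obvious changes" that make the signal a false alarm.&lt;/p&gt;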

&lt;h3&gt;2. Bus Factor Creep&lt;/h3&gt;

&lt;p&gt;Pull up the contributor breakdown for your core repositories. If one person is responsible for 60% or more of recent commits, you have a bus factor problem — and potentially a burnout candidate.&lt;/p&gt;

&lt;p&gt;A low bus factor isn't just a knowledge risk. It means one engineer is carrying a disproportionate cognitive load, probably fielding most of the questions, and likely feeling increasingly isolated. When they leave or burn out, the knowledge goes with them.&lt;/p&gt;
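&lt;p&gt;That 60% threshold is easy to check. A sketch, assuming you feed it the author list from something like &lt;code&gt;git log --since="90 days ago" --format=%an&lt;/code&gt; (the function and the 90-day window are illustrative choices, not a standard):&lt;/p&gt;

```python
from collections import Counter

def top_contributor_share(authors):
    """Share of recent commits made by the single busiest author,
    given one author name per commit."""
    if not authors:
        return 0.0
    _name, count = Counter(authors).most_common(1)[0]
    return count / len(authors)
```

&lt;p&gt;Anything at or above 0.6 on a core repository is worth a conversation -- with the person carrying the load, not just about them.&lt;/p&gt;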

&lt;h3&gt;3. Stale PR Pile-Up&lt;/h3&gt;

&lt;p&gt;Open PRs that sit untouched for a week or more are a sign that review culture has broken down. This usually happens for one of two reasons: reviewers are overwhelmed and deprioritising review work, or the codebase has become so complex that reviewing feels like too much to take on.&lt;/p&gt;

&lt;p&gt;Either way, the PR queue becoming a graveyard is demoralising for the engineers opening them. It signals that their work isn't being seen.&lt;/p&gt;
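&lt;p&gt;Spotting the graveyard is mechanical. A hedged sketch that filters PRs idle for more than a week, assuming dicts shaped like the GitHub pulls API response (&lt;code&gt;number&lt;/code&gt; and an ISO 8601 &lt;code&gt;updated_at&lt;/code&gt;); the function itself is my own illustration:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def stale_prs(prs, max_idle_days=7, now=None):
    """Return the numbers of open PRs untouched for more than
    `max_idle_days`. Each PR is a dict with 'number' and an ISO 8601
    'updated_at' timestamp (trailing 'Z' normalised for fromisoformat)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    return [pr["number"] for pr in prs
            if datetime.fromisoformat(pr["updated_at"].replace("Z", "+00:00")) < cutoff]
```

&lt;p&gt;Note that "untouched" here means no update event at all; a PR with a week of unresolved review comments is a different, subtler problem.&lt;/p&gt;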

&lt;h3&gt;4. Inactivity Spikes Around Specific Contributors&lt;/h3&gt;

&lt;p&gt;Look for individual contributors whose commit frequency suddenly drops — not to zero, but noticeably below their baseline. This is different from someone taking leave. It's the pattern of someone who hasn't quit but has mentally checked out.&lt;/p&gt;

&lt;p&gt;These engineers are often still delivering on assigned work but have stopped taking initiative, stopped reviewing PRs proactively, stopped contributing outside of what's strictly required.&lt;/p&gt;

&lt;h3&gt;5. Shrinking Contributor Diversity Over Time&lt;/h3&gt;

&lt;p&gt;A healthy codebase sees contributions spread across the team. When you notice the same two or three names appearing on every recent commit — and others fading out — it's worth asking why.&lt;/p&gt;

&lt;p&gt;Sometimes it's a natural consequence of specialisation. Often, it's a sign that the codebase has developed informal gatekeepers, or that the onboarding cost of contributing has become too high for newer team members to bother.&lt;/p&gt;

&lt;h2&gt;What To Do When You See These Signals&lt;/h2&gt;

&lt;p&gt;The goal isn't to fix the repository. The repository is just the evidence. The goal is to have better conversations, earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use the signals as a starting point, not a conclusion.&lt;/strong&gt; "I noticed PR review times have been climbing over the past three weeks — what's making reviews feel hard right now?" is a much more productive conversation opener than a retrospective about why a deadline was missed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rotate ownership deliberately.&lt;/strong&gt; If contributions are concentrating in one or two people, create explicit opportunities for knowledge transfer. Pair reviews, rotating on-call, architecture walkthroughs — not as a process exercise but because the team genuinely needs the coverage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cut scope when velocity drops.&lt;/strong&gt; A sustained velocity drop rarely means people aren't working hard enough. It usually means the work itself has become harder than planned. The answer is almost never to push harder — it's to remove friction, defer non-critical work, or acknowledge that the timeline needs to change.&lt;/p&gt;

&lt;p&gt;The engineers on your team will tell you when they're struggling, eventually. The codebase will tell you sooner.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm building &lt;a href="https://reposhark.com" rel="noopener noreferrer"&gt;RepoShark&lt;/a&gt; to surface exactly these kinds of signals automatically from your repositories. If you're an engineering leader who wants earlier visibility into codebase health, I'd love to hear from you.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>codequality</category>
      <category>engineering</category>
      <category>technicaldebt</category>
      <category>leadership</category>
    </item>
  </channel>
</rss>
