<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sam Gutentag</title>
    <description>The latest articles on DEV Community by Sam Gutentag (@samgutentag).</description>
    <link>https://dev.to/samgutentag</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F973253%2Fd3091dcf-bf44-49ba-bab1-7adc1b6a5429.png</url>
      <title>DEV Community: Sam Gutentag</title>
      <link>https://dev.to/samgutentag</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/samgutentag"/>
    <language>en</language>
    <item>
      <title>The Merge Queue Scaling Problem Every Growing Team Hits</title>
      <dc:creator>Sam Gutentag</dc:creator>
      <pubDate>Wed, 10 Sep 2025 18:23:31 +0000</pubDate>
      <link>https://dev.to/samgutentag/the-merge-queue-scaling-problem-every-growing-team-hits-48cf</link>
      <guid>https://dev.to/samgutentag/the-merge-queue-scaling-problem-every-growing-team-hits-48cf</guid>
      <description>&lt;p&gt;You know that moment when your team grows from 15 to 30 engineers and suddenly everything feels slower despite having more people?&lt;/p&gt;

&lt;p&gt;I've been diving deep into why this happens and how advanced merge queues solve it.&lt;/p&gt;

&lt;h2&gt;The Stale CI Problem&lt;/h2&gt;

&lt;p&gt;Your PR passes all tests, you merge it, main breaks. Sound familiar?&lt;/p&gt;

&lt;p&gt;This happens because your PR tested against old main, not the main that exists after other PRs merge first.&lt;/p&gt;
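&lt;p&gt;Here's a toy sketch (function and PR names invented, not from any real repo) of that race: each PR's CI ran against the main that existed when it branched, not the main it actually lands on.&lt;/p&gt;

```python
# Toy sketch of the stale-CI race. Both PRs branched from the
# same main, and each passed CI against that snapshot.

main = {"api_name": "greet"}           # main when both PRs branched

def ci_passes(branch_state, caller_uses):
    # "CI" here just checks the caller references an API that exists
    return caller_uses == branch_state["api_name"]

# PR A renames the API (and updates existing callers on its own branch)
pr_a = {"api_name": "greet_user"}

# PR B adds a new caller of the old name; its CI ran against stale main
assert ci_passes(main, "greet")        # PR B's CI: green

main = pr_a                            # PR A merges first

# PR B lands next without retesting against the new main
assert not ci_passes(main, "greet")    # main is now broken
```

&lt;p&gt;A merge queue closes this gap by retesting each PR against the main it will actually merge into.&lt;/p&gt;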

&lt;h2&gt;Basic vs Advanced Solutions&lt;/h2&gt;

&lt;p&gt;GitHub's merge queue: ✅ Fixes stale CI, ❌ Creates new bottlenecks&lt;br&gt;
Trunk's approach: ✅ Fixes stale CI, ✅ Optimizes for scale&lt;/p&gt;

&lt;h2&gt;The Key Insight&lt;/h2&gt;

&lt;p&gt;Not all changes are equal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docs update: 2 minutes&lt;/li&gt;
&lt;li&gt;API change: 15 minutes&lt;/li&gt;
&lt;li&gt;DB migration: 45 minutes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why should the docs update wait for the migration?&lt;/p&gt;

&lt;h2&gt;Advanced Features That Matter&lt;/h2&gt;

&lt;p&gt;🚀 Parallel queues (independent changes don't block each other)&lt;br&gt;
💰 Batching (test 3 compatible PRs as one unit)&lt;br&gt;
⚡ Optimistic merging (fast PRs don't wait for slow ones)&lt;/p&gt;
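&lt;p&gt;As a rough sketch of the batching idea (my own toy model, not Trunk's or GitHub's actual algorithm): test a batch of queued PRs as one CI run, merge the whole batch on green, and bisect on red to isolate the offender.&lt;/p&gt;

```python
# Toy model of merge-queue batching with bisection on failure.
# Assumption: a PR dict flagged "bad" is the one that breaks CI.

def run_ci(prs):
    # Stand-in for one real CI run over the combined changes.
    return all(not pr.get("bad") for pr in prs)

def merge_batch(prs):
    """Return (merged, rejected, ci_runs) for a list of queued PRs."""
    if not prs:
        return [], [], 0
    if run_ci(prs):
        return list(prs), [], 1       # whole batch green: merge as one unit
    if len(prs) == 1:
        return [], list(prs), 1       # isolated the failing PR
    mid = len(prs) // 2               # red batch: bisect and retest halves
    m1, r1, c1 = merge_batch(prs[:mid])
    m2, r2, c2 = merge_batch(prs[mid:])
    return m1 + m2, r1 + r2, c1 + c2 + 1

queue = [{"id": 0}, {"id": 1}, {"id": 2}, {"id": 3, "bad": True}]
merged, rejected, runs = merge_batch(queue)
# merges PRs 0-2, rejects PR 3, in 5 CI runs
```

&lt;p&gt;The payoff is in the common case: when every PR in the batch is green, the whole batch costs a single CI run instead of one per PR.&lt;/p&gt;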

&lt;h2&gt;Real Numbers&lt;/h2&gt;

&lt;p&gt;Customer with 50 engineers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CI costs: $15K → $4K monthly&lt;/li&gt;
&lt;li&gt;Merge time: 45min → 12min average&lt;/li&gt;
&lt;li&gt;Main branch health: 60% → 99% uptime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Read more at &lt;a href="https://trunk.io/blog/outgrowing-github-merge-queue" rel="noopener noreferrer"&gt;https://trunk.io/blog/outgrowing-github-merge-queue&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What's your merge queue horror story? 👇&lt;/p&gt;

</description>
      <category>sre</category>
      <category>cicd</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>AI Tools Are Not Time Machines</title>
      <dc:creator>Sam Gutentag</dc:creator>
      <pubDate>Fri, 18 Jul 2025 13:00:00 +0000</pubDate>
      <link>https://dev.to/samgutentag/ai-tools-are-not-time-machines-1ch6</link>
      <guid>https://dev.to/samgutentag/ai-tools-are-not-time-machines-1ch6</guid>
      <description>&lt;p&gt;A recent &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;study by Metr&lt;/a&gt; suggested that AI slows open-source development by 19%. Digging into the &lt;a href="https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf" rel="noopener noreferrer"&gt;study findings&lt;/a&gt;, we can't help but wonder, really?&lt;/p&gt;

&lt;p&gt;AI tools like Cursor aren’t built to eliminate engineering as some kind of &lt;a href="https://en.wikipedia.org/wiki/Law_of_the_instrument#Computer_programming" rel="noopener noreferrer"&gt;Golden Hammer&lt;/a&gt;; they’re intended to get developers up and running quickly on features, fixes, or entire new projects.&lt;/p&gt;

&lt;p&gt;When devs don’t have to handwrite every test or rewire every interface, they’re free to go deeper into correctness, edge cases, and long-term maintainability. &lt;/p&gt;

&lt;p&gt;The study itself admits this. One developer reflected, &lt;em&gt;“If I didn’t have AI, I probably [...] not gone so ham on nailing the correct Python tooling.”&lt;/em&gt; Another noted, &lt;em&gt;“Some of [code changes] were a little tedious, and so I am like is this going to [be worth it without AI], but with AI [I make the changes].”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AI encourages developers to do the “nice-to-have” engineering that makes systems better.&lt;/p&gt;

&lt;h2&gt;Shifted Time, Not Wasted Time&lt;/h2&gt;

&lt;p&gt;The reported 19% slowdown sounds dramatic until you do the math. If a task takes two hours, that’s 23 extra minutes. But how often does that number even hold? Software work isn’t that deterministic. A single developer’s single task running longer than expected, say by just three days, could fully account for that 19% swing across the entire study sample.&lt;/p&gt;
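&lt;p&gt;The arithmetic behind that headline number is simple enough to check in a couple of lines:&lt;/p&gt;

```python
# Sanity-check the headline number: 19% of a two-hour task
task_minutes = 2 * 60
extra_minutes = task_minutes * 0.19
print(round(extra_minutes))  # 23
```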

&lt;p&gt;And here’s the kicker: the study observed just 16 developers across a combined 250 hours of work. That’s roughly four days per person. For a field like software engineering where one missed spec, dependency conflict, or infra issue can nuke your velocity, that’s a statistically fragile dataset. The margin of error here is easily larger than the effect size. More bluntly: this sample is orders of magnitude too small to draw high-confidence conclusions at the granularity the authors aim for.&lt;/p&gt;

&lt;h2&gt;Mastery Comes From Reps&lt;/h2&gt;

&lt;p&gt;Like any developer tool, Cursor isn't magic out of the box and requires practice and experience to become a force multiplier. Put in the reps, and you'll get better results. The data actually backs this up.&lt;/p&gt;

&lt;p&gt;In appendix C.2.7 of the study, researchers analyzed productivity results based on the number of hours of Cursor experience each developer had. No substantial gains were observed across the first 50 hours, but positive speedups emerged past the 50-hour mark. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cgbrw8ohp73x6c2pkmn.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cgbrw8ohp73x6c2pkmn.webp" alt="quote pulled from study with chart" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Think about that: the “upper bound” of experience in this study was one dev with barely more than a full work week on the tool, and they still saw improvements. That’s not a conclusive data set; it’s a signal. With more reps, configuration, and learning, the benefits compound.&lt;/p&gt;

&lt;p&gt;AI tooling helps get the grunt work out of the way. It won’t design your architecture or reason through your edge cases, but it gives you more time to do just that.&lt;/p&gt;

&lt;p&gt;The old saying that a poor craftsman blames their tools holds firm: time in the editor isn’t a cost, and a thoughtful developer optimizes their workflow around meaningful tools and techniques.&lt;/p&gt;

&lt;h2&gt;This isn’t less work. It’s better-spent time.&lt;/h2&gt;

&lt;p&gt;If you’re still measuring AI by how fast it types or how quickly it gets you through tickets, you’re missing the point. Refine your workflows. Optimize your editor. Get the reps. The future isn’t about working faster; it’s about solving deeper problems sooner.&lt;/p&gt;




&lt;p&gt;Interested in learning more about how the team at &lt;a href="http://www.trunk.io" rel="noopener noreferrer"&gt;Trunk&lt;/a&gt; thinks about AI tooling and developer workflows? Join our mailing list at &lt;a href="https://trunk.io/agent" rel="noopener noreferrer"&gt;https://trunk.io/agent&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>trunk</category>
    </item>
  </channel>
</rss>
