<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Brynjar Jónsson</title>
    <description>The latest articles on DEV Community by Brynjar Jónsson (@brynjarjonsson).</description>
    <link>https://dev.to/brynjarjonsson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3682633%2F4a875e9b-84c4-49bb-a8c4-8e8584001079.jpeg</url>
      <title>DEV Community: Brynjar Jónsson</title>
      <link>https://dev.to/brynjarjonsson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/brynjarjonsson"/>
    <language>en</language>
    <item>
      <title>What do 3-day reviews actually cost your team?</title>
      <dc:creator>Brynjar Jónsson</dc:creator>
      <pubDate>Mon, 16 Feb 2026 21:09:59 +0000</pubDate>
      <link>https://dev.to/brynjarjonsson/the-hidden-cost-of-longs-reviews-what-3-day-reivews-actually-cost-3ne9</link>
      <guid>https://dev.to/brynjarjonsson/the-hidden-cost-of-longs-reviews-what-3-day-reivews-actually-cost-3ne9</guid>
      <description>&lt;p&gt;Most teams don't think much about review time. PRs go up, reviews happen "when people have time," work continues. It feels efficient - nobody's sitting idle.&lt;br&gt;
But there's a hidden cost that compounds quietly in the background.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "staying busy" trap
&lt;/h2&gt;

&lt;p&gt;When reviews take 3 days, developers do what any reasonable person would: they start the next thing. The problem isn't idleness - it's what happens next.&lt;br&gt;
With 3-day review waits in a 2-week sprint:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each dev juggles 2.5 items at once on average&lt;/li&gt;
&lt;li&gt;Every review return means context switching - "where was I?"&lt;/li&gt;
&lt;li&gt;Merge conflict risk increases 38% as parallel work diverges&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The devs aren't idle. They're busier than they need to be.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the math actually says
&lt;/h2&gt;

&lt;p&gt;Let's make it concrete. A team of 5 running 2-week sprints, assuming an average of 2 days of dev work per story:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;With 3-day reviews:&lt;/strong&gt; 260 stories/year&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;With same-day reviews:&lt;/strong&gt; 520 stories/year&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same people. Same skills. &lt;strong&gt;Double the output.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"But that's theoretical," you might say. "Real teams have overlap, parallel work, varying review complexity."&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Here's the thing: if you reduce cycle time from 5 days to 2.5 days while work in progress stays constant, throughput doubles. That's not a theory. It's Little's Law: throughput equals WIP divided by cycle time. It's math.&lt;/p&gt;

&lt;p&gt;The overlap and parallel work you're describing? They're already baked into cycle time. However your team works, &lt;strong&gt;doubling the time from start to done means half as much gets done.&lt;/strong&gt;&lt;/p&gt;
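&lt;p&gt;The arithmetic is easy to check. Here's a minimal sketch; the one-item-at-a-time assumption and 260 working days a year are mine, the rest comes from the example above:&lt;/p&gt;

```python
# Little's Law in miniature: throughput = WIP / cycle time, so halving
# start-to-done time at constant WIP doubles throughput. The same
# arithmetic reproduces the 260 vs 520 stories/year above.
# Assumptions: 5 devs, 260 working days/year, each story occupies its
# developer for dev days plus review wait, one story at a time.

def stories_per_year(devs, dev_days, review_days, working_days=260):
    cycle_days = dev_days + review_days   # start-to-done time per story
    return int(devs * working_days / cycle_days)

slow = stories_per_year(devs=5, dev_days=2, review_days=3)    # 3-day reviews
fast = stories_per_year(devs=5, dev_days=2, review_days=0.5)  # same-day
print(slow, fast)  # 260 520
```

&lt;p&gt;Halving the cycle time is the only difference between the two calls; everything else is held constant.&lt;/p&gt;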

&lt;h2&gt;
  
  
  Need management buy-in? Here's what it costs.
&lt;/h2&gt;

&lt;p&gt;Fixing slow reviews isn't free - it takes process changes, maybe tooling, definitely prioritization. If you need to make the case for investing in this, here are the numbers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The throughput gap:&lt;/strong&gt; To match a faster team's output, you'd need to &lt;strong&gt;hire an entire second team&lt;/strong&gt;. That's a $500k+ decision vs a process change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The hidden costs:&lt;/strong&gt; At $75/hour loaded developer cost, 3-day reviews create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;37.9 days/year lost to context switching&lt;/li&gt;
&lt;li&gt;24.3 days/year lost to merge conflict rework&lt;/li&gt;
&lt;li&gt;$37,275/year per team in hidden costs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's 497 hours of developer time - not building features, just waiting and recovering from waiting.&lt;/p&gt;
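&lt;p&gt;Those figures are straightforward to reproduce. A sketch using the article's inputs; the 8-hour workday and whole-hour truncation are my assumptions:&lt;/p&gt;

```python
# Reproducing the hidden-cost numbers. The day estimates and the
# 75 USD/hour loaded rate come from the article; 8-hour days and
# truncating to whole hours are assumptions.

HOURLY_RATE = 75             # loaded developer cost, USD/hour
HOURS_PER_DAY = 8            # assumed workday length

context_switch_days = 37.9   # days/year lost to context switching
merge_rework_days = 24.3     # days/year lost to merge conflict rework

hours_lost = int((context_switch_days + merge_rework_days) * HOURS_PER_DAY)
annual_cost = hours_lost * HOURLY_RATE
print(hours_lost, annual_cost)  # 497 37275
```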

&lt;h2&gt;
  
  
  The question worth asking
&lt;/h2&gt;

&lt;p&gt;Team dynamics vary. Workloads differ. What works for one team won't work for another. But here's a question worth asking: &lt;strong&gt;Do you actually know how long your reviews take?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not a guess. Not "a day or two, usually." The actual median time from PR opened to review complete. Most teams have never measured it. Which means they've never questioned whether it could be better.&lt;/p&gt;

&lt;p&gt;Once you can see where your issues actually spend their time - stage by stage - you can decide what's worth fixing. And once you fix the wait states, the throughput gains from Little's Law stop being theoretical.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does your typical review wait time cost?
&lt;/h2&gt;

&lt;p&gt;Plug in your numbers. See what the math says for your team.&lt;br&gt;
&lt;a href="https://smartguess.is/blog/3-day-code-review-cost/?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=time_in_status&amp;amp;utm_content=review_cost_calculator" rel="noopener noreferrer"&gt;Try the calculator →&lt;/a&gt;&lt;/p&gt;

</description>
      <category>management</category>
      <category>productivity</category>
      <category>softwaredevelopment</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>The 3-Day Code Review Problem (And What It's Actually Costing You)</title>
      <dc:creator>Brynjar Jónsson</dc:creator>
      <pubDate>Thu, 08 Jan 2026 11:36:26 +0000</pubDate>
      <link>https://dev.to/brynjarjonsson/the-3-day-code-review-problem-and-what-its-actually-costing-you-5m8</link>
      <guid>https://dev.to/brynjarjonsson/the-3-day-code-review-problem-and-what-its-actually-costing-you-5m8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"People often take around a week to review my changes... having to wait half a sprint is pretty insane in my opinion."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A developer posted this &lt;a href="https://www.reddit.com/r/AskProgramming/comments/1m7737f/for_those_of_you_with_mandatory_code_reviews_in/" rel="noopener noreferrer"&gt;on Reddit recently&lt;/a&gt;. Their team runs two-week sprints. Reviews take a week. They suggested &lt;em&gt;"same or next day"&lt;/em&gt; turnaround and got &lt;em&gt;"interesting looks from a couple people, like I was saying something crazy or being unreasonable."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When they asked their teammates how long reviews &lt;strong&gt;should&lt;/strong&gt; take, most said "3-4 days." The thread that followed had over 50 responses. &lt;/p&gt;

&lt;p&gt;What struck me wasn't the advice—it was how wildly different "normal" looks across teams.&lt;/p&gt;




&lt;h2&gt;
  
  
  The gap between what's possible and what's accepted
&lt;/h2&gt;

&lt;p&gt;Some teams operate reviews like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"My team does same or next day and favors small, focused change sets as much as possible."&lt;/p&gt;

&lt;p&gt;"As a TL, I aim to get a review done within an hour of receiving it. It's built into my job description and it affects team velocity."&lt;/p&gt;

&lt;p&gt;"My Jira metrics say under 20 hours for all my client companies (from open to merged), this includes addressing comments, change requests, potential reworks."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Other teams have normalized something very different:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"My old team would have it done same day. Then I went to a team where it would take a week. Now I'm on a team where 2-3 days is normal."&lt;/p&gt;

&lt;p&gt;"When I started on my current contract, reviews took 1-3 weeks."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Same industry. Same type of work. Completely different expectations.&lt;/p&gt;




&lt;h2&gt;
  
  
  The real cost of waiting
&lt;/h2&gt;

&lt;p&gt;One reply laid it out clearly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Any wait is bad and costs money in all sorts of ways—context switching, merge conflicts, finding out about bugs a week later instead of 15 minutes after writing it."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The original poster described what this looks like in practice:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I am faster than my other team mates, so my MRs in this team pile up like train cars... I actually just avoid picking up new tasks to avoid context overload because I need to wrap up what's pending."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Read that again. A developer is &lt;em&gt;intentionally slowing down&lt;/em&gt; because the review process can't keep up. They're managing their own throughput to compensate for a broken system.&lt;/p&gt;

&lt;p&gt;Another response connected it to broader research:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"It's a major problem, for many organizations, and it pays to solve it (refer: 'Accelerate' by Dr. Forsgren)."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://itrevolution.com/product/accelerate/" rel="noopener noreferrer"&gt;The &lt;em&gt;Accelerate&lt;/em&gt; research&lt;/a&gt; shows elite teams achieve lead times—from commit to production—measured in hours, not weeks. If your reviews alone take 3-4 days, you've already blown past what high performers accomplish end-to-end.&lt;/p&gt;




&lt;h2&gt;
  
  
  What high-performing teams actually do
&lt;/h2&gt;

&lt;p&gt;The thread surfaced several patterns from teams that have solved this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They treat reviews as blockers, not backlog&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"PRs should be treated as blockers and dealt with ASAP."&lt;/p&gt;

&lt;p&gt;"In the best teams I've been a part of, the guideline was to start the day by reviewing any pending MR, before producing any additional code."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;They have explicit time agreements&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We have a 24 hour turn around time expectation for most code reviews."&lt;/p&gt;

&lt;p&gt;"I expect 24 hour turnaround per iteration from required reviewers... 24 hours is when pings start going out."&lt;/p&gt;

&lt;p&gt;"Four hours, tops. Then I start DMing people for the review."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;They keep changes small and reviewable&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I learned to pair down my commit sizes. Now my commits are less than 100 lines. They are so small that when I ask for reviews, people readily look at my PRs because they know the PR will be easy to review. I routinely get reviews done same day now."&lt;/p&gt;

&lt;p&gt;"We review all changes within a day. Though we keep our tickets and changes small so they are easy to read through and understand."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;They make reviews part of the job, not an interruption&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We do same day code reviews for all merge requests, with the goal to do them within an hour of being submitted. We all have to have our code merged, so it's really just like we need it done by other people as much as we need it done so it basically averages out."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Making it work without breaking flow
&lt;/h2&gt;

&lt;p&gt;The common objection is that fast reviews mean constant interruptions. But the best advice flips this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"When you start a work period, do reviews first, then go to coding. When you come in in the morning, get your cup of coffee, and then do reviews. Once reviews are done, start coding. When you come back from lunch, do any pending reviews, then get back to coding. No interruptions, no leaving the state of flow."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Another pattern:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Code in small batches, and when you're done a batch of coding, reviewing other people's code is higher priority than the next batch of coding."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Reviews don't break flow if you design them as transitions, not interruptions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why nothing changes
&lt;/h2&gt;

&lt;p&gt;The most honest observation in the thread:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The situation is surreal, but it's also disturbingly normal and organizations tend to be very slow to move off their broken process no matter how large the evidence that it works badly."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Teams know long reviews hurt. Many have read Accelerate. They've felt the pain of context-switching back to week-old code. But the process persists because:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No visibility&lt;/strong&gt; — You can't improve what you can't see. Without data, "reviews are slow" is just a feeling, easily dismissed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No baseline&lt;/strong&gt; — What's normal for your team? Is this sprint worse than last sprint? Without tracking, you can't know.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No accountability&lt;/strong&gt; — A 24-hour agreement means nothing if no one can tell when it's been violated.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The original poster captured the dysfunction perfectly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Teams putting processes before people and then claiming to do agile."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Debug your process, not your team
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Your reviews take exactly as long as the system you've designed allows.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Reviews aren't slow because developers are lazy. You designed the system that work flows through. It's yours to redesign. But like any debugging, it starts with visibility.&lt;/p&gt;

&lt;p&gt;When a deploy fails, you don't guess at the cause. You look at the logs. You trace the error. You find the line that broke.&lt;br&gt;
Your review process deserves the same treatment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How long do items actually sit in review?&lt;/li&gt;
&lt;li&gt;Which transitions are slow?
&lt;ul&gt;
&lt;li&gt;Dev done → In Review?&lt;/li&gt;
&lt;li&gt;In Review → Review done?&lt;/li&gt;
&lt;li&gt;Review done → In Test?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Is the 24-hour agreement being met?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These questions are all easy to answer—if you collect the data.&lt;/p&gt;
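&lt;p&gt;Collecting the data can start small. A minimal sketch, assuming status transitions are available as (timestamp, new status) pairs; the event data below is invented for illustration, not a real Jira changelog format:&lt;/p&gt;

```python
# Sum up how long an issue spent in each status from a list of
# (timestamp, new_status) transition events. Hypothetical data.

from datetime import datetime

events = [
    (datetime(2026, 1, 5, 9, 0),   "In Progress"),
    (datetime(2026, 1, 7, 14, 0),  "In Review"),
    (datetime(2026, 1, 10, 11, 0), "Done"),
]

def time_in_status(events):
    durations = {}
    # Each status lasts from its own transition to the next one.
    for (start, status), (end, _next_status) in zip(events, events[1:]):
        hours = (end - start).total_seconds() / 3600
        durations[status] = durations.get(status, 0) + hours
    return durations  # hours per status

print(time_in_status(events))  # the issue sat 69 hours in "In Review" here
```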

&lt;p&gt;The developer who started this thread isn't unreasonable for expecting same-day reviews. They're just on a team that doesn't see where the time goes.&lt;/p&gt;




&lt;p&gt;I'm the lead developer of &lt;a href="https://marketplace.atlassian.com/apps/4028833452/?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=time-in-status&amp;amp;utm_content=3day-code-review-problem" rel="noopener noreferrer"&gt;Time In Status&lt;/a&gt; by Smart Guess. It shows you how long Jira issues spend in each status, so you can see where work waits, spot review bottlenecks before they pile up, and finally have data for those "why did this take so long?" conversations.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>productivity</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Debug your process, not your team</title>
      <dc:creator>Brynjar Jónsson</dc:creator>
      <pubDate>Sun, 04 Jan 2026 19:27:50 +0000</pubDate>
      <link>https://dev.to/brynjarjonsson/debug-your-process-not-your-team-4op8</link>
      <guid>https://dev.to/brynjarjonsson/debug-your-process-not-your-team-4op8</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/mux-2025-12-03"&gt;DEV's Worldwide Show and Tell Challenge Presented by Mux&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a Jira app that helps development teams debug their delivery process. It combines Time in Status tracking with Flow Intelligence and AI-powered guidance to help teams understand where work gets stuck, predict when it will ship, and fix systematic bottlenecks.&lt;/p&gt;

&lt;p&gt;Think of it as a debugger for your workflow, not your code.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Pitch Video
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://player.mux.com/FQBVY8hgPgByVXWrQM01F5VEOO1Sn006Xwsx1MogrL19o?metadata-video-title=Time+in+status+by+Smart+Guess&amp;amp;video-title=Time+in+status+by+Smart+Guess" rel="noopener noreferrer"&gt;Mux video&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Try it here:&lt;br&gt;
&lt;a href="https://marketplace.atlassian.com/apps/4028833452" rel="noopener noreferrer"&gt;https://marketplace.atlassian.com/apps/4028833452&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Story Behind It
&lt;/h2&gt;

&lt;p&gt;Every developer has heard the question: &lt;em&gt;"Why is this story taking so long?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The answer is usually buried somewhere in the process—work sitting in review, blocked dependencies, testing pile-ups. &lt;/p&gt;

&lt;p&gt;Many teams see the same bottlenecks sprint after sprint, but without hard data, pushing for change is an uphill battle. So nothing changes.&lt;/p&gt;

&lt;p&gt;I built Smart Guess because I wanted to bring the &lt;strong&gt;debugging mindset&lt;/strong&gt; developers use for code to the way teams work. Instead of blaming people, &lt;strong&gt;you diagnose the system&lt;/strong&gt;. The tagline says it all: Debug your process, not your team.&lt;/p&gt;

&lt;p&gt;What makes it different from existing tools is the combination of:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visibility&lt;/strong&gt; — see exactly where time goes, right on the issue&lt;br&gt;
&lt;strong&gt;Prediction&lt;/strong&gt; — forecast when work will ship based on actual team history&lt;br&gt;
&lt;strong&gt;Guidance&lt;/strong&gt; — NoEsis, an AI assistant that helps interpret patterns and build the case for change&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Highlights
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Backend:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Atlassian Forge - Serverless cloud platform with FaaS runtime&lt;/li&gt;
&lt;li&gt;Node.js - Backend resolver logic&lt;/li&gt;
&lt;li&gt;Forge Storage API - Persistent data layer&lt;/li&gt;
&lt;li&gt;Jira REST API &amp;amp; Agile API - Sprint/board data integration&lt;/li&gt;
&lt;li&gt;OpenAI Assistants API - AI coaching (NoEsis) with thread-based conversations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Frontend:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React - Component architecture with hooks&lt;/li&gt;
&lt;li&gt;Atlaskit Design System - Atlassian's official UI framework&lt;/li&gt;
&lt;li&gt;Vite - Modern build tooling and dev server&lt;/li&gt;
&lt;li&gt;
&lt;a class="mentioned-user" href="https://dev.to/forge"&gt;@forge&lt;/a&gt;/bridge - Forge &amp;lt;-&amp;gt; UI communication layer&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devchallenge</category>
      <category>muxchallenge</category>
      <category>showandtell</category>
      <category>video</category>
    </item>
    <item>
      <title>Why You Keep Explaining the Same Delays Every Sprint</title>
      <dc:creator>Brynjar Jónsson</dc:creator>
      <pubDate>Sun, 28 Dec 2025 19:12:57 +0000</pubDate>
      <link>https://dev.to/brynjarjonsson/why-you-keep-explaining-the-same-delays-every-sprint-3abb</link>
      <guid>https://dev.to/brynjarjonsson/why-you-keep-explaining-the-same-delays-every-sprint-3abb</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Why is this taking so long?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You've heard this question in standups. In Slack. In retros. Every sprint, someone asks. Every sprint, the answer is vague—&lt;em&gt;"it got stuck in review," "we hit some blockers," "it took longer than expected."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So you start tracking Time in Status. Now you have data—3 days in Review, 2 days in QA. But the question keeps coming back. Because knowing &lt;strong&gt;how long&lt;/strong&gt; isn't the same as knowing &lt;strong&gt;why&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Most teams collect this data and do nothing with it. Or worse—they use it in ways that hurt morale without improving delivery. Here are the five most common mistakes, and what to do instead.&lt;/p&gt;




&lt;h2&gt;
  
  
  Mistake #1: Measuring Without Context
&lt;/h2&gt;

&lt;p&gt;You see an issue spending 4 days in “In Progress.” Is that bad? You have no idea. Without knowing what’s normal for similar work, the number is meaningless.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What teams do wrong:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set arbitrary thresholds (“anything over 2 days is a problem”)&lt;/li&gt;
&lt;li&gt;Compare all issues equally (a bug fix vs. a story vs. a sub-task)&lt;/li&gt;
&lt;li&gt;React to outliers without understanding why&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What to do instead:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Compare cycle time to similar completed work. A 4-day story might be perfectly healthy if your team’s stories typically take 3-5 days. But if similar stories usually take 1 day, that’s a red flag.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpt2b2aewx0bpm4qocpkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpt2b2aewx0bpm4qocpkl.png" alt=" " width="692" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Summary card compares each issue to your team's actual history for the same issue type. Color-coded health tells you at a glance whether something needs attention.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Mistake #2: Treating Symptoms, Not Root Causes
&lt;/h2&gt;

&lt;p&gt;You notice issues pile up in “Code Review” every sprint. So you discuss the issue and agree to try “reviewing faster.” Next sprint, same problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What teams do wrong:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Focus on individual issues instead of patterns&lt;/li&gt;
&lt;li&gt;Push people to work faster instead of fixing the process&lt;/li&gt;
&lt;li&gt;Have the same retrospective conversation every two weeks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What to do instead:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Look for patterns across sprints. Is the bottleneck consistent? Does it happen with certain issue types? Are certain team members being overloaded?&lt;/p&gt;

&lt;p&gt;The problem usually isn’t people—it’s the process. Maybe reviews pile up because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No dedicated review time is scheduled&lt;/li&gt;
&lt;li&gt;One person is the bottleneck for all reviews of a certain type&lt;/li&gt;
&lt;li&gt;Stories are too large to review quickly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flunnq1qrao4nbc5lzu67.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flunnq1qrao4nbc5lzu67.webp" alt=" " width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Flow Intelligence surfaces patterns across your team—trends, predictability, cumulative flow. Stop asking “why is THIS issue late?” and start asking “why does this KEEP happening?”&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Mistake #3: Reporting on the Past Instead of Predicting the Future
&lt;/h2&gt;

&lt;p&gt;Time in Status tells you what already happened. By the time you see a problem, it’s too late. The sprint is over. The stories didn't complete.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What teams do wrong:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only look at metrics in retrospectives (after the damage is done)&lt;/li&gt;
&lt;li&gt;Can’t answer “when will this ship?” with confidence&lt;/li&gt;
&lt;li&gt;Constantly surprised by delays&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What to do instead:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use historical data to predict future delivery. If your team’s stories typically take 3-5 days, and this one started 2 days ago, you can forecast when it’s likely to finish.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funsi2ot0l1rqy3y858ma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funsi2ot0l1rqy3y858ma.png" alt=" " width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Forecast card shows three scenarios based on your team’s actual completion patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Likely&lt;/strong&gt; — Half of similar work finished in this time or less&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan for&lt;/strong&gt; — 85% of similar work finished in this time or less&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Worst case&lt;/strong&gt; — 95% of similar work finished in this time or less&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When someone asks, “When will this be done?”—you have an answer based on data, not gut feel.&lt;/p&gt;
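&lt;p&gt;Under the hood, forecasts like these are percentile lookups over completed work. A minimal sketch using the nearest-rank method; the sample durations are invented for illustration:&lt;/p&gt;

```python
# Percentile forecast over historical cycle times (nearest-rank method).
import math

def percentile(samples, pct):
    # The smallest observed value such that pct percent of the
    # samples finished in that time or less.
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

completed_days = [2, 3, 3, 4, 4, 5, 5, 6, 8, 12]  # hypothetical story history

print(percentile(completed_days, 50))  # 4  -- "Likely"
print(percentile(completed_days, 85))  # 8  -- "Plan for"
print(percentile(completed_days, 95))  # 12 -- "Worst case"
```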




&lt;h2&gt;
  
  
  Mistake #4: Reports That Don't Lead to Action
&lt;/h2&gt;

&lt;p&gt;A report that sits in a dashboard doesn't improve anything. The value isn't in the chart—it's in knowing what to do next.&lt;br&gt;
Your Cycle Time went up 20% this sprint. Your throughput dropped. The cumulative flow diagram shows a pattern forming. What's causing it? What should you try first?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What teams do wrong:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Collect metrics but never act on them&lt;/li&gt;
&lt;li&gt;Spend retrospectives debating what the data means&lt;/li&gt;
&lt;li&gt;Make changes based on guesses, then don't know if they helped&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What to do instead:&lt;/strong&gt;&lt;br&gt;
Get guidance alongside the data—so you can move from insight to action faster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnuvvnwd4w90en05euy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnuvvnwd4w90en05euy0.png" alt=" " width="800" height="748"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NoEsis helps you interpret what you're seeing and suggests what to try next. Ask it anything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"What does this cumulative flow diagram tell me?"&lt;/li&gt;
&lt;li&gt;"Why is our cycle time increasing?"&lt;/li&gt;
&lt;li&gt;"Work keeps piling up in QA—what should we try?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You learn to read the patterns while solving real problems. The next time you see a similar issue, you'll spot it yourself.&lt;/p&gt;




&lt;h2&gt;
  
  
  Mistake #5: Using Time in Status to Measure People Instead of Process
&lt;/h2&gt;

&lt;p&gt;Time in Status becomes a surveillance tool. "Why did YOUR tickets take so long?" Developers get defensive. Trust erodes. People start gaming the metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What teams do wrong:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use time data in performance reviews&lt;/li&gt;
&lt;li&gt;Call out individuals in standups&lt;/li&gt;
&lt;li&gt;Create pressure without providing support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What to do instead:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Debug your process rather than eroding trust. The best coaches don't track who's slow—they remove what's in the way and make everyone faster.&lt;/p&gt;

&lt;p&gt;When work takes too long, it's usually systemic: too much work in progress, dependencies, or overload in review and test.&lt;/p&gt;

&lt;p&gt;The question isn't "who's slow?" It's "what's blocking flow?" Maybe one person is the bottleneck for reviewing a certain part of the codebase. Maybe no one can step in when testing backs up. These are process problems with process solutions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Don't Repeat Yourself
&lt;/h2&gt;

&lt;p&gt;Measuring Time in Status is useful. But it's just a starting point.&lt;br&gt;
The real goal isn't tracking how long things take. It's:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Spotting patterns&lt;/strong&gt; before they derail yet another sprint&lt;/li&gt;
&lt;li&gt;Understanding &lt;strong&gt;why delays happen&lt;/strong&gt; so you can fix them once&lt;/li&gt;
&lt;li&gt;Having &lt;strong&gt;better conversations about process&lt;/strong&gt;, not blame&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fix the root cause. Stop explaining the same delays. Move on.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm the lead developer of &lt;a href="https://marketplace.atlassian.com/apps/4028833452?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=why-you-keep-explaining-same-delays" rel="noopener noreferrer"&gt;Time In Status by Smart Guess&lt;/a&gt;. It shows you how long Jira issues spend in each status, so you can see where work waits, spot review bottlenecks before they pile up, and finally have data for those "why did this take so long?" conversations.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>atlassian</category>
      <category>jira</category>
      <category>agile</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
