Jon Lauridsen

Estimates don't work, but there's a simpler way

I see and hear of many teams that still do estimates, but estimation is a wasteful activity compared to the alternatives, and it very often has a hopelessly poor signal-to-noise ratio. There are better ways of working!

Here's the thing: if you can show that your estimates reliably predict the actual time it takes to complete stories then, okay, your estimates do provide a signal (though there are still easier ways of working, and we'll get to that). But I have effectively never seen that happen, because almost no one even does that analysis; teams just go on blind faith that estimates somehow provide a signal. And in the few cases I know of where estimates were compared to the actual time taken, the conclusion was inevitably that estimates do not reliably map to actual time spent.

Most of us got into estimates just because they kinda snuck in as part of Scrum ceremonies, and it's high time we question this practice!

There is simply no point to estimating if an estimate does not reflect reality, because then it's useless as a prioritization mechanism. So let's be clear: If estimates do not map to actual time then estimates are worthless.

So what to do about it?

Here's the pitch: use past performance to forecast likely futures instead. It's very easy: just the start and end dates of past stories are enough to thoroughly forecast future outcomes.

There are tools for this: for example, Troy Magennis has free tools that generate such forecasts just by filling out a few cells in Excel, and ActionableAgile exists if you prefer a paid product. What they share is that they produce probabilistic forecasts of likely end dates based on past performance. Your team is probably slicing stories and working on them the same way they did over the past several weeks, so the same past patterns are more likely than not going to express themselves in the future.
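To make that concrete, here's a minimal sketch of the idea behind these tools. It is not how Magennis' spreadsheets or ActionableAgile are actually implemented, just an illustration of probabilistic forecasting: resample the team's own weekly throughput many times and read off the distribution of completion times. All names and numbers are made up.

```python
import random

def forecast_completion(weekly_throughput, remaining_stories, simulations=10_000):
    """Monte Carlo forecast: how many weeks until `remaining_stories` are done,
    given how many stories were finished in each recent week (`weekly_throughput`)."""
    outcomes = []
    for _ in range(simulations):
        done, weeks = 0, 0
        while done < remaining_stories:
            done += random.choice(weekly_throughput)  # resample a past week
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    # Report the 50th and 85th percentile outcomes: a "likely" and a "safe" answer.
    return outcomes[len(outcomes) // 2], outcomes[int(len(outcomes) * 0.85)]

# Example: stories completed per week over the last 8 weeks (made-up history).
history = [3, 1, 4, 2, 1, 3, 2, 1]
likely, safe = forecast_completion(history, remaining_stories=10)
print(f"50% chance of finishing within {likely} weeks, 85% within {safe} weeks")
```

Notice the input is exactly what was described above: nothing more than when past stories started and finished, no estimation session required.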

With a forecasting model you get up-to-date forecasts every day as new progress is made, which is something estimates can't do. Let's say the team is working on an epic that initially had 5 stories to complete, and then they discover new complexities that result in a total of 10 stories under that epic; the forecasting model can instantly predict the impact this change has on the likely completion date. That is a game-changer compared to estimates, which are stuck in their original form unless a time-consuming re-estimation process is kicked off. And just like a Google Maps journey has an ETA that becomes more precise the closer you get to your destination, the forecast also becomes more precise the closer you get to the end.
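Using the hypothetical `forecast_completion` sketch above, reacting to that scope change is nothing more than re-running the forecast with the new remaining count:

```python
# Scope grew from 5 to 10 remaining stories: just re-run with the new number.
likely, safe = forecast_completion(history, remaining_stories=5)
print(f"Original scope: 50% within {likely} weeks, 85% within {safe} weeks")

likely, safe = forecast_completion(history, remaining_stories=10)
print(f"After new work was discovered: 50% within {likely} weeks, 85% within {safe} weeks")
```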

Even if we accept the wild hypothesis that forecasting isn't any more precise than estimates (I certainly don't believe that assertion, but let's play along for a moment), forecasting is still an easier process to follow: no more planning poker, no more arbitrary Fibonacci numbers, just focus on doing the actual work and let the model give you predictions. It's actually a very simple tool that just has a complex name ("probabilistic forecasting" can sound like something out of Star Trek), but it amounts to filling in a couple of date fields every now and again.

Forecasting is also just the beginning of an incredible journey based on metrics. Tracking how quickly stories get completed is a performance metric, and if it's combined with a sensible set of measurements that also includes quality metrics (e.g. number of bugs, how long it takes to fix outages, etc.), then the team can start safely experimenting with alternate ways of working. What if a team can cut the time they spend on stories in half while keeping all other metrics equal? They would double their opportunities to react to changing requirements, react twice as fast to customer feedback, and generate twice as many iterations.
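As a rough illustration of what "tracking how quickly stories get completed" can look like in practice (the data and field layout here are made up), cycle time is just the difference between a story's start and end dates, and a couple of summary numbers are enough to watch the trend alongside a quality metric like bug count:

```python
from datetime import date
from statistics import median

# Hypothetical record of finished stories: (started, finished) dates.
stories = [
    (date(2023, 5, 1), date(2023, 5, 4)),
    (date(2023, 5, 2), date(2023, 5, 10)),
    (date(2023, 5, 8), date(2023, 5, 9)),
    (date(2023, 5, 9), date(2023, 5, 16)),
]
bugs_reported = 2  # quality metric tracked over the same period (made up)

cycle_times = [(end - start).days for start, end in stories]
print(f"Median cycle time: {median(cycle_times)} days")
print(f"Slowest story: {max(cycle_times)} days, bugs reported: {bugs_reported}")
```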

It is a superpower to optimize for responsiveness, and all the team has to do is safely experiment with how they work. For inspiration, here are some possible experiments to run:

  • Retrospect all stories whose cycle time exceeds one standard deviation from the mean, and work to right-size stories so they generally take the same amount of time (always anchored around each story delivering actual customer value). Right-sized stories bring stability and predictability, and will over time drive towards quicker delivery of stories (see the sketch after this list for one way to flag the outliers).
  • Find ways of working that eliminate or reduce bugs and rework, because such unplanned work harms predictability. This could include pair- and team-programming.
  • Analyze and identify where work queues up and lies idle, because those queues are sources of delay. A great mindset for a team is to challenge each of the steps in their work and keep only the essential ones, because that minimizes waste. E.g. if code changes are blocked by manual testing requirements, then perhaps that testing can be eliminated in favor of test automation, and the manual testing can be carried out in production behind feature flags? (Recall the team is optimizing for all metrics to improve, so in this case the change must not result in more bugs being reported.)
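Here is a minimal sketch of the outlier check mentioned in the first experiment, assuming the same kind of start/end-date records as above (all names and data are made up):

```python
from datetime import date
from statistics import mean, stdev

# Hypothetical finished stories: (name, started, finished).
stories = [
    ("login page", date(2023, 6, 1), date(2023, 6, 3)),
    ("export to CSV", date(2023, 6, 2), date(2023, 6, 14)),
    ("fix footer", date(2023, 6, 5), date(2023, 6, 6)),
    ("search filters", date(2023, 6, 7), date(2023, 6, 12)),
]

cycle_times = {name: (end - start).days for name, start, end in stories}
threshold = mean(cycle_times.values()) + stdev(cycle_times.values())

# Stories that took more than one standard deviation longer than average
# are candidates for a retrospective on how to slice them smaller.
for name, days in cycle_times.items():
    if days > threshold:
        print(f"Retrospect '{name}': took {days} days (threshold {threshold:.1f})")
```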

There are hundreds of exciting experiments teams can run to optimize their ways of working. So simplify your work processes today by dropping those estimates and making room for forecasting instead.
