Scrum smells, pt. 7: Wishful plans

In the preceding parts of this planning series, we were laying the groundwork. So today, let's put it to practical use and make some qualified predictions.

You're planning an initial release of a product and you know which features need to be included for it to gain the necessary acceptance from users. Or your stakeholders are asking how long it will take to get to a certain feature. Or you have a certain budget for a project and you're trying to figure out how much of the backlog the team is capable of delivering for that amount of money.

Measuring velocity

There is a useful metric commonly used in the agile world called development velocity (or team velocity). It expresses the amount of work that a particular team can do within one sprint on a certain product in a certain environment.

In essence, it's just a simple sum of all the work that the team is able to do during a sprint. It is important to count only the work that actually got to the state where it meets the definition of done within that particular sprint. So when a team does work worth 50 story points within a sprint, that's the team's velocity in that given sprint.

Nonetheless, we must expect that there are variables influencing the “final” number. Estimates are not precise, team members may be sick or on vacation, and so on. That means that the sprint velocity will vary between sprints. So as always, the longer we observe and gather data, the more reliable the numbers we get. Longer-term statistical predictions are usually more precise than short-term ones.

So over time, we can calculate averages. I found it useful to calculate rolling averages over several past sprints because velocity usually evolves. It smooths out local dips or spikes caused, for instance, by the overlapping vacations of several team members. Numbers from the beginning of a project will probably not relate very much to values after two years of the team maturing. The team gets more efficient, makes better estimates, and the benchmark for estimates usually shifts somewhat over time.

That means that we will get an average velocity that represents the typical amount of work that a given team is able to do within one sprint. For instance, a team that finished 40, 65, 55, 60, 45, and 50 story points in subsequent sprints will have an average velocity of slightly over 50 story points per sprint over that time period.

Note: If you're a true geek, you can calculate standard deviation and plot a chart out of it. That will give you a probability model.
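To make this concrete, here is a minimal sketch in Python (the sprint values are just the illustrative numbers from above) that computes the overall average, a rolling average over the last few sprints, and the standard deviation:

```python
from statistics import mean, stdev

# Story points completed (meeting the definition of done) per sprint, oldest first.
# These are the illustrative numbers from the example above.
velocities = [40, 65, 55, 60, 45, 50]

average = mean(velocities)               # overall average: ~52.5 points per sprint
rolling_average = mean(velocities[-4:])  # rolling average over the last 4 sprints
deviation = stdev(velocities)            # spread around the average

print(f"average velocity:   {average:.1f}")
print(f"rolling average:    {rolling_average:.1f}")
print(f"standard deviation: {deviation:.1f}")
```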

The ratio of unexpected work

Now the last factor we need to know in order to create meaningful longer-term plans is the ratio between known and unknown work.

I'll use an example to explain the logic that follows. Let's say we have 10 user stories at the top of our product backlog, worth 200 story points. The development team works on them and after 4 sprints gets them done. But when retrospectively examining the work that was actually done within those past 4 sprint backlogs, we see that a lot of other (unpredicted) stuff got done apart from those original 10 stories. If we've been consistent enough and have most of that stuff labeled with sizes, we can now see its total size. Let's say 15 unexpected items got done, totaling 75 story points.

That means we now have an additional metric. We can compare the amount of unexpected work to the work expected in the product backlog. In this particular example, our ratio for the past 4 sprints is 75:200, which means that for every expected story point of work, almost 0.4 additional story points appeared that we had not known about 4 sprints ago.
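As a small sketch (reusing the illustrative numbers above), the ratio is simply the unexpected work divided by the expected work:

```python
# Story point totals from the example above (illustrative values)
expected_points = 200    # the 10 known stories at the top of the backlog
unexpected_points = 75   # items that appeared and got done along the way

ratio = unexpected_points / expected_points
print(f"unexpected-work ratio: {ratio:.2f}")   # 0.38, i.e. almost 0.4
```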

Again, this evolves over time, and you get more precise numbers as time passes and the team matures. On one of our projects, we arrived at a long-term statistic of 0.75 extra story points of unpredictable work for every known story point, just to give you some perspective.

Having a measurable metric like this also helps when talking to stakeholders. No one likes to hear that you keep a large buffer just in case; that's hard to grasp, and managers will usually try to get rid of it during planning. A metric derived from experience is much easier to explain and defend.

Making predictions

So back to the reason why we actually started with all these statistics in the first place. In order to provide some qualified predictions, we need to do some final math.

By being reasonably consistent, we got to a state where we know the (rough) sizes of the items in our backlog, and therefore the amount of known work. We also know the typical portion of unexpected work as a ratio to the known work, and we know our team's velocity.

We will now add the expected share of unpredicted work to the known work, which gives us the actual amount of work we can expect. Dividing that by the team's velocity gives us the amount of time the team will need to develop all of it.

Let's demonstrate that with an example: There's a long list of items in the product backlog and you're interested in knowing how long it will take to develop the top 30 of them. There shouldn't be any stories labeled with "no idea" sizes like "100" or "??"; those would skew the calculation considerably, so we need to make sure such items don't exist there. So in our example, we know the 30 stories are worth 360 story points.

We've observed that our ratio of unpredictable to known stuff is 0.4:1. So 360 * 0.4 = 144. That means that even though we now see stuff worth 360 points in our list, it is probable that by the time we finish the last one, we will actually do another (roughly) 144 points of work that we don't know about yet. So in total, we will have roughly 500 points of work to do.

Knowing our velocity (let's stick with 50 points per sprint), we divide: 500 / 50 = 10. So we can conclude that finishing the thirtieth item in our list will take us roughly 10 sprints. It might be 8 or it might be 12, depending on the deviations in our velocity and the team's maturity.
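Putting the whole calculation together, here is a minimal sketch using the example figures above (the numbers are illustrative, not universal constants):

```python
# Illustrative inputs from the worked example above
known_points = 360        # total size of the top 30 backlog items
unexpected_ratio = 0.4    # observed ratio of unexpected to known work
velocity = 50             # average story points finished per sprint

unexpected_points = known_points * unexpected_ratio   # 144 points we don't know about yet
total_points = known_points + unexpected_points       # roughly 500 points in total
sprints_needed = total_points / velocity              # roughly 10 sprints

print(f"expected unexpected work: {unexpected_points:.0f} story points")
print(f"expected total work:      {total_points:.0f} story points")
print(f"estimated duration:       {sprints_needed:.1f} sprints")
```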

Additional decisions we can take

Two common types of questions that we can now answer:

  1. It's the first of January and we have 2-week sprints with the team from the previous example. Are we able to deliver all 30 items by March? Definitely not. Are we able to deliver them by December? Absolutely. It seems they will be done sometime around May or June.
    
  2. We know our budget will last for (e.g.) 4.5 months from now. Will we be able to deliver those 30 items? If things go optimistically well, it might be the case. But we should evaluate the risk and decide accordingly.

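For the budget question, a rough check along the same lines (the sprint length and budget horizon are the illustrative values from this example):

```python
# Illustrative values from the two questions above
sprints_needed = 10.1      # from the previous calculation
sprint_length_weeks = 2
budget_months = 4.5

weeks_needed = sprints_needed * sprint_length_weeks   # ~20 weeks, roughly 5 months
budget_weeks = budget_months * (52 / 12)              # ~19.5 weeks until the budget runs out

print(f"time needed:     {weeks_needed:.0f} weeks")
print(f"budget runs out: {budget_weeks:.0f} weeks from now")
print("might just fit" if weeks_needed <= budget_weeks else "at risk: evaluate and mitigate")
```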
    

How can we act upon this? We can now systematically influence the variables in order to increase our chances of fulfilling the plan. A few options out of many:

  • We can try to raise the team's velocity by adding a developer if that's deemed a good idea.

  • We can try to simplify some stories in the backlog to make the amount of known work smaller.

  • Or we can push the plan's end date.

A warning: Some choose to keep everything else constant and try to increase velocity by “motivating” (read: forcing) the team to plan more story points per sprint. I don't need to explain that this is a dead end that, statistically speaking, most likely ends with something “falling over” from the sprint backlog. It burdens the team with the unnecessary overhead of dealing with the consequences of overcommitment during the sprint, and the work won't get done any faster anyway. Instead, we can review the development tools and processes to see if there is any room for velocity improvement, but that should be a permanent, continuous activity for any team regardless of plans.

Final words

Planning projects is never an exact process. But certain statistics and metrics can give us guidelines and help us see how realistic various plans are. We can then distinguish between surefire plans, totally unrealistic ones, and reasonable ones. They can tell us when we should be especially cautious and take action to increase our chances.

But any predictions will only be as precise as we are transparent and honest with ourselves when gathering the statistics. Trying to obscure anything in order to pretend there are no unforeseen factors or problems will only make the process more unpredictable in the long run.

So hopefully this article will inspire you to tackle the future in a more comfortable way.

Written by: Otakar Krus
