The best estimate we can provide for non-trivial development is always an educated guess made in good faith. Here's a partial explanation of why we call estimates "estimates."
To even attempt a perfectly accurate estimate we would need to do part of the development work - the very work being estimated. A significant part of that effort would include reading and understanding existing code, often understanding the interaction between different systems, and planning changes. We must do these things whether we estimate or not. The accuracy of the estimate would depend on how much of that we do up front.
It usually makes sense to do such research and planning up front, but we usually do so to help us arrive at a more educated guess, not to produce a perfectly detailed series of development steps. The more precisely we attempt to plan the individual steps so that we can estimate how long they will take, the more of our development work we would actually be performing as part of the estimation of that work.
Consider what that means: We could ask developers to take time attempting to provide detailed, accurate estimates, but now the effort of estimating would include what would have been part of the development effort. The result? We would have to ask developers how long it will take to produce the estimate. That's right, an estimate for the estimate.
At this point we're only a few steps removed from completing the actual work to produce an estimate. We produce the desired software, it takes exactly seven months, and now, with zero remaining effort, we can accurately "predict" that the development takes seven months.
Am I exaggerating or playing games with words? Not at all. If we wish to provide perfectly accurate "estimates" for non-trivial development to meet a specified set of requirements, that is the rabbit hole we go down. It becomes easier to see why we stop short of that and settle for a well-informed, good faith guess.
But suppose we don't. Absurdly, we attempt to plan out every detail of intended development, solving every problem up front, so that we know exactly how long it will take to make the corresponding modifications. Can this succeed?
No. Each planned development step would depend on knowing the exact state of the system after the previous steps have been completed. It would require our planning to be an accurate simulation of the development process. This is close enough to impossible that it's not worth attempting. Actual development involves course correction, refactoring, and moments when we realize that, without additional modifications, we can't code what we thought we could. We must either modify existing code to accommodate our intent, change our intent, or both.
We will not overcome this uncertainty by studying our code and planning more carefully. And that's not a problem. We recognize that uncertainty when we begin work, and use our skills to solve problems as they arise. Knowing that our development will rarely proceed in a straight line along a predictable path, we do our best to incorporate such obstacles into our estimates.
Suppose that we decide to attempt a perfect estimate anyway, spending a significant amount of time anticipating every modification and perfectly visualizing how the outcome of each one fits into the next, all without writing code. Producing the estimate has already defeated its own purpose: it took a long, unpredictable amount of time to predict the remaining effort accurately. What happens next?
- Someone gets sick or quits.
- Someone makes a mistake - just one.
- Requirements change.
Actually, we can rule out changing requirements. We can have our perfect estimate or allowance for adjusting requirements, but not both. If the precise estimate weren't unrealistic and absurd enough, are we also willing to lock in requirements? Of course not. If we are willing to lock in both time and requirements, then software development is likely foreign to us. Unless, that is, we heavily pad the estimate, rendering it less accurate.
What about the other two? We can't plan for human factors and other unknowns. We can only account for them by padding the estimate.
Not only is the attempt to produce a perfect estimate self-defeating, but even in success it is defeated by unpredictable human factors and the unacceptable sacrifice of locking in requirements.
What do we do instead? That's conveniently beyond the scope of this article, but here are a few approaches:
- If our timeline is fixed we prioritize requirements so that the success or failure of a project does not depend on the implementation of every last feature. If our development process delivers increments of working, valuable software then at the end of a fixed timeline we'll always have software worth using.
- In some cases we must be flexible with our timeline. When will self-driving cars be ready? When they're ready.
- If we're not shipping "completed" software - perhaps we're working on a website - then we can focus on a steady stream of useful features rather than attempting to predict when a predetermined "big bang" will be available. Even software that users must install occasionally adds features through incremental updates.
I'm not arguing against the existence of estimates. (Some people do, and my point isn't to disagree with them either.) But if we must provide them, everyone involved must understand the limits of their accuracy. Padded estimates may be more reliable - although that's not always the case - but they are, by definition, less accurate.
Why have I considered all of this? Because some people disagree with the idea of continuously, iteratively delivering working software because it doesn't lead to producing large increments on a predictable timeline. And admittedly, there are scenarios in which large, scheduled deliveries are expected. But experience has shown that delivering software in increments results in more valuable software produced with less wasted time. You get more in the same amount of time. Would we rather have more software, delivered as it is completed, meeting users' changing requirements, or less software built for yesterday's requirements delivered on time?
Comments
Mike Cohn's excellent book Succeeding with Agile has a section in chapter 15, "Planning," called "Separate Estimating from Committing."
I think it is important to emphasize that an estimate is not a commitment. I would only be willing to give an estimate as an iron-clad commitment if I am allowed to give the estimate after the work is done. ;-)
For teams that use story points and calculate velocity as story points per sprint, the theory is that velocity is a better way to estimate how long it will take to complete a given amount of work (with some range and confidence); or, conversely, given a target date, how much work will be done at a minimum, how much may possibly be done at a maximum, and what work will not be done.
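The velocity arithmetic described above can be sketched in a few lines. The sprint history and backlog size below are hypothetical numbers invented for illustration, not taken from any real team:

```python
# Sketch of velocity-based forecasting with a range, not a single number.
# Assumes a (hypothetical) history of story points completed per sprint
# and a remaining backlog measured in the same points.
import math

def forecast_sprints(velocities, backlog_points):
    """Return (best_case, expected, worst_case) sprint counts."""
    fastest = max(velocities)  # optimistic: we repeat our best sprint
    slowest = min(velocities)  # pessimistic: we repeat our worst sprint
    average = sum(velocities) / len(velocities)
    return (
        math.ceil(backlog_points / fastest),
        math.ceil(backlog_points / average),
        math.ceil(backlog_points / slowest),
    )

# Hypothetical history: points completed in the last five sprints.
history = [21, 18, 25, 19, 22]
best, expected, worst = forecast_sprints(history, backlog_points=120)
# → best=5, expected=6, worst=7
```

The honest output here is the range itself: historical velocity narrows the guess, but it still does not turn an estimate into a commitment.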
Using historical data ("yesterday's weather") as a predictor is a good thing, in conjunction with short iterations and an empirical approach. It works much better than the wishful thinking, gut feel, and hand-waving I've seen with Gantt charts laid out at the start of a project by a project manager who was unable to account for every sickness, mistake, requirement change, or surprise work discovered during development. (Gantt charts that are written in pencil - fluid, continuously adjusted and updated, a map-and-a-plan to get from here to there - are okay. Gantt charts that are carved in stone, immutable, driving the goals and milestones, with slippages blamed on others, are not.)
I have regular conversations with people who ask for estimates, and quite often we find that there isn't much point unless there are choices to be made: do we or don't we? What else could we do?
Without choices to be made based on estimated time/cost, estimates really don't add much value.