
Failing Fast : Hypothesis-driven Decision Making

Dave Cridland ・ 4 min read


"Fail Fast!" is a constant refrain amongst developers - and increasingly business types as well. This article is going to tell you what this really means, why this is such a good idea (despite the hype!), and how to go about it to maximize success. At the end, I'll go over those once more to check.

But first, I'm going to tell you one of those was wrong. Because we're not going to use the technique to maximize success at all - in fact, in many ways, the opposite.

We're going to minimize the cost of failure - and surprisingly, maximize the benefit.

With Science!

Hypothesis-driven Decision Making

When I'm faced with explaining what "Fail Fast!" really means, I usually fall back on the drier - but infinitely more accurate - name of Hypothesis-driven Decision Making.

After all, the object of the methodology is not to fail fast - ideally, we don't want to fail at all - but what we do want to do is identify failure quickly and correct the trajectory. And when we do encounter failure, we'll be gaining almost as much as a success.

As with many things I encourage, this is yet another re-purposing of the scientific method, so let's just quickly review the steps:

  • Hypothesis (Prediction)
  • Experiment
  • Review

Hypothesis

The first step is to construct a hypothesis.

A hypothesis is simply a prediction of what will happen if a particular course of action is taken.

I'm going to use some rubbish examples, here - mostly because I don't want you to be thinking about the problem, so much as how we handle it. So I'm going to pretend that adding a button is something we need to do this way - and not by A/B testing on the live site. Should we add a button? It's a tough call. The dev team is split. Marketing has no data (and besides, they've all gone out to lunch). Sales are very enthusiastic, but nobody ever knows why.

So we construct our hypothesis. We think adding the button is going to maintain, or increase, our button presses. And we think this will be measurable after a week. Where you've got a time-box like this, ensure it's the smallest reasonable time-box you can. Time is, after all, money - one of the few clichés that's universally true.

But importantly, ensure that you have good consensus over the hypothesis.
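A hypothesis like this boils down to writing the prediction down in a testable form before you act. Here's a minimal sketch in Python - the field names, the baseline figure, and the one-week time-box are illustrative assumptions, not anything prescribed by the methodology:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# A written-down hypothesis: what we'll do, what we predict,
# what we'll measure, and the pre-agreed point to stop and review.
@dataclass
class Hypothesis:
    action: str          # the course of action we will take
    prediction: str      # what we expect to happen
    metric: str          # what we will actually measure
    baseline: float      # the metric's value before the change
    review_after: date   # the smallest reasonable time-box

button_hypothesis = Hypothesis(
    action="add the new button",
    prediction="button presses hold steady or increase",
    metric="button presses per day",
    baseline=120.0,  # made-up figure for illustration
    review_after=date.today() + timedelta(weeks=1),
)
```

Writing it down as data rather than prose makes the consensus concrete: everyone agrees on the metric and the review date before the experiment starts.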

Experiment

Now we have the criteria for success, we can go for it. In this case, it's easy - add the button.

In other cases, you'll be carrying out a marketing plan, or some development, or whatever. I'm calling this an experiment, but really it's just doing the thing. You need, though, only do enough to prove your hypothesis - or rather, enough that you could potentially disprove it.

Review

At the earliest point you can, test the hypothesis against reality.

In our daft example, we said we should measure a change in just a week - so this point would be exactly a week after the button is added. So at that point, stop. Measure.

If the results confirm your hypothesis, then move on. If they don't, you've cost the company a little over a week.
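The "stop and measure" step can be sketched as a single, dispassionate comparison. This is a hypothetical helper, assuming the success criterion from the button example ("maintain, or increase, our button presses") and made-up numbers:

```python
def review(baseline: float, measured: float) -> str:
    # The hypothesis predicted the metric would hold steady or
    # increase, so any value at or above the baseline confirms it.
    # No debate, no re-litigating the hypothesis - just the verdict.
    return "success" if measured >= baseline else "failure"

print(review(baseline=120.0, measured=131.0))  # the hypothesis held
print(review(baseline=120.0, measured=96.0))   # a cheap, clean failure
```

The point of keeping the rule this mechanical is exactly the discipline described below: the verdict falls out of the pre-agreed criterion, not out of a fresh argument.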

Whatever you do, don't now get embroiled in debates over whether the hypothesis was right or not. This requires a bit of discipline, but allowing yourself to question the hypothesis reopens a debate and also means the experimental results are worthless.

Instead, always declare either success or failure dispassionately, and decide on a new hypothesis to test. The results of your previous experiment will often feed into the next, and it's important to build a culture that understands that this gain in information is valuable whether or not the hypothesis was proven.

That doesn't mean that any failed experiment is abandoned entirely, of course - if there's strong consensus that another week would show the change, that'll make a fine experiment to run. But it's a distinct experiment, and it may well be that another experiment makes more sense.

The sunk cost fallacy

By drawing a clear line under each experiment, you also solve another problem - the sunk cost fallacy. That's the temptation to let money already spent sway your decision: if you spend $20,000 trying to solve a problem, estimate it'll take another $40,000 to complete, and only then find an alternate solution that will cost a total of $30,000, the already "spent" $20,000 shouldn't factor into your choice - yet, all too often, it does.

By using hypothesis-driven decision making, you avoid this naturally - each experiment, each step forward, is taken in isolation.
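The arithmetic behind the fallacy is worth spelling out, using the figures from the text. Only costs still to come should drive the decision; the helper name below is just for illustration:

```python
def cheaper_option(remaining_current: float, total_alternative: float) -> str:
    # Money already spent is deliberately absent from this comparison:
    # only future spend matters.
    return "switch" if total_alternative < remaining_current else "continue"

sunk = 20_000          # already spent - irrelevant to the choice
remaining = 40_000     # estimated cost to finish the current approach
alternative = 30_000   # total cost of the newly found alternative

print(cheaper_option(remaining, alternative))  # → switch
```

Comparing $30,000 against $40,000 makes switching the obvious choice; the trap is comparing $30,000 against the $60,000 "total investment" and feeling the first $20,000 would be wasted.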

Fail fast, but fail well

Failing fast, therefore, is as much about a solid framework for identifying failure as it is about speed. Done well, it'll not only limit failure to a pre-agreed scope, but let a good team gain a lot of knowledge from such failures. And when things go well, those successes, too, are clearly identified.

The way to get the most out of the methodology is to be disciplined throughout: get solid consensus over the hypothesis and the experimental method, and seek an objective outcome.

Happy failing!
