
Matt Eland


Confirmation Bias: How your brain wants to wreck your code

We suck at testing our own code. We suck so badly at it that it has given rise to entire professions such as Quality Assurance analysts and test engineers. We're not necessarily bad at coding, but we're notoriously bad at finding our own bugs.

Why is that? How can some of the smartest people out there ship some of the most glaring omissions and defects?

Confirmation Bias

The answer lies in confirmation bias:

Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that affirms one's prior beliefs or hypotheses.

Essentially, when we're developing features, we're evaluating whether or not our code works. We run our application to make sure it meets every stated requirement and every unstated one we can think of. When all the boxes are checked, we hand the code off to QA and they tell us all the things we missed.

This is because we're biased towards confirming that our code works the way we expect it to. We test with a narrow set of steps that we repeat every time until our code works for that workflow, then we do some cursory testing on the feature and move on to its next aspect or the next feature. We rarely stop and question our core assumptions about our test data, environment setup, or even the edge cases in business rules.

It's not that we're bad developers; it's that we're so focused on the technical weeds of implementing new features in the best (fastest, cleanest, simplest, safest, most interesting, etc.) way possible that we don't step back to see the forest: the feature as a whole and its full context in the larger product offering.

What about Test Driven Development?

But what about Test Driven Development (TDD)? That's supposed to improve quality and prevent bugs, right? Shouldn't that be the solution? After all, in TDD, you write the tests first and then code the application to pass those tests. Doesn't that bypass confirmation bias by putting the requirements up front?

Test Driven Development cycle: Red (failing tests), Green (passing tests), Refactor (cleaning up test code and the code under test)

Sure, TDD does improve quality, and it reduces the odds of software breaking in the same way twice, but it doesn't get around the core problem of confirmation bias. The problem is that we don't know all of the requirements up front, for a wide variety of reasons.

Maybe you don't realize some requirements until you see a working prototype, or perhaps you're in an agile environment where writing up a comprehensive set of requirements for everything going into a sprint is prohibitively time-consuming. Whatever the reason, you rarely have a full set of requirements that can shield you from every potential bug in your code.

To be fair, this is certainly far better than writing code and verifying that it looks like it works, but we're still rooting for the code to match a list of requirements, whether that's by getting unit tests to pass or manually testing the software.
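To make that concrete, here's a minimal sketch (with hypothetical names) of how a test-first workflow still inherits our bias: these tests were written first and the implementation was coded to make them pass, so every stated requirement is covered, yet an unstated one slips through.

```python
# Written first (red), then the implementation below made them pass (green).
# Every *stated* requirement is covered; the unstated one slips through.

def bulk_discount(quantity: int, unit_price: float) -> float:
    """Order total with 10% off for orders of 10 or more items."""
    total = quantity * unit_price
    if quantity >= 10:
        total *= 0.9
    return total

def test_no_discount_below_threshold():
    assert bulk_discount(9, 10.0) == 90.0

def test_discount_at_threshold():
    assert bulk_discount(10, 10.0) == 90.0

# The requirement nobody wrote down, so nobody wrote the test:
# bulk_discount(-5, 10.0) returns -50.0, a negative "total" we never intended.
```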

Solving Confirmation Bias

So, if professional developers are so bad at finding problems with their own code, why are quality assurance professionals so much better at finding bugs? In part it's because of their specialized skillset, training, and proven experience, but I think it goes deeper than that.

Just as confirmation bias makes developers tend to confirm that their code works, testers often have confirmation bias that bugs are lurking deep inside the system and so they dig until they uncover as many as they can.

Think about it: if you're a developer eager to see whether your hard work paid off so you can move on to the next task and keep the project on schedule, you're going to have an entirely different approach to testing than a tester who looks at the code as a hunt for bugs waiting to ruin a user's day. Testers take pride in examining the software with a focus on finding each and every error, edge case, usability issue, or missing validation rule, and that's a very good thing.

What if I'm not a tester?

Okay, so motivated and talented quality assurance professionals who understand software development are a big part of the answer to confirmation bias, but what if you don't have a tester or can't afford to hire one? What can you do to eliminate bugs before handing code off to testers or shipping it to production?

I have a few ideas that could help developers act more like highly-skilled testers:

Trick your brain while verifying code

When you're wrapping up a set of changes, you need to signal to your brain that you're looking at things from a fresh and different perspective. Do whatever it takes to make your brain look at things differently: move to a different location in your home or office, change the lighting or decorations, put on a literal hat, stand instead of sit. Do as much as you can to signal that this is not business as usual, then look over the changed code you're committing, expecting to find bugs. Don't find any? Great, fire up your application and give it whatever random input you can. Really try to break it.
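If you want help generating that random input, a property-based testing library can hammer your code with strings far weirder than anything you'd type by hand. Here's a minimal sketch using the hypothesis library; parse_age is a hypothetical stand-in for whatever input handling you just changed.

```python
from hypothesis import given, strategies as st

def parse_age(raw: str) -> int:
    """Parse a user-supplied age field, rejecting implausible values."""
    value = int(raw)  # raises ValueError on non-numeric garbage
    if not 0 <= value <= 150:
        raise ValueError(f"implausible age: {value}")
    return value

@given(st.text())  # hundreds of arbitrary strings, weirder than any you'd type
def test_parse_age_rejects_garbage_politely(raw):
    try:
        parse_age(raw)
    except ValueError:
        pass  # a polite rejection is fine; any other exception is a bug
```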

Test the things that scare you most

While you were making your changes, chances are something popped into the back of your mind and worried you about the areas of code you were touching: some nagging fear or uncertainty, whether about potentially breaking something else or about something you didn't fully understand. Stop and investigate it until your anxiety is lessened. As much as our minds try to trick us, they're also really good at background processing and making connections, and chances are they've zeroed in on some of the largest areas of risk in your domain.
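One concrete way to investigate is to pin down the scary code's current behavior with a quick characterization test before you commit, so your change (or the next person's) can't drift silently. A sketch, with hypothetical names:

```python
# Capture today's outputs from the code you're nervous about so a future
# change can't alter them unnoticed. Names here are hypothetical.

from decimal import Decimal

def prorate(amount: Decimal, days_used: int, days_in_period: int) -> Decimal:
    """The legacy billing helper this change touches: the scary part."""
    return (amount * days_used / days_in_period).quantize(Decimal("0.01"))

def test_full_period_charges_full_amount():
    assert prorate(Decimal("30.00"), 30, 30) == Decimal("30.00")

def test_single_day_charges_one_thirtieth():
    assert prorate(Decimal("30.00"), 1, 30) == Decimal("1.00")
```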

Set a minimum time limit and stick to it

If you sit down and say "I'm going to test this for 15 minutes trying to break it, and I'm not going to stop until the timer expires or I find some issues", you're turning confirmation bias on its head by dedicating the time to testing the change. You want that time to be spent productively, so you're actually psychologically incentivizing yourself to find problems with your code.

Look at your code through a fixed set of 'lenses'

This is similar to my suggestion on code review, but here you take multiple passes through your application when testing. That means focusing on a specific area of risk and testing the code top to bottom against that risk (a couple of the lenses below are sketched as tests after the list).

Some examples of lenses include:

  • Looking for validation issues
  • Testing for security problems such as SQL Injection
  • Working only with a keyboard
  • Working only on a tablet or touchpad
  • Using a certain browser that scares you so much it keeps you awake at night
  • Testing as anonymous users
  • Testing as authenticated users
  • Testing as administrators
  • Looking at how a feature works with each of the other major features (one pass per feature)
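As a concrete starting point, here's a minimal sketch of turning two of these lenses, user roles and SQL-injection-shaped input, into repeatable test passes with pytest's parametrize. The catalog and search_products function are hypothetical stand-ins so the example runs on its own.

```python
import pytest

# Hypothetical stand-ins so this sketch runs on its own: a tiny in-memory
# catalog and the search endpoint under test.
CATALOG = [
    {"name": "laptop", "visibility": "public"},
    {"name": "internal tool", "visibility": "admin"},
]

def search_products(term: str, role: str) -> list:
    """Return catalog entries matching `term` that `role` is allowed to see."""
    visible = [p for p in CATALOG
               if role == "admin" or p["visibility"] == "public"]
    return [p for p in visible if term.lower() in p["name"]]

# Lens: testing as anonymous, authenticated, and administrator users.
@pytest.mark.parametrize("role", ["anonymous", "authenticated", "admin"])
def test_search_respects_role(role):
    for product in search_products("tool", role=role):
        assert role == "admin" or product["visibility"] == "public"

# Lens: SQL-injection-shaped input should yield no results, not an error.
@pytest.mark.parametrize("payload", ["' OR '1'='1", "'; DROP TABLE users;--"])
def test_search_survives_hostile_strings(payload):
    assert search_products(payload, role="anonymous") == []
```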

Whether or not you're a tester, you have the power to fight against confirmation bias. Of course, it'd be better if we could eliminate as many classes of defects as possible outright, but at some level we'll always need to test, and so we'll need to find the best ways to remove or work around our biases and really test our code.

Top comments (3)

Jan Wedel

We don’t have QA testers and I don’t think this is really necessary (which you seem to suggest).

In my team, we love testing because that makes us sleep better after shipping code to production.

To fight the confirmation bias, we do code reviews (which is OK-ish, but not optimal), but much better, we do pair programming whenever possible (in that case, we don't do code reviews). This really makes you develop differently. Usually, while pairing, you won't write code that "will hopefully be OK" because of the peer pressure. You will write more and better tests.

Matt Eland

I do think pair programming is a help on the journey. I've worked with all forms of QA, from none to a dedicated and very involved department that did tons of exploratory testing as well as feature and bug testing. I've seen good dev testers and bad ones, as well as awesome and awful QA. Every team's needs are different.

Aleksi Kauppila

This is a really nice series. Thanks and good job Matt 👍👍