
The guilt of not testing everything

Joseph Jung (ozymandias547) ・2 min read

My CTO: “We’re really impressed with your testing solution, you’ve gone above and beyond!”

(Immediate feeling of guilt washes over me)

This happened a couple of months ago, after my CTO heard about a testing solution I put together for our web applications. Instead of hand-coding the QA Selenium tests, we auto-generate them by recording and configuring the tests. (Shameless plug: it’s available for free over at snaptest.io.) This tool has dramatically increased our coverage, so why did I still feel guilty about accepting the compliment?

It’s because I unreasonably assume testing should be exhaustive.

Exhaustive testing refers to running tests on every possible combination of actions and data.

The guilt comes when I say things like: “I’ve tested the new feature”, or “the tests are passing”. In the back of my mind, I know there are hundreds of thousands of combinations I didn’t test for.

We must give ourselves a break and remember that exhaustive testing is usually not possible. One must reconcile the cost/reward balance sheet of testing, especially when you start thinking about negative tests.

Negative testing (also called destructive testing or error-path testing) refers to checking how gracefully a system handles errors or unexpected situations. This is where exhaustive testing gets exhausting… Don’t expect to get them all.

Here are my recommendations for handling negative test cases:

  1. Pick off the easy negative tests like form validation.
  2. Defer negative tests that will take a lot of work to create and maintain, such as simulating slow network connections in mobile applications. It may be more economical to just test those manually on occasion.
  3. Spend time writing negative tests for the most commonly experienced errors. Ask yourselves: which errors are users most likely to see? Cover those.
  4. Don’t worry if you don’t cover every state of the app. Just remember, any test is helpful!
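To make point 1 concrete, here is a minimal sketch of an "easy" negative test for form validation. The `validate_email` function and its rules are hypothetical, invented just to illustrate the shape of a negative test: feed the system bad input and assert it fails gracefully.

```python
import re

def validate_email(value):
    """Hypothetical form validator: returns an error message, or None if valid."""
    if not value:
        return "Email is required."
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", value):
        return "Please enter a valid email address."
    return None

# Negative tests: bad input should produce a helpful error, not a crash.
assert validate_email("") == "Email is required."
assert validate_email("not-an-email") == "Please enter a valid email address."

# One positive test to make sure valid input still passes.
assert validate_email("user@example.com") is None
```

Tests like these are cheap to write and maintain, which is exactly why they are the ones to pick off first.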

Discussion

Pierre Sassoulas

A mentor once said to me

"If you have to test the sine function, you can't test every possible value, because it is a continuous function with infinitely many values. Your work as an engineer is to find a compromise: test enough that it never breaks in production, without being so expensive that our competitors take our business. In order to do that, you have to test intelligently."

"Every test is useful" should be taken with a grain of salt. A test that covers a very small part of the code, never catches any errors, and adds 15s every time you launch your test suite should probably be removed...

James Hood

Nice post! I got over this guilt after I had to maintain a set of Selenium tests that took 8+ hours to run and were flaky on top of it. 😱 Now I give my teams strong guidance not to think of integration-level testing in terms of coverage percentages, but rather in terms of overall risk management. I'm also an advocate of white-box integration tests, where you mock certain layers to test your highest-risk, highest-complexity areas of code in a controlled, fast-running environment.

edA‑qa mort‑ora‑y

The primary things to test are those you've written as use cases or are selling as features. Many people seem to forget these tests. Beyond that it's relatively open: maybe a few perceived combinations and expected common errors.

Sumant H Natkar

I like unit testing my code because it helps me find null reference exceptions, or a particular case I forgot to handle in my code.

But expecting more than that is too much for me.

Also tests are especially helpful when you change something, they give immediate feedback if you have broken something which was working.

Beekey Cheung

Great post. I think it's also important to note that a large part of the value of tests comes from making it easier to debug the issues users do see. Passing tests tell you all the things that are not the cause.

I once worked at a company where a specific system took hours to debug when something came up. We eventually rewrote it with tests. Bugs still popped up, but they took 15 minutes for us to debug, fix the code, and deploy.

Vernet Loïc

To quote Fabien Potencier, creator of the Symfony framework: "Some tests are better than... no test at all!"

Damian eMe JoTa

Use this cheat sheet of data type attacks and web tests, and you will be happier:

testobsessed.com/wp-content/upload...