Discussion on: Why Testing After Is a Bad Practice

Russell Standish

TDD works well when the behaviour is specified up front. The problem is that not a lot of my code is like that. Writing the code is itself a process of discovering what is required (which is very agile - write the code, get it into the hands of the stakeholder ASAP, revise the code according to feedback).

As much as possible, I do testing at the same time as coding. You need to test the code anyway to verify your solution is doing what it is supposed to do, so why not make a little extra effort and automate those tests? So yeah - having a separate ticket item "Write tests for ..." is a bad move.
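For instance, the check you'd do by hand anyway - "does it return the right value for this input?" - can be captured as a small automated test. A rough sketch (the `parseTimeout` function here is made up purely for illustration):

```cpp
// Sketch only: parseTimeout is a hypothetical unit under development.
#include <cassert>
#include <string>

int parseTimeout(const std::string& config) {
    // Minimal stand-in logic so the sketch compiles and runs.
    return config == "timeout=30" ? 30 : -1;
}

int main() {
    // The same check you'd otherwise run by hand once and throw away,
    // now kept around as part of the regression suite.
    assert(parseTimeout("timeout=30") == 30);
    return 0;
}
```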

I do agree that this process does tend to miss edge cases. Hence, I also try to factor in a period of what I call "coverage testing" at some point during an iteration. Using a code coverage tool (e.g. gcov), take a close look at the lines of code not covered by the regression suite. Then think about what is involved in exercising those lines of code, and create tests where relevant. It's a great way of catching bugs. Not all lines of code need to be exercised by a test - the "dummy tests" mentioned above - so focussing on a percentage coverage figure is not really worthwhile. Focus on the lines not covered, and decide then whether testing is appropriate.
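To make that concrete: with gcc you compile with --coverage, run the suite, then run gcov on the source file to get an annotated listing where unexecuted lines are flagged. Suppose that annotation showed the error branch of the (hypothetical) `parseTimeout` above was never hit - a targeted test can then exercise exactly that line. A sketch, repeating the made-up function so it stands alone:

```cpp
// Sketch: the gcov annotation showed the "invalid input" branch of
// parseTimeout was never executed, so this test exercises it directly.
#include <cassert>
#include <string>

// Same hypothetical function as in the earlier sketch.
int parseTimeout(const std::string& config) {
    return config == "timeout=30" ? 30 : -1;
}

int main() {
    // Covers the previously unexercised error path.
    assert(parseTimeout("garbage") == -1);
    return 0;
}
```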

Matti Bar-Zeev

I agree with most of what you wrote here. As for the "coverage testing" period, this sort of thing does not apply when practicing TDD, for the obvious reason that there is not a line of code which did not start with a test.
What you mentioned about checking out the coverage and writing tests accordingly is also my way when I need to write tests after. How else would you do that, right? ;)