re: When do you write your tests?


I don't use TDD, but I definitely write tests; it's worth noting that those are two different things.

Using a slightly different meaning of "when", I write tests when there's a behavior I want to be sure of. Asking why you write tests is more productive than strictly asking whether or not you're following the rules.

I always ask myself, what do I want to make? How can I make the API clean? How can I assert that the behavior is correct? What are the steps necessary so that I'm able to verify that? What information do I want to get from those tests?

At this point, I'm just thinking, designing, and jotting notes. No code (not even test code) is written, except for maybe sketching out possible APIs and seeing which I like best.

Afterwards, I go for the implementation first, to make sure I can actually implement the API I designed. Along the way, I keep asking questions: if I write it like this, can I still verify the behavior? Is there something the implementation needs that I failed to consider in the design phase? If so, I can iterate and improve the API with the new knowledge I've gathered.

Once I think the implementation is done, I write the tests to verify what I want to be sure of. This is usually the point where I notice the stupid mistakes I made in the implementation logic, thankfully before they reach production.
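This implementation-first workflow might look like the following sketch (a hypothetical `clamp` function, not from the post): the API was designed up front, and the test written afterwards verifies the behavior I care about.

```python
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

def test_clamp():
    # Verify the behaviors I want to be sure of, after implementing:
    assert clamp(5, 0, 10) == 5    # value inside the range is untouched
    assert clamp(-3, 0, 10) == 0   # value below the range snaps to low
    assert clamp(42, 0, 10) == 10  # value above the range snaps to high

test_clamp()
```

A classic bug such as writing `min(low, max(value, high))` would make the second and third assertions fail, which is exactly the kind of mistake this after-the-fact test catches before production.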

On another note, I also write tests when I work with a library or piece of code I haven't come across before. At the company where I work we have a mechanism that auto-generates DB access code from annotations, and I remember writing tests for the auto-generated code (which I technically didn't need to do, as the code generator was heavily tested by its authors) just to make sure I understood how it works. Most of the time these are throwaway tests that I don't commit to source control.
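Such a throwaway "learning test" can be as small as pinning down one surprising behavior of an unfamiliar API. For example, with the standard library's `itertools.groupby`:

```python
from itertools import groupby

def test_groupby_only_groups_consecutive_items():
    data = [1, 1, 2, 1]
    groups = [(key, list(items)) for key, items in groupby(data)]
    # groupby does NOT collect all equal items; it only groups
    # consecutive runs, so the trailing 1 forms its own group.
    assert groups == [(1, [1, 1]), (2, [2]), (1, [1])]

test_groupby_only_groups_consecutive_items()
```

Once the test has taught me how the code behaves, it has done its job, and I can delete it rather than commit it.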

So there you have it. I don't write tests first, but I design and code with tests in mind. Some might say that this is in a sense TDD, but TDD purists might argue that I don't actually follow the formula of red-green-refactor religiously.


This is the first time I've heard the "red-green-refactor" catchphrase, but isn't it supposed to be "red-refactor-green"? 😅


I believe it's really red-green-refactor, as Wikipedia and several other articles confirm:

Each test case fails initially: This ensures that the test really works and can catch an error. Once this is shown, the underlying functionality can be implemented. This has led to the "test-driven development mantra", which is "red/green/refactor", where red means fail and green means pass. Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring.
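A minimal illustration of the cycle (a hypothetical `slugify` function, chosen just for this sketch): the test comes first and fails, the simplest implementation makes it pass, and only then is the code refactored with the test as a safety net.

```python
# RED: write the test first; it fails because slugify doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# GREEN: the simplest implementation that makes the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# REFACTOR: clean up while the test stays green; split() also
# collapses repeated whitespace without changing the tested behavior.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()
```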

The principle is that if you're in the red, you can't refactor confidently, because you don't know whether the code you're about to refactor is correct, or even exists at all!

I understand now. Thanks for your explanation!
