DEV Community

Why I Don't do TDD

Shai Almog on November 22, 2022

I recently gave a talk about debugging for the London Java Community. During the Q&A part of the talk, someone asked me about my approach to Te...
Jean-Michel πŸ•΅πŸ»β€β™‚οΈ Fayard • Edited

I agree with your description of the use cases and no-use cases

Generally I think the world would be better off if we saw Test Driven Development as a tool in our toolbox rather than something you SHOULD do if you want to be a REAL programmer.

Also true for many other things, as I said in this old article of mine:

42 things you MUST stop obsessing about if you want to become a good $PERSON - DEV Community πŸ‘©β€πŸ’»πŸ‘¨β€πŸ’»

Nicolus

Thing is, everyone wants a recipe for how to be a good programmer, and it's much more alluring to think "If I do TDD religiously I'll be a good programmer" than "If I get years of experience working on real products with various tools I'll be a good programmer". Plus, when you're in the TDD circle you get that warm fuzzy feeling of being able to talk down to anyone who's not doing TDD.

Jean-Michel πŸ•΅πŸ»β€β™‚οΈ Fayard

True

Joe Lapp

Thanks for saying this out loud. TDD makes sense when waterfall makes sense, which is when you have clear requirements in advance of development. It doesn't make sense when requirements and solution are evolving, as coding to the tests slows me down and holds me back from finding the right solution. I find TDD helpful once requirements are clear for a module, later in development. Given agile's prevalence, it seems to me that TDD ought to be rare at the start of a project.

Phil Ashby

Interesting - my view of TDD (and its cousin BDD) is that the value is higher for poorly specified systems, as the up-front test design forces early consideration of the specification, and likely a reduction in scope and effort to get something in front of its consumers, where feedback is obtained earlier.

My general view:

  • TDD/BDD or any creating tests first strategy applies an 'outside in' principle to designing a system, by asking 'how do we test that?': from general behavioural traits ('what does it do?') down through architectural decisions ('how does it do that?') to coupling considerations ('what does this API look like?') and internal detail ('how does that module work?'). At each stage the human preference to minimise the number of tests keeps the focus on core things, reducing accidental complexity issues.
  • Post-implementation test strategies apply a waterfall-like 'inside out' principle, where (unit) tests are created for details as they are implemented, then for integrations (coupling), then for behaviour - which all assumes an accurate a-priori specification. If the specification is not accurate (in my experience this is always the case!), both the implementation and the test-creation work is wasted effort, especially when the last set of tests discovers that the system doesn't do what the consumer(s) really wanted (or they have changed their mind by then).

As ever - it depends 😁 and YMMV.
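The 'outside in' principle Phil describes can be sketched in a few lines. This is a hypothetical example (the `ShoppingCart` name and API are invented for illustration): the test is written first and drives the design by asking "what does it do?" before "how does it do it?".

```python
# Hypothetical test-first sketch: this test function is conceived before
# ShoppingCart exists; its shape drives the API design.

def test_total_sums_item_prices():
    cart = ShoppingCart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12

# Minimal implementation written afterwards, just enough to pass the test:
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

test_total_sums_item_prices()
```

Note that the test only pins down observable behaviour (items in, total out), leaving the internal representation free to change.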

Joe Lapp

I'm not smart enough to figure everything out before I start coding. Even when I do think I have everything figured out, it turns out I'm often wrong and end up having to revise the API. For simple things, yes, I could do this, but for novel algorithms and architectures it's beyond me. I know, I've tried.

Phil Ashby

This is kind of my point too - in an evolving requirements world, asking 'how do I test it?' before coding up something untestable helps clarify the requirements as they exist now for the behaviour you are interested in. In your earlier comment you say:

"coding to the tests slows me down and holds me back from finding the right solution"

I would ask: how do you know when you arrive at the right solution? You must be testing your work, thus you have designed a test before you commit to a solution... you are thus doing some TDD, but perhaps not starting at the system level and asking: what needs doing first, and how do we know we're getting it right? 😁

Joe Lapp

Yes, I do test, once I've coded up a basic framework, so I know what my APIs are and have some evidence that they're reasonable. But unless I completely understand the interface beforehand, I start with coding to get my basic design. Testing can start once I generally have inputs mapping to outputs. If I'm not there yet, any tests I might have written might end up getting rewritten.

Mike Talbot ⭐

Very good points. For me, I tend to TDD when I'm building an API, as it helps as a way of testing it - e.g. it's the fastest way of building it. If/when mocking starts becoming ridiculous or contrived, then I stop.
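As a rough illustration of the kind of mocking Mike means (the `UserService` class and its `store` dependency are invented names, not from the article): a single mock for the data store keeps the test fast, and it's easy to see how this could tip into "contrived" once several layers of mocks are needed.

```python
from unittest.mock import Mock

# Hypothetical sketch: exercising an API class by mocking its data store.

class UserService:
    def __init__(self, store):
        self._store = store

    def display_name(self, user_id):
        user = self._store.find(user_id)
        return user["name"].title() if user else "guest"

store = Mock()
store.find.return_value = {"name": "ada lovelace"}
service = UserService(store)

assert service.display_name(42) == "Ada Lovelace"
store.find.assert_called_once_with(42)  # the API was called as designed
```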

Dave Lapchuk

APIs are also versioned entities where you need to ensure the behaviour of old versions never changes between releases.
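That versioning point can be captured with a contract-style test; this is a hypothetical sketch (the `greet_v1`/`greet_v2` functions are invented) of pinning an old version's behaviour so a release cannot silently change it.

```python
# Hypothetical contract test: v1's observable behaviour is frozen;
# new behaviour goes into a new version instead of changing v1.

def greet_v1(name):
    return f"Hello, {name}"

def greet_v2(name):
    return f"Hello, {name}!"  # the change ships as v2, v1 stays intact

# These assertions must keep passing across all future releases:
assert greet_v1("Ada") == "Hello, Ada"
assert greet_v2("Ada") == "Hello, Ada!"
```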

Alvaro • Edited

Nice article.
Nevertheless, I always come to the conclusion that people forget TDD is a tool for minimalistic code design, which is great for extreme programmers.
It's a very good tool to keep your feet on the ground (a problem that grows as experience increases), because it follows the YAGNI principle: most software functionality never gets extended, and speculative design and patterns become pure accidental complexity.

Shai Almog

Methodologies are great until people start treating them as a religion, at which point they often become a hindrance. I think there's no "one true way" to implement software correctly.

leob

Well written & articulated!

The thing about TDD is that it forces you to write tests at all - you can't do TDD (obviously) without writing tests... if you don't do TDD, then who or what forces you to write them at all? It can (and will) easily be forgotten; you need discipline.

Nevertheless I agree with you - only use TDD when it makes sense, because in some cases it does, and in other cases it doesn't.

It should not be a religion.

Valentin Nechayev

TDD is a religion which isn't fully followed in any practically important case. Strictly following it doesn't allow creating anything but trivial code from scratch. Modifying existing code contradicts it. Repeated requirement adjustments and an R&D phase contradict it. Testing approaches more advanced than direct lowest-level ("unit") functional testing contradict it. If anybody declares they use TDD for a product larger than a one-screen PoC, go check where the TDD principles are ignored.

OTOH, TDD is a good tool against lazy and cheating middle managers who tend to postpone any testing in order to declare a feature released as early as possible. An average Bill Lumbergh who releases code without tests will then violate not abstract (to him) programming principles - he will violate administrative rules and so spoil his own career. In that sense, requiring TDD in such a company, with proper enforcement at the lowest level, is a means to a good end :)

Dendi Handian • Edited

I still stick to test-after development, because I still need to check for breaking changes everywhere when updating the core and packages. TDD is more like a culture.

Maxi Contieri

Nice article!

TheNickest

Interesting post. I like that you seem to think thoroughly about concepts and whether you should use them. However, some of the opinions shared here I do not share.

Unit tests come relatively cheap. That's why they should be written: they assure that code does what it should at the lowest possible level, and they should run quickly and do no harm. I've experienced many cases where, in a strongly typed language, colleagues tended to write the tests for the code they had already produced, rather than for how it was specified. In the end the code did exactly not what it should have done, although all tests were green. So what went wrong? They did not see the tests as a highly supportive measure but as a necessary chore. Bad. Also, code coverage alone gives no hint about code quality, e.g. sticking to DRY, KISS, single responsibility, whatever principle smarter folks have come up with - you name it. This makes it a bad KPI. I can deliver you 100% coverage with the crappiest code.

Why I write most of my tests first: they come cheap (in time), I can run them (and therefore my code) endlessly, plus I really think about my goal: what am I supposed to do with what I get, and what shall I return?

Claiming that TDD does not improve your code is just as senseless as simply stating that it does. It depends highly on what you are doing with it.
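TheNickest's "test the specification, not the code you wrote" point can be sketched like this; the `apply_discount` function and its rules are invented for illustration, not taken from the article.

```python
# Hypothetical illustration: the tests are written from the specification
# ("a discount is subtracted; the price never goes below zero"), not
# reverse-engineered from an existing implementation.

def test_discount_is_subtracted():
    assert apply_discount(price=10.0, discount=3.0) == 7.0

def test_discount_never_goes_negative():
    assert apply_discount(price=10.0, discount=15.0) == 0.0

# Implementation written afterwards to satisfy the specification above:
def apply_discount(price, discount):
    return max(price - discount, 0.0)

test_discount_is_subtracted()
test_discount_never_goes_negative()
```

A suite derived from already-written code would happily stay green even if the "never negative" rule had been forgotten.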

Jakub Zalas

Thanks for this post. It's good to be challenged from time to time. My experience is entirely different.

"This company had one of the largest, most detailed sets of annex design books. Based on these design specifications the company built thousands of tests. We were supposed to pass a huge amount of tests with our system."

Seems like you're basing your opinion on an experience with a company that has not done TDD.

"As a result, TDD over-emphasizes the 'nice to have' unit tests, over the essential integration tests."

In my experience with TDD, unit tests are not nice to have. They're essential. Integration tests are still there, but we don't need a lot of them, and they won't be end-to-end either. That's thanks to the design TDD tends to encourage, based on decoupled components. If I get each component to work, as well as the integration between components, I'll have enough confidence the system works as expected, without bloated end-to-end integration tests. "Test a chain by testing every link."

"One point where this really fails is in UI testing. Solutions like Selenium, etc. made huge strides in testing web front ends. Still, the complexity is tremendous and the tests are very fragile."

Yes, that's why people who practice TDD do not use Selenium as part of their workflow, not on a large scale anyway. I'm missing your point here.

"Integration tests are more important for quality in the long run."

How so? My experience is that internal quality of software is much better achieved with small, focused, micro-tests (both unit and integration). External quality is where acceptance tests shine.

Mike Ritchie

I also struggle to implement TDD for new green field code, but I love it for bug fixes. If there’s a reproducible bug yet all my tests are passing, it means I don’t have complete coverage in my unit tests.

If I can add new tests that fail while my existing tests pass, there’s a good chance that I have an idea about what the defective code is. And if my subsequent changes to the codebase make the new tests pass, there’s a good chance that I’ve got a valid fix.
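The bug-fix workflow Mike describes can be sketched in miniature; the `slugify` function and its defect are invented for illustration (the original, hypothetically, crashed on empty input).

```python
# Hypothetical bug-fix workflow: first write a regression test that
# reproduces the defect (it would fail against the buggy version),
# then change the code until it passes alongside the existing tests.

def slugify(title):
    if not title:          # the fix: guard the empty-input case
        return ""
    return "-".join(title.lower().split())

# The new regression test that exposed the gap in coverage:
assert slugify("") == ""

# Existing tests still pass, so the fix is likely valid:
assert slugify("Why I Don't do TDD") == "why-i-don't-do-tdd"
```

The key property is that the new test fails before the fix and passes after it, while the old tests pass throughout.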

Wiktor Wandachowicz

"If we have a pre-existing system with tests, then TDD makes all the sense in the world. But testing a system that wasn’t built yet. There are some cases where it makes sense, but not as often as one would think."

For me that's a contradiction. How, then, do we create a new system (that wasn't built yet) that has tests, if we don't create tests as we go? Or maybe, sometimes, apply the TDD approach?

My opinion on this is to use a mix of all the necessary things. Write unit tests for the code written during the workday. Write unit tests for poorly specified parts before coding, hence use TDD. Use the help of testers, or just teammates, to check new (and existing) software artifacts. Deploy often, fix quickly. Listen to your customers, adapt, be agile.

Daniel Barton

I think the only good reason for not using TDD is long-running integration tests. Other than that, I really like programming against the tests - it makes programming really pleasant (especially debugging and refactoring). I personally hate tests, but I know I can't live without them, so if I can also use them for faster development, I'll do that.
Usually I spend 1-2 hours figuring out how to write the first test for the feature I'm trying to develop (what to mock and how, what the API will look like, etc...). But after that it's just a pleasure :D. Just copy-pasting tests with small modifications and writing/refactoring code without any fear. I usually get really good coverage without even focusing on it.

Valentin Nechayev

All this means you don't use true TDD; you just write tests. Well, that is really useful, unlike true TDD, which is an impractical religion.

Daniel Barton

That's true, what I am doing is not TDD by definition, but writing code against tests as much as I can. That's where I find the most value. :)

Mark

I think it is unfortunate that it is called TDD, as the name makes people think it is about testing. It is not.

TDD is about shaving off a small part of the bigger problem so that it can be clearly articulated in code. Testing is a by-product, code coverage is a by-product, covering the backside of the developer is a by-product. It is just another tool in our toolbox, but to think of it as being about unit testing is to miss a bigger, more important thing, and I believe that is why people view it as something they don't have to do.

I am not from the school of thought that says you should do TDD for absolutely everything, but you probably need to do it more than the average developer thinks. We do need the fine-grained thinking that enables us to write the tests in the first place, and that is actually harder than writing the tests, so if you get to that point you may as well quickly write down the test and validate your thinking. Writing the code first and the test after is OK, but most people will write wrong code that does compile, then write a test that validates their wrong code.

The long and short of it, from my understanding, is that TDD should probably be renamed to stop people thinking the main thing is unit tests, because those are just a by-product; the elemental thinking, the breaking down of the problem, and the encoding of that in a test is the main thing. I'll stop here because I don't like writing that much, but thought I'd add my 2 pence.

Shai Almog

Thanks for your thoughts. I've heard similar claims before, but I'm yet to see the point of TDD without the testing aspect. Dividing a problem into smaller problems is at the core of many tools and methodologies; I don't see how TDD is different here.

Sure, when we rely on the compiler alone we can inadvertently write bad code that satisfies the compiler (and linter). But a bad test can produce an even worse result, and it's often the case that proper tests are harder to write than the implementation.

Josu Martinez • Edited

Nice article! I'd however like to share one thought. TDD is about testing behavior, not implementation. Besides, integration tests serve the purpose of checking that an external dependency of your system behaves as expected. Therefore, if one could have control over the behavior of a system dependency, one could write a test suite for it before writing any code. In the cases where existing dependencies are implemented by other teams within our organization (a pretty common case in my experience), we can use Consumer-Driven Contract (i.e., CDC) testing; we can then follow TDD and enjoy all its benefits.

As a final note, I find hexagonal architecture a great complement to TDD. It enables you to properly decouple domain logic from dependency logic, and thus to focus first on developing your domain model, leaving the reasoning about how to integrate with an external dependency for later.
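The hexagonal idea Josu mentions can be sketched briefly; everything here (`RatePort`, `convert`, `FakeRates`) is an invented example, assuming Python's `typing.Protocol` for the port.

```python
from typing import Protocol

# Hypothetical hexagonal-architecture sketch: the domain depends only on a
# "port" (an interface); adapters for real dependencies are plugged in later.

class RatePort(Protocol):
    def rate(self, currency: str) -> float: ...

def convert(amount: float, currency: str, rates: RatePort) -> float:
    # Pure domain logic, testable without any external rate service.
    return round(amount * rates.rate(currency), 2)

# In tests, a trivial in-memory adapter stands in for the real one:
class FakeRates:
    def rate(self, currency: str) -> float:
        return {"EUR": 0.9}[currency]

assert convert(100.0, "EUR", FakeRates()) == 90.0
```

The real HTTP or database adapter only has to satisfy the same port, so its integration tests stay separate from the domain tests.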

Ellis

4 points of thought:

  1. I think TDD is, ideally, when you write the tests first, then write the code later (hence the second D, driven, in TDD). In practice people write the tests during or after the coding; I am not sure that is TDD as such.
  2. Also, ideally, the tests should be written by someone else (a tester, not the developer) whose job is creating tests.
  3. The necessity and value-for-money of TDD for backend and for frontend are very different, which is often ignored or not understood. Backend and frontend are both software applications, but they are two very different beasts. Things like security, data validation, and testing are very important for the backend, and not nearly as important for the frontend, relatively speaking.
  4. People can also become less inclined to update the code if they will also have to fix the existing tests, which are often more difficult to read. That is to say, tests can cause a codebase to age/deteriorate faster.
craft Softwr & Music

What about the feedback time of E2E tests, and their poor reliability (flaky tests)?
blog.octo.com/en/the-test-pyramid-...