I recently gave a talk about debugging for the London Java Community. During the Q&A part of the talk, someone asked me about my approach to Test-Driven Development...
I agree with your description of the use cases and non-use cases.
Generally I think the world would be better off if we saw Test Driven Development as a tool in our toolbox rather than something you SHOULD do if you want to be a REAL programmer.
Also true for many other things as I said in this old article of mine
42 things you MUST stop obsessing about if you want to become a good $PERSON - DEV Community
Thing is, everyone wants a recipe for how to be a good programmer, and it's much more alluring to think "If I do TDD religiously I'll be a good programmer" than "If I get years of experience working on real products with various tools I'll be a good programmer". Plus, when you're in the TDD circle you get that warm fuzzy feeling of being able to talk down to anyone who's not doing TDD.
True
Thanks for saying this out loud. TDD makes sense when waterfall makes sense, which is when you have clear requirements in advance of development. It doesn't make sense when requirements and solution are evolving, as coding to the tests slows me down and holds me back from finding the right solution. I find TDD helpful once requirements are clear for a module, later in development. Given agile's prevalence, it seems to me that TDD ought to be rare at the start of a project.
Interesting - my view of TDD (and its cousin BDD) is that the value is higher for poorly specified systems, as the up-front test design forces early consideration of the specification, and likely a reduction in scope and effort to get something in front of its consumers, where feedback is obtained earlier.
My general view:
As ever - it depends, and YMMV.
I'm not smart enough to figure everything out before I start coding. Even when I do think I have everything figured out, it turns out I'm often wrong and end up having to revise the API. For simple things, yes, I could do this, but for novel algorithms and architectures, it's beyond me. I know, I've tried.
This is kind of my point too - in an evolving requirements world, asking 'how do I test it?' before coding up something untestable helps clarify the requirements as they exist now for the behaviour you are interested in. In your earlier comment you say:
I would ask: how do you know when you arrive at the right solution? You must be testing your work, so you have designed a test before you commit to a solution... you are thus doing some TDD, but perhaps not starting at the system level and asking: what needs doing first, and how do we know we're getting that right?
Yes, I do test, once I've coded up a basic framework, so I know what my APIs are and have some evidence that they're reasonable. But unless I completely understand the interface beforehand, I start with coding to get my basic design. Testing can start once I generally have inputs mapping to outputs. If I'm not there yet, any tests I might have written might end up getting rewritten.
Very good points. For me, I tend to TDD when I'm building an API, as it helps as a way of testing it - e.g. it's the fastest way of building it. If/when mocking starts becoming ridiculous or contrived, then I stop.
APIs are also versioned entities where you need to ensure the behaviour of old versions never changes between releases.
Nice article.
Nevertheless, I always come to the conclusion that people forget TDD is a tool for minimalistic code design, which is great for extreme programmers.
It's a very good tool for keeping your feet on the ground - a problem that grows as experience increases - because it follows the YAGNI principle: most software functionality never gets extended, so elaborate design and patterns become nothing but accidental complexity.
Methodologies are great until people start treating them as a religion, at which point they often become a hindrance. I think there's no "one true way" to implement software correctly.
Well written & articulated!
The thing about TDD is that it forces you to write tests at all - you can't do TDD (obviously) without writing tests... if you don't do TDD, then who or what forces you to write them at all? It might (and will) easily be forgotten; you need discipline.
Nevertheless I agree with you - only use TDD when it makes sense, because in some cases it does, and in other cases it doesn't.
It should not be a religion.
TDD is a religion which isn't fully followed in any practically important case. Strictly following it doesn't allow creating anything but trivial code from scratch. Modifying existing code contradicts it. Multiple requirement adjustments and the R&D phase contradict it. More advanced testing approaches than direct lowest-level ("unit") functional testing contradict it. If anybody declares they use TDD for a product larger than a one-screen PoC, go check where the TDD principles are being ignored.
OTOH, TDD is a good tool against lazy and cheating middle managers who tend to postpone any testing in order to declare a feature released as early as possible. An average Bill Lumbergh who releases code without tests won't just be violating abstract (to him) programming principles - he will be violating administrative rules and so spoiling his own career. In that sense, requiring TDD in such a company, with proper enforcement at the lowest level, is a net good :)
I still stick to Test-After Development, because I still need to check for breaking changes everywhere when updating the core and packages. TDD is more like a culture.
Nice article!
Interesting post. I like that you seem to think thoroughly about concepts and whether you should use them. However, some of the opinions shared I do not share.
Unit tests come relatively cheap. That's why they should be written, as they assure code does what it should do at the lowest possible level. They should run quickly and with no harm. I've experienced many cases where, in a strongly typed language, colleagues tended to write the tests for the code they had produced first, rather than for how it was specified. In the end the code did exactly not what it should have done, although all tests were green. So what went wrong? They did not see the tests as a highly supportive measure but as a necessary deed. Bad. Also, code coverage alone does not give any hint about code quality, e.g. sticking to DRY, KISS, single responsibility - you name the principle smarter folks have come up with. This makes it a bad KPI. I can deliver you 100% coverage with the crappiest code.
Why I write most of my tests first: they come cheap (in time), I can run them endlessly (and therefore my code), plus I really have to think about my goal: what am I supposed to do with what I get, and what shall I return?
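To illustrate that "what do I get, what shall I return" mindset, here is a minimal JUnit 5 sketch; `DiscountCalculator` and the "10% off above 100" rule are invented for the example. The point is that the expected values come from the specification, not from whatever the code happens to return.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical example: the expected values come from a made-up spec
// ("orders above 100 get a 10% discount"), not from existing code.
class DiscountCalculatorTest {

    @Test
    void ordersAbove100Get10PercentDiscount() {
        assertEquals(180.0, new DiscountCalculator().priceAfterDiscount(200.0), 0.001);
    }

    @Test
    void ordersAtOrBelow100PayFullPrice() {
        assertEquals(100.0, new DiscountCalculator().priceAfterDiscount(100.0), 0.001);
    }

    // Written afterwards: just enough implementation to make the tests pass.
    static class DiscountCalculator {
        double priceAfterDiscount(double orderTotal) {
            return orderTotal > 100.0 ? orderTotal * 0.9 : orderTotal;
        }
    }
}
```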
Claiming that TDD does not improve your code is just as senseless as just stating it does. It depends highly on what you are doing with it.
Thanks for this post. It's good to be challenged from time to time. My experience is entirely different.
Seems like you're basing your opinion on an experience with a company that has not done TDD.
In my experience with TDD, unit tests are not nice to have. They're essential. Integration tests are still there, but we don't need a lot of them, and they won't be end-to-end either. That's thanks to the design that TDD tends to encourage, based on decoupled components. If I get each component to work, and the integration between components, I have enough confidence the system works as expected, without bloated end-to-end integration tests. "Test a chain by testing every link."
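As a rough illustration of "testing every link", here is a hedged JUnit 5 sketch; `OrderValidator`, `OrderService`, and the in-memory repository are all invented for the example. Each component is tested on its own, plus the seam between them, with no end-to-end environment involved.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

// Hypothetical components, invented to illustrate "test every link"
// without an end-to-end setup.
class ChainTestingSketch {

    static class OrderValidator {
        boolean isValid(String item) {
            return item != null && !item.isBlank();
        }
    }

    static class InMemoryOrderRepository {
        private final List<String> items = new ArrayList<>();
        void save(String item) { items.add(item); }
        boolean contains(String item) { return items.contains(item); }
    }

    static class OrderService {
        private final OrderValidator validator;
        private final InMemoryOrderRepository repository;

        OrderService(OrderValidator validator, InMemoryOrderRepository repository) {
            this.validator = validator;
            this.repository = repository;
        }

        void place(String item) {
            if (validator.isValid(item)) {
                repository.save(item);
            }
        }
    }

    // Link 1: the validator on its own.
    @Test
    void validatorRejectsBlankOrders() {
        assertFalse(new OrderValidator().isValid("  "));
    }

    // Link 2: the service wired to a fast in-memory collaborator instead of
    // a real database or a full end-to-end environment.
    @Test
    void serviceStoresValidOrders() {
        InMemoryOrderRepository repository = new InMemoryOrderRepository();
        new OrderService(new OrderValidator(), repository).place("book");
        assertTrue(repository.contains("book"));
    }
}
```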
Yes, that's why people who practice TDD do not use Selenium as part of their workflow - not on a large scale anyway. I'm missing what your point is here.
How so? My experience is that internal quality of software is much better achieved with small, focused, micro-tests (both unit and integration). External quality is where acceptance tests shine.
I also struggle to implement TDD for new greenfield code, but I love it for bug fixes. If there's a reproducible bug yet all my tests are passing, it means I don't have complete coverage in my unit tests.
If I can add new tests that fail while my existing tests pass, there's a good chance that I have an idea about what the defective code is. And if my subsequent changes to the codebase make the new tests pass, there's a good chance that I've got a valid fix.
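A minimal sketch of that bug-fix workflow, with invented names (`PriceFormatter` and its rounding bug are hypothetical, not from the article): first encode the bug as a test that fails against the current code, then make it pass.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.jupiter.api.Test;

// Hypothetical regression test: PriceFormatter and the rounding bug are
// invented to illustrate the workflow described above.
class PriceFormatterRegressionTest {

    // Step 1: encode the reported bug as a test. Against the buggy version
    // (which simply truncated to two decimals) this test fails, proving the
    // existing suite had a coverage gap.
    @Test
    void roundsHalfUpInsteadOfTruncating() {
        assertEquals("10.35", new PriceFormatter().format(10.345));
    }

    // Step 2: fix the production code. When this test goes green while the
    // rest of the suite stays green, there is good evidence the fix is valid.
    static class PriceFormatter {
        String format(double price) {
            return BigDecimal.valueOf(price).setScale(2, RoundingMode.HALF_UP).toPlainString();
        }
    }
}
```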
"If we have a pre-existing system with tests, then TDD makes all the sense in the world. But testing a system that wasnβt built yet. There are some cases where it makes sense, but not as often as one would think."
For me that's a contradiction. How then do we create a new system (one that wasn't built yet) that has tests, if we don't create tests as we go? Or maybe, sometimes, apply the TDD approach?
My opinion on this is to use a mix of all the necessary things. Write unit tests for the code written during the workday. Write unit tests for poorly specified parts before coding, hence use TDD. Use the help of testers, or just teammates, to check new (and existing) software artifacts. Deploy often, fix quickly. Listen to your customers, adapt, be agile.
I think the only good reason for not using TDD is long-running integration tests. Other than that, I really like programming against the tests - it makes programming really pleasant (especially debugging and refactoring). I personally hate writing tests, but I know I can't live without them, so if I can use the tests for faster development as well, I'll do that.
Usually I spend 1-2 hours figuring out how to write the first test for the feature I'm trying to develop (what to mock and how, how the API will look, etc...). But after that it's just pleasure :D. Just copy/pasting tests with small modifications and writing/refactoring code without any fear. I usually gain really good coverage without even focusing on that.
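For illustration, here is what such a "first test" might look like with JUnit 5 and Mockito; `ReportService` and `SalesClient` are invented names, the point being that the first test forces the early decisions about what to mock and what shape the API takes, and later tests are mostly small variations of it.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical first test for a "weekly report" feature. The names are
// invented; the first test pins down what to mock and how the API looks.
class ReportServiceTest {

    interface SalesClient {
        double fetchLastWeekTotal();
    }

    static class ReportService {
        private final SalesClient salesClient;

        ReportService(SalesClient salesClient) {
            this.salesClient = salesClient;
        }

        String weeklySummary() {
            return "Total sales last week: " + salesClient.fetchLastWeekTotal();
        }
    }

    @Test
    void summarisesLastWeeksSales() {
        SalesClient salesClient = mock(SalesClient.class);
        when(salesClient.fetchLastWeekTotal()).thenReturn(1250.0);

        assertEquals("Total sales last week: 1250.0",
                new ReportService(salesClient).weeklySummary());
    }

    // Later tests tend to be copies of this one with small modifications:
    // a different stubbed total, zero sales, a failing client, and so on.
}
```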
All this means you don't use true TDD - you're just writing tests. Well, this is really useful, unlike true TDD, which is the impractical religion.
That's true, what I am doing is not TDD by definition, but writing code against tests as much as I can. That's where I found most value. :)
I think it is unfortunate that it is called TDD as the name makes people think it is about testing, it is not.
TDD is about shaving off a small part of the bigger problem so that it can be clearly articulated in code. Testing is a by-product, code coverage is a by-product, covering the backside of the developer is a by-product. It is just another tool in our toolbox, but to think of it as being about unit testing is missing a bigger, more important thing, and I believe that is why people view it as something they don't have to do.
I am not from the school of thought that says you should do TDD for absolutely everything, but you probably need to do it more than the average developer thinks. We do need the fine-grained thinking that enables us to write the tests in the first place, and that is actually harder than writing the tests, so if you get to that point you may as well write down the test quickly and then validate your thinking. Writing the code first and the test after is OK, but most people will write wrong code that does compile, then write a test that validates their wrong code.
The long and short of it, from my understanding, is that TDD should probably be renamed to stop people thinking the main thing is unit tests, because those are just a by-product; the elemental thinking, breaking down the problem, and encoding that in a test is the main thing. I'll stop here because I don't like writing that much, but thought I'd add my 2 pence here.
Thanks for your thoughts. I've heard similar claims before but I'm yet to see the point of TDD without the testing aspect. Dividing a problem into smaller problems is at the core of many tools and methodologies, I don't see how TDD is different here.
Sure when we rely on the compiler alone we can inadvertently write bad code to satisfy the compiler (and linter). But a bad test can produce an even worse result and it's often the case that proper tests are harder to write than an implementation.
Nice article! I'd however like to share one thought. TDD is about testing behavior, not implementation. Besides, integration tests serve the purpose of checking that an external dependency of your system behaves as expected. Therefore, if one could have control over the behavior of a system dependency, one could write a test suite for it before writing any code. In cases where existing dependencies are implemented by other teams within our organization (a pretty common case in my experience), we could use Consumer-Driven Contract (i.e., CDC) testing; we could then follow TDD and enjoy all its benefits.
As a final note, I find hexagonal architecture a great complement to TDD. It lets you properly decouple the domain from dependency logic and thus focus first on developing your domain model, leaving the reasoning about how to integrate with an external dependency for later.
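A rough sketch of why that combination works, with all names (`CustomerDirectory`, `GreetingService`) invented for the example: the domain depends only on a port, so it can be test-driven against an in-memory fake long before the real adapter (HTTP client, database, another team's service) exists.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

// Hypothetical hexagonal-architecture sketch: the domain depends only on a
// port (interface), so it can be test-driven with an in-memory fake before
// the real adapter exists.
class GreetingServiceTest {

    // Port owned by the domain.
    interface CustomerDirectory {
        List<String> customerNames();
    }

    // Domain logic under test.
    static class GreetingService {
        private final CustomerDirectory directory;

        GreetingService(CustomerDirectory directory) {
            this.directory = directory;
        }

        List<String> greetings() {
            List<String> result = new ArrayList<>();
            for (String name : directory.customerNames()) {
                result.add("Hello, " + name + "!");
            }
            return result;
        }
    }

    @Test
    void greetsEveryCustomerInTheDirectory() {
        // In-memory fake standing in for the real adapter.
        CustomerDirectory fake = () -> List.of("Ada", "Grace");

        assertEquals(List.of("Hello, Ada!", "Hello, Grace!"),
                new GreetingService(fake).greetings());
    }
}
```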
4 points of thought:
What about the feedback time of E2E tests, and their poor reliability? (flaky tests)
blog.octo.com/en/the-test-pyramid-...