The TDD Test
I have had some experience with co-workers who were doing TDD incorrectly.
The benefit of TDD is that it puts pressure on...
So I'm intrigued by this - please can you expand on the behaviour changes this additional step encourages (or what was being done 'incorrectly')?
I would also like to know whether you think TDD tests are always internal to a service/feature/component, and thus disposable things (a la James C), written for the benefit of the local team only and separate from, say, contract tests that usefully provide confidence in exposed behaviour for all consumers over long periods of time. I ask since we usually start by collecting contract tests from consumers (other teams), and the local team works to make those pass, using whatever internal workflow / techniques they are comfortable with.
One way to do TDD incorrectly (such that it isn't TDD) is to write the unit tests in arrears. That misses out on the entire point of TDD.
I think unit tests are necessarily internal to a service/feature/component -- if they weren't they wouldn't be unit tests, they'd be integration tests.
They fill the gap in languages that do not have contract support.
One of the quotes I like is "untested code is buggy code". Unit tests can exercise every code path, such that code can be known to fulfill basic correctness.
It may not work together -- something that integration tests are useful for -- but at least it has basic correctness. Integration tests do not have the ability to exercise every code path (as pointed out by Rainsberger in his Integration Tests are a Scam presentation).
That's where TDD can help provide a suite of unit tests as a residual value. And that suite of unit tests will also have good code coverage, because the unit tests are written a priori, one by one, in a tight cycle with the implementation.
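To make that concrete, here is a minimal sketch of the tight cycle in Python. The `clamp` function is purely illustrative (not from the thread): each test was written first, watched fail, and then satisfied by the next branch of the implementation, so full branch coverage falls out as a byproduct.

```python
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if value < low:      # branch driven out by test_below_range
        return low
    if value > high:     # branch driven out by test_above_range
        return high
    return value         # branch driven out by test_within_range

# Each test existed (and failed) before the branch that satisfies it.
def test_within_range():
    assert clamp(5, 0, 10) == 5

def test_below_range():
    assert clamp(-3, 0, 10) == 0

def test_above_range():
    assert clamp(42, 0, 10) == 10

if __name__ == "__main__":
    test_within_range()
    test_below_range()
    test_above_range()
    print("all unit tests pass")
```

Run one test, write just enough code to pass it, then move to the next; no branch exists without a test that demanded it.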
Contract tests are great. Acceptance tests. Integration tests. Security tests. Performance tests. All of them have their place. None of them are substitutes for unit tests, none of them give assurance of basic correctness. And unlike those other tests, with TDD the developers write the unit tests not for the sake of unit testing, but rather for the sake of design and development.
All those other kinds of tests ought to be created by quality engineers. But TDD style unit tests: those are created by the developers.
Another problem with TDD is the language and environment used. C++ is not a great language to use for TDD, when every other step involves a lengthy build cycle. On the other hand, C#, Visual Studio, NUnit or xUnit.net, and NCrunch is an amazing combination -- one which makes TDD actually fun. (Seriously. FUN! Not kidding.)
Thanks for the detailed reply :)
If I understand you (I can be a bit slow sometimes!), this approach takes external (contract, and quality) requirements (and hopefully tests from independent QA folks), decomposes them into TDD unit tests to ensure the internals are designed, implemented and refactored correctly, piece by piece to meet each larger contract requirement? These internal unit tests are now disposable, having provided most of their value, leaving the external tests to provide confidence that future changes are not breaking contracts?
If so, then sure, I think this is a good way of working without accumulating excessive amounts of redundant internal unit tests, although I may agree with James C that it's /passing/ tests that provide no value in the longer term, and their removal is primarily to limit the test maintenance pain... YMMV :)
TDD unit tests are at the level of quarks, electrons, and atoms.
Integration tests, system tests, and acceptance tests are at the level of bricks, mortar, girders, electrical, and plumbing. The external contract and quality (the requirements and specifications) are at this level; these kinds of tests reflect those things.
Unit tests are run in debug mode. For a large application, the entire suite of unit tests should take a few seconds or less to run.
Integration tests, system tests, and acceptance tests are run against optimized release code. And can take hours to run... or longer.
Unit tests ensure basic correctness. (Unit tests are a solution for languages that do not provide contracts. In this sense contracts are the preconditions, postconditions, and invariants of the methods and classes. That's entirely different from contracts at the scope of system requirements, external contracts, and quality standards.)
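A quick sketch of what "contracts" mean here, in Python. Lacking language-level contract support, plain assertions at the method boundary stand in for the precondition and postcondition; `sqrt_newton` and its tolerance are illustrative, not from the thread.

```python
def sqrt_newton(x, tolerance=1e-10):
    """Square root via Newton's method, with contracts as assertions."""
    # Precondition: the caller must supply a non-negative input.
    assert x >= 0, "precondition: x must be non-negative"

    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2

    # Postcondition: the result squared is within tolerance of x.
    assert abs(guess * guess - x) <= tolerance, "postcondition violated"
    return guess
```

With real contract support these checks would live in the method's declaration; without it, either assertions like these or a unit test suite has to carry that load.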
TDD is not about testing; TDD is about design. TDD unit tests are a kind of scaffolding. The value of the unit test is how it helps guide development. The residual value of unit tests is that they provide confidence in basic correctness. However, the primary value of TDD unit tests is in the process (the steps I cited above), which means that if the test is deleted it still has provided enormous value.
Much like scaffolding when building a building: once the building is built, the scaffolding can come down. Coplien considers a large suite of unit tests -- which have to be maintained -- as muda (waste, in the Lean sense). He considers eliminating that waste as a good and prudent thing to do.
For my projects, I do not consider the suite of unit tests (which have to be maintained) to be muda. However, I would much rather have proper language support for contracts which would obviate the need for unit tests to ensure basic correctness, and with C++20 there is hope.
The ironic thing is that "a suite of unit tests to ensure basic correctness" is a byproduct of TDD. The point of TDD is design, which would be neither facilitated nor hindered by contract support in the language -- but contracts would eliminate the need for unit tests, which would probably make some developers who do TDD stop doing TDD. Especially those developers who do TDD incorrectly, as a means to create a unit test suite rather than as a means to facilitate better design.
"Unit tests are a solution for languages that do not provide contracts. In this sense contracts are the preconditions, postconditions, and invariants of the methods and classes. That's entirely different from contracts at the scope of system requirements, external contracts, and quality standards."
OK, I get this - and I found this presentation on contracts within the language for C++20: cppeurope.com/wp-content/uploads/2...
I contend, however, that these internal correctness contracts /must/ be derived from the external contracts defining expected behaviour, or they are useless in proving that the right thing is being constructed. Indeed, Mr Garcia in the linked paper says as much on slide 20: "Correctness -> Degree to which a software component matches its specification." Scaffolding that helps build a bungalow when a block of flats was required is not very helpful :)
Unit tests cannot fulfill the role of acceptance tests.
To further elaborate...
Unit tests are useless in proving that the right thing is being constructed. They only demonstrate basic correctness.
Unit tests won't show whether a bungalow is correct, or a block of flats is correct.
They'll show if the nail is correct. The nail does not care if it is used in a bungalow, or a block of flats.
If you've got the luxury of having quality engineers who can write unit tests, integration tests, and acceptance tests (and who have the time to do so), you can throw away TDD unit tests easily.
I think projects that work this way are in the minority.
Quality engineers should not be writing unit tests. If the developers do TDD, the developers should be writing unit tests.
Quality engineers should be writing integration tests and acceptance tests.
Sorry I misread your comment.
So you mean "delete the TDD test but make sure to write an integration test that covers this code"
Or maybe delete the TDD unit test the moment it gets in the way of refactoring the code, because the integration tests have you covered. Until then, the TDD unit test serves as a regression test and (hopefully) a kind of documentation.
No, I mean "delete the unit test, because it has served its purpose, and now it is waste" as a way to help educate the developers not to write their unit tests in arrears.
Unit tests written in arrears — if written at all — have poor coverage characteristics, and do not guide design (hence are not TDD).
Integration tests are not a substitute for unit tests.
Does TDD actually work in practice? I've never found it to be applicable in my day to day dev work.
It can. It's a technique, and a foundation upon which many other agile engineering practices are built.
Here's where I've found that it works:
In my experience, the developer team was initially forced to do TDD by management. The platform we used was Visual Studio, C#, NUnit, and NCrunch. There was an entirely separate integration test system. The unit tests ran in a few seconds, and thanks to the magic of NCrunch had practically immediate feedback as you were typing in Visual Studio. After the development team acclimated, all the developers on the team were really enamored with TDD, unit tests, and the tools we were using. My anecdotal assessment is that unit testing avoided 98% of the bugs that we had experienced on the first generation system (which was not TDD and had no unit tests) -- that was the value of having a unit test suite (~70% coverage) to ensure basic correctness.
Where TDD is difficult to do is if the development team is recalcitrant, the tools being used are inimical to TDD, or the unit tests system becomes polluted with non-unit tests.
For example, in my current project the development team has almost no interest in unit tests (let alone TDD), C++ is not a TDD nor unit test friendly language, and for the small number of ostensible "unit tests" we do have over half of them are actually integration tests and take far too long to run as unit tests.
Where I think TDD is even more critical is on larger systems that use a duck-typed language. When I worked on a large TypeScript-based project, I lamented that we did not have a unit test suite. (Whether we would have created that unit test suite using the TDD technique is orthogonal.)
I was working in a python environment and we had lots of unit tests but worked in a more TLD style (test last development). Most of our tasks were open enough that TDD would mean doing all the design up front rather than refactoring as we went along.
My question was more about how writing the test first is meant to work in practice? Do you know what your units are meant to be? Do requirements have to be complete and explicit?
It's meant to work in practice by putting pressure on the developers to use most of the principles of SOLID (especially LSP, ISP, and DIP), Law of Demeter (related to DIP), Design by Contract (related to LSP), YAGNI, KISS, GRASP, SOC (related to SRP), DRY code and WET tests.
The "how to" steps are the steps I cited above in the posting. It's a very fast cycle.
The unit test is written moments before the code that implements the functionality.
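In Python, one beat of that red/green rhythm might look like this. The `Stack` class is a hypothetical example (not from the thread); the comments record the order things were written in.

```python
# Written FIRST: this test failed with a NameError, because Stack
# did not exist yet. That failure is the "red" step.
def test_pop_returns_last_pushed_item():
    s = Stack()
    s.push("a")
    s.push("b")
    assert s.pop() == "b"

# Written SECOND, moments later: the minimum implementation that
# turns the test green.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

if __name__ == "__main__":
    test_pop_returns_last_pushed_item()
    print("green")
```

The gap between writing the test and writing the code is seconds to minutes, not hours; that is what distinguishes TDD from writing tests in arrears.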
If the developer does not know what is being implemented, then how would the developer know what to implement (regardless of TDD or unit tests)?
For the unit tests, no. Unit tests are for testing units. With TDD, unit tests are a means to guide design. (q.v. Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce.)
Requirements are at the level of acceptance tests, and perhaps integration tests and system tests.
Unit tests are written by developers, to aid development. Acceptance tests (and integration tests, and system tests) are written by quality engineers to ensure what the developers created meets the requirements.
Think of it this way: you have a bolt and a nut. You can create all sorts of tests to make sure the bolt works correctly, and all sorts of tests to make sure the nut works correctly. Those are unit tests. As soon as you write tests for the bolt-and-nut, that is no longer a unit, now it is an assembly, and these are not unit tests -- they are assembly tests (which is a kind of integration test).
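Sketching that distinction in Python (the `Bolt` and `Nut` classes are made up for illustration), the dividing line is how many units a single test touches:

```python
class Bolt:
    def __init__(self, diameter_mm):
        self.diameter_mm = diameter_mm

class Nut:
    def __init__(self, diameter_mm):
        self.diameter_mm = diameter_mm

    def fits(self, bolt):
        return self.diameter_mm == bolt.diameter_mm

# Unit tests: each exercises ONE unit in isolation.
def test_bolt_diameter():
    assert Bolt(6).diameter_mm == 6

def test_nut_diameter():
    assert Nut(6).diameter_mm == 6

# Assembly test: exercises bolt AND nut together.
# This is no longer a unit test -- it's a (small) integration test.
def test_nut_fits_matching_bolt():
    assert Nut(6).fits(Bolt(6))
```

The first two tests stay valid no matter what the bolt and nut are used in; the third one is testing the assembly, which is a different kind of confidence.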
Since you are using TLD and Python, you could probably dispense with unit testing altogether and use something like the Python Contracts module by Rob King / deadpixi. That one supports require, ensure, invariant, types, and transform. It also supports async functions (aka coroutines).
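To show the idea (this is a hand-rolled sketch of require/ensure decorators, NOT that library's actual API), contracts in Python can be expressed as decorators that check the arguments before the call and the result after it:

```python
import functools

def require(description, predicate):
    """Check a precondition on the arguments before the call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            assert predicate(*args, **kwargs), f"precondition failed: {description}"
            return func(*args, **kwargs)
        return wrapper
    return decorator

def ensure(description, predicate):
    """Check a postcondition on the result after the call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            assert predicate(result), f"postcondition failed: {description}"
            return result
        return wrapper
    return decorator

@require("divisor must be non-zero", lambda a, b: b != 0)
@ensure("result is a float", lambda r: isinstance(r, float))
def divide(a, b):
    return a / b
```

Calling `divide(1, 0)` then fails at the precondition, before the body ever runs, which is exactly the role the unit tests would otherwise be playing.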
How do you refactor code safely with a low test coverage?
Having detailed and valid specs and technical documentation represents only one per cent of the advantages to you?
I refactor code safely with high unit test coverage, and integration tests.
I think detailed and valid specs, and technical documentation are great. Unit tests are not those things. Unit tests fulfill the role of contracts for languages that do not provide contracts as a core language feature. Unfortunately, that's almost all the mainstream languages. My programming language is C++, and with luck C++20 will have bona fide contracts. Other programming languages with contract support are Eiffel, D, Ada 2012, Spec#.
The value proposition of TDD is the impact it has on design. That represents 99% of the value. The residual value -- the other 1% -- has value too for catching regressions against basic correctness. But that value is orders of magnitude lower.
Why delete the test? Sorry, I didn't get the end of the article.
To drive home the importance of writing the unit test first, because that is what is valuable for TDD.
Rather than after the fact, because that has far less value (and isn't TDD).