Okay, maybe you think you know what TDD is but let me tell you a story.
Some years ago there was this young developer who thought that, even when...
In my experience unit tests have added little to no value. The company I work for has decoupled their code to the point of insanity. We've wound up with a 65-project C# nightmare, all in the name of TDD. There are two problems. The first is that you start relying solely on these tests and you never take the time to make sure you did your damn job correctly. The second is that you stop writing good code and you write shit code that's easy to test. 99.99% of the time the tests you write don't tell you if your code will actually work as expected when someone or some process is trying to use it.

As a developer I know I've done my job right when I don't need any tests to make assertions about the code I've written. If you really want to know if your code works, USE IT. Use whatever you've made the way it's intended to be used and you'll figure out a lot more about it than if you just write some useless bloody tests. If you really want to optimize your code and make it as good as it can be, stop writing tests, use some common sense, gather metrics, and use it.
Your company is not using TDD; you can share this post with your coworkers.
Sounds like your company is using unit testing as the holy grail when it's only one tool of TDD, and it always has to be accompanied by integration tests (as I mention in the post).
Unit Test -> prevents coupling and ensures contracts between classes.
Integration Test -> ensures that your code makes sense.
Of course, if the majority of your unit tests are "ClassA.dog() calls ClassB.sheep() with X when it receives X", then you may have a cohesion problem.
I agree that the types of unit tests he is promoting are crazy, but automated testing (mostly via TDD) can be a great way to introduce a really rapid and nice feedback loop. You do need to be careful not to make the code worse just for it to be testable. As you say, you should actually use the code, and good tests will do just that. The reason I prefer this over manual testing is that it helps me find and root out bugs much faster. It also provides a better debugging environment, and I feel it creates a better flow when I can stay in one environment for a long time while writing code and running tests, rather than switching between that and slowly manually testing the application/website.

Manual testing also leaves me less brave about aggressive refactoring. I won't say you cannot refactor without tests, like other people often say, because I have done it successfully over many years, but having a nice test suite underneath does allow even more refactoring to be done more quickly.
The hardest part about making these benefits really work for you is in how you write the tests. You need to create the right types of compressions (I like to use that word instead of abstraction) to create testing DSLs that allow you to make your tests very readable, remove excess code, and remove all duplication from the tests. The last thing you want is for the test code to feel bloated and like it is blowing up your code base and making refactoring harder instead of easier.
You want to remove as much friction from writing and running tests as possible. You want your entire test suite to run very fast so you have no qualms about running it very frequently. Most people place their tests off to the side, in an entirely separate directory tree with duplicated names. This blows up the number of directories, increases refactoring maintenance, and increases the friction of writing and running the test code and the code together. So instead I prefer to write the test code in a file right alongside the code being tested, in the same directory, with almost the same name as the file being tested. This greatly reduces the friction of writing and finding test code, and makes other programmers less likely to forget about the tests when modifying that same code.
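As an illustration (file and directory names invented), the co-located layout replaces a parallel test tree:

```
# Separate test tree: every refactor touches two hierarchies
src/billing/invoice.js
test/billing/invoice.test.js

# Co-located: the test sits right next to the code it covers
src/billing/invoice.js
src/billing/invoice.test.js
```

Most test runners can be configured to pick up `*.test.js` files wherever they live, so the co-located layout usually needs no extra tooling.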
One problem you'll run into in the long run, having test suites that your code relies on, is that when you want to do aggressive refactoring you'll also have to refactor most or all of your tests. Now instead of refactoring a single code base, you're refactoring two, taking twice as long. When you have all these tests, even if they're good tests, when it comes time to change (and in technology things change fast) you have to change your tests as well, adding time and complexity. The real crux of the issue is that tests will add SOME value to whatever you're doing, but it's going to cost more than it's worth.
I used to believe exactly what you just said. I tried TDD and other forms of automated testing on and off for several years, and kept coming back to what you just expressed. It is simultaneously true that tests both add and subtract value in the ways you described. So the challenge is to write your code and tests in such a way that the tests add more value than they take away. In order to achieve that I had to combine the concepts of test-driven development, type-driven development, contract-driven development, and exception-driven development into a single thing. I also had to invent a type of test that tracks and enforces 100% code coverage without holding the entire code base to that standard: it is enforced per test/class under test, in order to facilitate rapid feedback.
One more thing to add. Many proponents of TDD will say the main value it adds is actually not in the fact that the tests test the correctness of the code, but I suspect someone like yourself will quickly realize how ridiculous that is. We want our code to be correct, and if the tests help us to make the code work correctly, surely the added value must outweigh any other advantage they bring, since not having tests surely does not prevent you from designing your code well or writing good code. If dragging around a code base of tests can be justified, surely it must lie somewhere in that they help you test the code and work out bugs. So if you can find a way to make the tests add more value than they subtract, your code will be less buggy as well. I mean, it sounds like refactoring is something you like to do, and yes you can technically refactor more quickly without tests, but of course you can refactor quickly if the code doesn't actually need to work.
First I want to clarify that I do strongly believe in using automated testing and feel TDD is a great tool.
A programming article with no examples is perfectly fine, but if you are going to do that you should go a little deeper philosophically and question your own reasoning more. The idea that coupling is bad makes no logical sense; it is exactly the same as saying "using code is bad", which is the name of an article I wrote that explains this (also with no examples).
As evidence that I am correct, the subsequent behavior you promote in an attempt to remove coupling is crazy. You say unit tests should test which methods on other objects the class under test calls, but that is not testing the contract at all, it is testing and duplicating the implementation.
The contract is the expected behavior of using the interface the class provides. If the expected behavior of an object is that it calls certain methods a certain way on other objects, then all you have done is implement your own harder to use programming language within the language you are using. But that kind of behavior is only the natural consequence of trying to avoid coupling.
Coupling is bad because it increases the amount of code impacted with any minor change, therefore increasing the cost of maintenance.
I also said that you should test how they make these calls, and I explained how that helps reduce coupling. I don't know how you write your tests, but if you need to duplicate the implementation, I suggest a change.
Once again you are not fully thinking through the words and concepts you are using. All code is coupling. The entire point of code is coupling. If you remove all coupling, you no longer have a system, just a bag of objects. You have it backwards. I can repeat your first sentence as the exact opposite, and it makes more sense. Coupling is good because it increases the amount of code impacted with any minor change, thereby decreasing the cost of maintenance. Good classes achieve high conceptual compression, not abstraction.
So, you are talking about the coupling vs cohesion thing.
TDD can help with that too, if you listen to the tests, of course. As with coupling, where hard-to-write tests are the signal, you can detect a lack of cohesion when you realize that most of your tests just check that X calls Y, passing along exactly the input it was given.
Well, I guess we will have to agree to disagree. I like that you brought cohesion into the picture, as it is one part of getting compression right. I also like that you have been talking about listening to your tests. It is important to listen to your tests, and your code in general, as it can talk and give feedback to those who know how to listen. Ultimately though I feel that coupling is a good thing, and I'm not sure how writing the types of unit tests you describe would help me find the unwanted types of coupling. To me the only coupling I don't want is random coupling, which I would automatically avoid just by not using random extra things in the code, by having a very general aesthetic sense of what the responsibilities of a class should and should not be, and by refactoring to simplify things (although that simplification is often accomplished by either introducing some new coupling or by replacing some existing coupling with more desirable coupling).
I don't like those types of tests, since the kind you are talking about is tied to the implementation.
You say they're good for putting clear contracts between classes, but I don't see the purpose of that, even though I think having clear interfaces is important.
Instead I prefer to test how each function answers with different parameters and edge cases.
Thanks for sharing and making me think about testing.
In my opinion, this is a bad idea. A test should verify that a given function does what it was meant to do. Most of the time, the details of its implementation should not be part of the test. The approach you describe will make it easy to create passing tests even though the actual application logic doesn't work properly. It's also brittle: If you change the implementation of a function, the tests are likely to break, even if the contract that function fulfills remains unchanged.
There are cases where this kind of testing, using mocks, is appropriate. If your function interacts with an external system, then there is a decent chance that you should mock that out for testing purposes. For instance, a test may say "please make sure that the `send_text_message` function was called with the following parameters" -- but without actually running `send_text_message`. This is done when the external system may be unavailable entirely, or produces results that vary over time, or would slow down the test suite too much.

Overall though, I recommend that a test should call a given function, and then confirm that the return value of that function is what was expected, or at least that some state change resulting from that function call is what it should be.
But you could just test your DAOs/repositories with, say, an in-memory DB instance in isolation. And nothing prevents you from using mocks with other application-level tests.
I would still be worried that the in-memory database does not behave the same, which would hide many bugs that would otherwise be caught by using the real database engine.
Tight coupling results from not using pure interfaces for everything. All parameters and return values should be pure interface types; that way, all dependencies can be passed in as Fakes or Stubs. Resist the temptation to call constructors inside methods.
That prevents coupling on an implementation level but not at a logical level. With TDD you can prevent both.
I prefer to just think of tests as tests. I also don't often mock classes.
I write tests first to prove what I'm about to code is correct (or occasionally that my mental model in testing doesn't match reality, or the class API sucks).
Typically software is built in layers, so for higher levels, I can write tests assuming the lower levels are tested (because they have tests) and work (because the code runs and does something sensible also).
Not usually important to mock because that's a lot of work that provides little benefit and slows progress. I do mock if it's easy to do (abstract base classes) or very important to hide some super complex/fragile system.
I try to balance testing against forward progress, rather than testing everything, especially since half the code gets rewritten so much that I'd have to rewrite half the tests. "Just enough tests" is therefore actually better than "everything is tested", which is better than "not enough or no tests".
Test behavior, not implementation, is my motto. I like to cleanly separate the external resources and test them with integration tests that are reused as a shared specification for implementing an in-memory representation. With that done, I don't have to use any mocks/stubs anymore, and my tests only aim to do behavior testing through composition.
Of course this requires a small amount of self-discipline, but so far (over the past years) I have had only good experiences with it.
The only "con" is that each time an external resource needs to be added to the system, there is an extra step: creating an in-memory implementation as well, verified by the shared specs.
But I'm a simple man. I see post about TDD, I upvote :)
Hi Adrián,
thanks for your post.
I'm always amazed by how emotional these TDD discussions get. It reminds me very much of the current "replace master with main" discussion. There are stubborn opinions for and against it.
I don't like hardliner mindsets in general. There are people chasing 100% code coverage, which leads to silly tests. But there are the "don't write any tests at all" people as well. I think tests are super valuable, but you have to learn to write useful tests.
At work, we try to write tests for all the important parts of our (pretty big) application and it works out pretty well for us.
Personally, TDD means your code is testable. It's designed so that it's proven to be testable at a smaller level. The tests also communicate how it works. It also speeds up development: develop once and park it. I do refactor, but it can be an obsession; visually I want to keep it simple (KISS). I hate var in code reviews as I don't want the cognitive overload of working out what the var is, especially when there is so much to scan. Sure, there is the issue that some people don't get anything out of tests; then don't do testing. For me it's also pain reduction. The next question is about mocking and fakes: what is testable, and is the fake honest? Next, are the tests giving me a false sense of security? Finally, you are delivering a product at a price point; the tests are not the delivery, your code is (YAGNI).
I think it would be easier to follow if you demonstrated this on an example that you develop step by step in your article.
Good idea! I will do a followup of this one with that when I have time. Thanks :D
What's the difference between TDD and unit tests?
TDD is a way to develop using tests, while unit tests are tests that aim to check and verify the smallest units of your logic.
Unit testing is part of TDD, but only a part; you also need integration tests.
Your english is great! Any chance you could think of an example of where you found hidden coupling? Also, what purpose are your integration tests serving?
Thanks! I'm always worried about not making sense because English is not my native language.
Usually, I find hidden coupling at the early stages of developing providers (in the sense of wrappers around MySQL, RabbitMQ, or another microservice), an architecture meant to hide that accidental complexity. Like the other day, I was connecting a microservice to a legacy one that had logical coupling (it wasn't so micro), and the provider class was reflecting that coupling.
For me, Integration Tests are a tool that ensures the software being developed does what it says. The "software being developed" part is important: anything outside it should be mocked. I usually code APIs (Postman is all the frontend I need :P), so my Integration Tests are usually a bunch of HTTP calls with assertions on the replies.
I use NodeJs most of the time and use Nock and Supertest