
You don't know TDD

Adrián Norte on March 17, 2019

Okay, maybe you think you know what TDD is but let me tell you a story. Some years ago there was this young developer who thought that, even when...

shadow1349

In my experience unit tests have added little to no value. The company I work for has decoupled their code to the point of insanity. We've wound up with a 65-project C# nightmare, all in the name of TDD. There are two problems. The first is that you start to rely solely on these tests and never take the time to make sure you did your damn job correctly. The second is that you stop writing good code and start writing shit code that's easy to test. 99.99% of the time the tests you write don't tell you if your code will actually work as expected when someone or some process tries to use it.

As a developer, I know I've done my job right when I don't need any tests to make assertions about the code I've written. If you really want to know if your code works, USE IT. Use whatever you've made the way it's intended to be used and you'll figure out a lot more about it than if you just write some useless bloody tests. If you really want to optimize your code and make it as good as it can be: stop writing tests, use some common sense, gather metrics, and use it.

Adrián Norte

Your company is not using TDD, and you can share this post with your coworkers.

It sounds like your company is treating unit testing as the holy grail when it's only one tool of TDD, and it always has to be accompanied by integration tests (as I mention in the post).

Unit tests -> prevent coupling and ensure contracts between classes.
Integration tests -> ensure that your code makes sense.

Of course, if the majority of your unit tests look like "ClassA.dog() calls ClassB.sheep() with X when it receives X", then you may have a cohesion problem.
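To make that concrete, here is a contrived sketch in TypeScript with Jest (OrderService and PriceCalculator are invented purely for illustration): the first test merely restates how the code happens to be written, while the second states the contract callers actually rely on.

interface PriceCalculator {
  total(items: number[]): number;
}

class OrderService {
  constructor(private calc: PriceCalculator) {}
  checkout(items: number[]): number {
    return this.calc.total(items);
  }
}

// Implementation-coupled: only restates how checkout happens to be written.
test('checkout calls total with the items', () => {
  const calc = { total: jest.fn().mockReturnValue(6) };
  new OrderService(calc).checkout([1, 2, 3]);
  expect(calc.total).toHaveBeenCalledWith([1, 2, 3]);
});

// Behavior-focused: states what callers can rely on.
test('checkout returns the order total', () => {
  const calc = { total: (items: number[]) => items.reduce((a, b) => a + b, 0) };
  expect(new OrderService(calc).checkout([1, 2, 3])).toBe(6);
});

If most of a suite looks like the first test, that is the cohesion smell I mean.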

still-dreaming-1

I agree that the types of unit tests he is promoting are crazy, but automated testing (mostly via TDD) can be a great way to introduce a really rapid, pleasant feedback loop. You do need to be careful not to make the code worse just to make it testable. As you say, you should actually use the code, and good tests will do just that. The reason I prefer this over manual testing alone is that it helps me find and root out bugs much faster. It provides a better debugging environment. I also feel it creates a better flow when I can stay in one environment for a long stretch, writing code and running tests, rather than switching between that and slowly testing the application/website by hand. Manual testing also leaves me less brave about aggressive refactoring. I won't say you cannot refactor without tests, as other people often do, because I have done it successfully over many years, but having a nice test suite underneath does allow even more refactoring to be done more quickly.

The hardest part about making these benefits really work for you is in how you write the tests. You need to create the right types of compressions (I like to use that word instead of abstraction) to create testing DSLs that allow you to make your tests very readable, remove excess code, and remove all duplication from the tests. The last thing you want is for the test code to feel bloated and like it is blowing up your code base and making refactoring harder instead of easier.

You want to remove as much friction from writing and running tests as possible. You want your entire test suite to run very fast so you have no qualms about running it frequently. Most people place their tests off to the side, in an entirely separate, duplicate directory tree. This blows up the number of directories, increases refactoring maintenance, and increases the friction of working on the test code and the code together. So instead I prefer to put the test code in a file right alongside the code being tested, in the same directory, with almost the same name as the file being tested. This greatly reduces the friction of writing and finding test code, and makes other programmers less likely to forget about the tests when modifying that same code.
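As a sketch of that layout (assuming Jest; its default test discovery already picks up colocated *.test files, and the config below just makes the convention explicit):

src/
  invoice.ts
  invoice.test.ts   <- sits beside the code it tests, same name plus .test

// jest.config.js -- minimal sketch, assuming ts-jest for TypeScript sources
module.exports = {
  preset: 'ts-jest',
  testMatch: ['**/*.test.ts'],
};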

shadow1349

One problem you'll run into in the long run with test suites your code relies on is that when you want to do aggressive refactoring, you'll also have to refactor most or all of your tests. Now instead of refactoring a single code base, you're refactoring two, which takes twice as long. Even when you have good tests, when it comes time to change (and technology changes fast) you have to change your tests as well, adding time and complexity. The real crux of the issue is that tests will add SOME value to whatever you're doing, but it's going to cost more than it's worth.

still-dreaming-1

I used to believe exactly what you just said. I tried TDD and other forms of automated testing on and off for several years, and kept coming back to what you just expressed. It is simultaneously true that tests both add and subtract value in the ways you described. So the challenge is to write your code and tests in such a way that the tests add more value than they take away. To achieve that I had to combine the concepts of test-driven, type-driven, contract-driven, and exception-driven development into a single thing. I also had to invent a type of test that tracks 100% code coverage without holding the entire code base to that standard; it is enforced per test/class under test, in order to facilitate rapid feedback.

still-dreaming-1

One more thing to add. Many proponents of TDD will say the main value it adds is actually not that the tests check the correctness of the code, but I suspect someone like yourself will quickly realize how ridiculous that is. We want our code to be correct, and if the tests help us make the code work correctly, surely that added value must outweigh any other advantage they bring, since not having tests certainly does not prevent you from designing your code well or writing good code. If dragging around a code base of tests can be justified, the justification must lie in the fact that they help you test the code and work out bugs. So if you can find a way to make the tests add more value than they subtract, your code will be less buggy as well. I mean, it sounds like refactoring is something you like to do, and yes, you can technically refactor more quickly without tests, but of course you can refactor quickly if the code doesn't actually need to work.

shadow1349

Whenever I can, I like to refactor code, provided it will make the code better or add some value. One thing I've gotten into lately is TypeScript and NodeJS (coming from a C#/.NET and Java background). One tool I've grown incredibly fond of is linting (I know it's available in more languages than just Node/TypeScript, but it's predominantly used with Node). It lets me define a set of rules and standards for code style and quality that are enforced with a single tool. It is especially useful when deploying via some CI/CD tool, like CircleCI, because I can lint the whole project and, if it fails, it doesn't deploy.
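As an example, a minimal ESLint config sketch (the three rules are standard ESLint core rules, picked only for illustration, not necessarily what my team enforces):

// .eslintrc.js -- one tool, one set of project-wide rules
module.exports = {
  extends: 'eslint:recommended',
  rules: {
    eqeqeq: 'error',         // ban loose == comparisons
    'no-var': 'error',       // require let/const
    'prefer-const': 'error', // flag variables that are never reassigned
  },
};

Running npx eslint . as a CI step then fails the build on any violation, which gives you the deploy gate I described.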

For me, this is so much better than writing tests because it covers the whole code base rather than single classes/functions, it catches most errors, it enforces code style and quality, and it can easily be copied to another project.

I acknowledge that this won't catch 100% of errors, because incorrect code can still meet those standards and get built, and I also know that these things can sometimes slip through PR reviews.

However, if I have set up sufficient monitoring for my code (such as Sentry or Grafana, or both), I'll gather the metrics I need and be able to react accurately and quickly to any issues by running the code in a development/staging environment before pushing it to production. Most products should have multiple environments (development, staging, production) anyway, so I make sure they are fully utilized.

However, one kind of test I can get behind, at least for APIs, is Postman tests. The great thing about Postman tests is that you can refactor the code all you want; as long as you're returning the same data and HTTP status code, your Postman tests will pass and accurately tell you whether your API performed as expected. Not to mention they work the same way no matter what language your API is written in. So these tests add value without taking a ton of time to refactor when your code changes, and if your return data changes it's very simple to update the test.
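A typical Postman test script looks something like this (JavaScript run in Postman's sandbox; the response fields are invented for illustration):

pm.test('status code is 200', function () {
  pm.response.to.have.status(200);
});

pm.test('reply has the expected shape', function () {
  const body = pm.response.json();
  pm.expect(body).to.have.property('id');
  pm.expect(body.status).to.eql('active');
});

Because the assertions only touch the status code and the returned data, any refactor that preserves the API's behavior keeps them green.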

Blaine Osepchuk

It's definitely easy to write tests with low ROI.

The top four ways to do that are:

  • Testing trivial details
  • Testing implementation details instead of testing through the public API
  • Doing behavior verification where state verification is more appropriate, and mockist TDD where classical TDD is more appropriate (see: martinfowler.com/articles/mocksAre... and the sketch after this list)
  • Testing complicated/coupled/non-SRP code
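To illustrate the state-vs-behavior point, a contrived Jest sketch (Cart and AuditLog are invented for illustration):

interface AuditLog {
  record(event: string): void;
}

class Cart {
  private items: string[] = [];
  constructor(private audit: AuditLog) {}
  add(item: string): void {
    this.items.push(item);
    this.audit.record('add');
  }
  count(): number {
    return this.items.length;
  }
}

// State verification (classical): act, then assert on observable state.
test('adding an item grows the cart', () => {
  const cart = new Cart({ record: () => {} });
  cart.add('book');
  expect(cart.count()).toBe(1);
});

// Behavior verification (mockist): assert on the interaction itself.
// Appropriate here only because the audit call *is* the requirement.
test('adding an item is audited', () => {
  const audit = { record: jest.fn() };
  new Cart(audit).add('book');
  expect(audit.record).toHaveBeenCalledWith('add');
});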

With that said, my team has thousands of tests that were mostly written over the last 8 years and we've never had problems like @shadow1349 is describing. We believe our test suites provide a high ROI on our efforts.

shadow1349

If you're happy with your tests, that's great, but I'm going to present a challenge for your tests.

  1. You're going to have so much code that it becomes difficult to sift through, and with those tests bogging down your code base, it will take more and more developer time and effort to maintain.
  2. Since your test suites are (mostly) 8 years old, you're going to have people saying things like "I'm not touching that" (we've all heard it and/or said it) because of how old the code is, because they've forgotten what it does or how it works, or because they're new and it's intimidating.
  3. The more tests you have, the slower your CI/CD pipelines will run (in my situation our product takes more than 45 minutes to build). That is, if you have CI/CD pipelines; otherwise this point is moot.
    • This isn't necessarily the end of the world, but the longer the pipeline takes, the longer it takes to see whether you have any issues, and the longer the development process takes as a whole. It can add up to be a real pain.

still-dreaming-1

shadow, I agree with everything you said about those tools and techniques. However, I have still found that a suite of tests on top of all that is amazing, simply because tests help you catch and fix bugs faster and less painfully than manual testing does. You would have to be very careful and slow, and do a lot of manual testing, to get the same level of assurance.

I guess what I'm saying is: prior to submitting a pull request, having the CI rerun the lints, and asking someone else to review your code, you still want a reason to believe your work is likely to be high quality, and I believe you do want that. Tests help you get to that point faster and with less pain. Why waste someone else's limited mental energy and time reviewing code only to point out bugs you could have found yourself by writing tests? I'm not saying don't do code reviews; I'm saying you want to be proactive about doing quality work prior to the code review. Why rely on a slow, manual process of gathering metrics from manual testing in a development or staging environment instead of proactively using tests to quickly find and fix most of those same problems?

If you haven't experienced that level of benefit from testing your classes, then see it as a challenge. I'm not saying you're wrong, I'm saying you can do better. You can set up a development environment, and write your code and tests in such a way, that they more than provide enough value to be worth doing in addition to all those other things. They can make programming more fun and less painful too. But this only happens after a high level of mastery of writing valuable tests in a frictionless way. Part of the trick is to set up an environment that makes running exactly the tests you want frictionless, just a couple of keystrokes away from the moment you finish typing in the code. It keeps the act of programming in a purer form longer: a programming dance with the machine.

shadow1349

still-dreaming-1, I really enjoyed the last thing you said about dancing with the machine; I hope you don't mind if I use that later.

However, having to write tests gets us from zero to app slower than simply not writing tests. Tests won't make anything faster, but they will provide a feeling of safety. If you come into a project with existing tests you may receive those benefits, but the moment you have to make a breaking change you're going to have to refactor some tests and hope they pass. This is a much slower process, but again, it provides the feeling of safety.

My philosophy when it comes to software development is like Facebook's "Move fast with stable infrastructure" (it used to be "Move fast and break things"): basically the hacker method, with my own personal ideology mixed in (I could write a whole article on this, so I won't get into it).

A lot of my opposition to tests is, in part, emotional: I feel less of the dance with the machine and more of an awkward, arm's-length middle-school sway. This is not a good argument against TDD, I get that.

A better argument is that code reviews add much more value to the team, because we get a better understanding of the code base and of the code our teammates are writing. It is important to keep everyone on the same page. Not to say you can't do that with TDD, but TDD adds (what I believe to be) unnecessary steps.

Blaine Osepchuk

@shadow1349 , here are my answers to your questions.

1) The volume of the code plus the tests doesn't slow us down. Tests are in a separate dir from the code and we have no problems keeping our code organized.

If you are saying it's slower to write code and tests than it is to write the code alone, we find that not to be the case either, once we account for all the relevant costs.

If you just look at the code and say that you could code feature X in one hour and it would take another hour to write tests for it, so tests are inefficient, you're not counting all the relevant costs.

For example, you have to create and run a manual testing plan for the code, which takes time. Then someone else on your team also has to run a manual testing plan during the code review. And if the reviewer spots a problem with your pull request, you have to go back, fix the problem, and repeat the cycle (so you've run the manual testing plan at least four times now). That doesn't have to happen too often for automated tests to pay for themselves.

But it doesn't end there. What happens if you find a defect in that code a month from now? Now, you have to go through the cycle yet again but most teams don't keep their manual tests so you'll probably have to recreate them. Then 5 years from now a colleague will need to change how feature x works but she didn't write it, doesn't know how it works, and has no documentation to help her. So she has to spend time figuring out what it should do, if it in fact does what it's supposed to do, and how to change the code so it satisfies the new requirements in addition to all the existing requirements that are not supposed to change but are not well documented.

2) The age of our code is not correlated with our willingness to change it. Clean code, covered by tests is always easy to change. The code people have a negative emotional reaction to is the original code that is a tangle of responsibilities, logic, and database calls all rolled into one without any tests.

When we want to change that code, we have to employ the techniques from Working Effectively with Legacy Code and Clean Code: carefully get the code under test, write characterization tests around it, then refactor it. And that is a very time-consuming process, even though it's the best way forward when the code is complicated and the change is non-trivial.
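For anyone unfamiliar with the term, a characterization test just pins down whatever the code currently does so a refactor can't silently change it. A minimal sketch, sticking with the JavaScript-style examples in this thread (legacyShippingCost and its expected value are invented; in practice the value is captured by running the real code, not taken from a spec):

// Stand-in for the tangled legacy function being characterized.
function legacyShippingCost(tier: string, items: number): number {
  return tier === 'express' ? 2.5 + items * 2.5 : items * 1.5;
}

test('characterizes current express shipping cost', () => {
  // 7.5 is whatever the code returned when we first ran this test.
  expect(legacyShippingCost('express', 2)).toBe(7.5);
});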

Dealing with our own legacy code is actually the thing that convinced us to adopt design reviews, static analysis, automated code styling, automated testing, code reviews, etc. We didn't want to live that way any more.

We were tired of defects creeping into our projects. We were tired of bouncing branches back and forth between the author and the reviewer because we kept finding more mistakes as the hours rolled by. We were tired of being interrupted to fix errors somebody else discovered in production. We were tired of spending hours trying to understand what some piece of code did. It was frustrating and inefficient and we vowed not to make any more code like that ever again. And while our new code isn't perfect, we've largely accomplished our mission.

One of the top posts this week is on this very topic: Towards Zero Bugs by Jonathan

3) Quick feedback is important. On the project I spend the most time on, we have about 2,900 tests with around 3,800 assertions. This includes both unit and integration tests. I have the slowest dev machine on my team and I can run all the tests in 53 seconds. I can run the unit tests alone in about 15 seconds. Or I can run the integration tests alone in 38 seconds. My colleague has a faster dev machine and he can run all our tests in about half that time. However, in my normal workflow, I usually just need to run the tests in one file to ensure I'm not breaking anything and that almost always takes a fraction of a second.

Listen, I believe you when you report that automated tests have a low ROI in your project. All the things you're saying and the questions you're asking point to problems with your testing program. The part I don't understand is why you think it's a problem with automated testing in general, instead of a symptom that something's not quite right in the way you're building and maintaining your test suite.

Cheers.

shadow1349

I'm curious to know what language you're using. I came into my current project (.NET C#) and they had all their tests written, totaling less than half the number of tests you have, yet it takes around 45 minutes to run them, and that's on our more powerful build servers.

What I was trying to say is that it is faster not to write tests at all. Even down the line, I have never needed tests to tell me what is going on with the code I'm working on or how it works. If I'm sufficiently familiar with the language and the practices used on a project, I don't need much to figure out how everything is put together. Perhaps not everyone is that way, but then are they really qualified to work on that team/project?

I think automated tests are a problem because every team and project I've come into has used those tests as a crutch. I believe a good developer doesn't need to lean on automated tests to write good, clean code. Nor do they need automated tests to tell them how some code is intended to work and whether there are any problems with it.

I think in theory automated tests sound good, but in practice they fall short.

Blaine Osepchuk

My project is written in PHP 7.1 and our unit tests are written for PHPUnit.

I think we'll have to agree to disagree about the utility of automated testing. Even if you are as good as you say you are, how are you going to:

  • Find enough programmers at your skill level to develop and maintain your project?
  • Cope when those developers move on and new developers take their place without the kind of safety net automated tests provide?
  • Prevent your project from being overwhelmed by technical debt with no automated tests to help you refactor safely?

But you don't have to take my word for it (nor should you). It's pretty trivial to design and run an experiment with a control group where you don't write any automated tests and an experimental group where you do, and see which one is cheaper/better/faster in the long run.

We've done that as a sort of back-of-the-napkin calculation. And in our project, with our team, we are way further ahead using our QA processes (including automated testing) than in the "good old days" when QA was ad hoc.

One final thought: I think I'm a pretty good programmer and I've been doing this professionally for almost 20 years and I'm still amazed at how many defects I can find if I actually take the time to write some tests. Even seemingly simple code can contain errors that are difficult to detect by visual inspection alone. That's why using multiple kinds of QA (static analysis, automated testing, individual code reviews, peer code reviews, manual testing, etc.) is a very good strategy for most projects.

still-dreaming-1

Shadow, you said tests give the feeling of safety. But they don't just give you a feeling of safety, they give you actual safety. They help you find and root out many bugs very efficiently. If the tests your project has don't provide any actual safety, the team is not writing proper tests, as Blaine and I have been trying to tell you. Calling tests a crutch is just that: name calling. You could call any QA tool/technique a crutch and say it should not be needed, but either it adds more value than it takes away or it doesn't.

I am especially worried when you follow up the talk about crutches with the idea that you know how your code behaves without tests. I suppose it is possible you have some kind of very warped brain, organized to give up intelligence in other areas in favor of exactly the kind needed to understand code perfectly, but that is not at all normal, nor should it be. Any good developer can grok the language they work in and understand code written in it, but that is not at all the same as being able to just write code, read it over, and know it is correct. They might think and feel they can do this, but that is not the same either. Basically, I don't believe you, and I think you are overestimating your own abilities.

On that note, I have a personal challenge for any developer who thinks they don't need tests. It is actually a reminder to myself, a challenge I give myself any time I start thinking I don't need tests. Write a non-trivial class and read it over until you feel confident there are no bugs. Now write tests that thoroughly cover every part of the behavior and achieve 100% code coverage of that class. I bet at least 7 out of 10 times this will uncover bugs you didn't realize were there. I'm amazed at how often even relatively simple code I write still has bugs that the tests reveal.

shadow1349

There are entire methodologies centered around not writing tests, most notably the hacker method, used predominantly at Facebook. Basically, you innovate and ship code as fast as possible. You accept that there will be bugs and deal with them as they come, because it's more important to innovate fast. You can look up Erik Meijer; he talks a little bit about TDD.

Most code written today is simply a set of instructions written in a way that is readable by humans. If you write good, clean code, you can follow it quite easily. The more code you write, the harder it gets to follow, but you can still follow it. Take this example:

  • A = > 90
  • B = 80 - 90
  • C = 70 - 80
  • D = 60 - 70
  • F = < 60

const grade = 80;

if (grade > 90) {
  console.log('A');
} else if (grade < 90 && grade > 80) {
  console.log('B');
} else if (grade < 80 && grade > 70) {
  console.log('C');
} else if (grade < 70 && grade > 60) {
  console.log('D');
} else if (grade < 60) {
  console.log('F');
} else {
  console.log('ERR');
}

While this is a very simple example, it is similar to code I've seen tests written for. You may spend 10 to 15 minutes writing a test for this bit of code, running it, and trying to figure out why the code prints ERR. Then you're never going to look at this code again; you'll simply run the test. The developers who come later, who wrote neither this code nor the tests, will also simply run the tests without understanding the code. The only time they'll dig in is when the tests fail, and then it will take them much longer to figure out the issues because they never took the time to learn how the code works; they were lazy and just ran the tests.

In practice this is actually really bad, because no one other than the person who wrote the code and the person who wrote the test (sometimes the same person) will take any time to understand the code. You may tell me that I'm wrong, but human nature is to grab the low-hanging fruit. Since developers are in fact human, many of them will not take the time to understand the code, because the tests are the low-hanging fruit.

While that doesn't encompass ALL developers, it is human nature and includes enough of us to make me think we need to write fewer tests and spend more time understanding the code we write. This is the very definition of a crutch: you lean on the tests to find your problems instead of understanding the code well enough to suss out the problems yourself. That doesn't take a "warped brain", just some effort, common sense, and an understanding of the language/environment you're working in.

shadow1349

I will, however, concede one thing. Not all software products are equal, and there are some that really do require tests. If we look at the recent crashes of those 737 MAX planes, the findings pointed to a faulty controller that reported the plane was stalling when it wasn't. It then sent the plane into a nose dive to gain enough speed to fly safely.

Things like this, I believe, are exempt from the hacker method, where you try to innovate as fast as possible and deal with errors as they come, because on systems like that you can't rapidly deploy changes and fixes. But you also have to understand your code and your system and how everything works.

The final cause of the 737 crashes is still under investigation. I do know that Boeing uses TDD. If the initial findings are correct and the issue occurred because a controller incorrectly told the automated system that the plane was stalling, why didn't the tests catch it?

You can say they didn't have enough tests, or that they had bad tests, but the core issue is that no one understood the system well enough to catch that problem. No one took the time and effort to do so, and it cost people their lives.

still-dreaming-1

shadow, let me get this straight. You are saying that avoiding tests, so as not to find the bugs tests would reveal before deployment and to let the users "find the bugs" for you, is the way for developers to avoid being lazy?

I don't buy that skimping on pre-deployment QA speeds up innovation, and I don't think quality only matters in life-and-death scenarios like the plane problem you mentioned. The world is facing the opposite problem. Most software, products, and services barely even work. I'm constantly getting annoyed by things not working quite right, or not as well as I would like them to. You know what would be truly innovative? Products and code that actually work!

What's more, I feel that true innovation is what emerges when you insist on extreme quality. If we all started enforcing things that are currently impractical, like 100% code coverage, we would innovate to make achieving that less painful.

shadow1349

I don't think you fully understand the hacker method at all. You can enforce standards without having to write and maintain thousands of tests.

Ellis

When written correctly, tests created with TDD are supposed to help refactoring. You should be able to change the implementation details and the tests should still be all green.

If you end up having problems refactoring BECAUSE of the unit tests, then it's probably because you are testing the implementation details instead of the behavior of the system or the behavior of the component. One hint of that is having a lot of mocks in your tests.

Ahmed el-Sawalhy

In my opinion, if you feel you are not getting value from your tests, then you might not be utilising them correctly.

I used to not write tests, and it sucked. I would test by hand, and then find out that I had missed some obvious checks.

Writing tests forced me to think of all possible scenarios beforehand, and actually discover issues early on before deployment. In addition, when I change some code, I can very easily confirm its impact with the click of a button, without affecting the environment.

It makes life so much easier in the long run, but it requires long-term vision.

Orian de Wit

I tend to ignore most of the talk about how one should test, and write tests which are most likely to catch bugs caused by seemingly innocent code changes.

I try to base my starting point not on some cultist ritual, but purely on utilitarian value: "Will any test go red if I purposefully add bugs?"

Sometimes I neatly use TDD, sometimes I write code first.

For every method, I try to simply think "If some lumbering oaf in the future (probably me) were to quickly adjust any of the methods I'm calling here, what could go wrong?"

In practice, that means I often write in a semi-TDD style for the method I'm currently adding, while trying to solidify the methods it depends on with some extra weird-edge-case tests.

still-dreaming-1

First I want to clarify that I do strongly believe in using automated testing and feel TDD is a great tool.

A programming article with no examples is perfectly fine, but if you are going to do that, you should go a little deeper in a philosophical sense and question your own reasoning more. The idea that coupling is bad makes no logical sense; it is exactly the same as saying "using code is bad", which is the name of an article I wrote that explains this (also with no examples).

As evidence that I am correct, the subsequent behavior you promote in an attempt to remove coupling is crazy. You say unit tests should test which methods on other objects the class under test calls, but that is not testing the contract at all, it is testing and duplicating the implementation.

The contract is the expected behavior of using the interface the class provides. If the expected behavior of an object is that it calls certain methods a certain way on other objects, then all you have done is implement your own harder to use programming language within the language you are using. But that kind of behavior is only the natural consequence of trying to avoid coupling.

Adrián Norte

Coupling is bad because it increases the amount of code impacted with any minor change, therefore increasing the cost of maintenance.

"You say unit tests should test which methods on other classes it calls, but that is not testing the contract at all, it is testing and duplicating the implementation."

I also said that you should test how they make those calls, and I explained how that helps reduce coupling. I don't know how you write your tests, but if you need to duplicate the implementation, I suggest a change.

still-dreaming-1

Once again, you are not fully thinking through the words and concepts you are using. All code is coupling. The entire point of code is coupling. If you remove all coupling, you no longer have a system, just a bag of objects. You have it backwards. I can restate your first sentence as its exact opposite and it makes more sense: coupling is good because it increases the amount of code impacted by any minor change, thereby decreasing the cost of maintenance. Good classes achieve high conceptual compression, not abstraction.

Adrián Norte

So, you are talking about the coupling vs cohesion thing.

TDD can help with that too, if you listen to the tests, of course. Just as tests that are hard to write can reveal coupling, you can detect a lack of cohesion when you realize that most of your tests just check that X calls Y with exactly the same output as the input it was given.

still-dreaming-1

Well, I guess we will have to agree to disagree. I like that you brought cohesion into the picture, as it is one part of getting compression right. I also like that you have been talking about listening to your tests. It is important to listen to your tests, and your code in general, as it can talk and give feedback to those who know how to listen. Ultimately though I feel that coupling is a good thing, and I'm not sure how writing the types of unit tests you describe would help me find the unwanted types of coupling. To me the only coupling I don't want is random coupling, which I would automatically avoid just by not using random extra things in the code, by having a very general aesthetic sense of what the responsibilities of a class should and should not be, and by refactoring to simplify things (although that simplification is often accomplished by either introducing some new coupling or by replacing some existing coupling with more desirable coupling).

skrc8

I don't like those types of tests, since the kind you are talking about is tied to the implementation.

You say it's good to put clear contracts between classes, but I don't see the purpose of that kind of test, even though I think having clear interfaces is important.

Instead I prefer to test how each function answers with different parameters and edge cases.

Thanks for sharing and making me think about testing.

Brian Gorman

Tight coupling results from not using pure interfaces for everything... All parameters and return values should be pure interface types; that way, all dependencies can be passed in as fakes or stubs. Resist the temptation to call constructors in methods. In short (a sketch follows the list):

  1. All parameter and return types are pure interfaces.
  2. Try to only call constructors inside other constructors. Only call constructors in member methods if there is no other way.
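A small TypeScript sketch of that style (all names invented for illustration): every dependency arrives through the constructor as a pure interface type, so a test can hand in fakes.

interface Clock {
  now(): Date;
}

interface MessageStore {
  save(text: string, at: Date): void;
}

class Messenger {
  // Dependencies are injected as interfaces; no constructor calls hide
  // inside the member methods.
  constructor(private clock: Clock, private store: MessageStore) {}

  post(text: string): void {
    this.store.save(text, this.clock.now());
  }
}

// In a test, hand-written fakes stand in for the real implementations.
const saved: Array<[string, Date]> = [];
const fixedTime = new Date('2019-03-17T00:00:00Z');
new Messenger(
  { now: () => fixedTime },
  { save: (text, at) => { saved.push([text, at]); } },
).post('hello');
// saved now contains [['hello', fixedTime]]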

Adrián Norte

That prevents coupling at the implementation level but not at the logical level. With TDD you can prevent both.

Nested Software

Tests usually are kinda like "given X to ClassA.cat() it will call ClassB.fox() with X*Y"

In my opinion, this is a bad idea. A test should verify that a given function does what it was meant to do. Most of the time, the details of its implementation should not be part of the test. The approach you describe will make it easy to create passing tests even though the actual application logic doesn't work properly. It's also brittle: If you change the implementation of a function, the tests are likely to break, even if the contract that function fulfills remains unchanged.

There are cases where this kind of testing, using mocks, is appropriate. If your function interacts with an external system, then there is a decent chance that you should mock that out for testing purposes. For instance, a test may say "please make sure that the send_text_message function was called with the following parameters" -- but without actually running send_text_message. This is done when the external system may be unavailable entirely, or produces results that vary over time, or would slow down the test suite too much.

Overall, though, I recommend that a test call a given function and then confirm that the function's return value is what was expected, or at least that some state change resulting from the call is what it should be.
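Roughly, in Jest terms (applyDiscount, notifyUser, and send_text_message are invented for illustration):

// Preferred: call the function, assert on its return value.
function applyDiscount(price: number, percent: number): number {
  return price - (price * percent) / 100;
}

test('applyDiscount takes 10% off', () => {
  expect(applyDiscount(200, 10)).toBe(180);
});

// Mocking is reserved for the external system: verify that
// send_text_message was called correctly without actually sending.
function notifyUser(
  phone: string,
  message: string,
  send_text_message: (phone: string, message: string) => void,
): void {
  send_text_message(phone, message);
}

test('notifyUser sends a text with the right parameters', () => {
  const send_text_message = jest.fn();
  notifyUser('+15550100', 'order shipped', send_text_message);
  expect(send_text_message).toHaveBeenCalledWith('+15550100', 'order shipped');
});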

Sergei Munovarov

"DAL/DAO/REPO layer mustn't be mocked. Why? Simply, the objective is to test a function that works with the database."

But you could just test your DAOs/repositories in isolation with, say, an in-memory DB instance. And nothing prevents you from using mocks in other application-level tests.
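For example, with an in-memory SQLite instance (a sketch assuming the better-sqlite3 package; UserRepo is invented for illustration):

import Database from 'better-sqlite3';

class UserRepo {
  constructor(private db: Database.Database) {}
  add(name: string): void {
    this.db.prepare('INSERT INTO users (name) VALUES (?)').run(name);
  }
  find(name: string): unknown {
    return this.db.prepare('SELECT * FROM users WHERE name = ?').get(name);
  }
}

test('UserRepo round-trips a user', () => {
  const db = new Database(':memory:'); // fresh, isolated DB per test
  db.exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
  const repo = new UserRepo(db);
  repo.add('ada');
  expect(repo.find('ada')).toMatchObject({ name: 'ada' });
});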

still-dreaming-1

I would still be worried that the in-memory database does not behave the same, which would hide many bugs that would otherwise be caught by using the real database engine.

Lars Richter

Hi Adrián,
thanks for your post.
I'm always amazed by how emotional these TDD discussions get. It reminds me very much of the current "replace master with main" discussion. There are stubborn opinions for and against it.

I don't like hardliner mindsets in general. There are people chasing 100% code coverage, which leads to silly tests. But there are the "don't write any tests at all" people as well. I think tests are super valuable, but you have to learn to write useful tests.
At work, we try to write tests for all the important parts of our (pretty big) application, and it works out pretty well for us.

doug

Personally, TDD means your code is testable: it's designed so that it's proven testable at a smaller level. The tests also communicate how it works. It also speeds up development: develop once and park it. I do refactor, but it can become an obsession; visually I want to keep it simple (KISS). I hate var in code reviews because I don't want the cognitive overload of working out what a var is, especially when there is so much to scan. Sure, some people don't get anything out of tests; then don't do testing. For me it's also pain reduction. The next question is about mocking and fakes: what is testable, and is the fake honest? Next, are the tests giving me a false sense of security? Finally, you are delivering a product at a price point; the tests are not the delivery, your code is (YAGNI).

Adam Luzsi

"Test behavior, not implementation" is my motto. I like to cleanly separate the external resources and test them with integration tests that are reused as a shared specification for implementing an in-memory representation. With that done, I don't have to use any mocks/stubs anymore, and my tests only aim to do behavior testing through composition.

Of course this requires a small amount of self-discipline, but so far (over the past few years) I have had only good experiences with it.

The only "con" with it that each time an external resources needs to be added to the system, there is an extra step for creating an in memory implementation as well by shared specs.

But I'm a simple man. I see a post about TDD, I upvote :)

Harvey Thompson

I prefer to just think of tests as tests. I also don't often mock classes.

I write tests first to prove what I'm about to code is correct (or, occasionally, to find that my mental model doesn't match reality, or that the class API sucks).

Typically software is built in layers, so for higher levels, I can write tests assuming the lower levels are tested (because they have tests) and work (because the code runs and does something sensible also).

Mocking is usually not important to me because it's a lot of work that provides little benefit and slows progress. I do mock if it's easy (abstract base classes) or if it's very important to hide some super complex/fragile system.

I try to balance forward progress rather than testing everything, which is especially important because half the code gets rewritten so often that I'd have to rewrite half the tests. "Just enough tests" is therefore actually better than "everything is tested", which is better than "not enough or no tests".

Dmitry Yakimenko

I think it would be easier to follow if you demonstrated this with an example that you develop step by step in your article.

Adrián Norte

Good idea! I will do a follow-up to this one with that when I have time. Thanks :D

Sobhan

What's the difference between TDD and unit tests?

Adrián Norte

TDD is a way to develop using tests, while unit tests are tests that aim to check and verify the smallest units of your logic.

Unit testing is part of TDD, but only a part; you also need integration tests.
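A micro example of the TDD loop itself, in Jest syntax (slugify is invented for illustration): first the failing test, then just enough code to pass, then refactor.

// 1. Red: the test comes first and fails, since slugify doesn't exist yet.
test('slugify joins lowercased words with dashes', () => {
  expect(slugify('Hello TDD World')).toBe('hello-tdd-world');
});

// 2. Green: write just enough code to make the test pass.
function slugify(title: string): string {
  return title.toLowerCase().split(/\s+/).join('-');
}

// 3. Refactor: clean up with the green test as a safety net, then repeat
// the loop with the next small behavior.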

Mykezero

Your English is great! Any chance you could think of an example of where you found hidden coupling? Also, what purpose do your integration tests serve?

Adrián Norte

Thanks! I'm always worried about not making sense because English is not my native language.

Usually, I find hidden coupling in the early stages of developing providers (in the sense of a MySQL, RabbitMQ, or microservice provider), the architecture meant to hide that accidental complexity. For example, the other day I was connecting a microservice to a legacy one that had logical coupling (it wasn't so micro), and the provider class ended up reflecting that coupling.

For me, integration tests are a tool that ensures the software being developed does what it says. The "software being developed" part is important: anything outside it should be mocked. I usually code APIs (Postman is all the frontend I need :P), so my integration tests are usually a bunch of HTTP calls with assertions on the replies.

I use NodeJS most of the time, with Nock and Supertest.
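As a sketch of what that looks like (the app, route, and legacy host are invented; Supertest drives my API over HTTP while Nock mocks everything outside the software being developed):

import request from 'supertest';
import nock from 'nock';
import app from './app'; // the Express app under test (hypothetical)

test('GET /orders/1 aggregates data from the legacy service', async () => {
  // Mock the external legacy microservice.
  nock('http://legacy.internal')
    .get('/customers/1')
    .reply(200, { id: 1, name: 'Ada' });

  // A real HTTP call against our own API, with assertions on the reply.
  const res = await request(app).get('/orders/1').expect(200);
  expect(res.body.customer).toBe('Ada');
});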