re: You don't know TDD

re: I used to believe exactly what you just said. I tried TDD and other ways of automated testing on and off again for several years, and kept coming b...

One more thing to add. Many proponents of TDD will say the main value tests add is actually not that they verify the correctness of the code, but I suspect someone like yourself will quickly see how ridiculous that is. We want our code to be correct, and if tests help us make the code correct, surely that added value must outweigh any other advantage they bring, since not having tests certainly doesn't prevent you from designing your code well or writing good code. If dragging around a code base of tests can be justified, the justification must lie in the fact that they help you test the code and work out bugs. So if you can find a way to make the tests add more value than they subtract, your code will be less buggy as well. It sounds like refactoring is something you like to do, and yes, you can technically refactor more quickly without tests, but of course you can refactor quickly if the code doesn't actually need to work.

Whenever I can, I like to refactor code, provided it will make the code better or add some value. One thing I've gotten into lately is TypeScript and Node.js (coming from a C#/.NET and Java background). One tool I've grown incredibly fond of is linting (I know it's available in more languages than just Node/TypeScript, but it's predominantly used with Node). It allows me to define a set of rules and standards for code style and quality that are enforced by a single tool. It is especially useful when deploying via a CI/CD tool like CircleCI, because I can lint the whole project, and if linting fails, the deploy doesn't happen.
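As a sketch of what that gate can look like (a minimal CircleCI config; the Node image and project layout here are assumptions, not taken from any real project):

```yaml
version: 2.1
jobs:
  lint:
    docker:
      - image: cimg/node:lts   # assumed image; any Node image works
    steps:
      - checkout
      - run: npm ci
      # --max-warnings 0 makes even warnings fail the job,
      # which blocks anything downstream that requires it.
      - run: npx eslint . --max-warnings 0
workflows:
  build-and-deploy:
    jobs:
      - lint
      # a deploy job would be added here with `requires: [lint]`
```

The key point is simply that the lint job exits non-zero on any violation, so the pipeline never reaches the deploy step with unlinted code.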

For me, this is so much better than writing tests because it covers the whole code base rather than single classes/functions, it catches most errors, it enforces code style and quality, and it can easily be copied to another project.

I acknowledge that this won't catch 100% of all errors because you can still have incorrect code meet those standards which will get built, and I also know that these things can sometimes slip through PR reviews.

However, if I have sufficiently set up monitoring for my code (such as Sentry or Grafana, or both), I'll gather the metrics I need and be able to react quickly and accurately to any issues by running the code in a development/staging environment before pushing it to production. Most products should have multiple environments (development, staging, production) anyway, so I make sure these are utilized fully.

However, one kind of test I can get behind, at least for APIs, is Postman tests. The great thing about Postman tests is that you can refactor the code all you want; as long as you're returning the same data and HTTP status code, your Postman tests will pass and accurately tell you whether your API performed as expected. Not to mention they work the same way no matter what language your API is written in. So these tests add value without taking a ton of time to refactor if your code changes, and if your return data changes it's very simple to update the test.
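Inside Postman such a check is written against its `pm` scripting API (`pm.test`, `pm.response`). The sketch below expresses the same status-plus-shape assertion as a plain, standalone JavaScript function; the endpoint's response shape (`id`, `name`) is hypothetical:

```javascript
// Check that an API response "looks right" the way a Postman test would:
// correct HTTP status and the same data shape, regardless of how the
// server code behind it is implemented.
function checkUserResponse(status, body) {
  if (status !== 200) return false;                 // expected status code
  if (typeof body !== 'object' || body === null) return false;
  return typeof body.id === 'number' &&             // expected data shape
         typeof body.name === 'string';
}

// A refactored server still passes as long as status and shape are unchanged.
console.log(checkUserResponse(200, { id: 7, name: 'Ada' })); // true
console.log(checkUserResponse(500, { id: 7, name: 'Ada' })); // false
```

Because the check only touches the response, not the implementation, a rewrite in another language would pass the exact same test.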

It's definitely easy to write tests with low ROI.

The top 4 ways to do that are:

  • Testing trivial details
  • Testing the implementation details instead of testing through the public API
  • Doing behavior verification when state verification is more appropriate, and mockist TDD where classical TDD is more appropriate
  • Testing complicated/coupled/non-SRP code

With that said, my team has thousands of tests that were mostly written over the last 8 years and we've never had problems like @shadow1349 is describing. We believe our test suites provide a high ROI on our efforts.

If you're happy with your tests, that's great, but I'm going to present a challenge for your tests.

  1. You're going to have so much code that it will become difficult to sift through and maintain; with those tests bogging down your code base, it will take more and more developer time and effort to maintain.
  2. Since your test suites are (mostly) 8 years old, you're going to have people saying things like "I'm not touching that" (we've all heard it and/or said it) because of how old the code is, or because they have forgotten what it does and how it works, or because they're new and it can be intimidating.
  3. The more tests you have, the slower your CI/CD pipelines are going to run (in my situation, our product takes more than 45 minutes to build). That is, if you have CI/CD pipelines; otherwise this point is moot.
    • This isn't necessarily the end of the world, but the longer the pipeline takes to run, the longer it takes to see whether you have any issues, and the longer the development process takes as a whole. It can add up to be a real pain.

shadow, I agree with everything you said about those tools and techniques. However, I have still found that a suite of tests on top of all that is amazing, simply because tests help you catch and fix bugs faster and less painfully than manual testing does. You would have to be very careful and slow, and do a great deal of manual testing, to get the same level of assurance.

I guess what I'm saying is: prior to submitting a pull request and having CI rerun the lints, and prior to asking someone else to review your code, you still want a reason to believe your work is likely to be high quality, and I believe you do want that. Tests help you get to that point faster and with less pain. Why waste someone else's limited mental energy and time reviewing code, only to have them point out bugs you could have found yourself by writing tests? I'm not saying don't do code reviews; I'm saying you want to be proactive about doing quality work before the code review. Why rely on a slow, manual process of gathering metrics from manual testing in a development or staging environment instead of proactively using tests to quickly find and fix most of those same problems?

If you haven't experienced that level of benefit from testing your classes, then see it as a challenge. I'm not saying you're wrong; I'm saying you can do better. You can set up a development environment, and write your code and tests in such a way, that they more than provide enough value to be worth doing in addition to all those other things. They can make programming more fun and less painful too. But this only happens after a high level of mastery of writing valuable tests in a frictionless way. Part of the trick is to set up an environment that makes running exactly the tests you want frictionless, just a couple of keystrokes away from the moment you finish typing in the code. It keeps the act of programming in a purer form longer: a programming dance with the machine.

still-dreaming-1, I really enjoyed the last thing you said about dancing with the machine; I hope you don't mind if I use that later.

However, having to write tests gets us from zero to app slower than simply not writing tests. Tests won't make anything faster, but they will provide the feeling of safety. If you come into a project with existing tests, you may receive those benefits, but the moment you have to make a breaking change, you're going to have to refactor some tests and hope they pass. This is a much slower process, but again, it provides the feeling of safety.

My philosophy when it comes to software development is like Facebook's "Move fast with stable infrastructure" (it used to be "Move fast and break things"): basically the hacker method, with my own personal ideology mixed in (I could write a whole article on this, so I won't get into it here).

A lot of my opposition to tests is, in part, emotional: I feel less of the dance with the machine and more of an awkward, arm's-length middle-school sway. This is not a good argument against TDD, I get that.

A better argument is that code reviews add much more value to the team, because we gain a better understanding of the code base and of the code our teammates are writing. It is important to keep everyone on the same page. Not to say you can't do that with TDD, but TDD adds in (what I believe to be) unnecessary steps.

@shadow1349 , here are my answers to your questions.

1) The volume of the code plus the tests doesn't slow us down. Tests live in a separate directory from the code, and we have no problem keeping our code organized.

If you are saying it's slower to write code and tests than it is to write the code alone, we find that not to be the case either, once we account for all the relevant costs.

If you just look at the code and say that you could code feature X in one hour and it would take another hour to write tests for it, so tests are inefficient, you're not counting all the relevant costs.

For example, you have to create and run a manual testing plan for the code, which takes time. Then someone else on your team has to run a manual testing plan during the code review. And if the reviewer spots a problem with your pull request, you have to go back, fix the problem, and repeat the cycle (so you've run the manual testing plan at least four times now). That doesn't have to happen too often for automated tests to pay for themselves.

But it doesn't end there. What happens if you find a defect in that code a month from now? You have to go through the cycle yet again, and most teams don't keep their manual tests, so you'll probably have to recreate them. Then, five years from now, a colleague will need to change how feature X works, but she didn't write it, doesn't know how it works, and has no documentation to help her. So she has to spend time figuring out what it should do, whether it in fact does what it's supposed to do, and how to change the code so it satisfies the new requirements in addition to all the existing requirements, which are not supposed to change but are not well documented.

2) The age of our code is not correlated with our willingness to change it. Clean code, covered by tests, is always easy to change. The code people have a negative emotional reaction to is the original code: a tangle of responsibilities, logic, and database calls all rolled into one, without any tests.

When we want to change that code, we have to employ the techniques from Working Effectively with Legacy Code and Clean Code: carefully get the code under test, write characterization tests around it, then refactor it. That is a very time-consuming process, even though it's the best way forward when the code is complicated and the change is non-trivial.

Dealing with our own legacy code is actually the thing that convinced us to adopt design reviews, static analysis, automated code styling, automated testing, code reviews, etc. We didn't want to live that way any more.

We were tired of defects creeping into our projects. We were tired of bouncing branches back and forth between the author and the reviewer because we kept finding more mistakes as the hours rolled by. We were tired of being interrupted to fix errors somebody else discovered in production. We were tired of spending hours trying to understand what some piece of code did. It was frustrating and inefficient and we vowed not to make any more code like that ever again. And while our new code isn't perfect, we've largely accomplished our mission.

One of the top posts this week is on this very topic: Towards Zero Bugs by Jonathan

3) Quick feedback is important. On the project I spend the most time on, we have about 2,900 tests with around 3,800 assertions. This includes both unit and integration tests. I have the slowest dev machine on my team and I can run all the tests in 53 seconds. I can run the unit tests alone in about 15 seconds. Or I can run the integration tests alone in 38 seconds. My colleague has a faster dev machine and he can run all our tests in about half that time. However, in my normal workflow, I usually just need to run the tests in one file to ensure I'm not breaking anything and that almost always takes a fraction of a second.

Listen, I believe you when you report that automated tests have a low ROI on your project. All the things you're saying and the questions you're asking point to problems with your testing program. The part I don't understand is why you think it's a problem with automated testing in general, rather than a symptom that something's not quite right in the way you're building and maintaining your test suite.


I'm curious to know what language you're using. I came into my current project (it's .NET C#) with all its tests already written, totaling less than half the number of tests you have. Yet it takes around 45 minutes to run our tests, and that's on our more powerful build servers.

What I was trying to say is that it is faster not to write tests at all. Even down the line, I have never needed tests to tell me what is going on with the code I'm working on, or how it works. If I'm sufficiently familiar with the language and the practices used on a project, I don't need much to figure out how everything is put together. Perhaps not everyone is that way, but then are they really qualified to work on that team/project?

I think automated tests are a problem because every team and project I've come into has used those tests like a crutch. I believe a good developer doesn't need to lean on automated tests to write good clean code. Nor do they need automated tests to tell them how some code is intended to work and if there are any problems with it.

I think in theory automated tests sound good, but in practice they fall short.

My project is written in PHP 7.1 and our unit tests are written for PHPUnit.

I think we'll have to agree to disagree about the utility of automated testing. Even if you are as good as you say you are:

  • How are you going to find enough programmers at your skill level to develop and maintain your project?
  • What are you going to do when those developers move on and new developers take their place, without the kind of safety net automated tests provide?
  • How are you going to prevent your project from being overwhelmed by technical debt if you have no automated tests to help you refactor safely?

But you don't have to take my word for it (nor should you). It's pretty trivial to design and run an experiment with a control group where you write no automated tests and an experimental group where you do, and see which one is cheaper/better/faster in the long run.

We've done that as a sort of a back-of-the-napkin calculation. And in our project, with our team, we are way further ahead using our QA processes (including automated testing) than the "good old days" when QA was ad hoc.

One final thought: I think I'm a pretty good programmer and I've been doing this professionally for almost 20 years and I'm still amazed at how many defects I can find if I actually take the time to write some tests. Even seemingly simple code can contain errors that are difficult to detect by visual inspection alone. That's why using multiple kinds of QA (static analysis, automated testing, individual code reviews, peer code reviews, manual testing, etc.) is a very good strategy for most projects.

Shadow, you said tests give the feeling of safety. But they don't just give you a feeling of safety; they give you actual safety. They help you find and root out many bugs very efficiently. If the tests your project has don't provide any actual safety, the team is not writing proper tests, as Blaine and I have been trying to tell you. Calling tests a crutch is just that: name calling. You could call any QA tool or technique a crutch and say it should not be needed, but the real question is whether it adds more value than it takes away.

I am especially worried when you follow up the talk about crutches with the idea that you know how your code behaves without tests. I suppose it is possible you have some kind of warped brain, organized so that it gives up intelligence in other areas in exchange for exactly the kind needed to understand code perfectly, but that is not at all normal, nor should it be. Any good developer can grok the language they work in and understand code written in it, but that is not the same as being able to write code, read it over, and know it is correct. They might think and feel they can do this, but that is not the same either. Basically, I don't believe you, and I think you are overestimating your own abilities.

On that note, I have a personal challenge for any developer who thinks they don't need tests. It is actually a reminder to myself, a challenge I give myself any time I start thinking I don't need tests. Write a non-trivial class and read it over until you feel confident there are no bugs. Now write tests that thoroughly cover every part of its behavior and achieve 100% code coverage of that class. I bet at least 7 out of 10 times this will uncover bugs you didn't realize were there. I'm amazed at how often even relatively simple code I write still has bugs that the tests reveal.
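To make the challenge concrete, here is the kind of thing that happens (a made-up example, not taken from any real code base): a function that reads fine on inspection but whose boundary cases only a test exposes.

```javascript
// Reads fine at a glance: "divisible by 4 but not by 100".
// Thoroughly testing the boundaries reveals the missing
// "divisible by 400" rule of the Gregorian calendar.
function isLeapYear(year) {
  return year % 4 === 0 && year % 100 !== 0; // bug: 2000 IS a leap year
}

console.log(isLeapYear(2024)); // true  (correct)
console.log(isLeapYear(1900)); // false (correct)
console.log(isLeapYear(2000)); // false (wrong: should be true; a test catches it)
```

Reading the function over, most people nod along; it takes the century-boundary test case to surface the defect.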

There are entire methodologies centered around not writing tests. Most notably the hacker method, used predominantly at Facebook. Basically, you innovate and ship code as fast as possible. You realize that there will be bugs, but deal with those as they come because it's more important to innovate fast. You can look up Erik Meijer as he talks a little bit about TDD.

Most code written today is simply a set of instructions written in a way that is readable by humans. If you write good clean code, you can follow it quite easily. The more code you write, the harder it gets to follow, but you can still follow it. Take this example:

  • A = above 90
  • B = 80–90
  • C = 70–80
  • D = 60–70
  • F = below 60

const grade = 80;

if (grade > 90) {
  console.log('A');
} else if (grade < 90 && grade > 80) {
  console.log('B');
} else if (grade < 80 && grade > 70) {
  console.log('C');
} else if (grade < 70 && grade > 60) {
  console.log('D');
} else if (grade < 60) {
  console.log('F');
} else {
  console.log('ERR'); // a grade of exactly 80 (or 90, 70, 60) falls through to here
}

While this is a very simple example, it is similar to code that I've seen tests written for. You may spend 10 to 15 minutes writing a test for this bit of code, running your tests, and trying to figure out why it prints ERR. Then you're never going to look at this code again; you'll simply run the test. Then developers who didn't write this code or the tests will come in and simply run the tests without understanding the code. The only time they'll look at the code is if the tests fail. But it will take them much longer to figure out the issues, because they never took the time to understand how the code works; they were lazy and just ran the tests.

In practice this is actually really bad, because no one other than the person who wrote the code and the person who wrote the test (sometimes the same person) will take any time to understand the code. You may tell me I'm wrong, but human nature is to grab the low-hanging fruit. Since developers are in fact human, many of them will not take any time to understand the code, because the tests are the low-hanging fruit.

While that doesn't encompass ALL developers, it is human nature and includes enough of us to make me think we need to write fewer tests and spend more time understanding the code we write. This is the very definition of a crutch: you lean on the tests to find your problems instead of understanding the code well enough to suss out the problems yourself. That doesn't take a "warped brain"; it just takes some effort, common sense, and an understanding of the language and environment you're working in.

I will, however, concede one thing. Not all software products are equal, and some really do require tests. If we look at the recent crashes of those 737 MAX planes, investigators pointed to a faulty controller that reported the plane was stalling when it wasn't. It then sent the plane into a nose dive to gain enough speed to fly safely.

Things like this, I believe, are exempt from the hacker method, where you try to innovate as fast as possible and deal with errors as they come, because on systems like that you can't rapidly deploy changes and fixes. You also have to understand your code, your system, and how everything works together.

The final cause of the 737 crashes is still under investigation. I do know that Boeing uses TDD. If the initial findings are correct and the issue occurred because a controller incorrectly told the automated system that the plane was stalling, why didn't the tests catch it?

You can say that they didn't have enough tests, or that they had bad tests, but the core issue is that no one understood the system well enough to catch that problem. No one took the time and effort to do that, and it cost people their lives.

shadow, let me get this straight. You are saying that avoiding tests, so as not to find the bugs tests would reveal before deployment, and instead letting users "find the bugs" for you, is how developers avoid being lazy?

I don't buy that skimping on pre-deployment QA speeds up innovation, and I don't think that quality only matters in life-and-death scenarios like the plane problem you mentioned. The world is facing the opposite problem. Most software, products, and services barely even work. I'm constantly getting annoyed by things not working quite right, or not as well as I would like them to. You know what would be truly innovative? Products and code that actually work!

What is more, I feel that true innovation is what emerges when you insist on extreme quality. If we all started enforcing things that are currently impractical, like 100% code coverage, what would happen is we would innovate to make achieving that less painful.

I don't think you fully understand the hacker method at all. You can enforce standards without having to write and maintain thousands of tests.
