re: You don't know TDD


@shadow1349 , here are my answers to your questions.

1) The volume of the code plus the tests doesn't slow us down. Tests are in a separate dir from the code and we have no problems keeping our code organized.

If you are saying it's slower to write code and tests than it is to write the code alone, we find that not to be the case either, once we account for all the relevant costs.

If you just look at the code and say that you could build feature x in 1 hour and it would take another hour to write tests for it, so tests are inefficient, you're not counting all the relevant costs.

For example, you have to create and run a manual testing plan on the code, which takes time. Then someone else on your team also has to run a manual testing plan during the code review. And if the reviewer spots a problem with your pull request, you have to go back, fix the problem, and repeat the cycle (so you've run the manual testing plan at least 4 times now). That doesn't have to happen too often for automated tests to pay for themselves.

But it doesn't end there. What happens if you find a defect in that code a month from now? Now, you have to go through the cycle yet again but most teams don't keep their manual tests so you'll probably have to recreate them. Then 5 years from now a colleague will need to change how feature x works but she didn't write it, doesn't know how it works, and has no documentation to help her. So she has to spend time figuring out what it should do, if it in fact does what it's supposed to do, and how to change the code so it satisfies the new requirements in addition to all the existing requirements that are not supposed to change but are not well documented.

2) The age of our code is not correlated with our willingness to change it. Clean code, covered by tests, is always easy to change. The code people have a negative emotional reaction to is the original code that is a tangle of responsibilities, logic, and database calls all rolled into one, without any tests.

When we want to change that code we have to employ the techniques from Working Effectively with Legacy Code and Clean Code to carefully get the code under test, write characterization tests around it, then refactor it. And that is a very time-consuming process, even though it's the best way forward when the code is complicated and the change is non-trivial.

Dealing with our own legacy code is actually the thing that convinced us to adopt design reviews, static analysis, automated code styling, automated testing, code reviews, etc. We didn't want to live that way any more.

We were tired of defects creeping into our projects. We were tired of bouncing branches back and forth between the author and the reviewer because we kept finding more mistakes as the hours rolled by. We were tired of being interrupted to fix errors somebody else discovered in production. We were tired of spending hours trying to understand what some piece of code did. It was frustrating and inefficient and we vowed not to make any more code like that ever again. And while our new code isn't perfect, we've largely accomplished our mission.

One of the top posts this week is on this very topic: Towards Zero Bugs by Jonathan

3) Quick feedback is important. On the project I spend the most time on, we have about 2,900 tests with around 3,800 assertions. This includes both unit and integration tests. I have the slowest dev machine on my team and I can run all the tests in 53 seconds. I can run the unit tests alone in about 15 seconds. Or I can run the integration tests alone in 38 seconds. My colleague has a faster dev machine and he can run all our tests in about half that time. However, in my normal workflow, I usually just need to run the tests in one file to ensure I'm not breaking anything and that almost always takes a fraction of a second.

Listen, I believe you when you report that automated tests have a low ROI in your project. All the things you're saying and the questions you are asking point to problems with your testing program. The part I don't understand is why you think that it's a problem with automated testing in general instead of a symptom that something's not quite right in the way you're building and maintaining your test suite?

Cheers.

I'm curious to know what language you're using. I came into the current project I'm working on (it's dotnet C#) and they had all their tests written, which total less than half of the number of tests you have. Yet it takes around 45 minutes to run our tests and that's on our more powerful build servers.

What I was trying to say is that it is faster not to write tests at all. Even down the line, I have never needed tests to tell me what is going on with the code I'm working on, or how it works. If I'm sufficiently familiar with the language and practices used on that project I don't need a lot to figure out how everything is put together. Perhaps not everyone is that way, but then are they really qualified to work on that team/project?

I think automated tests are a problem because every team and project I've come into has used those tests like a crutch. I believe a good developer doesn't need to lean on automated tests to write good clean code. Nor do they need automated tests to tell them how some code is intended to work and if there are any problems with it.

I think in theory automated tests sound good, but in practice they fall short.

My project is written in PHP 7.1 and our unit tests are written for PHPUnit.

I think we'll have to agree to disagree about the utility of automated testing. Even if you are as good as you say you are, how are you going to:

  • Find enough programmers at your skill level to develop and maintain your project?
  • Cope when those developers move on and new developers take their place without the kind of safety net automated tests provide?
  • Prevent your project from being overwhelmed by technical debt if you have no automated tests to help you refactor safely?

But you don't have to take my word for it (nor should you). It's pretty trivial to design and run an experiment with a control group where you don't write any automated tests and an experimental group where you do, then see which one is cheaper/better/faster in the long run.

We've done that as a sort of a back-of-the-napkin calculation. And in our project, with our team, we are way further ahead using our QA processes (including automated testing) than the "good old days" when QA was ad hoc.

One final thought: I think I'm a pretty good programmer and I've been doing this professionally for almost 20 years and I'm still amazed at how many defects I can find if I actually take the time to write some tests. Even seemingly simple code can contain errors that are difficult to detect by visual inspection alone. That's why using multiple kinds of QA (static analysis, automated testing, individual code reviews, peer code reviews, manual testing, etc.) is a very good strategy for most projects.

Shadow, you said tests give the feeling of safety. But they don't just give you a feeling of safety, they give you actual safety. They help you find and root out many bugs very efficiently. If the tests your project has don't provide any actual safety, the team is not writing proper tests, as Blaine and I have been trying to tell you. Calling tests a crutch is just that: name-calling. You could call any QA tool/technique a crutch and say it should not be needed, but the real question is whether it adds more value than it takes away.

I am especially worried when you follow up talking about crutches with the idea that you know how your code behaves without tests. I suppose it is possible you have some kind of very warped brain that is organized such that it gives up intelligence in other areas in favor of giving you the exact kind of intelligence to understand code perfectly, but that is not at all normal, nor should it be. Any good developer can grok the language they work in and have an understanding of code written in it, but that is not at all the same as saying they can just write code, read over it, and know it is correct. They might think and feel they can do this, but that is also not the same. Basically I don't believe you, and I think you are overestimating your own abilities.

On that note I have a personal challenge to any developer who thinks they don't need tests. It is actually a reminder to myself, and a challenge I give myself any time I start thinking I don't need tests. Write a non-trivial class, and read over it until you feel confident there are no bugs. Now write tests that thoroughly cover every part of the behavior and achieve 100% code coverage of that class. I bet at least 7 out of 10 times this will uncover bugs you didn't realize were there. I'm amazed at how often even relatively simple code I write still has bugs that the tests reveal.

There are entire methodologies centered around not writing tests. Most notably the hacker method, used predominantly at Facebook. Basically, you innovate and ship code as fast as possible. You realize that there will be bugs, but deal with those as they come because it's more important to innovate fast. You can look up Erik Meijer as he talks a little bit about TDD.

Most code written today is simply a set of instructions written in a way that is readable by humans. If you write good clean code you can follow it quite easily. The more code you write the harder it gets to follow, but you can still follow it. Take this example:

  • A = > 90
  • B = 80 - 90
  • C = 70 - 80
  • D = 60 - 70
  • F = < 60

const grade = 80;

if (grade > 90) {
  console.log('A');
} else if (grade < 90 && grade > 80) {
  console.log('B');
} else if (grade < 80 && grade > 70) {
  console.log('C');
} else if (grade < 70 && grade > 60) {
  console.log('D');
} else if (grade < 60) {
  console.log('F');
} else {
  console.log('ERR');
}

This is a very simple example, but it's similar to code I've seen tests written for. You may spend 10 to 15 minutes writing a test for this bit of code, running your tests, and trying to figure out why it prints ERR. Then you're never going to look at this code again; you'll simply run the test. The developers who come in later, who didn't write this code or the tests, will also simply run the tests without understanding the code. The only time they'll look at the code is if the tests fail. But then it's going to take much longer to figure out the issue, because they never took the time to learn how the code works; they were lazy and just ran the tests.

In practice this is actually really bad, because no one other than the person who wrote the code and the person who wrote the test (sometimes the same person) will take any time to understand the code. You may tell me that I'm wrong, but human nature is to grab the low-hanging fruit. Since developers are in fact humans, many of them will not take any time to understand the code, because the tests are the low-hanging fruit.

While that doesn't encompass ALL developers, it is human nature, and it includes enough of us to make me think we need to write fewer tests and spend more time understanding the code we write. This is the very definition of a crutch, because you lean on the tests to find your problems instead of understanding the code well enough to suss out the problems yourself. That doesn't take a "warped brain"; it just takes some effort, common sense, and an understanding of the language/environment you're working in.

I will, however, concede one thing. Not all software products are equal, and there are some that really do require tests. The recent crashes of those 737 MAX planes were traced to a faulty controller that reported the plane was stalling when it wasn't. It then sent the plane into a nose dive to gain enough speed to safely fly.

Things like this, I believe, are exempt from the hacker method where you try to innovate as fast as possible and deal with errors as they come because on systems like that you can't rapidly deploy changes and fixes. But you also have to understand your code and system and how everything works.

The final cause of the 737 crashes is still under investigation. I do know that Boeing uses TDD. If the initial findings are correct and the issue occurred because a controller incorrectly told the automated system that the plane was stalling, why didn't the tests catch it?

You can say that they didn't have enough tests or that they had bad tests, but the core issue is that no one understood the system well enough to catch that issue. No one took the time and effort to do that, and it cost people their lives.

shadow, let me get this straight. You are saying that skipping the tests that would reveal bugs before deployment, so that users "find the bugs" for you, is how developers avoid being lazy?

I don't buy that skimping on pre-deployment QA speeds up innovation, and I don't think that quality only matters in life-and-death scenarios like the plane problem you mentioned. The world is facing the opposite problem. Most software, products, and services barely even work. I'm constantly getting annoyed by things not working quite right, or not as well as I would like them to. You know what would be truly innovative? Products and code that actually work!

What is more, I feel that true innovation is what emerges when you insist on extreme quality. If we all started enforcing things that are currently impractical, like 100% code coverage, what would happen is we would innovate to make achieving that less painful.

I don't think you fully understand the hacker method at all. You can enforce standards without having to write and maintain thousands of tests.
