When I was in university, I had a lecturer who didn't like unit tests. He was an elderly man who worked at IBM and gave lectures about mainframes. He always said that the hype about unit tests would simply double the amount of code written and do nothing for its safety.
When I did my first big project in 2009, an HTTP API, nobody in the company I worked for (founded in 2001) had written any unit tests. They had huge C/C++ and PHP codebases. They did integration tests, but the project I had been given was the first that used unit tests.
I had heard about unit testing at university and wanted to make my first project look good right from the start. So I wrote a bunch of unit tests for every class, ending up with about 200 tests after the first version was released, trying to hit that famous 100% coverage. Only a few months later, the architecture of the API changed, and somebody new to the project had to rewrite more than 100 tests.
In the lifetime of the project, the unit tests didn't prevent any major bugs, only the cases I had in mind while testing the code, but they slowed down development tremendously. They also forced a certain style on the code that existed mostly to ease the writing of the tests, not to improve the resulting application.
So what is your opinion about this? Did I do unit tests wrong? Is there an alternative? Are integration tests (black- or grey-box) enough when automated? Is TDD a placebo? Are type-systems the way to go?
Let's discuss! :)
Latest comments (47)
I'm surprised that no one has yet mentioned monitoring.
The alternative to building classic object-oriented software guided by tests is to develop microservices with extensive real-time monitoring and alerting. If something's broken, the service will go down, or a metric will spike up/down, and the developer who owns the microservice needs to fix it. This approach is sometimes called programmer anarchy and requires a high level of maturity across the whole team.
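The idea can be sketched in a few lines. This is a toy illustration only, assuming nothing about any particular monitoring product; the function name and the metrics are invented:

```python
# Minimal sketch of threshold-based alerting for a service metric.
# A real setup would use a monitoring stack; this just shows the idea.

def check_metric(name, value, baseline, tolerance=0.5):
    """Return an alert message if `value` deviates from `baseline`
    by more than `tolerance` (as a fraction of baseline), else None."""
    if baseline == 0:
        return f"ALERT {name}: baseline is zero, value={value}" if value else None
    deviation = abs(value - baseline) / baseline
    if deviation > tolerance:
        return f"ALERT {name}: value {value} deviates {deviation:.0%} from baseline {baseline}"
    return None

# Normal traffic stays quiet; a spike in the error rate fires an alert
# that pages whoever owns the microservice.
print(check_metric("error_rate", 0.02, 0.02))  # prints None
print(check_metric("error_rate", 0.30, 0.02))  # fires an ALERT
```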
That's basically what I'm doing.
Didn't consider this as an alternative to unit tests until now.
In my opinion, unit tests are great as a scaffold while building something. I'm not sure how much they help after that. If the behavior that they're testing can be pulled up into the integration test, then that might be better, which would leave more room for refactoring the implementation (although pulling them up too early might be a waste if you don't feel the need to refactor the implementation).
So my stance at this point is: write unit tests, but don't get too attached to them.
From my experience, you need at least ten times more unit-test code than actual code if you want to do true TDD. Also, if you achieve 100% coverage as some code-metric tools report it, you've only covered every line with at least one unit test. That says nothing about whether you've actually tested every possible input, so there will still be bugs (and if you find one, write a unit test before you fix it).
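The coverage point can be shown with a toy example (everything here is invented for illustration): a single test executes every line, so a coverage tool reports 100%, yet an untested input is still wrong.

```python
def describe(n):
    # Intended: classify n by sign. Bug: zero falls into the
    # "positive" branch, because only n < 0 is checked.
    sign = "negative" if n < 0 else "positive"
    return f"{abs(n)} is {sign}"

def test_describe_negative():
    # This one test runs every line of describe(), so line
    # coverage reports 100%.
    assert describe(-5) == "5 is negative"

test_describe_negative()

# Yet an input the test never tried is wrong:
print(describe(0))  # -> "0 is positive"
```

The fix, per the comment above: on finding the bug, first add a failing test for `describe(0)`, then correct the function.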
As for the issues you mention above, I only have a few comments.
Code bases that use unit tests at least have the benefit of being written in a testable manner, and can therefore be more maintainable.
I once thought of unit tests as useless, because it took me more time to write them than to just bust out some code. Now, having successfully written a template engine using TDD, I'm a believer.
If you think that all you do as a dev is put your hands on the keyboard and start typing, then having to write unit tests seems like a waste of time. But what about all those hours we spend staring at a screen trying to actually write the code, or, worse yet, trying to figure out why we wrote it that way and why the XYZ it doesn't work like we think it should?
Now come full circle with me. What if, while you were thinking about what to write or how to fix it, you just wrote some unit tests as you thought about your issue? Some great things start to happen. First, you think about the problem more. Next, you are forced to come up with possible inputs; go ahead and write that wacky test you don't think matters. Maybe, just maybe, it will help later on. Lastly, you go back to having your hands on the keyboard more, but you're using the unit tests to help your thought process. And what you are left with is tested code that can be more easily refactored.
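A tiny, made-up example of that workflow: the expectations were jotted down as tests first, including the "wacky" input, and the implementation exists only to make them pass.

```python
import re
import unicodedata

# The tests below came first; this implementation was written to
# satisfy them. (slugify is an invented example, not a real library.)
def slugify(text):
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = re.sub(r"[^a-zA-Z0-9]+", "-", text.strip()).strip("-")
    return text.lower()

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    # the wacky test you didn't think mattered:
    assert slugify("Crème brûlée!") == "creme-brulee"

test_slugify()
```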
Now as for those 100 unit tests you had to refactor... Why not approach the rewrite the same way as new code? You had to think about the change, why not have some unit tests to show for your thoughts?
Lastly, if the actual typing part is taking too long, you're either doing something wrong or you don't have Visual Studio and ReSharper. ;-)
Hey, K!
I have to say that I do not think there are good alternatives for unit tests.
Your gripe with them so far is that they did not catch mistakes and slowed down development, right?
I do not think any testing approach will prevent programmers from making mistakes. Instead, you should focus on writing simple, SOLID methods that are easily unit-tested, and then supplement with contract tests as explained in "Integrated Tests Are A Scam".
As for slowing down development: how come? Normally, when you have an API and tests for it, it should only grow in capabilities. If you have changes that require rewriting half the suite, the tests must be bad (sorry!), probably coupled to implementation details rather than behavior.
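The difference can be illustrated with a made-up example (all names invented). The first test breaks on any internal rewrite, e.g. caching prices, even though the cart still works; the second survives because it only pins down behavior.

```python
from unittest.mock import Mock

class Cart:
    def __init__(self, pricer):
        self._pricer = pricer  # callable: item -> price
        self._items = []

    def add(self, item):
        self._items.append(item)

    def total(self):
        return sum(self._pricer(item) for item in self._items)

# Coupled to implementation details: asserts HOW total() works.
def test_pricer_called_once_per_item():
    pricer = Mock(return_value=2)
    cart = Cart(pricer)
    cart.add("apple")
    cart.add("pear")
    cart.total()
    assert pricer.call_count == 2  # breaks if prices are ever cached

# Coupled to behavior: asserts WHAT total() returns.
def test_total_is_sum_of_prices():
    cart = Cart(lambda item: {"apple": 3, "pear": 2}[item])
    cart.add("apple")
    cart.add("pear")
    assert cart.total() == 5

test_pricer_called_once_per_item()
test_total_is_sum_of_prices()
```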
As counterintuitive as it may seem, the answer to your plight is more, better unit tests, not fewer, and not something else.
Alternative: Scenario testing
For API testing I prefer scenario tests.
A good tool is Cucumber (docs.cucumber.io/), which is really strong.
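Cucumber expresses scenarios in Gherkin (Given/When/Then); the same shape can be sketched in plain Python. Everything below is invented for illustration, with a fake in-memory client standing in for a real HTTP client.

```python
class FakeApiClient:
    """Stand-in for a real HTTP client, just enough for one scenario."""
    def __init__(self):
        self._users = {}
        self._next_id = 1

    def post(self, path, data):
        user_id = self._next_id
        self._next_id += 1
        self._users[user_id] = data
        return {"status": 201, "id": user_id}

    def get(self, path, user_id):
        user = self._users.get(user_id)
        return {"status": 200, "body": user} if user else {"status": 404}

def test_scenario_create_then_fetch_user():
    # Given a running API
    api = FakeApiClient()
    # When I create a user
    created = api.post("/users", {"name": "K"})
    assert created["status"] == 201
    # Then I can fetch that user back
    fetched = api.get("/users", created["id"])
    assert fetched["status"] == 200
    assert fetched["body"]["name"] == "K"

test_scenario_create_then_fetch_user()
```

The point of the scenario style is that each test reads as a user-visible flow through the API, not as a probe of one class.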
I think it totally depends on the expectations of the system, how much experience you have, and the risk you can afford.
One of the systems I've been working on for about 14 years is a CRM. It's probably about a million and a half lines of code with a few hundred movable UI components. At one point we had around 10,000 unit and integration tests, but we have since removed many of them. The thing is, the system tolerates a fair amount of mistakes in edge cases; they typically don't impact many employees at a time, because someone with knowledge of the business tested the feature before it went into production. My goal is to provide the business with a high ROI for development time, and over the years I've seen what works and what doesn't. These days I typically use very few unit tests. In fact, given the application, I try to limit the situations where I feel they are necessary at all, which leads to fewer errors, faster turnaround, etc.
Most of my development is a UI, maybe some business logic, a model, and a DB. If I have tests, I make a class with dependency injection just to test the business logic, and in many situations there is no business logic to test. If there are very few critical paths, or few people are using the feature, I may also skip unit tests.
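That "class with dependency injection" pattern might look like the following sketch (all names invented): the business rule is isolated from the real DB by injecting the repository, so the test needs no database at all.

```python
class InvoiceService:
    """Holds the business rule; knows nothing about the real DB."""
    def __init__(self, repo):
        self._repo = repo  # injected: real DB access in prod, a stub in tests

    def outstanding_total(self, customer_id):
        invoices = self._repo.unpaid_invoices(customer_id)
        return sum(inv["amount"] for inv in invoices)

class StubRepo:
    """Test double returning canned data instead of hitting a DB."""
    def unpaid_invoices(self, customer_id):
        return [{"amount": 40}, {"amount": 60}]

def test_outstanding_total():
    service = InvoiceService(StubRepo())
    assert service.outstanding_total(customer_id=7) == 100

test_outstanding_total()
```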
Financial portions of an application typically receive many more tests.
If you are a new developer, unit tests may be useful to help you understand the potential issues with the patterns you use.
Unit tests don't replace integration tests, nor do they replace tests written to reproduce bug reports (which, it seems, you missed the first time around).
I think most new developers make things complicated enough to require tests because they are bored or because they think it is clever or they just don't think long term. It may unintentionally function as a training exercise. In my experience systems with a few repeated patterns over and over seem to stand the test of time much better. Much of the "complicated" code just comes from tried and true libraries you shouldn't be editing. The best business code is something someone else who isn't even a great programmer can sit down at and understand quickly so they can add additional features that are valuable to the end user.
You may however be in a very different situation. If you are writing a library to release, your company screams at you for every little error, your software will be installed in hardware and sold, it is customer facing, it is life or death, etc then I would change the way I write and test it accordingly.
As a note, one way I reduced complexity (and unit tests) was to not be afraid to move complicated things to the administrative user space where possible. This also allows the business to build groups of people who may not be "programmers" but who can set up complicated business logic on a test server, test it, and move it to production without a developer even being involved. Your software will be more resilient to change. Along those lines, I recommend moving anything that looks like a report to its own department, or at least its own thought process/repo.
/rant (since they canceled our fireworks due to rain) lol.
When I was at uni, unit testing didn't exist. We had "black box" and "white box" testing (which kind of map to integration and unit testing). But if anything, the idea that developers needed to write any testing code at all was seen as a general failure of Computer Science. There was an emphasis on things like formal verification (so, using mathematics to verify that code is correct) and the hope that you could just specify what you wanted and the program would be automatically created.
So I'm not surprised that your lecturer wasn't a fan of unit testing. In some respects unit testing is an industry-wide wrong turn, but then unit testing is a lot easier than some of the alternatives (have a look at Z Notation; when I was at uni, the course on it brought people to tears).
Unit tests have the greatest ROI when a feature is business-critical and its requirements rarely change.
On the other hand, unit tests have very low ROI when a feature is not business-critical and has requirements that change very frequently.
Note that the value of unit tests is like everything else: it depends.
As to alternatives, I’ve had cases where API tests (on a running test instance of the application) provided an immensely high ROI. Integration tests, in the sense of testing the collaboration of a chunk of your codebase, have for me always had a low ROI, because of the setup effort while still only resembling actual production behaviour (due to the mocked parts).
I agree with what other commenters already said - I think the essence is that unit tests should guide the design of your software. They will force you to use good practices like single responsibility etc. I've also worked with systems that had a large number of unit tests that didn't seem to add any value, but coincidentally the whole system (codebase) sucked ... so this was not a proof that TDD was useless, on the contrary, TDD didn't work because the design of the system wasn't good.
I'm bookmarking this to read later (so many good comments!) but I'll chime in with a QA perspective:
If you're working at a place with a formal QA step, test your implementation, not your requirements.
I've noticed in my devs' specs they'll have tests for things like "this has been called once", "all parts of this `if` can be hit", yada yada yada, and then there will be things like "it returns all this info", "the info is sorted properly", "the info is in the right format", etc.
Then if you look at my tests, they're "it returns all this info", "the info is sorted properly", "the info is in the right format", etc... things a user would see and that are in the story's acceptance criteria for the feature. Where I am, QA automation (end-to-end with a hint of integration testing) is part of the formal definition of done, so a feature isn't considered done until both of us have written the same thing, just at two different levels.
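The two levels can be put side by side in a toy example (all names invented): the first test pins down implementation details of the kind a dev spec checks, the second asserts only what a user would see in the acceptance criteria.

```python
from unittest.mock import Mock

def top_scores(fetch, limit=3):
    """Return the highest scores from `fetch()`, sorted descending."""
    return sorted(fetch(), reverse=True)[:limit]

# Dev-level test: "this has been called once".
def test_fetch_called_once():
    fetch = Mock(return_value=[1, 5, 3])
    top_scores(fetch)
    fetch.assert_called_once()

# Acceptance-level test: "the info is sorted properly".
def test_scores_sorted_descending():
    result = top_scores(lambda: [1, 5, 3, 9], limit=3)
    assert result == [9, 5, 3]

test_fetch_called_once()
test_scores_sorted_descending()
```

Same function, two different levels: the first test guards how the code works, the second guards what the feature delivers.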