(I'm a unit-test-addict :) )
Did I do unit tests wrong?
In my opinion, unit tests are documentation, so if your product changes, your unit tests must be rewritten. If you have to rewrite too many tests for a small change, maybe you should make your tests more flexible, or use them only to test the "frozen part of your code" (utility functions and algorithms).
Is there an alternative?
In the case of APIs (which evolve constantly), some tools can generate tests directly from the spec (Swagger/OpenAPI, for example).
Are integration tests enough?
It's difficult to test a single function with an integration test; the scope is not the same. But testing that "GET .../user/1" returns the right object can be fine. I highly recommend using unit tests to deal with user inputs (POST requests), because you can hit the endpoint with a lot of bad entries (and check for security issues, malformed data, bad types, ...).
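To make that concrete, here is a minimal sketch of the idea. The `validate_user_payload` function is hypothetical (it stands in for whatever validation your POST endpoint does); the point is the table of bad entries, each representing a different class of malformed input:

```python
def validate_user_payload(payload):
    """Reject malformed user-creation payloads; return True only when valid."""
    if not isinstance(payload, dict):
        return False
    name = payload.get("name")
    age = payload.get("age")
    if not isinstance(name, str) or not name.strip():
        return False
    if not isinstance(age, int) or age < 0:
        return False
    return True

# Each entry is a different way a client can send garbage.
bad_entries = [
    None,                          # not a dict at all
    {},                            # missing fields
    {"name": "", "age": 30},       # empty name
    {"name": "Ada", "age": "30"},  # wrong type for age
    {"name": "Ada", "age": -1},    # out-of-range value
]

for entry in bad_entries:
    assert validate_user_payload(entry) is False

# A well-formed payload should still pass.
assert validate_user_payload({"name": "Ada", "age": 30}) is True
```

Adding a new attack or malformed case is then just one more line in `bad_entries`.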
Is TDD a placebo?
Personally, it's a safety net I love to have :)
> maybe you should make your tests more flexible
> or use them only to test the "frozen part of your code"
Isn't this against the TDD philosophy?
> I highly recommend using unit tests to deal with user inputs
How does this eliminate the problem that I only test what I had in mind anyway when writing the functionality in the first place?
Like, when I test my software I find fewer bugs than when someone else tests it. etc.
It's recommended to test only one case per test; I have the bad habit of putting all my test cases into an array:

for (arg1, arg2, expected) in [(1, 2, 3), (-1, -3, -4)]:
    assert my_sum_function(arg1, arg2) == expected

It's bad practice, but you can cover a lot of cases and rename the function easily.
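For what it's worth, the standard library offers a middle ground: `unittest`'s `subTest` keeps the table-driven style but reports each tuple as its own case when one fails. A minimal sketch (assuming `my_sum_function` just adds two numbers, as in the snippet above):

```python
import unittest

def my_sum_function(a, b):
    """Stand-in for the function under test: adds two numbers."""
    return a + b

class TestMySum(unittest.TestCase):
    def test_cases(self):
        # One table of cases, but subTest makes a failure report
        # exactly which (arg1, arg2, expected) tuple broke.
        cases = [(1, 2, 3), (-1, -3, -4)]
        for arg1, arg2, expected in cases:
            with self.subTest(arg1=arg1, arg2=arg2):
                self.assertEqual(my_sum_function(arg1, arg2), expected)
```

So you keep the "lots of cases, easy to rename" convenience without losing per-case failure output.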
Maintaining a few tests is always better than having no tests at all. To encourage your team to add tests, it should be easy ;). So testing the frozen functions is a good start.
I'd love to write an article about "unexpected testing cases"; I have this list of error cases:
The main thing with TDD from my understanding is that tests are the requirements, so anything that falls outside of the tests is by definition irrelevant. Most of the "test everything" recommendations come from the TDD mindset, so if you try to apply that outside of the TDD framework it can get messy.
This perspective helps limit the scope and coupling of your tests, since there is typically an astronomical number of tests that you could do, but a very finite number of testable requirements. Refactoring should not generally break tests, but if refactoring occurs across/between several modules then you will probably have some rework, but I would argue that that is more of a "redesign" than a "refactor".
One good reason to test every module/class is to reduce the scope of any bugs you do come across. If I have a suite of tests that demonstrate my module's behavior then I know where not to look for the bug. With integration/system tests alone you will have some searching to do.
I always have the feeling that this is still a problem.
I get rather high-level requirements, but they are implemented by many parts of the code. So simply writing a "Req1 passes" test would require implementing many, many things before the requirement is met.