> maybe you should make your tests more flexible
How? :)
> or use them only to test the "frozen part of your code"
Isn't this against the TDD philosophy?
> I highly recommend using unit tests to deal with user inputs
How does this eliminate the problem that I only test what I had in mind anyway when writing the functionality in the first place?
Like, when I test my software I find fewer bugs than when someone else tests it. etc.
Use the Single Responsibility Principle (SOLID) or pure functions from functional programming: both reduce the test scope.
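As a small illustration of the point above (the function name is made up for the example), a pure function is trivially testable because the result depends only on its arguments:

```python
# Hypothetical example: a pure function needs no setup, no teardown,
# and no environment -- just inputs and an expected output.
def apply_discount(price: float, rate: float) -> float:
    """Return the price after applying a discount rate (pure: no side effects)."""
    return round(price * (1 - rate), 2)

# The whole test is one line.
assert apply_discount(100.0, 0.2) == 80.0
```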
Maybe using mocks can help.
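For instance, a minimal sketch with Python's `unittest.mock` (the `notify_user` function and its `mailer` dependency are invented for illustration):

```python
from unittest.mock import Mock

# Hypothetical function under test: it depends on a mailer object,
# which we replace with a mock so the test stays small and offline.
def notify_user(mailer, address: str) -> bool:
    mailer.send(address, "Welcome!")
    return True

mailer = Mock()
assert notify_user(mailer, "user@example.com") is True
# The mock records the call, so we can verify the interaction
# without a real mail server.
mailer.send.assert_called_once_with("user@example.com", "Welcome!")
```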
It's recommended to test only one case per test; I have the bad habit of putting all my test cases into an array:
for (arg1, arg2, expected) in [(1, 2, 3), (-1, -3, -4)]:
    assert my_sum_function(arg1, arg2) == expected
It's bad practice, but it lets you cover many cases and rename the function easily.
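If you use pytest, the same table can keep the "one case per test" rule: `parametrize` turns each tuple into its own test with its own pass/fail report (`my_sum_function` is defined here as a stand-in for the example):

```python
import pytest

def my_sum_function(a, b):
    # Stand-in implementation so the example is self-contained.
    return a + b

# Each tuple in the list becomes an independent test case.
@pytest.mark.parametrize("arg1, arg2, expected", [(1, 2, 3), (-1, -3, -4)])
def test_my_sum_function(arg1, arg2, expected):
    assert my_sum_function(arg1, arg2) == expected
```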
Maintaining a few tests is always better than having no tests at all. To encourage your team to add tests, it should be easy ;). So testing the frozen functions is a good start.
I'd love to write an article about "unexpected test cases"; I have this list of error cases:
Not found: nonexistent file (or bad permissions)
Bad type: '01' instead of 01
Bad format: negative integer, phone number with a +XX prefix, (XSS injection in an HTML field), too long (buffer overflow), ...
Injection: SQL, XPath, LDAP, ...
Text case: receiving uppercase where lowercase is expected
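To make a few of the cases above concrete, here is a small sketch (`parse_age` is a made-up function) showing how such surprise inputs can be pinned down in tests:

```python
# Hypothetical function under test: parse a user-supplied age.
def parse_age(raw):
    value = int(raw)  # bad type: rejects non-numeric strings like '1x'
    if value < 0:
        raise ValueError("age must be non-negative")  # bad format: negative
    return value

# Bad type: '1x' is not an integer at all.
try:
    parse_age("1x")
    assert False, "expected ValueError"
except ValueError:
    pass

# Bad format: '-5' parses as an int but is rejected as an age.
try:
    parse_age("-5")
    assert False, "expected ValueError"
except ValueError:
    pass

# Surprising-but-valid: '01' with a leading zero still parses.
assert parse_age("01") == 1
```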
The main thing with TDD from my understanding is that tests are the requirements, so anything that falls outside of the tests is by definition irrelevant. Most of the "test everything" recommendations come from the TDD mindset, so if you try to apply that outside of the TDD framework it can get messy.
This perspective helps limit the scope and coupling of your tests, since there is typically an astronomical number of tests you could write, but a very finite number of testable requirements. Refactoring should not generally break tests; if refactoring occurs across several modules you will probably have some rework, but I would argue that is more of a "redesign" than a "refactor".
One good reason to test every module/class is to reduce the scope of any bugs you do come across. If I have a suite of tests that demonstrate my module's behavior then I know where not to look for the bug. With integration/system tests alone you will have some searching to do.
I always have the feeling that this is still a problem.
I get rather high-level requirements, but they are implemented by many parts of the code. So simply writing a "Req1 passes" test would require implementing many, many things until the requirement is met.