I have started to wonder what some general indicators are that a testing effort should be reevaluated because the testing is likely doing harm. Flaky...
Overambitious tests are a concern. They try to cover too much territory, assume a lot of happy-path and setup state, and skip over a lot of the internals, so they're not typically worth the effort to maintain. The same goes for any test that doesn't cover the risks that are truly at the heart of the problems with changing software.
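To sketch the contrast (with hypothetical names, not from the article), here's roughly what an over-ambitious test looks like next to a focused one, in TypeScript:

```typescript
import { strict as assert } from "node:assert";

// Hypothetical function under test, for illustration only.
function applyDiscount(totalCents: number, discountCents: number): number {
  return Math.max(0, totalCents - discountCents);
}

// Over-ambitious shape: one test drives signup -> cart -> checkout -> email,
// assumes all of that setup state succeeds, and asserts only on the final
// surface. Any setup change breaks it, and the internals still go unchecked.

// Focused alternative: pin down the risky internal rule directly.
assert.equal(applyDiscount(500, 10_000), 0); // a discount can never push the total negative
console.log("discount rule holds");
```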
I like what you're saying, but I'm having a hard time identifying indicators.
Happy-path testing and missing internal logic suggest insufficient testing to me, not bad tests doing harm.
I think tests that don't cover the heart of changing software are likely to surface through the indicator in my article: their explanations and inputs would change more frequently.
What I mean to say is that a test can convey a false sense of security and cause harm that way.
I think this is very dependent on how your software is written. Backend servers can be written in ways that are very testable. I've mostly seen UI automation tests as the ones that convey an incorrect level of correctness.
I agree, and backends can be written in convoluted ways as well.
I think that not understanding specifications is the leading cause of testing failures. I have plenty of those problems in nearly all of my test suites, and it was because I was too lazy to start with a specification.
I am not proud of this.
I became aware of this when I started taking ADHD medication, and I saw the real world consequences each time I decided to shirk my responsibility. In this wake-up process, I decided to redesign, from the ground up, a canvas framework that I had already implemented so that I could put my money where my mouth is. Soon, I'll be able to report back and be proud of my work. That is not this day, sadly.
Given that my specification happens to be the `CanvasRenderingContext2D` prototype specification, it's a lot easier for me to define nearly all of the unit and behavior tests up front. Of course, this is not the case for others. When business software must be designed through an exploratory process, the standard operating procedure has to be derived from a prototype; there is no good alternative unless you buy someone else's software.
Once the prototype is done, a set of specifications can be written by observing the good and bad things about such a solution.
That's when the real testing begins. You're building an Iron Man of your idea, so put it in the ring with everything you have.
This is why you should try to write hermetic tests. Unit tests should only test one piece of functionality.
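A minimal sketch of what I mean by hermetic, with a made-up `Clock` dependency (nothing here is from a real library):

```typescript
import { strict as assert } from "node:assert";

// Hypothetical unit under test: the clock is injected, so the test never
// touches real system time or any other shared state.
type Clock = () => Date;

function greeting(clock: Clock): string {
  return clock().getUTCHours() < 12 ? "Good morning" : "Good afternoon";
}

// Hermetic test: builds all of its own state, checks exactly one behavior,
// and can neither affect nor be affected by any other test.
const fakeMorning: Clock = () => new Date("2024-01-01T09:00:00Z");
assert.equal(greeting(fakeMorning), "Good morning");
console.log("hermetic greeting test passed");
```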
Tests are also supposed to act as documentation of functionality. If the functionality changes, they should break too; that way you know they need to change.
Ideally your tests should be considered immutable: if functionality changes, you no longer need that test, and a new one should be created.
Then again, if you find yourself doing this, maybe you shouldn't, because that would be a change to the API contract; it should go through an appropriate deprecation, and the existing tests should continue to pass.
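For example (a hypothetical `parseAmount` API, just to illustrate the deprecation point): during the deprecation window both versions exist, so the old contract test keeps passing untouched while a new test documents the new contract.

```typescript
import { strict as assert } from "node:assert";

// Deprecated v1: kept intact until it is actually removed.
function parseAmountV1(input: string): number {
  return Number(input);
}

// v2 changes the contract by adding locale-aware parsing.
function parseAmountV2(input: string, opts: { locale: string }): number {
  const normalized = opts.locale === "de-DE" ? input.replace(",", ".") : input;
  return Number(normalized);
}

// The old contract test is never edited; it is retired only when v1 is.
assert.equal(parseAmountV1("1.5"), 1.5);

// A new, separate test documents the new contract.
assert.equal(parseAmountV2("1,5", { locale: "de-DE" }), 1.5);
console.log("both contract tests pass during the deprecation window");
```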
That said, being in QA, I have not dived into writing tests at the unit level; I do isolated integration testing. I don't have much say on the architecture, and since I think certain testability-driven architectures can be harmful to readability, I don't think that is always the correct level to test at.
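Here's a rough sketch of what I mean by isolated integration testing, with made-up names: the service and its storage are exercised together, but the storage is an in-memory fake, so no real database or network is involved.

```typescript
import { strict as assert } from "node:assert";

// Hypothetical repository interface plus an in-memory fake.
interface UserRepo {
  save(name: string): number;
  find(id: number): string | undefined;
}

class InMemoryUserRepo implements UserRepo {
  private rows = new Map<number, string>();
  private nextId = 1;
  save(name: string): number {
    const id = this.nextId++;
    this.rows.set(id, name);
    return id;
  }
  find(id: number): string | undefined {
    return this.rows.get(id);
  }
}

// Hypothetical service under test.
function registerUser(repo: UserRepo, name: string): number {
  if (!name.trim()) throw new Error("name required");
  return repo.save(name.trim());
}

// Service and repo integrate, isolated from real infrastructure.
const repo = new InMemoryUserRepo();
const id = registerUser(repo, "  Ada ");
assert.equal(repo.find(id), "Ada");
console.log("isolated integration test passed");
```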
Hmm... when your tests and your documentation and specs aren't aligned, they fail constantly in your production environment.
I am having a hard time identifying what claim you're making.
Ahh... sorry, I was in a rush when I wrote it. What I'm saying is that testing does harm
when your test cases are not following the project specification and documentation in the production environment.