This is so often misunderstood. Every day I'm trying to eliminate swaths of tests to get the most value out of the time we spend testing.
When I work with others on the automation being written, I ask what the test is for, and the answer is usually specific to the incident rather than the underlying issue. That prevents us from writing a more generic test that would catch other cases.
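To make that distinction concrete, here's a contrived pytest sketch; `parse_discount`, the incident number, and all the values are invented for illustration, not taken from any real codebase:

```python
import pytest

# Toy stand-in for production code; the function and codes are made up.
def parse_discount(code: str) -> float:
    table = {"SAVE10": 0.10, "SAVE25": 0.25}
    return table.get(code.strip().upper(), 0.0)

# Incident-specific test: pins the one input from the bug report.
def test_incident_4711_save10_with_trailing_space():
    assert parse_discount("SAVE10 ") == 0.10

# Generic test: covers the underlying issue (whitespace and case
# normalization) for a whole class of inputs, not just the reported one.
@pytest.mark.parametrize("raw, expected", [
    ("SAVE10 ", 0.10),
    (" save25", 0.25),
    ("\tSAVE10\n", 0.10),
    ("unknown", 0.0),
])
def test_discount_codes_are_normalized(raw, expected):
    assert parse_discount(raw) == expected
```

The parameterized version documents the actual rule (inputs get normalized) instead of memorializing one bug report.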
That's been the rub. The people doing the work typically aren't looking at the big picture, and they likely find it difficult to think that way.
I would love to know: do these incident-specific tests actually catch regressions?
I would love to know that too.
We haven't had a good review of our failure analysis, and we've spent so much time updating these "incident" tests that I don't really believe the claim that they have found bugs.
But I do have a bias, because I like to evaluate the risk of an issue recurring, the severity if it does, and the cost of maintaining the test through the routine changes of each release.
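Roughly how I weigh it, as a toy sketch; the fields and the threshold are entirely my own invention, not a standard formula:

```python
from dataclasses import dataclass

@dataclass
class TestValue:
    recurrence_risk: float   # 0..1: chance the underlying issue comes back
    severity: float          # 0..1: impact if it does come back
    maintenance_cost: float  # 0..1: effort to keep the test passing per release

    def keep_score(self) -> float:
        # Expected payoff of keeping the test minus its carrying cost;
        # positive leans "keep", negative makes it a candidate to cut.
        return self.recurrence_risk * self.severity - self.maintenance_cost

# Low recurrence, moderate severity, high upkeep -> negative, cut candidate.
print(TestValue(0.1, 0.5, 0.4).keep_score())  # ≈ -0.35
# High recurrence, high severity, cheap to keep -> positive, worth keeping.
print(TestValue(0.8, 0.9, 0.1).keep_score())  # ≈ 0.62
```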