I really like this article. People can spend long days, weekends, and nights writing up every single test in the book. However, when a production issue hits that an extensive regression suite did not catch, people start wondering whether automation is worth it at all. Once that doubt creeps in, it's all downhill from there.
This is so often misunderstood. Every day I'm trying to eliminate swaths of tests to get the most out of the time spent testing.
When I work with others on the automation being written, I ask what the test is actually testing for, and the answer is often specific to the incident rather than the underlying issue. That prevents writing a more generic test that would catch other cases.
That's been the rub. The people doing the work typically aren't looking at the big picture, and likely find it difficult to think that way.
I would love to know: do these incident-specific tests actually catch regressions?
I would love to know that too.
We haven't done a good review of our failure analysis, and we've spent so much time updating these "incident" tests that I don't really believe the claim that they've found bugs.
But I am biased: I like to weigh the risk of an issue recurring, the severity if it does, and the cost of maintaining the test through the routine changes of each release.
Love the idea of asking whether a test "sparks joy." I have definitely seen teams refuse to delete tests out of fear.
I've seen this as well, and in some ways I can understand it. Some people don't want to delete tests because they're afraid product managers will change their minds at the last minute. But if deletion is done right with version control, that shouldn't even be a problem.
This actually goes for all regression tests not just automated ones. :)