My last post asked for readability to be included when evaluating code structure and design, alongside the many other objectives such as testability.
I don't write production code on a daily basis; I'm a software test engineer responsible for assuring the quality of deliverables to clients. I don't do this through test plans or reams of regression tests. Instead I rely on an understanding of the system, support from software tools, the history of past problems, and asking questions. That lets me identify swathes of testing I should not do, and I keep trimming that work to balance risk against cost.
To summarize: there is far more to test than time allows, so the real skill is figuring out what not to test. Does anyone analyze the types of test coverage they have and how effective each is at finding genuine code issues versus merely needing updates when expected changes land? My expectation is that tests covering logic rather than layout provide the most benefit.
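To make that concrete, here's a minimal sketch of the contrast I have in mind, using Node's built-in test runner. The discount function, markup, and test names are made-up stand-ins rather than code from any real project.

```ts
// Illustrative only: the discount function and markup below are hypothetical
// stand-ins, not code from any real project.
import { test } from "node:test";
import assert from "node:assert/strict";

// A small piece of business logic worth protecting with tests.
function discountedTotal(subtotal: number, memberYears: number): number {
  const rate = memberYears >= 5 ? 0.1 : 0; // long-term members get 10% off
  return subtotal * (1 - rate);
}

// Logic-focused test: it only breaks when the pricing rule itself changes.
test("long-term members get a 10% discount", () => {
  assert.equal(discountedTotal(100, 6), 90);
  assert.equal(discountedTotal(100, 1), 100);
});

// Layout-coupled test: it asserts on exact markup, so it fails whenever the
// template is reworded or restyled, even though the logic is untouched.
test("total renders in the expected markup", () => {
  const html = `<span class="total">$${discountedTotal(100, 6).toFixed(2)}</span>`;
  assert.equal(html, '<span class="total">$90.00</span>');
});
```

The first test only fails when the pricing rule genuinely changes; the second fails every time someone rewords or restyles the template, which is exactly the kind of maintenance-heavy coverage I'm trying to weed out.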
What testing types do you find falsely claiming failures, demanding heavy maintenance, and not carrying their weight?
Top comments (2)
I've at least found that if the developers are solid at writing unit tests for their code, QA tests that cover the same logic are mostly redundant; HTML template issues are more of what I catch at that point. I still write the tests since we need to make sure the functionality is solid, but it does feel somewhat useless at times. For the most part I have to follow my employer's policies, so there's little benefit in analyzing what provides the most value when I'm just an individual contributor. I have to report the same metrics as everyone else at the end of a sprint, so I have to write the same kinds of test coverage as everyone else. It'd be nice if projects could decide for themselves what coverage means, but that's not how my place works.
Validating file contents :| So many weird Node errors without much context, and meanwhile the file itself was fine and not the empty string it claimed.
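For what it's worth, here's a rough sketch of the kind of guard I ended up wanting; the fixture path and helper name are made up for illustration, but wrapping the read at least tells you which file the complaint is actually about.

```ts
// Illustrative sketch: the fixture path and helper name are hypothetical.
import { readFileSync } from "node:fs";
import assert from "node:assert/strict";

// Read a file and fail with the path in the message instead of a bare,
// context-free error about an unexpected empty string.
function readNonEmpty(path: string): string {
  let contents: string;
  try {
    contents = readFileSync(path, "utf8");
  } catch (err) {
    throw new Error(`Could not read ${path}: ${(err as Error).message}`);
  }
  assert.notEqual(contents.trim(), "", `Expected ${path} to contain something`);
  return contents;
}

// Usage: any failure now names the file it was actually looking at.
const config = readNonEmpty("./fixtures/example-config.json");
console.log(`Read ${config.length} characters`);
```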
Oh yes, the test coverage metrics. I've had the freedom to define "good enough" per project, and now that I'm starting to guide testing consistency across projects and groups, I'm trying to steer clear of metrics that don't focus on cost/benefit analysis. But the pull from upper management to "just test everything" is always there.