This is an area where I have a number of conflicting anecdotal experiences (not what I'm covering here). What follows is a sequence of related thoughts with no satisfactory conclusion.
We already know that testing can never confirm software is free of bugs, that the possibilities are endless, and that at some point we must stop.
There are also many different ways to facilitate testing: machine validation, manual exploration, and accounting for human fallibility, to name a few. And we want to cover many aspects of the software: performance, usability, and more.
There is also the great push to test closer to the code ("shift left," was it?). This leads to unit testing and CI/CD systems that piece code and test sequences together. We even get devs and QA working more closely together (or QA eliminated entirely).
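To make "closer to the code" concrete, here is a minimal sketch of what that tends to look like in practice: a unit test that lives right beside the function it checks and runs automatically in CI on every push, long before a tester ever sees a build. The function and file names here are purely illustrative, not from any real project.

```python
# tests/test_pricing.py -- an illustrative sketch, not a real project.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_basic():
    # Happy path: 10% off 100.00 should be 90.00.
    assert apply_discount(100.0, 10) == 90.0


def test_apply_discount_rejects_invalid_percent():
    # Guard rail: out-of-range percentages should fail loudly.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Checks like these catch a narrow class of bugs very early and very cheaply, which is exactly the appeal of shift left, but they say nothing about performance, usability, or whether the feature is the right one at all.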
I bring all this up because it is about balance. It becomes more complicated when you realize people have different skills (breadth vs. depth) and you need to identify, hire, and train to balance out all of those areas. It is not good to expect everyone to be the same; differences in people can be an asset to testing. If only I knew what that looked like.
I don't think I alone can cover every aspect needed to effectively test an application. I think this is a problem area for companies: they expect that one person, or a team tasked to all be the same, can meet every need to effectively cover the product.
I find that I rely more heavily on the different project roles to catch issues, from the dev to the PM. But if the project manager finds an issue, it tends to be viewed as a failure of testing (note this is not the same as laying blame; it is a signal to add new steps before approval). I'm still trying to figure out how to make that kind of cross-role checking official, and even expected.