Reviewing my writing here, I've realized it was hard to stay on point. There are so many software products with different risk tolerances that I end up throwing in different areas of thought without driving the reader to any firm conclusion. Good luck.
I watched a little of the video mentioned in this article:
I think I'm looking to achieve similar outcomes. However, as a software test engineer, instead of "designing for testability," I look at it as "make it possible to reproduce an issue."
Now, I'm a very technical tester and I like to read the code. I think good design works towards code that better describes the system and its complexity.
I don't think integration tests are bad, though they do risk covering too much at once. But I wanted to explore the question: what does it mean to focus on reproduction?
Issues will show themselves in production, and the question is whether you can reproduce them closer to development. Do you need to debug in production? In an integration environment with third parties? Or can you write unit tests to find the issue?
I try to test at a level where I can try different things to replicate the information gathered from production. There are techniques where production monitoring provides all the insight needed to write a unit test directly, but I've never worked closely enough with such a system to understand that approach fully.
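As a rough sketch of what I mean (the payload, the `parse_order` function, and the "captured from logs" input are all hypothetical, not taken from a real system), the idea is that a failing input captured by production monitoring can be replayed as a function-level test:

```python
import json
import unittest

# Hypothetical parser under test; stands in for whatever function
# the production error pointed at.
def parse_order(payload: dict) -> dict:
    return {
        "id": payload["id"],
        # Bug: assumes "quantity" is always present, which production
        # traffic showed is not true.
        "quantity": int(payload["quantity"]),
    }

class TestParseOrderReproduction(unittest.TestCase):
    def test_payload_captured_from_production_logs(self):
        # A made-up payload copied out of a production error log.
        captured = json.loads('{"id": "A-1001"}')
        # Replaying it at the function level reproduces the crash
        # without needing the whole system running.
        with self.assertRaises(KeyError):
            parse_order(captured)

if __name__ == "__main__":
    unittest.main()
```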
What I would emphasize is that we need testing at different levels to reproduce an assortment of conditions. Some conditions are better reproduced at the function level, others with more systems interacting together. If you can reproduce the problem, you're better positioned to fix it and to catch it cropping up again.
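For instance (again a hypothetical sketch, none of these class names come from a real system), some conditions only show up when components interact, so the reproduction test has to wire a couple of pieces together rather than call a single function:

```python
import unittest

# Hypothetical downstream dependency that fails on its first call,
# the way a flaky third party might in production.
class FlakyInventoryService:
    def __init__(self):
        self.calls = 0

    def reserve(self, item_id: str) -> bool:
        self.calls += 1
        if self.calls == 1:
            raise ConnectionError("temporary outage")
        return True

# Hypothetical component under test: it should retry once, but the
# bug being reproduced is that it gives up immediately.
class OrderProcessor:
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item_id: str) -> bool:
        try:
            return self.inventory.reserve(item_id)
        except ConnectionError:
            return False  # bug: no retry

class TestOrderProcessorWithFlakyDependency(unittest.TestCase):
    def test_transient_outage_is_not_retried(self):
        processor = OrderProcessor(FlakyInventoryService())
        # Reproduces the production symptom: a single transient failure
        # from the dependency loses the whole order.
        self.assertFalse(processor.place_order("A-1001"))

if __name__ == "__main__":
    unittest.main()
```

Neither of these conditions would be caught by the other test: the parsing bug never needs two components, and the retry bug never shows up in a single function call.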