I find that I generally have a good working relationship with the developers whose work I test. I also find this isn't always the case for other testers. It has been bugging me why that is, especially since I've had no issues with either party.
I may have identified the cause, though I don't have any anecdotes to back it up. My approach to testing is more one of curiosity about the implementation: "You implemented a date picker using SVG; how does SVG facilitate that? Are dates even supported in IE? How does the SVG scale and keep the event markers in the right place? How did you handle swapping the SVG when users select dark mode?"
This approach opens the door to discussing alternatives and to modifications that might lend themselves to better testing. Since the SVG will need an associated action handler for each date, I can verify that each date has a handler. The developer and I can work together on a solution we both feel comfortable verifies the functionality. They are also welcome to question whether my tests fail because of their work or mine, and we work out an answer through hypothesis and further testing.
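As a rough illustration of that "each date has a handler" check, here is a minimal sketch in TypeScript. The `DateMarker` shape and `onSelect` name are hypothetical stand-ins for however the real date picker wires handlers to its SVG markers; the point is only that the check can be expressed as a small, reviewable function the developer and tester can agree on.

```typescript
// Hypothetical sketch: given a parsed list of date markers from the SVG
// date picker, flag any that lack an attached action handler.
// DateMarker and onSelect are illustrative names, not a real API.
interface DateMarker {
  date: string;            // e.g. "2024-03-01"
  onSelect?: () => void;   // handler the date picker should have attached
}

// Returns the dates whose markers are missing a handler.
function findUnhandledDates(markers: DateMarker[]): string[] {
  return markers
    .filter(m => typeof m.onSelect !== "function")
    .map(m => m.date);
}

// Example: one marker is missing its handler.
const markers: DateMarker[] = [
  { date: "2024-03-01", onSelect: () => {} },
  { date: "2024-03-02" }, // no handler attached
];
console.log(findUnhandledDates(markers)); // ["2024-03-02"]
```

A failing result here invites exactly the kind of joint debugging described above: is the marker genuinely unwired, or did the test parse the SVG wrong?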
All of this takes up precious developer time. It also requires a level of understanding of the system beyond just what the user sees. The developer does not have all the answers, nor the time to find them; this is where I dig further myself and can report back my findings.
In doing so, I transform my relationship from an inherently adversarial grading system into one where we collaborate toward a shared end state: quality features making it to production.