I recently started at a new job where I have the privilege of setting up a testing foundation for a couple of products. Each of these products has different development and testing needs, different deployment cycles, and different workflows. But regardless of the differences, I think a few testing-specific principles should be shared across any project. Testing should be accessible (anyone can test!), democratized (testing is everyone's responsibility!), and highly documented (wait, what did you test?).
In this post, I'll capture my thoughts on testing documentation and how I leverage it for awesome and not for evil. This type of documentation is intended to live at a ticket/issue or feature-implementation level to capture testing scope, test configurations and matrices, the test plan (including acceptance criteria, automation needs, edge cases, and risks to consider), testing notes, and other important documents to reference during testing.
Think of this as the where of our testing. Scope can also be thought of as test boundaries; it should help inform test executors where we need to focus our testing efforts and what might be considered out of scope. For example: if we know a feature is only going to touch one portion of the product, we will not need to test the entire product with the same intensity. Testing scope may also include a testing charter, which can be thought of almost as a mission statement.
Example: Analyze MapMaker’s View menu functionality and report on areas of potential risk.
For more information regarding test charters (and session-based testing), take a peek at Jonathan Bach's article "Session-Based Test Management" from Software Testing and Quality Engineering.
Think of this as the technical continuation of where we want to be testing. Perhaps there's a specific OS and browser configuration we want to be sure to hit. If there are many different combinations, this may be more easily visualized using a test matrix. As testing is executed, the tester should mark off which configurations pass, which fail, and note any warnings that occur during testing. For example: a certain Windows version begins to intermittently throw an error when a user attempts to log in using the latest version of Chrome. Warnings should be brought up in Slack or placed in a ticket for further investigation by either a developer or fellow tester.
Test configurations can also include things such as different users, permission levels, mobile devices and operating systems, and environments.
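A test matrix like the one described above can be as simple as a spreadsheet, but it can also be sketched in a few lines of code. Here's a minimal Python sketch, assuming a hypothetical product tested across a handful of operating systems and browsers (the axes, statuses, and example results are all illustrative, not from any real project):

```python
from itertools import product

# Hypothetical configuration axes -- swap in whatever your product supports.
operating_systems = ["Windows 11", "macOS 14", "Ubuntu 22.04"]
browsers = ["Chrome", "Firefox", "Safari"]

# Build the full matrix of OS/browser combinations, all initially untested.
matrix = {
    (os_name, browser): "untested"
    for os_name, browser in product(operating_systems, browsers)
}

# As testing is executed, record pass/fail/warning per configuration.
matrix[("Windows 11", "Chrome")] = "warning: intermittent login error"
matrix[("macOS 14", "Safari")] = "pass"

# Anything still untested or flagged with a warning needs follow-up.
needs_attention = [
    combo for combo, status in matrix.items() if status != "pass"
]
```

The nice thing about generating the matrix with `itertools.product` is that adding a new axis (say, user permission levels) automatically expands the combinations, which makes it harder to silently skip a configuration.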
The following are considerations that should be included within a given test plan.
This is best defined by the whole team and can be done in a GIVEN/WHEN/THEN format. The acceptance criteria should not only cover the happy path (how the product should behave) but also cover any edge cases we can think of and list areas in the code base that may need to change as a result of the upcoming work. By clearly defining acceptance criteria as a team, we have full visibility into the scope of work needed to implement a code change and a better idea of how much development and testing effort is needed to ship. Surfacing edge cases here also gives the team a head start on uncovering any potential gotchas that might come from development work.
From a tester perspective, calling out acceptance criteria and brainstorming edge cases with the team offers an opportunity to look at the product holistically and allows for more informed and surgical test execution.
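To make the GIVEN/WHEN/THEN format concrete, here is a small sketch using a hypothetical login feature (the `can_log_in` function is a stand-in I've invented for illustration, not real product code). Each criterion maps directly onto an executable check, which is also how these criteria often become automated tests later:

```python
# Stand-in for the real login logic under test (hypothetical).
def can_log_in(user_exists: bool, password_correct: bool) -> bool:
    return user_exists and password_correct

# GIVEN a registered user
# WHEN they submit the correct password
# THEN they are logged in
assert can_log_in(user_exists=True, password_correct=True)

# Edge case: GIVEN a registered user
# WHEN they submit an incorrect password
# THEN they are not logged in
assert not can_log_in(user_exists=True, password_correct=False)

# Edge case: GIVEN an unregistered user
# WHEN they attempt to log in
# THEN they are not logged in
assert not can_log_in(user_exists=False, password_correct=True)
```

Writing the criteria this way during planning makes the happy path and the edge cases equally visible to developers and testers alike.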
This should include what our current coverage looks like, what automation needs to be added (at the unit, integration, or UI level), and any automation wants. Automation needs might be as small as a couple of integration or unit tests. Automation wants might include an entire suite of tests that cover several cases and different configurations. Either way, needs and wants should be captured in the test plan and also as GitHub issues that can be worked on in the sprint or prioritized in the backlog for future work.
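As an example of how small an automation "need" can be, here is a sketch of a couple of unit tests in the style pytest collects, for a hypothetical username-normalizing helper (both the function and the tests are invented for illustration):

```python
# Hypothetical function under test -- a small slice of the feature's logic.
def normalize_username(raw: str) -> str:
    """Trim whitespace and lowercase a username before lookup."""
    return raw.strip().lower()

# An automation "need": happy-path coverage that should ship with the change.
def test_strips_and_lowercases():
    assert normalize_username("  Alice ") == "alice"

# An automation "want": edge-case coverage that could be backlogged.
def test_empty_input():
    assert normalize_username("") == ""

# Under pytest these would be collected automatically; we call them
# directly here so the sketch is self-contained.
test_strips_and_lowercases()
test_empty_input()
```

Even tests this small are worth recording in the test plan: they document exactly which behavior is locked in by automation and which edge cases are still manual-only.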
Risk is a tricky and complicated thing to encapsulate in documentation. Risk can cover anything from gaps in automation test coverage to known issues discovered when previously working on that portion of the code base. The goal of identifying risk here is to help inform a tester of the level of testing needed to confidently say they are “finished” testing an issue and also call out special testing needs in the test plan.
Think of this as the outcome of the testing as it is being executed, from both a manual and an automation standpoint. Include notes taken while performing testing, automation results (from nightly builds, manually triggered runs, etc.), bugs found during testing, and any issues that would be considered out of scope for this issue (for example: a feature improvement). Testing notes are very important. It's critical for test executors to capture what was actually tested and what was discovered/uncovered during testing so that any member of the team can revisit the test documentation and have a full understanding of what was executed.
Any other relevant documentation should be listed in the testing documentation for easy reference. Adding this documentation will also enable others who were not a part of the initial conversations to know the full scope of the changes and to confidently execute testing. Examples of relevant documentation might be design specs, technical specifications (additional API calls being made and what they contain), relevant/dependent GitHub issues, sprint planning documents, and any initial social or marketing material.
Testing documentation is whatever you make it out to be. I strongly encourage you to talk with your team and discuss what kind of documentation you as a whole find valuable, scalable, and widely understandable by all teammates. Don't forget to think about the product itself and the features therein! The above proposal for a test plan may work for some projects, but you might need to make some adjustments to best suit what you and your team are building. Maybe mind maps would be better! Maybe a shared living spreadsheet works best for you! Maybe having four sloths strung together hanging and retrieving testing post-its from a cecropia tree is your absolute jam! Maybe the best test documentation is a combination of all of the above. It's up to you!