We are excited to have with us today Olena Kapitanenko, a highly skilled IT expert with a wealth of experience as a Project Lead/Business Process Analyst and Trainer at Svitla Systems, Inc., a successful software company with Ukrainian roots. Olena is a problem solver with a passion for improving business processes and mentoring teams to achieve their goals. Prior to her current role, Olena worked as a Quality Assurance Manager/Business Analyst at HotelFriend AG, a well-known startup in the hospitality field, where she honed her skills in testing mobile apps, websites, and hotel management systems. We look forward to learning more about her experience and insights into the world of IT.
There are many approaches to building a proper product within dimensions such as scope, time, resources, users, product quality, and project quality. I want to share my hands-on experience of how to link the last of these with the rest. I recommend checking the ISO and ISTQB standards and expanding on the minimal suite I usually use in my work and share in the article below.
Test Strategy
Usually 2-3 pages with links to the sections below; it's the heart of the QA process.
1) Environment Requirements
Set of targeted devices. For example:
- List of browsers, with the roles to be supported. E.g.:
  - priority#1: Chrome for usertype#1
  - priority#2: Safari for usertype#2
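Such a matrix can be encoded directly in an automated suite. Below is a minimal Python/pytest sketch; the browser names and user types are the placeholder values from the example above, not a real configuration:

```python
import pytest

BROWSER_MATRIX = [
    # (priority, browser, user type): values mirror the example above
    (1, "chrome", "usertype#1"),
    (2, "safari", "usertype#2"),
]

@pytest.mark.parametrize("priority,browser,user_type", BROWSER_MATRIX)
def test_browser_is_supported(priority, browser, user_type):
    # In a real suite this would drive Selenium/Playwright with `browser`;
    # here we only check that the matrix entries are well-formed.
    assert priority >= 1
    assert browser in {"chrome", "safari", "firefox", "edge"}
```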
2) Testing Tools and Test Management Systems
The QA team performs manual testing with the [TMS] as the tool for test documentation.
For the Test Management System, we describe roles and the main units of work. E.g., in TestRail we use:
Test Cases
The QA writes a test case or a checklist for the story that describes the feature or a part of it. The QA shall decide which type is suitable for a specific case: a test case or a checklist.
A test case template shall be used to describe the use case in detailed steps leading to a single expected result. A checklist can be used to cover the maximum number of steps through the feature under test, with multiple expected results, one for each step.
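To make the distinction concrete, here is a minimal pytest sketch; the login() function and its behavior are hypothetical stand-ins for the system under test:

```python
import pytest

def login(username, password):
    # Hypothetical stand-in for the real system under test.
    return username == "admin" and password == "secret"

# Test-case style: detailed steps toward a single expected result.
def test_valid_login_succeeds():
    assert login("admin", "secret") is True

# Checklist style: one check per item, each with its own expected result.
@pytest.mark.parametrize("username,password,expected", [
    ("admin", "secret", True),   # happy path
    ("admin", "wrong", False),   # wrong password
    ("", "secret", False),       # empty username
])
def test_login_checklist(username, password, expected):
    assert login(username, password) is expected
```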
Test Runs
The QA shall run test runs in TestRail in the following cases:
- Testing the completed story. Each story shall have a completed test run based on a test suite.
- Regression testing. Regression test runs cover the whole system, which needs to be tested before the release.
- Smoke testing. A smoke test run covers the important functionality that needs to be tested after refactoring, a release, etc.
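Where run creation is scripted, TestRail's public REST API can be used. Below is a hedged Python sketch based on the documented add_run endpoint; the URL, credentials, and IDs are placeholders, so verify the details against your TestRail instance and its API docs:

```python
import requests

TESTRAIL_URL = "https://example.testrail.io"  # placeholder instance URL
AUTH = ("qa@example.com", "api-key")          # placeholder credentials

def create_run(project_id: int, suite_id: int, name: str) -> dict:
    # Creates a test run containing all cases of the given suite.
    resp = requests.post(
        f"{TESTRAIL_URL}/index.php?/api/v2/add_run/{project_id}",
        auth=AUTH,
        json={"suite_id": suite_id, "name": name, "include_all": True},
    )
    resp.raise_for_status()
    return resp.json()

# e.g. create_run(1, 2, "Regression - pre-release")
```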
For test automation we use these tools:
- REST API testing with the dev team's framework + Postman
- integration level (usually CI/CD friendly)
- functional level (xUnit)
- performance testing (JUnit, TestComplete)
- etc.
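As an illustration of the REST API level, here is a minimal pytest sketch of the kind of contract check a Postman collection or a dev framework would also cover; the endpoint and fields are hypothetical:

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder

def test_get_user_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/users/1", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Contract check: the response exposes the agreed fields.
    assert {"id", "name", "email"} <= body.keys()
```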
3) Testing Types
- Unit. Unit testing verifies the functional correctness of the smallest piece of code, or unit, and is performed by developers.
- Integration. Integration testing is conducted after unit testing and verifies that the tested units work correctly when combined.
- Smoke. Performed whenever new functionality is developed and integrated into an existing build deployed in the QA environment. Smoke testing checks whether all critical functionality is working correctly.
- Regression. Regression testing should be performed whenever the codebase has been modified or altered in any way, to verify any previously discovered issues marked as fixed. Regression will be executed for:
  - the specific functionality after new changes or a critical bug fix
  - a production release
  - refactoring
  - integration with an external API
- Acceptance. User Acceptance Testing is formal testing conducted to determine whether a system satisfies its acceptance criteria.
- Compatibility. Compatibility testing checks the application or product in different computing environments. It especially relates to new mobile builds or platform (iOS/Android) version updates.
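To make the unit vs. integration distinction concrete, here is a minimal Python sketch; the functions are hypothetical stand-ins, not code from any real project:

```python
def apply_discount(price: float, percent: float) -> float:
    # The smallest testable unit.
    return round(price * (1 - percent / 100), 2)

def checkout_total(prices: list, discount_percent: float) -> float:
    # Composes the unit above with summation: the integration seam.
    return apply_discount(sum(prices), discount_percent)

# Unit test: the smallest piece of code in isolation.
def test_apply_discount_unit():
    assert apply_discount(100.0, 10.0) == 90.0

# Integration test: units working together.
def test_checkout_total_integration():
    assert checkout_total([40.0, 60.0], 10.0) == 90.0
```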
Common Process Agreement
To be crafted with the team and stakeholders to establish a common understanding of the process, its inputs, and its outputs. Examples are below.
General
The QA team provides product testing during the sprint.
At the start of each sprint, the team lead assigns the QA sub-tasks to QA members for the sprint stories.
The assigned QA is responsible for:
- Writing the documentation (test cases/checklists) in [TMS]
- Providing the story testing after all dev tasks are done
- Re-testing the fixed bugs
- Sending the story to BA review after all ACs and bugs are tested
- Testing the product throughout the sprint to search for other defects
Bug Types
The QA team uses 2 bug issue types during the QA process:
- Bug. A bug is a separate issue not related to any of the sprint stories. The bug is assigned to the team tech lead so they know the scope and the list of all bugs.
- Sub-bug. A sub-bug is a type of sub-task within a separate story. The sub-bug is assigned to the developer who implemented the Acceptance Criteria.
During testing, the QA files the bugs found as the sub-bug type to manage them within one story with the correct priority.
During sprint story testing, all bugs related to the story's ACs shall be created as sub-bugs.
Sub-bugs with Medium or higher priority shall be fixed within the scope of the sprint.
At the end of the sprint, the sub-bugs that are not yet fixed shall be reviewed, converted to separate issue types, and moved out of the story or sprint based on priority.
Bug Flow
The bug issue type is created for functionality that is not related to any sprint goal or committed story.
Example: functionality of already-done stories broken after refactoring or other changes.
The bugs found shall be created and moved to the backlog.
During sprint grooming/planning, the team shall review the backlog bugs, estimate them, and take them into the sprint if the priority demands it or the team has spare capacity to fix them.
Only bugs with High or Highest priority shall be moved into a sprint that has already started, and only with PO approval.
Bug Priority
The QA is responsible for assigning the correct priority to any bug issue type.
The CD3P Jira flow suggests the following priorities:
- Highest
Such an error makes it impossible to proceed with using or testing the software: a system crash, a blocker of functionality under test, no workaround, a demo blocker, a release blocker, etc.
The defect is critical to the product and has to be resolved as soon as possible.
- High
Incorrect functioning of a particular area of business-critical software functionality, such as unsuccessful installation or failure of its main features: a wrong functional AC result, an operation error.
The defect is critical to the product and has to be resolved as soon as possible.
- Medium
The error has a significant impact on the application, but other inputs and parts of the system remain functional, so it can still be used.
The error doesn't require urgent resolution and can be fixed during the sprint if the team has the capacity, or moved to the next one (as an example).
- Low
The defect is confusing or causes undesirable behavior but doesn't affect the user experience significantly.
The bug isn't serious, so it can be resolved after the critical defects are fixed, or not fixed at all.
- Lowest
The bug doesn't affect the functionality or isn't evident. It can be a problem with third-party apps, grammar or spelling mistakes, etc.
The bug isn't serious, so it can be resolved after the critical defects are fixed, or not fixed at all.
Bug Template
The QA team shall proceed with the following template upon bug creation:
Preconditions (if any)
1. Step 1
2. Step 2
Steps to Reproduce (aka STR)
1. Step 1
2. Step 2
..
8. Step 8
Actual Result (aka AR)
*The result received after performing a test case or testing the AC*
Required attachments, if possible:
1. Screenshot
2. GIFs
3. Screen recording/video
4. Logs
4.1 Browser console or network errors
4.2 Mobile logs taken from Android Studio or Xcode
Expected Result (aka ER)
*The ideal result that the tester should get after the test case is performed*
The tester can indicate:
1. The AC from the story
2. The UI design link from Figma
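For illustration, a hypothetical report following this template might look like this (all details are invented):

```
Preconditions
1. The user is logged in as usertype#1
Steps to Reproduce (STR)
1. Open the booking page
2. Select a date range in the past
3. Press "Book"
Actual Result (AR)
The booking is created for a past date; no validation error is shown
Attachments: screenshot, browser console log
Expected Result (ER)
AC #3 from the story: booking dates in the past are rejected with a validation message
```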
Planning
To convert the strategy into a plan, we go through the process above; for the different types of QA, QC, and testing we have to determine:
- Team-level roles. E.g., SDET for [type of work], Test Designer for [description of work], Test Executor for [...]
- Timeline vs Scope
There can also be batches of test plans to be implemented for a large rollout.
For example, it might be a plan with 10 test runs for manual QA plus penetration testing in UAT staging before a major release.
Each plan, regardless of scale, should contain:
- Scope of work: test design, testing execution, expected reports or other outcomes
- Time for the SoW above
- Specific executors
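As a sketch, these three elements of a plan item could be captured in a small structure like the following; the field names are illustrative, not tied to any specific tool:

```python
from dataclasses import dataclass

@dataclass
class TestPlanItem:
    scope: str     # e.g. "test design", "testing execution", "report"
    time: str      # time budget for the scope of work above
    executor: str  # the specific person or role responsible

plan = [
    TestPlanItem("Manual regression: 10 test runs", "3 days", "QA team"),
    TestPlanItem("Penetration testing in UAT staging", "2 days", "Security vendor"),
]
```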