Regression testing, sometimes called repeated testing, is the process of making sure that all the old functionality still works correctly with the new changes. In other words, regression testing means testing an already tested application to find defects introduced by those changes. It is a standard step in any software development process and is performed by testing specialists. Testers do regression testing by re-executing tests against the modified application to evaluate whether the revised code breaks anything that was working earlier. The reason regression testing is needed at all is that changing or adding code can easily introduce errors into code that was never intended to change.
Regression testing is required in the following scenarios:
- When the code is modified due to a change in the requirements
- When new functionality is added
- While fixing defects
- While fixing performance issues
Let's consider an example of a situation in which regression testing is needed:
Assume that at the beginning of a project we have six developers and two testers, working in an Agile model with two-week Sprints. Everything starts well.
In the first Sprint, the developers start with the basic features (about ten functions), and the testers design test scenarios for them (about 100 scenarios). The very first Sprint receives a good rating from the customers.
In the second Sprint, the developers create about ten new features, and the testers do the same as in the first Sprint: 100 new scripts, plus the 100 old scenarios that need to be retested. Only 200 scripts; everything is still under control.
In the next Sprint, the developers need to build eight new features and update two old features due to new customer requirements. At this point, the two testers not only have to design test scripts for the eight new features but also have to re-test and update the 200 old scenarios, about 300 scenarios in total. Do you feel something is wrong?
Over the next few Sprints, the developers still keep up with the number of features and the changing requirements, but for the two testers the number of scripts to create and update grows much faster. Tiredness begins to spread. With too little time, the risk of missing a bug gets higher and higher. Too many problems arise.
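The growth in the example above is easy to quantify. A few lines of Python (using the illustrative per-Sprint figures from the story) show how the regression workload accumulates:

```python
# Each Sprint adds new test scripts, and every previously written scenario
# must be retested, so the workload is cumulative (figures from the example).
new_scripts_per_sprint = [100, 100, 100]  # Sprints 1-3

total = 0
for sprint, new in enumerate(new_scripts_per_sprint, start=1):
    total += new
    print(f"Sprint {sprint}: {new} new scripts, {total} scenarios to execute")
```

With a fixed team of two testers, the per-Sprint effort keeps growing without bound, which is exactly why manual-only regression testing stops scaling.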
Therefore, when regression testing is automated, it enables testers to cover a wide variety of changes and unusual cases in the production environment. Not all regressions are caused by new features or by routine bug fixes; database updates or new browser versions can also cause them. A regression can also show up as a problem with efficiency and speed. Automating the cases that are stable and repeatable allows manual testers to spend more time testing various environments and on more complicated, higher-level cases.
What's more, regression test analysis is the key to success: it calls for working intelligently rather than just working hard.
- Highest Return: Execute tests that contribute to high coverage of the requirements, then any others…
- Quickly Lower Risk: Execute tests for the most critical requirements, then any others…
- Practically Safe: Execute tests for all the critical requirements, then any others… especially since roughly 20% of the test cases often cover about 80% of the business value
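These prioritization strategies can be sketched in code. The following Python snippet (the test names and value figures are made up for illustration) ranks a hypothetical suite by the share of business value each test covers and checks the 20/80 observation:

```python
# Hypothetical regression suite: test name -> share of business value covered (%).
suite = {
    "login": 45, "checkout": 35, "search": 4, "cart": 4, "profile": 3,
    "wishlist": 3, "reviews": 2, "newsletter": 2, "theme": 1, "help": 1,
}

# "Highest Return": execute the tests that cover the most value first.
ordered = sorted(suite, key=suite.get, reverse=True)

top = ordered[:len(ordered) // 5]        # the top 20% of the suite
covered = sum(suite[t] for t in top)
print(f"{top} cover {covered}% of the business value")
```

In this made-up suite, running just `login` and `checkout` first covers 80% of the value with 20% of the tests, so a time-boxed regression pass lowers risk as quickly as possible.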
The two most widely used automated regression testing approaches are data-driven testing and keyword-driven testing.
Data-driven testing is a framework where test input and output values are read from data files (data pools, ODBC sources, CSV files, Excel files, DAO objects, ADO objects, and such) and are loaded into variables in captured or manually coded scripts. In this approach, variables are used for both input values and output verification values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script. Let's consider a simple example of a data-driven script for setting up Admin accounts in a system:
private void Auto_ID_400_test() { AddNewAdmin(adminDataTable); }  // test entry point; 'adminDataTable' is a placeholder for the loaded data pool

public static void AddNewAdmin(DataTable dataTable)
{
    Report.Log(ReportLevel.Info, "Add new Admin:");
    // Read input and expected values from dataTable, drive the UI, and verify the results.
}
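The same data-driven idea can be shown in a self-contained Python sketch. The data pool, field names, and the `add_new_admin` stand-in below are all illustrative; in a real project the rows would come from a CSV or Excel file and the function would drive the application under test:

```python
import csv
import io

# Inline string standing in for an external CSV data pool (illustrative data).
data_pool = io.StringIO(
    "username,role,expected_status\n"
    "alice,admin,created\n"
    "bob,admin,created\n"
)

def add_new_admin(username, role):
    """Stand-in for the application under test."""
    return "created" if role == "admin" else "rejected"

# The script handles only navigation, reading, and logging; both the inputs
# and the verification values come from the data rows.
results = []
for row in csv.DictReader(data_pool):
    status = add_new_admin(row["username"], row["role"])
    assert status == row["expected_status"], f"regression in {row['username']}"
    results.append(status)
    print(f"Add new Admin: {row['username']} -> {status}")
```

Adding a new test case is just a matter of adding a row to the data file; the script itself does not change.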
Keyword-driven testing is a technique that separates much of the programming work from the actual test steps, so that the test steps can be developed earlier and can often be maintained with only minor updates, even when the application or the testing needs change significantly. The keyword-driven methodology divides test creation into two stages: a planning stage and an implementation stage.
Example: a login test case expressed as keywords: navigate to the login page, type the user ID, type the password, verify the login.
Then, automated testing engineers develop scripts for the keywords:
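A minimal sketch of those keyword scripts in Python (the function bodies and credentials are illustrative stand-ins for real UI automation): each keyword is implemented once by automation engineers, and the test case itself is just a table of steps.

```python
# Keyword implementations, written once by automation engineers.
def navigate_to_login_page(ctx, _arg):
    ctx["page"] = "login"

def type_user_id(ctx, arg):
    ctx["user"] = arg

def type_password(ctx, arg):
    ctx["password"] = arg

def verify_login(ctx, _arg):
    # Stand-in check for the application under test (illustrative credentials).
    ctx["logged_in"] = ctx["user"] == "admin" and ctx["password"] == "secret"

keywords = {
    "NavigateToLoginPage": navigate_to_login_page,
    "TypeUserID": type_user_id,
    "TypePassword": type_password,
    "VerifyLogin": verify_login,
}

# The test case itself: a table of (keyword, argument) steps that
# non-programmers can write and maintain.
test_steps = [
    ("NavigateToLoginPage", None),
    ("TypeUserID", "admin"),
    ("TypePassword", "secret"),
    ("VerifyLogin", None),
]

ctx = {}
for keyword, arg in test_steps:
    keywords[keyword](ctx, arg)
print("login verified:", ctx["logged_in"])
```

Changing the application usually means updating a keyword implementation in one place, while the step tables stay the same.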
This strategy does not require programming skills, so it is much easier for non-technical people to contribute to the regression testing process. In other words, it is more suitable than data-driven testing for projects catering to a large audience and requiring a broader focus.
More details about this approach are available at https://testautomationresources.com/software-testing-basics/automated-regression-testing/