Testing graphical user interfaces is a challenge. Full automation is appealing, but it's expensive and rarely yields good results. UIs are an area where a real human should be involved. By using an augmented test app, we can leverage manual testing in a cost-effective manner.
I discussed this approach to testing on a podcast at TestTalks. This is a deeper exploration of that topic.
Real testers, human beings, have an amazing ability to spot problems. A person is not bound by the limited number of checks that an automated script performs. Instead, they continually watch for any issues: broken layout, graphical glitches, data oddities, performance problems, flawed design, and sketchy user interactions. Assisted manual testing harnesses this ability.
A manual testing application is a custom program built specifically for manual testing. The app presents several steps that a real person will perform. We have three such apps at Fuse.
Even this title screen tests features. The logo (which uses a special image format) caught at least one platform-specific issue.
Our primary test app is simply called the "manual test app". A tester installs this on any of our supported platforms and follows the instructions in the test. This one has a little over 40 pages that cover a wide variety of the features we offer.
Each page is an isolated test, preventing errors on one page from leaking into the next. We don't want the tester filing a chain of issues for a single defect.
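As a rough sketch of this structure, each page can be a self-contained unit that rebuilds its UI from scratch every time it is shown, so no state survives from one test to the next. This is an illustration only; the class and function names here are invented, not Fuse's actual API.

```python
# Hypothetical sketch of a manual-test-app page registry. Each page owns its
# title, its tester instructions (the small "i" help text), and a builder
# function that constructs the page's UI from scratch.

class TestPage:
    def __init__(self, title, instructions, build):
        self.title = title                # shown in the page list
        self.instructions = instructions  # what to do and what to expect
        self.build = build                # rebuilds the page's UI on demand

    def show(self):
        # Rebuild on every visit; nothing is shared between pages, so one
        # defect produces one issue report instead of a cascade.
        return self.build()

pages = [
    TestPage("Animation", "Expect a smooth one-second fade.", lambda: "animation-ui"),
    TestPage("Gradient", "Background should blend without banding.", lambda: "gradient-ui"),
]

for page in pages:
    print(page.title, "->", page.show())
```

The key design choice is that the builder runs on every visit: a crash or glitch on the animation page cannot corrupt what the tester sees on the gradient page.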
An animation test covers basic drawing, trigger response, transforms and animation timing. This revealed performance issues on some devices, as well as rendering issues with the background gradient.
Notice the small "i" graphic at the top-right. These are instructions to the tester for each page: what to do and what to expect. This removes the need for a lengthy external test script. Maintaining scripts is a significant cost in most manual testing, so it's good to avoid them.
We don't want the tester to read these instructions every time they run the app, nor do we want them to dedicate too much brain-space to understanding the tests. Many of the tests are self-explanatory or obvious. A tester may only need to read the instructions the first time, or refer to them when something suspicious happens.
Here the instructions are embedded in the test itself. Though basic, it covers some vital features of the system. Fortunately it's been a long time since this page has revealed any errors.
Creating an augmented test app requires a sufficiently modular software design: we need to be able to use each feature directly inside the test app. We don't have to worry about added effort here, since modularization is just good development practice, regardless of testing.
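To illustrate why this matters, consider a feature that lives in an importable module: the test app can then drive it directly instead of navigating through the full product. The module and function names below are invented for illustration.

```python
# Hypothetical example: a small "feature module" the production app uses,
# reused as-is by the manual test app.

def linear_gradient(start, stop, steps):
    """Interpolate evenly between two values, e.g. grey levels in a ramp."""
    return [start + (stop - start) * i / (steps - 1) for i in range(steps)]

def gradient_test_page():
    # The test page calls the production module directly; the tester's only
    # job is to judge whether the rendered ramp looks smooth on this device.
    return linear_gradient(0.0, 1.0, 5)

print(gradient_test_page())  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

If the gradient logic were tangled into application screens instead of a module, the test app couldn't exercise it in isolation like this.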
The test app shouldn't be a dumping ground. We only want to include tests where a set of eyeballs is helpful. Focus on things that a human can process but where it'd be difficult to write a unit test. Features with visuals, animation, or user interaction are good candidates.
At first it's tempting to test everything in the manual test app. In the short term it can appear to reduce programmer effort, but it hurts the testing effort. If there are too many similar tests, or simply too many tests, the tester has a harder time identifying problems. They are forced to refer to the instructions more often and struggle to remember what they saw last time. This slows the tester down and hampers their ability to identify defects.
Though easy to understand, this page combines a large number of features. I fixed several defects while writing it, but I don't believe it has caught any new errors during testing yet. Regression testing is still valuable, though.
Manual tests also shouldn't behave like unit tests. We aren't trying to isolate a single feature, nor are we trying to do exhaustive API coverage. We reduce the total number of tests by combining several features into a single page. The tester uses their amazing biological neural net to process many things concurrently.
This page tests a combination of many vector drawing features. Though not exhaustive, it is a nice sanity check. We have a separate test app covering more of those features, which we can schedule at different intervals and on different devices.
A well-written assisted test app should accompany any application that has a graphical user interface. It gives us the advantage of having a real person analyse the app without the need to maintain costly test scripts. It avoids the burden and cost sink of automated UI tests.
Even programmers can use the app to quickly verify their changes haven't broken something obvious. Placed in the hands of a dedicated tester it produces a solid regression test. It's a boon to the quality of the software.