ashleygraf_

Notes on Shi Ling's test prioritisation strategy

As a tester, I prefer a combination of scripted and exploratory testing. It lets me find potential problems I've already thought about, and surface others I haven't uncovered or even considered yet.

I was really fascinated by Shi Ling's talk, so I thought I'd try my hand at adapting it for manual testing that combines these two tactics.

She uses it to decide what to prioritise for test automation, but I think it has its worth in deciding how to structure manual testing as well. The same principles of limited time apply, although the computer and the human guiding it are faster at different things. And you may well still use automation in testing, say, the average speed of browser requests, because that's the sensible thing to do; it's just not integrated into the codebase.

So how do you decide what to prioritise?

I loved her Scare-o-meter.

Rate on a scale of 1 to 5:
One. What is the business impact if this component fails? (Cost of failure)

My sub-questions: What does failure look like? Which failure aspects do we care most about avoiding?

She then notes that the type of failure that would lose customers' respect, and eventually the customers and revenue themselves, would be a 5.

The little irritations are a 1. They are acceptable up to a point, though in my view you don't want too many of them. They are still important to find, but not the first priority to look for, and they are typically not critical to fix. Too many of them, though, will eventually create complaints, or an irritation the client can't quite pin down.

I think if you're running low on time, script tests for the 5s, and add some time for exploratory. I've been testing for only a year, and I've already come across P1s and P2s in unexpected ways. Do we want to get better at anticipating them? Absolutely. Will things we think of as unexpected still happen here and there? Of course. So let's catch them before a customer does.

Do UX tests initially on the strength of a few testing ideas. Build an understanding of the implicit expectations of the client and their customers, and go from there. Think of a few test oracles (products similar to the one you're testing, that would be considered competitors, or at least in the same space).

Two. How frequently is this component used? (Cost of failure)

1 - more than once every month
2 - more than once every week
3 - more than once every day
4 - more than once every hour
5 - more than once every minute

She points out that logs will always be a more accurate measure of use than personal assessment or even user interviews: muscle memory takes over, so people misjudge how often they really use something.
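
A minimal sketch of that idea, not from the talk: assuming a JSON-lines access log where each event has a "component" field (both the format and the field name are illustrative), count events per component over a known window and map the rate onto the 1-5 scale above.

```python
import json
from collections import Counter

def frequency_rating(events_per_minute: float) -> int:
    """Map an observed usage rate onto the 1-5 frequency scale."""
    if events_per_minute >= 1:
        return 5   # more than once every minute
    if events_per_minute * 60 >= 1:
        return 4   # more than once every hour
    if events_per_minute * 60 * 24 >= 1:
        return 3   # more than once every day
    if events_per_minute * 60 * 24 * 7 >= 1:
        return 2   # more than once every week
    return 1       # roughly monthly or less

def rate_components(log_path: str, window_minutes: float) -> dict[str, int]:
    """Count events per component in a JSON-lines log and rate each one."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)            # one JSON object per line (assumed format)
            counts[event["component"]] += 1     # "component" field is an assumption
    return {name: frequency_rating(n / window_minutes) for name, n in counts.items()}
```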

Three. How complex is this component? (Probability of failure when new changes are added)

1 - Straightforward - there are only a few cases
3 - A little complex - there are around 10 cases; it's manageable
5 - Complex - there are more than 30 cases; adding new code is likely to introduce bugs

She asks developers which code they are most afraid to touch to surface these. I think this will be an interesting question to ask when deciding which scripted tests to start with.

In my mind, features with high cyclomatic complexity demand a combination of scripted and exploratory testing time, and a good code review beforehand. But you need a damn good way of keeping track of each variation you test, no matter the method you choose. As someone with limited short-term memory, I'd like to write notes in a report that I can keep adding details to as I find them, and accompany it with signing off in a QMS as I go.
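
Since cyclomatic complexity comes up here, one way to surface the scary code mechanically is a complexity report. This is a rough sketch, assuming the radon library (`pip install radon`) and a hypothetical module name; the thresholds are only my loose translation of the 1/3/5 scale above, not Shi Ling's.

```python
from radon.complexity import cc_visit

def complexity_rating(source: str) -> int:
    """Map the worst cyclomatic complexity in a module onto the 1/3/5 scale."""
    blocks = cc_visit(source)                              # one entry per function/method
    worst = max((block.complexity for block in blocks), default=1)
    if worst < 10:
        return 1    # straightforward: only a few cases
    if worst <= 30:
        return 3    # a little complex, but manageable
    return 5        # complex: new changes are likely to introduce bugs

with open("checkout.py") as f:                             # hypothetical module name
    print(complexity_rating(f.read()))
```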

This is where I think questioning the requirements to make sure everyone is on the same page is really handy.

Four. How much domain expertise is required to understand the component? (Cost of maintenance)

Do you need to get industry specialists to perform user acceptance testing? How easy is it to develop an understanding of the project? How strong is the documentation of the test procedures and the product?

1 - Anyone can understand how this component works
3 - You need basic knowledge about the industry
5 - You need an industry specialist

I've only worked with 1s and 3s, so I don't have much to say here. But I definitely think the more granular the documentation is, the better. You don't know when you'll need it, and it's faster to read it than to retest the feature just to rediscover how it behaves.
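
Pulling the four questions together: these notes don't capture how Shi Ling combines the ratings, so here's a hedged sketch that simply sums them into one "scare score" and sorts components by it. The component names and numbers are illustrative only; weight the terms differently if one dimension matters more in your context.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    business_impact: int    # 1-5: cost of failure
    usage_frequency: int    # 1-5: how often it's used
    complexity: int         # 1, 3 or 5: probability of failure when changes land
    domain_expertise: int   # 1, 3 or 5: cost of maintenance

    @property
    def scare_score(self) -> int:
        # Assumption: a plain sum; the talk's exact weighting isn't given in these notes.
        return (self.business_impact + self.usage_frequency
                + self.complexity + self.domain_expertise)

components = [
    Component("checkout", business_impact=5, usage_frequency=4, complexity=5, domain_expertise=3),
    Component("profile settings", business_impact=2, usage_frequency=2, complexity=1, domain_expertise=1),
]

# Highest scare score first: script tests for these, then spend exploratory time around them.
for c in sorted(components, key=lambda c: c.scare_score, reverse=True):
    print(f"{c.scare_score:>2}  {c.name}")
```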
