Manual testing is a great solution for small and quickly evolving software. It allows the team to focus on delivering business value while minimizing the effort spent on automation.
It is, however, a job that requires skill in spite of its apparent simplicity. I've worked as a full-time tester and, even after switching to a developer role, kept working on my testing skills to become ever more efficient with my time.
These are some tips I have for you in case you're wondering how to take your testing to the next level, whether testing is your role, you want to deliver higher-quality features as a developer, or you just want to fill in the gaps for your team.
Know the product, have a plan
First and foremost, one of the greatest strengths of a manual tester is that they get to regularly approach the application as if they were an end user. And, of course, it's necessary to actually know the end user and the product. Who is going to use it? Where? To accomplish what?
One of the principles of (good) software testing is that all tests should be planned. We don't have the time and patience to exhaustively test every corner of the app, so try to outline everything that you need to test and make sure you're more thorough in critical areas:
- Where you deliver the most value to users
- In new modules
- In areas that have proven problematic in the past (it's great to track these!)
- In modules where you know the feature definition has not been up to par or the development has been rushed
I can't stress this enough, especially if your team doesn't have a full-time tester and you need to juggle different tasks: a plan will allow you to be more efficient with your scheduling and more confident once the tests have passed.
Focus your effort
It's easy to "get lost" while testing an app and just stumble around through its different features. In my experience it's really useful to know which type of testing you're performing and to approach it with clear intention. For example, if I'm testing responsiveness I go through the whole flow looking only for UI issues, and if I also need to test business logic I go through the whole flow again, rather than trying to accomplish everything in one sweep.
Here are the most relevant testing types I come across:
Smoke Testing
This is the test that checks whether the application or module even starts. This type of test should be automated, and if a smoke test fails, be aware that it might be a red flag in the process that must be addressed, since we shouldn't even be able to deploy a version that doesn't start.
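As a rough illustration, an automated smoke test can be as small as a single health check. This is only a minimal sketch: the base URL and the /health endpoint are hypothetical placeholders for your own deployment.

```python
# Minimal smoke test sketch (pytest + requests); the URL and /health
# endpoint are hypothetical placeholders for your own deployment.
import requests

BASE_URL = "https://staging.example.com"

def test_application_starts():
    # If this fails, stop: nothing else is worth testing yet.
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200
```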
Sanity Testing
It's a quick, high-level check of whether the app or module works. It's often used to verify that a deployment went through correctly and the application is ready for further, deliberate testing. I like to do a quick sanity test mainly to save myself time, so that I don't prepare complex scenarios for a feature that wasn't even fully deployed or had some obvious errors.
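A sanity check can even be as simple as confirming that the build you expect is the one actually deployed before you set anything else up. In this sketch the /version endpoint and the version number are purely hypothetical examples:

```python
# Quick sanity-check sketch: confirm the expected build actually landed
# before preparing elaborate scenarios. Endpoint and version are made up.
import requests

BASE_URL = "https://staging.example.com"
EXPECTED_VERSION = "2.4.0"  # the build that should contain the new feature

def test_expected_build_is_deployed():
    response = requests.get(f"{BASE_URL}/version", timeout=10)
    assert response.status_code == 200
    assert response.json().get("version") == EXPECTED_VERSION
```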
Regression Testing
After modifying the code, we check all the related modules to see that nothing unexpected changed as well (if we added B, does A still work?). Regression testing becomes the most time-consuming as the application grows, and further down the road it's the primary target for automation in order to save manual testing time.
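As a rough illustration of what that automation can look like, the sketch below re-runs documented scenarios with pytest after a change. Here cart_total() and the SAVE10 discount rule are invented stand-ins for whatever module was actually modified:

```python
# Regression sketch: scenarios recorded during functional testing are
# re-run after each change. cart_total() and the SAVE10 rule are invented
# stand-ins for the real business logic under test.
import pytest

def cart_total(prices, coupon=None):
    total = sum(prices)
    return round(total * 0.9, 2) if coupon == "SAVE10" else total

# Older scenarios (feature A) plus the ones added with the new change (feature B).
SCENARIOS = [
    ([10.0], None, 10.0),
    ([10.0, 20.0], None, 30.0),
    ([10.0, 20.0], "SAVE10", 27.0),
]

@pytest.mark.parametrize("prices, coupon, expected", SCENARIOS)
def test_totals_unchanged(prices, coupon, expected):
    assert cart_total(prices, coupon) == expected
```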
Exploratory Testing
It's an unscripted approach where the tester's skills and creativity are put to the test to find new ways to break the application or uncover performance bottlenecks. The aim is to improve the app in any way. It's different from the infamous "monkey testing" in the sense that exploratory testing is still planned and you need to properly allocate time for it. Monkey testing, on the other hand, is just trying out the app, often aimlessly, for whatever time you have, hoping for something to break.
This is where I would check most non-functional requirements, as their scenarios are often tedious to write down.
Functional Testing
The main tests: this is where we check that all the acceptance criteria are met, the app does what it should, and we clear out every border case we can think of. It's important to have these tests well documented for future reference, especially so that we can find regressions (i.e. we re-run these scenarios as "regression tests" and find out whether the results changed).
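One lightweight way of keeping that documentation is to write the scenarios down in a structured form that can later double as regression material. The checkout cases below are purely hypothetical examples of such a format:

```python
# Hypothetical example of documenting functional test cases in a structured,
# re-runnable form; IDs, steps and expected results are illustrative only.
CHECKOUT_CASES = [
    {
        "id": "CHK-01",
        "title": "Order with a valid coupon",
        "steps": ["Add two products", "Apply coupon SAVE10", "Confirm order"],
        "expected": "Total shows a 10% discount and the order is created",
    },
    {
        "id": "CHK-02",
        "title": "Coupon applied to an empty cart (border case)",
        "steps": ["Apply coupon SAVE10 with no products in the cart"],
        "expected": "A validation message is shown and no order is created",
    },
]
```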
Performance Testing
These are the tests where you check that the application can handle the expected workload. Often you need to decide what that expected workload is: is it what normal users will do, or is it what the application allows?
A clear example: you have a marketplace where users can add products to their cart. A normal user would have, say, fewer than 20 products in their cart, so you check whether the app stays fast enough at that size. But the hard validation on the cart limit is likely much higher; perhaps the app starts lagging after the 100th product.
Both views are important and priorities will shift as the product evolves; the important part is to gain insight into the application's limits and keep them in mind going forward. Just make sure that whatever limits you have won't put the business at risk if a malicious actor appears.
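A quick way to probe that kind of limit is a small script that keeps adding products and watches when responses slow down or start failing. The /cart/items endpoint and payload below are made up for the example, and this should only ever be pointed at a test environment:

```python
# Rough performance probe sketch: keep adding items to the cart and log how
# response times evolve. The /cart/items endpoint and payload are made up;
# run it against a test environment, never production.
import time
import requests

BASE_URL = "https://staging.example.com"

def probe_cart_limit(max_products=150):
    session = requests.Session()
    for n in range(1, max_products + 1):
        start = time.perf_counter()
        response = session.post(
            f"{BASE_URL}/cart/items", json={"product_id": n}, timeout=30
        )
        elapsed = time.perf_counter() - start
        print(f"{n:>3} products: HTTP {response.status_code} in {elapsed:.2f}s")
        if not response.ok:
            break  # we likely hit the hard validation limit

if __name__ == "__main__":
    probe_cart_limit()
```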
Compatibility
It's when we test that the application works on different platforms and devices. Exhaustive tests of this type are often not required, except for specific features where the platform might be relevant.
Be better than automated tests
Keep in mind that the manual tester is not just a cheaper alternative to automated tests. Having a human being test the app can provide real value and insights that no automation can replace.
An analytical tester can find ambiguity and contradictions in the functional specification, identify flows that don't feel right, confidently test the design and keep in mind the business goals the module aims to achieve.
Moreover, finding a bug or a point of improvement can be just the start: a tester involved in the product can help with prioritization, definition and, why not, actually fixing some stuff.
Work with your team
Lastly, communication with all roles is key: stay in the know about how each module came to be. If any part of the process was rushed or suffered constant changes, you can compensate by adjusting the testing plan. By owning the bug backlog you can find patterns and work with devs to help them address the underlying technical debt. Your training in detecting edge cases might help define features that are easier to develop and cheaper to maintain. As a tester you can empower the team to confidently deliver a quality product in spite of the many setbacks that may appear during development.
Conclusion
There are teams of all shapes and sizes, and I hope you can take away something useful to apply to your work! I just want to end this note by pointing out that in no way do I think automated tests should be skipped. What I'm proposing is good communication, so that the automated and manual testing strategies can work cohesively to fill in each other's gaps.