It's just not what you're thinking right now.
I’m not referring (or not only referring) to the way good test automation streamlines manual testing by reducing repetitive, tedious, and error-prone work, which is what you may assume!
Most see testability as the ease of test automation suite creation. This concept of testability makes improving UI, API, and integration testing relatively easy to define and research. It gives a name to the process, and thus easy-to-find references on how to improve it. However, we don’t often discuss testability for what the testing industry likes to refer to as ‘manual testing’. Unless you hang around the Ministry of Testing and a few other select corners of the web, you find that most of the focus and the material is on assisting test automation. So I wanted to add to the literature outside test automation, and talk about software testability in the context of ‘manual testing’, and I do think it should use the same term, because just like for automation, it makes the testing easier.
I suppose we do call them testing tools. They are both the judicious use of tools already out there, and ones you make or program yourself from scratch.
Predominantly, they come in six categories:
- they separate concerns (where possible)
- they enhance the team's skills
- they generate test data (easily)
- they change the system's state (easily)
- they search already available test data (with precision)
- they search the logs & db of test envs (in detail)
The ease is what enhances a software’s testability, and overall, they increase a team’s capacity.
I think we should call this manual test assistance ‘software testability’ more often. It’s part and parcel of making software easier to test. But what does this ‘manual testing assistance’ look like? What does this aspect of testability entail?
Here are some ideas.
This is not dogmatic. It’s more of a pick-and-mix box, a buffet, or even an encyclopaedia if you will. It’s a set of ideas you can pick up and try, according to your product’s needs. In this article I touch only lightly on the organisational and personal aspects of you and your team; predominantly, it’s code that you can slot in to make testing easier.
It is also a living document. If you have some ideas, or any useful things you've seen and done, you are welcome to share them here.
If you’ve read Ash Winter’s Testability Advocacy Canvas, you may think of this article as a set of visits to local companies to see how they do it.
I would like to acknowledge Ady Stokes for a few contributions, and editing, as well as Sam Connelly and Alan Giles for reviewing the frame. And now, a few ideas for software testability.
View this summary document of my TestBash World 2022 talk on Testability.
Everyone who tests the software (testers, product team, and developers alike) could have read-only access to the Terminal log in AWS, and read-only access to the code in your repository software of choice so they can run it locally. This enables them to run more destructive tests, not necessarily cybersecurity-related ones, without heavy planning and coordination, and without potential chaos.
In the Dev Tools Console log, display as much useful custom logging as you can when the code is in pre-prod mode.
It’s more direct to read than a stack trace. Of course, you want both. Combined, you know the cause, and where to look in the code to fix it.
Tools like GitHub give read-only access to the code as a matter of course, as the natural consequence of giving someone access to a private repo, so it doesn’t have to be a big step.
Instrumentation (like Honeycomb) and monitoring (like DataDog) add this for you, automatically notifying you of some types of anomalies and making it easier to set up the rest.
Showing everyone how to read multipage JSON requests and responses
JSON is a major method used in API requests, having taken over from XML. It's important for all testers to be able to understand how to read it.
When JSON payloads and requests become multi-pagers, they can be draining to read! That can be unavoidable, so let’s make it easier!
Explain to your team that they can be converted into CSVs, and prettified with JSON prettifier websites and programs, so it’s easier to read when they go on for a bit.
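As a sketch of the kind of thing to show your team: Python’s built-in json module can prettify any payload in a couple of lines. The payload here is a made-up example.

```python
import json

# A made-up one-line payload; any long API response works the same way.
raw = '{"user":{"id":101,"name":"Ada","orders":[{"id":7,"total":42.5}]}}'

parsed = json.loads(raw)

# indent=2 turns the one-liner into a readable nested layout;
# sort_keys lines up the keys so two payloads are easier to compare.
pretty = json.dumps(parsed, indent=2, sort_keys=True)
print(pretty)
```

On the command line, `python -m json.tool payload.json` does the same job without writing any code.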
Clear error messages
Clear error messages make software easier to use for everyone. What makes a clear error message?
They’re just as useful an indicator to testers as to users of what has ‘gone wrong’.
Clear error messages are sent out by the error handlers when the code is in pre-prod mode. Some understandably have to be obfuscated from the API for security reasons, and the rest should be clear anyway for a good user experience. When they do have to be obfuscated, Ady Stokes points out that another approach is to hide them behind a debug menu and make them accessible in any pre-prod environment. The debug menu has to be hard to find, and protected (by a passcode, for example) but it can work really well to see the whole picture.
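One way to sketch that split between detailed pre-prod errors and obfuscated production ones, assuming an APP_ENV environment variable (the names here are illustrative, not from any particular framework):

```python
# A minimal sketch of environment-aware error handling; APP_ENV and the
# response shape are assumptions for illustration.
import os
import traceback

def build_error_response(exc):
    """Return an error payload: detailed in pre-prod, generic in production."""
    if os.environ.get("APP_ENV", "production") != "production":
        # Pre-prod: surface the real cause so testers can act on it.
        return {"error": str(exc), "trace": traceback.format_exc()}
    # Production: a generic message; the detail goes to the server logs instead.
    return {"error": "Something went wrong"}
```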
Reduce silent failures
Silent failures can make testing frustrating. They’re the reason behind the bug reports or Slack messages that just say ‘I can’t go any further here’, where you pull up the Network tab and find a 500 error.
Every 500 could show an alert UI element and be added to the debug log as per above in pre-prod mode.
Display all entity database records and make entity model values editable
Database records are all the information we have on users and related entities. It’s where it’s stored and it is what it is.
Display the output of the program, the information created and generated by the entities that use it, in a way that is easy to view for everyone who tests the software, with admins able to see everything all users see. Database records. All of them. This is typically an administrative section with different levels of permissions for different levels of users.
Make entity model values that influence how the software treats them changeable in Admin, where reasonable. For example, from basic elements like whether an account is active or inactive, up to the ‘age’ of the account. But perhaps you would exclude ‘has sent a message’, reserve sending messages for the user platform, and then check on the Admin platform whether the user has sent a message.
Give ‘super admins’ access to edit all entity model values. You can turn off everything that won’t be needed or wanted in production with feature flags.
Did you know?
Some systems have these viewing capabilities built into admin UIs complete with change history for audit purposes, particularly in highly regulated environments.
Read-only access to the Database
With read-only access, you can view and search the database, but you can’t modify it.
Cut through all the time and red tape of adding it all to the admin interface. Useful in cases where there just isn’t a need for one and it truly would take up unnecessary, unavailable developer or SDET time.
You might like to use a tool like Redash, as it is just for viewing and querying data, and you don’t have to navigate through the rest of a platform like AWS to find it, which can be intimidating.
Regular error alerts
Know when errors are happening, when they happen. You might get an alert immediately, or receive a summary of alerts for the day, or be able to ping a service to see if it’s still up, but probably some combination of the above.
In particular for integrations with external software. Whose software failed? Yours or theirs? Is it important for the end-user to know who? Or just that it failed? It sure is important for the development team to know.
For example, alerting comes plugged into AWS, though you have to set it up to react to certain events.
Controlling variables and settings placed in Config files
Config files can place controlling variables, settings and API keys in a central location
This makes it easier for everyone who tests to modify system-wide settings without needing in-depth knowledge of programming, and you can change the system without needing a full re-deploy. This can be a blessing and a curse. You’ll need to have a stringent .gitignore file to make sure no unintended changes go through. You can find plenty of base .gitignore files for different projects to start you off.
Store the base config files in source control.
NOTE: Some API keys should not be saved to version control, for security reasons. Instead, the file contains a reference to them, and the keys themselves are retrieved elsewhere and passed in as program arguments or environment variables.
Creating and managing this system is part of what is known as Infrastructure as Code.
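A minimal sketch of that pattern in Python. The file name, the keys, and the `env:` reference convention are all illustrative assumptions: the checked-in config holds the tunable settings, while secrets are resolved from environment variables at load time.

```python
# Sketch: load a checked-in config file, resolving "env:" references to
# environment variables so secrets never live in source control.
import json
import os

def load_config(path):
    with open(path) as f:
        config = json.load(f)
    # The file stores a reference like "env:PAYMENT_API_KEY", never the key itself.
    for key, value in config.items():
        if isinstance(value, str) and value.startswith("env:"):
            config[key] = os.environ[value[len("env:"):]]
    return config
```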
Anonymising production data
This is placing (a subset of) anonymised production data into your pre-prod environments
You can’t get any more realistic than production data, so it enables more realistic testing. Anonymising allows you to load production data onto your test systems even under legal restrictions like GDPR, and it’s just generally good practice regardless. Or as I always say, email is always production: if that’s someone’s real email address, that email is getting seen, and that use of your email send quota is real.
A script that anonymises the data reduces the likelihood of compromising users’ privacy. Simple anonymisation is likely not to be enough, and you may have to add randomisation to it. You will likely also have to do a few random manual checks to confirm that all values have been anonymised in a sensical way.
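As a sketch of one approach (the field names and hashing scheme are assumptions): deterministic hashing keeps relationships between records intact across tables, while a non-deliverable test domain keeps emails safe.

```python
# Sketch: deterministic anonymisation so the same real email always maps to
# the same fake identity, preserving joins between tables.
import hashlib

TEST_DOMAIN = "example.test"  # never a real, deliverable domain

def anonymise_user(user):
    token = hashlib.sha256(user["email"].encode()).hexdigest()[:12]
    return {
        **user,  # keep ids and non-identifying fields as they are
        "name": f"User {token}",
        "email": f"user-{token}@{TEST_DOMAIN}",
        "phone": "+15550100000",  # dummy number, not a real subscriber
    }
```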
Testing features and their crontabs separately
Cron tabs set off functions to run at set intervals, typically when they detect a change in the ‘status’ of an entity.
If you change the ‘status’ of an entity, it’s handy to be able to check that the functions behind the crontab are set up correctly and will go off appropriately, right after you have made the change. You would test separately that the intervals themselves are set correctly.
Set each crontab up so it can be set off manually by clicking a button on a hidden screen in admin. For this, you bypass the cron; effectively, what you are doing is saying ‘run the function that this cron job uses’.
Alternatively, put the crontab schedules in the config file so they can easily be changed to run frequently, and you don’t need to set up a button to set the function off. The issue with this is that, while it is more efficient in the short run, it can clutter the terminal log, making it more difficult to debug issues you find locally.
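However you trigger it, the key design move is keeping the job logic as a plain function separate from its schedule, so the admin button, the crontab, and a local test can all call the same code. A sketch, with an invented `expire_stale_accounts` job and status field:

```python
# Sketch: the job the crontab normally fires nightly is a plain function,
# so an admin button (or a one-liner) can run the same logic on demand.
def expire_stale_accounts(accounts, today):
    """Mark pending accounts as expired once their expiry date has passed."""
    for account in accounts:
        if account["status"] == "pending" and account["expires_on"] <= today:
            account["status"] = "expired"
    return accounts

# The crontab entry and the hidden admin button both call
# expire_stale_accounts(); neither owns the logic.
```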
Making copies of raw and processed files downloadable (from the same place)
Is uploading and processing files one of the key functions of the software? Keep copies of files from before they are processed by the software, and after they are processed by the software.
The changes may be something you didn’t expect, and raw files enable you to immediately see the difference. There’s a Unix utility called diff, derived from ‘difference’. If you don’t have access to that, you can copy the content from the raw (text) files you now have access to into Diffchecker, for example, and it will run a comparison for you.
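If diff isn’t to hand, Python’s standard-library difflib produces the same unified view. The CSV content here is invented:

```python
# Sketch: compare a raw file with its processed copy, line by line.
import difflib

raw = ["name,amount\n", "Ada,100\n", "Grace,200\n"]
processed = ["name,amount\n", "Ada,100.00\n", "Grace,200.00\n"]

# unified_diff yields only the changed lines plus a little context,
# in the same format the diff utility uses.
for line in difflib.unified_diff(raw, processed, fromfile="raw.csv", tofile="processed.csv"):
    print(line, end="")
```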
Downloads of aggregated data
Have bulk downloads of all the data that goes in, so you can analyse it with other software, and see if it matches the results from your software
Useful if you need to analyse user/entity data to check that the internal statistics engine is running correctly.
Sped-up individual user creation
Creating users, but in Admin/Swagger/Postman, because it's more efficient, and gives you fresh new users faster.
Chances are it’s faster than creating one as a user would. If you need a fresh user, you’re not testing the registration process, and you’ve already tested that there are no differences between a user created on the user platform and one created in the admin portal, it may be the better option in a hurry.
Your own custom setup in Admin. Running a single API request in Swagger or Postman, or a set in Postman with a Runner, if that is what is required.
Bulk user generation
Bulk test user creation is more than being able to create individual users in Admin. Although you should be able to do that too.
You will have created many test user accounts to use later for testing, and you can use them for load testing as well (assuming you aren’t also testing registration and verification in that test).
Bulk upload of users through uploading a CSV file in Organisation-level Administration or central Administration
User generation by requesting a number of users through an API request ‘served’ through Swagger, the Admin section, or even a shared Postman collection
One of the pitfalls here is that you do have to put in a little extra work to make the users realistic, or it has the potential to mess up your load testing later. You will also get better results when testing if you base these auto-generated users on your personas.
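A sketch of what a bulk-generation script might look like, with invented column names and personas. Seeding the random generator keeps the data reproducible, which helps when replaying a failure:

```python
# Sketch: generate a CSV of test users for a bulk admin upload, with a
# little persona variety baked in. Columns and personas are assumptions.
import csv
import random

PERSONAS = ["new_customer", "frequent_buyer", "lapsed_member"]

def generate_users(count, path, seed=42):
    random.seed(seed)  # reproducible data makes failures easier to replay
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["email", "name", "persona"])
        for i in range(count):
            persona = random.choice(PERSONAS)
            # example.test is reserved, so no real inboxes receive anything
            writer.writerow([f"loadtest-{i}@example.test", f"Test User {i}", persona])
```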
Size of change
Let’s think a little more conceptually yet again.
Smaller releases to staging before each bout of testing
Smaller releases to staging make it easier for testers to understand the impact of each change, and give the testers more time to test in general, enabling more exploration.
The developers release their code to the pre-prod environments more often, which eventually requires the automation of build pipelines.
Mocks and stubs
If you’ve worked on any e-commerce projects, or projects with payments, you’ve already used this. All the payment APIs have test cards that test various transaction and validation states.
But you can also use it in other scenarios, for example to simulate users. Say a user with a particular email address makes a purchase: they are treated like a frequent purchaser and sent an email with a special offer for spending $500 in the last 3 months. This is added through the purchase API request, which lives in the production code, so pick obscure email addresses (or ones from your own domains) and hide that section of the code behind a feature flag in prod, just in case.
If you or your testers are unable to interact with the database directly, this gives you a way to set up your project so that it interacts with the database on your behalf.
Add predetermined responses to certain inputs. Within, say, the ‘registration API’ you could say: ‘if the user has email address x, then act as if they are a new customer, or a frequent customer, or a customer with an expired membership’, and so on.
- A set of accounts are set up to show different account personas
- Card payments give responses based on the expiry date used
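The ‘magic input’ pattern behind both bullets can be sketched like this; the reserved addresses and segments are illustrative, not from any real project:

```python
# Sketch: in pre-prod, reserved email addresses map to canned customer
# states instead of hitting the real profile service.
STUBBED_CUSTOMERS = {
    "new@example.test": {"segment": "new", "lifetime_spend": 0},
    "frequent@example.test": {"segment": "frequent", "lifetime_spend": 750},
    "expired@example.test": {"segment": "expired_membership", "lifetime_spend": 120},
}

def lookup_customer(email, preprod=True):
    if preprod and email in STUBBED_CUSTOMERS:
        return STUBBED_CUSTOMERS[email]
    # Fallback stands in for the real service call in this sketch.
    return {"segment": "unknown", "lifetime_spend": 0}
```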
Default One Time Password Responses
This is for projects that have 2FA. You can hardcode a shared test phone number and a fixed one-time password.
It saves time diving for the phone, and you don't end up with any important test accounts tied to individual employees through their mobile number.
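A sketch of how such a bypass might be gated, with an invented test number and code. The pre-prod check is the important part: the fixed code can never work in production.

```python
# Sketch: accept a fixed OTP for reserved test numbers, pre-prod only.
# The number and code are made up for illustration.
TEST_NUMBERS = {"+15005550006"}
FIXED_OTP = "000000"

def verify_otp(phone, code, real_check, preprod=False):
    """real_check is whatever your OTP provider's verification call is."""
    if preprod and phone in TEST_NUMBERS:
        return code == FIXED_OTP
    return real_check(phone, code)
```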
Registering test phone numbers - Google Cloud
How To Use Twilio Test Credentials with Magic Phone Numbers
- Feature flags
- Feature flag overrides
- Using HTTP proxies / HTTP monitors / Reverse Proxies
- Using mock servers
What can high ‘manual’ software testability do for you?
A better understanding of the system you are working with enables better testing and support.
Better bug reports while in testing
Ensure all those who do testing know how to write strong bug reports, and have a mandatory template for them. You could even have different templates for different classes of bugs. Either way, it gives you all the information you could possibly access or need.
Faster chasing down bugs in production
It’s easier to work backwards when you have a more detailed trail leading back to the cause(s) of the issue.
It could make exploratory testing easier
Improved manual testability typically makes more variables alterable: at the click of a button, by writing in a text field, by changing some numbers or a date field, or by sending an API request. Although remember to test your shortcuts first!
Altogether, it could make it easier for your company to improve the quality of its ‘manual testing’.
Who improves software testability?
Your developers and testers. It grows and evolves alongside the production code.
This could require
Access to source control
You’ll notice some of this software testability access entails giving wider, if read-only, access to Terminal logs and production code to everyone who tests, potentially the entire team.
An all-access Platform Administrative section
That’s where you add the more easily changeable levers.
This means teaching your team enough to feel comfortable setting up and navigating their local environment. You’ll probably need a fully kitted-out user guide and a more robust README.md set-up file, alongside allocating a buddy to help out when team members get stuck.
Coordination with your DevOps team
Whoever issues permissions for all the various services will probably end up with more work for a while. You can streamline this a tad by issuing a ticket to them with the exact details on how much access a new starter or team or level needs, so there is less back-and-forth setting up.
Recording your progress and plans for testability will keep you on track.
Scheduling Cron job with Crontabs
Team Guide to Testability by Rob Meaney and Ash Winter
Testability Advocacy Canvas by Ash Winter
PayPal Payflow Pro Test Cards
Assembly Payments Test Data
Data Anonymization Techniques