Creating excellent automated tests is not easy, and I would say it’s even more difficult when you join a company and see that a bunch of stuff needs to be rebuilt just to establish a good base for writing tests. In this article, I would like to describe my observations based on several projects I had an opportunity to work on, and I will try to find the holy grail of rules for creating automated tests.
Wrong automation framework
The decision on the automation framework is the most crucial one, as it will affect the team’s work on automated tests for a long time. So, on what basis are automation frameworks usually chosen?
There are two main rules I can observe:
– The one I know, for example, from my last job
– The one which is the most famous at the moment
I am not saying this is bad, but it’s not enough. The automation framework should first be evaluated against project needs: there’s no point in picking a framework with known issues around document upload if that is core functionality in our project. We should also ensure that the framework supports the programming language used in the project. Why? Because QAs are not the only ones on a team, and quality is a team effort, so we need to make sure the dev team will be able to handle potential changes in the test repository. If that option is not possible, at least make sure the chosen framework has good documentation and a strong community; that’s also one of the crucial points to consider.
Incorrect tests hierarchy
The test hierarchy we most often observe in companies is the Testing Ice Cream Cone: very few unit tests, some UI tests, and lots of manual tests.
The Testing Ice Cream Cone is easy to identify: the QAs have created an automated test suite, but the development team has not, or its tests are very limited. As a result, the tests are long-running and flaky, because the development team has not helped build the suite or architect the application in a way that makes it easy to test. New check-ins by developers regularly break product functionality, and they rely on the QAs to report the defects so they can fix them.
The above scenario can be fixed by flipping the cone back into the classic test pyramid, where the development team’s tests form the base of automation.
In that way, we have complete coverage at the low level, and the tests are fast, so developers receive quick feedback about their changes. Unit tests also prevent unverified changes from reaching the app, so constant breakages can be avoided.
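To make the fast-feedback point concrete, here is a minimal sketch of a low-level unit test (the `apply_discount` function and its values are made up for illustration): a pure function plus plain assertions give feedback in milliseconds, long before any UI test could catch the same regression.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # Each assertion runs in microseconds, so breaking changes
    # are caught on every check-in, not days later in a UI suite.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(59.99, 0) == 59.99


test_apply_discount()
print("unit tests passed")
```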
The E2E (UI) tests shouldn’t cover a significant part of the code, as they are time-consuming and might require a huge maintenance effort. Rather than UI tests, the QA team should focus on API/component tests to find potential issues at an early stage, and in specific places in the code.
Lack of automation scope (strategy)
As I mentioned at the beginning of this article, writing good automated tests is not easy, but there’s one thing that is even more difficult: defining what should be automated. At first glance that seems easy, but from my experience, it’s not. Well-designed tests shouldn’t be created without first defining what should be automated as a priority, what our core functionalities are, what can wait, which tests require a lot of effort, and so on.
Usually, automation is done ad hoc, does not cover all crucial functionalities, and makes test coverage really hard to track. A good approach in that case is to at least create a high-level document describing the automation scope and, ideally, share it with the team, so everyone is aware of the scope and can propose changes.
Ignoring automated tests results
Let whoever has checked every test report from the automated tests pipeline cast the first stone. Not checking test reports is, unfortunately, common practice: the tests are executed, but the results are not verified by QAs, or only from time to time. So why are we writing automated tests and then ignoring their results? If tests are flaky and fail from time to time, they should be fixed, and their results treated seriously on each run. If not, why create them at all?
Maintenance hell
Automated tests can sometimes give you a headache, especially when it comes to maintenance. Still, there are a few practices that can help you and your team maintain tests.
Firstly, make sure that UI tests use predefined selectors, ideally ones agreed upon between developers and QAs. Thanks to that, you will avoid test failures caused by a selector being changed without the QA team’s knowledge.
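One possible shape for such an agreement, sketched in Python (the selector names and `data-testid` values below are hypothetical): developers tag elements with stable test attributes, and tests reference them only through one shared map, so an unknown or renamed selector fails loudly instead of silently timing out.

```python
# Shared map of selectors agreed between developers and QAs.
# Keys and data-testid values are illustrative, not from a real app.
SELECTORS = {
    "login_button": '[data-testid="login-button"]',
    "username_input": '[data-testid="username-input"]',
}


def selector(name: str) -> str:
    """Fail loudly if a test asks for a selector the team never agreed on."""
    try:
        return SELECTORS[name]
    except KeyError:
        raise KeyError(f"Unknown selector '{name}' - add it to the shared map")


print(selector("login_button"))
```

Because the map lives in one file, a developer renaming a `data-testid` can update it in the same pull request, and the QA team sees the change in review.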
Secondly, split out reusable code into functions, organize them, and make them adjustable, so you avoid multiple places in the code that handle the same thing in slightly different ways. That might be a set of helpers in your test repository or a separate code repository shared between multiple repositories.
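A typical candidate for such a shared helper is polling: instead of every test carrying its own slightly different sleep-and-retry loop, one adjustable function handles it. This is a generic sketch, not tied to any particular framework.

```python
import time


def wait_until(condition, timeout: float = 5.0, interval: float = 0.1):
    """Poll `condition` until it returns a truthy value or the timeout expires.

    Returns the truthy value, so callers can use the result directly.
    Both timeout and interval are adjustable per call site.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")


# Usage: any test can wait for any condition with the same helper.
value = wait_until(lambda: 42)
print(value)
```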
Thirdly, keep UI selectors in one place, so you avoid making future updates in multiple places. I’m not saying everyone needs to follow the Page Object Pattern, but make sure your code is well organized, especially the parts that change often.
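As one way of keeping selectors in a single place, here is a minimal Page Object sketch. The `LoginPage` class, its selectors, and the `FakeDriver` are all made up for illustration; the fake driver just records actions so the sketch runs without a browser, where a real test would pass in an actual browser driver.

```python
class LoginPage:
    # Every selector for this page lives here, so a changed selector
    # means one edit, not a hunt through every test.
    USERNAME = '[data-testid="username"]'
    PASSWORD = '[data-testid="password"]'
    SUBMIT = '[data-testid="submit"]'

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username: str, password: str):
        self.driver.fill(self.USERNAME, username)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


class FakeDriver:
    """Stand-in for a real browser driver; records actions instead."""

    def __init__(self):
        self.actions = []

    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))

    def click(self, selector):
        self.actions.append(("click", selector))


driver = FakeDriver()
LoginPage(driver).log_in("alice", "secret")
print(driver.actions)
```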
Fourthly, make sure your tests will be understandable to everyone, even after a few years, and do not forget to document and comment your code. Remember that automated tests might require the whole team to contribute; make that journey easy and smooth, not hell.
As we can see, creating good automated tests is not easy, but it is still possible. Just remember that each project is different and might require a different approach: do not blindly follow what you know from previous jobs, but stay open-minded and flexible. Don’t be afraid of change, and write code in a way that allows everyone to continue your masterpiece.