Speed. Product quality. More. Right now. Competition in the market is unrelenting these days. Even well-established and successful businesses find themselves struggling to keep pace, stay productive, meet clients’ expectations, and deliver value to their customers throughout the development cycle. That’s exactly where DevOps comes into play. The roots of DevOps lie in the need to break down silos, achieve better ownership of the delivered product, and collaborate more effectively across teams. But are DevOps and continuous automation testing the Holy Grail of adopting automation correctly?
In this article, we’ll talk about continuous automation testing as a part of the DevOps cycle, and share our experience with implementing it in established and reputable companies from various domains and with various business needs. It’s not just an overview, but insights from practice that can be useful for those who are going to adopt DevOps practices and broad automation coverage. We’ll explain:
- what we do
- why we do it
- what we gain
- and why this can be valuable for your business
But let’s start from the very beginning. What is DevOps, and what role does continuous testing play within it? If you are already familiar with these basic terms, feel free to skip ahead to the how-to part.
What is Meant by Continuous Testing in DevOps?
DevOps is really the fusion of development activities, quality assurance, and operations. In the context of DevOps, software engineering is not just about coding; it also covers design, specifications, and requirements. Quality assurance includes all the testing processes and activities needed to deliver a quality product. Operations covers deploying, running, and monitoring the product, and it is just as crucial.
Traditionally, Agile teams strive to build software products quickly: you could write code fast and test it even faster, but the infrastructure could never keep up. DevOps basically connects all of the dots into what we call the DevOps cycle, or the infinity loop. It takes a feature or an enhancement and turns it into something functioning that can be released. When this feature gets into production, we monitor its “behavior” and use that information as feedback for our next planning steps. That’s why it’s an infinite loop.
The benefits of the DevOps approach are undeniable. According to the 2020 DevOps Trends Survey, almost all (99%) of the respondents said DevOps has had a positive influence on their organization. Teams that practice DevOps ship better work faster, simplify incident response, and enhance collaboration and communication across teams.
However, if you go back to the infinity-loop diagram, you might say…
WAIT A MINUTE… WHERE IS THE TESTING?
That’s an excellent question. You could suggest that testing is integrated into the verification stage, or maybe into planning. Of course, we should plan for testing, but the answer is more complicated than you might expect. Continuous testing is the process of building tests and putting them into that infinite loop. We use those tests to get feedback that informs our activities later in the cycle, including the different kinds of tests we run at later stages. Testing doesn’t stop at release, either: once we’ve gone live, getting feedback from our users is also a crucial form of testing.
Let’s take a closer look at continuous testing. Its name emphasizes that it’s not testing at any single point; it’s testing throughout the whole cycle. It starts with validating the requirements and the user stories and analyzing the system requirements. Then we continue with testing the initial features as they’re being developed, and keep testing as we integrate those features with the other parts of the software. And this isn’t the end of the story. Testing is essential before releasing the software, and it continues once the software is in production. And monitoring it while it is in production is what? Right, once again, it’s a form of testing.
In other words, if you think about testing continuously, testing is everywhere. But the fact that testing is everywhere doesn’t mean you should be doing the same kinds of tests. The testing you’ll do at the beginning when you’re planning and creating something new will not be the same as the testing you’ll be doing later on when you’re monitoring the release.
Before we get to our real-life example of continuous manual and automation testing, let’s take a look at some common expectations from continuous testing in DevOps.
THE EXPECTATION FROM CONTINUOUS TESTING
Among the well-known benefits of DevOps are:
- fast product development and delivery
- accelerated releases
- more efficient testing
- continuous and risk-based feedback to improve products
- increased quality and reliability
- reduced costs
However, we need to keep in mind two important things here:
1. People want DevOps and test automation in order to do things faster
The problem with this idea is that doing things faster doesn’t necessarily mean doing things better. We don’t want to push low-quality code into production faster, do we? The faster the process is, the less visibility we have, and the fewer opportunities we have to find any defects. Imagine a fully automated, dehumanized, lightspeed pipeline. You push the button: the code is committed, pushed into the repository, and deployed into a staging environment. Automated tests run, find no issues, and the build gets deployed into production. The problem is that there’s no human visibility, and therefore you’re not really assessing and addressing any risk. As a result, if you just do DevOps and don’t pay enough attention to continuous testing, you don’t get any benefits. You actually just turn everything into chaos, and the business value drops.
2. People want to shift the testing left
The other thing we’ve noticed is that when people talk about continuous testing, they mainly think about automated testing as early as possible in the DevOps pipeline. This is basically the “Shift left” concept. To achieve it, tests are designed to be as stable and as fast as possible, which often means avoiding complex scenarios and keeping the checks simple. But continuous testing is not just about automation, and not just about doing it earlier; it’s about everything that needs to happen throughout the whole cycle. It is about testing everything at all stages, from “Left” to “Right”. How does this look in practice? Learn the best continuous automation strategies and tips from our real-life experience.
How to Start and do it Best: Continuous Testing for Your Business
It’s impossible to imagine continuous software development without continuous testing. Here, we present the fourteen best ways to implement a continuous testing strategy within an Agile development team, using real-life examples. All of these methods are proven in practice and are constantly reviewed and polished to achieve better results.
1. TESTING EARLY
Speaking of shifting left, best practice suggests testing even before making any code changes. On the one hand, that means testing the design and the requirements; on the other, it means doing test-driven development.
2. WORKING WITH REQUIREMENTS
At Mobidev, we strongly believe the requirements should not be something that developers go off and figure out by themselves. Nor should these decisions rest on the shoulders of only the Product Owner, Business Analyst, or testers. Efficient requirements require everyone to collaborate. Testers are usually the voice of the user: they understand what the system must do. When you test something, you figure out not just whether it works, but whether it delivers value, which is incredibly important for business evolution and growth. That’s why, in our projects, developers and testers collaborate to review the requirements. It gives us a perfect starting point for further development.
3. UNIT TESTS
Here is where the magic of automation testing begins. Based on the requirements, our developers start working on a feature by writing the tests for it, even before the feature itself exists. All of those tests will likely fail at first, and the goal is to make them all pass in the end. This is what Test-Driven Development really means.
4. API AUTOMATION
API testing is essential because if you’re developing APIs, other parts of the system will depend on them, and if you break your APIs, you’ll damage those other parts. API automation is generally easier to develop and maintain because APIs don’t change as often as the UI does. It also runs faster and is easier to integrate into a CI/CD pipeline. Our technology stack can vary here: for example, we used NodeJS, Mocha, Chai, and Axios as a request library for one of our projects, while for other projects we prefer the Fetch library.
Running the tests that cover all of our backend endpoints gives us assurance that, at a basic level, the application works. For a particular project, we might also develop tools based on our main API automation to speed up manual testing. With their help, complex scenarios that might take days to execute manually are performed in minutes and with higher accuracy. That could be one of the first visible and measurable benefits of automation on a project.
5. CI/CD INTEGRATION
All of our development and management processes are integrated into a CI/CD pipeline (for example, it could be Azure). As soon as the API automation is ready, we integrate it into the deployment pipeline, so that the checks always run before a new build is produced.
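As an illustration, a pipeline with the API checks gating the build might look like the following Azure Pipelines sketch. The script names (`test:api`, `build`) are hypothetical npm scripts, not from the article:

```yaml
# azure-pipelines.yml (illustrative sketch; names are hypothetical)
trigger:
  - main

pool:
  vmImage: "ubuntu-latest"

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: "18.x"
  - script: npm ci
    displayName: "Install dependencies"
  - script: npm run test:api
    displayName: "Run API automation"   # the gate: fails the pipeline on a red test
  - script: npm run build
    displayName: "Build"                # only runs if the checks above passed
```

Because steps run in order and a failing script fails the pipeline, no build artifact is produced from code that broke the API checks.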
6. E2E TESTING
Once unit testing and API automation are in our CI/CD, we start with the UI automation. UI testing is the most expensive kind to maintain, mainly because the user interface changes the most frequently. And when it comes to mobile automation, many not-so-obvious factors come into play: for example, the iOS and Android emulators the tests run on are slower and less stable and reliable than one would expect.
E2E testing can be implemented in different ways. For instance, for one cross-platform NativeScript app, we chose Appium as the basis of the automation framework. We adopted the nativescript-dev-appium driver in our Mocha tests, written in TypeScript like the app itself. According to the “Shift left” concept, the test suite should be deployed into the CI/CD pipeline, just as the API tests were. Due to the slow nature of device emulators, the fastest option for running such tests is a device farm like Saucelabs; otherwise the test execution might take too long to fit into the build pipeline, sometimes demanding hours to complete. In such a case, a good option is to split the test suite into threads and run them in about 10 minutes on multiple cloud devices in parallel. However, cloud devices aren’t always acceptable for security reasons: on some projects, it’s prohibited to install a test build on a remote device of a 3rd-party service. In those cases, we have no choice but to stay with a ready-to-go E2E mobile test suite and no possibility to shift it “Left”.
7. SHIFTING MOBILE E2E AUTOMATION RIGHT
There’s so much said and done about shifting automation left these days, but what if there’s no such option? At the earliest stages of the DevOps loop, we had requirements analysis, unit testing, and then automated API testing. When it’s time to conduct functional testing of a newly developed feature, doing it manually is the best option. The next crucial stage is regression testing before the release, and that’s where a QA bottleneck might emerge. For example, on one project we had to cover at least 5 Android and 5 iOS devices, and running regression checks manually on all of them took our team more than a week.
As we mentioned before, each stage of the DevOps process requires a different testing approach. The traditional E2E mobile automation we had was originally intended to run early. So when we decided to shift it right to assist regression testing, we redesigned those tests to better fit the new purpose.
8. CHECKING COMPLEX SCENARIOS INSTEAD OF SIMPLER INDEPENDENT CHECKS
For the sake of test stability and performance, automation engineers usually try to avoid developing tests that depend on each other: if one test fails, the rest shouldn’t fail because of it. The downside of this approach can be smaller test coverage, since many important scenarios require numerous subsequent steps that do depend on each other. We can neglect such scenarios when the tests are executed at the early stages of development and leave them for manual testing. But at regression time, we do need them covered by automation as much as possible to save manual testers’ time.
9. MORE THOROUGH CHECKS AT THE PRICE OF EXECUTION SPEED
For automation to shift left, it is critical to keep execution speed at a proper level, so that when a developer pushes code, the tests run as fast as possible and show the green light for deployment. For this reason, most UI E2E tests are not too thorough by nature. For instance, we may check the presence of some UI element on a page, like a button. We may interact with it, pressing it to verify it works. But we usually don’t check its position on the screen, or its design, or its color. What if a button is the same color as the background, invisible to a human? What if the button is not in its place anymore because the markup got broken on a smaller screen size? The autotests will let it all through.
For one of our projects, we had to be sure the app looked good on many mobile devices before each release, which led us to increase the automation test coverage: if anything changes in the UI, we should see it at the regression stage. That’s how we came to screenshot comparison. Let’s take a look at it in detail.
10. SCREENSHOTS COMPARISON
Comparing a whole screen of an app provides the strictest, pixel-perfect testing of the UI. Even the slightest visual change becomes visible, and elements that are unreachable by automation tools like Appium can be covered this way. But it slows down test execution and adds the effort of maintaining a collection of baseline screenshots. Developing automation with screenshot comparison is also tricky, as some areas of the screen change dynamically, like the current date/time or animations. That’s mostly manageable: we can exclude given areas from the comparison, or compare individual elements instead of full screens.
This approach would not be suitable for early testing, but it really shone for us in the regression routine: most of the bugs caught by our automation were found through screenshot comparison. It came in especially handy when we had to check font-size changes across the app on all supported OS versions, or to quickly collect up-to-date screenshots on all supported screen resolutions for a design review.
11. PARAMETRIZING THE TEST SUITE FOR BOTH “LEFT” AND “RIGHT” USAGE
As we add more layers of checks for regression, it’s also an option to skip all of them for a faster run early in the cycle. Each screenshot comparison can be kept in a separate test, and launching the suite with a specific parameter turns them off.
12. AUTOMATION IS ONLY A PART OF TESTING
Not everything can be automated, and it’s okay not to automate everything. In practice, this means you need to decide deliberately what should be automated and what shouldn’t.
For example, in our mobile app project, push messages and biometric login were always checked manually, as were some non-repeatable scenarios. Checks that take hours to do by hand, need lots of different combinations and devices, and have to be repeated many times are great candidates for automation. On the other hand, if something takes less time to check manually and is too challenging to automate and support, manual testing may still be the better option. We made sure our manual testers knew which checks were covered by automation, so they could spend less time on routine tasks and more on creative, exploratory work that finds the bugs automation just can’t. Shrinking regression time from weeks to days allowed us to release more frequently and improved the quality of each release.
13. NON-FUNCTIONAL TESTING: PERFORMANCE, SECURITY, COMPATIBILITY
This is the most important thing that people tend to forget. Early in the pipeline, we don’t have the finished product yet and we’re not running on the production hardware, so many assume we can’t do performance, security, or compatibility testing. But that’s wrong: we can do some of these things. It doesn’t mean we’ll test at the highest level or immediately fix every revealed problem, but it’s good to do at least some of this testing to find out whether there’s a problem to solve later.
We used this logic on the mobile app project mentioned above: we simply monitored the server load while the automation was executing, and that was our initial performance testing. Later, we created a simple load test suite in Neoload that our clients used in their corporate network, extended it over time, and launched it periodically to make sure we were on the right track. We also checked the app against some basic security guidelines, like the OWASP Top 10, and the developers did some checks on their side too. Only much later did we pass a security review via a specialized service, which revealed next to no security risks.
Compatibility testing starts as early as possible, with a manual QA engineer switching devices from time to time during functional testing, and finishes with running the automated suite on all supported devices. We can even extend it by reviewing the usage stats of our end users, who may have devices we don’t cover.
14. THE RESULT IS WHAT MATTERS THE MOST
By spreading testing throughout the cycle, both manual and automated, at all possible levels, we get a quality product and happy customers. Once the product has gone live, it’s time to start monitoring user experience and satisfaction: sending surveys to users and keeping an eye on the metrics the business actually uses, such as the number of new users and how many of them keep using the app. This, too, is a form of testing. It’s essential to use this information as feedback to drive further development.
Wrapping up
Those are the main lessons we have learned from our practice, but following even the best examples shouldn’t limit you. Each case is unique, and every technical decision depends on multiple factors. How to test, when to test, what to automate, and when: all these questions should be thoughtfully considered. Everything you do should serve the highest purpose, happy customers, while achieving the highest quality through reasonable effort.