We had a problem: development was seriously outpacing our testing velocity, and our testers simply didn't have time to run thorough regressions. This of course led to bugs sneaking out the door and affecting our production users. Something had to change, so we decided to redirect some of our development capacity toward creating automated tests to lessen the burden on our QA.
Here is where things start to get interesting. Cypress publishes base Docker images, and we built our Cypress test image on top of them. We did the containerization in the same Jenkins pipeline as our main app to ensure that the application and its tests stayed in sync. Both the test image and the application image were then pushed to our Artifactory and tagged with the version number.
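The test image can be a thin layer on top of a published Cypress image. This is a minimal sketch, assuming the `cypress/included` flavor of image (which bundles the Cypress binary and browsers); the tag and file paths are illustrative, not our actual values:

```dockerfile
# Sketch only — image tag and paths are assumptions, not our real setup.
# cypress/included images ship the Cypress binary; the default
# entrypoint runs `cypress run` when the container starts.
FROM cypress/included:13.6.0

WORKDIR /e2e

# Copy in the test configuration and the spec files
COPY cypress.config.js .
COPY cypress ./cypress
```

Building this in the same pipeline run as the application image means the two can share a version tag, so a given test image always corresponds to exactly one application build.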
We use Kubernetes to orchestrate our environments and Helm to manage the Kubernetes configuration. Helm has an awesome feature called Helm tests that allows a test image to be run against a chart and its status reported back. We configured our Helm charts to use the test image, and we were all set to run our Cypress tests with the `helm test` command.
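A Helm test is just a pod template in the chart carrying the `"helm.sh/hook": test` annotation; `helm test` launches it and reports the pod's exit status. A sketch of what such a template might look like — the registry, names, and service URL here are placeholders, not our actual configuration:

```yaml
# templates/tests/cypress-test.yaml — illustrative sketch, names are assumptions
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-cypress-test"
  annotations:
    # This annotation is what makes the pod run under `helm test`
    "helm.sh/hook": test
spec:
  containers:
    - name: cypress
      image: "registry.example.com/myapp-tests:{{ .Chart.AppVersion }}"
      env:
        # CYPRESS_-prefixed env vars override Cypress config values,
        # so the suite points at the in-cluster service for this release
        - name: CYPRESS_BASE_URL
          value: "http://{{ .Release.Name }}-myapp"
  restartPolicy: Never
```

If the Cypress run fails, the container exits non-zero, the pod is marked failed, and `helm test` reports the failure.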
The final result of all this is a set of automated regression tests that run on every release to every environment. Any failure in our Helm tests is reported as a failure of the deploy. This gives our QA team confidence that, before they even get to feature testing, the new image has passed the baseline regression checks. We have a saying: "If you ain't breaking, you ain't building." Bugs are inevitable, but now we are able to catch them before QA spends any effort, and they can concentrate on feature testing, which is where we provide value to our users.
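Wiring this into the deploy is straightforward, since a failed `helm test` returns a non-zero exit code. A sketch of what the pipeline stage could look like — the release and chart names are placeholders:

```groovy
// Jenkinsfile stage sketch — release/chart names are assumptions
stage('Deploy and verify') {
    steps {
        // --wait blocks until the release's pods are Ready,
        // so the tests run against a live deployment
        sh 'helm upgrade --install myapp ./charts/myapp --wait'
        // A failing test pod makes this step exit non-zero,
        // which fails the stage and therefore the deploy
        sh 'helm test myapp --logs'
    }
}
```

Because the test image is tagged with the same version as the application image, the suite that runs here is the one built alongside the exact code being deployed.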
Moving forward, we plan to expand our test suite to cover more of our edge cases. As our QA finds bugs manually, we'd like to incorporate tests for them so they won't be missed again. Additionally, we've considered running the test suite against production periodically as a health check, letting us catch issues before users find them.