Continuous releases are a tricky animal in the software engineering world, mostly because we want to make sure that we are releasing stable software and that the process of releasing it is efficient. As a Software Engineer in Test, I am going to touch on a few ideas for how we can ensure that the software we release in this model is stable. There are a few key characteristics we are looking for.
For releases, we have to make sure that at any given time our develop branch is stable and good to go. This is because on release day, the only thing we should need to do is merge develop into our release branch. The question is: where does the development of features take place? The development of these features happens in a branch that is isolated from develop. This means that any feature has to be implemented on a feature branch that branches off of develop. This allows develop to remain in pristine condition while we work on our feature branches, and it allows testing to be done on the feature branch as well. It is a method that, in my opinion, keeps things very clean and isolates most of the potential problems.
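As a rough sketch, the feature-branch isolation above looks like this in plain git commands. The repo, branch, and file names are made up for illustration, and a throwaway repo is used so the flow can run end to end:

```shell
# Throwaway repo so the commands run end to end; names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "dev"
git config user.email "dev@example.com"
git checkout -q -b develop
git commit -q --allow-empty -m "initial commit on develop"

# Branch off develop for the story; develop stays pristine meanwhile.
git checkout -q -b feature/login-form
echo "login form" > login.txt
git add login.txt
git commit -q -m "implement login form"

# Once the feature has been reviewed and tested, merge it back.
git checkout -q develop
git merge -q --no-ff feature/login-form -m "merge feature/login-form"
```

The `--no-ff` merge keeps the feature's commits grouped under a merge commit, which makes it easy to see later which branch a change came in on.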
Automated tests are a very important part of continuous delivery. Having them allows us to run test logic on different parts of the application at the push of a button. I do not have to go back and manually check all the modules I have worked on previously when I can run my automated tests and they will do it for me. The biggest benefits of having a slew of automated tests are 1) the reduced time it takes to implement modules and 2) the building up of a regression suite. The regression suite will be your best friend in this continuous delivery model, because when you have the feature branch in isolation, you can merge it back into develop after a new test has been written. I usually use TestComplete for my automated tests, but any other technology will do.
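Here is a tool-agnostic sketch of the "push a button, run everything" idea. It assumes each regression check is an executable script under a `tests/` directory that exits 0 on pass and non-zero on failure; that layout and convention are my own stand-ins, not anything TestComplete-specific:

```shell
# Minimal push-button regression runner; the tests/ layout and the
# exit-code convention are assumptions, not tied to any one tool.
set -u
suite=$(mktemp -d)
mkdir -p "$suite/tests"

# Two stand-in checks so the runner has something to execute.
printf '#!/bin/sh\nexit 0\n' > "$suite/tests/check_login.sh"
printf '#!/bin/sh\nexit 0\n' > "$suite/tests/check_search.sh"
chmod +x "$suite"/tests/*.sh

failures=0
for t in "$suite"/tests/*.sh; do
  if "$t"; then
    echo "PASS $(basename "$t")"
  else
    echo "FAIL $(basename "$t")"
    failures=$((failures + 1))
  fi
done
echo "regression finished with $failures failure(s)"
```

Because each new feature adds a check to `tests/`, the suite naturally grows into the regression safety net described above.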
The way this comes together, although it might be different in other places, is as follows in my experience. When a story comes in, the developer branches off the main dev trunk line and starts implementing that story. After the developer is done implementing it, a peer review is done with another developer and a tester. The purpose of the tester is to gain information about the story and point out any potential problems that might occur. After the peer review passes, the developer can begin implementing automated tests. These tests need to be peer reviewed as well, and once they pass review, the regression suite can be run on that branch. If any problems are found, they are taken back to the developer until they have been fixed; to verify a fix, we rerun the failing automated test until it passes. Once all tests pass regression, we merge that branch back into develop. This cycle assures us that the things in develop have been thoroughly tested and have gone through the full cycle. Adding the new automated tests to the repo should also be done after the regression passes, because that step tells you the feature is now part of develop and should be kept track of.
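The merge-gating step of that cycle can be sketched as a script: run the suite on the feature branch, and only merge into develop when it comes back green. Here `run_regression` is a hypothetical stand-in for whatever actually launches your suite, and the branch names are invented; a throwaway repo keeps the sketch runnable:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "dev"
git config user.email "dev@example.com"
git checkout -q -b develop
git commit -q --allow-empty -m "initial commit"
git checkout -q -b feature/story-123
git commit -q --allow-empty -m "implement story 123"

# Stand-in for launching the real regression suite on this branch;
# it should exit non-zero if any test fails.
run_regression() { true; }

merged=no
if run_regression; then
  # Green suite: the branch is allowed back into develop.
  git checkout -q develop
  git merge -q --no-ff feature/story-123 -m "merge story 123"
  merged=yes
else
  # Red suite: back to the developer; rerun the failing test after the fix.
  echo "regression failed; not merging feature/story-123"
fi
echo "merged: $merged"
```

The key design point is that the merge command only ever runs behind the regression check, so nothing untested can land on develop.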
Now, this seems like a lot of overhead, but from the isolation perspective it makes sense when you are developing all of these new features. You could literally go back to a point in your commits and find out which features were implemented there by rerunning your regression suite.
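Stepping back through history like that is just a checkout of an older commit followed by a suite run. A small sketch, again with made-up feature and file names in a throwaway repo:

```shell
# Walk back to an earlier commit; a feature added later simply is not
# there, so its regression test would fail at that point in history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "dev"
git config user.email "dev@example.com"
git checkout -q -b develop
echo "search" > search.txt
git add search.txt
git commit -q -m "add search feature"
first=$(git rev-parse HEAD)
echo "login" > login.txt
git add login.txt
git commit -q -m "add login feature"

# Jump back to the first commit: login does not exist yet, so running
# the suite here tells you the login feature landed later.
git checkout -q "$first"
```

At that detached-HEAD state you would kick off the regression suite exactly as usual; the pattern of passes and failures maps directly to which features existed at that commit.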
Join me next time as we talk about coming up with your first regression suite.