The idea of Continuous Integration in Software Development has become so popular over the years that it almost seems like everyone knows about it. Search interest grew worldwide for a decade, though it has trended downwards in the past few years.
This has led to the creation of many opinions and many more tools to implement Continuous Integration. To date, the Cloud Native Computing Foundation recognises 36 tools under the term "Continuous Integration & Delivery", but there are many more in the wilds of the Internet.
Not all of them are equal: some are specific to a Platform or Cloud Provider, or require additional setup; others target unique use cases, or are generic enough that switching between them is easy.
So, what do I mean by "Repository Native" CI? I'm talking about the CI tools provided by a Repository Host e.g., BitBucket Pipelines, GitHub Actions and GitLab CI/CD.
These CI tools provide ease of use and require minimal setup and maintenance. Most of them only require adding a single file, e.g.,
bitbucket-pipelines.yml, to the code repository and it 'just' works, providing insight into the running tasks and useful feedback.
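As a minimal sketch of what that single file can look like, here is a hypothetical bitbucket-pipelines.yml for a Node.js project (the image, cache and script commands are assumptions, not from a real project):

```yaml
# bitbucket-pipelines.yml - a minimal, hypothetical example
image: node:18

pipelines:
  default:
    - step:
        name: Lint and Test
        caches:
          - node            # cache node_modules between runs
        script:
          - npm ci          # install exact dependency versions
          - npm run lint
          - npm test
```

Committing this file to the repository root is all the setup required; BitBucket picks it up and runs the step on every push.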
New commits and branches can easily trigger your CI process. Environment Variables for the CI process can be set and controlled in the Code Repository. And most of these Repository Native CI tools can be easily linked to external applications for Notifications.
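Sketching the same ideas in GitHub Actions terms, a workflow file can declare its triggers and pull Environment Variables from repository-level secrets (the branch name, secret name and `make test` target here are illustrative assumptions):

```yaml
# .github/workflows/ci.yml - illustrative sketch
name: CI
on:
  push:
    branches: [main]   # trigger on commits to main
  pull_request:        # and on every pull request

jobs:
  test:
    runs-on: ubuntu-latest
    env:
      # API_BASE_URL is a hypothetical secret set in the repository settings
      API_BASE_URL: ${{ secrets.API_BASE_URL }}
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

The triggers, variables and secrets all live alongside the code, which is exactly the "Repository Native" convenience being described.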
If you host your Repositories on a Cloud Provider and use their CI offering, you'll get a similar experience, but with the additional benefit of easily integrating with other services that the Cloud Provider offers. For example, using Google Cloud Build to push a Container Image to Google Container Registry without needing to set up Service Accounts - in most cases.
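A sketch of that Cloud Build example, assuming a hypothetical image name of my-app - the `images` field tells Cloud Build to push the built image to the registry using its default service account, with no extra credential setup:

```yaml
# cloudbuild.yaml - sketch; the image name "my-app" is hypothetical
steps:
  # Build the container image from the repository's Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']

# Images listed here are pushed to Container Registry after the build
images:
  - 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA'
```

`$PROJECT_ID` and `$COMMIT_SHA` are substitutions Cloud Build provides automatically.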
In terms of maintenance, there is very little; it is pretty much a 'hands-off' approach. Keeping your CI process up to date should always be the main focus.
Compared to an external CI tool, e.g., Concourse, Jenkins and Tekton, "Repository Native" CI offerings are somewhat more generic. External CI tools usually have their own conventions and lend themselves to building more complex CI processes, but that power comes with added complexity, which requires additional knowledge.
There is an initial cost to external CI tools, usually in the form of granting access to the code - "Repository Native" CI tools already have that access. While this cost is usually small, it can quickly grow with the number of repositories needing access and the complexity of the CI process; for instance, when a task requires access to multiple repositories to run.
Another cost comes from giving the external CI tool appropriate access to other services, e.g., Container Registries or a Cloud Provider. Though this is also experienced when using Code Repository Hosting - e.g., BitBucket - and needing to create a VM on a Cloud Provider - e.g., AWS - it is slowly easing as integrations are developed.
One benefit of external CI tools is usually the option to self-host, which allows for enhanced security and tweaking them to run how and where you'd like. But self-hosting carries the greatest cost: maintenance. When your CI server is down or slow, you've got to fix it.
Though, in the world of more complex systems and intensive CI tasks, I'll say that external CI tools do offer some distinct advantages: when self-hosting, you can add more compute resources, whereas some "Repository Native" CI tools are limited; and you can create unique triggers for tasks, e.g., from Slack.
I’ve usually found that the CI process will do short-lived, simple tasks – style checks, static code analysis and unit tests. These simple tasks benefit most from less setup. Less setup means less maintenance, and less maintenance means more productivity. This promotes the use of “Repository Native” CI tools – following the 80-20 rule.
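Those three short-lived tasks can be sketched in GitLab CI/CD terms like so - a sketch assuming a Python project using ruff for style checks and static analysis and pytest for unit tests (all tool choices are assumptions):

```yaml
# .gitlab-ci.yml - sketch of short-lived checks; tool names are assumptions
stages:
  - check
  - test

lint:
  stage: check
  image: python:3.12
  script:
    - pip install ruff
    - ruff check .        # style checks and static code analysis

unit-tests:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt pytest
    - pytest              # unit tests
```

Each job runs in a fresh container and finishes in minutes, which is exactly the kind of workload where zero-setup, repository-native CI shines.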