Continuous Integration and Continuous Deployment are centerpieces of modern software development. They're well-known concepts. A bit too well known, I'd say, in the sense that they're used so ubiquitously that few of us spend much time thinking about what they actually mean.
Here we'll try to formulate (and nuance a bit) the key ideas that sit behind each of them, what makes them different, and what makes them work together.
Continuous Integration is a product of Extreme Programming (a lot of the things we take for granted today come from the days of good ol' XP). The gist of CI is that developers shouldn't sit on the code they write; instead, code should be integrated with the project and with other developers' code as soon as possible, so that the team can ensure everything still behaves as intended and bugs can be rooted out ASAP.
For that purpose, Continuous Integration relies on the following practices (these are not hard rules, nor are they universally agreed upon; rather, they're a compilation of advice given over time by XP practitioners):
- a repository: it's easy to forget that version control hasn't always been around, and even when it came to be, there were things before Git (just watch Linus Torvalds' famous Google presentation on Git; many teams used plain tarball archives to integrate code). It's also important to consider that the ability to branch is an enemy of CI. The goal is to integrate: according to Martin Fowler, developers should merge code often and branch only when really necessary.
- self-testing: each piece of code should be able to stand on its own. This means developers should test before integrating (whether automatically or manually doesn't matter). The first sketch after this list shows what this can look like.
- automated integration: since integrations must happen often, there must be a process in place to ensure code quality and integrity in an automated way, one that can run continuously as well as on demand. Unit tests, static code analysis, integration tests and so on must happen often (the second sketch after this list shows what such a gate might look like).
- speed: since the process above must take place often, it must be fast. In a nutshell, it should scale as the project grows, as the quantity and complexity of the code grow.
- fixes must come with unit tests: a bug means that something escaped the system, so a fix should bring the tests up to speed (whether that means fixing an existing test, adding new ones, or both). The first sketch after this list includes such a regression test.
- transparency: tests and their results must be accessible and clear to everyone. Manual test plans must be shared and known across the team, and the results of test runs must be made public; the same goes for automated tests.
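To make the self-testing and fixes-with-tests points concrete, here's a minimal sketch in Python with pytest. The module, the pricing logic and the bug it fixes are all hypothetical; the point is simply that code ships with its own tests, and a bug fix ships with a regression test pinning the corrected behaviour down.

```python
# test_pricing.py -- module and logic are hypothetical, for illustration only
import pytest


def apply_discount(price: float, quantity: int) -> float:
    """Apply a 10% bulk discount for orders of 10 items or more."""
    if quantity < 0:
        # The fix for a (hypothetical) reported bug: negative quantities
        # used to produce negative totals.
        raise ValueError("quantity cannot be negative")
    total = price * quantity
    return total * 0.9 if quantity >= 10 else total


def test_no_discount_below_threshold():
    assert apply_discount(5.0, 9) == pytest.approx(45.0)


def test_discount_at_threshold():
    assert apply_discount(5.0, 10) == pytest.approx(45.0)


def test_negative_quantity_rejected():
    # Regression test added alongside the bug fix (see "fixes must come
    # with unit tests" above).
    with pytest.raises(ValueError):
        apply_discount(5.0, -1)
```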
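As for the automated integration step, here's a minimal sketch of such a gate as a single script that both developers and the CI server can run. The tool choices (ruff for static analysis, pytest for tests) are assumptions; substitute whatever your project already uses.

```python
#!/usr/bin/env python3
"""ci.py -- a minimal sketch of an automated integration gate.

Runs the same checks locally and on the CI server: static analysis first
(it fails fast), then the test suite. The tools named here are assumptions.
"""
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],   # static code analysis
    ["pytest", "--quiet"],    # unit and integration tests
]


def main() -> int:
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail the build on the first broken check.
            print(f"check failed: {' '.join(cmd)}")
            return result.returncode
    print("all checks passed")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A CI server (GitHub Actions, Jenkins and the like) would then run `python ci.py` on every push and merge, keeping the local and automated checks identical.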
Now, the initial proponents of CI included automated delivery in the process. Yes, some time ago CI also meant what we today call CD - Continuous Delivery, in that CD was a step in CI.
Today, we think of CD as a domain of its own. This is natural given the complexity of the systems we deliver. We rarely deliver a single piece of software but rather an ensemble of frontend(s), backends, APIs, and perhaps content management tools, all working together. As a domain of its own, CD is governed by a few principles:
- the item(s) to deliver must have gone through the CI process: that is, the quality of the item(s) to be delivered must be ensured before they come up for delivery.
- the configuration comes from a secure place: when we develop, we generally keep our configuration together with the environment we work in. However, this is insecure and unfit for a production environment, where we must ensure that access to secret configuration items is as restricted as possible and only available to the automated delivery process (see the first sketch after this list).
- data integrity: while individual services may be stateless, applications as a whole rarely are. Any delivery must therefore ensure that data integrity is not affected, that schema changes happen safely in a way that won't affect the running application, and that failures result in graceful rollbacks (see the migration sketch after this list).
- no downtime: today we have many ways to ensure that deliveries come with little to no downtime. This in itself poses challenges (particularly with respect to data integrity).
- monitoring: the process doesn't end with a deployment. A successful delivery isn't one that merely had no errors along the way but one that ensures the application is running as expected. This means tracking metrics that demonstrate expected behaviour (e.g. no increased error rate, no new errors, perhaps lower latency if a performance update was shipped) in each layer of the application (backend, frontend, data). The final sketch below shows a minimal post-deploy check.
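To illustrate the configuration point: a minimal sketch where secrets never live in the repository, and the delivery process injects them as environment variables (or pulls them from a vault) at release time. The variable names below are hypothetical.

```python
import os

# Nothing secret lives in the codebase; the deployment pipeline injects
# secrets into the environment at release time. Variable names are
# hypothetical.


def load_config() -> dict:
    try:
        return {
            "database_url": os.environ["DATABASE_URL"],
            "api_token": os.environ["PAYMENT_API_TOKEN"],
        }
    except KeyError as missing:
        # Fail loudly at startup rather than at the first request.
        raise RuntimeError(f"missing required configuration: {missing}") from None
```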
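For the data-integrity point, here's a minimal sketch of a guarded schema change using SQLite: the whole migration runs inside one transaction, so any failure rolls the database back instead of leaving it half-migrated. Table and column names are hypothetical, and a real project would more likely lean on a migration tool (Alembic, Flyway and friends) that offers the same guarantees.

```python
import sqlite3


def migrate(db_path: str) -> None:
    # Manage the transaction explicitly so the DDL and the data backfill
    # commit (or roll back) together.
    conn = sqlite3.connect(db_path, isolation_level=None)
    try:
        conn.execute("BEGIN")
        # Additive change first: old application versions keep working,
        # since they simply ignore the new column.
        conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")
        conn.execute("UPDATE users SET display_name = username")
        conn.execute("COMMIT")
    except Exception:
        # Graceful rollback: the schema and data return to their
        # pre-migration state.
        conn.execute("ROLLBACK")
        raise
    finally:
        conn.close()
```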
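And for monitoring, a minimal sketch of a post-deployment check that polls a metrics endpoint and fails the delivery (so the pipeline can trigger a rollback) if the error rate regressed. The endpoint, response shape and thresholds are all hypothetical; in practice these numbers would come from your metrics system (Prometheus, Datadog and the like).

```python
import json
import sys
import time
import urllib.request

HEALTH_URL = "https://example.com/internal/metrics"  # hypothetical endpoint
MAX_ERROR_RATE = 0.01  # fail the rollout above 1% errors
CHECKS = 10            # ~5 minutes at one check every 30 seconds


def error_rate() -> float:
    # Assumes the endpoint returns JSON like {"errors": 3, "requests": 1200}.
    with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
        metrics = json.load(resp)
    return metrics["errors"] / max(metrics["requests"], 1)


def main() -> int:
    for _ in range(CHECKS):
        rate = error_rate()
        if rate > MAX_ERROR_RATE:
            print(f"error rate {rate:.2%} above threshold, rolling back")
            # A non-zero exit tells the pipeline the delivery failed.
            return 1
        time.sleep(30)
    print("deployment looks healthy")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```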