(Cover picture credits: Sigmund @sigmund via Unsplash)
The commoditization of cloud compute and the proliferation (and adoption) of managed cloud services during the first years of the past decade enabled the co-evolution of other software engineering practices that helped teams build and release software faster. This progression eventually led to DevOps, a practice that has helped reduce the dependency on a centralized operations team during the software development lifecycle.
But things have changed. We just don't "do cloud" anymore. I mean, that is what we did ten years ago, when we spun up virtual machines on the cloud and installed web servers, application containers, and other types of runtimes so we could deploy and run our applications. In that context, DevOps emerged as a practice that co-evolved with the technology.
Now, cloud providers offer those runtimes as a service, so we don't even need to spin up the virtual machine hosts. Infrastructure resources are now utilities, priced per use and accessible through a whole new set of programmatic APIs and infrastructure-as-code frameworks. This is known as serverless, and it is making the DevOps practice co-evolve again, the same way cloud did in the 2010s.
To some extent, yes, DevOps is a legacy practice. At least some bits of it are, probably the most technical ones related to the automation technologies. Because we know DevOps is much more than that. As Emily Freeman (author of DevOps For Dummies) says:
"DevOps is an engineering culture of collaboration, ownership, and learning with the purpose of accelerating the software development lifecycle from ideation to production."
We need new technical guiding principles.
Let me give you an example. In a serverless context, what is an environment? How would you define your SDLC stages when the concept of server instances no longer exists?
Using an analogy with operating systems design, a lifecycle automation solution in a serverless architecture has close connections with traditional package managers. Similar to operating systems and other plug-in-based software, a package in this context is a self-contained unit that provisions the specific infrastructure that the application requires for running its business logic.
The equivalent of a package in the context of cloud-based software would be the combination of the deployable artifacts that compose a given application and the scripts that perform the actual deployment. Therefore, the lifecycle automation management solution is a system that helps in producing the service artifacts, creates the packages, stores them in a central repository, enhances them with metadata, controls their versioning, manages their dependencies, and allows developers to perform a-la-carte deployments.
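To make the package analogy concrete, here is a minimal sketch in Python of what such a package and its central repository could look like. All names (`Package`, `PackageRepository`, the field names) are illustrative assumptions, not part of any real package manager:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Package:
    """A self-contained deployable unit: artifacts plus the scripts that deploy them."""
    name: str
    version: str              # semantic version, e.g. "1.2.0"
    artifacts: tuple          # paths/URIs of the deployable artifacts (hypothetical)
    deploy_script: str        # entry point that provisions infrastructure and deploys
    dependencies: tuple = ()  # (name, version) pairs this package requires
    metadata: dict = field(default_factory=dict)  # build info, owners, etc.

class PackageRepository:
    """Central store that versions packages and resolves the latest release."""
    def __init__(self):
        self._store = {}  # name -> {version -> Package}

    def publish(self, pkg: Package) -> None:
        self._store.setdefault(pkg.name, {})[pkg.version] = pkg

    def latest(self, name: str) -> Package:
        versions = self._store[name]
        # Compare versions numerically, not lexicographically ("1.10.0" > "1.9.0").
        newest = max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))
        return versions[newest]
```

A lifecycle automation solution built on this idea would publish a package per service build and let developers pick any published version for an à-la-carte deployment.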
The following principles will help modern development teams align with this view and come up with an efficient build-and-release process, so they can deploy applications on serverless architectures with confidence.
1. Version Control
Include both source code and binary artifacts under version control so that you can keep track of changes over time. We recommend using a distributed system for source code management (e.g., Git) so you can work on changes in parallel as a team before merging them, even when the network is unavailable. Use meaningful commit messages to describe the changes you are making (your future self will be thankful for that). Avoid having multiple long-lived branches and keep the mainline in a releasable state (e.g., favor trunk-based development over long-lived branching models such as GitFlow). Only use short-lived branches for features and fixes that can be merged into the mainline once complete and tested.
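One way to keep branches short-lived is to detect the ones that have outlived the policy. The sketch below, a hypothetical helper rather than any standard tool, separates the pure policy check from the `git` CLI call so the logic is easy to test; the three-day threshold is an arbitrary assumption:

```python
import subprocess
import time

MAX_BRANCH_AGE_DAYS = 3  # threshold for "short-lived"; tune to your team's policy

def stale_branches(branches, now=None, max_age_days=MAX_BRANCH_AGE_DAYS):
    """Return branch names whose last commit is older than the threshold.

    `branches` is an iterable of (name, last_commit_unix_timestamp) pairs."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    return [name for name, ts in branches if name != "main" and ts < cutoff]

def local_branches():
    """List local branches with their last commit time via the git CLI."""
    out = subprocess.check_output(
        ["git", "for-each-ref",
         "--format=%(refname:short) %(committerdate:unix)", "refs/heads"],
        text=True)
    return [(name, int(ts)) for name, ts in
            (line.rsplit(" ", 1) for line in out.splitlines() if line.strip())]
```

Running `stale_branches(local_branches())` inside a repository would list the candidates for merging or deleting; wiring it into a scheduled CI job is left as an exercise.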
2. Developer Collaboration
Use tools and platforms that allow developers to collaborate and contribute efficiently to developing the business services (e.g., GitHub). Incentivize contribution mechanisms such as pull requests and use them as coaching opportunities. Along the same lines, enhance source code repositories with documentation that helps developers understand how they can contribute to the project, including licensing, notice, and contributing guidelines.
3. Continuous Integration
Attempt small, incremental changes to the source code and integrate them often with the mainline. Write unit and integration tests for your features and run them locally even before submitting a contribution to the mainline. You can configure the version control system so that the Continuous Integration (CI) pipeline automatically runs the build, unit tests, integration tests, and code quality scans upon every commit.
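The commit-triggered sequence of stages can be sketched as a fail-fast pipeline runner. This is a toy model, not any real CI system's API; the stage names and the lambdas standing in for real build steps are assumptions:

```python
def run_pipeline(stages):
    """Run CI stages in order; stop at the first failure.

    `stages` maps stage name -> zero-argument callable returning True/False."""
    results = {}
    for name, stage in stages.items():
        ok = stage()
        results[name] = ok
        if not ok:
            break  # fail fast: later stages never run on a broken build
    return results

# The classic commit-triggered sequence; replace the lambdas with real steps.
pipeline = {
    "build": lambda: True,
    "unit_tests": lambda: True,
    "integration_tests": lambda: False,  # a failing stage halts the pipeline
    "quality_scan": lambda: True,        # never reached in this example
}
```

The fail-fast behavior is the point: a broken integration test should prevent the quality scan (and any later deployment) from ever running.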
You can also design the build process for quick feedback and ring-fence the services' unit and integration tests by mocking all external dependencies. Use deployment preview techniques when available and applicable. You should be able to perform an additional quality check of the generated artifacts before deploying them to an actual environment.
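Ring-fencing tests by mocking external dependencies can be done with Python's standard `unittest.mock`. In this sketch, the rates service and the `fetch` method on its client are hypothetical stand-ins for whatever external API your service calls:

```python
from unittest import mock

def get_exchange_rate(client, currency):
    """Business logic that depends on an external rates API (hypothetical)."""
    response = client.fetch(f"/rates/{currency}")
    return response["rate"]

# Ring-fence the test: the external client is replaced by a mock, so the
# test runs offline and quickly, exercising only our own logic.
fake_client = mock.Mock()
fake_client.fetch.return_value = {"rate": 1.25}

rate = get_exchange_rate(fake_client, "EUR")
```

Because nothing leaves the process, such tests stay fast and deterministic, which is what makes running them on every commit feasible.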
4. Infrastructure as Code
Create infrastructure-as-code repositories to run deterministic and consistent deployments that spin up the necessary cloud infrastructure to run the services and, at the same time, get all the artifacts deployed. Write fast and straightforward post-deployment tests that are executed automatically upon any deployment to exercise the service and its essential dependencies.
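A fast post-deployment test can be as simple as validating the service's health endpoint: is the expected version live, and can the service reach its essential dependencies? The payload shape below is an assumption for illustration; adapt it to whatever your services actually expose:

```python
def smoke_check(health_payload, expected_version, required_dependencies):
    """Validate a deployment's health endpoint response (shape is an assumption).

    Returns a list of failures; an empty list means the smoke test passed."""
    failures = []
    if health_payload.get("version") != expected_version:
        failures.append(f"version mismatch: {health_payload.get('version')}")
    deps = health_payload.get("dependencies", {})
    for dep in required_dependencies:
        if deps.get(dep) != "ok":
            failures.append(f"dependency unreachable: {dep}")
    return failures
```

Running this automatically after every deployment gives the quick feedback the principle calls for: either the new version is live and wired to its dependencies, or the deployment is flagged immediately.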
You will have to work under the assumption that all the artifacts generated from the mainline are potential release candidates, so you must deploy services often and obtain quick feedback. Also, use auditable and traceable deployments to ensure you know what version of the services is running on each environment every time.
5. Promotion through the SDLC
It would be most excellent to have multiple deployment environments (e.g., SDLC stages) so you can phase the rollout of the software and ensure it is adequately tested. Configure your infrastructure-as-code repositories so that the Continuous Deployment (CD) pipeline automatically runs the deployment, smoke tests, and performance tests upon artifact generation.
If you are working from a feature branch, deploy services on a cloud sandbox. Alternatively, if you are working from the mainline, deploy them through a lifecycle of environments where they can be promoted and tested until they reach the release candidate status. This process will help you make sure a version of the software is not deployed to a later environment before being deployed and tested at an earlier stage.
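The promotion rule, together with the auditable, traceable deployments mentioned earlier, can be sketched as a small ledger that records which version runs where and refuses to skip a stage. The stage names and class name are illustrative assumptions:

```python
from datetime import datetime, timezone

PIPELINE = ["dev", "staging", "production"]  # example SDLC stages (assumption)

class PromotionLedger:
    """Tracks which version runs in each environment and enforces ordered promotion."""
    def __init__(self, stages=PIPELINE):
        self.stages = stages
        self.deployed = {}   # environment -> currently deployed version
        self.audit_log = []  # append-only trail of every successful deployment

    def promote(self, version, target):
        idx = self.stages.index(target)
        # A version may only reach stage N after being deployed to stage N-1.
        if idx > 0 and self.deployed.get(self.stages[idx - 1]) != version:
            raise ValueError(
                f"{version} must pass {self.stages[idx - 1]} before {target}")
        self.deployed[target] = version
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), target, version))
```

The `deployed` map answers "what version is running on each environment?" at any time, and the append-only `audit_log` provides the traceability that makes every promotion auditable.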
All these principles, and the tools supporting them, are widely available in the software development industry, to the point that some have even become de facto standards. Despite this industrialization, how you use those tools, and the custom pieces you add on top of them, is what makes this solution a good platform capability.
This new paradigm, based on a serviceful architecture running on serverless computing, challenges the traditional concept of environments, since everything is composed of small, independent building blocks, including the infrastructure.
Providing lifecycle automation management functionalities will free development teams from worrying about these new constructs, doing the heavy lifting of environment provisioning and promoting their artifacts from their workstations to production. This type of automation can be considered a core capability of internal software platforms and something you may be interested in building for your developers.