CI/CD is ultimately about getting your ideas to market faster by continuously shipping code in smaller, more testable increments. However, adopting it successfully requires coordinated buy-in from the entire engineering and testing teams, tools to facilitate the process, and major process changes.
Moving to CI/CD is risky if you don't take the time to think through how to handle the transition.
So what are the common mistakes teams make when switching to CI/CD?
Making it too easy to bypass pipelines
CI/CD is kind of like working out — it requires a specific set of habits, and a lot of discipline to stick to them. If you skip too many days at the gym, you fall out of the habit and stop going entirely.
In a similar vein, if you give your team a way to bypass part of your CI/CD pipeline, the whole process won't stick. When your (hopefully automated) testing catches bugs, it'll be too tempting to bypass your checks, convincing yourself you'll fix the issues later.
Only in the most extraordinary circumstances should you give anyone a way to bypass pipelines.
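On GitHub, for example, branch protection rules are one concrete way to remove the escape hatch: require your CI status checks to pass before merging, and apply the rule to admins too. Here's a minimal sketch using GitHub's branch protection REST API (the owner, repo, branch, and check names are placeholders, and it assumes a token with admin scope in GITHUB_TOKEN):

```python
# A minimal sketch of locking down a GitHub branch so CI checks
# cannot be bypassed. OWNER, REPO, BRANCH, and the check names are
# placeholders; adapt them to your repository.
import os
import requests

OWNER, REPO, BRANCH = "your-org", "your-repo", "main"

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        # Merges are blocked until these status checks pass.
        "required_status_checks": {
            "strict": True,
            "contexts": ["ci/unit-tests", "ci/integration-tests"],
        },
        # Crucially, admins get no escape hatch either.
        "enforce_admins": True,
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "restrictions": None,
    },
)
resp.raise_for_status()
```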
Picking the wrong metrics to track, or not tracking metrics at all
Ideally, CI/CD should help you get your ideas to market faster, reduce the risk of any given deployment, ship fewer bugs to production, and ultimately make your team happier because they're shipping more code.
If those are your goals, the metrics you track should align with them. A metric like aggregate lines of code tells you nothing about bugs, the customer experience, or velocity.
Your metrics should measure velocity and highlight chokepoints in your processes
Velocity requires every step of the continuous delivery process to happen quickly, with minimal transition friction. Track how long the transition takes between each step, and the main causes of slow transitions. Some questions to ask (the first is sketched in code after this list):
- How long does it take for your team to discover and review pull requests?
- Is your unit and integration test coverage high?
- How often are your tests failing?
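Review latency is often the easiest of these to quantify. Here's a rough sketch that pulls recently closed pull requests from GitHub's REST API and computes the median wait for a first review; OWNER and REPO are placeholders, and it assumes a token in GITHUB_TOKEN:

```python
# A rough sketch of measuring one velocity chokepoint: how long pull
# requests wait for their first review.
import os
from datetime import datetime
import requests

OWNER, REPO = "your-org", "your-repo"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
API = f"https://api.github.com/repos/{OWNER}/{REPO}"

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

pulls = requests.get(f"{API}/pulls", headers=HEADERS,
                     params={"state": "closed", "per_page": 50}).json()

waits = []
for pr in pulls:
    reviews = requests.get(f"{API}/pulls/{pr['number']}/reviews",
                           headers=HEADERS).json()
    submitted = [parse(r["submitted_at"]) for r in reviews if r.get("submitted_at")]
    if submitted:
        opened = parse(pr["created_at"])
        # Hours between the PR opening and its first review.
        waits.append((min(submitted) - opened).total_seconds() / 3600)

if waits:
    print(f"Median hours to first review: {sorted(waits)[len(waits) // 2]:.1f}")
```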
Your metrics should track the customer experience
CI/CD requires moving quickly and confidently. Automated tests provide a layer of protection against a broken customer experience, but they rarely catch everything.
- Identify key business metrics that reflect the true state of the customer experience.
- Make dashboards of key metrics publicly viewable, so if a deploy goes out that breaks the customer experience, you can catch it quickly.
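What this looks like in practice varies by stack, but the shape is usually a post-deploy check that compares a key business metric against its baseline and fails the pipeline on a regression. Everything in this sketch (the metrics endpoint, the metric name, the 20% threshold) is invented for illustration; adapt it to whatever metrics store you actually use:

```python
# A hypothetical sketch of a post-deploy canary check against a
# business metric. The endpoint and metric name are placeholders.
import sys
import requests

METRICS_URL = "https://metrics.example.com/api/query"  # placeholder

def conversion_rate(window_minutes: int) -> float:
    """Fetch signup conversion over the given trailing window."""
    resp = requests.get(METRICS_URL, params={
        "metric": "signup_conversion_rate",
        "window": f"{window_minutes}m",
    })
    resp.raise_for_status()
    return resp.json()["value"]

# Compare the 15 minutes after the deploy to the prior day's baseline.
baseline = conversion_rate(window_minutes=24 * 60)
current = conversion_rate(window_minutes=15)

if current < 0.8 * baseline:
    print(f"Conversion dropped: {current:.3f} vs baseline {baseline:.3f}")
    sys.exit(1)  # fail the pipeline step so the deploy can be rolled back
print("Customer-experience metrics look healthy.")
```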
Building CD on top of unstable CI
Automated tests are essential to any successful CI/CD process: if every release requires human intervention to test for bugs, continuous deployment is untenable.
Lack of test coverage
Moving to CI/CD without test coverage defeats the entire purpose of the process. It doesn't matter how often you push to production — if your product has bugs, you are delivering a poor customer experience.
Tests help you spot bugs before your customers do, and culturally prevent you from shipping buggy code.
You by no means need 100% test coverage, but you should always ensure that the flows most critical to the customer experience, and any flows governed by business logic, are thoroughly tested.
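As a concrete example, here's a minimal pytest sketch covering one business-logic rule; `apply_discount` is a hypothetical stand-in for whatever pricing logic your product actually has:

```python
# A minimal pytest sketch of covering a business-logic flow. The
# apply_discount function is a hypothetical stand-in for real
# pricing logic.

def apply_discount(price: float, code: str) -> float:
    """Hypothetical pricing rule: 'WELCOME10' takes 10% off."""
    if code == "WELCOME10":
        return round(price * 0.9, 2)
    return price

def test_welcome_discount_applies():
    assert apply_discount(100.0, "WELCOME10") == 90.0

def test_unknown_code_charges_full_price():
    # Silently undercharging or overcharging customers is exactly the
    # kind of business-logic bug these tests exist to catch.
    assert apply_discount(100.0, "BOGUS") == 100.0
```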
Flaky tests based on poor testing frameworks or standards
If you write tests that need to change every time you refactor code, you may end up worse off than if you had never written them in the first place. Tests should be abstract enough to track user stories (how will this user progress through the onboarding experience?), and not so specific that something as small as a change in button text causes a failure.
Furthermore, if you do not have a strong process for training engineers to write tests, implementing them will be untenably slow and defeat the purpose of CI/CD.
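To make the contrast concrete, here's a sketch using Playwright's Python API (the app URL, selectors, and flow are all hypothetical). The commented-out version breaks the moment marketing rewords the button; the resilient version is keyed to the element's role and a stable test id, so it tracks the user story rather than the pixels:

```python
# A sketch of brittle vs. resilient selectors, using Playwright's
# Python API. The URL and selectors are invented for illustration.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    page = p.chromium.launch().new_page()
    page.goto("https://app.example.com/onboarding")

    # Brittle: breaks when "Get started now!" becomes "Start free trial".
    # page.click("text=Get started now!")

    # Resilient: keyed to the element's role and a stable test id.
    page.get_by_role("textbox", name="Email").fill("user@example.com")
    page.get_by_test_id("onboarding-submit").click()

    # Assert the story's outcome, not incidental page details.
    page.wait_for_url("**/welcome")
```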
Lack of alignment between development, testing, and other stakeholders in the customer experience
In a strong culture of CI/CD, the engineers who write the tests, the testers who bug-bash flows, and the product teams who manage the customer experience should all have a say in which tests are written, and their ongoing maintenance.
Up front, product managers should enumerate the user stories for testing, engineers should implement tests per those stories, and testers should ensure the customer experience behaves as expected, as well as monitor production metrics to ensure no business logic fundamentally fails.
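One lightweight way to keep that alignment visible is to tag each test with the story it validates, so product, engineering, and QA can all audit coverage against the same list. In this sketch the story IDs and the `signup` helper are hypothetical, and the custom `story` mark would need to be registered in pytest.ini to avoid warnings:

```python
# A sketch of tying tests to the user stories product enumerated up
# front. The signup helper is a hypothetical stand-in for a real flow.
import pytest

def signup(email: str, password: str) -> dict:
    """Hypothetical signup logic, standing in for the real flow."""
    if len(password) < 8:
        return {"ok": False, "error": "Password must be at least 8 characters."}
    return {"ok": True, "user": email}

@pytest.mark.story("ONB-1")  # "A new user can sign up with email and password"
def test_new_user_can_sign_up():
    assert signup("user@example.com", "correct horse battery")["ok"]

@pytest.mark.story("ONB-2")  # "A weak password produces a clear error"
def test_weak_password_shows_clear_error():
    result = signup("user@example.com", "short")
    assert not result["ok"]
    assert "at least 8 characters" in result["error"]
```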
How to build a better culture of testing
An effective implementation of CI/CD requires organizational buy-in, access to the right tools, and continuous measurement.
A strong culture cares about implementing tests and about monitoring that the customer experience behaves as expected after every deploy.
- Tests should be easy to write, and flexible enough to not break when code is refactored
- Product teams should be invested in the testing process — enumerating user stories that are important to validate during CI pipelines
And your tools should make all of that easy. At walrus.ai, we're trying to do just that.