Marko Anastasov

Originally published at semaphoreci.com

7 Ways Continuous Delivery Helps Build a Culture of Learning

At the heart of continuous delivery is a fast feedback loop that immediately shows developers the effects of their work. Mistakes are found and fixed quickly, while beneficial changes can be released and deployed to customers without having to wait for a distant future release date. This rapid feedback helps build an organizational culture of learning and responsibility.

The foundation is continuous integration: whenever developers commit new changes to version control, fast automated tests run in a production-like environment to verify that both the code and the system as a whole are safe to deploy to users. In many cases, a passing build triggers deployment automatically. In others, the green build tells the team that the system is in a deployable state, and they initiate deployment manually on their own schedule.
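To make that gate concrete, here's a minimal sketch of the idea written as a plain script rather than any particular CI provider's configuration; the `pytest` suite and `deploy.sh` step are stand-ins for whatever test suite and deployment mechanism a team actually uses.

```python
# Hypothetical sketch of a CI gate: run the fast test suite on every
# commit, and only deploy when the build is green. The test command and
# deploy script are placeholders, not a specific provider's setup.
import subprocess
import sys

def run_tests() -> bool:
    """Run the automated test suite; True means the build is green."""
    result = subprocess.run(["pytest", "--quiet"])
    return result.returncode == 0

def deploy() -> None:
    """Stand-in for the team's real deployment step."""
    subprocess.run(["./deploy.sh", "production"], check=True)

if __name__ == "__main__":
    if run_tests():
        deploy()  # auto-deploy on green, as described above
    else:
        sys.exit("Tests failed: the build is not in a deployable state.")
```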

Most of the time, the fast feedback loop gives developers the go-ahead to continue working on their task, or to move on to the next task. If we can verify and deploy our changes within minutes, we preserve the state of flow and can continue to work very effectively.

Then, there are times when the feedback loop produces some new information that requires action. For example, we may receive reports of elevated error rates, or observe a shift in user engagement after we've deployed a change in a shopping cart design. Because continuous delivery enables frequent system updates, the changes we make are small and the feedback loop is set to give us small pieces of information. It's much easier to understand the immediate feedback that arrives after deploying a few hours’ worth of work than the slow feedback that arrives after three months of an all-hands effort.

Here are some specific examples of learning through continuous delivery:

1) Continuous integration leads developers to fix issues quickly. They learn about problems "just in time" — not days or weeks afterwards, when QA finally detects the problem and the code is no longer fresh in the developer's mind.

2) Fixes are small and their takeaways are easy to document. Good documentation helps build organizational knowledge. When developers know what's gone wrong in the past, similar problems can be detected more quickly, or avoided altogether.

3) With fewer significant failures to clean up, the organization can focus its efforts on experiments and metrics-driven learning, for both product feature development and internal processes.

4) Feature flags, sometimes also called feature toggles, allow us to iteratively build complex new features in production and validate them by talking to customers early on. This process, sometimes called continuous product discovery, helps us avoid investing in building expensive things that no one wants. We can focus on the features that do have value to customers, and evolve them until customer needs and business objectives are met. (There's a small code sketch of this idea right after the list.)

5) Continuous experimentation leads to continuous learning. Developers share what they've learned in internal company "show and tell" events and demos, which elevate the skills of the whole team and encourage further sharing. In this environment, peers routinely push each other forward.

6) In continuous delivery, everyone owns the quality of their work. Every developer adopts test-driven development, learns how to use logging, and builds monitoring metrics for the components they own. Code review is standard practice on every pull request. When collaboration is second nature, knowledge gets organically transferred between coworkers. As a result, everyone deploys to production, and there's no distant authority that locks away knowledge and processes.

7) Through small changes and distributed ownership, people are highly collaborative and team members trust each other. So when things go wrong and there’s an outage, the team tends to approach it as a learning and improvement opportunity, rather than blaming each other in a post-mortem.
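To illustrate the feature-flag idea from point 4, here's a deliberately simple sketch. The flag file and the flag and function names are hypothetical, and real projects typically use a feature-flag library or service, but the principle is the same: the new code path ships to production behind a flag, and the old behaviour stays the default until we choose to switch.

```python
# Deliberately simple feature-flag sketch (hypothetical names throughout).
import json

def load_flags(path: str = "flags.json") -> dict:
    """Load flag states, e.g. {"new_checkout": true, "beta_search": false}."""
    with open(path) as f:
        return json.load(f)

def is_enabled(flags: dict, name: str) -> bool:
    """Unknown or disabled flags fall back to the existing behaviour."""
    return bool(flags.get(name, False))

def render_old_checkout() -> str:
    return "current checkout page"

def render_new_checkout() -> str:
    return "new, still-evolving checkout page"

def checkout(flags: dict) -> str:
    # The new flow is deployed to production but stays dark until the
    # flag is turned on, e.g. for a small group of early customers.
    if is_enabled(flags, "new_checkout"):
        return render_new_checkout()
    return render_old_checkout()
```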

Large projects developed in isolation can be hampered by technical roadblocks and long turnaround times, and, even when deployed, they may not meet users’ needs without feedback and additional fixes. A continuous delivery system supports fast, iterative software development — but only if we use it well. It's our responsibility to understand that each iteration is an opportunity for learning, and to apply that learning in our work.

Originally posted on the Semaphore blog. Semaphore is a cloud-based CI/CD service.

Top comments (5)

Kelsey Fecho

I love that someone has pointed out learning "just in time" about their mistakes. I've worked with several companies (tech and otherwise) that do work reviews so far removed from the work process that the feedback is no longer valuable.

simonhaisz

I 100% agree that CD is great, but I would break out some of the items you listed to be more Agile/CI related than being specific to CD. Depending on the maturity/size/legacy of your process/org/code you may be able to move to CD quickly enough that it seems to all happen at once. For others it may be more of a journey. I know because this was a journey my company had to go through and it sure took a while with plenty of learning along the way :)

For those who are new to the whole CD concept I suggest checking out Dave Farley's blog: davefarley.net/ He, along with Jez Humble, literally wrote the book on CD. One of the most important of his concepts is that if something's painful to do, the way to reduce the pain is to do it more often rather than less. It forces you to confront your problems and actually fix them.

Marko Anastasov

"If it hurts, do it more often." — I've definitely seen this bring good things in many contexts.

I'm currently researching other people's journeys to CD, so I'd love to hear more about yours. :) If you'd like to share, feel free to post here or reach me on marko x renderedtext x com.

simonhaisz

Sure!

The players: Medium-sized company (an R&D department of 100-200 people).
The stage: One large monolithic product with tons of features and a couple decades of history.
The audience: Large companies (1B+ revenue) that don't like change.
Previous chapters: A major release every 1-2 years, with minor releases every 3-6 months.

We were following a ScrummerFall process at that point. We would have a release plan that would start with all the high priority features, guess at how long the longest poles would be and then fill in the gaps with lower priority features. As breaking the build is bad, most features would be developed on a feature branch and kept out of the official builds. Some of these feature branches would be worked on for months before being merged.

In our favor, each team had thousands of automated tests for their layer. The downside is that these were mostly integration tests, so they were slow and flaky. If you ran them all sequentially it would take a day. Then we had hundreds of full-stack E2E tests which were even slower and flakier - their total runtime was several days. Then there were the performance tests, hundreds of tests across dozens of data sets. Their runtime was measured in weeks.

Obviously we did not have a quick feedback loop 😢 And because of the flakiness you almost never saw 100% green. A good build was 90-98% green. Because investigating those intermittent failures was so expensive, we got into the bad habit of ignoring them during dev and just re-running them at release time. Run the failing tests multiple times on the same build; if they almost always pass, then it's a problem with the tests and not the product 😭

Enough was enough so we began our journey. We actually brought Dave in as a consultant to help us. I can personally testify that he's great.

The first thing we did was start treating master as if we would ship off of it at any time. We obviously didn't, because we weren't ready. But it was the start of doing something painful frequently until it stops being painful.

Every test failure became a bug, as it should be. It had to be logged, investigated, and fixed. It was considered a defect in the product unless you could prove it was in the tests. And regardless of where it was it couldn't be closed until it was fixed. Not surprisingly, velocity dropped like a piano. PM was 😠.

So we worked on it and improved our tests. Rewrote whole sections of our tech framework so that they would be reliable. Got everyone to treat writing tests as carefully as they wrote prod code. After a while (it sure took a while) we reached the point where a failed test meant there was a real bug. We actually got 100% green builds. I can't say we never had a flaky test again, but they're really rare now, and if a new type of flakiness pops up we've got good tools and techniques to get rid of it.

At the same time we stopped using feature branches and started branching by abstraction using feature toggles/flags. So everyone was pushing to master, which meant it was easy to test features in development and even test their combinations. And if your feature 'leaked' out of its toggle? Ooh, that's a bad bug. So we were finding issues right away instead of after a merge 6 months later.

At the same time we worked on improving the feedback loop. Slow tests were investigated and improved/re-written to be faster. We invested in test farms to run tests in parallel easily. Any PR could easily have thousands of tests run against it before it was merged. Implemented CI so that we ran builds and the fast tests with every commit. Each official build now runs all of the INT tests in ~30 minutes (instead of hours) and all of the E2E tests in a few hours (instead of days). Performance tests are run every night (instead of monthly).

The end result was that after a year of blood, sweat, and tears we started releasing monthly. We can actually be Agile now and deliver value incrementally, even with our Enterprise customers.

Marko Anastasov

This is pure gold. Thank you so much for sharing! 🙇‍♂️