<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: GoCD</title>
    <description>The latest articles on DEV Community by GoCD (@gocd).</description>
    <link>https://dev.to/gocd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F474%2Fe7e765b1-e45b-4918-a22c-db0021aaa786.png</url>
      <title>DEV Community: GoCD</title>
      <link>https://dev.to/gocd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gocd"/>
    <language>en</language>
    <item>
      <title>Continuous Delivery Metrics Part 3: How long does it take to get from committing code to production?</title>
      <dc:creator>Aravind SV</dc:creator>
      <pubDate>Mon, 02 Sep 2019 15:19:25 +0000</pubDate>
      <link>https://dev.to/gocd/continuous-delivery-metrics-part-3-how-long-does-it-take-to-get-from-committing-code-to-production-4kbo</link>
      <guid>https://dev.to/gocd/continuous-delivery-metrics-part-3-how-long-does-it-take-to-get-from-committing-code-to-production-4kbo</guid>
      <description>&lt;p&gt;This is the third post in the series - Actionable Continuous Delivery Metrics. In the previous posts, we gave an overview of why and what metrics matter to your CD process as well as an in depth discussion on deployment frequency. In this post, we’ll get deeper into lead time.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Lead Time Is&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Whereas deployment frequency is the number of times deployment happened, lead time for change is the elapsed time from start to deployment. This metric is represented as a duration, and it helps answer the question “how long does it take to get from code commit to production?”&lt;/p&gt;

&lt;p&gt;You may see lead time used interchangeably with cycle time, and there is a lot of confusion around what these two terms mean. By definition, the lead time clock starts when the feature request is made and ends at delivery, while the cycle time clock starts when work begins on the request and ends when the item is ready for delivery. We are not going to distinguish these two terms in this post. We think the important thing is that you clearly define what you mean and stay consistent with your definition. In our context, we suggest avoiding the terms lead time or cycle time without qualifying the starting point and ending point.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KzeeBBke--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/00278b3fe30c/Image%25202019-09-02%2520at%25208.08.16%2520AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KzeeBBke--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/00278b3fe30c/Image%25202019-09-02%2520at%25208.08.16%2520AM.png" alt="Continuous Delivery Metrics Part 3: Lead Time Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the diagram above, we calculate the time from a commit (at the start of a CD pipeline) until that code commit goes to production (at the end of the CD pipeline). Although other points in the CD pipeline were reached multiple times, the deployment happened only after three days, so we calculate the lead time for that commit as three days.&lt;/p&gt;
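The calculation described above can be sketched in a few lines of Python (a minimal illustration; the commit and deployment timestamps are hypothetical):

```python
from datetime import datetime

def lead_time(commit_time: datetime, deploy_time: datetime):
    """Lead time for a commit: elapsed time from commit to production deploy."""
    return deploy_time - commit_time

# Hypothetical timestamps: the commit enters the pipeline on Monday morning,
# and the deployment that includes it reaches production three days later.
committed = datetime(2019, 9, 2, 9, 0)
deployed = datetime(2019, 9, 5, 9, 0)

print(lead_time(committed, deployed).days)  # 3, as in the diagram
```

In practice the commit timestamp comes from your version control system and the deploy timestamp from your CD server's run history.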

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Lead Time is Important&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Lead time is a key element of lean theory. Focusing on reducing lead time forces you to look at your process as a whole and understand its slowest parts. Even if teams and organizations concentrated only on lowering lead time, they would improve their process and add more value to their organization.&lt;/p&gt;

&lt;p&gt;Lead time helps you answer questions like "When will this be done? If we start now when can we get it to production? Can we deliver this next week?" It helps inform business decisions and helps with planning.&lt;/p&gt;

&lt;p&gt;Finally, we recommend only considering the “delivery” part when calculating lead time for your CD process. If you define the start time as the first line of code, and exclude the “fuzzy front end”, which includes activities like requirements gathering, system analysis, and prioritization, it is a simple and very practical metric to calculate.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;An Example: How to Use Lead Time&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We will continue with the hypothetical scenario we used in part 2 of the series, in which the development team received a complaint from their stakeholders that they didn’t get value very often.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---cFUj5MN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/c83a583e8d0c/Image%25202019-09-02%2520at%25208.08.39%2520AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---cFUj5MN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/c83a583e8d0c/Image%25202019-09-02%2520at%25208.08.39%2520AM.png" alt="Continuous Delivery Metrics Part 3: Lead Time Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The picture above is a quick refresher of this team’s CD pipelines, represented in GoCD’s Value Stream Map (VSM). The value stream includes some unit tests, integration tests and smoke tests in parallel, and then the regression tests. At the end, there are User Acceptance Tests (UAT) and a production deployment.&lt;/p&gt;

&lt;p&gt;In the previous post, we found that the deployment frequency was as low as 9%. Here, the GoCD enterprise analytics plugin can help find out which parts of the CD pipeline are causing this low deployment frequency.&lt;/p&gt;

&lt;p&gt;In the screenshots below, you can see the average cycle time from BuildAndUnitTests to Production is 33m.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QKGtDmua--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/a3e5d0ac5cdb/Image%25202019-09-02%2520at%25208.09.03%2520AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QKGtDmua--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/a3e5d0ac5cdb/Image%25202019-09-02%2520at%25208.09.03%2520AM.png" alt="Continuous Delivery Metrics Part 3: Lead Time Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you drill down into each pipeline, you can see these details with wait times (shown in gray), and run times (shown in purple) towards production with each stage shown separately. From there, you can identify your slowest steps.&lt;/p&gt;
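The drill-down described above amounts to summing wait time and run time per stage and ranking the stages. A minimal sketch of that reasoning (the stage names and durations below are made up for illustration):

```python
# Hypothetical per-stage timings in minutes: (wait_time, run_time).
stages = {
    "BuildAndUnitTests": (1, 5),
    "IntegrationTests": (2, 6),
    "Regression": (3, 12),
    "UAT": (2, 3),
    "Production": (1, 2),
}

# Each stage's total contribution to lead time is wait + run.
totals = {name: wait + run for name, (wait, run) in stages.items()}

# The stage with the largest total is the first candidate for optimization.
slowest = max(totals, key=totals.get)
print(slowest, totals[slowest])  # Regression 15
```

With real data exported from your CD server, the same ranking tells you where reducing wait time (e.g. removing a manual approval) beats reducing run time.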

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dQahtzCI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/b7de03028c03/Image%25202019-09-02%2520at%25208.09.51%2520AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dQahtzCI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/b7de03028c03/Image%25202019-09-02%2520at%25208.09.51%2520AM.png" alt="Continuous Delivery Metrics Part 3: Lead Time Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This process can also be done manually, or with another tool. When you track the time and plot the different parts of the workflow, you can achieve similar results. The example below is a possible representation of the process, when done outside of GoCD. You can see there are some long-running parts of your pipeline (as in “Regression” here) and some long delays due to manual approvals.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---2CbI8XY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/2ab44d171e78/Image%25202019-09-02%2520at%25208.10.20%2520AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---2CbI8XY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/2ab44d171e78/Image%25202019-09-02%2520at%25208.10.20%2520AM.png" alt="Continuous Delivery Metrics Part 3: Lead Time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from our example, with the lead time metric and its supporting details, you can identify your problematic steps: steps that are slow to complete and require lots of wait time lead to overall low deployment frequency. From here, we recommend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  focusing on the slowest steps first: that is what will give you the most gains.&lt;/li&gt;
&lt;li&gt;  converting manual approvals into automated ones, which will increase your level of confidence in the tests and automation.&lt;/li&gt;
&lt;li&gt;  rewriting the tests in the slowest steps, if possible.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sometimes, just optimizing a few slow steps is not enough, and you should consider bigger changes to your pipelines. Start by drawing the value stream map and truly understanding the dependencies in your process. Identify opportunities to parallelize, and rearrange your tests so that the slow-running tests start earlier in the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this post, we discussed what lead time is, why it is important, and how to act on long lead times. As we discussed in part 1, there are two other important metrics we recommend measuring: change fail percentage and MTTR. These metrics work closely with deployment frequency and lead time. We will cover the interrelationship of these metrics in future posts.&lt;/p&gt;

</description>
      <category>cd</category>
      <category>testing</category>
      <category>sdlc</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Continuous Delivery Metrics Part 2: How often do you deploy to production?</title>
      <dc:creator>Aravind SV</dc:creator>
      <pubDate>Mon, 05 Aug 2019 17:17:37 +0000</pubDate>
      <link>https://dev.to/gocd/continuous-delivery-metrics-part-2-how-often-do-you-deploy-to-production-1ioi</link>
      <guid>https://dev.to/gocd/continuous-delivery-metrics-part-2-how-often-do-you-deploy-to-production-1ioi</guid>
      <description>&lt;p&gt;This is the second post in the series - &lt;a href="https://www.gocd.org/tags/cd-analytics.html"&gt;Actionable Continuous Delivery Metrics&lt;/a&gt;. &lt;a href="https://www.gocd.org/2018/10/30/measure-continuous-delivery-process/"&gt;In the previous post&lt;/a&gt;, we gave an overview of why metrics matter to your CD process and what metrics we recommend you measure. In this post, we’ll get deeper into the first metric: deployment frequency.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Deployment Frequency Is
&lt;/h2&gt;

&lt;p&gt;Deployment frequency, otherwise known as throughput, is a measure of how frequently your team deploys code. This metric is often represented as a percentage, and it answers the question “how often do we deploy to production or to another significant point in our CD pipeline such as a staging environment?”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JKIZrdxG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/2bc0e73dce53/Image%25202019-08-04%2520at%25209.32.03%2520PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JKIZrdxG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/2bc0e73dce53/Image%25202019-08-04%2520at%25209.32.03%2520PM.png" alt="deployment frequency concept"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We consider production deployment as a significant point in the CD pipeline and we are counting the number of times a deployment to production happens versus not happening. In the example above, we have 8 instances or opportunities to deploy, with only 2 deployments happening, so the deployment frequency is 25%.&lt;/p&gt;
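The arithmetic from the example above, deployments divided by opportunities to deploy, can be written down directly (a sketch; the run history is hypothetical):

```python
def deployment_frequency(runs):
    """Percentage of pipeline runs that reached production.

    `runs` is a list of booleans: True if that run deployed to production.
    """
    return 100 * sum(runs) / len(runs)

# Eight opportunities to deploy, of which only two reached production.
runs = [False, True, False, False, False, True, False, False]
print(deployment_frequency(runs))  # 25.0
```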

&lt;h2&gt;
  
  
  Why Deployment Frequency Is Important
&lt;/h2&gt;

&lt;p&gt;The word "continuous" in continuous delivery implies high deployment frequency. Having a high deployment frequency means that you have more deployments, and gives you more opportunities for feedback on your software. More importantly, higher deployment frequency means that you’re delivering value to end users and stakeholders more quickly.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://puppet.com/resources/whitepaper/2016-state-of-devops-report"&gt;the research done by the State of DevOps report team&lt;/a&gt;, high functioning teams have higher deployment frequency as compared to their less efficient peers. It is good to baseline your deployment frequency and try to increase it as much as it makes sense, in the context of your organization’s business and goals.&lt;/p&gt;

&lt;p&gt;However, deployment frequency has to be balanced with quality. You don't want to increase deployment frequency by removing tests. You want to be able to deliver more often to production, while maintaining or even improving quality. That's what CD is about and what the deployment frequency metric captures.&lt;/p&gt;

&lt;h2&gt;
  
  
  An Example: How to Use Deployment Frequency
&lt;/h2&gt;

&lt;p&gt;If you are measuring your pipeline and have low deployment frequency, what can you do? We’ll take a hypothetical example, and use GoCD, our continuous delivery server, to take you through how you can identify and act on deployment frequency issues. In our example, the team received a complaint from the business that they don’t get value very often. Let’s find out why.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7JMtb2lb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/fddcd382e827/Image%25202019-08-04%2520at%25209.32.37%2520PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7JMtb2lb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/fddcd382e827/Image%25202019-08-04%2520at%25209.32.37%2520PM.png" alt="example pipelines"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The picture above shows you the continuous delivery pipelines, represented in GoCD’s Value Stream Map (VSM). Here, GoCD runs through the value stream including some unit tests, then integration tests and smoke tests in parallel, and then eventually the regression tests. Finally, there are User Acceptance Tests (UAT) and a production deployment.&lt;/p&gt;

&lt;p&gt;To understand what is happening here, we start with finding out whether your deployment frequency is concerning. In our example, it’s easy to look this metric up in GoCD using the &lt;a href="https://www.gocd.org/analytics.html"&gt;GoCD enterprise analytics plugin&lt;/a&gt;: go to GoCD’s VSM view, select the part of the CD pipeline you care about, and see the deployment frequency (known as throughput in GoCD). We can see that the throughput is only 9%, which means out of the opportunities to deploy only 9% are reaching production. This number is too low.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lfkNIf-w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/1bbd2f978ee2/Image%25202019-08-04%2520at%25209.33.06%2520PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lfkNIf-w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/1bbd2f978ee2/Image%25202019-08-04%2520at%25209.33.06%2520PM.png" alt="deployment frequency in GoCD"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These can also be tracked manually in a spreadsheet or another tool. If you note down the status each time &lt;em&gt;BuildAndUnitTests&lt;/em&gt; ran, as well as each time the &lt;em&gt;Production&lt;/em&gt; pipeline ran, you’ll see more failures on the way to the &lt;em&gt;Production&lt;/em&gt; part of the CD pipelines. In the table below, you can see that, over the same five-day period, &lt;em&gt;BuildAndUnitTests&lt;/em&gt; ran many more times than &lt;em&gt;Production&lt;/em&gt;. Again, you see the deployment frequency is very low.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v-iKLDBH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/9fe23e07096f/Image%25202019-08-04%2520at%25209.33.33%2520PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v-iKLDBH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/9fe23e07096f/Image%25202019-08-04%2520at%25209.33.33%2520PM.png" alt="deployment frequency in excel"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main signal here is clearly that the deployment frequency is low. But why and how should we act on that? There may be many reasons for this. We recommend checking the following potential causes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Is your build very slow? Are slowness and lack of feedback causing the development team to avoid checking in often? Does the team respond by combining changes into bigger chunks, causing even longer delays?&lt;/li&gt;
&lt;li&gt;  Is your end-to-end lead time from commit to deployment too long?&lt;/li&gt;
&lt;li&gt;  Do you have builds that fail very often?&lt;/li&gt;
&lt;li&gt;  Do you have flaky tests? If tests are flaky consider understanding which ones are the biggest problem and &lt;a href="https://gauge.org/2018/10/23/taiko-beta-reliable-browser-automation/"&gt;addressing the root cause&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  Are you &lt;a href="http://gettingtolean.com/toyota-principle-5-build-culture-stopping-fix"&gt;stopping the line&lt;/a&gt; to address other problems?&lt;/li&gt;
&lt;li&gt;  Do you have long-lived feature branches or pull requests which are not merged often? Very often we see the development team working hard on their branches, but the business won’t see that value until the changes are deployed. If your problem is that you have work going on in long-lived branches, consider feature toggles and &lt;a href="https://trunkbaseddevelopment.com/"&gt;trunk based development&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post, we discussed what deployment frequency is, why it is important, and how to act on low deployment frequency. As we discussed in &lt;a href="https://www.gocd.org/2018/10/30/measure-continuous-delivery-process/"&gt;our previous post&lt;/a&gt;, there are three other important metrics we recommend measuring: lead time, change fail percentage, and MTTR. These metrics work closely with deployment frequency, and will help you further understand the root cause of low deployment frequency. We will cover the interrelationship of these metrics in future posts.&lt;/p&gt;

</description>
      <category>cd</category>
      <category>testing</category>
      <category>sdlc</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Continuous Delivery Metrics Part 1: Why measure your CD process</title>
      <dc:creator>Aravind SV</dc:creator>
      <pubDate>Mon, 22 Jul 2019 16:53:40 +0000</pubDate>
      <link>https://dev.to/gocd/continuous-delivery-metrics-part-1-why-measure-your-cd-process-2oia</link>
      <guid>https://dev.to/gocd/continuous-delivery-metrics-part-1-why-measure-your-cd-process-2oia</guid>
      <description>&lt;p&gt;As software and IT become key drivers for innovation in most organizations these days, the speed of software delivery becomes very important to their success. More and more teams are adopting Continuous Delivery (CD) and expect to benefit from the accelerated feedback loop CD offers. To understand whether you are improving and delivering on your goals, you need to measure you CD process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gocd.org/tags/cd-analytics.html" rel="noopener noreferrer"&gt;In this blog series&lt;/a&gt;, we will share :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Why CD metrics are important&lt;/li&gt;
&lt;li&gt;  What metrics you should measure&lt;/li&gt;
&lt;li&gt;  A step by step guide to getting started&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Continuous Delivery
&lt;/h2&gt;

&lt;p&gt;Jez Humble defined continuous delivery on continuousdelivery.com as&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"the ability to get changes of all types - including new features, configuration, bug fixes, and experiments - into production, safely and quickly in a sustainable way."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In practice, a CD pipeline can look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2F8a0b1becab48%2FImage%25202019-07-21%2520at%252010.08.25%2520PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2F8a0b1becab48%2FImage%25202019-07-21%2520at%252010.08.25%2520PM.png" alt="CD pipelines"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the left of the diagram, the material is a repository such as Git or SVN. The delivery team commits a change, and your CI/CD server, such as GoCD, runs the build and unit tests. If these tests fail, the team immediately fixes any problems. The new version with those fixes goes further along the CD pipeline. If tests further down fail, again the team fixes them as quickly as possible. This process happens over and over in the lifecycle of an application.&lt;/p&gt;
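As an illustration, the first leg of a pipeline like this could be declared with GoCD's YAML config plugin roughly as follows. This is a sketch, not a complete configuration; the repository URL, pipeline, stage, and script names are placeholders:

```yaml
format_version: 10
pipelines:
  build-and-unit-tests:          # placeholder pipeline name
    group: example
    materials:
      app:
        git: https://example.com/app.git   # placeholder repository
    stages:
      - build_and_unit_tests:
          jobs:
            test:
              tasks:
                - exec:
                    command: ./run-unit-tests.sh   # placeholder script
```

Downstream stages (integration tests, UAT, deployment) would follow the same pattern, each triggered by the success of the previous one.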

&lt;h2&gt;
  
  
  Why measure your CD process
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Measurement, Feedback, and Improvement&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2F9967b1958a89%2FImage%25202019-07-21%2520at%252010.08.59%2520PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2F9967b1958a89%2FImage%25202019-07-21%2520at%252010.08.59%2520PM.png" alt="Build, Learn, Measure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we consider a feedback cycle like “Build, Measure and Learn”, metrics are a way to set specific and measurable goals, direct activities towards achieving those goals, and help you understand if you are achieving those goals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Predict future behavior&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data will help you provide more accurate estimates to your business. For example, if you know your lead time, you can more accurately answer questions about how long it will take for something to be ready for your customers.&lt;/p&gt;

&lt;p&gt;If you are considering parallelizing tasks or removing manual steps in your process, once you have some data about your current process, you can calculate the time savings of these improvement activities. From there, this data could potentially help your organization estimate dollars made or saved by certain specific improvements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Continuous delivery benchmarking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you have some data, the values can be used as your baseline. Those baseline values are key to understanding whether you are improving your own process as well as key to understanding where you stand relative to “high performing” teams.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2F9e0d2c3f5c80%2FImage%25202019-07-21%2520at%252010.09.36%2520PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2F9e0d2c3f5c80%2FImage%25202019-07-21%2520at%252010.09.36%2520PM.png" alt="CD Benchmarking"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Credit: Forsgren PhD, Nicole. Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations (Kindle Location 564). IT Revolution Press. Kindle Edition.&lt;/p&gt;

&lt;h2&gt;
  
  
  What metrics are important to measure?
&lt;/h2&gt;

&lt;p&gt;Once you introduce a CD pipeline and have established your path to production, the next step is to monitor its efficiency. We do not suggest measuring everything. At a high level, here are the four metrics that we suggest using to help monitor your CD process. We’ll go deeper into the details in future parts of the series.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Deployment Frequency a.k.a Throughput&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deployment frequency is a measure of how frequently your team deploys code. This metric is represented as a percentage. It is the answer to the question “how often does code reach a certain point in the CD pipeline”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Lead Time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lead time is a measure of how long it takes from committing code to deploying it to a production environment. This metric is represented as a duration. It is the answer to the question "how long does it take from committing code to it reaching production".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Change Fail Percentage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Change fail percentage is a measure of the percentage of changes that result in a failure. This metric is represented as a percentage. It is the answer to questions like “what percentage of changes break builds” and “what percentage of deployments result in a service outage”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Mean time to restore&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Mean time to restore (MTTR) is a measure of how long it takes to recover from a failure. This metric is represented as a mean duration. It is the answer to questions like "how long does it take you to restore service after a failed deployment".&lt;/p&gt;
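To make the four definitions concrete, here is a minimal sketch that computes each metric from a deployment log. The field names and data are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: commit time, deploy time, whether the change
# failed in production, and how long it took to restore service if it did.
deploys = [
    {"commit": datetime(2019, 7, 1, 9), "deploy": datetime(2019, 7, 2, 9),
     "failed": False, "restore": None},
    {"commit": datetime(2019, 7, 3, 9), "deploy": datetime(2019, 7, 5, 9),
     "failed": True, "restore": timedelta(hours=2)},
]
opportunities = 8  # total pipeline runs that could have deployed

# 1. Deployment frequency: deployments over opportunities, as a percentage.
frequency = 100 * len(deploys) / opportunities

# 2. Lead time: mean duration from commit to deployment.
lead_time = sum((d["deploy"] - d["commit"] for d in deploys), timedelta()) / len(deploys)

# 3. Change fail percentage: share of deployments that failed.
change_fail = 100 * sum(d["failed"] for d in deploys) / len(deploys)

# 4. MTTR: mean time to restore, across failed deployments only.
restores = [d["restore"] for d in deploys if d["failed"]]
mttr = sum(restores, timedelta()) / len(restores)

print(frequency, lead_time, change_fail, mttr)
```

With two deployments out of eight opportunities, lead times of one and two days, and one failure restored in two hours, this yields a frequency of 25%, a mean lead time of 1.5 days, a change fail percentage of 50%, and an MTTR of two hours.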

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This is part 1 of our Actionable CD Metrics blog series. We talked about why CD metrics are important and gave an overview of the metrics we recommend. &lt;a href="https://www.gocd.org/2018/11/30/deployment-frequency/" rel="noopener noreferrer"&gt;In the next post&lt;/a&gt;, we will dig deeper into each metric.&lt;/p&gt;

</description>
      <category>cd</category>
      <category>testing</category>
      <category>sdlc</category>
    </item>
    <item>
      <title>Configuration Strategy for Continuous Delivery of Microservices</title>
      <dc:creator>Sheroy Marker</dc:creator>
      <pubDate>Mon, 03 Jun 2019 18:18:45 +0000</pubDate>
      <link>https://dev.to/gocd/configuration-strategy-for-continuous-delivery-of-microservices-5a26</link>
      <guid>https://dev.to/gocd/configuration-strategy-for-continuous-delivery-of-microservices-5a26</guid>
      <description>&lt;p&gt;This is the fifth post in the series - &lt;a href="https://www.gocd.org/tags/cd-for-microservices.html"&gt;Continuous Delivery of Microservices&lt;/a&gt;. In the &lt;a href="https://www.gocd.org/2018/06/12/cd-microservices-environment-strategy/"&gt;previous post&lt;/a&gt;, we talked about environment strategy - including artifact promotion and ways to leverage modern infrastructure for dynamic environments. In this post, we will discuss configuration strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;An application’s configuration is everything that varies across deployment environments such as development, test, and production. The same code is deployed everywhere, but certain aspects (like URLs to backing services, database connection information, and credentials to third-party services) are switched out; these are what I mean by variables in this context. Such configuration should be stored separately from the application code.&lt;/p&gt;

&lt;p&gt;In a system based on a microservices architecture, configuration also needs to be distributed across multiple services. There are a couple of ways to manage configuration in a distributed way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  make configuration available in environment variables at deploy time&lt;/li&gt;
&lt;li&gt;  use an external configuration server product designed to expose configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are three things you should consider for your microservices configuration strategy:&lt;/p&gt;

&lt;h2&gt;
  
  
  1: Manage application configurations centrally
&lt;/h2&gt;

&lt;p&gt;An external configuration server is a more appropriate system for managing application configuration than configuration management code, and it introduces a cleaner separation of concerns.&lt;/p&gt;

&lt;p&gt;The configuration management code (in Chef or Puppet) can then be solely responsible for cluster management. With Chef, updating application configuration would require a slow convergence operation on the cluster. With an external configuration server, updates to application configuration can be more dynamic, without the need to update any other aspect of the infrastructure.&lt;/p&gt;

&lt;p&gt;Another advantage of this approach is that it forces consistent practices in organizing configuration by application and environment.&lt;/p&gt;

&lt;p&gt;There are a number of purpose-built external configuration servers you could consider. Spring Cloud Config Server is a good option for Spring applications. With support for multiple backends, you could integrate with industry-standard KV stores such as &lt;a href="https://www.consul.io"&gt;Consul&lt;/a&gt; for non-sensitive configuration, and &lt;a href="https://www.vaultproject.io/"&gt;Vault&lt;/a&gt; for sensitive configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  2: Standard process for distributing configuration
&lt;/h2&gt;

&lt;p&gt;In a microservices system, different services may use different tech stacks. If configuration is handled differently for each stack, the complexity becomes hard to manage. Therefore, regardless of a microservice’s tech stack, configuration should be distributed to nodes in a standard manner.&lt;/p&gt;

&lt;p&gt;A technique we use is to supply configuration as environment variables, per the &lt;a href="https://12factor.net/"&gt;Twelve-Factor App&lt;/a&gt; methodology. As a rule of thumb, always avoid distributing configuration files.&lt;/p&gt;
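In practice, configuration-as-environment-variables means each service reads its settings from the environment at startup, with no config files in the artifact. A minimal sketch (the variable names and default values are assumptions, not part of any standard):

```python
import os

# Hypothetical values, set here only so the example is self-contained; in a
# real deployment these are injected into the environment at deploy time.
os.environ.setdefault("APP_DATABASE_URL", "postgres://localhost/dev")

def load_config():
    """Collect this service's configuration from environment variables."""
    return {
        # Required settings fail fast with a KeyError if missing.
        "database_url": os.environ["APP_DATABASE_URL"],
        # Optional settings get explicit defaults.
        "log_level": os.environ.get("APP_LOG_LEVEL", "info"),
    }

config = load_config()
print(config["log_level"])
```

The same service binary can then run unchanged in any environment; only the injected environment differs.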

&lt;p&gt;The Twelve-Factor App is a manifesto with guidelines for building cloud-native applications. To truly harness the advantages of a cloud environment, an application needs to embrace cloud concepts such as elastic scalability, independently deployable and operable services, and statelessness.&lt;/p&gt;

&lt;h2&gt;
  
  
  3: Governance policy around secrets
&lt;/h2&gt;

&lt;p&gt;Secrets such as API keys, passwords, and certificates need to be accessed securely. You need a governance process to ensure secrets access is managed appropriately. One technique we recommend is to keep all secrets in a central secrets store. The central external configuration server could provide this capability.&lt;/p&gt;

&lt;p&gt;This central store gives you traceability on how and when policies were changed. That traceability goes a long way in setting up a governance process.&lt;/p&gt;

&lt;p&gt;A tool we recommend for storing secrets is &lt;a href="https://www.vaultproject.io/"&gt;Vault&lt;/a&gt; by HashiCorp.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bj6o1rvl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/038cd11a56ea/download/Image%25202019-05-31%2520at%252010.27.20%2520AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bj6o1rvl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cl.ly/038cd11a56ea/download/Image%25202019-05-31%2520at%252010.27.20%2520AM.png" alt="Managing Configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is an example of an architecture where configurations are stored centrally in a config server and updated by the CD pipeline and pushed out to service instances.&lt;/p&gt;

&lt;p&gt;At the top there is an abstraction of the CD pipeline. It updates the config server, and the configuration is then pushed from the config server to the service instances. At run time, service instances know how to consume this configuration. When setting up an architecture like this, you need to consider how many configuration servers you should have. We recommend one configuration server per CD environment, or at least one for production and one for all other environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This is part 5 of our &lt;a href="https://www.gocd.org/tags/cd-for-microservices.html"&gt;CD for Microservices blog series&lt;/a&gt;. We have talked about a configuration strategy for your CD pipeline. In the next post, we will talk about the last consideration: remediation strategies for when something goes wrong.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>cd</category>
      <category>microservices</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Environment Strategy for Continuous Delivery of Microservices</title>
      <dc:creator>Sheroy Marker</dc:creator>
      <pubDate>Mon, 13 May 2019 13:47:12 +0000</pubDate>
      <link>https://dev.to/gocd/environment-strategy-for-continuous-delivery-of-microservices-5bki</link>
      <guid>https://dev.to/gocd/environment-strategy-for-continuous-delivery-of-microservices-5bki</guid>
      <description>&lt;p&gt;This is the fourth post in the series - &lt;a href="https://www.gocd.org/tags/cd-for-microservices.html" rel="noopener noreferrer"&gt;CD of Microservices&lt;/a&gt;. In the &lt;a href="https://www.gocd.org/2018/05/08/continuous-delivery-microservices-test-strategy/" rel="noopener noreferrer"&gt;previous post&lt;/a&gt;, we talked in depth about CI practices, in particular trunk based deployment and feature toggles. In this post, we’ll discuss environment strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment plan
&lt;/h2&gt;

&lt;p&gt;In some organizations people don’t realize the importance of an environment plan until they have too many environments and maintaining them becomes overwhelming.&lt;/p&gt;

&lt;p&gt;An environment plan communicates the various environments that are involved in the path to production and their intended uses. It also communicates how your artifacts are promoted and the toggle states on these environments.&lt;/p&gt;

&lt;p&gt;You can start by thinking about which environments need to be created upfront and discussing the intended use cases for each. Different groups in your organization will have different, competing needs, and your environment plan should accommodate them all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Artifact promotion
&lt;/h2&gt;

&lt;p&gt;Artifacts are the tangible by-products produced during the development of software. Some are text files, like test and coverage reports. Others are binary artifacts, like npm packages, jar files, and AMI machine images, which are built once and propagated along the CD pipelines for deployment in downstream environments.&lt;/p&gt;

&lt;p&gt;CD pipelines generate a lot of artifacts. Before you know it, you're getting into terabytes, or tens of terabytes of generated artifacts. It's important to think through an artifact promotion strategy, which deals with where these artifacts are stored, how many are retained, and how you do clean-up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2F4c76b5839016%2Fdownload%2FImage%25202019-05-13%2520at%25209.36.50%2520AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2F4c76b5839016%2Fdownload%2FImage%25202019-05-13%2520at%25209.36.50%2520AM.png" alt="Environment Plan for Continuous Delivery"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above shows an environment plan that factors in an artifact promotion strategy. From the plan you can see the environments that exist in the CD pipeline, with different colored arrows depicting different artifact promotion strategies. Early in this pipeline, we generate artifacts that go to an artifact repository with a more aggressive clean-up strategy, so we don't retain too many of those artifacts. As you go further down the pipeline, artifacts have been verified and are more trustworthy, and you may want to keep them for a little longer, so you could keep them in a store with a different retention policy.&lt;/p&gt;
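&lt;p&gt;The clean-up side of such a strategy can be sketched as a simple per-store retention check. The store contents and retention windows below are made up for illustration:&lt;/p&gt;

```python
def artifacts_to_delete(artifacts, retention_days):
    """Given (name, age_in_days) pairs, return the names that have outlived
    the store's retention window."""
    return [name for name, age in artifacts if age > retention_days]

# Early-pipeline store: aggressive 7-day retention for raw build artifacts.
ci_store = [("svc-a-build-17", 2), ("svc-a-build-3", 40), ("svc-b-build-9", 10)]
doomed_ci = artifacts_to_delete(ci_store, retention_days=7)

# Verified-release store: longer, 90-day retention for promoted artifacts.
release_store = [("svc-a-1.2.0", 40), ("svc-a-1.1.0", 200)]
doomed_release = artifacts_to_delete(release_store, retention_days=90)
```

&lt;p&gt;Running a sweep like this per store, each with its own retention window, is what keeps terabytes of generated artifacts from accumulating unchecked.&lt;/p&gt;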

&lt;h2&gt;
  
  
  Dynamic environments
&lt;/h2&gt;

&lt;p&gt;Environments are expensive and cumbersome to maintain. One way to make the process simpler and less expensive is to create environments on the fly. For example, when running functional tests, you can provision a functional test environment on demand, then clean it up right after your test is done.&lt;/p&gt;

&lt;p&gt;To do this, you need to use &lt;a href="http://infrastructure-as-code.com" rel="noopener noreferrer"&gt;infrastructure as code&lt;/a&gt; (IaC) techniques to script all aspects of environment provisioning. Here are the benefits of doing this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parity with production&lt;/strong&gt; If you create all environments dynamically through automation, your testing and staging environments are created in exactly the same way as production. The scale and nature of the infrastructure may differ, but the footprint of the environment remains the same.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prevent drift&lt;/strong&gt; If you have manual processes and manual installations, you can't really ensure that two environments are the same. But using a script to create short-lived dynamic environments prevents drift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Efficient use of hardware and infrastructure&lt;/strong&gt; Only creating an environment when you need it and cleaning up dynamically optimizes utilization and saves infrastructure costs.&lt;/p&gt;
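&lt;p&gt;The provision-test-teardown flow can be sketched with a context manager. Here &lt;code&gt;provision&lt;/code&gt; and &lt;code&gt;destroy&lt;/code&gt; are stand-ins for whatever infrastructure-as-code tooling you actually use:&lt;/p&gt;

```python
from contextlib import contextmanager

events = []  # records what happened, in order, for this sketch

def provision(name):
    """Stand-in for an infrastructure-as-code apply step."""
    events.append(("up", name))

def destroy(name):
    """Stand-in for the corresponding teardown step."""
    events.append(("down", name))

@contextmanager
def ephemeral_environment(name):
    """Provision an environment on demand and always clean it up afterwards,
    even if the tests inside fail."""
    provision(name)
    try:
        yield name
    finally:
        destroy(name)

with ephemeral_environment("functional-tests-42") as env:
    events.append(("run-tests", env))
```

&lt;p&gt;Because teardown lives in the &lt;code&gt;finally&lt;/code&gt; block, the environment is cleaned up whether the test run succeeds or not, which is what keeps dynamic environments cheap.&lt;/p&gt;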

&lt;p&gt;Technologies such as container schedulers are all the rage these days for deploying and running applications. The diagram below is an example of a Docker-based build workflow with Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2F0aa1199207ce%2Fdownload%2FImage%25202019-05-13%2520at%25209.37.55%2520AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2F0aa1199207ce%2Fdownload%2FImage%25202019-05-13%2520at%25209.37.55%2520AM.png" alt="Environment Plan Dynamic Provisioning for Continuous Delivery"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CI aspect of this pipeline, the build pipeline on the left, generates Docker images. The artifacts from this pipeline, the Docker images, are stored in a Docker registry. Further downstream, deployment stages deploy directly to Kubernetes. You can leverage Kubernetes features, such as its concept of labels, to provision environments on the fly.&lt;/p&gt;
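&lt;p&gt;As an illustration of using labels for on-the-fly environments, this sketch builds a minimal Kubernetes Deployment manifest whose labels identify an ephemeral environment, so everything in that environment can later be selected and cleaned up by label. The service and registry names are hypothetical:&lt;/p&gt;

```python
def deployment_manifest(service, image, environment):
    """Build a minimal Kubernetes Deployment manifest (as a dict) whose labels
    identify the ephemeral environment it belongs to."""
    labels = {"app": service, "environment": environment}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"{service}-{environment}", "labels": labels},
        "spec": {
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": service, "image": image}]},
            },
        },
    }

# An environment provisioned for a single pull request's test run.
manifest = deployment_manifest("orders", "registry.example.com/orders:abc123", "pr-57")
```

&lt;p&gt;Deleting the environment is then a single label-selector operation, such as deleting everything matching &lt;code&gt;environment=pr-57&lt;/code&gt;.&lt;/p&gt;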

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This is part 4 of our &lt;a href="https://www.gocd.org/tags/cd-for-microservices.html" rel="noopener noreferrer"&gt;CD of Microservices blog series&lt;/a&gt;. We have talked about environment strategy, including artifact promotion and leveraging modern infrastructure for dynamic environments. In the next post, we will talk about the fourth consideration: configuration strategy.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>docker</category>
      <category>cd</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Continuous Delivery of Microservices - Trunk Based Development and Feature Toggles</title>
      <dc:creator>Sheroy Marker</dc:creator>
      <pubDate>Mon, 08 Apr 2019 16:51:48 +0000</pubDate>
      <link>https://dev.to/gocd/continuous-delivery-of-microservices-trunk-based-development-and-feature-toggles-435d</link>
      <guid>https://dev.to/gocd/continuous-delivery-of-microservices-trunk-based-development-and-feature-toggles-435d</guid>
      <description>&lt;p&gt;This is the third post in the series - &lt;a href="https://www.gocd.org/tags/cd-for-microservices.html" rel="noopener noreferrer"&gt;CD of Microservices&lt;/a&gt;. In the &lt;a href="https://www.gocd.org/2018/05/08/continuous-delivery-microservices-test-strategy/" rel="noopener noreferrer"&gt;previous post&lt;/a&gt;, we talked about testing strategy for building CD pipelines on microservices architecture. In this post, we’ll get deep into CI practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous integration
&lt;/h2&gt;

&lt;p&gt;Continuous integration is a key practice in a successful continuous delivery strategy. Simply defined, it’s a practice that requires developers to integrate their code into a shared repository several times a day. Every checkin is verified by an automated build, allowing teams to detect problems early.&lt;/p&gt;

&lt;p&gt;We’ll focus on two key practices, trunk based development and feature toggles. These two go a long way in implementing a simple and robust CI process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trunk Based Development
&lt;/h2&gt;

&lt;p&gt;In trunk based development (TBD), developers collaborate on code in a single branch called “trunk”. The key benefit is to avoid drift in development branches and the resulting merge hell.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2Fb86f90a7fe30%2Fdownload%2FImage%25202019-04-04%2520at%25206.42.23%2520PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2Fb86f90a7fe30%2Fdownload%2FImage%25202019-04-04%2520at%25206.42.23%2520PM.png" alt="Trunk Based Development - TBD"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is contrary to the practice of maintaining long-lived feature and release branches. In a branching model, though you may be running builds on individual branches, arguably &lt;a href="https://www.gocd.org/2017/05/16/its-not-CI-its-CI-theatre/" rel="noopener noreferrer"&gt;you aren’t doing continuous integration&lt;/a&gt;. In trunk based development, you should never find your trunk in a state where your CD process is unable to deploy. All code should be checked into trunk, built and tested constantly, and the codebase deployable on demand: all of this makes CD a reality.&lt;/p&gt;

&lt;p&gt;TBD results in much simpler CD workflows: you don’t have to build multiple branches in parallel, map branches to environments, or re-test when the same features get merged to trunk. Simplifying workflows is very important in the context of microservices, as the complexity is exacerbated as the number of microservices grows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinfowler.com/articles/feature-toggles.html" rel="noopener noreferrer"&gt;Feature toggles&lt;/a&gt; is an important technique for TBD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature Toggles
&lt;/h2&gt;

&lt;p&gt;Feature toggles enable commits of a combination of work-in-progress and completed features. With these toggles, you can turn off the manifestation of incomplete features in production, until the features are dev complete and tested sufficiently in pre-production environments.&lt;/p&gt;

&lt;p&gt;Here is a very simple example. The team is working on four features for the same application: search, menu, sign-in, and in-app chat. Suppose the in-app chat feature is incomplete (but in TBD you still check it into trunk), or you find issues with in-app chat in pre-production testing. With feature toggles, you can simply turn in-app chat off, even at run time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2Fbaba3926feaa%2Fdownload%2FImage%25202019-04-04%2520at%25206.42.51%2520PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2Fbaba3926feaa%2Fdownload%2FImage%25202019-04-04%2520at%25206.42.51%2520PM.png" alt="Feature Toggles"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feature toggles are usually stored in a specification or configuration file close to the codebase and used by automation in the CD pipeline to turn toggles on in specific environments. In the code, they are just conditions. You should separate setting these toggles from the actual release process so that you can control them at run time.&lt;/p&gt;
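&lt;p&gt;A minimal sketch of the in-app chat example, assuming toggle states have been loaded from such a configuration file into a simple map:&lt;/p&gt;

```python
# Toggle states would normally be loaded from a config file next to the
# codebase and set per environment by the CD pipeline; this map stands in.
toggles = {"search": True, "menu": True, "sign_in": True, "in_app_chat": False}

def is_enabled(feature):
    return toggles.get(feature, False)

def render_navigation():
    # In the code, a toggle is just a condition guarding the feature.
    return [f for f in ("search", "menu", "sign_in", "in_app_chat") if is_enabled(f)]

nav = render_navigation()            # in-app chat hidden while incomplete
toggles["in_app_chat"] = True        # flipped at run time, no redeploy needed
nav_after = render_navigation()
```

&lt;p&gt;The key point is the last line before the second render: the feature appears or disappears by flipping state, not by shipping a new build.&lt;/p&gt;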

&lt;p&gt;Some things you should consider when implementing feature toggles:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature toggles should be short lived&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Feature toggles should be discarded once a feature has gone through the development lifecycle and is turned on in production. They are technical debt that needs to be cleaned up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use tooling to manage toggles’ lifecycles&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Don’t underestimate the amount of effort required to manage these toggles. You can easily run into hundreds of them. Use tooling which provides visibility into the list of toggles, what’s turned on in which environment, and which features will get turned on in production in the next release.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consider building your own utilities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There isn't a lot of tooling out there, so consider writing your own utilities to solve some of these problems. Again, getting this tooling in place before you go down a microservice strategy is a wise thing to do.&lt;/p&gt;

&lt;p&gt;Once you have a mechanism for maintaining feature toggles, you could use the same mechanism to introduce other categories of toggles:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2Fbbdfb85bfea2%2Fdownload%2FImage%25202019-04-04%2520at%25206.43.29%2520PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcl.ly%2Fbbdfb85bfea2%2Fdownload%2FImage%25202019-04-04%2520at%25206.43.29%2520PM.png" alt="Feature Toggles Permissions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram is borrowed from my colleague Pete Hodgson, who has written a lot about toggles. In this diagram, release toggles are defined as toggles that control access to unfinished code, which is the same as feature toggles.&lt;/p&gt;

&lt;p&gt;Ops toggles control the behavior of production code. Retail sites with heavy seasonal traffic use Ops toggles to provide a degraded experience under peak loads. For example, when Apple releases a new iPhone and a crazy amount of traffic comes in just to buy it, they can turn off features like user recommendations during peak time to support the sales transactions.&lt;/p&gt;

&lt;p&gt;Permissions toggles are used to turn on specific behavior for privileged users, such as admin features, or provide a guest user browsing experience.&lt;/p&gt;

&lt;p&gt;Experimental toggles are used for multivariate testing: they test how well a feature is received before you make it permanent. This is the same as A/B testing.&lt;/p&gt;

&lt;p&gt;One thing to note is that each of these categories of toggles has a very different lifecycle and a different way of being turned on and off, so you need to plan accordingly. Release toggles are generally short-lived, living only for the duration of a few releases. Once the feature is completely released, you get rid of the toggle and remove the technical debt. An ops toggle is used frequently for functionality in production, so it lives much longer.&lt;/p&gt;
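&lt;p&gt;One way to plan for these differing lifecycles is to record a category alongside each toggle, so tooling can flag release toggles that have outlived their feature. This is only an illustrative sketch, not any particular tool's data model:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Toggle:
    name: str
    category: str   # "release", "ops", "permissions", or "experiment"
    enabled: bool
    ages_out: bool  # release/experiment toggles are debt to be removed

def overdue_toggles(inventory, released_features):
    """Release toggles whose feature is already fully live are technical
    debt: flag them for deletion."""
    return [t.name for t in inventory
            if t.category == "release" and t.name in released_features]

inventory = [
    Toggle("in_app_chat", "release", True, ages_out=True),
    Toggle("degraded_recommendations", "ops", False, ages_out=False),
]
debt = overdue_toggles(inventory, released_features={"in_app_chat"})
```

&lt;p&gt;The ops toggle is never flagged: it is expected to stay, while the release toggle becomes debt the moment its feature ships.&lt;/p&gt;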

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This is part 3 of our CD of Microservices blog series. We have talked in depth about CI practices, in particular trunk based development and feature toggles. In the next post, we will talk about the third consideration: environment strategy.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>microservices</category>
      <category>continuousdelivery</category>
      <category>ci</category>
    </item>
    <item>
      <title>Test Strategy for Microservices</title>
      <dc:creator>Sheroy Marker</dc:creator>
      <pubDate>Wed, 06 Feb 2019 01:09:27 +0000</pubDate>
      <link>https://dev.to/gocd/test-strategy-for-microservices-355p</link>
      <guid>https://dev.to/gocd/test-strategy-for-microservices-355p</guid>
      <description>&lt;p&gt;This is the second post in the series - &lt;a href="https://www.gocd.org/tags/cd-for-microservices.html" rel="noopener noreferrer"&gt;Continuous Delivery for Microservices&lt;/a&gt;. &lt;a href="https://www.gocd.org/2018/04/25/five-considerations-continuous-delivery-microservices/" rel="noopener noreferrer"&gt;In my previous post&lt;/a&gt;, I gave an overview of five considerations for building CD pipelines on a microservices architecture. In this post, we’ll get deeper into test strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test Strategy
&lt;/h2&gt;

&lt;p&gt;A microservices architecture involves many moving parts with different guarantees and failure modes. Testing and verification of these systems are significantly more nuanced and complex than testing a traditional monolithic application. An effective test strategy needs to account for both testing individual services in isolation and the verification of overall system behavior. You can broadly break testing down into two categories: pre-production testing and monitoring and testing in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh610puekxtc6vjt71gtc.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh610puekxtc6vjt71gtc.jpeg" alt="Test Strategies for Microservices" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-production testing of services
&lt;/h2&gt;

&lt;p&gt;Here’s a simple example where you have build pipelines for multiple services and you're testing a service in isolation. In this case, the traditional &lt;a href="https://martinfowler.com/bliki/TestPyramid.html" rel="noopener noreferrer"&gt;test pyramid&lt;/a&gt; helps to maintain a balance between the different types of tests.&lt;/p&gt;

&lt;p&gt;In a typical test pyramid, you have:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unit tests&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Tests that cover the smallest piece of testable functionality in your software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration tests&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Integration tests, in this context, deal with testing integrations and interface defects for components within your service; these are more granular tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Component tests&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
When you look at component tests for microservices, a component is a service that exposes certain functionality. Component tests for a microservice can therefore just be acceptance tests for the service, and your tests need to validate whether the service provides the functionality it promises.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contract tests&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Another category of tests that's very applicable to microservices is contract tests. These test a service's API contract: whether the service honors the API it promises. A cool variation is consumer-driven contract tests. These tests are written by the consumer services of an API; the consumers codify the contract in a suite of tests that run on every change to the API. That way, if a change to the API breaks a contract that one of its consumers expects, the breaking change is caught early in the CD pipeline.&lt;/p&gt;
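&lt;p&gt;A consumer-driven contract test can be as simple as the consumer asserting the response shape it depends on. In this sketch, &lt;code&gt;get_user&lt;/code&gt; stands in for a call to the provider's API, and the contract and field names are hypothetical:&lt;/p&gt;

```python
def get_user(user_id):
    # Stand-in for the provider's API (an HTTP call in a real pipeline).
    return {"id": user_id, "name": "Ada", "email": "ada@example.com"}

# The consumer codifies only the fields it actually relies on.
CONSUMER_CONTRACT = {"id": int, "name": str}

def honors_contract(response, contract):
    """True if every field the consumer needs is present with the right type."""
    return all(isinstance(response.get(field), ftype)
               for field, ftype in contract.items())

ok = honors_contract(get_user(7), CONSUMER_CONTRACT)
```

&lt;p&gt;Run against every provider change, a check like this fails the pipeline as soon as the provider drops or retypes a field a consumer depends on.&lt;/p&gt;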

&lt;p&gt;&lt;strong&gt;End-to-end tests&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The test suites we discussed earlier are applicable to testing individual services. End-to-end tests, however, are more coarse-grained and try to test the functionality of an overall system. Depending on the deployment architecture you're going for, if you are deploying all of your services in a pre-production environment in an aggregate manner, you can run end-to-end tests there. Since end-to-end tests are usually brittle and take a long time to run, you’ll usually want to restrict the number of these tests to as few as possible. If you have microservices that are completely independent and don't get deployed to a pre-production test environment, then consider approaches that test in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring and testing in production
&lt;/h2&gt;

&lt;p&gt;This traditional style of testing has its limitations. There are categories of errors that you can’t really simulate in test environments, such as issues caused by eventual consistency in a highly distributed system, and hardware and network failures causing parts of the system to fail. You have to supplement traditional testing techniques with techniques that allow you to profile and monitor systems in production effectively, and to take remedial action in production when things do go wrong. In this post, I will focus on testing in production, and cover remediation strategy in a later part of this series.&lt;/p&gt;

&lt;p&gt;There is a category of testing in production called &lt;a href="https://en.wikipedia.org/wiki/Fault_injection" rel="noopener noreferrer"&gt;fault-injection&lt;/a&gt;, which is introducing errors in a controlled manner in production to see if your system can hold up to those errors.&lt;/p&gt;

&lt;p&gt;A variation of in-production testing is a set of deployment strategies that are popular in these environments:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Canary deployment&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.gocd.org/2017/08/15/canary-releases/" rel="noopener noreferrer"&gt;Canary deployment&lt;/a&gt; is where you take a new release and release it to a certain subsection of your production infrastructure, see how well that goes, and keep increasing the footprint of the new service until the time you completely roll it out. If you face issues, you can start rolling back the new version of your service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6gwrmuyhjd2h3nw1psu.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6gwrmuyhjd2h3nw1psu.jpeg" alt="Canary deployments" width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blue-Green deployment&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.gocd.org/2017/07/25/blue-green-deployments/" rel="noopener noreferrer"&gt;Blue-green deployments&lt;/a&gt; are similar, where you have a new footprint of your new service, and then you do some testing and route some traffic through it. If everything is fine, you switch over all of your traffic to the new instance of services, otherwise, you keep the old footprint going.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F044lnv9l26h56webwjuy.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F044lnv9l26h56webwjuy.jpeg" alt="Blue green deployments" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multivariate testing&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Another interesting variation of this kind of testing is multivariate testing. Here, you're not testing your new service for defects; instead, you are A/B testing new release features behind A/B testing toggles. The purpose of this type of testing is to see how well these features are received. You can then decide to roll them out to your entire set of users or make fixes where necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This is part 2 of our &lt;a href="https://www.gocd.org/tags/cd-for-microservices.html" rel="noopener noreferrer"&gt;Continuous Delivery for Microservices&lt;/a&gt; blog series. We have talked in depth about testing strategies for microservices, which include how to apply traditional testing pyramids to pre-production testing for microservices and also new techniques for production monitoring and testing. In my next post, we will talk about the second consideration: CI practices for microservices systems.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>testing</category>
    </item>
    <item>
      <title>5 Considerations for Continuous Delivery of Microservices</title>
      <dc:creator>Sheroy Marker</dc:creator>
      <pubDate>Mon, 04 Feb 2019 19:27:51 +0000</pubDate>
      <link>https://dev.to/gocd/5-considerations-for-continuous-delivery-of-microservices-n87</link>
      <guid>https://dev.to/gocd/5-considerations-for-continuous-delivery-of-microservices-n87</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8oni7zdeujrjaf3vils.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8oni7zdeujrjaf3vils.jpg" alt="Continuous Delivery of Microservices"&gt;&lt;/a&gt;&lt;br&gt;
A microservices architecture builds software as suites of collaborating services. These architectures are generally accepted as a better way to build apps these days.&lt;/p&gt;

&lt;p&gt;Continuous Delivery is an essential component of any software delivery practice. Regardless of the target deployment environment, you have to design a CD workflow to get software changes into production.&lt;/p&gt;

&lt;p&gt;At ThoughtWorks, while partnering with clients building business-critical software, we overcame many challenges around building CD workflows for microservices. In this blog post, I will share the considerations we keep in mind in architecture design and application development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Microservices and Continuous Delivery
&lt;/h2&gt;

&lt;p&gt;According to &lt;a href="https://martinfowler.com/articles/microservices.html" rel="noopener noreferrer"&gt;Martin Fowler&lt;/a&gt;, microservices architectures are  “a particular way of designing software applications as suites of independently deployable services.” These architectures are prevalent these days for building applications based on distributed systems concepts.&lt;/p&gt;

&lt;p&gt;Jez Humble, in his pioneering book, describes continuous delivery as "the ability to get changes of all types - including new features, configuration, bug fixes, and experiments - into production, safely and quickly in a sustainable way."&lt;/p&gt;

&lt;p&gt;Regardless of the target deployment environment or your architecture choice, whether a monolith in the past or microservices these days, it’s important to design a continuous delivery workflow to get your changes into production. A CD workflow is central to a DevOps process and spans various functions in an organization, including development, QA, and IT operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Four Challenges for CD on Microservices
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Maintaining the integrity of complex distributed systems.&lt;/strong&gt; Since you decompose a large monolithic system into smaller, more manageable microservices, the overall complexity of the system itself increases. You now have to deal with distributed systems concerns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Safely and rapidly releasing features constantly.&lt;/strong&gt; Managing frequent feature releases needs special consideration when your features could involve changes in one or many microservices.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Managing deployments of disparate technology stacks.&lt;/strong&gt; Microservice environments often include disparate technology stacks for services. Managing a deployment process across these different stacks is challenging.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Process and tooling for deploying services independently and out of band.&lt;/strong&gt; There are a lot of tools available to model CD workflows. It’s daunting to initially map out your CD workflow and pick tooling that best represents this workflow.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Five Considerations for CD on Microservices
&lt;/h2&gt;

&lt;p&gt;There are five considerations I recommend keeping in mind when you design a CD workflow on microservices architectures. I will have an in-depth discussion for each of them in the following posts of this series. Here is just an overview:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Have an effective test strategy&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh610puekxtc6vjt71gtc.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh610puekxtc6vjt71gtc.jpeg" alt="Effective test strategy"&gt;&lt;/a&gt;&lt;br&gt;
Testing and verification of microservice systems is significantly more nuanced and complex than testing a traditional monolithic application. An effective test strategy needs to account for both testing individual services in isolation and the verification of overall system behavior.&lt;/p&gt;

&lt;p&gt;For pre-production testing of services, especially in an isolated manner, traditional testing methodologies are still applicable and relevant. The &lt;a href="https://martinfowler.com/bliki/TestPyramid.html" rel="noopener noreferrer"&gt;test pyramid&lt;/a&gt; can still help you maintain a balance between the different types of tests. However, this style of testing has limited effectiveness when testing the aggregate of services. There are categories of errors that you can’t simulate in test environments: for example, issues caused by eventual consistency in a highly distributed system, or hardware and network failures causing parts of the system to fail.&lt;/p&gt;

&lt;p&gt;You have to supplement traditional testing techniques with techniques like synthetic user testing, lightweight user acceptance testing and fault injection testing.&lt;/p&gt;
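&lt;p&gt;As a rough illustration of the synthetic user testing idea, a scripted probe can exercise a real endpoint continuously and classify the outcome. This is a minimal sketch; the status classes and latency threshold are illustrative assumptions, not a prescribed scheme:&lt;/p&gt;

```python
# Hypothetical sketch of a synthetic user check: a scripted probe runs
# against a live endpoint and its raw result is classified into an
# alertable state. Thresholds and state names are assumptions.
from dataclasses import dataclass

@dataclass
class ProbeResult:
    status_code: int
    latency_ms: float

def classify(result: ProbeResult, max_latency_ms: float = 500.0) -> str:
    """Turn a raw probe result into an alertable state."""
    if result.status_code >= 500:
        return "critical"   # the service itself is failing
    if result.status_code >= 400:
        return "warning"    # the probe itself may be misconfigured
    if result.latency_ms > max_latency_ms:
        return "degraded"   # up, but slower than the SLO allows
    return "healthy"

if __name__ == "__main__":
    print(classify(ProbeResult(200, 120.0)))  # healthy
    print(classify(ProbeResult(503, 90.0)))   # critical
```

&lt;p&gt;A scheduler would run such probes continuously against production and page on “critical” results, catching the class of failures that pre-production tests cannot simulate.&lt;/p&gt;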

&lt;p&gt;&lt;strong&gt;2. Examine your CI practices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Continuous integration is a key practice in a successful continuous delivery strategy. Apart from the obvious considerations around build servers and build definitions, &lt;a href="https://trunkbaseddevelopment.com" rel="noopener noreferrer"&gt;trunk based development&lt;/a&gt; and &lt;a href="https://martinfowler.com/articles/feature-toggles.html" rel="noopener noreferrer"&gt;feature toggles&lt;/a&gt; are two key practices that go a long way in implementing a simple and robust CI process.&lt;/p&gt;

&lt;p&gt;In trunk based development, developers collaborate on code in a single branch called “trunk.” The key benefit is to avoid drift in development branches and the resulting merge hell. This is contrary to the practice of maintaining long-lived feature and release branches. In a branching model, though you may be running builds on individual branches, arguably &lt;a href="https://www.gocd.org/2017/05/16/its-not-CI-its-CI-theatre.html" rel="noopener noreferrer"&gt;you aren’t doing continuous integration&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To do trunk based development, you need controls called feature toggles. Feature toggles let developers commit a mix of work-in-progress and completed features to trunk. With these toggles, you can turn off the manifestation of incomplete features in production until those features are dev complete and sufficiently tested in pre-production environments. Feature toggles are usually stored in a specification or configuration file close to the codebase, and automation in the CD pipeline uses that file to turn toggles on in specific environments.&lt;/p&gt;

&lt;p&gt;Once you have a mechanism for maintaining feature toggles, you can use the same mechanism to introduce other categories of toggles, like release toggles (to control access to unfinished code), ops toggles (to control the behavior of production code), permissions toggles (to turn on specific behavior for privileged users), and experimental toggles (for multivariate testing, to learn how well a feature is received before you make it permanent).&lt;/p&gt;
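&lt;p&gt;A minimal sketch of such a toggle mechanism, assuming toggles live in a configuration file close to the code as described above. The toggle names and environment names are hypothetical:&lt;/p&gt;

```python
# Environment-scoped feature toggles, loaded from a config file kept
# next to the codebase. Toggle and environment names are illustrative.
TOGGLES = {
    # toggle name          -> environments where it is ON
    "new-checkout-flow":   {"dev", "qa"},           # feature still in testing
    "ops-verbose-logging": {"dev", "qa", "prod"},   # an ops toggle
}

def is_enabled(toggle: str, environment: str) -> bool:
    """Unknown toggles default to off, so incomplete features stay hidden."""
    return environment in TOGGLES.get(toggle, set())

if __name__ == "__main__":
    if is_enabled("new-checkout-flow", "prod"):
        pass  # render the new flow; never reached until the toggle flips
```

&lt;p&gt;Because unknown toggles default to off, a half-finished feature committed to trunk stays invisible in production until the pipeline flips its toggle for that environment.&lt;/p&gt;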

&lt;p&gt;&lt;strong&gt;3. Plan your environments&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzc7vjtxo8qd4l0gf86r.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzc7vjtxo8qd4l0gf86r.jpeg" alt="Environments plan"&gt;&lt;/a&gt;&lt;br&gt;
An environment plan covers the set of environments you need, their intended uses, strategies to promote artifacts through them, and the toggle states in each.&lt;/p&gt;

&lt;p&gt;First, think about what environments are needed and their intended use cases. Different groups in your organization will have different, competing needs, and when creating environments you should cater to all of them. Secondly, if possible, use cloud infrastructure to create environments dynamically. For example, use Kubernetes’ labels capability to create on-the-fly test environments for automated testing rather than maintaining long-lived ones. Thirdly, have an artifact promotion strategy. CD pipelines generate a lot of artifacts, so you should think about how many artifacts to store, how many repositories you need, and so on.&lt;/p&gt;
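&lt;p&gt;The promotion part of such a plan can be sketched as a simple rule: an artifact may only enter an environment after it has been verified in every earlier one. The environment names below are illustrative assumptions:&lt;/p&gt;

```python
# Sketch of an artifact promotion rule for an ordered set of
# environments. Environment names are illustrative, not prescriptive.
PROMOTION_ORDER = ["ci", "qa", "staging", "prod"]

def can_promote(verified_in: set, target: str) -> bool:
    """True if the artifact has been verified in every environment
    that precedes `target` in the promotion order."""
    earlier = PROMOTION_ORDER[:PROMOTION_ORDER.index(target)]
    return all(env in verified_in for env in earlier)

if __name__ == "__main__":
    print(can_promote({"ci", "qa", "staging"}, "prod"))  # True
```

&lt;p&gt;Encoding the rule this way makes the promotion path explicit and auditable, rather than leaving it implicit in how pipelines happen to be wired together.&lt;/p&gt;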

&lt;p&gt;&lt;strong&gt;4. Manage configuration strategically&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An application’s configuration includes everything that varies per deployment and should be stored separately from the code. How should you treat configuration when you have suites of microservices?&lt;/p&gt;

&lt;p&gt;One technique we've seen to be useful is to manage deployment configuration centrally in repositories like &lt;a href="https://www.consul.io/" rel="noopener noreferrer"&gt;Consul&lt;/a&gt; or &lt;a href="https://www.vaultproject.io/" rel="noopener noreferrer"&gt;Vault&lt;/a&gt;. Spreading deployment configuration across tools like &lt;a href="https://www.chef.io/chef/" rel="noopener noreferrer"&gt;Chef&lt;/a&gt; and the CD pipeline makes it hard to understand and reason about.&lt;/p&gt;

&lt;p&gt;Another technique we use is to standardize the process for distributing configuration regardless of the technology stack of your services, and let each service consume that configuration in whatever way suits its stack. For example, we generally follow the &lt;a href="https://12factor.net/" rel="noopener noreferrer"&gt;12-factor recommendations&lt;/a&gt; and avoid distributing configuration files.&lt;/p&gt;
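&lt;p&gt;A sketch of that 12-factor style in practice: each service, whatever its stack, reads deployment configuration from environment variables at startup instead of from distributed files. The variable names below are hypothetical:&lt;/p&gt;

```python
# 12-factor style configuration: everything that varies per deployment
# comes from environment variables. Variable names are illustrative.
import os

def load_config() -> dict:
    return {
        # required: fail fast at startup if the value is missing
        "db_url": os.environ["DATABASE_URL"],
        # optional, with a safe default per deployment
        "log_level": os.environ.get("LOG_LEVEL", "info"),
        "timeout_s": int(os.environ.get("REQUEST_TIMEOUT_S", "30")),
    }
```

&lt;p&gt;Failing fast on a missing required variable surfaces misconfiguration at deploy time, when it is cheap to fix, instead of at request time in production.&lt;/p&gt;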

&lt;p&gt;And lastly, secrets like certificates need a governance process to ensure they are managed appropriately. This is usually a manual process, but you need to think about it early and get it in place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Prepare for things to go wrong&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In microservices systems, multiple services are updated frequently. How do you respond when the deployment of a service introduces instability or bugs?&lt;/p&gt;

&lt;p&gt;Roll forward, which means finding the root cause of a failure and applying the fix as soon as possible, is often the best remediation response. A prerequisite for being able to do this is to ensure you have the capability to release from a &lt;a href="https://www.gocd.org/2017/06/20/hotfixes-rollback-rollforward/" rel="noopener noreferrer"&gt;hot fix branch&lt;/a&gt; straight to production. You may not want a fix to a production outage to go through the CD pipeline, depending on the time it takes for a change to make it through the pipeline.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tkb13umqdcxs5iplmw5.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tkb13umqdcxs5iplmw5.jpeg" alt="Hotfix process"&gt;&lt;/a&gt;&lt;br&gt;
Rollbacks are always tricky in production systems. If a change is granular and easy to reason about, rolling it back is straightforward. But if a deployment includes changes that are hard to reason about, e.g., DB changes, especially schema changes, you need to deploy the DB changes separately from the code changes, in consecutive deployments, to ensure the DB changes remain backwards compatible with earlier versions of the code.&lt;/p&gt;
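&lt;p&gt;One common way to keep DB changes backwards compatible across consecutive deployments is the expand/contract pattern: while a column is being renamed in its own deployment, the code tolerates both shapes, so either schema version works with either adjacent code version. The field names below are hypothetical:&lt;/p&gt;

```python
# Expand/contract reader: during a hypothetical rename of the "email"
# column to "email_address" (shipped as a separate DB deployment), the
# code reads both shapes, so rollback of either deployment stays safe.
from typing import Optional

def get_email(record: dict) -> Optional[str]:
    # prefer the new column, fall back to the old one mid-migration
    if record.get("email_address") is not None:
        return record["email_address"]
    return record.get("email")
```

&lt;p&gt;Once all code reading the old column is gone, a final “contract” deployment drops it, completing the rename without any step that breaks the running version.&lt;/p&gt;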

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This is part 1 of our Continuous Delivery of Microservices blog series. We have covered four challenges and five considerations for building CD pipelines on microservices architectures. In the next post, I will discuss the first consideration in depth: a testing strategy for systems built on microservices architectures.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>microservices</category>
      <category>continuousdelivery</category>
      <category>teststrategy</category>
    </item>
  </channel>
</rss>
