<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Accelerate Delivery</title>
    <description>The latest articles on DEV Community by Accelerate Delivery (@acceldelivery).</description>
    <link>https://dev.to/acceldelivery</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F87001%2F8a2eae47-ae09-49c0-b1b0-26384cbe431d.png</url>
      <title>DEV Community: Accelerate Delivery</title>
      <link>https://dev.to/acceldelivery</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/acceldelivery"/>
    <language>en</language>
    <item>
      <title>Continuous Delivery For Hiring</title>
      <dc:creator>Accelerate Delivery</dc:creator>
      <pubDate>Fri, 26 Oct 2018 12:20:10 +0000</pubDate>
      <link>https://dev.to/acceldelivery/continuous-delivery-for-hiring-1iel</link>
      <guid>https://dev.to/acceldelivery/continuous-delivery-for-hiring-1iel</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally posted at the &lt;a href="https://accelerate.delivery/continuous-delivery-for-hiring/"&gt;Accelerate Delivery&lt;/a&gt; blog.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Continuous Delivery removes the friction of delivering code to production. We can apply similar principles to the hiring process in software engineering. Like a software delivery pipeline, we can optimize steps in the process. As in continuous delivery, we want to maximize the throughput and stability of our hiring pipeline. In the context of hiring, "throughput" and "stability" have particular meanings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Throughput&lt;/strong&gt; is the speed at which we hire a candidate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stability&lt;/strong&gt; is the quality of the candidate regarding team fit and engineering skills&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll examine some experiments you can run on your own hiring pipeline and the benefits you should expect to achieve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tactics
&lt;/h2&gt;

&lt;p&gt;The following experiments can be run individually or all at once. You can adopt them 100% or to a small degree. Like all things continuous delivery, it's best to take the smallest actionable step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Remove phone screens
&lt;/h3&gt;

&lt;p&gt;Let's assume a typical process to hire an engineer looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J4h-cx2W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6g56zmbe95k9hmrrotp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J4h-cx2W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6g56zmbe95k9hmrrotp2.png" alt="Hiring Process"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The phone screen stage regularly loops several times. There are phone screens with recruiters, HR representatives, engineering managers, and engineers. Employers hope to create a funnel of applicants that decreases in quantity but increases in quality with each phone screen. The goal is to have the highest quality candidates make it to the interview stage. The antipattern here is that high- and low-quality candidates go through all the same steps. Each step has a chance to weed out bad fits, but also to deter good fits. Imagine what it's like for the candidate when faced with these gatekeepers. High quality candidates have choices. Why would they choose to go through yet-another-phone-screen with your company? From the hiring company's perspective, it's a funnel to produce great candidates. From the engineer's perspective, it's a gauntlet.&lt;/p&gt;

&lt;p&gt;As a hiring manager, I want to replace as many phone screens as I can with "trust". One form of trust is that other people are doing their job. Establish relationships with people upstream in the hiring process. Learn how they screen candidates. Teach them how you do it. Help them become champions for your preferences. The other form of trust is that you are doing your job. Convince the other gatekeepers that your phone screen will be representative of their needs. The goal is to remove yourself entirely, or to be the definitive phone screen. From the candidate's perspective this means they may have a single phone screen. They may have no phone screens at all if they already have a relationship with you or someone you trust.&lt;/p&gt;

&lt;h3&gt;
  
  
  High signal interviews
&lt;/h3&gt;

&lt;p&gt;An interview is like a code review. In this simile the candidate is a pull request and the interviewing team is doing a peer review. A peer reviewer trusts that CI enforces a stylistic baseline with linters and formatters. Peer reviewers don't review code style. Peer reviewers check for requirements implementation, logic flaws, and maintainability. This is an engineer's precious output. This is the unique, creative code that a human can produce that a computer cannot. Like a code review, you should not be verifying stylistic opinions in an interview. Interviewers should be checking the logic and culture that the candidate produces.&lt;/p&gt;

&lt;p&gt;To remove stylistic concerns, verify as much as you can about a candidate before the interview. Look for any possibility of a code sample. Reading code before the interview shortcuts awkward whiteboard questions about printing odd|prime|divisible-by-five numbers. Raise your baseline understanding of a candidate before an interview. This allows the interview to focus on the candidate's precious output.&lt;/p&gt;

&lt;p&gt;During the interview present the candidate with high-signal questions. These questions should lead the interviewing team to make the ultimate decision: yes or no. "High-signal" has a different meaning for different teams. Given your team's desired characteristics, ask the candidate questions that verify a match. If your team is "biased towards action", "collaborative", "enthusiastic", "curious", etc. ask questions that drive answers around those characteristics. Avoid stylistic opinion questions like, "Which CSS-in-JS library do you prefer?" This is a great topic to discuss over lunch, but not helpful in an interview. We are looking for "&lt;a href="https://blog.codinghorror.com/strong-opinions-weakly-held/"&gt;strong opinions, weakly held&lt;/a&gt;", but we shouldn't measure a candidate based on their current opinions. It's more important to verify that a candidate changes their opinions based on new information than what those opinions actually are.&lt;/p&gt;

&lt;h3&gt;
  
  
  Offer letter at the end of an interview
&lt;/h3&gt;

&lt;p&gt;Look at the interview process diagram again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J4h-cx2W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6g56zmbe95k9hmrrotp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J4h-cx2W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6g56zmbe95k9hmrrotp2.png" alt="Hiring Process"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sadly, the interview is only the halfway point. We can make optimizations even after the interview. Employers gain time and goodwill by having an offer letter ready at the end of an interview. If you've honed your interview process to produce a high signal then what's stopping you? There's prep work involved in crafting an offer letter, but nothing unusual. The variables are generally: the salary you can offer, sign-on bonuses, stock options, and the candidate's name. For many candidates you can take care of this work up front. There may be ranges you can adjust as the interview progresses, but the variance should be minimal. This is an interview for a Senior ______ Engineer, or a Director of _______. The budget and variations are well-known before the interview takes place.&lt;/p&gt;

&lt;p&gt;There's momentum during an interview. The candidate meets people they could be working with. The candidate begins to develop a relationship with them. The candidate asks questions and voices concerns that are hopefully answered and alleviated. The end of the interview comes and then, the candidate waits. Try drafting the offer letter before the interview. Get feedback from the interviewers before the end of the interview. You can continue the momentum of the interview with an offer letter and be one step closer to hiring. This isn't to say the "Offer" stage of the process completely collapses into the "Interview" stage. There is still time for negotiations around the offer. This is to say that you can start the offer stage immediately.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;p&gt;Applying these tactics to hiring has similar advantages to continuous delivery in software development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Low risk
&lt;/h3&gt;

&lt;p&gt;Much like a small PR reduces risk, a short interview process removes risk for the candidate. They don't have to think about sneaking away from their current position for many phone screens or several on-site interviews. As an employer you've demonstrated that you know what you're looking for, and you make it easy for the right candidate to simply say, "yes".&lt;/p&gt;

&lt;p&gt;Risk decreases for the employer too. You're more likely to bring high performing candidates from the interview stage to the hired stage. Highly qualified candidates have less time to interview elsewhere and consider other offers.&lt;/p&gt;

&lt;h3&gt;
  
  
  High throughput
&lt;/h3&gt;

&lt;p&gt;Open headcount is a signal that the business and your team have a need. The longer the headcount is open, the longer that need goes unfulfilled. In an understaffed situation the incumbent engineers are more stressed. These developers work around the gap or try to pick up the slack. Fill open headcount as quickly as possible with the right person. You can get qualified candidates into roles faster when you optimize your hiring process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Hiring, like software delivery, can benefit by focusing on stability and throughput. You don't want to sacrifice one for the other. Make modifications to lend credence to both.&lt;/p&gt;

</description>
      <category>cd</category>
      <category>agile</category>
      <category>career</category>
    </item>
    <item>
      <title>The Impact of QA on Continuous Delivery</title>
      <dc:creator>Accelerate Delivery</dc:creator>
      <pubDate>Mon, 01 Oct 2018 12:53:23 +0000</pubDate>
      <link>https://dev.to/acceldelivery/the-impact-of-qa-on-continuous-delivery-345m</link>
      <guid>https://dev.to/acceldelivery/the-impact-of-qa-on-continuous-delivery-345m</guid>
      <description>&lt;p&gt;&lt;em&gt;This was originally posted at the &lt;a href="https://accelerate.delivery/qa-impacts-cd/"&gt;Accelerate Delivery&lt;/a&gt; blog&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  QA in Engineering
&lt;/h2&gt;

&lt;p&gt;A software engineering team's goal is to maximize the throughput and stability of effort put into a product. I'd like to examine where Quality Assurance (QA) fits into this goal. To calibrate, Wikipedia describes QA as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;a means of monitoring the software engineering processes and methods used to ensure quality&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;From my experience, a designated role performs QA: the QA engineer. This person checks that developers have coded what they intended. QA engineers verify that a developer's effort matches design's and product management's expectations. Here's a diagram showing a simplified delivery flow for a new feature, and where QA fits:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N-DqNPyU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gnchwak9z5v2n3l2v36w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N-DqNPyU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gnchwak9z5v2n3l2v36w.png" alt="Delivery Lifecycle"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As mentioned engineering's goal is to maximize throughput and stability. What do we mean by maximizing stability? We can substitute "stability" for "expected behavior". We want to maximize the expected behavior from our product. This expected behavior can cover a wide spectrum. On one end of the spectrum stability is ensuring operational needs. For example, stability can mean that your product scales quickly and reliably in a high traffic scenario. On the other end of the spectrum stability is ensuring UI requests. For example, stability can mean that the blue button has the color and typography that design intended.&lt;/p&gt;

&lt;p&gt;The QA engineer's domain is stability. Adding a QA verification step to your delivery pipeline has an impact. What is the trade-off for this stability?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Good
&lt;/h2&gt;

&lt;p&gt;First, QA engineers improve quality if they perform their job. Developers make mistakes. Let me write it again because it's a fact and it's fine. Developers make mistakes. QA distributes the burden of catching mistakes. That burden is already shared by the developer who wrote the code, the peer reviewers, and the accepting product manager. QA verification is an extra check.&lt;/p&gt;

&lt;p&gt;Second, a QA engineer's perspective is a beneficial addition. Developers want to get code working. To do this they generally focus on the happy path. Peer reviewers focus a little less on the happy path, but generally empathize with the developer. QA engineers try to break the code. This is their unique perspective. The code is a black box to them. It's not a well-placed design pattern, the latest JavaScript library, or a clever algorithm. It's a thing they interact with. They want that interaction to be predictable under pressure.&lt;/p&gt;

&lt;p&gt;Finally, trust is improved with a QA step in the delivery pipeline. The analogies I hear from product managers are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You keep a beat cop on the street to prevent crime&lt;/li&gt;
&lt;li&gt;You put a backstop behind the pitcher&lt;/li&gt;
&lt;li&gt;You have a goalie as a last line of defense&lt;/li&gt;
&lt;li&gt;You can drive 120 mph and get where you're going faster, but the accidents will be worse.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These analogies are bad abstractions (What if we don't need "beat cops" anymore? What if it's more like "asking the pitcher to pitch slower than 50mph"? What if machines let us drive 120mph safely?), but their intention is clear. We have a higher degree of trust if we lower our tolerance for risk. If we abide by the rules we can expect fewer radical actors performing untrustworthy actions. If we submit our work to QA before it's shipped then we've done our due diligence.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bad
&lt;/h2&gt;

&lt;p&gt;Let me preface this section with caveats. My experience is with outsourced QA engineers in another time zone. Also, I'm speaking to a web development delivery pipeline. &lt;strong&gt;Please note these caveats!&lt;/strong&gt; They make a huge difference when discussing a QA engineer's role. Adjust the following points to your own situation; it's likely different, but related.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost
&lt;/h3&gt;

&lt;p&gt;There's a financial cost for QA engineers. Compare this cost against the price of a bad build making it to production. For some products the price of a bad build is higher than others. In web development rolling back a bad build is easy. However, even in web development, a bad build can be expensive. Ask yourself whether the stability QA provides is worth the cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Throughput
&lt;/h3&gt;

&lt;p&gt;Throughput will suffer any time you add a step in your delivery pipeline. Our QA team is seven hours ahead of our dev team. This is a bad split. It means that any work we finish has no hope of verification on the same day. Here are the approximate cycle time additions when a developer creates a pull request that QA needs to verify:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    +1 day:
    &lt;ul&gt;
      &lt;li&gt;Day 1: no issues found, merge PR&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    +2 days:
    &lt;ul&gt;
      &lt;li&gt;Day 1: QA has questions, we answer&lt;/li&gt;
      &lt;li&gt;Day 2: QA approves, merge PR&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    +X days:
    &lt;ul&gt;
      &lt;li&gt;Day 1: QA finds issues, dev updates&lt;/li&gt;
      &lt;li&gt;Day 2: QA reviews, finds issues&lt;/li&gt;
      &lt;li&gt;Day X: repeat until PR merge&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Waiting at least one day for QA on every issue is a high price to pay.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stability
&lt;/h3&gt;

&lt;p&gt;At PBS, for the pbs.org product, we ran an experiment. We had no designated QA role for two months. Of the 202 requests we delivered in that time, we flagged &amp;lt;5% as quality issues. These quality issues were trivial by engineers' and product managers' standards. I translated "trivial" to mean that they were easy to remedy and did not impact the mission. A few examples are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pressing spacebar on Firefox did not play/pause video&lt;/li&gt;
&lt;li&gt;We displayed a "Donation" screen instead of a "Related Videos" screen at the end of a subset of videos&lt;/li&gt;
&lt;li&gt;A carousel display unexpectedly reverted to a previous iteration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These were not system outages or recovery events (in the &lt;a href="https://dev.to/delivery-metrics/"&gt;MTTR&lt;/a&gt; sense). With a mature, reliable automated test suite, we could increase throughput with little impact to stability.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Mitigations
&lt;/h2&gt;

&lt;p&gt;How do you minimize the cost to throughput while increasing stability?&lt;/p&gt;

&lt;h3&gt;
  
  
  Selective QA
&lt;/h3&gt;

&lt;p&gt;We don't have to have a verification step for every change. There are cases where QA is more valuable:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FvsEEbCz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7bcor2fy4za3ybf6o7v4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FvsEEbCz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7bcor2fy4za3ybf6o7v4.png" alt="QA value scale"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram shows risk quadrants as mission importance and deployment complexity increase. There is a green, low-risk quadrant for delivery without QA. There are yellow caution areas where you should use your discretion. Finally, the red quadrant illustrates high-risk, high-value work that benefits from QA.&lt;/p&gt;

&lt;h3&gt;
  
  
  Move QA into peer review
&lt;/h3&gt;

&lt;p&gt;You can collapse the peer review and verification steps into one. Have QA engineers verify a developer's work &lt;strong&gt;on the pull request&lt;/strong&gt;. This still impacts Cycle Time. The upside is that it allows us to continue trunk-based development. When we merge the pull request it's ready for deployment. There's no need for valueless release management via git cherry-picking. QA has verified the work and trunk is in a deployable state.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quality Auditor
&lt;/h3&gt;

&lt;p&gt;Change the point where QA verifies quality. The QA engineer monitors the product in production. They create issues at the beginning of the delivery pipeline. This is in contrast to a verification step in the middle of the pipeline. The benefit of this approach is that QA doesn't affect Cycle Time. Developers and QA engineers don't get bogged down in hashing out edge cases mid-development. This keeps work-in-progress to a minimum. However, production stability may suffer. The hope is that auditors will catch issues and maintain quality over the long term.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automate 🤖
&lt;/h3&gt;

&lt;p&gt;Write and trust your tests. Software developers become responsible for coding their QA companion. We'll make mistakes. The goal is to never make the same mistake twice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Self Reflection
&lt;/h2&gt;

&lt;p&gt;Examine where your delivery process could improve. Do you have an increasing backlog of features? Try increasing your throughput. Do you have an increasing backlog of bugs? Try increasing your stability. In addition, examine your company's road map. Is your team accomplishing long term plans? Of your 2018 "big ideas" how many were achieved? If plans are unrealized, ask yourself whether safety is worth the price of your ambitions. Quoting Niccolò Machiavelli's &lt;em&gt;The Prince&lt;/em&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All courses of action are risky, so prudence is not in avoiding danger (it's impossible), but calculating risk and acting decisively. Make mistakes of ambition and not mistakes of sloth. Develop the strength to do bold things, not the strength to suffer.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>cd</category>
      <category>devops</category>
      <category>agile</category>
    </item>
    <item>
      <title>Delivery Metrics Worth Tracking</title>
      <dc:creator>Accelerate Delivery</dc:creator>
      <pubDate>Mon, 17 Sep 2018 13:19:20 +0000</pubDate>
      <link>https://dev.to/acceldelivery/delivery-metrics-worth-tracking-4oad</link>
      <guid>https://dev.to/acceldelivery/delivery-metrics-worth-tracking-4oad</guid>
      <description>&lt;p&gt;&lt;em&gt;This was originally posted at the &lt;a href="https://accelerate.delivery/delivery-metrics/"&gt;Accelerate Delivery&lt;/a&gt; blog&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Metrics That Matter
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Engineering is stuck in firefighting mode.&lt;/p&gt;

&lt;p&gt;We have a never-ending backlog.&lt;/p&gt;

&lt;p&gt;We're concerned with the quality of the product.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've paraphrased a few concerns that I've encountered as an engineering manager at PBS. In April 2018 we changed our approach to how we develop software. We started tracking metrics on throughput and stability. A few members of my team read the excellent book, &lt;a href="https://www.goodreads.com/book/show/35747076-accelerate"&gt;Accelerate&lt;/a&gt;, which presents the metrics that have had a profound impact on product delivery.&lt;/p&gt;

&lt;p&gt;The PBS engineering team has had other ideas about how to improve throughput and stability for products like pbs.org. Here's a brief history of how we came to change our approach. A problem our software developers face is that there is an endless amount of work to be done. PBS is a non-profit organization; we can't always have more engineers. Budget and headcount are tough to come by. When we couldn't get more engineers to do the work, we considered our options. The first option we chose was to do less work. We ignored some products during sprints to focus on others. Work got done, but we lived in the shadow of a looming backlog. Some weeks we resolved more issues than we created. Some weeks... After months of this approach we changed tack. If we can't add more people, and we can't do less work, what could we change?&lt;/p&gt;

&lt;p&gt;We made an effort to use the engineers we have more efficiently. We did this by tracking software delivery performance. My team could make regular decisions at the issue level. They could optimize each issue for stability and throughput.  These small decisions led to a noticeable impact. Here's a chart of the issues we've created vs. resolved:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wahFs-S0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/fc7sw7z074koyed9xrzk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wahFs-S0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/fc7sw7z074koyed9xrzk.png" alt="Created vs. Resolved"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the change in April? That growing green area represents progress! We track four metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployment Frequency&lt;/li&gt;
&lt;li&gt;Change Failure Rate&lt;/li&gt;
&lt;li&gt;Mean Time To Recovery&lt;/li&gt;
&lt;li&gt;Cycle Time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'd like to explore each metric in depth. Also, I'll comment on how I've noticed it make an impact on our development effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment Frequency
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;How often we deploy a product&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A deployment should be a non-event. Deployments shouldn't require careful scrutiny or manual monitoring. They should feel like a byproduct of writing code. Ideally a deployment is a single incremental change. By deploying as frequently as possible we incentivize several productive habits.&lt;/p&gt;

&lt;p&gt;First, we make production deployments easy. By measuring the last mile of product development, the time and effort it takes to deploy, we encourage automation. Deployments should be as simple as pushing a button. That button should be accessible and understood by many people. We used to have a "release manager" role. This person knew the proper incantations to transmute a git repository into a production service. Now any engineer can do a deployment, and does several times a day. &lt;/p&gt;

&lt;p&gt;Second, we push the deployment button as often as possible. Talk about gamification: you get a point every time you push the button. We're motivated to deploy every pull request that makes it through review. &lt;a href="https://dev.to/trunk-development-feature-flags/"&gt;Feature flagging makes this even easier&lt;/a&gt;. If a pull request represents a partially completed effort, we put it behind a feature flag. The change remains incremental. The work is available on production and may be useful to some users.&lt;/p&gt;

&lt;p&gt;Finally, we keep batch sizes small. Changes don't build up with regular deployments. Encouraging frequent deployments has the beneficial result of keeping your change set small. That means if something does go wrong with a release it should be easy to home in on the affected code. It can also limit the impact of a change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Change Failure Rate (CFR)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The percentage of deployments we have to rollback&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This metric goes hand-in-hand with "Deployment Frequency". Where that metric encourages throughput, "Change Failure Rate" encourages stability.  This metric is the indicator that you're going too fast.&lt;/p&gt;

&lt;p&gt;I've used this metric to encourage my team to slow down if we notice an uptick in rollbacks. It's like a tachometer though. I argue that a sustained 0% CFR means you're moving too slowly. I want to red-line deployments a bit.  It's worth a rollback to test the boundary of how fast you can go. The boundary is different for different teams and products. For example, I'm more comfortable with &amp;gt;0% CFR in web development. Rollbacks are cheap and quick on the web. I'm risk-averse to rollbacks when it comes to app development. Rollbacks with packaged software are difficult and slow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mean Time To Recovery (MTTR)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Hours elapsed from the start of a service failure to system recovery&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;From my experience MTTR lends a sense of urgency to stability. When our system fails my team knows the clock is ticking. The benefits are twofold:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We know the top priority is to return to stability.&lt;/li&gt;
&lt;li&gt;We maintain trust with our peers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Trust is essential to any software development organization. Our peers know that we're taking the issue seriously. They can follow along in a chat channel and pull the latest information.&lt;/p&gt;

&lt;p&gt;After watching MTTR, we changed the way we handle recovery events. We examined our incident response process and have taken steps to formalize it. We're using a light version of the process laid out by the &lt;a href="https://landing.google.com/sre/book/chapters/managing-incidents.html"&gt;Site Reliability Engineering book&lt;/a&gt;. Our process used to be a general miasma of panic. Developers didn't know if they were supposed to be involved in the incident or not. The root cause analysis and remediation measures weren't publicly available or coordinated. Stakeholders would get infrequent updates on the situation. After adopting our process our recoveries are more orderly. We don't have any MTTR metrics from before we adopted an incident response process. Over time I'd like to calculate the trend of our response times.&lt;/p&gt;

&lt;p&gt;Another process we adopted after watching MTTR is the blameless postmortem. Postmortems analyze the cause of a problem and how to avoid it in the future. They are a useful way to share perspective and knowledge. Each time, we copy a postmortem template that we crafted from a few articles: &lt;a href="http://code.hootsuite.com/blameless-post-mortems/"&gt;Hootsuite's 5 Whys&lt;/a&gt; and &lt;a href="https://codeascraft.com/2012/05/22/blameless-postmortems/"&gt;Blameless PostMortems and a Just Culture&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cycle Time &lt;a href="https://dev.to/delivery-metrics/#note"&gt;*&lt;/a&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Minutes it takes a task to start in development and end in production&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is my favorite metric. If I had to watch one number to gauge software delivery health it would be this one. Cycle time is the time it takes to get past all the hurdles you encounter in software development. Upstream dependencies, gaps in testing, acceptance review, and deployment pain are all captured in one number. At PBS we've reacted to this metric in a few ways.&lt;/p&gt;

&lt;p&gt;First, we scrutinize issues for upstream blockers before we start work. We're trying to minimize the amount of work we have in progress. We're destined for a stalled effort if we begin development and find that we need something from a supporting service. It's more efficient to identify the blocker upfront before we write any code.&lt;/p&gt;

&lt;p&gt;Second, Cycle Time incentivizes us to keep tasks small. A small task is quicker to develop and review. The risk of the change is low so we deploy it more quickly too. Feature flags work well to keep tasks small. We've adopted feature flags at PBS with great success. We recognize that feature flags are tech debt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They add some complexity to our code.&lt;/li&gt;
&lt;li&gt;We have to remove the flag eventually.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The benefits outweigh the drawbacks for us though. I love reviewing a pull request that affects three files with a feature flag. I cringe at reviewing a PR that represents the full feature, affects 15 files, and carries significantly more risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backlog - 0̸
&lt;/h2&gt;

&lt;p&gt;The engineers who have adopted &lt;a href="https://www.goodreads.com/book/show/35747076-accelerate"&gt;Accelerate&lt;/a&gt;'s metrics at PBS have noticed a positive trend. Once we measured our software delivery performance, we could improve it. There have been benefits outside of numbers and graphs. Work has been more fun! Some people hear "continuous delivery" and think it sounds exhausting. They think it means a never-ending, scrutinized effort. Engineers can imagine a boss calling them into their office to talk about why their Cycle Time went up 10 minutes yesterday. To me, nothing could be further from the truth. Our engineering effort sees the light of day as quickly as possible. We're shipping daily. The backlog is diminishing. We're creating a product we're proud to work on.&lt;/p&gt;

&lt;p id="note"&gt;
  * It's worth noting that Accelerate refers to my "Cycle Time" as "Lead Time". From what &lt;a href="https://www.agilealliance.org/glossary/lead-time/"&gt;I've read&lt;/a&gt; "Lead Time" is a longer metric. It represents the time from when a requirement is identified, or an issue is put in the backlog, until that issue is completed. There may be some dispute here. A few people were more comfortable with "Cycle Time" at PBS and that's what we stuck with.
&lt;/p&gt;

</description>
      <category>devops</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Feature Flags in JavaScript with Vanna</title>
      <dc:creator>Accelerate Delivery</dc:creator>
      <pubDate>Thu, 06 Sep 2018 13:14:43 +0000</pubDate>
      <link>https://dev.to/acceldelivery/feature-flags-in-javascript-withvanna-20ng</link>
      <guid>https://dev.to/acceldelivery/feature-flags-in-javascript-withvanna-20ng</guid>
      <description>&lt;h2&gt;
  
  
  I See a Red Button
&lt;/h2&gt;

&lt;p&gt;Vanna is an open source feature flagging library written and used at &lt;a href="https://www.pbs.org"&gt;PBS&lt;/a&gt;. Let's dive into the &lt;a href="https://github.com/pbs/vanna-js-client"&gt;JavaScript client&lt;/a&gt;. To set up our tutorial, a story.&lt;/p&gt;

&lt;p&gt;Mick is a front-end developer. The design team asked Mick to change the color of a red button to black. Product Management isn't ready to go all-in on the black button. Design and Product Management ask our resourceful engineer if there's a way to hedge their bets. They want to show the experimental black button to a small group of users. Mick smiles and puts on his shades. 😎&lt;/p&gt;

&lt;p&gt;Here's a brief example of how Vanna lets you do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// 👇 An instance of vanna client - implementation to come&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;features&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;app/features&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;paintItBlack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;features&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;variation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;paint-it-black&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;paintItBlack&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Render experimental black button&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Render red button&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is feature flagging at its simplest. With feature flagging you can &lt;a href="https://accelerate.delivery/trunk-development-feature-flags/"&gt;merge to the trunk&lt;/a&gt; more frequently. You can reduce risk by limiting new, volatile code to a subset of users. Vanna lets you do this in a way that's controlled outside of your application code. This unlocks another trait of continuous delivery.&lt;/p&gt;

&lt;p&gt;A desirable goal of continuous delivery is to decouple deployments from releases. Deployments are the act of moving code to a server. Releases are the act of making code paths available to users. You can read more in &lt;a href="https://hackernoon.com/decouple-deployment-from-release-b4b9182b6a46"&gt;this Hacker Noon article&lt;/a&gt;. To decouple releases from deployments, Vanna receives its features from a JSON response. This allows us to update feature availability without doing a code deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Features
&lt;/h2&gt;

&lt;p&gt;Let's dive into the shape of the feature response. The response looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;features&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;paint-it-black&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;slug&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;i-want-to-paint-it-black&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;enabled&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;targetSegment&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;alpha-tester&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The feature response contains any number of feature objects. In our sample there is one feature, &lt;code&gt;"paint-it-black"&lt;/code&gt;. The feature has three properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;"slug"&lt;/code&gt; - This names the feature. It's useful for feature
identification when you're only given the feature values. We'll use it for
overriding feature availability in our advanced example.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"enabled"&lt;/code&gt; - This key makes the feature available. Think of it as
the master circuit breaker. If this is &lt;code&gt;false&lt;/code&gt;, the feature will
be off for everyone.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"targetSegment"&lt;/code&gt; - Features target users. You make a feature
available to groups of users with this key. We'll see how users identify as a
&lt;code&gt;userSegment&lt;/code&gt; when we instantiate a &lt;code&gt;new VannaClient&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
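
&lt;p&gt;The rules above can be boiled down to a small predicate. This is an illustrative sketch of the behavior as described, not the library's actual implementation:&lt;/p&gt;

```javascript
// Illustrative sketch of the availability rules described above;
// see the vanna-js-client source for the real implementation.
function isFeatureOn(feature, userSegment) {
  if (!feature.enabled) {
    return false; // master circuit breaker: off for everyone
  }
  return feature.targetSegment.indexOf(userSegment) !== -1;
}

const paintItBlack = {
  slug: "i-want-to-paint-it-black",
  enabled: true,
  targetSegment: ["alpha-tester"],
};

isFeatureOn(paintItBlack, "alpha-tester"); // true
isFeatureOn(paintItBlack, "regular"); // false
```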

&lt;p&gt;There is no console interface to create this JSON response at the moment. Right now we'll write the JSON by hand and make it accessible through a CDN. An admin interface and API service to create this response is a future enhancement. Hand-crafting the JSON was the smallest step we could take towards developing the Vanna library. Taking this MVP approach makes it easier for us to experiment and iterate.&lt;/p&gt;
&lt;h2&gt;
  
  
  Using vanna-js
&lt;/h2&gt;

&lt;p&gt;In our simple example we assumed the availability of the client library. Let's implement it.&lt;/p&gt;

&lt;p&gt;We'll set &lt;code&gt;userSegment&lt;/code&gt; based on the presence of a cookie. See our &lt;a href="https://accelerate.delivery/setting-cookie-flags-django/"&gt;previous post&lt;/a&gt; on setting cookies for feature flags.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/features.js&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;VannaClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@pbs/vanna&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Cookies&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;js-cookie&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isAlphaTester&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Cookies&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;alpha-tester&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;VannaClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;                                              
  &lt;span class="na"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://cdn.com/features.json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;                    
  &lt;span class="na"&gt;userSegment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="nx"&gt;isAlphaTester&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;alpha-tester&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;regular&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;           
  &lt;span class="na"&gt;fallbacks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;                                                                
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;paint-it-black&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;                                              
  &lt;span class="p"&gt;}&lt;/span&gt;                                                                          
&lt;span class="p"&gt;});&lt;/span&gt;                                                                           
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you instantiate a &lt;code&gt;new VannaClient&lt;/code&gt; you're responsible for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;uri&lt;/code&gt; - This is the location of the JSON feature control response.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;userSegment&lt;/code&gt; - This is the user's group. Vanna enables the feature for this
user on a match to an enabled &lt;code&gt;"targetSegment"&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fallbacks&lt;/code&gt; - This sets the default behavior for feature flags. Note that a
fallback must be set for every feature in the JSON response.&lt;/li&gt;
&lt;/ul&gt;
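
&lt;p&gt;Fallbacks matter most when the feature response never arrives. Here's a hedged sketch of the role described above (&lt;code&gt;variationWithFallback&lt;/code&gt; is a hypothetical helper, not part of the Vanna API, and the real client may differ in details):&lt;/p&gt;

```javascript
// Hedged sketch of the fallback behavior described above. If the feature
// response never arrived (network failure, bad JSON) or the feature is
// unknown, the declared fallback answers instead.
function variationWithFallback(features, fallbacks, slug, userSegment) {
  const feature = features ? features[slug] : undefined;
  if (!feature) {
    return fallbacks[slug]; // no data: default behavior wins
  }
  if (!feature.enabled) {
    return false; // master circuit breaker
  }
  return feature.targetSegment.indexOf(userSegment) !== -1;
}

// Feature response failed to load: the fallback decides.
variationWithFallback(null, { "paint-it-black": false }, "paint-it-black", "regular");
// returns false
```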

&lt;p&gt;We can now use Vanna to finish our task. In our initial example we created a boolean to split our code path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;paintItBlack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;features&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;variation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;paint-it-black&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Vanna's &lt;code&gt;variation()&lt;/code&gt; method takes the feature's &lt;code&gt;"targetSegment"&lt;/code&gt; and the client's &lt;code&gt;userSegment&lt;/code&gt; into consideration. On a match between the two, the method returns &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With this tutorial you can use Vanna as a feature flagging library. You can decouple deployments from releases. You can ship software more quickly with a lower risk. Using Vanna in this way for feature flagging is perfect for simple use cases. Advanced options are available for power users who need more customization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overriding Variations
&lt;/h2&gt;

&lt;p&gt;Controlling features with a single &lt;code&gt;userSegment&lt;/code&gt; seems coarse. What if we want finer control? What if I want to enable a specific feature regardless of my &lt;code&gt;userSegment&lt;/code&gt;? The Vanna client allows you to override variation eligibility. We can extend our previous &lt;a href="https://accelerate.delivery/toggle-django-views-with-cookie-flags/"&gt;post&lt;/a&gt; about toggling flags with feature-specific cookies. We'll allow Vanna to opt in to a feature based on the presence of named cookies. The following highlighted blocks show how we can add to our previous Vanna client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/features.js&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;_&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lodash&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;VannaClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;getFeatureVariation&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@pbs/vanna&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Cookies&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;js-cookie&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;getVariationOverride&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;featureSlug&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;featureKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`feature:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;featureSlug&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;overrideValue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Cookies&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;featureKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;overrideValue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;overrideValue&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;true&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isAlphaTester&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Cookies&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;alpha-tester&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;VannaClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;                                              
  &lt;span class="na"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://cdn.com/features.json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;                    
  &lt;span class="na"&gt;userSegment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="nx"&gt;isAlphaTester&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;alpha-tester&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;regular&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;           
  &lt;span class="na"&gt;fallbacks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;                                                                
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;paint-it-black&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;                                              
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;_overrides&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;getFeatureVariation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;feature&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;userSegment&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;variation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getFeatureVariation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;feature&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;userSegment&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;overrideVariation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getVariationOverride&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;feature&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;slug&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isUndefined&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;overrideVariation&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;variation&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;overrideVariation&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;                                                                           
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this additional code a user can opt in to features that are not part of their &lt;code&gt;userSegment&lt;/code&gt;. In our example, if a user doesn't have the &lt;code&gt;"alpha-tester"&lt;/code&gt; cookie but does have a &lt;code&gt;"feature:i-want-to-paint-it-black"&lt;/code&gt; cookie, they will see the black button. The opposite also works: an &lt;code&gt;"alpha-tester"&lt;/code&gt; can opt out of a feature by setting the named cookie to &lt;code&gt;"false"&lt;/code&gt;. This variation override allows finer control over feature availability. We've used cookies to override the variation, but you could use local storage or anything else available in JavaScript.&lt;/p&gt;
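
&lt;p&gt;The precedence rule is worth spelling out: an explicit override wins, and only an absent override defers to segment targeting. A tiny sketch (&lt;code&gt;resolveVariation&lt;/code&gt; is illustrative, not part of the Vanna API):&lt;/p&gt;

```javascript
// Illustrative sketch of the override precedence described above: an
// explicit cookie override beats userSegment targeting; an absent override
// (undefined) lets the segment-based variation stand.
function resolveVariation(segmentVariation, overrideVariation) {
  return overrideVariation === undefined ? segmentVariation : overrideVariation;
}

resolveVariation(false, true);      // regular user opted in: true
resolveVariation(true, false);      // alpha-tester opted out: false
resolveVariation(false, undefined); // no override cookie: segment decides, false
```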

&lt;p&gt;The &lt;a href="https://github.com/pbs/vanna-js-client"&gt;vanna-js-client&lt;/a&gt; is an open source project. Please check out the brief, readable source code. It's a lightweight way to add feature flags to your JS project.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
