<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: iedaddy</title>
    <description>The latest articles on DEV Community by iedaddy (@iedaddy).</description>
    <link>https://dev.to/iedaddy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F40628%2F2ef02e7e-fd0f-4661-9982-e52e5f897341.jpg</url>
      <title>DEV Community: iedaddy</title>
      <link>https://dev.to/iedaddy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iedaddy"/>
    <language>en</language>
    <item>
      <title>Is There Such a Thing As Good Technical Debt?</title>
      <dc:creator>iedaddy</dc:creator>
      <pubDate>Thu, 01 Mar 2018 22:42:54 +0000</pubDate>
      <link>https://dev.to/iedaddy/is-there-such-a-thing-as-good-technical-debt-490b</link>
      <guid>https://dev.to/iedaddy/is-there-such-a-thing-as-good-technical-debt-490b</guid>
      <description>&lt;h1&gt;
  
  
  &lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/Technical_Debt.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2FTechnical_Debt-150x150.jpg"&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;I like the term “Technical Debt” because it is an easy metaphor for the average business owner to understand and put into real terms. A particular benefit of the debt metaphor is that it’s very handy for communicating with non-technical people. Just like financial debt, technical debt incurs interest payments, which come in the form of the extra effort we must put into future development because of the choices we make now.&lt;/p&gt;

&lt;p&gt;Technical debt isn’t always bad. Just as a business may borrow and incur debt to take advantage of a market opportunity, developers may incur technical debt to hit an important deadline or get a particular feature to market faster than if they had “done it right the first time.” There may also be prudent debt within a system: debt the team recognizes may not be worth paying down because the interest payments are sufficiently small, such as in portions of the system that are rarely updated or touched by development. We may not need to care about comment density, complexity, or refactoring if that sub-system is never going to receive feature updates. The tricky thing about technical debt is that, unlike money, it’s often difficult to measure how it will impact your future velocity, and in some cases it may never need to be paid off at all. Each type of technical debt must be weighed against the specific system and its lifecycle.&lt;/p&gt;

&lt;p&gt;Technical debt comes from various sources, some good and some bad, but the idea behind the metaphor is that there is a cost associated with taking shortcuts, making mistakes, or making deliberate trade-offs, and that the cost of not dealing with these issues will increase over time.&lt;br&gt;&lt;br&gt;
It’s no secret that I’m a big fan of SonarQube ( &lt;a href="https://www.sonarqube.org" rel="noopener noreferrer"&gt;https://www.sonarqube.org&lt;/a&gt; ), an open source dashboard for managing code quality. It estimates the technical debt in a code base from static code analysis findings (maintainability issues it calls “code smells”), along with code coverage of automated tests, code complexity, duplication, violations of coding practices, comment density, and adherence to basic coding standards.&lt;/p&gt;

&lt;p&gt;And while it does a good job of reporting the technical debt it can detect through code analysis, what do the numbers really mean to the business? This is where we get into the fuzziness of technical debt and how it impacts the long-term vitality of a project. When I think about the biggest cost of technical debt, it usually revolves around how the designs or code implemented today may slow down our ability to deliver future features, creating an opportunity cost in lost revenue.&lt;br&gt;&lt;br&gt;
Keeping this in mind, it’s important to identify, for each specific project, the impact that these different kinds of technical debt have. It is by evaluating each type of technical debt that has the potential to hurt, and figuring out when there is too much of a certain type, that we can start to manage it intelligently.&lt;br&gt;&lt;br&gt;
Evaluating the different kinds of technical debt in a project, and how much they might cost you, involves a fuzzier approach than just reviewing the SonarQube dashboard. Here are some of the categories I like to group by when discussing different types of technical debt and the interest we may end up paying on them:&lt;/p&gt;

&lt;h2&gt;
  
  
  Different Types of Technical Debt
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Architectural ($$$$)&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/leaning-tower-of-technical-debt.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fleaning-tower-of-technical-debt-150x150.jpg"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;If you’re building out a system where a key component or platform is fundamentally flawed, so that it’s not scalable or reliable, this can be a huge problem that you may not even recognize until real customers are running on your product. If you can’t scale out your architecture the way you need to because of core dependency problems or incorrect assumptions about how your customers will be using your system, you will have no choice but to rewrite or retool huge chunks of the system.&lt;br&gt;&lt;br&gt;
A good example of this is the game Star Citizen and the choice to build it on the CryEngine platform (&lt;a href="https://www.extremetech.com/gaming/237434-star-citizen-single-player-delayed-indefinitely" rel="noopener noreferrer"&gt;https://www.extremetech.com/gaming/237434-star-citizen-single-player-delayed-indefinitely&lt;/a&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  Fragile Code ($$$)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/fragile_code.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Ffragile_code-150x150.jpg"&gt;&lt;/a&gt;In every large system, there are always a couple of modules that seem to give developers the most problems. These are the sub-systems or components whose code is hard to understand and expensive and dangerous to change, either because it was poorly written to begin with or because it uses extremely outdated technology. Because these sub-systems are so fragile, no developer wants to touch them, and when they do it’s usually to apply a very specific fix for their situation and then move on. Because these short-sighted fixes accumulate over time, the problem only gets worse. These fragile components need to be identified and evaluated for a complete rewrite to ‘bullet-proof’ them, or they will continue to be an expensive debt on the project’s ledger.&lt;/p&gt;

&lt;h3&gt;
  
  
  Untestable or Undertested Code ($$$)&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/danger-untested-software-ahead.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fdanger-untested-software-ahead-150x150.jpg"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Writing unit tests takes time. It also requires that developers write their code so that it can be unit tested. A developer writing code with unit testing in mind tends to break functionality up into small, atomic components that make testing easy. If your system has monolithic functions that don’t automate well, and you choose not to take the time to refactor them, you end up with tests that are brittle and slow and that keep falling apart whenever you change the code. This causes your testing expenses to increase over time as additional options and features are added to the code base. Even worse is when brittle automated tests are ignored on failure because “it always fails anyway.” This can lead to an increase in manual and exploratory testing costs, as well as additional costs in unplanned work when code comes back with a slew of bug reports that could have been avoided with proper automated testing in place.&lt;/p&gt;
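&lt;p&gt;The small-atomic-components idea can be sketched in a few lines. This is an illustrative example only (the function, rates, and tests are hypothetical, not from any particular system): a small, pure function is trivial to unit test, where a monolithic routine that mixes I/O, parsing, and business rules is not.&lt;/p&gt;

```python
import unittest

def apply_discount(price, rate):
    """Pure, atomic business rule: easy to unit test in isolation."""
    if rate not in (0.0, 0.1, 0.25):
        raise ValueError("unsupported discount rate")
    return round(price * (1.0 - rate), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent(self):
        self.assertEqual(apply_discount(100.0, 0.1), 90.0)

    def test_bad_rate_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 0.5)
```

&lt;p&gt;Because the function touches no database or network, these tests stay fast and stable rather than brittle.&lt;/p&gt;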

&lt;h3&gt;
  
  
  No Automated Deployment ($$)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/easy-automated-deployment.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Feasy-automated-deployment-150x150.png"&gt;&lt;/a&gt;I’m including this under technical debt because we pay “interest” on it with every release, in terms of both man-hours and inherent risk. This is one of those hidden costs that nobody seems to think about until you actually sit down and review how it’s impacting not only your releases, but also your development cadence. Manual release processes are inherently error prone, so each release ends up being an all-hands-on-deck scenario “just in case.” These costs keep adding up over time: late nights, time taken out of the development team’s normal cycle to prepare for a release, and lost productivity during the current cycle. Is the cost of automating a deployment more expensive than scheduling one manual release? Probably. But automation pays huge dividends on each subsequent release of the product and probably has one of the best long-term ROIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Black Box Code ($$)&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/black-box-magic.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fblack-box-magic-150x150.png"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This is the code that just works, written by some long-lost Jedi Code master who has since left the company or retired. We all know it works and we see it working within our systems, but nobody can explain why it works the way that it does. This is also a really tricky area because the business may decide that it’s OK to carry this technical debt on the project ledger because there are no plans to change any of the functionality that this Black Box is responsible for. And that’s fine. Until something changes. And it doesn’t work. This type of technical debt is like those mortgages with the huge balloon payment at the end. You can get away with not paying it down for years and then either bite the bullet and pay it off or if your product reaches end-of-life you may be able to retire the system without ever having to pay it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Outdated Libraries ($-$$$$)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/outdated-libraries.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Foutdated-libraries-150x150.jpg"&gt;&lt;/a&gt;This can be a small amount of technical debt or a huge one, because it has to be evaluated on the risk it presents to the business. This is especially important when outdated libraries contain newly discovered security flaws, like the outdated Struts library that was exploited in the Equifax breach ( &lt;a href="https://www.cyberscoop.com/equifax-breach-apache-struts-fbi-investigation" rel="noopener noreferrer"&gt;https://www.cyberscoop.com/equifax-breach-apache-struts-fbi-investigation&lt;/a&gt; ) or the Heartbleed vulnerability ( &lt;a href="http://heartbleed.com/" rel="noopener noreferrer"&gt;http://heartbleed.com/&lt;/a&gt; ). An outdated library may be considered a small amount of technical debt until it isn’t, and then it becomes an all-hands-on-deck remediation exercise to get your systems patched before they are exploited.&lt;/p&gt;

&lt;h3&gt;
  
  
  Poor Error Handling and Instrumentation ($$-$$$)&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/poor-error-handling.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fpoor-error-handling-150x150.jpg"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;If you don’t have proper error handling in your code, it’s hard to troubleshoot when something goes wrong. Even worse, you may not notice that certain sub-systems are erroring out at all unless you have a way of instrumenting those errors and performance issues. When the system isn’t working the way it should, it’s difficult to pinpoint the root cause without instrumentation that builds windows into your systems’ processes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Copy-and-paste code ($$)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/copy-paste.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fcopy-paste-150x150.jpg"&gt;&lt;/a&gt;When code works, it works; and SonarQube does a pretty decent job of finding duplicate code blocks through its static code analysis. Still, we often end up with many slightly different variations of code structures that developers have cut, pasted, and slightly modified over the iterations in order to get code into production. We always tell ourselves, “At some point I can go back and parameterize the code to consolidate and refactor the functions.” But that time is rarely budgeted during the project’s iterations, and the debt continues to pile up. Any change to how the code works now requires the developer to remember where the multiple copies live and make the same updates over and over again. If it’s only a few spots it may not be a big deal, but ignoring the problem makes each additional update more expensive over the life of the project.&lt;/p&gt;
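&lt;p&gt;A minimal sketch of the parameterize-and-consolidate step (the report formats and names here are hypothetical): three copy-pasted formatters collapse into one function, so any future change lands in exactly one place.&lt;/p&gt;

```python
def format_report(rows, delimiter=",", header=None):
    """One implementation replacing the old csv/tsv/pipe copies."""
    lines = []
    if header is not None:
        lines.append(delimiter.join(header))
    for row in rows:
        lines.append(delimiter.join(str(cell) for cell in row))
    return "\n".join(lines)

# The per-format duplicates become plain parameterized calls:
csv_report = format_report([[1, 2], [3, 4]], delimiter=",")
tsv_report = format_report([[1, 2], [3, 4]], delimiter="\t")
pipe_report = format_report([[1, 2], [3, 4]], delimiter="|")
```

&lt;p&gt;A bug fix or a new column rule now touches one function instead of three near-identical ones.&lt;/p&gt;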

&lt;h3&gt;
  
  
  Inconsistent Programming Practices ($-$$)&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/noconsistency.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fnoconsistency-150x150.jpg"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Sometimes it’s easy to tell who wrote which portion of the system just by reviewing the code. One developer may always use one particular pattern over another, or create wrappers around certain modules in a very specific way that makes their usage different from how another developer instantiates that code, or name variables in a particular style. This may go unnoticed in small teams, but the more developers involved in updating a system, the more complex the problem becomes and the harder the code is to hand off. Code should be as developer-neutral as possible; ideally, the only way to tell who wrote a particular line is by reviewing the check-in logs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backwards Compatibility ($-$$)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/backwards-compatible.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fbackwards-compatible-150x150.png"&gt;&lt;/a&gt;Usually this is a necessary debt that should be carried on a short-term basis. You’re going to want to maintain some sort of compatibility with the previous version. But what about the version before that one, or the one before that? The further you go to maintain backwards (or forward) compatibility of your systems the greater the cost to maintain and test all the compatibility scenarios that your system can handle.&lt;/p&gt;

&lt;h3&gt;
  
  
  Inefficient Design ($)&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/streamlining.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fstreamlining-150x150.jpg"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;In today’s day and age, hardware is cheap. Sometimes you can get away with wasteful practices by throwing hardware at the problem, and it will go away for a while. This can enable lazy programming practices, where inefficient memory usage or processing doesn’t surface during initial rollouts. As you scale out, though, your compute needs will grow and these problems will start to surface.&lt;/p&gt;

&lt;h3&gt;
  
  
  Magic Numbers ($)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/magic-number.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fmagic-number-150x150.jpg"&gt;&lt;/a&gt;In general, magic numbers are unexplained literal values, often occurring in multiple places, that should preferably be replaced by named constants. They are low-hanging fruit in that they can easily be replaced, but while they remain they make it difficult for coders less familiar with the system to get up to speed on the how and why of a particular value’s use. Replacing these values with named constants gives the code more descriptive identifiers and makes the surrounding blocks easier to understand as a whole.&lt;/p&gt;
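&lt;p&gt;A minimal sketch (the constants, values, and retry helper are all hypothetical): instead of a bare 3 and a bare delay buried in a call site, named constants make the retry policy readable at a glance.&lt;/p&gt;

```python
import time

MAX_RETRIES = 3            # was a bare "3" buried in the call site
RETRY_DELAY_SECONDS = 0.0  # was another magic number; zero keeps the demo instant

def call_with_retry(func, retries=MAX_RETRIES, delay=RETRY_DELAY_SECONDS):
    """Retry a flaky callable; the constants document intent at a glance."""
    last_error = None
    for _ in range(retries):
        try:
            return func()
        except RuntimeError as err:
            last_error = err
            time.sleep(delay)
    raise last_error

attempts = []
def flaky():
    """Fails twice, then succeeds on the third attempt."""
    attempts.append(1)
    if len(attempts) == 3:
        return "ok"
    raise RuntimeError("transient failure")

result = call_with_retry(flaky)
```

&lt;p&gt;Changing the retry policy now means editing one named constant, not hunting for every occurrence of a bare number.&lt;/p&gt;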

&lt;h3&gt;
  
  
  Custom Functions for Built-In Features (0-$)&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/customized-code.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fcustomized-code-150x150.jpg"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Every programmer is going to have a different level of experience with a particular framework. As they build out the functions for a system, they may end up creating particular functions that are already handled within the framework. Once that function is built (assuming it doesn’t have major bugs associated with it) it becomes a sunk cost. Sure, it’s inefficient, but as long as it’s working, then there’s not really any technical debt associated with the duplicate functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Documentation (0-$)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/documentation.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Fdocumentation-150x150.png"&gt;&lt;/a&gt;Nobody reads documentation, and any documentation that is written is usually out of date by the time it’s published. So is this really technical debt? Maybe not; it depends on the complexity of the program and why the documentation is being produced. For small projects it may be easier for a developer to just read through the code and consider it ‘self-documenting’ (assuming good commenting practices). But for larger projects, or systems that face regulatory scrutiny or audits, the documentation may be a necessary evil that must be produced.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The technical debt metaphor is useful because it gives us a model that non-technical team members can use to evaluate the choices made throughout the lifecycle of a project. There is also a useful distinction between debt that must be paid down and debt that can be carried over time.&lt;/p&gt;

&lt;p&gt;Prudent debt can be considered acceptable if the team recognizes that they are taking on that debt, and understand the trade-off of an earlier release versus the costs of paying it off. The important part of this evaluation process is that the team recognizes that they are in fact taking on these risks and weighing them against the efforts needed to remediate the issues further down the product lifecycle and plan for that eventual paying of the piper.&lt;/p&gt;

&lt;p&gt;Even the best teams will have debt to deal with as a project progresses through its lifecycle, so it’s important that team members recognize this and make a conscious choice about when to accept technical debt and when to take the time to remediate it.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/03/technical-debt-uphill.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F03%2Ftechnical-debt-uphill-300x126.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://iedaddy.com/2018/03/understanding-technical-debt/" rel="noopener noreferrer"&gt;Is There Such a Thing As Good Technical Debt?&lt;/a&gt; appeared first on &lt;a href="http://iedaddy.com" rel="noopener noreferrer"&gt;Experiences of an Inland Empire Dad&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>automation</category>
      <category>refactoring</category>
      <category>development</category>
    </item>
    <item>
      <title>Feature Flags as a Continuous Delivery Release Tool</title>
      <dc:creator>iedaddy</dc:creator>
      <pubDate>Thu, 15 Feb 2018 20:11:22 +0000</pubDate>
      <link>https://dev.to/iedaddy/feature-flags-as-a-continuous-delivery-release-tool-3gdm</link>
      <guid>https://dev.to/iedaddy/feature-flags-as-a-continuous-delivery-release-tool-3gdm</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“Are feature flags better for risk mitigation, fast feedback, hypothesis-driven development or subscription tiers?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Yes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/02/feature_toggle.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F02%2Ffeature_toggle.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feature flags can be used to enable many different behaviors within the final product. They give product owners fine-grained control over the product’s behavior and user accessibility, both throughout the development lifecycle and within the client-facing product.  They are a valuable technique for powering more effective DevOps and driving innovation throughout the development and delivery process.&lt;/p&gt;

&lt;p&gt;Common types of feature flags include:&lt;/p&gt;

&lt;h2&gt;
  
  
  Kill Switch
&lt;/h2&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/02/reb_button.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F02%2Freb_button-300x225.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A feature flag is a condition in your code: an IF/THEN between two different options.&lt;br&gt;&lt;br&gt;
At its simplest, a feature flag can be used to gate a new or risky behavior.  The great thing about a kill switch is that it can be flipped within the product separately from a deployment.&lt;/p&gt;
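&lt;p&gt;A minimal sketch of that IF/THEN, assuming the flag value is read from runtime configuration rather than hard-coded (the flag name and the two checkout flows are hypothetical):&lt;/p&gt;

```python
# In practice the flag value comes from a config service or database,
# so flipping it requires no deployment; a dict stands in for that here.
FLAGS = {"new_checkout": True}

def new_checkout_flow(cart):
    return ("new", sum(cart))

def legacy_checkout_flow(cart):
    return ("legacy", sum(cart))

def checkout(cart):
    # The kill switch: fall back to the safe legacy path if the flag
    # is off or missing entirely.
    if FLAGS.get("new_checkout", False):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)
```

&lt;p&gt;Turning the flag off reroutes every call to the legacy path immediately, with no rollback of production code.&lt;/p&gt;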

&lt;p&gt;You can turn on the new feature at any time.  If the feature doesn’t behave as expected, it’s possible to shut it off quickly.  This allows for development on other features to continue without forcing a complete rollback of your production code.&lt;/p&gt;

&lt;p&gt;With feature flagging, you’re mitigating risk by making every feature encapsulated and controlled so if a feature has problems in production, they can be turned off rather than having a deployment rolled-back.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beta Feedback / Canary Release
&lt;/h2&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/02/canary_coalmine.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F02%2Fcanary_coalmine-300x135.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For faster feedback, beyond an on/off switch, you can control at a very granular level who sees the new feature.  This allows you to expose a new feature to an advanced group of “friendly” customers who are willing to try out some new features you are developing.  These beta testers can review the feature and take it for a spin and give you great feedback that might not have been considered when building out the original use-cases for the product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hypothesis-Driven Development &amp;amp; A/B Testing
&lt;/h2&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/02/hypothesis.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F02%2Fhypothesis-300x133.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feature flags are also good for long-term access-level control.  They help prove out certain metrics: you can measure end-user behavior depending on whether or not a feature is enabled for a group of users.  For example, if you have a feature that only advanced users should access, you can use a flag to deliver a different experience to “newbie” versus “power” users.&lt;br&gt;&lt;br&gt;
You can also use feature flags to control localization.  Due to various regulations around the world, you may find that you need to enable a feature in one country and disable it in others in order to stay within that nation’s regulatory compliance.&lt;/p&gt;
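&lt;p&gt;One common way to implement the A/B group split is deterministic bucketing, sketched below (hypothetical code; commercial flag services provide this targeting for you). Hashing the user ID means each user always lands in the same variant across sessions, which keeps the measured behavior clean.&lt;/p&gt;

```python
import hashlib

def bucket(user_id):
    """Map a user ID to a stable bucket from 0 to 99."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100

def variant(user_id, rollout_percent):
    """'B' (the new experience) for users inside the rollout slice, else 'A'."""
    if bucket(user_id) in range(rollout_percent):
        return "B"
    return "A"
```

&lt;p&gt;Raising the rollout percentage grows the “B” group without moving anyone who is already in it.&lt;/p&gt;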

&lt;h2&gt;
  
  
  Subscription Plans and Permissions Toggles
&lt;/h2&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/02/subscribe-300x300.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F02%2Fsubscribe-300x300.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can bundle several flags together to form a subscription plan.  For example, you can have bronze, silver, and gold tiers, and enable the set of features each plan makes available based on the subscription tied to an account.  This is the model many “freemium” apps follow, where everyone is granted access to a base set of features while other features are reserved for customers in the paid tiers.&lt;/p&gt;
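&lt;p&gt;Bundling can be as simple as mapping tier names to sets of flags (the tier and feature names below are made up for illustration):&lt;/p&gt;

```python
# Each tier bundles the flags available to accounts on that plan.
TIER_FLAGS = {
    "free":   {"basic_reports"},
    "silver": {"basic_reports", "export_csv"},
    "gold":   {"basic_reports", "export_csv", "api_access"},
}

def has_feature(account_tier, feature):
    """Check one feature against the bundle for the account's tier."""
    return feature in TIER_FLAGS.get(account_tier, set())
```

&lt;p&gt;Moving a feature between plans is then a one-line change to the bundle, not a code change at every check site.&lt;/p&gt;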

&lt;h2&gt;
  
  
  Managing the technical debt of Feature Flags
&lt;/h2&gt;

&lt;p&gt;Feature flags have a tendency to multiply rapidly, particularly when first introduced.  Toggles need to be viewed as technical debt, and they come with a carrying cost, so it’s important to keep them from proliferating within the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/02/technical-debt.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F02%2Ftechnical-debt.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To keep the number of feature flags manageable, a team must be proactive about removing flags that are no longer needed.  Once a particular canary feature has been turned on in production and proven stable, its flag should be expired and removed.  Another technique is adding governance to your development rules by placing a limit on the number of feature flags a system is allowed to have: once the limit is reached, adding a new flag requires reviewing the current set and removing some existing flags to make room under the cap.&lt;/p&gt;
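&lt;p&gt;The cap rule itself is easy to enforce in code or in a CI check; a toy sketch under assumed values (the limit and helper are hypothetical):&lt;/p&gt;

```python
FLAG_LIMIT = 20  # the team's agreed budget for live flags

def can_add_flag(existing_flags, limit=FLAG_LIMIT):
    """True only while the current flag count is strictly under the cap."""
    # membership in range(limit) means the count is between 0 and limit - 1
    return len(existing_flags) in range(limit)
```

&lt;p&gt;Wiring a check like this into the build forces the cleanup conversation before the twenty-first flag ships.&lt;/p&gt;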

&lt;p&gt;&lt;em&gt;The post &lt;a href="http://iedaddy.com/2018/02/feature-flags-release-tool/" rel="noopener noreferrer"&gt;Feature Flags as a Continuous Delivery Release Tool&lt;/a&gt; appeared first on &lt;a href="http://iedaddy.com" rel="noopener noreferrer"&gt;Experiences of an Inland Empire Dad&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>featureflags</category>
      <category>featuretoggles</category>
    </item>
    <item>
      <title>How to Address Tactical Blockers to Strategic DevOps Transformation</title>
      <dc:creator>iedaddy</dc:creator>
      <pubDate>Tue, 16 Jan 2018 18:08:59 +0000</pubDate>
      <link>https://dev.to/iedaddy/how-to-address-tactical-blockers-to-strategic-devops-transformation-37h4</link>
      <guid>https://dev.to/iedaddy/how-to-address-tactical-blockers-to-strategic-devops-transformation-37h4</guid>
<description>&lt;p&gt;There’s been a lot of talk and metrics out there regarding companies that adopt &lt;a href="http://iedaddy.com/2016/10/implementing-devops-principles/" rel="noopener noreferrer"&gt;DevOps principles&lt;/a&gt;.  The overall synopsis is that companies who adopt these principles are generally more profitable, recover faster, &lt;a href="https://www.theregister.co.uk/2017/06/06/state_of_devops_low_performers_are_fast_but_ignore_quality/" rel="noopener noreferrer"&gt;and deploy 200 times more frequently with 2,555 times faster lead times&lt;/a&gt; than companies that don’t.&lt;/p&gt;

&lt;p&gt;Adopting DevOps seems like a great strategic decision within most management circles.  And yet, time and time again, we see enterprises that struggle with their DevOps adoptions.  This is fairly common because most people don’t seem to get that implementing DevOps is not something you can just go out and “buy,” or even fully plan for.  Planning out the adoption is important because we want the right resources in the right place.  &lt;a href="https://dev.to/iedaddy/focus-on-devops-core-tool-categories-14lj-temp-slug-5838912"&gt;Tools are useful&lt;/a&gt; because they help to enable processes, but &lt;a href="http://iedaddy.com/2017/03/devops-job-title-tools-or-process/" rel="noopener noreferrer"&gt;don’t confuse your tool with your process or your people&lt;/a&gt;.  Just because you’ve planned a strategy for DevOps adoption or &lt;a href="https://dev.to/iedaddy/focus-on-devops-core-tool-categories-14lj-temp-slug-5838912"&gt;bought an automation or collaboration tool&lt;/a&gt;, that doesn’t mean you’re doing DevOps; it just means you’re setting yourself up for success along the journey.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Plans are worthless, but planning is everything”&lt;/p&gt;

&lt;p&gt;–&lt;a href="https://en.wikipedia.org/wiki/Dwight_D._Eisenhower" rel="noopener noreferrer"&gt;Dwight Eisenhower&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We’ve been told at a strategic level that we want to move to DevOps, management has gone out and put together a great plan for the rollout, maybe they’ve even already implemented some “DevOps” tools in the enterprise.  But how do we ensure that we are able to execute on that plan from a tactical and operational level in order to make it successful?&lt;/p&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/01/OODA-loop.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fiedaddy.com%2Fwp-content%2Fuploads%2F2018%2F01%2FOODA-loop-300x300.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where the concept of OODA loops (Observe, Orient, Decide, Act) comes in.  The OODA concept was developed by &lt;a href="https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)" rel="noopener noreferrer"&gt;United States Air Force Colonel John Boyd&lt;/a&gt; and applied to the combat operations process.  It is surprisingly well suited for other situations where we are “in the trenches,” trying to become a &lt;a href="http://searchcio.techtarget.com/definition/change-agent" rel="noopener noreferrer"&gt;change agent within our organizations&lt;/a&gt;.  If you don’t think being a change agent and waging a battle have a lot in common, you’re not paying attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  Observe
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;“If we don’t communicate with the outside world to gain information for knowledge and understanding, we die out to become a non-discerning and uninteresting part of that world.”&lt;/p&gt;

&lt;p&gt;–&lt;a href="https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)" rel="noopener noreferrer"&gt;John Boyd&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The first step is always to observe the situation.  By observing and taking into account new information we create an open system and gain the information that’s important to creating mental models.  That’s not to say our observations will ever be perfect, or even right.  There are pitfalls we need to be aware of throughout our observations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We often cannot see the complete information, or the information is imperfect in some way.&lt;/li&gt;
&lt;li&gt;We receive so much data that separating the useful information from the noise becomes difficult.&lt;/li&gt;
&lt;li&gt;Our own experiences and opinions will influence how we perceive the information.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When looking to be a change agent for DevOps transformation, start by finding out where you are: gather metrics that can be benchmarked so that you know which way the needle is moving as you start your DevOps journey. Not every technique is going to work at every company, so we need a way to figure out how to fail fast while promoting the things that are working.  We all have a mental model of how things “should” be, but that’s rarely an accurate depiction of reality.  Deciding on and gathering metrics will help you get a better understanding of your operational situation.&lt;/p&gt;

&lt;p&gt;But don’t assume that having the data is enough.  What is more important is how you interpret that data to bring out the truly valuable information.  Even if you have perfect data, without judgement and understanding of the data it’s meaningless.  One of the challenges of being effective in observation is knowing what information to monitor and applying the correct filters for that data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Orientation
&lt;/h2&gt;

&lt;p&gt;This is where we build out our mental models and synthesize the data we’ve gathered into knowledge.  As we receive more information we are constantly breaking apart our old paradigms and rebuilding our mental models, creating new ones from the pieces in a continuous process.  Boyd called this “destructive deduction” and “creative induction”, using old fragments to form new mental models that more closely align with what is really happening around us.&lt;/p&gt;

&lt;p&gt;Most people are not bad decision makers; they just fail to place the information they have at hand in the proper context.  It is the context of the information that we’ve synthesized that turns it into knowledge that will lead to good decisions.&lt;/p&gt;

&lt;p&gt;Good orienting is the ability to constantly make new mental models on the fly and in the face of uncertainty and unpredictable change.  It is also a &lt;em&gt;continual process&lt;/em&gt;, as soon as you create a new model it quickly becomes outdated as the environment around you changes.  So, you must practice and have a robust toolbox of mental models that you can cultivate and grow in order to quickly assimilate information into actionable knowledge.&lt;/p&gt;

&lt;p&gt;A good anecdote that demonstrates this is the old saying, “When all you’ve got is a hammer, everything is a nail.”  This is illustrated quite well in the paradigm shift between Blockbuster and Netflix.  Blockbuster’s business model focused on hard-copy movie rentals and brick-and-mortar stores.  Netflix, when it first came on the scene, focused on DVD rentals by mail.  Netflix quickly saw the shift in consumers wanting to stream media via the internet and changed its business model.  Blockbuster, on the other hand, was slow to move away from its tried-and-true model; by the time it tried to shift, it was too late.  Orienting and changing your mental models is a constant process of destruction and construction based on the environment around you and the data you have available.&lt;/p&gt;

&lt;p&gt;You also need to have multiple mental models, because if you’ve only got one or two of them, by the very nature of human psychology you’ll warp your reality so that you think the data and information you’ve observed fit your models, and everything once again becomes a nail for your hammer.&lt;/p&gt;

&lt;p&gt;It’s not something that comes easily; as humans we tend to be creatures of habit.  But destroying and creating mental models is something that comes with practice and experience.  It does become easier, and eventually it becomes something that doesn’t require deliberate thought, just something you do.&lt;/p&gt;

&lt;p&gt;Always Be Orienting (ABO) should be a part of your daily mantra.  Make it a goal that every day you add to your mental model toolbox, examine your existing models and ask yourself if they still apply to the information you now have.&lt;/p&gt;

&lt;p&gt;In addition, not all models are equal, some work and some don’t.  Ones that work in one specific situation don’t always work well in another.  This is especially true of DevOps transformations because we have a lot of uncertain variables in the equation, especially when it comes to people.&lt;/p&gt;

&lt;p&gt;You can go out and &lt;a href="https://dzone.com/articles/devops-use-cases" rel="noopener noreferrer"&gt;read case studies&lt;/a&gt; of what has (and has not) worked for other companies on their own DevOps journeys all over the internet.  These are great ways to add to your toolbox so that you have additional models, concepts and strategies ready to implement immediately when a similar situation comes up.  Of course, since every situation is different, if those don’t work you’ll need to continue the process of orientation until you create a model that is better suited to your particular situation.&lt;/p&gt;

&lt;p&gt;And that’s why you Always need to Be Orienting: orientation turns information into knowledge, and it is knowledge that is the real predictor of making good decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decision
&lt;/h2&gt;

&lt;p&gt;When we decide, we’re selecting the model that we believe most closely matches our current information about the situation. This is our best guess at selecting a course of action, which is why it’s so important that we have many different models to choose from in our Orienting phase.  The more we have, the more we increase the likelihood of having a model that closely resembles the Observations we have on hand.&lt;/p&gt;

&lt;p&gt;Making a decision is easy if you’ve got the right mental models in place because you’re going to be able to predict what happens in the future.  The ability to predict a future outcome can be the difference between success and failure.&lt;/p&gt;

&lt;p&gt;Making better and faster choices than your opponent gives you a greater chance of successful outcomes and a decisive advantage in influencing those outcomes.  The better you become at understanding your data and turning it into knowledge that will help you predict the future outcomes, the better decisions you will be able to make.&lt;/p&gt;

&lt;p&gt;As you gain experience, eventually you’ll have observations about a situation that match up with certain proven mental models; there is no need to create and destroy a series of mental models and decide which one is most appropriate, because you’ll already know.  Having this ability to quickly orient and act is something Boyd called “Implicit Guidance and Control.”  It’s what allows you to speed up your OODA loops and make quick and successful decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Act
&lt;/h2&gt;

&lt;p&gt;No benefit is gained unless you close the loop by actually carrying out the decision and acting. Once the results of the action are observed, you start the loop again.&lt;/p&gt;

&lt;p&gt;When competing against others, you gain advantage the faster and better you can cycle through the loop.&lt;/p&gt;

&lt;h2&gt;
  
  
  Everything is Connected
&lt;/h2&gt;

&lt;p&gt;While traditionally depicted as a cycle or ring, because each decision does not happen in a vacuum, each loop is actually a set of interacting loops that are constantly interrelated, operating on and orbiting each other.  Some of the loops are going to be small, rapidly occurring iterations that occur multiple times each hour or day, while other loops have larger orbits, moving at much slower speeds that sometimes take weeks or months to see a measurable result from.&lt;/p&gt;

&lt;p&gt;Even then, just like in orbits that involve celestial bodies, the orbits themselves will act upon other loops that are in close proximity, similar to how the gravity of planets or moons will affect other entities that are within their gravity well.  Again, nothing ever happens in a vacuum and just like with planetary orbits, a rogue asteroid or other cataclysmic event can come in and disrupt your entire system.&lt;/p&gt;

&lt;p&gt;Orientation touches every aspect because it’s how we interpret a situation based on culture, experience, new information, analysis, synthesis and even heritage.  That’s one of the main arguments for including diversity in decision-making systems: it adds more models and options to the decisions you can make, which creates a better chance of overall success when you act upon those decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;So that’s OODA loops (or orbits) at a high level: it’s just a system/framework that can be used to help us with tactical (and strategic) actions to enable a desired outcome (in this case, implementing a DevOps culture within our company).  Although this model was originally a decision-making framework for combat situations, as seen here it can be adapted to other decision-making processes in life.&lt;/p&gt;

&lt;p&gt;There are a couple points that really need to be emphasized about why this system works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have the right metrics in place in order to gather meaningful data that can be translated into information.&lt;/li&gt;
&lt;li&gt;Have diverse mental models that allow you to choose the best fit for the data you’ve gathered, turning that information into knowledge.&lt;/li&gt;
&lt;li&gt;Constantly adapt to evolving situations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DevOps is a process that involves people, process and tools.  Because we inherently have a certain amount of uncertainty in that system, balancing the equation is going to be a juggling act that requires us to constantly review the system and make adjustments as we gain additional knowledge and see the outcomes of our actions.&lt;/p&gt;

&lt;p&gt;We can plan our journey at the beginning and set ourselves up for success.  But we also can’t be afraid to throw out that plan as the situation changes and adapt to new data.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Everybody has a plan until they get punched in the mouth”&lt;/p&gt;

&lt;p&gt;–&lt;a href="https://en.wikipedia.org/wiki/Mike_Tyson" rel="noopener noreferrer"&gt;Mike Tyson&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s how you react to adversity that defines you, not the adversity itself.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://iedaddy.com/2018/01/address-tactical-blockers-strategic-devops-transformation/" rel="noopener noreferrer"&gt;How to Address Tactical Blockers to Strategic DevOps Transformation&lt;/a&gt; appeared first on &lt;a href="http://iedaddy.com" rel="noopener noreferrer"&gt;Experiences of an Inland Empire Dad&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>metrics</category>
      <category>ooda</category>
    </item>
    <item>
      <title>Digital Transformation with Microsoft Collaboration Tools</title>
      <dc:creator>iedaddy</dc:creator>
      <pubDate>Fri, 05 Jan 2018 23:24:14 +0000</pubDate>
      <link>https://dev.to/iedaddy/digital-transformation-with-microsoft-collaboration-tools-1o2d</link>
      <guid>https://dev.to/iedaddy/digital-transformation-with-microsoft-collaboration-tools-1o2d</guid>
      <description>

&lt;p&gt;Just a Friday musings post, nothing too technical, but I did want to give some kudos to Microsoft for how well they’ve succeeded in the collaboration space over the last year and sort of pontificate on where SharePoint is going to fit into the future of digital transformation.  It’s no secret that I’m an off-and-on &lt;a href="https://products.office.com/en-us/sharepoint/collaboration"&gt;SharePoint&lt;/a&gt; fan; I’ve been dealing with this platform for a good chunk of my professional career and I’ve seen it evolve over the last decade, for better and for worse.  It’s been really interesting to see how they’ve shifted with their &lt;a href="https://news.microsoft.com/cloudforgood/"&gt;cloud initiative&lt;/a&gt; and completely changed the way that companies are running their business.&lt;/p&gt;

&lt;p&gt;What’s been great about this evolution is that we’re now seeing a true shift towards a collaboration suite that is capable of supporting both large and small businesses.  Even better, Microsoft seems to have really put some effort into removing a lot of the complexity of SharePoint and placing more emphasis on real document and peer collaboration features.  This means you get more out-of-the-box features and apps that enable the average end user to immediately start reaping the rewards of owning the system, and that you can foster a real citizen-developer culture within your organization, no matter the size.&lt;/p&gt;

&lt;p&gt;I think one of the major tipping points for the collaboration space was when &lt;a href="https://slack.com"&gt;Slack&lt;/a&gt; was launched, creating a de facto standard of the features and capabilities that the average end user is looking for in an enterprise social/networking tool.&lt;/p&gt;

&lt;p&gt;Microsoft really took that to heart, and you can see that with their release of Teams over a year ago with how they enabled a lot of deep integration with their entire suite of tools.  Teams ended up being the glue that Microsoft was looking for.  This was really highlighted last summer when &lt;a href="https://en.wikipedia.org/wiki/Satya_Nadella"&gt;Satya Nadella&lt;/a&gt; announced the &lt;a href="https://www.microsoft.com/microsoft-365"&gt;Microsoft 365&lt;/a&gt; initiative at the Microsoft Inspire conference.  They envisioned a product family of Office 365, Windows 10, enterprise mobility, cloud storage and security all wrapped up with a nice bow on it.&lt;/p&gt;

&lt;p&gt;Now we’re seeing these great features in both the Teams app as well as the Microsoft 365 packages, and it feels like they are pulling back from their previous Swiss Army knife approach to the collaboration space that was the &lt;a href="https://products.office.com/en-us/sharepoint/sharepoint-server"&gt;SharePoint wheel&lt;/a&gt;.  Instead of getting complex and highly customizable software platforms that do “everything” but not always in the best or most intuitive way, we’re getting more purpose-driven features delivered in a holistic cloud-based tool that lowers the entry point into the collaboration space for the end user, whether they are a small business or a large enterprise.&lt;/p&gt;

&lt;p&gt;So where does this leave SharePoint?  I think we’re going to see a fundamental shift this year where many of the concepts that have surrounded the old “team site” and community templates of SharePoint will be moved into the Teams platform for real-time peer-to-peer collaboration scenarios.  Honestly, I see a lot of features in the Teams product that I’ve always wanted in the &lt;a href="https://docs.microsoft.com/en-us/sharepoint/dev/solution-guidance/modern-experience-customizations-customize-sites"&gt;SharePoint collaboration team sites&lt;/a&gt;, or the &lt;a href="https://docs.microsoft.com/en-us/vsts/collaborate/collaborate-in-a-team-room"&gt;Team Foundation Server chat rooms&lt;/a&gt;, or &lt;a href="https://support.office.com/en-us/article/Share-an-Outlook-calendar-with-other-people-353ed2c1-3ec5-449d-8c73-6931a0adab88"&gt;clunky Exchange calendars&lt;/a&gt; &amp;amp; meetings, &lt;a href="https://www.onenote.com/"&gt;OneNote notebooks&lt;/a&gt;, etc.&lt;/p&gt;

&lt;p&gt;I think that SharePoint &lt;em&gt;as a platform&lt;/em&gt; and as the backbone to a lot of these technologies will remain strong, but a lot of the front-end, user-facing GUI features native to the SharePoint product are going to fall victim to digital transformation: your data will sit on a back-end SharePoint system “somewhere,” and your end users will use a bevy of apps to access that SharePoint backbone, with some of them never experiencing the native SharePoint UI.  I think that can be a good thing, and I’m looking forward to what 2018 holds in store for us &lt;a href="http://www.spsevents.org/"&gt;SharePoint geeks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://iedaddy.com/2018/01/digital-transformation-microsoft-collaboration-tools/"&gt;Digital Transformation with Microsoft Collaboration Tools&lt;/a&gt; appeared first on &lt;a href="http://iedaddy.com"&gt;Experiences of an Inland Empire Dad&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>microsoft</category>
      <category>microsoftoffice</category>
      <category>sharepoint</category>
      <category>technology</category>
    </item>
    <item>
      <title>DevOps – Where should we start?</title>
      <dc:creator>iedaddy</dc:creator>
      <pubDate>Thu, 04 Jan 2018 15:44:11 +0000</pubDate>
      <link>https://dev.to/iedaddy/devops--where-should-we-start-2pff</link>
      <guid>https://dev.to/iedaddy/devops--where-should-we-start-2pff</guid>
      <description>&lt;p&gt;Start with what hurts most.  Most people are looking for a checklist on how to implement DevOps, but it’s rarely that simple.  The problem is that every group is at different stages of their DevOps transformation. So, where each journey “starts” will be different for every organization.&lt;/p&gt;

&lt;p&gt;However, DevOps is also a culture that builds upon itself, with each layer building on the foundation of what’s come before, so in this regard there are building blocks that you need to have in place before you can build the next layer.  I often refer to this as the “LEGO” approach.  You have a bunch of individual building blocks, and in some organizations, you’ll have areas that are more mature with several blocks already put together into some basic structures and processes and you need to form a strategy to put them all together into one cohesive DevOps structure.&lt;/p&gt;

&lt;p&gt;Here are some of the basic building blocks that I think every organization needs in order to build a DevOps culture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metrics:&lt;/strong&gt; We have to be able to measure where we are if we want to figure out where we need to go.  Metrics will help us figure out where we are, what’s important to the organization, and how each step in the DevOps journey will help improve our process to add value for our end users.  You need to be able to measure and report on these metrics in order to reinforce positive behaviors.  Find a couple of basic measurements and put them up on a dashboard somewhere for the entire team to see, to help them focus on what you’re looking to improve.&lt;/p&gt;
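&lt;p&gt;As a minimal sketch of what watching the needle might look like (the event data and the seven-day window here are hypothetical, not a prescription), even one simple dashboard number such as deployment frequency can be computed from a list of deployment timestamps:&lt;/p&gt;

```python
from datetime import datetime, timedelta

def deployment_frequency(deploy_times, window_days=7):
    """Deployments per day over a trailing window ending at the latest deploy."""
    cutoff = max(deploy_times) - timedelta(days=window_days)
    recent = [t for t in deploy_times if t > cutoff]
    return len(recent) / window_days
```

&lt;p&gt;Tracking a number like this over successive iterations is what tells you whether the needle is moving in the right direction.&lt;/p&gt;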

&lt;p&gt;&lt;strong&gt;Source Control:&lt;/strong&gt; This is probably the most fundamental building block that a company must have in place before building out any other part of their DevOps journey.  They have to have their code in a place where it can be accessed and reviewed, and where changes can be tracked.  This is fundamental to being able to protect their code, perform check-ins and peer reviews, and get an understanding of just what exactly it is they are pushing through the delivery pipeline.&lt;/p&gt;

&lt;p&gt;This is also where we start to build out some of our other metrics.  Things like code churn and active files become easy to identify, even some of the more ambiguous metrics like lines of code are at least available to us in order to wrap our arms around just how big a project is or how much technical debt might be lurking under the covers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Work Item Tracking:&lt;/strong&gt;  Use some form of work item tracking for Features, Backlog Items, Bugs, and Tasks.  I’m a big fan of Agile, so having a Product Backlog or Kanban Board visible to the entire group will allow them to focus on a goal.  Tracking Work Items clearly lays out who is responsible for what and adds a level of accountability.  Each iteration should have a clear definition of what “done” means for the team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous Integration:&lt;/strong&gt; Having implemented a source control system that allows your code to be centralized means you now have a location where you can start building a Continuous Integration (CI) system that can retrieve the source code and perform Automated Builds, Unit Tests and Static Code Analysis (SCA), all of which can also help to contribute to some useful metrics on how well you’re doing.  Developers should be encouraged to check their code in early and often so that we get constant feedback every time the project is built, tested, and packaged into build artifacts on a separate build machine (avoid the “It Works On My Machine” Syndrome).&lt;/p&gt;
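&lt;p&gt;A CI gate, in miniature, is just an ordered list of steps where a single failure stops the build and gives the developer fast feedback. The sketch below assumes hypothetical pytest and flake8 commands; swap in whatever build, unit-test, and SCA tooling your team actually uses:&lt;/p&gt;

```python
import subprocess

# Hypothetical CI steps - substitute your own build, test, and SCA commands.
CI_STEPS = [
    ["python", "-m", "pytest", "--quiet"],   # unit tests
    ["python", "-m", "flake8", "src"],       # static code analysis
]

def run_ci(steps, runner=None):
    """Run each step in order; a single non-zero exit code fails the build."""
    runner = runner or (lambda cmd: subprocess.run(cmd).returncode)
    for step in steps:
        if runner(step) != 0:
            return False  # red build: stop and surface the failure immediately
    return True  # green build: safe to package artifacts
```

&lt;p&gt;The `runner` parameter is only there so the gate logic can be exercised without real commands; a hosted CI system plays the same role with its own step definitions.&lt;/p&gt;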

&lt;p&gt;When building your CI blocks, this is a great place to gather additional metrics: you can see that there are consistent green builds and that all your unit tests are passing, and if you’re using tools like SonarQube or Fortify, they will help to produce additional metrics around code quality, standards and technical debt, where team members can visibly see the quality of the code through reporting dashboards and gain confidence in their builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated Deployment (Single Environment FIRST):&lt;/strong&gt; This is where we start seeing a lot of synergy, but also a lot of issues with building trust in the process.  Too often an over-eager engineer will try to automate the entire pipeline in one shot.  Instead, start small by automating the deployment to just a single environment; avoid trying to “Boil the Ocean” and work on ironing out the kinks with a single-environment deployment.  Build trust and confidence in the ability to deploy the code with no manual steps involved.  Do this in a “throw-away” environment where you can deploy and destroy over and over again.  Over time, other colleagues will see the amazing advantages of having an automated deployment and will invite you in to enable this for the other environments in your pipeline.  But get your automated deployments dialed in first in the single environment, bullet-proof the process, and build trust with the rest of the team that this is a process they can have confidence in.&lt;/p&gt;
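&lt;p&gt;The deploy-and-destroy cycle for a single throw-away environment can be sketched as an ordered script with no manual steps. Every name here (the environment name, the step labels) is a hypothetical placeholder for your real provisioning and deployment tooling:&lt;/p&gt;

```python
def deploy_to_single_env(artifact, env="throwaway", steps=None):
    """Run an ordered, fully scripted deployment and return a reviewable log.

    Each step is a function; real implementations would call provisioning
    and deployment tools rather than return placeholder strings.
    """
    steps = steps or [
        lambda: f"provisioned {env}",
        lambda: f"deployed {artifact} to {env}",
        lambda: f"smoke tests passed in {env}",
        lambda: f"destroyed {env}",  # deploy and destroy, over and over again
    ]
    return [step() for step in steps]
```

&lt;p&gt;Because the whole run is scripted, repeating it a hundred times in the throw-away environment is cheap, which is exactly how the trust gets built.&lt;/p&gt;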

&lt;p&gt;&lt;strong&gt;Automated Testing:&lt;/strong&gt; Once we’ve got a fully automated and repeatable process to deploy the system to a test environment, work on automating the testing of that system with integration and coded UI testing.  In the CI build, we run unit tests, but with a fully functional system we have the opportunity to run integration and load tests.  Having the ability to automatically reconstitute an entire environment and run more advanced testing is critical to increasing the team’s velocity and reducing downtime due to simple mistakes that can be caught through basic function testing.  However, be sure not to put all your trust in automated testing.  These tests are great for scripting out positive testing, like being able to log in or query for known information, but not every case can be accounted for and there’s nothing like having a human at the helm to do exploratory testing.&lt;/p&gt;

&lt;p&gt;The great thing about this is that it also becomes an iterative process: each time you build out a test it can be scripted, and you can build out your suite of tests so each iteration is more comprehensive than the last.  I also want to point out that while automated testing is great for basic scenarios, trying to maintain complex scenarios through scripts can sometimes be more trouble than it’s worth.  One of the worst things you can have is automated tests that continually fail and are ignored because it’s chalked up to “Oh, that always fails when the interface changes”, because now you’re training your team to ignore certain tests.  Every scripted test should pass every single time before certifying the build.&lt;/p&gt;
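&lt;p&gt;The “every scripted test must pass” rule above can be expressed as a tiny certification gate. This is only an illustrative sketch (the result dictionary is a stand-in for whatever your test runner reports); the key design point is that there is deliberately no allow-list of tests that are permitted to fail:&lt;/p&gt;

```python
def certify_build(results):
    """Certify a build only when every scripted test passes.

    `results` maps test name -> bool. Note there is no mechanism for
    exempting a flaky test ("oh, that one always fails") - if a test
    can't be trusted, fix it or remove it from the suite.
    """
    failures = sorted(name for name, passed in results.items() if not passed)
    return (not failures, failures)
```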

&lt;p&gt;&lt;strong&gt;Infrastructure As Code:&lt;/strong&gt; Once we can successfully deploy the code to a known set of servers or environments, the next step is to introduce the concept known as Infrastructure as Code (IaC) into the pipeline.  Having your code and infrastructure configurations both stored in source control enables your team to stand up new environments by simply changing some parameters in source control.  With parameterized scripts utilizing tools like Chef or Puppet, we can do away with the manual process of logging onto VMware portals to provision servers or setting up features like web servers that need to be configured before we can successfully deploy to the environment, further reducing the chance of human error.&lt;/p&gt;
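&lt;p&gt;The “change some parameters in source control” idea can be sketched as a parameterized environment definition. The configuration keys and values below are hypothetical; in practice this role is played by Chef cookbooks, Puppet manifests, or similar tooling:&lt;/p&gt;

```python
# A hypothetical base configuration checked into source control; standing up
# a new environment is a matter of changing parameters, not clicking portals.
BASE_CONFIG = {"os": "ubuntu-20.04", "web_server": "nginx", "app_port": 8080}

def environment(name, **overrides):
    """Derive a concrete environment definition from the shared base config."""
    config = dict(BASE_CONFIG, name=name)  # copy the base, never mutate it
    config.update(overrides)
    return config
```

&lt;p&gt;Because every environment derives from the same versioned base, drift between test and production becomes a visible diff instead of a surprise.&lt;/p&gt;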

&lt;p&gt;&lt;strong&gt;Monitoring:&lt;/strong&gt; This is one of those areas that often gets overlooked.  Once the iteration is over and the latest code is pushed to production, we often forget that operating the system is just as important as building it.  But this is also one of the most critical pieces for the business.  We must monitor our code and our systems as they run in production to ensure we are delivering value to our end users.  This should not be limited to the usual performance and exception monitoring either.  Monitoring should help answer the more interesting questions for the business, like how the end users are using the application, which features they use the most that might benefit from being expanded, as well as features that may not be used at all and can be trimmed from the system in a future iteration. Someone once explained to me that feature curation is like running a restaurant.  As you operate the restaurant you expand the menu and enhance certain meals, but if nobody orders certain items then you take them off the menu.  No restaurant would try to keep 400 items on the menu and do all the shopping and everything else that goes with menu items that nobody ever orders, so why would you maintain 400 features in your system that nobody ever uses?&lt;/p&gt;
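&lt;p&gt;The restaurant-menu review can be sketched directly from usage telemetry. The feature names and event stream below are hypothetical; the point is simply turning raw monitoring events into a keep/trim decision:&lt;/p&gt;

```python
from collections import Counter

def menu_review(features, usage_events, min_uses=1):
    """Split features into keep/trim candidates based on production usage,
    like pruning dishes nobody orders from a restaurant menu."""
    counts = Counter(usage_events)  # feature name -> observed usage count
    keep = [f for f in features if counts[f] >= min_uses]
    trim = [f for f in features if counts[f] < min_uses]
    return keep, trim
```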

&lt;p&gt;&lt;strong&gt;Continuous Improvement:&lt;/strong&gt; If something isn’t broken that doesn’t mean you can’t improve it.  As you move through the process of integrating different tools and processes into your delivery pipeline, think about how you can speed up the activities that take the longest or contain the most risk.  Review your workflow constraints and think about how you can reduce cycle times.  DevOps is not just about automating your pipeline. Sometimes you need to go back and rearchitect your existing system in a way that enables it to efficiently deploy in a way that does not impact your users.  A lot of systems aren’t built with that capability of handling sophisticated DevOps deployment strategies like Blue-Green deployments, canary builds, rings, and other strategies that enable zero downtime while creating low-risk releases.  But these are exactly the opportunities you’re going to want to target as your DevOps journey matures.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2018/01/DevOpsJourney.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ojy09jfa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://iedaddy.com/wp-content/uploads/2018/01/DevOpsJourney-300x193.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As many people like to say, “A journey of a thousand miles begins with the first step.”  The DevOps journey is no different, it starts with a few small steps and you start to move down the maturity road.  Not everyone starts in the same place, but it’s a road we all walk down together.  DevOps is about the journey and not the destination.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://iedaddy.com/2018/01/devops-where-should-we-start/"&gt;DevOps – Where should we start?&lt;/a&gt; appeared first on &lt;a href="http://iedaddy.com"&gt;Experiences of an Inland Empire Dad&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>DevOps and Databases Velocity Gap</title>
      <dc:creator>iedaddy</dc:creator>
      <pubDate>Wed, 20 Dec 2017 19:54:29 +0000</pubDate>
      <link>https://dev.to/iedaddy/devops-and-databases-velocity-gap-282h</link>
      <guid>https://dev.to/iedaddy/devops-and-databases-velocity-gap-282h</guid>
      <description>

&lt;p&gt;Pushing code is easy; there are a bunch of tools out there today that automate those deployments.  But one aspect of deployment is often overlooked and still done by hand: the database.  Manual database deployments slow down our delivery pipelines; they are slow, error-prone and resource-intensive, and the manual process interferes with a clean hand-off between development and operations.&lt;/p&gt;

&lt;p&gt;Some people have referred to this as the “Velocity Gap”: it is our biggest constraint in delivery because we can only move as fast as our slowest step, and more often than not that ends up being the database implementation.  If the goal is to deliver 10 times a day, 10 manual database deployments can create quite the bottleneck.&lt;/p&gt;

&lt;p&gt;As with security, the database causes such a roadblock because its team is typically the last to be brought into the life cycle. Databases cannot be reverted or replaced like application features: a database is designed to preserve and protect data, and for that reason it must itself be preserved.  This makes database delivery a complex problem. Unlike code delivery, where a roll-back consists of deploying the previous build artifacts, the database is in constant motion, and more often than not you can’t just go back to a previous point in time, because the data entered into the system must still be preserved.  For example, in an order-based system, you can’t just blow away the last 3 days of purchases because you found a bug and need to roll back.&lt;/p&gt;
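&lt;p&gt;As a toy illustration of that point (a minimal Python/SQLite sketch added for this post; the file and table names are invented), a naive point-in-time restore silently discards every order written after the snapshot, which is exactly what we cannot allow:&lt;/p&gt;

```python
# Demonstrates why restoring a pre-deploy snapshot is not a real
# database "roll-back": data written after the snapshot is lost.
import os
import shutil
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
db = os.path.join(workdir, "orders.db")

# Seed the database with one pre-deploy order.
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (10.0)")
conn.commit()
conn.close()

shutil.copyfile(db, db + ".snapshot")  # deploy-time backup

# A customer places an order after the deployment.
conn = sqlite3.connect(db)
conn.execute("INSERT INTO orders (total) VALUES (99.0)")
conn.commit()
conn.close()

# "Roll back" by restoring the snapshot file over the live database.
shutil.copyfile(db + ".snapshot", db)

conn = sqlite3.connect(db)
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
conn.close()
# count is now 1: the post-deploy purchase has vanished.
```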

&lt;p&gt;This goes fundamentally to how databases are designed.  Just as developers had to learn how to structure their code to support automated and unit testing, database architects must now learn how to build resilient databases that support the DevOps release cadence. To successfully bring the database into the DevOps fold, database administrators should be integrated into the team, learn about development, and trust the development process. DevOps means having cross-functional teams, and in the traditional way of doing things, when a change happens the database administrator typically doesn’t know why it is happening or how it will impact the overall product. Bringing DBAs onto the team helps them understand the function of the product and enables them to weigh in on the architecture.&lt;/p&gt;

&lt;p&gt;This is extremely important because traditional database structures don’t support a DevOps model: we’re missing a layer of abstraction.  When choosing how to structure our database and its deployment process, there are some key fundamentals to take into account:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Testable&lt;/strong&gt;: I can test any database change before running it on the Production database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated&lt;/strong&gt;: I can automate the whole process so that I can’t get it wrong.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trackable&lt;/strong&gt;: Each database should have a log of what has been done to its schema.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Atomic&lt;/strong&gt;: The update process must either complete successfully or be rolled back entirely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recoverable&lt;/strong&gt;: Each update should automatically make a backup in case the worst happens.&lt;/li&gt;
&lt;/ul&gt;
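&lt;p&gt;As a hedged sketch of what those fundamentals can look like in practice (illustrative Python/SQLite code added for this post, not a real tool; the &lt;code&gt;schema_migrations&lt;/code&gt; table and all other names are invented), a minimal migration runner backs up, logs, and atomically applies each change:&lt;/p&gt;

```python
# Minimal migration runner illustrating the five fundamentals.
import os
import shutil
import sqlite3
import tempfile

# Each migration is a (name, DDL) pair; names are illustrative.
MIGRATIONS = [
    ("001_create_orders", "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)"),
    ("002_add_status", "ALTER TABLE orders ADD COLUMN status TEXT DEFAULT 'new'"),
]

def apply_migrations(db_path: str) -> list:
    """Apply any pending migrations to the database at db_path."""
    # Recoverable: take a file-level backup before touching the schema.
    shutil.copyfile(db_path, db_path + ".bak")
    conn = sqlite3.connect(db_path)
    try:
        # Trackable: the database itself logs what has been done to its schema.
        conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
        conn.commit()
        done = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
        applied = []
        for name, ddl in MIGRATIONS:
            if name in done:
                continue
            # Atomic: the DDL and its log entry commit together or not at all
            # (SQLite supports transactional DDL).
            with conn:
                conn.execute(ddl)
                conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
            applied.append(name)
        return applied
    finally:
        conn.close()

# Demo run against a fresh database file.
tmp = tempfile.mkdtemp()
db_path = os.path.join(tmp, "app.db")
sqlite3.connect(db_path).close()       # create an empty database file
first_run = apply_migrations(db_path)  # applies both migrations
second_run = apply_migrations(db_path) # no-op: both already tracked
```

&lt;p&gt;Automation makes re-running the process safe, and the Testable property comes for free: point the same runner at a throwaway copy of Production before the real deployment.&lt;/p&gt;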

&lt;p&gt;For this reason, one model I’ve found works extremely well is splitting the database in two: a core database holding the data tables, and a facade (interface) database responsible for the views, stored procedures, and business logic that exist outside of the raw data.&lt;/p&gt;

&lt;p&gt;Breaking these out into two separate databases on the same server makes it easy to have multiple facade databases “in-flight” while the core data remains constant.  Small changes, updates, and features are applied to one facade to support a canary-build or blue-green deployment model.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://iedaddy.com/wp-content/uploads/2017/12/canary-1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gLWxKYba--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://iedaddy.com/wp-content/uploads/2017/12/canary-1-300x180.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By breaking out the database code and isolating it from the raw data store, it allows us to run a side-by-side in production using a canary or ring deployment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, does this work in every case?  No.  But it goes a long way toward allowing rapid deployments and roll-backs of many of our changes while still preserving our “in-flight” data, as in the case of various e-commerce applications.  By abstracting the database “functionality” (views and stored procedures) into one database and keeping the raw data in another, we can roll back changes quickly while preserving the actual data.&lt;/p&gt;

&lt;p&gt;Fundamentally, though, this is going to change the structure of your deployments and artifacts.  You need to bring your DBAs in early and explain the reasoning behind the structure and what it means for them in practical terms.&lt;/p&gt;


&lt;p&gt;The post &lt;a href="http://iedaddy.com/2017/12/devops-databases-velocity-gap/"&gt;DevOps and Databases Velocity Gap&lt;/a&gt; appeared first on &lt;a href="http://iedaddy.com"&gt;Experiences of an Inland Empire Dad&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>devops</category>
      <category>canarybuild</category>
      <category>database</category>
    </item>
  </channel>
</rss>
