<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: David Wengier</title>
    <description>The latest articles on DEV Community by David Wengier (@davidwengier).</description>
    <link>https://dev.to/davidwengier</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F48244%2F4b9d45c2-316b-46b5-b990-8398b366af00.jpg</url>
      <title>DEV Community: David Wengier</title>
      <link>https://dev.to/davidwengier</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/davidwengier"/>
    <language>en</language>
    <item>
      <title>Debugging IIS Rewrite Rules</title>
      <dc:creator>David Wengier</dc:creator>
      <pubDate>Mon, 21 May 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/davidwengier/debugging-iis-rewrite-rules-4jbi</link>
      <guid>https://dev.to/davidwengier/debugging-iis-rewrite-rules-4jbi</guid>
      <description>&lt;p&gt;Where I work we have a number of ASP.NET web applications that run different parts of our site so that we can have some segregation of code and containment of scope without just having an enormous monolithic project that holds everything intermingled together. It’s nothing too exciting technically, but the marketing department also needs to be able to present the entire site as a whole to visitors, and the Google Bot, for that sweet sweet SEO juice (and easier navigation and other less cynical reasons I’m sure). The way we achieve that is with prodigious use of the IIS URL Rewrite engine, which allows us to create a set of rules that take the incoming HTTP requests and either route them through to different web applications, or different virtual paths, or stop some in their tracks entirely.&lt;/p&gt;

&lt;p&gt;There is plenty of documentation and there are examples on the web about setting these up, and I certainly don’t claim to be an expert in the full range of their capabilities. One thing I do know is that whilst they are fantastic when they are working, and just sit there happily doing their job without complaint, when adding new ones it can sometimes be a bit of a mystery as to whether they are working at all. Additionally, because we use them to consolidate a lot of different applications and URLs into one coherent set of public URLs, running them locally quickly ends up with requests to local environments being redirected to live environments, with no real way of knowing whether that’s because the rules are working perfectly, or simply because the request fell through to some catch-all at the end.&lt;/p&gt;

&lt;p&gt;Fortunately there is a way to debug the rules, or at least get logging out of the engine, albeit a little hidden.&lt;/p&gt;

&lt;h2&gt;
  
  
  Not all failures are failures
&lt;/h2&gt;

&lt;p&gt;The answer lies in the IIS Failed Request Tracing feature and the fact that it can be configured to trace successful requests just as easily as failed ones. The feature can be accessed through the IIS Manager, or the configuration can be specified in the &lt;code&gt;web.config&lt;/code&gt; file of your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwengier.com%2Fimages%2Fposts%2FFRT.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwengier.com%2Fimages%2Fposts%2FFRT.png" alt="Failed Request Tracing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The module itself has quite a nice wizard to guide you through setting up a new rule, however to debug rewrites in the way that I want to, it’s a little unintuitive, so I’ll detail exactly what I did.&lt;/p&gt;

&lt;p&gt;The first step is straightforward enough: you select which filenames you want to trace. In the modern era of MVC and WebApi this feels a little antiquated, since file names are a bit naff, so it’s probably best, and certainly easiest, to just leave this selection on “All content”.&lt;/p&gt;

&lt;p&gt;The second screen is where the real magic happens:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwengier.com%2Fimages%2Fposts%2Ffrt-step-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwengier.com%2Fimages%2Fposts%2Ffrt-step-2.png" alt="Failed Request Tracing Step 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first input option here is to specify which HTTP status codes should be traced, and this is where we flip the “failure” title on its head. By specifying a successful code here (i.e. a 2xx or 3xx) we get tracing for successful requests and not just failed ones.&lt;/p&gt;

&lt;p&gt;Depending on how much logging you want happening, you could narrow this down to just the statuses you want to track (for example, specify just 301 to trace permanent redirects), or you could widen it. I think starting as wide as you can and specifying &lt;code&gt;200-399&lt;/code&gt; for this value is best; that way, if you’re adding a new permanent redirect rule that you want to trace, you’ll still get the logs even if something is wrong with your rule and the request falls through to a different rewrite rule.&lt;/p&gt;

&lt;p&gt;If the requests you’re trying to trace are getting through to your site and resulting in errors or bad URLs you also might want to add in &lt;code&gt;404&lt;/code&gt; and &lt;code&gt;500&lt;/code&gt; to the list.&lt;/p&gt;
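&lt;p&gt;For context, a typical rule that this tracing illuminates looks something like the following. This is only a sketch; the rule name, pattern and target URL here are hypothetical, made up for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;rewrite&amp;gt;
  &amp;lt;rules&amp;gt;
    &amp;lt;!-- Hypothetical example: permanently redirect /blog/* to another host --&amp;gt;
    &amp;lt;rule name="RedirectBlog" stopProcessing="true"&amp;gt;
      &amp;lt;match url="^blog/(.*)$" /&amp;gt;
      &amp;lt;action type="Redirect" url="https://example.com/articles/{R:1}" redirectType="Permanent" /&amp;gt;
    &amp;lt;/rule&amp;gt;
  &amp;lt;/rules&amp;gt;
&amp;lt;/rewrite&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Since a rule like this issues a 301, a &lt;code&gt;200-399&lt;/code&gt; status range will capture its traces.&lt;/p&gt;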

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwengier.com%2Fimages%2Fposts%2Ffrt-step-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwengier.com%2Fimages%2Fposts%2Ffrt-step-3.png" alt="Failed Request Tracing Step 3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The third screen allows for the selection of which IIS modules you want to trace, so in order to keep some of the noise out of the log it’s best to untick everything except &lt;code&gt;Rewrite&lt;/code&gt; and &lt;code&gt;RequestRouting&lt;/code&gt;. Leave the verbosity at Verbose, mainly because it’s fun to say “verbose verbosity”.&lt;/p&gt;

&lt;p&gt;And that’s it, you’re all configured. You can also configure the equivalent of all of this in the &lt;code&gt;web.config&lt;/code&gt; file with the following config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;tracing&amp;gt;
  &amp;lt;traceFailedRequests&amp;gt;
    &amp;lt;add path="*"&amp;gt;
      &amp;lt;traceAreas&amp;gt;
        &amp;lt;add provider="WWW Server" areas="Rewrite,RequestRouting" verbosity="Verbose" /&amp;gt;
      &amp;lt;/traceAreas&amp;gt;
      &amp;lt;failureDefinitions timeTaken="00:00:00" statusCodes="200-399" /&amp;gt;
    &amp;lt;/add&amp;gt;
  &amp;lt;/traceFailedRequests&amp;gt;
&amp;lt;/tracing&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, make sure the feature itself is enabled by clicking “Edit Site Tracing…” in the right hand bar and ticking the Enabled checkbox. If it’s already enabled then great, but the screen is still useful to grab the path to the log files, which by default is &lt;code&gt;%SystemDrive%\inetpub\logs\FailedReqLogFiles&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Looking at the log files
&lt;/h2&gt;

&lt;p&gt;The Failed Request Tracing module logs to XML files, which I definitely don’t recommend looking at in raw form. Fortunately IIS also generates an XSL file for you which nicely formats the logs into something vaguely readable. I’ve found the easiest way to view the logs is simply to open the XML file in Internet Explorer (yes, I know), as that will automatically find and apply the XSL file, whereas Chrome will not.&lt;/p&gt;

&lt;p&gt;The logs themselves are quite verbose, as we requested:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwengier.com%2Fimages%2Fposts%2Ffrt-output.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwengier.com%2Fimages%2Fposts%2Ffrt-output.png" alt="Failed Request Tracing Sample Log"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ll see the output for each rule you have in your rewrite configuration, including the input values and the patterns matched against. You can see whether each one succeeded, though it’s worth noting that you need to apply the &lt;code&gt;negate&lt;/code&gt; value yourself, so a negated rule might say “Succeeded: false” and you have to remember that this means the rule as you wrote it did in fact match.&lt;/p&gt;

&lt;p&gt;Hopefully after trawling through the file you can work out what is going on, though personally I found it easier to search the file for the rule I thought was probably at fault, rather than scanning through them all; then again, I’m coming from a codebase with a &lt;em&gt;lot&lt;/em&gt; of rules.&lt;/p&gt;
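&lt;p&gt;If there are many log files, a small script can shortlist the ones that mention a given rule before you open anything in a browser. A minimal sketch (the log directory and rule name below are assumptions for illustration, not real values from my setup):&lt;/p&gt;

```python
from pathlib import Path

def files_mentioning(log_dir, rule_name):
    """Return the names of FRT log files (fr*.xml) whose text contains the rule name."""
    hits = []
    for path in sorted(Path(log_dir).glob("fr*.xml")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if rule_name.lower() in text.lower():
            hits.append(path.name)
    return hits

# Hypothetical usage against the default log location:
# files_mentioning(r"C:\inetpub\logs\FailedReqLogFiles\W3SVC1", "RedirectToApp")
```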

</description>
      <category>iisrewritedebugging</category>
    </item>
    <item>
      <title>Promoting Binaries and Hotfixable Deployments</title>
      <dc:creator>David Wengier</dc:creator>
      <pubDate>Mon, 07 May 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/davidwengier/promoting-binaries-and-hotfixable-deployments-ahf</link>
      <guid>https://dev.to/davidwengier/promoting-binaries-and-hotfixable-deployments-ahf</guid>
      <description>&lt;p&gt;There are a two different schools of thought when it comes to deploying to production environments. Well okay, we’re developers, so there are probably 100 different schools of thought but bear with me. One option is to promote the same binaries from testing, through staging, and all the way to production, and the other is to maintain a branch in your source repository for the current state of production, and deploy from that. The general thinking is that with the former you get safety in knowing that your production deployments is &lt;em&gt;exactly&lt;/em&gt; what has been through your testing cycles, and with the latter you’re always in a position to hotfix and correct a production issue regardless of what state your testing branch might be in.&lt;/p&gt;

&lt;p&gt;Fortunately it’s an argument that can be avoided; instead you can set up an environment where you get the best of both worlds: predictable results from promoting binaries to production, and an insurance policy in case you need to hotfix. I am using &lt;a href="https://www.jetbrains.com/teamcity/" rel="noopener noreferrer"&gt;TeamCity&lt;/a&gt; and &lt;a href="https://octopus.com/" rel="noopener noreferrer"&gt;Octopus Deploy&lt;/a&gt; to do this but the ideas are the same no matter what technology you use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Commit hashes are important
&lt;/h2&gt;

&lt;p&gt;One of the best pieces of advice I have for anyone setting up any kind of CI/CD, automation, or devops workflow is to get your commit hashes into your binaries and packages as early and as often as possible. Having a known identifier that can track binaries and directly correlate them to source code is invaluable for all sorts of things, but in this case it’s especially important so that the build server and deployment packages know what each other is talking about.&lt;/p&gt;

&lt;p&gt;To get commit hashes into your build output in TeamCity is as straightforward as configuring a setting on the build in question. The “Build number format” setting dictates how TeamCity should format build numbers in its output, and also the format of the &lt;code&gt;%build.number%&lt;/code&gt; variable that you can use in, or pass in to, scripts and build steps. The normal approach for a build number would be something like &lt;code&gt;1.0.%build.counter%&lt;/code&gt;, where the major and minor versions are hardcoded to 1.0, and the build counter increments automatically with every build. Personally I’m a fan of using something like GitVersion to allow the number of commits to be used instead of the build counter, as it allows resiliency across build server reinstalls, but that’s for another discussion.&lt;/p&gt;

&lt;p&gt;Tagging the commit hash on the end is done by adding a hyphen after the build counter, and then inserting the commit hash. In TeamCity this is the &lt;code&gt;%build.vcs.number%&lt;/code&gt; variable, so our full build number format is as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1.0.%build.counter%-%build.vcs.number%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will give a build number something like &lt;code&gt;1.0.134-770ac6d169006ce42b5bbc022a6a166135bbe8a7&lt;/code&gt;. Success, in that we have the commit hash in the build number, but it’s a bit ugly and unnecessarily long. You only need around 7 or 8 characters to be unique for most repos (the Linux kernel is starting to need 12, but it has hundreds of thousands of commits) so I like to shorten the hash down a bit. Doing this in TeamCity is a little unintuitive as there are no operations that can be performed in the simple macro language you use to specify the build number format. To change the build number you need to use a build step and the TeamCity feature called &lt;a href="https://confluence.jetbrains.com/display/TCD10/Build+Script+Interaction+with+TeamCity#BuildScriptInteractionwithTeamCity-servMsgsServiceMessages" rel="noopener noreferrer"&gt;service messages&lt;/a&gt;: a pre-defined structure of output, written to standard output, that TeamCity will pick up and process. I’ve done this with a short PowerShell script as the first step in each build I define.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write-Host "Old build number was: %build.number%"
$buildNumber = "1.0.%build.counter%"
$shortHash = "%build.vcs.number%"
$shortHash = $shortHash.substring(0, 10)
$buildNumber = "$buildNumber-$shortHash"
Write-Host "New build number is: $buildNumber"
Write-Host "##teamcity[buildNumber '$buildNumber']"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My script is overly long because of the debugging output, but build logs are verbose enough already that a couple of extra lines isn’t worth worrying about. Strictly speaking the whole script could be a one-liner.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write-Host "##teamcity[buildNumber '1.0.%build.counter%-$("%build.vcs.number%".substring(0, 10))']"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’m keeping the short hash at 10 characters for no good reason; you could easily change that to whatever you desire. It’s worth noting that with this as the first step of the build plan, the “Build number format” setting is rendered effectively useless for all but the first few seconds of the build, until this script runs. With the script in place the build number will now be &lt;code&gt;1.0.134-770ac6d169&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pass hashes through to Octopus
&lt;/h2&gt;

&lt;p&gt;Now that we have our short build number its important to use that in the version number for any package pushed to Octopus, and the release made from those packages. This gives full traceability from git commit, to build, through to deployment. If you also use something like &lt;a href="https://github.com/AArnott/Nerdbank.GitVersioning" rel="noopener noreferrer"&gt;NerdBank.GitVersioning&lt;/a&gt; you can tag your DLLs with the same commit hash, which means you can also include it in your application logs or audit tracking.&lt;/p&gt;

&lt;p&gt;With the version number in the package being deployed, we can now create a PowerShell script and put it in the Octopus process for a production deployment. That script fast forwards the master branch to the specific commit that has been deployed, guaranteeing that the master branch will point at exactly where the develop branch was when that package was built.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/..%2Fimages%2Fposts%2Ffast-forward-to-master-step.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/..%2Fimages%2Fposts%2Ffast-forward-to-master-step.png" alt="Fast Forward to Master Step"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Set-Location -Path "&amp;lt;path to source repository&amp;gt;"
$vers = $OctopusParameters["Octopus.Release.Number"]
Write-Host "Version is: $vers"
# The commit hash is everything after the hyphen in the version number
$commitHash = $vers.Substring($vers.IndexOf("-") + 1)
Write-Host "This release is from commit hash: $commitHash"

Write-Host "Fetching latest origin just to be sure"
git fetch origin --prune
Write-Host "Resetting to current master"
git reset origin/master --hard
Write-Host "Fast forwarding to $commitHash"
git merge $commitHash --ff-only
Write-Host "Pushing back to origin"
git push origin
Write-Host "Finished"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script needs to be run in a git working copy and assumes master is checked out, though that check could be added easily enough. I could have reset to the specific commit and just pushed that, but I like the extra protection that &lt;code&gt;--ff-only&lt;/code&gt; provides. It ensures that if anything goes wrong with the working copy, or the script gets run at an incorrect time, there at least won’t be any lost commits that require navigating the reflog to recover. There might be a better way to achieve this, or perhaps that worry is for nothing, but I don’t profess to be a git expert.&lt;/p&gt;
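&lt;p&gt;The protection &lt;code&gt;--ff-only&lt;/code&gt; gives can be seen in a throwaway repository: if the branch being updated has a commit of its own, the merge refuses rather than quietly creating a merge commit. A sketch of that behaviour, using Python to drive git purely for illustration:&lt;/p&gt;

```python
import subprocess, tempfile
from pathlib import Path

def git(*args, cwd):
    """Run a git command in the given directory and return the completed process."""
    return subprocess.run(("git",) + args, cwd=cwd, capture_output=True, text=True)

repo = Path(tempfile.mkdtemp())
git("init", "-q", cwd=repo)
git("config", "user.email", "test@example.com", cwd=repo)
git("config", "user.name", "test", cwd=repo)
(repo / "f.txt").write_text("base\n")
git("add", ".", cwd=repo)
git("commit", "-qm", "base", cwd=repo)

git("checkout", "-qb", "develop", cwd=repo)   # branch off for "develop" work
(repo / "g.txt").write_text("develop work\n")
git("add", ".", cwd=repo)
git("commit", "-qm", "develop work", cwd=repo)

git("checkout", "-q", "-", cwd=repo)          # back to the default branch
(repo / "f.txt").write_text("base\ndiverged\n")
git("commit", "-qam", "diverging commit", cwd=repo)

# Fast-forward is now impossible, so --ff-only refuses instead of merging
result = git("merge", "develop", "--ff-only", cwd=repo)
print("refused" if result.returncode != 0 else "fast-forwarded")
```

In the healthy case, where the default branch has no commits of its own, the same merge fast-forwards cleanly.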

&lt;h2&gt;
  
  
  Hotfixes are now just another build
&lt;/h2&gt;

&lt;p&gt;Now that master is at the point of the deployed production build, hotfix branches can be created from, and merged back into, the master branch, which can then be built and deployed with the normal build and deployment process, knowing that any changes made to the develop branch will not be included. In an ideal world develop remains deployable and this process isn’t needed, but an insurance policy is a good idea and, in this case, cheap to have. In my case I’ve set up a separate build in TeamCity for the master branch that is not automatically triggered, which is best considering each production deploy will change the master branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.wengier.com%2Fimages%2Fposts%2Fhotfix-lifecycle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.wengier.com%2Fimages%2Fposts%2Fhotfix-lifecycle.png" alt="HotFix Lifecycle"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The hotfix build releases on a hotfix channel in Octopus so that it can deploy direct to staging, avoiding test. This way test still maps to the develop branch so that process isn’t interrupted. Specifying the channel to use is a matter of setting the right parameters in the TeamCity build step that does your Octopus release creation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.wengier.com%2Fimages%2Fposts%2Fpush-to-octopus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.wengier.com%2Fimages%2Fposts%2Fpush-to-octopus.png" alt="Specify Channel in Octopus"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only issue that I ran into with this is that because I’m not using a “smart” build number, but instead just a numerically increasing build counter, the first hotfix build didn’t actually get deployed by Octopus. Looking at the TeamCity and Octopus logs it was clear that while the build and release versions were correct, when it came time to pick which packages went into a release, Octopus saw the hotfix build as being older than the last develop build, simply because of the build counter.&lt;/p&gt;

&lt;p&gt;To solve this I configured the Octopus release creation step to force the package version to use. Since we have commit hashes at every step of the way, the actual version numbers all become rather irrelevant, so this feels like a perfectly safe thing to do. In theory, if two releases point to the same commit hash it doesn’t matter if one is v2.0.1 and the other is v3.56.231; they have the same code and therefore will function the same way.&lt;/p&gt;
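&lt;p&gt;To see why the hotfix package looked older, consider how a plain numeric ordering treats the two version numbers. A toy illustration (the hotfix counter and hashes below are made-up examples, and this is not Octopus’s actual comparison code):&lt;/p&gt;

```python
def version_key(version):
    """Order versions by their dotted numeric prefix, ignoring the commit-hash suffix."""
    return tuple(int(part) for part in version.split("-")[0].split("."))

develop_package = "1.0.134-770ac6d169"
hotfix_package = "1.0.58-9f2c41ab00"   # hypothetical: the master build's counter lags develop's

# The hotfix sorts as older, so "latest version" selection skips it
latest = max([develop_package, hotfix_package], key=version_key)
print(latest)  # → 1.0.134-770ac6d169
```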

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.wengier.com%2Fimages%2Fposts%2Fpush-to-octopus-advanced-options.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.wengier.com%2Fimages%2Fposts%2Fpush-to-octopus-advanced-options.png" alt="Advanced Octopus Options"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You might need to click “Show Advanced Options” in TeamCity to get this item to appear.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hope for the best, plan for the worst
&lt;/h2&gt;

&lt;p&gt;Now we have a situation where the develop branch is built and deployed automatically, as often as we like. We know the commit hash at every step of the way, so we can map everything back to the raw source commit, and we have our insurance policy in place if things go wrong, via the moving master branch and a hotfix build available for manual triggering.&lt;/p&gt;

</description>
      <category>teamcity</category>
      <category>octopus</category>
      <category>build</category>
      <category>deploy</category>
    </item>
    <item>
      <title>Targeting builds for multiple frameworks and machines</title>
      <dc:creator>David Wengier</dc:creator>
      <pubDate>Mon, 30 Apr 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/davidwengier/targeting-builds-for-multiple-frameworks-and-machines-5h22</link>
      <guid>https://dev.to/davidwengier/targeting-builds-for-multiple-frameworks-and-machines-5h22</guid>
      <description>

&lt;p&gt;I’ve recently started working on a new project in my spare time, &lt;a href="http://github.com/davidwengier/dbupgrader"&gt;DbUpgrader&lt;/a&gt;, and I’m trying to work on it for at least a few minutes every night. I variously use a MacBook Pro or Windows machine, and sometimes I use Visual Studio 2017 but sometimes I’m just using Visual Studio Code and mucking around on the console. I’d like to also try out Visual Studio for Mac sometime soon. All of these different environments have their advantages and features, but I mostly want to make sure that I can work in all of them, on the same project, without issue.&lt;/p&gt;

&lt;p&gt;Enter the &lt;a href="https://github.com/dotnet/project-system"&gt;new project system&lt;/a&gt; in Visual Studio which allows for minimal .csproj files that remain easily editable MSBuild targets without having to compromise and have separate build scripts for each scenario. The challenge I set myself was to see if I could create a single solution with projects that fulfilled the following needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Opens in Visual Studio on Windows without error&lt;/li&gt;
&lt;li&gt;Builds in Visual Studio without issue&lt;/li&gt;
&lt;li&gt;Tests appear in the Test Explorer in Visual Studio and tests run as expected&lt;/li&gt;
&lt;li&gt;Works with &lt;code&gt;dotnet build&lt;/code&gt; on Mac and Windows&lt;/li&gt;
&lt;li&gt;Works with &lt;code&gt;dotnet test&lt;/code&gt; on Mac and Windows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This may seem easy but it’s slightly complicated by the fact that I want to support not only the full .NET Framework v4.6 on Windows, but also .NET Core on Mac and Windows, without the .NET 4.6 support being an issue on Mac. To support .NET 4.6 the shared libraries need to be .NET Standard 1.3 or lower, but I also have some functionality and tests that use &lt;code&gt;Microsoft.Data.Sqlite&lt;/code&gt; which is .NET Standard 2.0, and therefore incompatible with .NET 4.6. So on Windows I want a build for .NET 4.6 without Sqlite support, and a build for .NET Core with it, and on Mac a build for .NET Core with support and no errors relating to missing .NET Framework support.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-targeting means multi-builds
&lt;/h2&gt;

&lt;p&gt;The easiest way to think about multi-targeting in the new project system is to remember this simple fact: each target framework acts like it’s a duplicate of the whole project. Consider a .csproj file with the following declaration.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;TargetFrameworks&amp;gt;&lt;/span&gt;net46;netcoreapp2.0&lt;span class="nt"&gt;&amp;lt;/TargetFrameworks&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When building this project MSBuild will run the build twice, once for .NET Framework 4.6 (net46) and once for .NET Core (netcoreapp2.0). Knowing this helps explain the logic of how the project file should be laid out in order to change what is built for each target.&lt;/p&gt;

&lt;p&gt;In my case I want the Sqlite code to only be built for netcoreapp2.0 because it needs to target .NET Standard 2.0, and net46 is not quite at that level. The full table of versions and what they support is &lt;a href="https://github.com/dotnet/standard/blob/master/docs/versions.md"&gt;on GitHub&lt;/a&gt; but suffice to say that net46 maps to .NET Standard 1.3.&lt;/p&gt;

&lt;p&gt;Armed with this information we know that we need to exclude the Sqlite dependencies and files when building for net46 and this is done with a &lt;code&gt;Condition&lt;/code&gt; attribute on the relevant spots in the project file.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;ItemGroup&lt;/span&gt; &lt;span class="na"&gt;Condition=&lt;/span&gt;&lt;span class="s"&gt;"'$(TargetFramework)' == 'netcoreapp2.0'"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;ProjectReference&lt;/span&gt; &lt;span class="na"&gt;Include=&lt;/span&gt;&lt;span class="s"&gt;"..\..\src\DbUpgrader.Sqlite\DbUpgrader.Sqlite.csproj"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/ItemGroup&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here I am instructing the project system to only reference the Sqlite project if the target framework of the build is netcoreapp2.0. This is where thinking about the targets as separate builds makes sense. When it’s passing through this file building for net46, MSBuild will see that the condition is not met, and simply skip over this part of the file. No reference will be added. When building for netcoreapp2.0 the reference will be added.&lt;/p&gt;

&lt;h2&gt;
  
  
  Excluding files
&lt;/h2&gt;

&lt;p&gt;That’s all well and good for the reference, but obviously if the reference is there then there must be files that use it. Because the new project system doesn’t need specific file inclusions, it’s unlikely that you would have a node that can have a condition added to it, so we need to be a bit creative.&lt;/p&gt;

&lt;p&gt;You can use an &lt;code&gt;Exclude&lt;/code&gt; attribute on a &lt;code&gt;&amp;lt;Compile&amp;gt;&lt;/code&gt; element alongside the normal &lt;code&gt;Include&lt;/code&gt;, but I found the usage of that a bit ugly, and since by default there aren’t any &lt;code&gt;&amp;lt;Compile&amp;gt;&lt;/code&gt; elements needed in the Sdk projects it seemed a bit clunky to add one back in. The solution I settled on was to simply update the &lt;code&gt;DefaultItemExcludes&lt;/code&gt; property that already exists, and is already used by the default project. The glob support in the new system makes this a breeze too, needing only a single addition to exclude multiple files and folders/subfolders.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;PropertyGroup&lt;/span&gt; &lt;span class="na"&gt;Condition=&lt;/span&gt;&lt;span class="s"&gt;"'$(TargetFramework)' == 'net46'"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;DefaultItemExcludes&amp;gt;&lt;/span&gt;$(DefaultItemExcludes);Integration\Sqlite\**\*&lt;span class="nt"&gt;&amp;lt;/DefaultItemExcludes&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/PropertyGroup&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Since we’re now telling MSBuild to &lt;em&gt;exclude&lt;/em&gt; items we don’t want, we flip the condition so it’s based on net46. These two things combined mean the project includes everything we want when building for .NET Core, and doesn’t include the wrong things when building for .NET Framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Targeting the targets
&lt;/h2&gt;

&lt;p&gt;If the conditions so far have been based on the frameworks being targeted, then how do you make the targets conditional? To do that you need something at a higher level and fortunately the operating system fills this role perfectly. We can tell MSBuild to build .NET Core and .NET Framework on Windows, just .NET Core on a Mac, and everything will flow correctly from there based on whichever target is being built at the time. The conditions look very similar too.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;TargetFrameworks&amp;gt;&lt;/span&gt;netcoreapp2.0&lt;span class="nt"&gt;&amp;lt;/TargetFrameworks&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;TargetFrameworks&lt;/span&gt; &lt;span class="na"&gt;Condition=&lt;/span&gt;&lt;span class="s"&gt;"'$(OS)' != 'Unix'"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;net46;netcoreapp2.0&lt;span class="nt"&gt;&amp;lt;/TargetFrameworks&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Two things to note here. The first is that the OS for Mac is “Unix”. This surprised me, but is not a big deal. I originally guessed that it would be “Mac”, and when that didn’t work I simply added a build task to my project file and observed what the output was. The task is as follows, and it’s run by specifying &lt;code&gt;InitialTargets="LogDebugInfo"&lt;/code&gt; in the &lt;code&gt;&amp;lt;Project&amp;gt;&lt;/code&gt; node. It’s a good reminder that these csproj files are also simply MSBuild scripts and can be treated as such, though double check that Visual Studio is still happy afterwards.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;Target&lt;/span&gt; &lt;span class="na"&gt;Name=&lt;/span&gt;&lt;span class="s"&gt;"LogDebugInfo"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;Message&lt;/span&gt; &lt;span class="na"&gt;Text=&lt;/span&gt;&lt;span class="s"&gt;"Building for $(TargetFramework) on $(OS)"&lt;/span&gt; &lt;span class="na"&gt;Importance=&lt;/span&gt;&lt;span class="s"&gt;"High"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/Target&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
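

&lt;p&gt;For context, wiring that target up looks roughly like this (a minimal sketch - the Sdk attribute and framework monikers are just whatever your project already uses):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk" InitialTargets="LogDebugInfo"&amp;gt;
  &amp;lt;PropertyGroup&amp;gt;
    &amp;lt;TargetFrameworks&amp;gt;netcoreapp2.0&amp;lt;/TargetFrameworks&amp;gt;
    &amp;lt;TargetFrameworks Condition="'$(OS)' != 'Unix'"&amp;gt;net46;netcoreapp2.0&amp;lt;/TargetFrameworks&amp;gt;
  &amp;lt;/PropertyGroup&amp;gt;

  &amp;lt;Target Name="LogDebugInfo"&amp;gt;
    &amp;lt;Message Text="Building for $(TargetFramework) on $(OS)" Importance="High" /&amp;gt;
  &amp;lt;/Target&amp;gt;
&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;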



&lt;p&gt;Secondly, you’ll notice that there is only a condition on one of the elements. This was not what I tried first, as I assumed there would be problems having duplicated elements without conditions to differentiate them. Indeed, whilst having conditions on both worked fine in the &lt;code&gt;dotnet build&lt;/code&gt; world (on Mac and Windows), Visual Studio itself got very confused. I posted about it on Twitter and the very helpful &lt;a href="https://twitter.com/davkean"&gt;David Kean&lt;/a&gt;, who works for Microsoft on the new project system, &lt;a href="https://twitter.com/davkean/status/987820416579223552"&gt;pointed&lt;/a&gt; me to &lt;a href="https://github.com/dotnet/project-system/issues/1829"&gt;this GitHub issue&lt;/a&gt; explaining that I’d hit a bug. It wasn’t a big deal to remove one condition; I just had to make sure the order was right. Having two &lt;code&gt;&amp;lt;TargetFrameworks&amp;gt;&lt;/code&gt; elements means the second one overrides the first, so in order for Windows to still get net46 support it had to come last.&lt;/p&gt;

&lt;p&gt;It looks like as long as the project file has one element without a condition, Visual Studio (at least v15.6.7, which I’m trying this on) is happy, though I suspect the IDE thinks I’m developing for .NET Core only. When building from Visual Studio, however, there is no issue, since it just runs MSBuild. In theory this could mean the IDE marks something as correct and the build subsequently fails, or vice versa, but that’s a minor price to pay for the flexibility, and I presume it would only be a temporary problem until the build is fixed.&lt;/p&gt;

&lt;h2&gt;
  
  
  I like your new stuff better than your old stuff
&lt;/h2&gt;

&lt;p&gt;In general the new project system is great, and I love being able to edit the project file while it’s open in Visual Studio and see the changes take effect immediately. Getting people to think about project files and build files is a good thing, as it encourages the “devops mindset”, which I’m personally a fan of and think every developer should try to attain.&lt;/p&gt;

&lt;p&gt;But that’s commentary for another time.&lt;/p&gt;


</description>
      <category>dotnet</category>
      <category>net</category>
      <category>netcore</category>
      <category>netframework</category>
    </item>
    <item>
      <title>Codify your coding standards with .editorconfig</title>
      <dc:creator>David Wengier</dc:creator>
      <pubDate>Mon, 23 Apr 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/davidwengier/codify-your-coding-standards-with-editorconfig-1knd</link>
      <guid>https://dev.to/davidwengier/codify-your-coding-standards-with-editorconfig-1knd</guid>
      <description>

&lt;p&gt;Every dev team has coding standards. Sometimes they’re established through convention, tradition or example, and maybe sometimes there is even a formal document outlining them (hopefully in a living format that can be updated!). No matter how it’s done though, nobody wants to be the bad guy in code reviews or pull requests and pull people up for what are usually minor infractions. At the same time, nobody wants to see a codebase be neglected, letting inconsistency creep in or readability wane.&lt;/p&gt;

&lt;p&gt;Visual Studio has many excellent rules and formatting options that allow it to be fully configured to match your coding standards and conventions, but in a team environment it can be a pain to keep everything in sync. There are “team settings file” options which work most of the time, but it’s not perfect and it still requires everyone to configure Visual Studio to use that shared file any time they join a team or reinstall their machine.&lt;/p&gt;

&lt;p&gt;Fortunately there is a way to enforce some coding standards at the tooling level without these concerns: Visual Studio 2017 now honours the configuration in a .editorconfig file, which overrides an individual developer’s settings and tells the IDE how to behave on a per-repository basis. The .editorconfig file is simply committed to the root of the repository and from then on it dictates things like indentation, formatting, style and naming rules. Not all IDEs support all of the same features but the list on &lt;a href="http://editorconfig.org/#download"&gt;the official site&lt;/a&gt; is certainly impressive.&lt;/p&gt;
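
&lt;p&gt;To give a feel for the format before diving into the .NET specifics, a minimal .editorconfig looks something like this (the values shown are illustrative, not recommendations):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# top-most EditorConfig file in the repository
root = true

# settings in this section apply to all C# files
[*.cs]
indent_style = space
indent_size = 4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;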

&lt;p&gt;In this post I’ll be talking about how to codify some specific .NET related rules for Visual Studio. For more detailed information the &lt;a href="https://docs.microsoft.com/en-us/visualstudio/ide/create-portable-custom-editor-options"&gt;official documentation&lt;/a&gt; is great, though I might be biased since it’s where I submitted my first ever PR to the docs project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Naming Rules
&lt;/h2&gt;

&lt;p&gt;Naming rules allow you to codify the standards around naming and casing of fields, properties, constants etc. in your codebase. Each naming rule needs a name, which is specified in lower case with underscores, a severity, and a style to apply. For example:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet_naming_rule.public_members_must_be_pascal.severity = error
dotnet_naming_rule.public_members_must_be_pascal.symbols = public_symbols
dotnet_naming_rule.public_members_must_be_pascal.style = pascal_style
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In this example &lt;code&gt;dotnet_naming_rule&lt;/code&gt; denotes that we’re defining part of a rule, &lt;code&gt;public_members_must_be_pascal&lt;/code&gt; is the name of our rule, and we’re going to apply it to symbols that match the &lt;code&gt;public_symbols&lt;/code&gt; naming symbols which we’ll define later. We want this rule to be enforced at all times so the &lt;code&gt;severity&lt;/code&gt; is &lt;code&gt;error&lt;/code&gt;, which means Visual Studio will treat violations the same as compiler errors. Lastly we’ve said that symbols matching this rule should use the style defined in &lt;code&gt;pascal_style&lt;/code&gt;, which is the name we will give to our style.&lt;/p&gt;

&lt;h2&gt;
  
  
  Naming Styles
&lt;/h2&gt;

&lt;p&gt;Naming styles define how a developer should format symbols that match any applied rules. Like naming rules they have a name, and they can then specify prefixes, suffixes, word separators and capitalization rules. In this case we simply need to define the capitalization rule of &lt;code&gt;pascal_case&lt;/code&gt; like so:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet_naming_style.pascal_style.capitalization = pascal_case
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Again &lt;code&gt;dotnet_naming_style&lt;/code&gt; means we’re defining a style and &lt;code&gt;pascal_style&lt;/code&gt; is the name of the style which we used in the rule.&lt;/p&gt;

&lt;h2&gt;
  
  
  Naming Symbols
&lt;/h2&gt;

&lt;p&gt;The final piece of the puzzle tells Visual Studio which symbols the rule should apply to. For our &lt;code&gt;public_symbols&lt;/code&gt; we need to specify the accessibility to be public, and that we want the rule to apply to properties, methods, fields, events and delegates. We could probably also add classes, structs and enums to this.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet_naming_symbols.public_symbols.applicable_kinds = property,method,field,event,delegate
dotnet_naming_symbols.public_symbols.applicable_accessibilities = public
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Naming symbols also allow you to specify &lt;code&gt;required_modifiers&lt;/code&gt; so that you can target static, readonly, async or const symbols differently.&lt;/p&gt;
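
&lt;p&gt;For example, to define a set of naming symbols that targets only constants (the name &lt;code&gt;constant_symbols&lt;/code&gt; is my own - call it whatever suits your standards):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet_naming_symbols.constant_symbols.applicable_kinds = field
dotnet_naming_symbols.constant_symbols.applicable_accessibilities = *
dotnet_naming_symbols.constant_symbols.required_modifiers = const
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;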

&lt;h2&gt;
  
  
  Putting it all together
&lt;/h2&gt;

&lt;p&gt;Those three elements combined are what make a rule fully codified, and mean Visual Studio can be the bad guy when it comes to enforcing coding standards. No more arguments about whether constants are SHOUTING_AT_YOU or ABitMoreSubtle, and you can end the age old battle between &lt;code&gt;_fields&lt;/code&gt; and &lt;code&gt;m_fields&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Additionally naming symbols and styles can be used by multiple naming rules so you only need to define something like &lt;code&gt;pascal_style&lt;/code&gt; once to apply a pascal case capitalization convention to a few different things.&lt;/p&gt;
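
&lt;p&gt;Putting that reuse into practice, a second rule can simply point at the same style (here &lt;code&gt;constant_symbols&lt;/code&gt; is a hypothetical naming symbols definition, along the lines of the public one above):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet_naming_rule.constants_must_be_pascal.severity = error
dotnet_naming_rule.constants_must_be_pascal.symbols = constant_symbols
dotnet_naming_rule.constants_must_be_pascal.style = pascal_style
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;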

&lt;p&gt;Be warned, however: if you’re introducing this to a legacy code base you need to tread carefully, and probably just take the hit and fix all of the issues it raises in the same commit. Even if you set the severity to &lt;code&gt;warning&lt;/code&gt; or &lt;code&gt;suggestion&lt;/code&gt; you’ll potentially be filling up the error window with issues, and it’s never a good idea to give anyone a reason to ignore things in the error window.&lt;/p&gt;

&lt;p&gt;The .editorconfig file can also be used to specify indentation styles, brace usage and style, &lt;code&gt;var&lt;/code&gt; usage and even whether &lt;code&gt;this.&lt;/code&gt; is required, or where System using statements should go. If you can spend the time to fill out all of the possibilities it makes life much easier in a team, as your codebase is immune to the quirks of individual dev machine configurations, and in open source projects it ensures contributors always match the style of the project they’re contributing to.&lt;/p&gt;
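
&lt;p&gt;A few of those options as they’d appear in the file (the severities and values here are illustrative only):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;csharp_style_var_for_built_in_types = false:suggestion
dotnet_style_qualification_for_field = false:warning
dotnet_sort_system_directives_first = true
csharp_new_line_before_open_brace = all
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;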

&lt;p&gt;A full example of the .editorconfig file I’m currently using for my personal projects can be found in the DbUpgrader project &lt;a href="https://github.com/davidwengier/dbupgrader/blob/master/.editorconfig"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gotchas
&lt;/h3&gt;

&lt;p&gt;Some gotchas with setting up editor config files that I’ve found so far:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you specify that constants should be pascal case then VS won’t error when a constant is all caps, since that’s still valid pascal case.&lt;/li&gt;
&lt;li&gt;Ordering of rules in files seems to be inconsistent, so rules around private fields and constants sometimes overlap for private constants, and VS will think you’re doing the wrong thing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will update the post if I find others.&lt;/p&gt;


</description>
      <category>codingstandards</category>
      <category>style</category>
      <category>editorconfig</category>
      <category>visualstudio</category>
    </item>
    <item>
      <title>Reviewable Stored Procedures and Views with DbUp</title>
      <dc:creator>David Wengier</dc:creator>
      <pubDate>Mon, 16 Apr 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/davidwengier/reviewable-stored-procedures-and-views-with-dbup-295k</link>
      <guid>https://dev.to/davidwengier/reviewable-stored-procedures-and-views-with-dbup-295k</guid>
      <description>&lt;p&gt;We use &lt;a href="https://dbup.github.io/"&gt;DbUp&lt;/a&gt; at work to manage database changes and migrations and for the most part it works fine as long as you have a known schema that you’re coming from. The downside of the current implementation is that changes to stored procedure definitions are not easily reviewable in source control. Fortunately enabling this workflow with DbUp is relatively straightforward.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project format
&lt;/h2&gt;

&lt;p&gt;Our DbUp project looks fairly standard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HhvfvxyJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/uxtz7n0o27vpbipnr2gr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HhvfvxyJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/uxtz7n0o27vpbipnr2gr.png" alt="DbUp Project"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DbUp takes care of running the scripts and making sure none are run more than once via its built-in journaling system, a record of which is also stored in the database. The problem is that those “Alter Procedure” scripts all simply contain a full copy of the stored procedure, even if those half-dozen files are all changing the same stored procedure.&lt;/p&gt;

&lt;p&gt;The first step in enabling reviewable stored procs and views is to create a new folder for scripts that will be unjournaled, so they are always run whenever DbUp is run. I’m going to call this StoredProcs for now, as that’s the first thing I’ll be moving across.&lt;/p&gt;

&lt;p&gt;The basic idea is that you use that folder for SQL scripts that contain a simple DROP and CREATE script for each stored procedure. DbUp runs these scripts every time, essentially making sure the database definition is always correct, and allowing developers to make changes to the existing scripts in the source repository, rather than having to create new ones all the time, as with normal migration scripts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZxSdaXDs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/44izwnvjh4yk0chszznz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZxSdaXDs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/44izwnvjh4yk0chszznz.png" alt="DROP and CREATE Script"&gt;&lt;/a&gt;&lt;/p&gt;
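
&lt;p&gt;The shape of each script is something like the following (a hypothetical proc in SQL Server syntax, where the IF guards against the proc not existing yet):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IF OBJECT_ID('dbo.GetCustomerOrders', 'P') IS NOT NULL
    DROP PROCEDURE dbo.GetCustomerOrders
GO

CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
END
GO
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;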

&lt;h2&gt;
  
  
  The DbUp script runner
&lt;/h2&gt;

&lt;p&gt;The existing DbUp script runner looks fairly basic, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var upgrader = DeployChanges.To
               .SqlDatabase(connectionString)
               .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
               .LogToConsole()
               .JournalToSqlTable("dbo", "SchemaVersions")
               .WithTransaction()
               .Build();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to add a new upgrader to this script, and instead of storing the journal in a table we will use the &lt;code&gt;NullJournal&lt;/code&gt; that is built into DbUp.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var storedProcUpgrader = DeployChanges.To
               .SqlDatabase(connectionString)
               .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
               .LogToConsole()
               .JournalTo(new NullJournal())
               .WithTransaction()
               .Build();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last piece of the puzzle is to put a filter onto each upgrader so each one only loads the scripts we want. The final code looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static int Main()
{
    var connectionString = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;

    var upgrader = DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly(), s =&amp;gt; !IsStoredProc(s))
        .LogToConsole()
        .JournalToSqlTable("dbo", "SchemaVersions")
        .WithTransaction()
        .Build();

    var storedProcUpgrader = DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly(), s =&amp;gt; IsStoredProc(s))
        .LogToConsole()
        .JournalTo(new NullJournal())
        .WithTransaction()
        .Build();

    // migrate the database data, and table schema changes first
    if (!UpgradeAndLog(upgrader))
    {
        return 1;
    }
    // now we can change stored procs that rely on the adjusted schema
    if (!UpgradeAndLog(storedProcUpgrader))
    {
        return 1;
    }

    Console.ForegroundColor = ConsoleColor.Green;
    Console.WriteLine("Success!");
    Console.ResetColor();
    return 0;
}

private static bool UpgradeAndLog(DbUp.Engine.UpgradeEngine upgrader)
{
    var result = upgrader.PerformUpgrade();
    if (!result.Successful)
    {
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine(result.Error);
        Console.ResetColor();
        return false;
    }
    return true;
}

private static bool IsStoredProc(string scriptName)
{
    return (scriptName.StartsWith("My.NameSpace.StoredProcs.", StringComparison.OrdinalIgnoreCase));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Get reviewing
&lt;/h2&gt;

&lt;p&gt;Every change to the stored proc or view definition script will be just that - a change - so whatever source repository diff process you use will show only what has been done. Additionally you always have the current up-to-date definitions of your scripts in your source repository, so you’re one step closer to not having to worry about having a known good starting point for your database, at least from the schema point of view.&lt;/p&gt;

&lt;p&gt;So far we’re rolling this out on a change-by-change basis, but there is no reason all of the relevant parts of the database couldn’t be scripted to seed this effort, giving you a known baseline.&lt;/p&gt;

&lt;p&gt;This same theory applies to Views or Functions, or anything else where a migration script would need to contain the entire definition, and dropping the object is not a destructive operation.&lt;/p&gt;

</description>
      <category>dbup</category>
      <category>review</category>
    </item>
    <item>
      <title>I don’t want to remote into production</title>
      <dc:creator>David Wengier</dc:creator>
      <pubDate>Mon, 09 Apr 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/davidwengier/i-dont-want-to-remote-into-production-55i1</link>
      <guid>https://dev.to/davidwengier/i-dont-want-to-remote-into-production-55i1</guid>
      <description>&lt;p&gt;A friend of mine tweeted this article, and excellent summary today, about a recent production outage at Travis CI:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A read-only prod is a slightly safer prod, if you don't have access to truncate all tables, then it is less likely to happen :P&lt;a href="https://t.co/7urWybtZA8"&gt;https://t.co/7urWybtZA8&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— NullOps (@NullOpsio) &lt;a href="https://twitter.com/NullOpsio/status/983237339634810880?ref_src=twsrc%5Etfw"&gt;April 9, 2018&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I personally feel even more strongly about this: I don’t want access to production, read only or otherwise. I don’t even want to install the database management software, be it SQL Server Management Studio or a MySQL client, if I can help it.&lt;/p&gt;

&lt;p&gt;I started a new job today, and during the onboarding it was mentioned that production server access was locked down by IP, so if needed it would have to be done via a VPN if I wasn’t in the office. Not to speak ill of my new employer - let me be clear this was only mentioned as a “just in case” option, as part of an answer to someone else’s direct question, not as regular onboarding information that people need to know. In fact everyone there knows that it’s not a good idea and something that should be removed in future, and helping to do that is a large part of my role, but I digress.&lt;/p&gt;

&lt;p&gt;I said flat out: I don’t want access.&lt;/p&gt;

&lt;h2&gt;
  
  
  I don’t trust myself, and you should too
&lt;/h2&gt;

&lt;p&gt;Eventually everyone has that moment where they do the wrong thing. Perhaps they run an UPDATE statement without a WHERE clause, or they’re connected to the wrong environment when they tweak some configuration value. Most people, you hope, only make these sorts of mistakes once, and I’ve made mine (quite a few years ago, don’t worry!) and I don’t want to do it again. The easiest way I can think of to guarantee that is to simply make it impossible.&lt;/p&gt;

&lt;p&gt;Yes, I can double check everything I do. Yes, I can get people to check over my shoulder, or review. Yes, I can work through checklists with well documented steps.&lt;/p&gt;

&lt;p&gt;Or I can just make incorrect actions impossible. If I can’t remote into a server, I can’t remote into the wrong server. If I can’t connect to a database, I can’t forget part of a script or statement.&lt;/p&gt;

&lt;h2&gt;
  
  
  If you want something fixed, make it a problem
&lt;/h2&gt;

&lt;p&gt;The ideal situation for production environments (or indeed most other environments) is that their setup and configuration is completely automated and needs no manual work. By making manual work impossible you force people and teams to do the necessary work to create tooling to enable that. There can be no shortcuts taken and temptation is removed by virtue of a firm wall between developers and where their code is deployed. If I need to make a change to a database schema I want the only option to be to create a change script, or similar. I would apply that script to my dev environment and in time to testing and staging environments.&lt;/p&gt;

&lt;p&gt;If the only possible path is automated tooling then by the time a deployment to production is done not only are you guaranteed that the tooling is in place, you’ve also ideally tested its execution a few times in various environments.&lt;/p&gt;

&lt;p&gt;It’s always tempting for developers to leave this sort of effort to the end, but that sort of thinking is what leads to manual processes lasting years as workarounds, because other work is always higher priority. On the other hand, if a developer has no option for solving their annoyance other than doing the work right the first time, rest assured they will do that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data is cheating too
&lt;/h2&gt;

&lt;p&gt;If you want to take this to the logical extreme, which I do, I also don’t want to modify any data in the database directly. If I’m building out a new feature that requires data in a specific shape, either to run or simply for me to manually test with, then I would rather build the seeding scripts, or ideally the data management front end, first. The seeding scripts can help with functional/integration tests, and the data management work presumably solves some future need in the product (and if that’s not the case then don’t do it! But also consider why that data is needed).&lt;/p&gt;

&lt;h2&gt;
  
  
  Codified knowledge is shared knowledge
&lt;/h2&gt;

&lt;p&gt;The other advantage of creating tooling, scripts or otherwise automating things is that anything that is codified and committed to a source repository is something that other people can reason about, read and hopefully understand. There is nothing that reduces &lt;a href="https://en.wikipedia.org/wiki/Bus_factor"&gt;Bus Factor&lt;/a&gt; like having a series of scripts and tools that anyone can pick up.&lt;/p&gt;

&lt;p&gt;Essentially, avoid making manual work a necessity as much as is humanly possible. Of course, be pragmatic about things; in particular there is nothing wrong with doing manual work once to get a feel for it before automating, but I am loath to do something twice or three times. Additionally, some manual work can be fun to opt in to, and I specifically avoid using tools like Ninite or Chocolatey for this reason, as I simply enjoy the process of building a new machine.&lt;/p&gt;

&lt;p&gt;But I don’t want to touch the production environment.&lt;/p&gt;

</description>
      <category>production</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
