<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jim Borden</title>
    <description>The latest articles on DEV Community by Jim Borden (@borrrden).</description>
    <link>https://dev.to/borrrden</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F67970%2Fd21d741b-2bf0-4f48-914e-6179275ea7b9.png</url>
      <title>DEV Community: Jim Borden</title>
      <link>https://dev.to/borrrden</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/borrrden"/>
    <language>en</language>
    <item>
      <title>Learning About Cryptography through Pain</title>
      <dc:creator>Jim Borden</dc:creator>
      <pubDate>Sun, 12 Jul 2020 13:43:27 +0000</pubDate>
      <link>https://dev.to/borrrden/learning-about-cryptography-through-pain-13ll</link>
      <guid>https://dev.to/borrrden/learning-about-cryptography-through-pain-13ll</guid>
      <description>&lt;p&gt;Due to a history that could be an entire series on its own, I recently found myself with the following task:  A C++ implementation that would be able to perform crypto and X509 operations using keys provided by something that wrapped the implementation (in this case C# and Java).  I have been taking on the C# challenge and I realized that when things go wrong in crypto, you are usually only left with a bunch of garbage bytes and no clue as to how you got there.  Here is what I've realized about the shortcomings of .NET Standard 2.0 and its comparable platforms.  There seems to have been improvements made for 2.1 and .NET 5, etc, but for now I will point out some things I've learned after banging the wall for a while.&lt;/p&gt;

&lt;p&gt;So the operation I have been focused on is creating a valid X509 certificate for use with a server that uses TLS.  Currently the library is focused on the RSA algorithm which, for those unfamiliar, is an asymmetric algorithm, meaning that two unique keys are used.  One is kept secret, and the other is distributed freely.  If a piece of data is encoded with one of the keys, it can be decoded with the other.  The logic for performing all the actual encoding and construction of the certificate was done in C++, but the key operations were delegated out to the platform wrapping the implementation (C# here).  The operations that need to be performed are as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Extract the public key data in DER format&lt;/li&gt;
&lt;li&gt;Sign using the private key (This is so that the private key material does not need to be provided to the C++ implementation, and if possible can be generated in a way that never exposes it to the user)&lt;/li&gt;
&lt;li&gt;Decrypt using the private key (Same reasoning as 2)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Aside from those operations, the rest can be performed in C++.  So let's begin the journey.  The first operation, extracting the public key, immediately gave me problems.&lt;/p&gt;

&lt;p&gt;The current class to use for these operations seems to be &lt;code&gt;RSACryptoServiceProvider&lt;/code&gt;, and it has no method to export to DER format.  There also doesn't appear to be any class that can encode to DER, so this had to be written by hand.  ASN.1 is quite confusing to work with, but I was able to write it entirely thanks to an answer on &lt;a href="https://stackoverflow.com/a/28407693/1155387"&gt;Stack Overflow&lt;/a&gt;.  There are export methods on the class, but they produce only proprietary encodings for use with the Win32 crypto APIs.  &lt;/p&gt;

&lt;p&gt;The bulk of the pain came from the sign operation.  The C++ library actually does something kind of funky first as a sanity check:  it sends a sentinel value to be signed, and then attempts to verify it to make sure that the associated public key is actually capable of working with the signed data.  That sounds simple enough, but to explain why it is not, it is necessary to step back and give a brief primer on how signatures work in RSA.&lt;/p&gt;

&lt;p&gt;Firstly, RSA cannot encode anything that is longer than its key.  It also cannot encode anything that is shorter than its key.  This is because it's actually not looking at the data as a set of bytes but rather a gigantic number that is represented by those bytes.  It then performs some clever math on that number, and the encoded bytes are the result of representing &lt;em&gt;THAT&lt;/em&gt; number.  To reverse this, you take the resulting number, and do the same clever math but with the parameters of the other key and you will arrive back at the same gigantic number, and thus the original bytes.  &lt;/p&gt;

&lt;p&gt;The primitive operation of RSA signing is the same as RSA decryption (that does &lt;em&gt;not&lt;/em&gt; mean that they are the same thing, but they use the same encoding process at some point).  But that means that you can only encode a relatively small amount of data (256 bytes, for example, using a 2048 bit key; actually less because of padding, which is coming up in a minute).  So what do you do when you want to sign something bigger?  The answer lies in the steps that come BEFORE the encoding of the data using RSA.  Instead of encoding the data itself, a hash of the data is encoded.  A number of hashing algorithms are supported for use with RSA, and SHA256 is a commonly used one.  SHA256 is a 256 bit hashing algorithm, which means that it will reduce any arbitrarily large set of data into 32 bytes in a way that is non-reversible and, for practical purposes, unique (it's not strictly unique, but that's part of the game:  it is unique enough that finding two inputs with the same hash is exceptionally hard, and once a collision is found it is usually time to move on to a newer algorithm).  &lt;/p&gt;

&lt;p&gt;But wait, now the data is 32 bytes and it is too small!  This is no good either.  The answer is to fill up the other 224 bytes with padding data.  The padding has to be defined and agreed upon, otherwise it will be impossible to tell what is padding and what is not.  But now that things are the correct length, they can be encoded and decoded properly.&lt;/p&gt;
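&lt;p&gt;As a rough illustration of the idea (a hand-rolled sketch, &lt;em&gt;not&lt;/em&gt; the library's actual code), the deterministic padding used for PKCS#1 v1.5 signatures is a &lt;code&gt;0x00 0x01&lt;/code&gt; header, a run of &lt;code&gt;0xFF&lt;/code&gt; filler bytes, and a &lt;code&gt;0x00&lt;/code&gt; separator before the payload:&lt;/p&gt;

```csharp
using System;

static class Pkcs1
{
    // Sketch of EMSA-PKCS1-v1_5 padding: 0x00 0x01 [0xFF...] 0x00 [payload].
    // The real scheme also requires at least 8 filler bytes.
    public static byte[] Pad(byte[] payload, int keySizeInBytes)
    {
        var padded = new byte[keySizeInBytes];
        padded[0] = 0x00;
        padded[1] = 0x01;                // block type 1 (private key operation)
        int payloadStart = keySizeInBytes - payload.Length;
        for (int i = payloadStart - 2; i >= 2; i--)
        {
            padded[i] = 0xFF;            // agreed-upon filler
        }
        padded[payloadStart - 1] = 0x00; // separator marks where filler ends
        Array.Copy(payload, 0, padded, payloadStart, payload.Length);
        return padded;
    }
}
```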

&lt;p&gt;This is where the first puzzling problem comes in.  For the reasons described above, when using RSA you need to say "I am using RSA with X hashing algorithm."  However, the first thing that came back from the C++ library was a request to encode something with RSA and using &lt;em&gt;no hash algorithm&lt;/em&gt;.  I dislike reading long RFCs and I'm sure this is in there somewhere, but apparently this means to skip the hashing step and encode the received data directly, because it is already no longer than the maximum length that can be encoded.  &lt;/p&gt;

&lt;p&gt;Very well, that just means skipping straight to the encoding portion, which as I mentioned before is the same as the decrypt function.  So I tried calling &lt;code&gt;Decrypt&lt;/code&gt; on the &lt;code&gt;RSACryptoServiceProvider&lt;/code&gt; class with the data I received.  Nope, it threw an exception, probably because it was expecting to get readable data on the other side (it can tell whether the data is valid because of the padding scheme used).  It is &lt;em&gt;decrypting&lt;/em&gt; something, after all.  That's disappointing.  Apple's signature API has a mode to sign without a hash function, and Java has the &lt;code&gt;NONEwithRSA&lt;/code&gt; mode in its &lt;code&gt;Signature.getInstance&lt;/code&gt; method, but C# has apparently left this out, so there is no way to do this with &lt;code&gt;SignData&lt;/code&gt; or &lt;code&gt;SignHash&lt;/code&gt; either.  &lt;/p&gt;

&lt;p&gt;Furthermore, the &lt;code&gt;Decrypt&lt;/code&gt; method forces you to use a padding mode, meaning that it can only accept &lt;em&gt;unpadded&lt;/em&gt; data.  If I try to pass in data that I manually padded, I get an exception again because the data is too long to encode (since it is trying to re-add the padding).  However, I began to wonder how complex the raw RSA encoding operation actually is.  It's surprisingly simple.  The bulk of the work comes from calculating the key parameters, but using the parameters is another story.  Excellent!&lt;/p&gt;

&lt;p&gt;So there are three main numbers involved with encoding and decoding:  The modulus (n), the private exponent (d) and the public exponent (e).  There is a lot of complex math to get these numbers, but the result is that any number &lt;code&gt;m&lt;/code&gt; can be transformed into &lt;code&gt;c&lt;/code&gt; using &lt;code&gt;m ^ d mod n&lt;/code&gt; and then transformed back to the original &lt;code&gt;m&lt;/code&gt; using &lt;code&gt;c ^ e mod n&lt;/code&gt;.  So I just have to repeat the first part of this operation in my "raw signature" method.  &lt;/p&gt;
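&lt;p&gt;To make the math concrete, here is a toy round trip using the classic textbook parameters (p = 61, q = 53; obviously nothing like a real 2048 bit key):&lt;/p&gt;

```csharp
using System.Numerics;

static class ToyRsa
{
    // Sign with the private exponent, verify with the public one,
    // and check that we get the original number back.
    public static bool SignAndVerify(BigInteger m)
    {
        BigInteger n = 3233;   // modulus: 61 * 53
        BigInteger e = 17;     // public exponent
        BigInteger d = 2753;   // private exponent

        BigInteger s = BigInteger.ModPow(m, d, n); // sign:   m ^ d mod n
        BigInteger v = BigInteger.ModPow(s, e, n); // verify: s ^ e mod n
        return v == m;                             // round-trips to the original
    }
}
```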

&lt;p&gt;Since these numbers have to be very very &lt;em&gt;VERY&lt;/em&gt; large in order to provide security, normal integer primitives won't cut it.  &lt;code&gt;long&lt;/code&gt; is, after all, a 64 bit number and it is recommended to have a number that is at least 2048 bits in order to be secure.  The way to work with such numbers in C# is to use the &lt;code&gt;BigInteger&lt;/code&gt; class.  It even has a constructor that accepts &lt;code&gt;byte[]&lt;/code&gt; directly, great!&lt;/p&gt;

&lt;p&gt;So I manually padded the bytes I received, and passed them into a &lt;code&gt;BigInteger&lt;/code&gt;, which conveniently even has a &lt;code&gt;ModPow&lt;/code&gt; method for doing exactly what I described above.  So I did that using the parameters from the key and was met with instant success.&lt;/p&gt;

&lt;p&gt;Just kidding, it didn't work at all!  Often I would get exceptions saying that the exponent needed to be a positive number.  Well duh, why isn't it?  It turns out the answer lies in our old friend endianness.  As a review, endianness determines the order in which the bytes of a multi-byte numeric value are stored:  little endian puts the least significant byte first, and big endian puts the most significant byte first.  The bytes stored in the parameters of the key are big endian, but &lt;code&gt;BigInteger&lt;/code&gt; wants little endian.  This means that the bytes have to be reversed before being used (.NET Core 3.0, and perhaps .NET Standard 2.1, fixed this by adding an endianness option to &lt;code&gt;BigInteger&lt;/code&gt;).  Furthermore, once the math is done, the resulting bytes have to be reversed a second time to get them back into big endian. &lt;/p&gt;

&lt;p&gt;Ready to work now, right?  YES, well...sometimes.  &lt;code&gt;BigInteger&lt;/code&gt; not only works with little endian, it works with &lt;em&gt;signed&lt;/em&gt; little endian, meaning that it will interpret the bytes using two's complement (a method for representing negative numbers in binary).  To stop this, a zero byte needs to be appended to the end whenever the last byte is &lt;code&gt;0x80&lt;/code&gt; or higher, to avoid the value being falsely interpreted as a negative number.  Furthermore, on the way back out this zero might still be there, so before sending the result to C++ the zero needs to be trimmed off before reversing the bytes.  &lt;/p&gt;
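&lt;p&gt;Putting the reversal and the sign byte handling together, a sketch of the two conversions (hypothetical helper names, nothing from the framework) might look like this:&lt;/p&gt;

```csharp
using System.Linq;
using System.Numerics;

static class RawRsaBytes
{
    // Key parameters arrive as big endian unsigned bytes, but the
    // BigInteger constructor wants signed little endian.
    public static BigInteger FromBigEndian(byte[] bigEndian)
    {
        var le = bigEndian.Reverse().ToArray();
        // Append a zero if the most significant byte is 0x80 or higher,
        // so two's complement doesn't misread the value as negative.
        if (le[le.Length - 1] >= 0x80)
        {
            le = le.Concat(new byte[] { 0x00 }).ToArray();
        }
        return new BigInteger(le);
    }

    // Going back out: drop the sign byte if present, then reverse again.
    public static byte[] ToBigEndian(BigInteger value)
    {
        var le = value.ToByteArray();
        if (le.Length > 1)
        {
            if (le[le.Length - 1] == 0x00)
            {
                le = le.Take(le.Length - 1).ToArray();
            }
        }
        return le.Reverse().ToArray();
    }
}
```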

&lt;p&gt;Finally, after all that, the weird "no hash algorithm" method of signing was working.  However, the resulting certificates were all invalid.  I was still using another path for signing when an actual algorithm was specified, because that is what is supported in C#.  So what was going wrong?  Come to find out that there are two methods on &lt;code&gt;RSACryptoServiceProvider&lt;/code&gt;:  &lt;code&gt;SignData&lt;/code&gt; and &lt;code&gt;SignHash&lt;/code&gt;.  The former will hash the data first before signing it, and the latter will just sign it directly (the latter still takes a hashing algorithm as an argument because the algorithm's identifier gets embedded in the padded block as part of an ASN.1 DigestInfo structure, so the method needs to know which one produced the hash).  The C++ library was sending me hashes to sign, not arbitrary data, and I was using the wrong one!  Switching to the correct one (&lt;code&gt;SignHash&lt;/code&gt;) fixed that, but it took me a while to figure out what the difference was.&lt;/p&gt;

&lt;p&gt;Step 3 had no problems at all, as a call to &lt;code&gt;Decrypt&lt;/code&gt; was all that was needed but hours and hours got spent on steps 1 and 2.  I'm hoping that this information can help someone who might be in a similar position at some point.  Happy cryptoing!&lt;/p&gt;

</description>
      <category>c</category>
      <category>interop</category>
      <category>cryptography</category>
      <category>rsa</category>
    </item>
    <item>
      <title>Git commit info in your build</title>
      <dc:creator>Jim Borden</dc:creator>
      <pubDate>Thu, 07 Mar 2019 05:00:12 +0000</pubDate>
      <link>https://dev.to/borrrden/git-commit-info-in-your-build-390l</link>
      <guid>https://dev.to/borrrden/git-commit-info-in-your-build-390l</guid>
      <description>&lt;p&gt;For quite a long time I've been generating git commit info for use inside my assemblies at work but I've been doing it in a roundabout fragile way.  I found a library that includes an MSBuild task to do it for me called &lt;a href="https://github.com/kzu/GitInfo"&gt;GitInfo&lt;/a&gt;.  I was skeptical at first but it was literally the easiest thing I could possibly do.&lt;/p&gt;

&lt;p&gt;All you need to do to use it is add the NuGet package.  Behind the scenes it generates a new MSBuild target which adds an autogenerated file based on a template, much like the files that get generated when you have XAML files in your project (&lt;code&gt;*.g.cs&lt;/code&gt;).  It's nothing too fancy, but it takes care of a lot of annoying things for you, like locating the git executable and running various git commands to extract useful information.  The generated files wind up in your output directory and don't interfere with your git staging area, but because they are compiled source, all you need to do is call into a static class to get all the juicy git info.  Personally, I use &lt;code&gt;ThisAssembly.Git.Commit&lt;/code&gt;, which is the short commit hash.  However, there is a whole lot more to play with.&lt;/p&gt;
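&lt;p&gt;As a quick sketch of what consuming it looks like (the &lt;code&gt;ThisAssembly.Git&lt;/code&gt; class is normally generated at build time; here it is stubbed out so the snippet stands alone):&lt;/p&gt;

```csharp
// In a real project the GitInfo package generates ThisAssembly.Git at
// build time; this stub stands in for it so the snippet compiles on its own.
static class ThisAssembly
{
    public static class Git
    {
        public const string Commit = "0a1b2c3"; // short commit hash (stubbed)
    }
}

static class VersionBanner
{
    // Bake the commit into a version string at compile time,
    // with no runtime git calls needed.
    public static string Build()
    {
        return "MyApp 1.0.0 (" + ThisAssembly.Git.Commit + ")";
    }
}
```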

&lt;p&gt;Check it out!&lt;/p&gt;

</description>
      <category>csharp</category>
      <category>dotnet</category>
      <category>git</category>
      <category>msbuild</category>
    </item>
    <item>
      <title>Multi Targeting</title>
      <dc:creator>Jim Borden</dc:creator>
      <pubDate>Thu, 07 Mar 2019 04:54:39 +0000</pubDate>
      <link>https://dev.to/borrrden/multi-targeting-1p5</link>
      <guid>https://dev.to/borrrden/multi-targeting-1p5</guid>
      <description>&lt;p&gt;Couchbase Lite .NET runs in a lot of different places, that means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;.NET Framework on Windows&lt;/li&gt;
&lt;li&gt;.NET Core on Mac, Windows, Linux&lt;/li&gt;
&lt;li&gt;Xamarin iOS&lt;/li&gt;
&lt;li&gt;Xamarin Android&lt;/li&gt;
&lt;li&gt;UWP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Currently, the structure is that the entire library is a .NET Standard 2.0 library with some platform specific features provided via dependency injection.  However, there have been some kinks with this approach, so I decided to take a look at what I would have to do in order to make it a multi target project instead.&lt;/p&gt;

&lt;p&gt;First I will start off with a definition of what multi targeting is.  Until recently, every .NET project was based on a single runtime, such as .NET Framework or Xamarin iOS, and the build artifacts were for that runtime.  .NET Standard was introduced as a way to provide one virtual target that actually runs on a bunch of different runtimes, but multi targeting takes things one step further.  Instead of one target framework, the project targets many, and the build compiles outputs for each one of them.  So if you had .NET Framework &lt;em&gt;and&lt;/em&gt; Xamarin iOS, you would get both outputs from one project.&lt;/p&gt;

&lt;p&gt;Does this sound like the old PCL days?  Well, that seems to be where it got its cue from.  Back in the PCL days, library authors would use a trick called "bait and switch" in which they wrote a shell interface to be referenced via the portable library, and relied on an implementation detail of NuGet that would cause it to choose the most specific implementation for a given runtime.  That way, if a "portable" and a ".NET Framework" version were in the same package, .NET Framework would prefer the latter.  If an unsupported platform came along, it would simply throw exceptions, since that was what was actually generated in the "bait" assembly. &lt;/p&gt;

&lt;p&gt;This approach was pretty well received and has now pretty much become official.  The multi target project is a tool to help with this in that it can generate many platforms at once to package into a NuGet package.  So let's get into exactly what it takes to get started.&lt;/p&gt;

&lt;p&gt;STEP ONE:  Create a &lt;code&gt;global.json&lt;/code&gt; file next to your solution with the following contents.  The version should be the highest one available (which at the time of writing is 1.6.68):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;msbuild-sdks&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;MSBuild.Sdk.Extras&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1.6.68&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This step enables you to create a new project type that builds on top of the current .NET Core SDK.  This is needed because .NET Core does not, and may never, support Xamarin, UWP, and others out of the box.  This SDK was written by &lt;a href="https://rd.microsoft.com/en-us/oren-novotny"&gt;Oren Novotny&lt;/a&gt;, who authors many amazing things in the .NET universe, so you can trust that it won't just fade away.&lt;/p&gt;

&lt;p&gt;STEP TWO:  With that in place, you need to hand edit your &lt;code&gt;.csproj&lt;/code&gt; file to use this SDK instead of the default one.  Just replace the first line&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;
&lt;span class="nt"&gt;&amp;lt;Project&lt;/span&gt; &lt;span class="na"&gt;Sdk=&lt;/span&gt;&lt;span class="s"&gt;"Microsoft.NET.Sdk"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;Project&lt;/span&gt; &lt;span class="na"&gt;Sdk=&lt;/span&gt;&lt;span class="s"&gt;"MSBuild.Sdk.Extras"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;STEP THREE:  The final step, find your &lt;code&gt;&amp;lt;TargetFramework&amp;gt;&lt;/code&gt; entry, and change it to a semicolon separated &lt;code&gt;&amp;lt;TargetFrameworks&amp;gt;&lt;/code&gt; entry (plural).  For example&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;TargetFramework&amp;gt;&lt;/span&gt;netstandard2.0&lt;span class="nt"&gt;&amp;lt;/TargetFramework&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;becomes&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt; &lt;span class="nt"&gt;&amp;lt;TargetFrameworks&amp;gt;&lt;/span&gt;netstandard2.0;netcoreapp2.0;net461;uap10.0.16299;MonoAndroid80;Xamarin.iOS10&lt;span class="nt"&gt;&amp;lt;/TargetFrameworks&amp;gt;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That's it!  Now when you compile, you will find that your bin/Debug or bin/Release folder contains several entries, one for each of the frameworks you are targeting.  You can even use platform specific code, but you need to either include it only when compiling for the framework that supports those functions (more advanced, and outside the scope of this article) or guard the source with &lt;code&gt;#if&lt;/code&gt; blocks.  As a hint, the following symbols will be automatically defined for each of the target frameworks, in order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;NETSTANDARD2_0&lt;/code&gt; and &lt;code&gt;NETSTANDARD&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NETCOREAPP2_0&lt;/code&gt; and &lt;code&gt;NETCOREAPP&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NETFRAMEWORK&lt;/code&gt; and &lt;code&gt;NET461&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;UAP_10_0_16299&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;__ANDROID__&lt;/code&gt; and a ton of Android API level specific symbols&lt;/li&gt;
&lt;li&gt;&lt;code&gt;__IOS__&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
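&lt;p&gt;For example, a sketch of guarding platform specific code with those symbols (a made-up method, just to show the shape):&lt;/p&gt;

```csharp
static class PlatformInfo
{
    // Each target framework compiles its own branch of this method.
    public static string Describe()
    {
#if __IOS__
        return "Xamarin iOS";
#elif __ANDROID__
        return "Xamarin Android";
#elif NETFRAMEWORK
        return ".NET Framework";
#elif NETCOREAPP
        return ".NET Core";
#else
        return ".NET Standard / other";
#endif
    }
}
```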

&lt;p&gt;Happy cross platforming!&lt;/p&gt;

</description>
      <category>csharp</category>
      <category>dotnet</category>
      <category>visualstudio</category>
    </item>
    <item>
      <title>What's new in C# 7.3?</title>
      <dc:creator>Jim Borden</dc:creator>
      <pubDate>Thu, 26 Apr 2018 02:25:13 +0000</pubDate>
      <link>https://dev.to/borrrden/whats-new-in-c-73-26fk</link>
      <guid>https://dev.to/borrrden/whats-new-in-c-73-26fk</guid>
      <description>&lt;p&gt;Visual Studio 15.7 Preview 3 has shipped initial support for some C# 7.3 features.  Let's see what they are!&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;System.Enum&lt;/code&gt;, &lt;code&gt;System.Delegate&lt;/code&gt; and &lt;code&gt;unmanaged&lt;/code&gt; constraints.
&lt;/h3&gt;

&lt;p&gt;Now with generic functions you can add more control over the types you pass in.  More specifically, you can specify that they must be &lt;code&gt;enum&lt;/code&gt; types, &lt;code&gt;delegate&lt;/code&gt; types, or "blittable" types.  The last one is a bit involved, but it means a type that consists only of certain predefined primitive types (such as &lt;code&gt;int&lt;/code&gt; or &lt;code&gt;UIntPtr&lt;/code&gt;), or structs containing only such types.  "Blittable" means it can be sent as-is over the managed-unmanaged boundary to native code because it holds no references to the managed heap.  This means you have the ability to do something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="n"&gt;Hash&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt; &lt;span class="k"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;unmanaged&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fixed&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;*&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
        &lt;span class="c1"&gt;// Do stuff...&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'm particularly excited about this one because I've had to use a lot of workarounds to be able to make helper methods that work with "pointer types."&lt;/p&gt;

&lt;h3&gt;
  
  
  Ref local re-assignment
&lt;/h3&gt;

&lt;p&gt;This is just a small enhancement to allow you to assign &lt;code&gt;ref&lt;/code&gt; type variables / parameters to other variables the way you do normal ones.  I think the following code is an example (off the top of my head)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;DoStuff&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;ref&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;parameter&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Now otherRef is also a reference, modifications will &lt;/span&gt;
    &lt;span class="c1"&gt;// propagate back&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;otherRef&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;ref&lt;/span&gt; &lt;span class="n"&gt;parameter&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;// This is just its value, modifying it has no effect on &lt;/span&gt;
    &lt;span class="c1"&gt;// the original&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;otherVal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parameter&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Stackalloc initializers
&lt;/h3&gt;

&lt;p&gt;This adds the ability to initialize a stack allocated array (did you even know this was a thing in C#?  I did :D) as you would a heap allocated one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;Span&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;stackalloc&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt; &lt;span class="p"&gt;};.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Indexing movable fixed buffers
&lt;/h3&gt;

&lt;p&gt;I can't really wrap my head around this one so see if you &lt;a href="https://github.com/dotnet/csharplang/blob/master/proposals/csharp-7.3/indexing-movable-fixed-fields.md"&gt;can understand it&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom fixed statement
&lt;/h3&gt;

&lt;p&gt;This is the first I've seen this one, and it is exciting for me!  Basically, if your type exposes a single well-known method (a pattern rather than an actual interface), you can use your own types in a &lt;code&gt;fixed&lt;/code&gt; statement for passing through P/Invoke.  I'm not sure of the exact method name (&lt;code&gt;DangerousGetPinnableReference()&lt;/code&gt; or &lt;code&gt;GetPinnableReference()&lt;/code&gt;) since the proposal and the release notes disagree, but if this method returns a suitable type then you can eliminate some boilerplate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Improved overload candidates
&lt;/h3&gt;

&lt;p&gt;There are some new overload resolution rules that narrow the set of candidate methods so the correct one is chosen more often.  See &lt;a href="https://github.com/dotnet/csharplang/blob/master/proposals/csharp-7.3/improved-overload-candidates.md"&gt;the proposal&lt;/a&gt; for a list of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Expression Variables in Initializers
&lt;/h3&gt;

&lt;p&gt;The summary here is "Expression variables like out var and pattern variables are allowed in field initializers, constructor initializers, and LINQ queries."  but I am not sure what that allows us to do...&lt;/p&gt;

&lt;h3&gt;
  
  
  Tuple comparison
&lt;/h3&gt;

&lt;p&gt;Tuples can be compared with &lt;code&gt;==&lt;/code&gt; and &lt;code&gt;!=&lt;/code&gt; now!&lt;/p&gt;
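&lt;p&gt;A minimal illustration (my own example):&lt;/p&gt;

```csharp
static class TupleCompare
{
    // Tuple == and != compare element-wise, so both members must match.
    public static bool Matches((int, string) left, (int, string) right)
    {
        return left == right;
    }
}
```

&lt;p&gt;For example, &lt;code&gt;(1, "one") == (1, "one")&lt;/code&gt; now evaluates to &lt;code&gt;true&lt;/code&gt;, and &lt;code&gt;(1, 2) != (1, 3)&lt;/code&gt; also evaluates to &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;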

&lt;h3&gt;
  
  
  Attributes on backing fields
&lt;/h3&gt;

&lt;p&gt;Have you ever wanted to put an attribute (e.g. &lt;code&gt;NonSerialized&lt;/code&gt;) on the backing field of a property, and then realized that you had to create a manual property and backing field just to do so?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Serializable&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Foo&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;NonSerialized&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;MySecret_backingField&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;MySecret&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;get&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;MySecret_backingField&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;MySecret_backingField&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not anymore!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Serializable&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Foo&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;NonSerialized&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;MySecret&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>csharp</category>
      <category>dotnet</category>
      <category>visualstudio</category>
    </item>
    <item>
      <title>What is git rebase?</title>
      <dc:creator>Jim Borden</dc:creator>
      <pubDate>Fri, 20 Apr 2018 11:41:13 +0000</pubDate>
      <link>https://dev.to/borrrden/what-is-git-rebase-4h9a</link>
      <guid>https://dev.to/borrrden/what-is-git-rebase-4h9a</guid>
      <description>&lt;p&gt;You can think of git commits like a long chain.  Each time you make a commit, you add one more link in the chain.  The problem is that sometimes that chain can split into two chains like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;D
|
C   C2
|   |
B --
|
A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;What most people do when they want to bring C2 back into the original chain is called a merge, and it looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;E
|  \
D   |
|   |
C   C2
|   |
B --
|
A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here E is a merge commit which combines the changes of C, D, and C2.  However, with branches, and then branches off of branches, this can quickly lead to a spaghetti-looking history.  Instead, let's see what happens when you switch to the branch containing C2 (let's call it 'dev') and rebase it onto the main line (let's call it 'master'):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout dev
git rebase master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C2
|
D
|
C
|
B
|
A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This history is much cleaner now, as C2 was simply replayed onto the end of the line.  This is what rebase does.  Note that rebase actually creates a brand new commit with the same changes, so the 'C2' at the top is a copy and the original is abandoned.  master is still pointing to 'D', and dev is pointing to the new 'C2' location.&lt;/p&gt;
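&lt;p&gt;The whole sequence is easy to try for yourself.  Here is a sketch that builds the history from the first diagram in a throwaway repository and then rebases it; the file names are made up purely for illustration:&lt;/p&gt;

```shell
# Build the article's history (A..D on master, C2 on dev) in a temp repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the branch is 'master'
git config user.email demo@example.com
git config user.name demo

for c in A B; do echo "$c" > main.txt; git add main.txt; git commit -qm "$c"; done
git checkout -qb dev                      # dev splits off after B
git checkout -q master
for c in C D; do echo "$c" > main.txt; git add main.txt; git commit -qm "$c"; done
git checkout -q dev
echo "C2" > dev.txt; git add dev.txt; git commit -qm "C2"

git rebase -q master                      # replay C2 on top of D
git log --format=%s                       # newest first: C2, D, C, B, A
```

&lt;p&gt;Running &lt;code&gt;git log --format=%s&lt;/code&gt; at the end shows the straight line from the last diagram, newest commit first.&lt;/p&gt;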

</description>
      <category>git</category>
      <category>explainlikeimfive</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The Couchbase Lite .NET Pipeline</title>
      <dc:creator>Jim Borden</dc:creator>
      <pubDate>Fri, 20 Apr 2018 02:19:53 +0000</pubDate>
      <link>https://dev.to/borrrden/the-couchbase-lite-net-pipeline-4kal</link>
      <guid>https://dev.to/borrrden/the-couchbase-lite-net-pipeline-4kal</guid>
      <description>&lt;p&gt;I'm fairly proud of the pipeline I have managed to set up for myself working as the developer of Couchbase Lite .NET.  The more automation you can have, the easier your life is and I'm also looking for ways to expand this so if anyone has any suggestions please let me know.  I'm considering Gerrit as well, but the normal flow seems awfully verbose for a single developer.  It would be nice to have commit validation though!  Hopefully someone can take away something from the setup I have made as well!&lt;/p&gt;

&lt;p&gt;Here is a diagram of the start to end flow at a high level:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fco8fvqp1z5ejy17hvgd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fco8fvqp1z5ejy17hvgd7.png" alt="Pipeline Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, let's start with what everyone knows: I have just made a commit to the repo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fp31t8163bupc1pj84rgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fp31t8163bupc1pj84rgf.png" alt="GitHub commit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our official builds of Couchbase Lite .NET are based on &lt;a href="https://github.com/couchbase/manifest/blob/master/couchbase-lite-net/master.xml" rel="noopener noreferrer"&gt;a manifest&lt;/a&gt; containing the repos and branches needed and where to check them out.  This manifest format comes from the Google tool &lt;a href="https://source.android.com/setup/develop/repo" rel="noopener noreferrer"&gt;repo&lt;/a&gt;.  We have &lt;a href="https://github.com/couchbase/build/blob/master/scripts/jenkins/manifest/create-build-manifest" rel="noopener noreferrer"&gt;a script&lt;/a&gt; that detects changes in key repos inside the manifest and, if there are any, triggers a new source scrape.  It runs as a Jenkins job (as do all the downstream jobs).  The source scrape pulls down all of the repositories at the correct commits and compresses them into a gzipped tarball for archiving (similar to the "download zip" feature of GitHub, except it works across multiple repos).  This source is then uploaded to a NAS (Network Attached Storage) machine and made available on an internal HTTP server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fxda2youyfalndjlkbzk7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fxda2youyfalndjlkbzk7.png" alt="Server listing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And so this commit has resulted in a source scrape for build number 129 of version 2.1.0.  The source scraping job then triggers a downstream build job, which pulls the newly created source and runs the build script on it to produce a NuGet package.  This is done once for the Enterprise Edition, and once for the Community Edition.  These packages are then uploaded to our internal feed, hosted by a &lt;a href="https://inedo.com/proget" rel="noopener noreferrer"&gt;ProGet server installation&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fp40wzr88knlbt9rne8ee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fp40wzr88knlbt9rne8ee.png" alt="Proget package"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Should this build be the lucky one, this is exactly what will be shipped; there are no further changes to the assembly whatsoever.  However, the process is far from over.  I upload the packages to the NuGet feed at this point because I want to detect any packaging mistakes and ensure that the packages that go out are usable.  So the packaging itself is, in essence, also being tested.  &lt;/p&gt;

&lt;p&gt;Once the upload is finished, the next downstream job is triggered: the unit testing job.  The term "unit test" is used horrendously loosely here, as I don't have any standards for making actual units.  It's more a collection of stuff that I want to make sure works (not sure what to call it) plus tests for things that failed before and were reported.  The tests are run on .NET Core Windows, .NET Core macOS, .NET Core Ubuntu, UWP, the iOS simulator, and the Android emulator.  Each suite is run twice: first, a debug build of the Community Edition is compiled from source and the tests are run against that (the debug builds have lots of extra asserts that detect failures more quickly).  If those pass, then a release build of the unit tests is made using the package that was just inserted into the feed.  On .NET Core Windows only, an extra run is done to gather code coverage data.  &lt;/p&gt;

&lt;p&gt;If this passes, then the packages are all promoted to a separate feed, reserved for packages that have passed the first round of automated testing.  Our QE team can be confident that packages pulled from this feed are sane.  After that, the next step is triggered: a build of the QE test server application for .NET Core, UWP, Xamarin Android, and Xamarin iOS.  The test server application is an internal tool that accepts commands over HTTP so that it can be orchestrated by separate processes and programs.  The reason this is done becomes clear in the next step.&lt;/p&gt;

&lt;p&gt;The final step in the pipeline is a set of functional tests.  This means that we need to start up a program (or several) running Couchbase Lite, and also orchestrate instances of Sync Gateway and Couchbase Server and hook them all up to test replication scenarios against each other.  A typical run looks like this (carried out by a Python client running pytest on a machine separate from the ones below):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Couchbase Server (machine 1 / one time)&lt;/li&gt;
&lt;li&gt;Install Sync Gateway (machine 2 / one time)&lt;/li&gt;
&lt;li&gt;Ensure Sync Gateway and Couchbase Server are shut down&lt;/li&gt;
&lt;li&gt;Start Couchbase Server&lt;/li&gt;
&lt;li&gt;Clear Couchbase Server bucket data&lt;/li&gt;
&lt;li&gt;(optional) Prepopulate Couchbase Server bucket data&lt;/li&gt;
&lt;li&gt;Start Sync Gateway with a config pointing to the Couchbase Server bucket&lt;/li&gt;
&lt;li&gt;Download and install the Test Server app (machine 3)&lt;/li&gt;
&lt;li&gt;Start the test server app&lt;/li&gt;
&lt;li&gt;Confirm everything is listening on the correct port&lt;/li&gt;
&lt;li&gt;Begin issuing commands to both setup scenarios, and confirm correct results&lt;/li&gt;
&lt;li&gt;Stop the test server app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is very intensive, and even simple scenarios can take minutes per test (as opposed to milliseconds for a unit test).  Afterwards, a report is generated and saved covering the pass / fail results and what went wrong.&lt;/p&gt;

&lt;p&gt;When it comes time to release a DB (developer build) to our prerelease feed, it's almost as simple as moving a package from our internal feed to our external one.  There is one step in between which unzips the package, changes the NuGet version, and then rezips it so that it shows up with the proper identifier on a NuGet feed (e.g. 2.1.0-db001).  When GA time comes, in addition to other testing kicked off manually by the QE team, the process is the same except that instead of moving the package to the prerelease feed, it gets moved to nuget.org and its version gets changed to a non-prerelease version (e.g. 2.1.0).  &lt;/p&gt;

&lt;p&gt;Automation is very nice: it saves a lot of time and helps you catch things more quickly.  I hope I can get even more automated in the future! &lt;/p&gt;

</description>
      <category>couchbaselite</category>
      <category>dotnet</category>
      <category>nuget</category>
      <category>jenkins</category>
    </item>
    <item>
      <title>New Features in C# 7.1 and 7.2</title>
      <dc:creator>Jim Borden</dc:creator>
      <pubDate>Thu, 19 Apr 2018 06:15:31 +0000</pubDate>
      <link>https://dev.to/borrrden/new-features-in-c-71-and-72-30n7</link>
      <guid>https://dev.to/borrrden/new-features-in-c-71-and-72-30n7</guid>
      <description>&lt;p&gt;I haven't counted exactly but I must have around a decade of .NET experience at this point.  I love working with .NET and C#, and I get excited to see new features in the language.  The latest major released version of C# is version 7, with two minor releases being 7.1 and 7.2.  I'll talk about the new features of 7.1 and 7.2, and then go over some of the features I am excited about for future versions.&lt;/p&gt;

&lt;h1&gt;
  
  
  C# 7.1
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Async Main
&lt;/h3&gt;

&lt;p&gt;Remember a couple versions ago when C# introduced the &lt;code&gt;async&lt;/code&gt; and &lt;code&gt;await&lt;/code&gt; keywords?  I sure do; that was one of the best features I could have asked for.  It was also dangerous unless you understood how it works.  A function containing an &lt;code&gt;await&lt;/code&gt; returns to its caller as soon as it hits an await that isn't yet complete, and picks up where it left off later; unless the caller also uses &lt;code&gt;await&lt;/code&gt;, it continues as if the function had already finished.  So it's easy to see how you would get "&lt;code&gt;await&lt;/code&gt;-infected" all the way up your stack.  Then you got to &lt;code&gt;Main()&lt;/code&gt;, which was required to be synchronous, and you were out of luck.  No more!  Now you can have an asynchronous &lt;code&gt;Main()&lt;/code&gt; for which the compiler automatically generates the entry point plumbing.  It looks like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;static async Task&amp;lt;int&amp;gt; Main()&lt;/code&gt; or &lt;code&gt;static async Task Main()&lt;/code&gt; (which represent asynchronous &lt;code&gt;int&lt;/code&gt; and &lt;code&gt;void&lt;/code&gt; respectively).  Happy asynching!&lt;/p&gt;

&lt;h3&gt;
  
  
  Default Literal Expressions
&lt;/h3&gt;

&lt;p&gt;At a high level there are two kinds of types in C#: structs and classes.  The difference is that structs are value types: rather than being a separate allocation on the managed heap, their memory is laid out in place, whether on the stack or inline inside whatever contains them.  This means it does not make sense to have a &lt;code&gt;null&lt;/code&gt; struct, since there is no reference to be null in the first place.  However, when using generic methods this meant that it was awkward to initialize a variable of generic type &lt;code&gt;T&lt;/code&gt;, because you didn't know what a suitable "default" value should be.  Enter the &lt;code&gt;default(T)&lt;/code&gt; syntax.  If you write &lt;code&gt;default(&amp;lt;type&amp;gt;)&lt;/code&gt; then you get the compiler-chosen default value for whatever type you've chosen.  This can get messy, though, since nobody wants to look at something like &lt;code&gt;default(Tuple&amp;lt;string, KeyValuePair&amp;lt;string, object&amp;gt;&amp;gt;)&lt;/code&gt;.  Now you can shorten this to simply &lt;code&gt;default&lt;/code&gt; without the type argument, and it will be inferred from the variable type.  A small change but a nice touch!&lt;/p&gt;

&lt;h3&gt;
  
  
  Inferred Tuple Element Names
&lt;/h3&gt;

&lt;p&gt;C# 7 introduced a fun new way to make tuples.  I'm not sure many people liked making a tuple and then remembering what &lt;code&gt;Item1&lt;/code&gt; and &lt;code&gt;Item2&lt;/code&gt; on the tuple were.  So instead, now you can write a block like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Jim"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;32&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;pair&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Now access via pair.name and pair.age&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In C# 7.1 this has been simplified to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;pair&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Now access via pair.name and pair.age&lt;/span&gt;
&lt;span class="c1"&gt;// automatically named from the variable names&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Compiler Options /refout and /refonly
&lt;/h3&gt;

&lt;p&gt;If you remember back to the Portable Class Library (PCL) days, some people required platform-specific functionality in order for their library to work, but they wanted their library to be usable from other PCLs that targeted the same subset of platforms as they did.  They couldn't compile it as a PCL, though, because they used platform-specific APIs that weren't present in the defined PCL API set.  So what they did instead was trick the NuGet client: they put in a do-nothing shim compiled as a PCL library, and also put in the actual assemblies for each platform they supported under its target moniker (e.g. net461, xamarinios, etc).  Since NuGet always prefers a platform-specific assembly, installing the package on a top level project would pull out the correct platform assembly, while compiling a PCL against it would compile against the shim.  Since both assemblies had the same version and everything else looked identical, the build happily overwrote the shim with the real implementation at the top level, and everything worked out.  This was coined "bait and switch."  &lt;/p&gt;

&lt;p&gt;Well, it was such a good trick that it actually got legitimized.  Have you ever wondered what a reference assembly is?  It is exactly the "shim" I talked about above.  It is much like a header file in C or C++, in that it defines what API is available without actually implementing it.  It is used when a library otherwise cannot target .NET Standard and needs a per-platform assembly, but still wants to be usable from shared .NET Standard libraries.  Now there are compiler options to generate this reference assembly for you automatically: &lt;code&gt;/refout&lt;/code&gt; specifies where to write the generated reference assembly while also compiling the implementation, and &lt;code&gt;/refonly&lt;/code&gt; compiles only the reference assembly.&lt;/p&gt;

&lt;h1&gt;
  
  
  C# 7.2
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Reference Semantics with Value Types
&lt;/h3&gt;

&lt;p&gt;Reference semantics were introduced in C# 7 as a way to reduce the number of copies made when returning values and the like.  For example, instead of returning a copy of an item that is inside an array, you can directly return a reference to the same object that is in the array.  These semantics have now been extended to work with structs, resulting in the following new modifiers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;in&lt;/code&gt; Specifies that you want to pass a struct by reference, but the callee cannot modify it&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ref readonly&lt;/code&gt; returns a struct by reference as an object that may not be modified&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;readonly struct&lt;/code&gt; creates a struct that can never be modified once created&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ref struct&lt;/code&gt; deserves an article of its own, but basically means that the struct can never be allocated on the managed heap.  The main motivation is the awesome new &lt;code&gt;Span&amp;lt;T&amp;gt;&lt;/code&gt; type, which also deserves its own article.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Non-Trailing Named Arguments
&lt;/h3&gt;

&lt;p&gt;This one is meant to give meaning to all of those literal &lt;code&gt;true&lt;/code&gt;, &lt;code&gt;false&lt;/code&gt;, and &lt;code&gt;null&lt;/code&gt; arguments that you pass in your code.  So, for example, if you have something like&lt;/p&gt;

&lt;p&gt;&lt;code&gt;public void Log(bool verbose, string message, object arg)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You could now make the call a little bit clearer by changing the first line below to the second:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="nf"&gt;Log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Foo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;42&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nf"&gt;Log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Foo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;42&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Leading Underscore in Numeric Literals
&lt;/h3&gt;

&lt;p&gt;This is a very small one that makes a readability improvement.  Before, an underscore could not appear right after the &lt;code&gt;0b&lt;/code&gt; or &lt;code&gt;0x&lt;/code&gt; prefix of a numeric literal, so you ended up with &lt;code&gt;0b0101_0101&lt;/code&gt;, which is hard to read.  Now you can put in better spacing with &lt;code&gt;0b_0101_0101&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  private protected Access Modifier
&lt;/h3&gt;

&lt;p&gt;This one was debated for years, with an all-out war emerging in the community.  Nobody could agree on what to name it, so in the end it was decided to go with the same name as managed C++.  This angered some, but I don't care because I want to use it.  What this new access modifier means is that any member declared with it is visible to subclasses, but only if those subclasses are in the same assembly as the base class.  This is very useful for me in the following situation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;internal&lt;/span&gt; &lt;span class="k"&gt;sealed&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Hidden&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;abstract&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Visible&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="n"&gt;Hidden&lt;/span&gt; &lt;span class="n"&gt;HiddenObj&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will fail to compile because a subclass of &lt;code&gt;Visible&lt;/code&gt; has a property it can see, which is of a type it cannot see because &lt;code&gt;Hidden&lt;/code&gt; is internal.  The only remedy until now was to make the property &lt;code&gt;internal&lt;/code&gt; instead of &lt;code&gt;protected&lt;/code&gt;, which meant lots of noise when dealing with the type inside the assembly.  Now, with &lt;code&gt;private protected&lt;/code&gt;, the property is visible only to subclasses inside the same assembly: it is invisible both to unrelated types in the same assembly and to subclasses outside of it.&lt;/p&gt;

</description>
      <category>csharp</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Moving My Blog Home!</title>
      <dc:creator>Jim Borden</dc:creator>
      <pubDate>Thu, 19 Apr 2018 05:33:03 +0000</pubDate>
      <link>https://dev.to/borrrden/moving-my-blog-home-9be</link>
      <guid>https://dev.to/borrrden/moving-my-blog-home-9be</guid>
      <description>&lt;p&gt;Hello Everyone, &lt;/p&gt;

&lt;p&gt;I recently discovered this site and I think it will get way more visibility than my old blog did.  You can find it &lt;a href="https://borrrden.wordpress.com/"&gt;here&lt;/a&gt; (along with a duplicate of my first post here).  So therefore I am going to put a note there (not that anyone will probably notice) and start writing things here instead!  Hope I can post something interesting.&lt;/p&gt;

</description>
      <category>announcement</category>
    </item>
    <item>
      <title>A .NET Glossary</title>
      <dc:creator>Jim Borden</dc:creator>
      <pubDate>Wed, 18 Apr 2018 01:01:30 +0000</pubDate>
      <link>https://dev.to/borrrden/a-net-glossary-57l4</link>
      <guid>https://dev.to/borrrden/a-net-glossary-57l4</guid>
      <description>&lt;p&gt;.NET has evolved greatly since its induction way back in the early 2000s and at this point it is beginning to rival Java in the ways it can be deployed.  This post is meant to be a simple list of terms that get thrown around so hopefully things can make a little more sense to those unfamiliar.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.NET Framework&lt;/strong&gt;: This is the name for the original implementation of the .NET runtime and libraries that ran exclusively on Windows.  It began with version 1.0, then went to 1.1, 2.0, 3.0, 3.5, 4.0, and 4.5 before the ecosystem explosion.  Since then the releases have been more frequent and more patch-like.  The current version as of this writing is 4.7.1.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mono&lt;/strong&gt;: This is what I commonly refer to as the “great long-running reverse engineering game.”  Several companies, all fronted by the same individual, made it their goal to bring an implementation of the .NET runtime and libraries to Linux.  After they achieved that, they went on to Macintosh, and eventually wrote implementations for many platforms including iOS, Android, and various gaming consoles.  This seems to be heading for replacement by .NET Core.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Xamarin&lt;/strong&gt;: Xamarin is the name of the company that eventually ended up with the rights to the Mono source.  They are now owned by Microsoft but spent a good number of years independently maintaining Mono, with a focus on mobile devices.  Their most popular frameworks are Xamarin iOS and Xamarin Android which are runtime / library implementations designed to run on iOS and Android respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.NET Core&lt;/strong&gt;:  This is the official Microsoft effort to bring the .NET runtime and libraries to various desktop platforms.  It has a great command line interface and the ability to publish self-contained applications that can run in an environment without any .NET runtime installed at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.NET Standard&lt;/strong&gt;: Because so many runtime variations exist trying to accomplish the same thing, the .NET Standard is a list of APIs that an implementation must support in order to claim conformance to a given version of the standard.  Thus, if you compile against the .NET Standard, your library can run on any runtime that conforms to that standard, and any future ones as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UWP&lt;/strong&gt;:  Universal Windows Platform is Microsoft’s attempt to unify phone, tablet, and desktop with one code base that targets it all.  It is a sandboxed environment that produces self contained applications that are designed to be distributed on the Windows Store.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Roslyn&lt;/strong&gt;:  You might have seen this term flying around.  This is the name of Microsoft’s latest C# compiler environment.  It is open source and available as a prebuilt library so that any application can gain access to the powerful C# compilation and analysis methods that Visual Studio uses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.NET Native&lt;/strong&gt;:  This is an AOT (ahead of time) compiler for UWP that will precompile .NET IL (intermediate language, which is the compiled form of .NET assemblies that is consumed by the .NET runtime) down into native code (either x86, x64, or ARM) designed to run on the Windows operating system.  This gives large performance gains on memory- and resource-constrained devices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ASP .NET&lt;/strong&gt;: This is Microsoft’s framework for building server side applications designed to be interacted with via HTTP calls.  It is Windows only, but its newer sibling ASP .NET Core is designed to run cross platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.NET Runtime&lt;/strong&gt;:  This is the engine responsible for hosting applications written in .NET.  It is a C++ program designed to allocate managed memory, perform garbage collection, and load .NET assemblies into and out of its address space, compiling them on demand for execution by the host operating system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.NET Framework Libraries&lt;/strong&gt;:  These are the libraries that provide basic functionality as a platform.  Things such as string operations, file I/O, threading, etc are provided by these libraries.  They were historically bundled with a given release of the .NET Framework, but these days they are modular and distributed out of band.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Assembly&lt;/strong&gt;:  This is the term for a binary image containing instructions that are executable by the .NET runtime (as opposed to a binary image containing machine instructions that are executable by the operating system).&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
