<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tony Knight</title>
    <description>The latest articles on DEV Community by Tony Knight (@tonycknight).</description>
    <link>https://dev.to/tonycknight</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F596615%2F297c14b6-85d7-4f2c-9d08-ae4cc533bab7.jpeg</url>
      <title>DEV Community: Tony Knight</title>
      <link>https://dev.to/tonycknight</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tonycknight"/>
    <language>en</language>
    <item>
      <title>Measuring performance using BenchmarkDotNet - Part 3 Breaking Builds</title>
      <dc:creator>Tony Knight</dc:creator>
      <pubDate>Sat, 22 May 2021 00:25:13 +0000</pubDate>
      <link>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-3-breaking-builds-36il</link>
      <guid>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-3-breaking-builds-36il</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Previously we discussed the &lt;a href="https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-2-4dof"&gt;absolute bare minimum&lt;/a&gt; to run &lt;a href="https://benchmarkdotnet.org/"&gt;BenchmarkDotNet&lt;/a&gt; in your CI pipeline. Your code builds, benchmarks are taken, and you have to drill down into the numbers. &lt;/p&gt;

&lt;p&gt;But what if bad code was committed? A small change sneaks in, probably under tight deadlines, and suddenly your nice, fast code runs like treacle. &lt;/p&gt;

&lt;p&gt;How would you know about - and more importantly &lt;em&gt;stop&lt;/em&gt; - such horrors? That's what we'll try to address in this post.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As before, we're only talking pure code here, that is, your class methods and algorithms. APIs, services and applications are much more complex, and we haven't considered I/O. So let's keep it simple and focus on the performance of pure code.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  What this post will cover
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Ways to stop builds when code performance degrades&lt;/li&gt;
&lt;li&gt;Installing tools in a sandbox environment&lt;/li&gt;
&lt;li&gt;Collecting benchmark data for analysis&lt;/li&gt;
&lt;li&gt;Analysing results and breaking builds&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What you'll need for this post
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;.NET 7 SDK installed on your local machine&lt;/li&gt;
&lt;li&gt;A BenchmarkDotNet solution&lt;/li&gt;
&lt;li&gt;An IDE, such as VS, VS Code or Rider.&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  The simplest possible way to break a build...
&lt;/h1&gt;

&lt;p&gt;...is, surprisingly, not a sledgehammer. It's even simpler than that.&lt;/p&gt;

&lt;p&gt;On the vast majority of build platforms, to stop a build your script simply needs to exit with a non-zero return code. That age-old trick is simple and effective: it stops bad things dead in their tracks. So let's use it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;We want our benchmark analysis to return 0 on success and 1 on failure&lt;/strong&gt;. &lt;/p&gt;
&lt;/blockquote&gt;
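&lt;p&gt;To make this concrete, here's a minimal shell sketch (the &lt;code&gt;gate&lt;/code&gt; helper is hypothetical, purely for illustration) showing how an exit code is all a build platform needs:&lt;/p&gt;

```shell
# A build gate in miniature: run a step, report its exit code, and
# propagate it. 0 lets the build continue; anything else stops it.
# `gate` is a hypothetical helper, not part of any real CI platform.
gate() {
  if "$@"; then
    echo "exit 0: build continues"
  else
    echo "exit $?: build stops"
    return 1
  fi
}
```

&lt;p&gt;Most CI platforms do exactly this implicitly: each script step's exit code decides whether the job carries on.&lt;/p&gt;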

&lt;p&gt;Easy! But this leaves us with a trickier problem. &lt;/p&gt;




&lt;h1&gt;
  
  
  How do we detect degraded performance?
&lt;/h1&gt;

&lt;p&gt;You've got the stats from BenchmarkDotNet. You now need to monitor each build's performance results, or more accurately, &lt;em&gt;detect deviance from accepted performance&lt;/em&gt;. &lt;/p&gt;

&lt;h3&gt;
  
  
  What is acceptable performance?
&lt;/h3&gt;

&lt;p&gt;This is a &lt;em&gt;very&lt;/em&gt; broad subject, and it's often difficult to put precise time limits on micro code performance. For much optimisation work, you'll be iteratively changing code so that &lt;em&gt;performance should always improve with each commit&lt;/em&gt;. Therefore, you can stop optimising &lt;em&gt;when the results are good enough&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;So, as we often do not have absolute time requirements and we iteratively improve our performance as a matter of course, we'll take a broad view:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Accepted performance is the best recorded benchmark time&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That leads us onto deviance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deviance from acceptable performance
&lt;/h3&gt;

&lt;p&gt;Why do we want deviance and not absolutes? &lt;em&gt;Because we cannot guarantee that repeated benchmark runs, even with a static codebase and the same infrastructure, will yield exactly the same time measurements over each iteration.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;And as each build feeds many time-critical activities - user acceptance, security validation and the like - we don't want a tiny deviation to choke off this supply of new features. &lt;/p&gt;

&lt;p&gt;How do we know what an appropriate deviance is, and how do we measure it? That's another &lt;em&gt;very&lt;/em&gt; broad subject and depends entirely on your circumstances. For now, let's take a simple (&amp;amp; admittedly crude!) method just to illustrate the key point: stopping slow code getting into our main codebase.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A new measurement is acceptable when it does not exceed &lt;code&gt;[baseline measurement + deviance%]&lt;/code&gt;, with the deviance expressed as a percentage of the baseline&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here we're simply allowing some slippage from the best recorded time.&lt;/p&gt;
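&lt;p&gt;The rule is simple enough to sketch in a few lines of shell. This is just the arithmetic of the rule above for illustration, not BDNA's own analysis:&lt;/p&gt;

```shell
# Crude tolerance check: a new measurement passes when it does not
# exceed baseline * (1 + deviance/100).
# Arguments: baseline, new measurement, allowed deviance in percent.
within_tolerance() {
  awk -v base="$1" -v cur="$2" -v tol="$3" \
    'BEGIN { exit (cur > base * (1 + tol / 100)) }'
}
```

&lt;p&gt;For example, a 105ns measurement against a 100ns baseline passes with a 10% deviance but fails with a 0% deviance.&lt;/p&gt;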

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Please remember:&lt;/strong&gt; this subject is extremely broad and this article is just an introduction. But for now, the main take-away is: &lt;strong&gt;whatever the current performance, keep improving it and never degrade!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h1&gt;
  
  
  It seems we need a tool for this
&lt;/h1&gt;

&lt;p&gt;You could build your own, but here's something from our own stables: a dotnet tool to detect deviance in BenchmarkDotNet results:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarkdotnet.analyser"&gt;
        benchmarkdotnet.analyser
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A tool for analysing BenchmarkDotNet results
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;BenchmarkDotNet.Analyser (BDNA)&lt;/strong&gt; is a tool for iteratively collecting and analysing BenchmarkDotNet data. It's distributed as a dotnet tool, so you can use it locally and on almost any CI platform. You just need .NET 7 installed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;BDNA is in beta and we want to continually improve it. We welcome &lt;a href="https://github.com/NewDayTechnology/benchmarkdotnet.analyser/issues"&gt;bug reports and feature suggestions&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Installing
&lt;/h2&gt;

&lt;p&gt;The latest version is distributed via &lt;a href="https://www.nuget.org/packages/bdna/"&gt;NuGet&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the remainder of this section, I'll lead you through installing BDNA in a sandbox environment, so if you do run into any problems you can simply delete the directory and start again without side effects.&lt;/p&gt;




&lt;h3&gt;
  
  
  Create a new sandbox
&lt;/h3&gt;

&lt;p&gt;The sandbox will be a directory on your local drive.  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We won't be pushing this directory to source control in this article. But the same steps are used in a cloned local repository.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;mkdir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;c:\projects\scratch\tools&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nx"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;projects\scratch\tools&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a tools manifest
&lt;/h3&gt;

&lt;p&gt;The tools manifest is simply a list of the repo's tools and their pinned versions, ensuring version consistency and stability: just like your own project's package dependencies. As we want these tools installed locally, we'll create a new manifest in our directory:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;new&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;tool-manifest&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Globally installed .NET tools are very convenient: you install a tool once on your machine and update it as necessary. But global installs place nasty dependencies on your build platform, and there's no guarantee your team members will use &lt;em&gt;exactly&lt;/em&gt; the same version. Locally installed tools provide consistency, and are installed to the local repository. &lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  Install BDNA
&lt;/h3&gt;

&lt;p&gt;All that's left now is to download and install BDNA:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This will install the latest non-preview version. If you want a specific version, just specify it:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--version&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;0.2.263&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
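&lt;p&gt;After installation, the manifest at &lt;code&gt;.config\dotnet-tools.json&lt;/code&gt; pins the tool and its version; it should look something like this (version shown for illustration):&lt;/p&gt;

```json
{
  "version": 1,
  "isRoot": true,
  "tools": {
    "bdna": {
      "version": "0.2.263",
      "commands": [ "bdna" ]
    }
  }
}
```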


&lt;blockquote&gt;
&lt;p&gt;BDNA packages are &lt;a href="https://www.nuget.org/packages/bdna/"&gt;listed here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Check that BDNA is correctly installed:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;and you will get a list of repo-local tools:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Package Id      Version              Commands      Manifest
-------------------------------------------------------------------------------------------------------
bdna            0.2.263              bdna          projects\scratch\tools\.config\dotnet-tools.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Check that it's up and running:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;and you should be greeted with a banner, like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z2EFq6QW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/giszsu71jom47lu8aukw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z2EFq6QW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/giszsu71jom47lu8aukw.png" alt="alt text" width="587" height="174"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  The installation is done!
&lt;/h3&gt;

&lt;p&gt;You have successfully installed BDNA into your directory, and exactly the same steps will apply in a cloned git repository.&lt;/p&gt;


&lt;h1&gt;
  
  
  Checking benchmarks
&lt;/h1&gt;

&lt;p&gt;What remains now is to get some benchmarks. If you've followed this series, you'll have some demonstration projects that generate benchmarks, such as &lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-2"&gt;
        benchmarking-performance-part-2
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;






&lt;h3&gt;
  
  
  Get some benchmarks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-2/"&gt;Clone the repo&lt;/a&gt; and start building:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;clean&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;restore&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;build&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-c&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Release&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;src\benchmarkdotnetdemo\bin\Release\net7.0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Benchmarkdotnetdemo.dll&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-f&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The results will be found under &lt;code&gt;**\BenchmarkDotNet.Artifacts\results&lt;/code&gt;.&lt;/p&gt;


&lt;h3&gt;
  
  
  Collect the data from your recent BenchmarkDotNet run
&lt;/h3&gt;

&lt;p&gt;BDNA works by aggregating sequential benchmark runs. To aggregate (from the repo's root directory):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;aggregate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-new&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\src\benchmarkdotnetdemo\bin\Release\net7.0\BenchmarkDotNet.Artifacts\results"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-aggs&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\bdna"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-out&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\bdna"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-runs&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;30&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;To see all options try &lt;code&gt;dotnet bdna aggregate -?&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Repeatedly run benchmarks (&lt;code&gt;dotnet Benchmarkdotnetdemo.dll -f *&lt;/code&gt;) and aggregate (&lt;code&gt;dotnet bdna aggregate ...&lt;/code&gt;) to build a dataset. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When benchmarking you'll need points of reference for each datapoint. You can use &lt;code&gt;--build %build_number%&lt;/code&gt; when aggregating each benchmark run to annotate with the build number. Tags are also supported.&lt;/p&gt;
&lt;/blockquote&gt;
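&lt;p&gt;The run-and-aggregate loop can be sketched like this; the two functions are hypothetical stand-ins for &lt;code&gt;dotnet Benchmarkdotnetdemo.dll -f *&lt;/code&gt; and &lt;code&gt;dotnet bdna aggregate ... --build N&lt;/code&gt;:&lt;/p&gt;

```shell
# Build up a dataset over several "builds". Both functions are stubs:
# in a real pipeline each would invoke the real dotnet commands.
run_benchmarks() { echo "benchmarking..."; }
aggregate()      { echo "aggregating build $1 into ./bdna"; }

for build in 1 2 3; do
  run_benchmarks
  aggregate "$build"
done
```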


&lt;h3&gt;
  
  
  Analyse the data
&lt;/h3&gt;

&lt;p&gt;Now we want to check the dataset for deviances. To surface some errors we'll assume a very strict deviance (0%) and allow no errors:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;analyse&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--aggregates&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\bdna"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--tolerance&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--maxerrors&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--verbose&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;dotnet bdna analyse&lt;/code&gt; will send results to the console. If all is well you'll see a nice confirmatory message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YeHHRyPl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8ia50fp9s3j7i5c1gmf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YeHHRyPl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8ia50fp9s3j7i5c1gmf.png" alt="alt text" width="384" height="79"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But if there are degraded benchmarks they'll be listed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kwKvjKWn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ib9vq7a5ng9derzaodx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kwKvjKWn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ib9vq7a5ng9derzaodx6.png" alt="alt text" width="800" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If too many errors are found, the tool's return code will be 1: &lt;strong&gt;your CI script will need to watch for this return code and fail the build accordingly&lt;/strong&gt;. &lt;/p&gt;
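&lt;p&gt;In a shell-based pipeline, the wiring could be as simple as the sketch below. The &lt;code&gt;dotnet&lt;/code&gt; function here is a stub simulating a failed analysis so the sketch stands alone; in a real pipeline the stub is removed and the real CLI is on the PATH:&lt;/p&gt;

```shell
# Fail the build when bdna reports too many degradations.
dotnet() { return 1; }   # stub: simulate an analysis that found errors

analyse_and_gate() {
  if dotnet bdna analyse --aggregates "./bdna" --tolerance 10 --maxerrors 0; then
    echo "benchmarks within tolerance"
  else
    echo "benchmarks degraded: failing the build"
    return 1
  fi
}
```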


&lt;h3&gt;
  
  
  Reporting on the data
&lt;/h3&gt;

&lt;p&gt;Console logs are often fine for CI pipelines. Wouldn't it be good to get some graphs of performance over time?&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;report&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--aggregates&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\bdna"&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;--verbose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-r&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;csv&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-r&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-f&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-out&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\bdna_reports"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;For help see &lt;code&gt;dotnet bdna report --help&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;BDNA will build a CSV (and/or JSON) file containing selected benchmarks. Each benchmark is exported with namespace, class, method, parameters and annotations (build number, tags, etc). &lt;/p&gt;

&lt;p&gt;Import the report file in your favourite BI tool and:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OxUx9kjv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sero4h4zjxykgv2l5ea2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OxUx9kjv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sero4h4zjxykgv2l5ea2.png" alt="alt text" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;These measurements were taken from a machine with a lot of background processing going on, so you'll see peaks and troughs in the measurements. The general trend is flat. This is good, as the code didn't change between builds. &lt;/p&gt;
&lt;/blockquote&gt;


&lt;h1&gt;
  
  
  What have we learned?
&lt;/h1&gt;

&lt;p&gt;We've discussed a very simple method of determining degraded performance where we compare results against a best-known result.&lt;/p&gt;

&lt;p&gt;We've described how to set up local dotnet tools and NuGet configurations.&lt;/p&gt;

&lt;p&gt;We've introduced a tool that can collect, report &amp;amp; detect performance degradations, and how it can be used in a sandbox environment.&lt;/p&gt;


&lt;h1&gt;
  
  
  More reading
&lt;/h1&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarkdotnet.analyser"&gt;
        benchmarkdotnet.analyser
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A tool for analysing BenchmarkDotNet results
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;




&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-2"&gt;
        benchmarking-performance-part-2
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>dotnet</category>
      <category>performance</category>
      <category>ci</category>
      <category>benchmark</category>
    </item>
    <item>
      <title>Measuring performance using BenchmarkDotNet - Part 2</title>
      <dc:creator>Tony Knight</dc:creator>
      <pubDate>Thu, 01 Apr 2021 16:07:43 +0000</pubDate>
      <link>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-2-4dof</link>
      <guid>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-2-4dof</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Previously we &lt;a href="https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-1-39g3"&gt;discussed what BenchmarkDotNet gives us&lt;/a&gt; and how to write simple benchmarks. As a quick reminder:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We use benchmarks to measure code performance&lt;/li&gt;
&lt;li&gt;BenchmarkDotNet is a NuGet package&lt;/li&gt;
&lt;li&gt;We use console apps to host and run benchmarks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what's next? We need to run the benchmarks and collect results as easily and frequently as we can.&lt;/p&gt;




&lt;h1&gt;
  
  
  Running Benchmarks locally
&lt;/h1&gt;

&lt;p&gt;We have a sample .NET Core console application coded up and ready to go in GitHub: &lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology" rel="noopener noreferrer"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-2" rel="noopener noreferrer"&gt;
        benchmarking-performance-part-2
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Build and run
&lt;/h3&gt;

&lt;p&gt;Once you've cloned the repo, just run a &lt;code&gt;dotnet publish&lt;/code&gt; from the local repository's root folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet publish -c Release -o publish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;If you're unfamiliar with dotnet's CLI, &lt;code&gt;dotnet publish&lt;/code&gt; will build the application and its dependencies, pushing the complete distributable application to the &lt;code&gt;./publish&lt;/code&gt; directory. &lt;a href="https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-publish" rel="noopener noreferrer"&gt;You can read more here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At this point, you've got a benchmarking console application in &lt;code&gt;./publish&lt;/code&gt; that's ready to use. Because I like my command line clean, I'm going to change the working folder:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd publish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;...and we're almost ready to start.&lt;/p&gt;


&lt;h3&gt;
  
  
  Before you run, prepare your machine
&lt;/h3&gt;

&lt;p&gt;Whenever you're measuring CPU performance you've got to be mindful of what else is running on your machine. Even on a 64-core beast, the OS may interrupt the benchmark execution and skew results. That skew is not easy to measure or counter: it's best to assume that interrupts and context switches always happen.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Whenever you run final benchmarks, make sure the absolute minimum of software is running. Before you start, close down all other applications: browsers, chat, video, everything! &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For now, don't close down everything: we're just exploring BenchmarkDotNet here and you need a browser open to read. But, when capturing real results &lt;strong&gt;always remember to run on idle machines&lt;/strong&gt;.&lt;/p&gt;


&lt;h3&gt;
  
  
  And now to get some benchmarks
&lt;/h3&gt;

&lt;p&gt;To run them all:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet ./benchmarkdotnetdemo.dll -f *
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;-f *&lt;/code&gt; is a BenchmarkDotNet argument to selectively run benchmarks by their fully qualified type name. We've elected to select all of them with the wildcard &lt;code&gt;*&lt;/code&gt;; if we wanted to run only selected benchmarks, we'd have to use &lt;code&gt;-f benchmarkdotnetdemo.&amp;lt;pattern&amp;gt;&lt;/code&gt;, as all these benchmarks fall in the &lt;code&gt;benchmarkdotnetdemo&lt;/code&gt; namespace. For instance, &lt;code&gt;-f benchmarkdotnetdemo.Simple*&lt;/code&gt; will run all the "Simple" benchmarks.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Each console application built with BenchmarkDotNet has help automatically integrated. Just use &lt;code&gt;--help&lt;/code&gt; as the argument, and you will get a very comprehensive set of switches.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So now all we have to do is wait, and eventually the console will give you the good news:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ***** BenchmarkRunner: End *****
// ** Remained 0 benchmark(s) to run **
Run time: 00:03:44 (224.56 sec), executed benchmarks: 3

Global total time: 00:08:03 (483.58 sec), executed benchmarks: 15
// * Artifacts cleanup *
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;All good! The result files will have been pushed to the &lt;code&gt;BenchmarkDotNet.Artifacts&lt;/code&gt; folder:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Directory: C:\...\benchmarking-performance-part-2\publish\BenchmarkDotNet.Artifacts


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----          4/1/2021  11:50 AM                results
-a----          4/1/2021  11:50 AM         128042 BenchmarkRun-20210401-114253.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The &lt;code&gt;.log&lt;/code&gt; file is simply the benchmark console echoed to file.&lt;/p&gt;

&lt;p&gt;Within the &lt;code&gt;/results&lt;/code&gt; directory you'll find the actual reports:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Directory: C:\...\benchmarking-performance-part-2\publish\BenchmarkDotNet.Artifacts\results


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a----          4/1/2021  11:47 AM         109014 benchmarkdotnetdemo.FibonacciBenchmark-measurements.csv
-a----          4/1/2021  11:47 AM         103104 benchmarkdotnetdemo.FibonacciBenchmark-report-full.json
-a----          4/1/2021  11:47 AM           3930 benchmarkdotnetdemo.FibonacciBenchmark-report-github.md
-a----          4/1/2021  11:47 AM           6632 benchmarkdotnetdemo.FibonacciBenchmark-report.csv
-a----          4/1/2021  11:47 AM           4484 benchmarkdotnetdemo.FibonacciBenchmark-report.html
-a----          4/1/2021  11:50 AM          83537 benchmarkdotnetdemo.SimpleBenchmark-measurements.csv
-a----          4/1/2021  11:50 AM          53879 benchmarkdotnetdemo.SimpleBenchmark-report-full.json
-a----          4/1/2021  11:50 AM           1215 benchmarkdotnetdemo.SimpleBenchmark-report-github.md
-a----          4/1/2021  11:50 AM           2119 benchmarkdotnetdemo.SimpleBenchmark-report.csv
-a----          4/1/2021  11:50 AM           1881 benchmarkdotnetdemo.SimpleBenchmark-report.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;As you can see, it's a mix of CSV, HTML, Markdown and JSON, ready for publication and reading.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;These formats are determined by either the benchmark code or the runtime arguments. I've included them all in the demo repo to give a feel of what's on offer.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  Interpreting the results
&lt;/h3&gt;

&lt;p&gt;We've &lt;a href="https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-1-39g3"&gt;previously discussed&lt;/a&gt; the various reports' contents. But suffice it to say that BenchmarkDotNet runs &amp;amp; reports benchmarks &lt;strong&gt;but does not evaluate them&lt;/strong&gt;. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Evaluating these benchmarks and acting on them is a fairly complex problem: what analysis method should we use? How do we run and capture results? Can we use benchmarks as a PR gateway? This will be the subject of a future post.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But before we run, we'd like benchmarks running on every git push, right?&lt;/p&gt;


&lt;h1&gt;
  
  
  Running benchmarks in CI
&lt;/h1&gt;

&lt;p&gt;Let's implement the simplest possible approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build benchmarks&lt;/li&gt;
&lt;li&gt;run them&lt;/li&gt;
&lt;li&gt;capture the report files&lt;/li&gt;
&lt;li&gt;present for manual inspection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, benchmarks are built, run, and the results published as workflow artifacts. Anyone with access can download these artifacts.&lt;/p&gt;

&lt;p&gt;Because our repo is on GitHub, and we want to show this in the flesh, we'll be using GitHub Actions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One day, GitHub Actions will support deep artifact linking and one-click reports, just like Jenkins and TeamCity have provided for years. But until that day dawns, the tedium of download-extract-search is our lot :(&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's a super-simple GitHub Actions workflow:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
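
&lt;p&gt;In case the embed above doesn't render, here's a sketch of the workflow's shape, assembled from the steps discussed below (the &lt;code&gt;name&lt;/code&gt;, trigger and checkout/setup scaffolding are my assumptions; the canonical version lives in the repo at &lt;code&gt;./.github/workflows/dotnet.yml&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: .NET

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup .NET
        uses: actions/setup-dotnet@v1
      - name: Publish
        run: dotnet publish -c Release --verbosity normal -o ./publish/
      - name: Archive
        uses: actions/upload-artifact@v2
        with:
          name: benchmarkdotnetdemo
          path: ./publish/*
      - name: Run Benchmarks
        run: dotnet "./publish/benchmarkdotnetdemo.dll" -f "benchmarkdotnetdemo.*"
      - name: Upload benchmark results
        uses: actions/upload-artifact@v2
        with:
          name: Benchmark_Results
          path: ./BenchmarkDotNet.Artifacts/results/*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;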



&lt;blockquote&gt;
&lt;p&gt;If you're unfamiliar with Actions workflows, one of the best hands-on introductions is from &lt;a href="https://dev.to/newday-technology/api-s-from-dev-to-production-part-3-7dn"&gt;Pete King&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This workflow file is in the sample Github repository, under &lt;code&gt;./.github/workflows/dotnet.yml&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Looking at the workflow, let's skip past the job's build steps, as they're self-explanatory.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;code&gt;Publish&lt;/code&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Publish
      run: dotnet publish -c Release --verbosity normal -o ./publish/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here we prepare a fully publishable .NET Core application.&lt;br&gt;
We must always build with the Release configuration: BenchmarkDotNet will not run properly without normal compiler optimisations. The application with its dependencies, including the code-under-test, is pushed to a &lt;code&gt;./publish/&lt;/code&gt; directory within the job.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One glorious day, Windows and Linux will finally and completely converge on a single standard for directory path separators. Until that time, please be careful if you're writing these workflows on Windows!&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h4&gt;
  
  
  &lt;code&gt;Archive&lt;/code&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Archive 
      uses: actions/upload-artifact@v2
      with:
        name: benchmarkdotnetdemo
        path: ./publish/*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We're just archiving the binaries here, in case we want to distribute and run them locally.&lt;/p&gt;


&lt;h4&gt;
  
  
  &lt;code&gt;Run Benchmarks&lt;/code&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run Benchmarks    
      run: dotnet "./publish/benchmarkdotnetdemo.dll" -f "benchmarkdotnetdemo.*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is where we run the benchmarks.&lt;/p&gt;

&lt;p&gt;As of now there are no GitHub Actions to support benchmark running, so all we do here is run the console application itself within the GitHub Actions job.&lt;/p&gt;

&lt;p&gt;We're running all benchmarks in the &lt;code&gt;benchmarkdotnetdemo&lt;/code&gt; namespace, and we expect the results to be pushed to the same working folder.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note the double quotes! On Windows you won't need to quote these arguments, but you will for GitHub Actions. If you don't, you'll see strange command-line parsing errors.&lt;/p&gt;

&lt;p&gt;Previously I remarked that you should only run benchmarks on an idle machine. Here we'll be running them on virtualised hardware, where OS interrupts are an absolutely unavoidable fact of life. Clearly we're trading precision for convenience, and the code-under-test is simple enough not to worry too much about single-tick precision metrics.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h4&gt;
  
  
  &lt;code&gt;Upload benchmark results&lt;/code&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Upload benchmark results
      uses: actions/upload-artifact@v2
      with:
        name: Benchmark_Results
        path: ./BenchmarkDotNet.Artifacts/results/*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is where we present the results for inspection. &lt;/p&gt;

&lt;p&gt;We just zip up the benchmark result files into a single artifact called &lt;code&gt;Benchmark_Results&lt;/code&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  And lastly...
&lt;/h2&gt;

&lt;p&gt;That's it! Every time you push changes to this solution, benchmarks will be run. Performance degradations won't fail the build as we're not analysing the results, and we're certainly not applying quality gates in this solution. But you've got the minimum useful visibility, albeit very simply:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falx667v9j5fv4o03jpfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falx667v9j5fv4o03jpfj.png" alt="GHA-build-results"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h1&gt;
  
  
  What have we learned?
&lt;/h1&gt;

&lt;p&gt;Running benchmarks is very simple at face value, but there are considerations when doing so: you don't want to run them while you're rendering videos!&lt;/p&gt;

&lt;p&gt;Incorporating benchmark reporting into a CI pipeline is straightforward, although the lack of build reporting in GitHub Actions is a disappointment.&lt;/p&gt;

&lt;p&gt;We've yet to act on those benchmarks' results. For instance, we don't yet fail the build if our code-under-test is underperforming.&lt;/p&gt;


&lt;h1&gt;
  
  
  Up next
&lt;/h1&gt;

&lt;p&gt;How to fail the build if your code's underperforming.&lt;/p&gt;


&lt;h1&gt;
  
  
  Further Reading
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;Github Actions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/dotnet/core/tools/" rel="noopener noreferrer"&gt;Dotnet CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Demo Github source &amp;amp; Actions
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology" rel="noopener noreferrer"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-2" rel="noopener noreferrer"&gt;
        benchmarking-performance-part-2
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>dotnet</category>
      <category>performance</category>
      <category>ci</category>
      <category>benchmark</category>
    </item>
    <item>
      <title>Measuring performance using BenchmarkDotNet - Part 1</title>
      <dc:creator>Tony Knight</dc:creator>
      <pubDate>Mon, 15 Mar 2021 18:15:31 +0000</pubDate>
      <link>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-1-39g3</link>
      <guid>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-1-39g3</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;We all must build fast software, right? Right? It’s true that microservices tend to introduce latencies - stateless functions mean a whole lot more network calls, and you can wave goodbye to data locality. But a microservice is still dependent on its own code being fast, and at least fast enough. &lt;/p&gt;

&lt;p&gt;In the past we’ve relied on profilers, stopwatches, dedicated performance teams, and sometimes plain old complaints from the field. All of these methods require some form of measurement; unfortunately they tend to be “big picture” performance that lacks detail - and often without concrete scenarios. This gets very expensive very quickly.&lt;/p&gt;

&lt;p&gt;Very often, you just want to measure the code’s performance without the baggage of dependencies. You might have a critical piece of code that &lt;em&gt;absolutely must&lt;/em&gt; meet certain performance criteria. Measuring such code can obviously be done with profilers - dotTrace and ANTS, to name just two. The problem is they bring their own baggage as well and, worse, can’t be easily relied upon in a CI pipeline. So how can you measure microcode performance in CI? Unit tests are a terrible idea, so what else is there? Step forward BenchmarkDotNet.&lt;/p&gt;




&lt;h1&gt;
  
  
  TL;DR
&lt;/h1&gt;

&lt;p&gt;Measure your code’s performance with benchmarks at near-zero cost. All you need is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;.NET 7 SDK&lt;/li&gt;
&lt;li&gt;VS/VSCode&lt;/li&gt;
&lt;li&gt;BenchmarkDotNet from Nuget&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’ll talk about how to write simple benchmarks, how to run them and how to interpret the results.&lt;/p&gt;
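
&lt;p&gt;If you want to follow along from scratch, setting up a benchmark project is a couple of dotnet CLI commands (the project name here is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create a console project and add the BenchmarkDotNet package
dotnet new console -n benchmarkdotnetdemo
cd benchmarkdotnetdemo
dotnet add package BenchmarkDotNet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;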




&lt;h1&gt;
  
  
  What is BenchmarkDotNet?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://benchmarkdotnet.org/index.html" rel="noopener noreferrer"&gt;BenchmarkDotNet&lt;/a&gt; does what it says on the tin: benchmark .net code. It’s available as a &lt;a href="https://www.nuget.org/packages/BenchmarkDotNet/" rel="noopener noreferrer"&gt;Nuget packaged library&lt;/a&gt; for inclusion into your .net console applications. It is very &lt;a href="https://github.com/dotnet/BenchmarkDotNet#who-use-benchmarkdotnet" rel="noopener noreferrer"&gt;widely used&lt;/a&gt; by all major players in the .net world, including the &lt;a href="https://github.com/dotnet/runtime" rel="noopener noreferrer"&gt;dotnet core runtime project&lt;/a&gt; itself.&lt;/p&gt;

&lt;h1&gt;
  
  
  What does a HelloWorld benchmark look like?
&lt;/h1&gt;

&lt;p&gt;Let’s say you have a very basic Fibonacci implementation - and you want to measure its resource usage growth as more numbers are generated.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By “resource usage” I mean time and memory consumed per method call. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In other words, you'd want to know how it scales. Here's an implementation of "get the first N Fibonacci numbers":&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="n"&gt;IEnumerable&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;GetFibonacci&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;Enumerable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;No prizes are sought for best efficiency here. Please &lt;strong&gt;do not&lt;/strong&gt; take this as a reference implementation of Fibonacci!&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;To answer the scaling question, we would implement a benchmark, run it and analyse the results. Skipping ahead, a rendered benchmark report would look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccoaxassb9omnz96k1ji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccoaxassb9omnz96k1ji.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  What do all the headers actually mean?
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;The column&lt;/th&gt;
&lt;th&gt;What it means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Method&lt;/td&gt;
&lt;td&gt;The name of the code-under-test; a single benchmark may have several methods under test for, e.g. scenarios. This value is lifted directly from your benchmark code.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Count&lt;/td&gt;
&lt;td&gt;An arbitrary parameter: in this case the number of Fibonacci numbers generated by the method under test.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mean/Error/StdDev/StdError&lt;/td&gt;
&lt;td&gt;Execution time statistics. Note that these can be given down to nanoseconds, depending on how fast your code is. Low is best.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Min/Q1/Median/Q3/Max&lt;/td&gt;
&lt;td&gt;Quartile execution time statistics: note the time units. Low is best.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ops/sec&lt;/td&gt;
&lt;td&gt;The number of operations executed per second for the method/parameter combination. High is good.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rank&lt;/td&gt;
&lt;td&gt;The fastest performing method/parameter combination. Low is best.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gen 0/1/2&lt;/td&gt;
&lt;td&gt;The total number of collections per generation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Allocated&lt;/td&gt;
&lt;td&gt;Total bytes allocated against all generations&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note the header information in the report! It’ll give details on the OS, CPU, .Net version, JIT method and GC configuration. Always benchmark like-for-like!&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  OK… what do those numbers &lt;em&gt;really&lt;/em&gt; mean?
&lt;/h2&gt;

&lt;p&gt;Let’s look at each value of &lt;code&gt;Count&lt;/code&gt;; we’re using it here to get the first &lt;code&gt;Count&lt;/code&gt; numbers of the Fibonacci sequence.&lt;/p&gt;

&lt;p&gt;Where &lt;code&gt;Count&lt;/code&gt; is 1 the mean execution time is 103.4 nanoseconds. That’s 0.1 microseconds, or 0.0001 milliseconds. I like that: nice and fast. &lt;/p&gt;

&lt;p&gt;Where &lt;code&gt;Count&lt;/code&gt; is 13 (yes, the parameters themselves follow Fibonacci!) the mean time is 407.2 ns: four times what &lt;code&gt;Count=1&lt;/code&gt; is, yet the Count is 13 times bigger. I’ll take that, for now. &lt;/p&gt;

&lt;p&gt;Where &lt;code&gt;Count&lt;/code&gt; is 34 the mean time is 1,077.9 ns, or 1.077 microseconds, or just over 0.001 milliseconds. That’s 2.6 times more time than &lt;code&gt;Count = 13&lt;/code&gt;. Let’s compare against &lt;code&gt;Count = 1&lt;/code&gt;: &lt;code&gt;Count&lt;/code&gt; is 34 times bigger, yet takes only 10 times the time. I’ll take that too. &lt;/p&gt;
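
&lt;p&gt;Tabulating those three data points makes the sub-linear growth plain:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Count   Mean (ns)   Count growth   Time growth
1         103.4          1x           1.0x
13        407.2         13x           3.9x
34      1,077.9         34x          10.4x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;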

&lt;p&gt;If we plot &lt;code&gt;Count&lt;/code&gt; against the time ratio we see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmaausg26euewio5iowek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmaausg26euewio5iowek.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In other words, time used does not grow in proportion to &lt;code&gt;Count&lt;/code&gt;. If it did, the lines would be parallel.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So the benchmarks are showing that the implementation has reasonably acceptable scaling. It's not constant time, but it’s better than O(n) time: a pleasant surprise.&lt;/p&gt;

&lt;p&gt;If you're not satisfied with the performance results, simply make your changes, re-run the benchmarks &amp;amp; re-analyse. That's it.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  You haven’t mentioned the memory yet, have you?
&lt;/h2&gt;

&lt;p&gt;Trust me, I’m getting to that. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pay particular attention to memory usage. Garbage collections and memory allocations are as important as sheer speed!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Count=1&lt;/code&gt; used 128 bytes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Count=13&lt;/code&gt; used 312 bytes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Count=34&lt;/code&gt; used 744 bytes.&lt;/li&gt;
&lt;/ul&gt;
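
&lt;p&gt;The same tabulation for memory shows a similar sub-linear pattern:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Count   Allocated (bytes)   Count growth   Memory growth
1             128                1x            1.0x
13            312               13x            2.4x
34            744               34x            5.8x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;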

&lt;p&gt;If we plot &lt;code&gt;Count&lt;/code&gt; against the allocation growth ratios, we see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft87hec7nhpqs0z0ultm8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft87hec7nhpqs0z0ultm8.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This means the used memory isn’t constant either: the memory used for &lt;code&gt;Count=34&lt;/code&gt; is greater than the memory used for &lt;code&gt;Count=1&lt;/code&gt;. Again, it's better than O(n). To my mind this is OK, but not great: we need more investigation. The cost is probably incurred by &lt;code&gt;yield return&lt;/code&gt;, but do we want to sacrifice the readability? Probably not, but in any case we’re getting new perspectives on our code. &lt;em&gt;This is a good thing&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  What other rendered reports can you get?
&lt;/h2&gt;

&lt;p&gt;You can output your report in Markdown and many other formats; the Markdown output is GitHub-flavoured.&lt;/p&gt;

&lt;p&gt;You can use the following attributes to output the many different types of rendered reports:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;JsonExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Full&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;CsvMeasurementsExporter&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;CsvExporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CsvSeparator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Comma&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;HtmlExporter&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;MarkdownExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GitHub&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;An example of the GitHub Markdown report:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8eh79e4i3zd4zoti91b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8eh79e4i3zd4zoti91b.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Charting is supported through &lt;a href="https://www.r-project.org/" rel="noopener noreferrer"&gt;the R project&lt;/a&gt;. As R is a world in itself, I’m going to skip the subject. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want charts, consider importing the rendered data into Excel. The &lt;code&gt;CsvExporter&lt;/code&gt; attribute will generate a CSV with the data you need.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Full code example
&lt;/h2&gt;

&lt;p&gt;What does the benchmark code look like using BenchmarkDotNet? It might surprise you to see how simple it is.&lt;/p&gt;

&lt;p&gt;BenchmarkDotNet relies on declarative code over which it will reflect. Leaving aside the class attributes (more on those later), note the &lt;code&gt;[Params]&lt;/code&gt; attribute over &lt;code&gt;Count&lt;/code&gt; from the report above, and likewise the &lt;code&gt;[Benchmark]&lt;/code&gt; attribute over &lt;code&gt;Fibonacci()&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Linq&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Attributes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Exporters.Csv&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;benchmarkdotnetdemo&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;InProcess&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;MemoryDiagnoser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;RankColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MinColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MaxColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q1Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q3Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AllStatisticsColumn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;JsonExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Full&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CsvMeasurementsExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;CsvExporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CsvSeparator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Comma&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;HtmlExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MarkdownExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GitHub&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;GcServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FibonacciBenchmark&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;Params&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;21&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;34&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;Count&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Fibonacci&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;xs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetFibonacci&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;ToList&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;You’ll notice that the benchmarks have a return type of &lt;code&gt;void&lt;/code&gt;  and do not have any assertions. Remember: we’re not proving functional correctness here, we’re measuring resource usage.&lt;/p&gt;
&lt;/blockquote&gt;
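
&lt;p&gt;One caveat worth knowing (this is a variation on the demo code, not part of it): because the demo benchmark discards &lt;code&gt;xs&lt;/code&gt;, an aggressive JIT could in principle eliminate the work as dead code. A common defensive pattern is to return the result instead; BenchmarkDotNet consumes benchmark return values for exactly this reason:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Benchmark]
public List&amp;lt;int&amp;gt; Fibonacci()
{
    // Returning the list lets BenchmarkDotNet consume it,
    // so the JIT cannot treat the call as dead code.
    return Count.GetFibonacci().ToList();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;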


&lt;h1&gt;
  
  
  Show me the code!
&lt;/h1&gt;

&lt;p&gt;I’ve created a simple BenchmarkDotNet implementation here:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology" rel="noopener noreferrer"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-1" rel="noopener noreferrer"&gt;
        benchmarking-performance-part-1
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;There’s only the one C# project in there - &lt;code&gt;benchmarkdotnetdemo.csproj&lt;/code&gt; - that contains the minimal files.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;BenchmarkDotNet will only work if the console project is built with a &lt;em&gt;Release&lt;/em&gt; configuration, that is with code optimisations applied. Running in &lt;em&gt;Debug&lt;/em&gt; will result in a &lt;em&gt;run-time error&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
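&lt;p&gt;To build and run the benchmarks locally, something like the following should work from the project directory (the exact path depends on the repository layout); any arguments after &lt;code&gt;--&lt;/code&gt; are passed through to BenchmarkDotNet:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Build with optimisations applied and run the benchmark console app
dotnet run -c Release
&lt;/code&gt;&lt;/pre&gt;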

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;This is the &lt;code&gt;Program.cs&lt;/code&gt; file, and like all C# console apps it needs an entry point: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Running&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;benchmarkdotnetdemo&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Program&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;Main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;try&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;BenchmarkSwitcher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FromAssembly&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Program&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;Assembly&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Exception&lt;/span&gt; &lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ForegroundColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ConsoleColor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Red&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WriteLine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ResetColor&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Setting aside the standard entry-point boilerplate, let’s go over it bit by bit.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Running&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;For bootstrapping BenchmarkDotNet, this is the only import you need.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="n"&gt;BenchmarkSwitcher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FromAssembly&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Program&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;Assembly&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This one-line-to-rule-them-all performs all the command-line parsing, help output, benchmark execution and report generation. &lt;/p&gt;

&lt;p&gt;One point worth noting is &lt;code&gt;.FromAssembly(typeof(Program).Assembly)&lt;/code&gt; - this tells BenchmarkDotNet where to look for benchmarks. Benchmarks are discovered internally by reflection - you’ll see soon enough.&lt;/p&gt;
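&lt;p&gt;If you don’t need assembly-wide discovery, BenchmarkDotNet also offers &lt;code&gt;BenchmarkRunner&lt;/code&gt; for running one known benchmark class directly - a minimal sketch:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using BenchmarkDotNet.Running;

// Run a single benchmark class instead of scanning the whole assembly
var summary = BenchmarkRunner.Run&amp;lt;SimpleBenchmark&amp;gt;();
&lt;/code&gt;&lt;/pre&gt;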

&lt;blockquote&gt;
&lt;p&gt;NOTE: If you run the project without any command-line arguments, BenchmarkDotNet assumes an interactive CLI. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;.Run(args)&lt;/code&gt; returns a sequence of report objects containing the same data used for the rendered reports; I’ve excluded them here for simplicity. If you want to run benchmarks and fail CI builds when performance dips, they are the first place to look.&lt;/p&gt;
&lt;/blockquote&gt;
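&lt;p&gt;As a rough sketch of that idea (the 1 ms threshold here is invented purely for illustration), the returned summaries can be inspected and turned into a non-zero exit code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System.Linq;
using BenchmarkDotNet.Running;

var summaries = BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);

// ResultStatistics.Mean is in nanoseconds; fail if any benchmark's mean exceeds 1 ms
var anyTooSlow = summaries
    .SelectMany(s =&amp;gt; s.Reports)
    .Any(r =&amp;gt; r.ResultStatistics?.Mean &amp;gt; 1_000_000);

return anyTooSlow ? 1 : 0;
&lt;/code&gt;&lt;/pre&gt;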


&lt;h2&gt;
  
  
  Create a new benchmark
&lt;/h2&gt;

&lt;p&gt;There is a file called &lt;code&gt;SimpleBenchmark.cs&lt;/code&gt;. Let’s have a look.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Attributes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Exporters.Csv&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;benchmarkdotnetdemo&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;InProcess&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;MemoryDiagnoser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;RankColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MinColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MaxColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q1Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q3Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AllStatisticsColumn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;JsonExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Full&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CsvMeasurementsExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;CsvExporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CsvSeparator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Comma&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;HtmlExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MarkdownExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GitHub&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;GcServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SimpleBenchmark&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;NoopTest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;AddTest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxValue&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MinValue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;MultiplyTest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="m"&gt;11&lt;/span&gt; &lt;span class="p"&gt;*&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
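&lt;p&gt;One subtlety: &lt;code&gt;AddTest&lt;/code&gt; and &lt;code&gt;MultiplyTest&lt;/code&gt; return their results, which helps stop the JIT discarding the computation as dead code. For benchmarks that produce sequences rather than single values, BenchmarkDotNet provides a &lt;code&gt;Consumer&lt;/code&gt; for the same purpose - a minimal sketch (the class name is invented):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Engines;

public class ConsumingBenchmark
{
    private readonly Consumer _consumer = new Consumer();

    // Consuming each element keeps the enumeration from being optimised away
    [Benchmark]
    public void Numbers() =&amp;gt; Enumerable.Range(0, 100).Consume(_consumer);
}
&lt;/code&gt;&lt;/pre&gt;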

&lt;h4&gt;
  
  
  FibonacciBenchmark.cs
&lt;/h4&gt;

&lt;p&gt;Just for completeness: note the same declarations as in &lt;code&gt;SimpleBenchmark.cs&lt;/code&gt;. In this case, we’re adding a &lt;code&gt;[Params]&lt;/code&gt; property to support benchmark permutations. &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Linq&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Attributes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Exporters.Csv&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;benchmarkdotnetdemo&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;InProcess&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;MemoryDiagnoser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;RankColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MinColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MaxColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q1Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q3Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AllStatisticsColumn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;JsonExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Full&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CsvMeasurementsExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;CsvExporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CsvSeparator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Comma&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;HtmlExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MarkdownExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GitHub&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;GcServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FibonacciBenchmark&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;Params&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;21&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;34&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;Count&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Fibonacci&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;xs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetFibonacci&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;ToList&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
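&lt;p&gt;&lt;code&gt;GetFibonacci&lt;/code&gt; is an extension method on &lt;code&gt;int&lt;/code&gt; defined elsewhere in the repository. Its implementation isn’t shown here, but a lazily-evaluated version might look like this (a sketch - the repository’s actual code may differ):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System.Collections.Generic;

public static class FibonacciExtensions
{
    // Lazily yields the first count Fibonacci numbers
    public static IEnumerable&amp;lt;long&amp;gt; GetFibonacci(this int count)
    {
        long a = 0, b = 1;
        for (var i = 0; i &amp;lt; count; i++)
        {
            yield return a;
            (a, b) = (b, a + b);
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;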

&lt;h3&gt;
  
  
  How are benchmarks executed?
&lt;/h3&gt;

&lt;p&gt;Without going into too much detail, BenchmarkDotNet runs your benchmarks many times over to settle on stable mean and median values. &lt;/p&gt;

&lt;p&gt;When you run the benchmarks you may first be confused by just how many iterations are involved, so here’s a simplified explanation. Modern operating systems are preemptive multitaskers; CPUs have caches, pipelines and instruction-reordering features; and .NET itself has a JIT compiler. This means that &lt;em&gt;no single execution of code can be relied upon to give a canonical result&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is part of the reason why unit tests are terrible for benchmarking! They only run once and incur their own (unaccounted) overheads.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;BenchmarkDotNet runs warm-up iterations before it takes representative measurements. These show up as various stages: OverheadJitting &amp;amp; WorkloadJitting, WorkloadPilot, OverheadWarmup and OverheadActual.&lt;/p&gt;

&lt;p&gt;JIT compilation comes at a cost: the first time any piece of .NET code executes, it must first be JIT-compiled. The more complex the code, the higher the JIT cost, usually showing up as extra CPU and elapsed time. As we’re interested only in steady-state performance, these steps eliminate JIT costs from the measurements.&lt;/p&gt;

&lt;p&gt;In the same vein, other warm-up steps are run to eliminate other “once only” costs - for instance, warming up CPU caches. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft434gva68vqgk4nfpd0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft434gva68vqgk4nfpd0m.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;After these steps have completed, BenchmarkDotNet will iterate these operations to yield the final statistics; these are shown as WorkloadActual steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkkshojo5dsobe2ogndi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkkshojo5dsobe2ogndi.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;blockquote&gt;
&lt;p&gt;If you want more detail, please refer to &lt;a href="https://benchmarkdotnet.org/articles/guides/how-it-works.html" rel="noopener noreferrer"&gt;BenchmarkDotNet’s own documentation&lt;/a&gt;. In these code samples we’re using the default &lt;code&gt;Throughput&lt;/code&gt; strategy for microbenchmarking.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  How long does it take?
&lt;/h3&gt;

&lt;p&gt;It depends ;) Simple calculations, such as those in the demo project, will run in under a minute. Adding permutations (such as with &lt;code&gt;[Params]&lt;/code&gt;) increases the benchmarking time linearly, as each parameter value is benchmarked in its own right.&lt;/p&gt;

&lt;p&gt;With that in mind, it’s clear that resource-hungry algorithms, benchmarked with a large variety of parameters, will take a considerable amount of time. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Don’t expect to parallelise BenchmarkDotNet: it runs benchmarks sequentially. Thread context switching is itself a cost and extremely difficult to compensate for.&lt;/p&gt;
&lt;/blockquote&gt;
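&lt;p&gt;One practical mitigation while iterating locally is to run only a subset of benchmarks with BenchmarkDotNet’s &lt;code&gt;--filter&lt;/code&gt; argument, for example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Run only the Fibonacci benchmarks (glob patterns are supported)
dotnet run -c Release -- --filter '*Fibonacci*'
&lt;/code&gt;&lt;/pre&gt;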


&lt;h1&gt;
  
  
  What have we learned?
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;We’ve seen how to get BenchmarkDotNet &lt;/li&gt;
&lt;li&gt;We’ve seen how to integrate it in a simple console application&lt;/li&gt;
&lt;li&gt;We’ve seen the minimum work needed to build benchmarks&lt;/li&gt;
&lt;li&gt;We’ve had a taste of the reports and inferences we can gain from BenchmarkDotNet&lt;/li&gt;
&lt;/ul&gt;


&lt;h1&gt;
  
  
  Next Steps
&lt;/h1&gt;

&lt;p&gt;How do we incorporate these benchmarks into a CI pipeline?&lt;/p&gt;


&lt;h1&gt;
  
  
  More Information
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Benchmark_(computing)" rel="noopener noreferrer"&gt;What is benchmarking - Wiki&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://benchmarkdotnet.org/index.html" rel="noopener noreferrer"&gt;BenchmarkDotNet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/dotnet/BenchmarkDotNet" rel="noopener noreferrer"&gt;BenchmarkDotNet on Github&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stebet.net/benchmarking-and-performance-optimizations-in-c-using-benchmarkdotnet/" rel="noopener noreferrer"&gt;A real world use case of BenchmarkDotNet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;GitHub repository: &lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology" rel="noopener noreferrer"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-1" rel="noopener noreferrer"&gt;
        benchmarking-performance-part-1
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Measuring performance with BenchmarkDotNet part 1&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/NewDayTechnology/benchmarkdotnetdemo/actions/workflows/dotnet.yml/badge.svg"&gt;&lt;img src="https://github.com/NewDayTechnology/benchmarkdotnetdemo/actions/workflows/dotnet.yml/badge.svg" alt="Build"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-1CODE_OF_CONDUCT.md" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/aa04298d32dcc4713314045eb64482ed5d22bf73c7131f3cd48fe0fda7e6f886/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f6e7472696275746f72253230436f76656e616e742d322e302d3462616161612e737667" alt="Contributor Covenant"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A simple demonstration of the superlative &lt;a href="https://benchmarkdotnet.org/index.html" rel="nofollow noopener noreferrer"&gt;BenchmarkDotNet&lt;/a&gt; and its integration into Github Actions.&lt;/p&gt;

&lt;p&gt;Measuring code performance is self evidently a vital discipline to software engineering and yet is so often skipped, usually for false economies. &lt;a href="https://benchmarkdotnet.org/index.html" rel="nofollow noopener noreferrer"&gt;BenchmarkDotNet&lt;/a&gt; makes this essential task simplicity itself, with a syntax and style that's immediately intuitive to anyone versed in unit testing.&lt;/p&gt;

&lt;p&gt;Just exercise your code in a declarative way, include it in your CI pipeline, and enjoy the results.&lt;/p&gt;

&lt;p&gt;This project just demonstrates the basics: the .net project, the CI pipeline and the resultant reports.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;The Benchmarks&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;NoopTest&lt;/code&gt;
The absolute minimum function that can be benchmarked - it does nothing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;AddTest&lt;/code&gt;
A simple addition metric, again of minimal complexity.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;MultiplyTest&lt;/code&gt;
A simple multiplication metric, again of minimal complexity.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;Fibonacci&lt;/code&gt;
Benchmarking a Fibonacci implementation, measuring the computation time for the first N Fibonacci numbers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Builds&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;Builds are managed with love…&lt;/p&gt;
&lt;/div&gt;


&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/NewDayTechnology/benchmarking-performance-part-1" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;

&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>dotnet</category>
      <category>performance</category>
      <category>metrics</category>
      <category>benchmark</category>
    </item>
  </channel>
</rss>
