<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: NewDay Technology</title>
    <description>The latest articles on DEV Community by NewDay Technology (@newday-technology).</description>
    <link>https://dev.to/newday-technology</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3581%2F61d0176e-4769-44f1-95f8-090f874e3956.png</url>
      <title>DEV Community: NewDay Technology</title>
      <link>https://dev.to/newday-technology</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/newday-technology"/>
    <language>en</language>
    <item>
      <title>The Backend Testing Breakup 💔</title>
      <dc:creator>Jay Glass</dc:creator>
      <pubDate>Mon, 16 Dec 2024 16:23:39 +0000</pubDate>
      <link>https://dev.to/newday-technology/the-backend-testing-breakup-4kj9</link>
      <guid>https://dev.to/newday-technology/the-backend-testing-breakup-4kj9</guid>
      <description>&lt;p&gt;Or: How to maximize the value tests provide&lt;/p&gt;

&lt;h2&gt;
  
  
  Preamble
&lt;/h2&gt;

&lt;p&gt;Ah, testing. If you’re like me and have a love-hate relationship with tests, this article is for you.&lt;/p&gt;

&lt;p&gt;Having the right testing at the right point in the development process can be a lifeline and prevent costly bugs getting into production, but test suites also have the tendency to grow into unwieldy monsters.&lt;/p&gt;

&lt;p&gt;Have you ever worked on a project where understanding and updating the tests was more complex than implementing the actual (money-making) business logic? If not, I guarantee at some point you will.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But it doesn’t have to be this way.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tests can be simple and manageable, and my hope in writing this article is to help steer testing in a less painful direction for everyone involved - myself especially.&lt;/p&gt;

&lt;h2&gt;
  
  
  Goals of this article
&lt;/h2&gt;

&lt;p&gt;To suggest a battle-tested way of making testing as painless as possible.&lt;/p&gt;

&lt;p&gt;To provide a way to implement this form of testing without impeding feature development.&lt;/p&gt;

&lt;p&gt;To cover the importance of buy-in and how to get it.&lt;/p&gt;

&lt;h2&gt;
  
  
  But first
&lt;/h2&gt;

&lt;p&gt;Let’s not forget the goal of testing: to give us confidence that our system will behave as expected in production.&lt;/p&gt;

&lt;p&gt;Any test we write should support this goal. &lt;/p&gt;

&lt;p&gt;So…&lt;/p&gt;

&lt;h1&gt;
  
  
  How do we make testing as painless as possible?
&lt;/h1&gt;

&lt;p&gt;With these simple principles:&lt;/p&gt;

&lt;p&gt;Test for each thing as early as possible&lt;/p&gt;

&lt;p&gt;Only test for each thing once&lt;/p&gt;

&lt;p&gt;But to dig into these principles we need to cover (and establish common language for) the levels of testing.&lt;/p&gt;

&lt;h1&gt;
  
  
  Levels of testing
&lt;/h1&gt;

&lt;p&gt;I have yet to work at a company that doesn’t have at least one form of testing named differently from everywhere else I’ve worked.&lt;/p&gt;

&lt;p&gt;We generally all agree on what a unit test is, but if some of the other names differ from what you’re used to, please bear with me. I’ve even half-jokingly suggested coming up with new, less-opinion-loaded names for the various levels of testing, but I usually get shot down because “there are existing names already” - and everyone then spends the rest of the session debating what those existing names' responsibilities actually are… 🤦‍♀️ So for now, let’s just go with what appears to be most common.&lt;/p&gt;

&lt;h2&gt;
  
  
  The test &lt;del&gt;pyramid&lt;/del&gt; &lt;del&gt;trophy&lt;/del&gt; &lt;del&gt;tetrahedron&lt;/del&gt; hierarchy
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuz9reqllzuw7zrnw24tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuz9reqllzuw7zrnw24tf.png" alt="Test Levels" width="519" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The test pyramid/trophy/(insert shape here) is great, but it does tend to direct focus towards having more tests at one specific level than at another - often translated as “level x should have more tests than level y”. In the Pyramid paradigm, that means writing more Unit Tests than Component Tests, more Component Tests than Contract Tests, and so forth all the way up. &lt;/p&gt;

&lt;h3&gt;
  
  
  #1
&lt;/h3&gt;

&lt;p&gt;Which detracts from the point: we shouldn’t be focusing on the number of tests. We should be focusing on practicality. And this is where “test each thing as early as possible” comes in:&lt;/p&gt;

&lt;p&gt;If some logic can be verified in a Unit Test, we should do so in a Unit Test.&lt;/p&gt;

&lt;p&gt;Because these test single things, they have single points of failure. A failing Unit Test tells us immediately what unit of logic failed which makes it extremely fast to find and fix.&lt;/p&gt;

&lt;p&gt;As we go up the testing levels, the number of moving parts increases, so if we skip Unit Tests completely and something breaks, did it break because of an internal logic issue? Or an integration issue? A network issue?&lt;/p&gt;

&lt;p&gt;Sure, you can start debugging or trawl through logs - but if a Unit Test could tell you exactly which bit of logic the failure came from, why not have it?&lt;/p&gt;

&lt;p&gt;And the same applies as we go up the levels: we shouldn’t be verifying integrations or contracts in Smoke Tests; putting them in the earlier levels means we can catch issues in a more isolated manner, making the root cause easier to identify.  &lt;/p&gt;

&lt;p&gt;A failing Contract Test takes you directly to the contract that is causing a problem. If we skip Contract Testing and a Smoke Test fails, we need to do a lot more digging to get to the information which tells us it is a contract causing the issue. &lt;/p&gt;

&lt;p&gt;So, when adding some new functionality, we should (repeat with me) “test each thing as early as possible”. &lt;/p&gt;

&lt;h3&gt;
  
  
  #2
&lt;/h3&gt;

&lt;p&gt;And if we have a Contract Test verifying that a contract is as expected, we don’t need to do so again in an Integration Test. Some third-party dependencies may not even need any Integration Tests at all, especially if all we are concerned with testing is that given a specific request format, we get a specific response format. &lt;/p&gt;

&lt;p&gt;We may find that maintaining the additional Integration Test gives us no value. &lt;/p&gt;

&lt;p&gt;The same goes for Component Tests: we shouldn’t be testing any logic. That should already be done in the Unit Tests. So if a Component Test fails, we know we’ve already proven the internal logic so it must be something else - like a configuration or internal dependency registration issue.&lt;/p&gt;

&lt;p&gt;If we start asserting logic in Component Tests, or contracts and integrations in Smoke Tests, we are duplicating our assertions, crossing the responsibilities of our levels of testing, and creating more test code to understand, maintain and update - leading to an eventual monster of a test suite that developers will treat with an appropriate level of dread.&lt;/p&gt;

&lt;p&gt;So mantra #2: &lt;em&gt;Only test for each thing once&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Defining the test types
&lt;/h1&gt;

&lt;p&gt;…or each test level’s responsibilities&lt;/p&gt;

&lt;p&gt;To help illustrate this we have the following scenario:&lt;/p&gt;

&lt;p&gt;A web API for a Zoo which allows zookeepers to: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add an animal, which will result in the animal being added to the database and an email notification being sent.&lt;/li&gt;
&lt;li&gt;change the feeding time of an animal, which will result in the time being updated in the database and an email notification being sent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg96qz9igudk54bfmmwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg96qz9igudk54bfmmwb.png" alt="Zoo API Architecture" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the tests, following the mantra of “Only test for each thing once”, we get the following breakdown:&lt;/p&gt;

&lt;h2&gt;
  
  
  Unit Tests
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fht5obdxcr6kx6meljwsc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fht5obdxcr6kx6meljwsc.png" alt="Unit" width="667" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are done on the smallest possible unit of logic. We are only concerned with validating that the unit of logic behaves as expected given different inputs. To ensure we are only testing our logic and not any dependencies, dependencies are mocked to return an expected response to help us assert the unit of logic’s behavior for a specific scenario. &lt;/p&gt;

&lt;p&gt;These tests are usually written against the public functions defined in business logic.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Example Test Cases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;AddAnimal&lt;/code&gt; returns success when the mocked &lt;code&gt;AnimalRepository&lt;/code&gt; reports that the record has been created.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AddAnimal&lt;/code&gt; returns an &lt;code&gt;AnimalAlreadyExists&lt;/code&gt; error when the mocked &lt;code&gt;AnimalRepository&lt;/code&gt; reports that the animal already exists.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AddAnimal&lt;/code&gt; returns success when the mocked &lt;code&gt;EmailClient&lt;/code&gt; reports it successfully sent the email.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AddAnimal&lt;/code&gt; returns an &lt;code&gt;InvalidEmail&lt;/code&gt; error when the mocked &lt;code&gt;EmailClient&lt;/code&gt; reports the email address is invalid.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AddAnimal&lt;/code&gt; only forwards the email request to the mocked &lt;code&gt;EmailClient&lt;/code&gt; if the call to the mocked &lt;code&gt;AnimalRepository&lt;/code&gt; returned successfully. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;etc&lt;/p&gt;

&lt;p&gt;and similar for the &lt;code&gt;ChangeFeedingTime&lt;/code&gt; function of the &lt;code&gt;FeedingTimeService&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
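&lt;p&gt;The article is language-agnostic, but as a rough sketch, the first two cases above might look like this in Python with &lt;code&gt;unittest.mock&lt;/code&gt; (the &lt;code&gt;AnimalService&lt;/code&gt; wiring and method names here are assumptions for illustration, not the author’s actual code):&lt;/p&gt;

```python
from unittest.mock import Mock

# Minimal stand-ins for the system under test (names assumed for illustration).
class AnimalAlreadyExists(Exception):
    pass

class AnimalService:
    def __init__(self, repository, email_client):
        self.repository = repository
        self.email_client = email_client

    def add_animal(self, animal):
        if self.repository.exists(animal["name"]):
            raise AnimalAlreadyExists(animal["name"])
        self.repository.create(animal)
        self.email_client.send(f"Added {animal['name']}")
        return "success"

def test_add_animal_returns_success_when_record_created():
    repository, email_client = Mock(), Mock()
    repository.exists.return_value = False  # mocked: no duplicate in the database
    service = AnimalService(repository, email_client)
    assert service.add_animal({"name": "Zebra"}) == "success"

def test_add_animal_errors_when_animal_already_exists():
    repository, email_client = Mock(), Mock()
    repository.exists.return_value = True  # mocked: duplicate already stored
    service = AnimalService(repository, email_client)
    try:
        service.add_animal({"name": "Zebra"})
        assert False, "expected AnimalAlreadyExists"
    except AnimalAlreadyExists:
        pass
    # the email request is only forwarded if the repository call succeeded
    email_client.send.assert_not_called()
```

&lt;p&gt;Each test has a single point of failure: one unit of logic, one mocked scenario.&lt;/p&gt;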

&lt;p&gt;💰 &lt;strong&gt;Tip:&lt;/strong&gt; Keeping functions/units-of-logic small results in smaller, more manageable and easier to understand Unit Tests.&lt;/p&gt;

&lt;p&gt;✅ Now we know that each individual piece of business logic works as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Component Tests
&lt;/h2&gt;

&lt;p&gt;The next step is to test that all these functions can be chained together and execution flows through the running application as expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0toxc5sl0xuazfcbkem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0toxc5sl0xuazfcbkem.png" alt="Component Flow" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we are only concerned with testing the execution flow, we can mock external dependencies (the Database and Email Web API).&lt;/p&gt;

&lt;p&gt;We have also already tested all the logic within the various functions so we don’t need to check any of those again either.&lt;/p&gt;

&lt;p&gt;All we assert is that after entering the system, execution flows through to the correct external dependencies and back resulting in the expected response.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Example Test Cases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When an add animal request is sent to the API, the animal is added to the database, an email is sent and we get a success response.&lt;/li&gt;
&lt;li&gt;When an add animal request is sent to the API but the animal already exists, we get an “animal already exists” error response.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;etc&lt;/p&gt;
&lt;/blockquote&gt;
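&lt;p&gt;A sketch of the first case above: the real internal chain runs end to end, and only the external boundaries (database, email web API) are mocked. The handler and service names are assumptions for illustration:&lt;/p&gt;

```python
from unittest.mock import Mock

# Component Test sketch: real internal wiring, mocked external dependencies.
class AnimalService:
    def __init__(self, db, email_api):
        self.db, self.email_api = db, email_api

    def add_animal(self, animal):
        if self.db.exists(animal["name"]):
            return {"status": 409, "error": "animal already exists"}
        self.db.insert(animal)
        self.email_api.post({"subject": f"Added {animal['name']}"})
        return {"status": 201}

def handle_request(service, request):
    # stand-in for the HTTP routing layer that would sit in front
    return service.add_animal(request["body"])

def test_add_animal_flows_through_to_both_dependencies():
    db, email_api = Mock(), Mock()
    db.exists.return_value = False
    response = handle_request(AnimalService(db, email_api), {"body": {"name": "Zebra"}})
    assert response["status"] == 201
    db.insert.assert_called_once()       # execution reached the database boundary
    email_api.post.assert_called_once()  # execution reached the email boundary
```

&lt;p&gt;Note there are no assertions on the logic itself - only that execution reached the right boundaries and produced the expected response.&lt;/p&gt;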

&lt;p&gt;✅ Now we know that when pieced together, execution flows through the individual pieces of business logic as expected.&lt;/p&gt;

&lt;p&gt;💰 &lt;strong&gt;Tip:&lt;/strong&gt; If you need to provide a mock of your entire application: these mocked external dependencies can be used in place of the actual implementations (e.g. by clever in-memory injection) and voilà: you have a version of your application which has all the logic, requires almost no additional maintenance, and isn’t beholden to external dependencies causing trouble.&lt;/p&gt;

&lt;h2&gt;
  
  
  Contract and Integration Tests
&lt;/h2&gt;

&lt;p&gt;We now know our application works when the external dependencies behave as expected - but how do we confirm that the external dependencies behave as expected? This is what Contract and Integration Tests do. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy11pednhkapdtv73f83x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy11pednhkapdtv73f83x.png" alt="External Dependencies" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Contract Tests
&lt;/h3&gt;

&lt;p&gt;These ensure that the external dependencies’ contracts conform to the schema that we expect. &lt;/p&gt;

&lt;p&gt;For example: We would have tests which send the Email Web API the various requests we use, to ensure they are accepted with the fields and their values in the format in which we are sending them.&lt;/p&gt;

&lt;p&gt;This means that if the Email Web API were to suddenly make a field mandatory that we are not supplying, the Contract Tests would start to fail and we would know there is something we need to change.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Example Test Cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Email Contract Tests&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When an email request is sent with all the required properties, we get a success response.&lt;/li&gt;
&lt;li&gt;When an email request is sent with a body greater than 5000 characters, we get an error response stating the body is too large.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Database Contract Tests&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When an add animal request is sent with all the required properties, we get a success response.&lt;/li&gt;
&lt;li&gt;When an add animal request is sent without a name, we get an error response stating the animal name is required.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;etc&lt;/p&gt;
&lt;/blockquote&gt;
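&lt;p&gt;Contract testing is often done with dedicated tooling (Pact is a common choice); purely to illustrate the idea, a hand-rolled check that our email request matches the provider’s documented schema might look like this - the schema, limits and field names are assumptions:&lt;/p&gt;

```python
# Hand-rolled contract check: does the request we send match the schema the
# email web API documents? (Schema and field names assumed for illustration.)
EMAIL_API_SCHEMA = {
    "required_fields": {"to", "subject", "body"},
    "max_body_chars": 5000,
}

def check_contract(request: dict) -> list:
    """Return a list of contract violations (empty means the contract holds)."""
    errors = []
    missing = EMAIL_API_SCHEMA["required_fields"] - request.keys()
    if missing:
        errors.append(f"missing required fields: {sorted(missing)}")
    if len(request.get("body", "")) > EMAIL_API_SCHEMA["max_body_chars"]:
        errors.append("body is too large")
    return errors

def test_our_email_request_satisfies_the_contract():
    request = {"to": "keeper@zoo.test", "subject": "Added Zebra", "body": "Hi!"}
    assert check_contract(request) == []

def test_oversized_body_violates_the_contract():
    request = {"to": "keeper@zoo.test", "subject": "Hi", "body": "x" * 5001}
    assert "body is too large" in check_contract(request)
```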

&lt;h3&gt;
  
  
  Integration Tests
&lt;/h3&gt;

&lt;p&gt;Similarly, Integration Tests ensure the external dependencies behave as we expect. &lt;/p&gt;

&lt;p&gt;E.g. The Database may have a rule which prevents Junior Zookeepers from setting feeding times. In this case we can have an Integration Test which asserts that if a Junior Zookeeper tries to change a feeding time, the database returns an error.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Example Test Cases&lt;/strong&gt;&lt;br&gt;
Email Integration Tests&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When an email request is sent, a test email account receives the email.&lt;/li&gt;
&lt;li&gt;When an email request is sent which is identical to a previous request, we get an error response stating the email is a duplicate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Database Integration Tests&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a Senior Zookeeper updates the feeding time, we get a success response.&lt;/li&gt;
&lt;li&gt;When a Junior Zookeeper updates the feeding time, we get an error response stating permission denied.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;etc&lt;/p&gt;
&lt;/blockquote&gt;
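&lt;p&gt;To illustrate the shape of such a test: exercise a &lt;em&gt;real&lt;/em&gt; dependency and assert it enforces a rule we rely on. Here SQLite stands in for the Zoo database so the sketch is runnable as-is; the schema and the uniqueness rule are assumptions:&lt;/p&gt;

```python
import sqlite3

# Integration Test sketch: a real database engine enforces the rule,
# rather than a mock programmed to pretend it does.
def connect_test_database():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE animals (name TEXT PRIMARY KEY, feeding_time TEXT)")
    return conn

def test_database_rejects_duplicate_animals():
    db = connect_test_database()
    db.execute("INSERT INTO animals VALUES ('Zebra', '09:00')")
    try:
        db.execute("INSERT INTO animals VALUES ('Zebra', '10:00')")
        assert False, "expected the database to reject the duplicate"
    except sqlite3.IntegrityError:
        pass  # the real dependency behaves as we expect
```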

&lt;p&gt;✅ Now we know that the application behaves as expected and the external dependencies behave as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Smoke Tests
&lt;/h2&gt;

&lt;p&gt;So what’s next? Well, when we deploy our application it may be running on different hardware, with different configuration, network rules and restrictions, and who knows what else. &lt;/p&gt;

&lt;p&gt;Smoke Tests help us identify any major issues which haven’t been and can’t be caught by the previous levels of testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8r3ab40f7gpcyd8gsibv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8r3ab40f7gpcyd8gsibv.png" alt="Full System" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The usual convention is to pick a few of the mission-critical features that touch different dependencies. Because we know the application behaves as expected in all other regards, we can keep the number of tests at this level very low. This has the bonus of saving us test-maintenance headaches, as smoke testing is often the most time-consuming and complex level, both to set up and to run.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Example Test Cases&lt;/strong&gt;&lt;br&gt;
When an add animal request is sent to the API, we receive a success response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We could create another test case: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a feeding time update request is sent to the API, we receive a success response.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But the first test case already touches both the database and the email dependency so for this example, the one is sufficient.&lt;/p&gt;
&lt;/blockquote&gt;
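&lt;p&gt;A Smoke Test is typically just a thin client pointed at the deployed environment. A sketch in Python, where a tiny local server stands in for the deployed API so the example is self-contained - the &lt;code&gt;/animals&lt;/code&gt; route and response codes are assumptions:&lt;/p&gt;

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeDeployedApi(BaseHTTPRequestHandler):
    """Tiny local stand-in for the deployed Zoo API."""
    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))  # drain body
        self.send_response(201)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"status": "created"}).encode())

    def log_message(self, *args):
        pass  # keep test output quiet

def smoke_test_add_animal(base_url: str) -> bool:
    """Mission-critical path: adding an animal returns a success response."""
    request = urllib.request.Request(
        f"{base_url}/animals",
        data=json.dumps({"name": "Zebra"}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return 200 <= response.status < 300

server = HTTPServer(("127.0.0.1", 0), FakeDeployedApi)
threading.Thread(target=server.serve_forever, daemon=True).start()
smoke_passed = smoke_test_add_animal(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
print("smoke test passed:", smoke_passed)
```

&lt;p&gt;In a real pipeline, &lt;code&gt;base_url&lt;/code&gt; would point at the freshly deployed environment rather than a local stand-in.&lt;/p&gt;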

&lt;p&gt;✅ Now we know that the application behaves as expected, the dependencies behave as expected, and when deployed, the application can start up, dependencies can be reached and some of the mission-critical features run as expected.&lt;/p&gt;

&lt;h1&gt;
  
  
  To summarize
&lt;/h1&gt;

&lt;p&gt;We know our logic works thanks to Unit Tests. &lt;/p&gt;

&lt;p&gt;We know execution flows through the application correctly thanks to Component Tests. &lt;/p&gt;

&lt;p&gt;We know external dependencies conform to the contracts we expect thanks to the Contract Tests. &lt;/p&gt;

&lt;p&gt;We know external dependencies behave as expected thanks to the Integration Tests.&lt;/p&gt;

&lt;p&gt;And now we know that the application runs in the various environments thanks to the Smoke Tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why does this work?
&lt;/h2&gt;

&lt;p&gt;Because each individual piece has been tested, we’ve tested that the individually tested pieces work together, we’ve tested that the external pieces behave as our individually tested pieces expect them to, and we’ve tested that our entire system still runs as expected in deployment environments.&lt;/p&gt;

&lt;h1&gt;
  
  
  Levels of testing conclusion
&lt;/h1&gt;

&lt;p&gt;We’ve got a clear split in the responsibilities of the various levels of testing.&lt;/p&gt;

&lt;p&gt;We have a common set of names to use for the various test levels.&lt;/p&gt;

&lt;p&gt;We know exactly what to test in each level.&lt;/p&gt;

&lt;p&gt;We know to test anything in the earliest possible level.&lt;/p&gt;

&lt;p&gt;We know not to test for anything that has already been proven in an earlier level.&lt;/p&gt;

&lt;h2&gt;
  
  
  And the result is
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Happy Developers&lt;/strong&gt; 🥳 Need I say more?&lt;/p&gt;

&lt;p&gt;…but in case I do:&lt;/p&gt;

&lt;p&gt;A much more manageable test suite with simple tests that are easy to understand, update and debug.&lt;/p&gt;

&lt;p&gt;A clear idea of how to implement testing when picking up a new task.&lt;/p&gt;

&lt;p&gt;A common testing paradigm to use across backend teams.&lt;/p&gt;

&lt;p&gt;And the &lt;strong&gt;real money saver&lt;/strong&gt;: Quicker feedback on issues with lower levels of testing highlighting issues sooner. &lt;/p&gt;

&lt;h2&gt;
  
  
  What about other forms of testing?
&lt;/h2&gt;

&lt;p&gt;To reduce the scope of this article non-functional forms of testing have been intentionally omitted. &lt;/p&gt;

&lt;p&gt;Performance, security, scalability etc. are all important; however, they differ greatly and their relevance varies from project to project. In my experience, the functional forms of testing tend to be the bigger time sinks as they are run and updated far more frequently. So the focus here is on the area with the biggest potential for improvement: the functional testing performed for every code change, from the developer’s machine to production, as part of the typical software development lifecycle.&lt;/p&gt;

&lt;p&gt;Let’s not overload ourselves by trying to do absolutely everything in one go.    &lt;/p&gt;

&lt;h1&gt;
  
  
  Putting into practice
&lt;/h1&gt;

&lt;p&gt;Before this article suffers the same fate of so many before it and gets lost in an abundance of browser tabs, if you see value in the concepts - why not do something about it?&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Guide
&lt;/h2&gt;

&lt;p&gt;The problem with a lot of tech improvements is that they’re seen as blockers. &lt;em&gt;Drop everything and work on this improvement which will prevent us from working on any new features or business as usual for some amount of time.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This may work for smaller tasks, but for something as large as overhauling an entire, mature test suite, it should rightly have the business quaking in their boots at the prospect of falling behind on delivery schedules. &lt;/p&gt;

&lt;p&gt;But there is a way to develop your cake and eat it:&lt;/p&gt;

&lt;h3&gt;
  
  
  The strangler pattern
&lt;/h3&gt;

&lt;p&gt;Following the concept of the programming pattern of the same name, instead of updating the entire test suite in one go, we can methodically update tests as we touch them as part of the day-to-day software development process.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Decide on and agree a direction for the test suite across the team.&lt;/li&gt;
&lt;li&gt;When a particular area of code is worked on, as part of that work update any of the tests that code uses to conform to the new direction.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This allows the migration of tests, bit by bit, &lt;strong&gt;without having a massive impact on the day-to-day commitments&lt;/strong&gt; to features and business-as-usual development tasks.&lt;/p&gt;

&lt;p&gt;AND it means any new tests are created following the new direction - so they &lt;strong&gt;don’t add to the mess that needs to be migrated later.&lt;/strong&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;After a few months of this, Engineering Improvement tasks can be raised to migrate the still-outstanding tests. These can be small enough to slowly pick away at without much affecting the team’s cadence.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And before you know it, you’ll be reaping the benefits of a smoother, quicker and easier test suite.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting buy-in
&lt;/h2&gt;

&lt;p&gt;But before doing anything, you’ll need to first convince your team and then convince the people who decide how your team spends its time that this is a worthy endeavor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For the team:

&lt;ul&gt;
&lt;li&gt;if the current test suite is painful enough that you’re reading this article - that’s the job done. Otherwise one might suggest that this is a way to ensure tests don’t get to that point of pain.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;💰 &lt;strong&gt;Tip:&lt;/strong&gt; Make this one of your or your team’s official objectives to get additional visibility and support within your company. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For the business:

&lt;ul&gt;
&lt;li&gt;increase feature delivery by reducing the overhead/cost of developing new features due to the time spent updating unwieldy tests
&lt;/li&gt;
&lt;li&gt;maintain and improve client confidence with the software reliability increase due to improved testing&lt;/li&gt;
&lt;li&gt;this will also make for happier developers which means less developer churn - less costly onboarding&lt;/li&gt;
&lt;li&gt;if multiple teams adopt this, the company will benefit from the knowledge-share across teams and the cohesion that comes from having a shared testing paradigm.&lt;/li&gt;
&lt;li&gt;and can be done without additional cost if done with a strangler approach - which is to only move testing to the new way of doing so when the test code for that specific feature needs to be modified anyway&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Fin
&lt;/h1&gt;

&lt;p&gt;And there you have it. The lessons I’ve learned (and what many others have shared with me) over the years to help take testing from a brittle behemoth to a simple, structured, manageable set of utilities that gives us confidence in our system. &lt;/p&gt;

&lt;p&gt;We’ve covered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The levels of testing&lt;/li&gt;
&lt;li&gt;Testing each thing as early as possible&lt;/li&gt;
&lt;li&gt;Only testing for each thing once&lt;/li&gt;
&lt;li&gt;How to migrate old tests without costing the company&lt;/li&gt;
&lt;li&gt;How to get understanding and buy-in from the company&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading and happy testing! 🤖&lt;/p&gt;

</description>
      <category>backend</category>
      <category>testing</category>
      <category>refactoring</category>
      <category>techdebt</category>
    </item>
    <item>
      <title>Estimating the RU throughput requirements for Azure Cosmos DB and predicting cost</title>
      <dc:creator>Alessio Franceschelli</dc:creator>
      <pubDate>Tue, 30 May 2023 15:42:26 +0000</pubDate>
      <link>https://dev.to/newday-technology/estimating-the-ru-throughput-requirements-for-azure-cosmos-db-and-predicting-cost-3hnc</link>
      <guid>https://dev.to/newday-technology/estimating-the-ru-throughput-requirements-for-azure-cosmos-db-and-predicting-cost-3hnc</guid>
      <description>&lt;p&gt;Azure Cosmos DB is a globally distributed, multi-model database service designed to scale seamlessly to handle variable workloads.&lt;/p&gt;

&lt;p&gt;One key aspect of Azure Cosmos DB's performance and scalability is its usage of Request Units (RUs). In Cosmos DB, RU throughput is important because it determines how many operations a database can perform per second. This means that by varying it, you can scale the performance of a database up or down, ensuring that your application can handle the expected workload and maintain the desired level of performance. Of course, this will also influence the cost you are billed for your database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Request Units (RUs)
&lt;/h2&gt;

&lt;p&gt;Request Units (RUs) are a measure of the resources needed to perform a particular operation in Cosmos DB. Each database operation in Cosmos DB, such as reading, writing, or querying data, requires a specific number of RUs.&lt;/p&gt;

&lt;p&gt;RUs are a fundamental part of the Cosmos DB pricing model. They determine the amount of throughput capacity that is required to perform a given set of operations on a database. The more RUs a database requires, the higher the cost of the database.&lt;/p&gt;

&lt;p&gt;RUs are used to abstract the complexity of underlying hardware and software from the user. Instead of worrying about hardware and software configurations, users can simply specify the required RUs for their workloads, and Cosmos DB will handle the rest.&lt;/p&gt;

&lt;p&gt;To estimate the required RU throughput for Azure Cosmos DB, it's important to consider the data operations that will be performed, the size of the data being accessed, and the performance characteristics of the database. The number of RUs required for a given data operation depends on the complexity of the query, the amount of data being accessed, and the consistency level of the database. In addition, the size of the data being stored, and the access patterns can affect the required RU throughput. For example, larger datasets may require more RUs to perform operations efficiently, and high-throughput workloads may require more RUs to ensure that the database can keep up with demand. By carefully considering these factors, users can estimate the required RU throughput for their specific workloads and ensure that their Cosmos DB database is provisioned with the appropriate amount of resources to handle their needs.&lt;/p&gt;

&lt;p&gt;We will now explore some common Azure Cosmos DB operations and their associated RU costs to give you a better understanding of how RUs affect your database performance and cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reading a single document
&lt;/h3&gt;

&lt;p&gt;Reading a single document by its ID and partition key is advertised as the solution that consumes the lowest number of RUs, making it the most efficient read operation in terms of RU usage. However, the size of the document drives the RU consumption and, for large documents, it will become more onerous than a query operation.&lt;/p&gt;

&lt;p&gt;For example, reading a 1KB document will consume 1 RU, while reading a 100KB document will consume 10 RUs. Indexing understandably has no impact on the RU consumption of these operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a Document
&lt;/h3&gt;

&lt;p&gt;Creating a new document in Azure Cosmos DB typically consumes a larger number of RUs than reading it, making it not particularly cost-effective for write-heavy scenarios. The RU cost for this operation depends on the size of the document and the number of indexed fields.&lt;/p&gt;

&lt;p&gt;For example, without any indexed fields, writing a 1KB document will consume 5.5 RUs, while a 100KB document will consume about 49 RUs - as the document grows, the cost scales with size much as it does for reads.&lt;br&gt;
Indexing also has an important impact: inserting a 1KB document with all fields indexed, which is the default behaviour, consumes about 16 RUs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Updating a Document
&lt;/h3&gt;

&lt;p&gt;Updating a document consumes about double the RUs of creating a new document; however, the RU consumption is potentially less affected by indexes, as not all indexed fields are necessarily updated.&lt;/p&gt;

&lt;p&gt;For example, without any indexed fields, updating a field in a 1KB document will consume about 10 RUs, while in a 100KB document it will consume about 97 RUs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sample RU consumption
&lt;/h3&gt;

&lt;p&gt;This table shows measured RU charges for different operations on sample 1KB and 100KB documents. Of course, there will be some variation based on the actual shape of your documents.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Insert&lt;/th&gt;
&lt;th&gt;Point Read&lt;/th&gt;
&lt;th&gt;Upsert&lt;/th&gt;
&lt;th&gt;Delete&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1KB Document&lt;br&gt;No Index&lt;/td&gt;
&lt;td&gt;5.5&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;10.3&lt;/td&gt;
&lt;td&gt;5.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1KB Document&lt;br&gt;All fields indexed (default)&lt;/td&gt;
&lt;td&gt;16.2&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;10.7&lt;/td&gt;
&lt;td&gt;16.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100KB Document&lt;br&gt;No Index&lt;/td&gt;
&lt;td&gt;48.8&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;96.7&lt;/td&gt;
&lt;td&gt;48.8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100KB Document&lt;br&gt;All fields indexed (default)&lt;/td&gt;
&lt;td&gt;59.4&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;97.1&lt;/td&gt;
&lt;td&gt;59.4&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
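&lt;p&gt;As a rough illustration, the measured charges above can be combined with an expected operation mix to estimate a steady-state RU/s requirement. This is only a sketch: the charges are the sample figures from the table, and the workload mix is invented for the example.&lt;/p&gt;

```python
# Measured RU charges for 1KB documents with the default (index everything)
# policy, taken from the sample table above.
RU_CHARGES_1KB_INDEXED = {
    "insert": 16.2,
    "point_read": 1.0,
    "upsert": 10.7,
    "delete": 16.2,
}

def estimate_rus_per_second(ops_per_second: dict) -> float:
    """Estimate steady-state RU/s from a mix of operations per second."""
    return sum(RU_CHARGES_1KB_INDEXED[op] * rate
               for op, rate in ops_per_second.items())

# A hypothetical workload: 100 point reads, 10 inserts, 5 upserts per second.
workload = {"point_read": 100, "insert": 10, "upsert": 5}
print(estimate_rus_per_second(workload))  # 100*1 + 10*16.2 + 5*10.7, about 315.5
```

The same approach works for any operation mix; just remember the charges are per-document samples, so real workloads with mixed document sizes need their own measurements.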

&lt;h2&gt;
  
  
  Query RU consumption
&lt;/h2&gt;

&lt;p&gt;When looking at the RU consumption of queries, there are many variables at play, which makes it harder to predict the performance and cost of the queries required to support your application's behaviour.&lt;/p&gt;

&lt;h3&gt;
  
  
  Indexes
&lt;/h3&gt;

&lt;p&gt;In Azure Cosmos DB, indexes play a vital role in optimizing the performance of read and query operations. While indexing can increase the RU consumption of write operations, such as inserts and updates, the benefits of using indexes often outweigh the associated costs.&lt;/p&gt;

&lt;p&gt;One of the main advantages of using indexes in Azure Cosmos DB is the substantial improvement in query performance. Indexes enable the database engine to quickly locate and retrieve documents based on specific attribute values. Without indexes, Cosmos DB would need to perform a full scan of the data, which is much slower and consumes more RUs. By using indexes, you can efficiently run complex queries with multiple filters, sorts, and joins, resulting in a significantly reduced query execution time and lower RU consumption.&lt;/p&gt;

&lt;p&gt;It’s important to note that the total number of documents doesn't affect RU consumption when running a query against an index in Cosmos DB, because the query is executed on the index. However, the number of documents returned by the query does affect RU consumption, as it determines the amount of data that needs to be read and returned.&lt;/p&gt;

&lt;p&gt;Cosmos DB offers automatic index management, which means that the database engine automatically maintains indexes for all properties in your JSON documents by default. This eliminates the need for manual index creation and maintenance, simplifying your database administration tasks. Automatic index management also ensures that new properties added to documents are indexed automatically, making your data model more flexible and adaptable to changes in your application. However, as we saw previously, indexes come at a cost, so if you don’t need to query on all fields, you should optimize your indexing policy to balance RU consumption and query performance.&lt;/p&gt;

&lt;p&gt;Although indexing might increase the RU consumption of write operations, the benefits of faster query execution, automatic index management, and flexible indexing policies often outweigh the associated costs. By carefully tuning your indexing strategy, you can strike the right balance between performance and cost efficiency in your Azure Cosmos DB deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choosing the partition key
&lt;/h3&gt;

&lt;p&gt;The choice of partition key in Azure Cosmos DB plays a crucial role in the overall performance and scalability of your database. A well-chosen partition key not only helps distribute your data evenly across multiple partitions but also affects the RU consumption of various operations.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cross-partition queries
&lt;/h4&gt;

&lt;p&gt;One of the main factors influencing RU consumption is the type of queries your application needs to perform. Queries that can be resolved within a single partition typically consume fewer RUs compared to queries that require scanning multiple partitions (cross-partition queries). By selecting a partition key that aligns well with your most common query patterns, you can minimize the number of cross-partition queries and reduce RU consumption.&lt;/p&gt;

&lt;h4&gt;
  
  
  Hot partitions
&lt;/h4&gt;

&lt;p&gt;An inappropriate partition key choice may lead to a scenario where a single partition receives a disproportionately high number of requests compared to the others. Hot partitions can cause increased RU consumption and may result in throttling, affecting your application's performance, or force you to increase the provisioned RUs of the whole database.&lt;br&gt;
For this reason, to keep RU consumption in check, it is critical to choose a partition key that evenly distributes the data and request load across all partitions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Strategies for selecting an optimal Partition Key
&lt;/h4&gt;

&lt;p&gt;While this is a big topic in its own right, there are many resources available, as it is not a problem specific to Azure Cosmos DB but a common requirement for most non-relational databases.&lt;br&gt;
The best approach is to first understand your application's most common query patterns and then choose a partition key that allows for efficient single-partition queries. Also consider data distribution: select a partition key that ensures an even spread of data and request load across all partitions to prevent hot partitions.&lt;/p&gt;

&lt;p&gt;Unfortunately, once a partition key is in place, it is no longer possible to change it. However, this should not prevent you from continuously monitoring your database's performance and RU consumption to identify potential issues with your partition key choice. Be prepared to migrate the data to a new container if adjusting your partition key strategy becomes necessary.&lt;/p&gt;

&lt;p&gt;Sometimes, finding a partition key that works well in most scenarios is not possible, and duplicating the data can, counterintuitively, become the best option.&lt;/p&gt;

&lt;h4&gt;
  
  
  Leveraging data duplication to optimize partition key usage
&lt;/h4&gt;

&lt;p&gt;There might be cases where a single partition key does not meet all query requirements efficiently. In such scenarios, duplicating data to another container with a different partition key can be a powerful strategy to optimize query performance and reduce RU consumption.&lt;/p&gt;

&lt;p&gt;Duplicating data to another container with a different partition key can significantly improve the performance of queries that would otherwise require cross-partition operations. Cross-partition queries consume more RUs and have higher latency compared to queries that can be resolved within a single partition. Depending on the volume of your operations, the RU cost of writing data to the second container can become negligible compared to the savings on queries.&lt;/p&gt;

&lt;h4&gt;
  
  
  Change Feed for data synchronization
&lt;/h4&gt;

&lt;p&gt;Azure Cosmos DB's Change Feed is a powerful feature that enables you to capture changes in your source container and replicate them to a target container with a different partition key. Change Feed ensures near-real-time data synchronization between the source and target containers, allowing you to maintain consistent data across multiple containers with different partition key configurations. This process is generally low-effort and resilient; however, it consumes RUs to read the data from the source container, on top of the obvious charges for writing to the target container.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sample RU consumption
&lt;/h3&gt;

&lt;p&gt;Here is a sample of the RU consumption of queries using indexes, compared with point reads, on containers with 1KB and 100KB documents.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Items Returned&lt;/th&gt;
&lt;th&gt;RUs consumed with&lt;br&gt;1KB documents&lt;/th&gt;
&lt;th&gt;RUs consumed with 100KB documents&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Point read&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Query&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;2.8&lt;/td&gt;
&lt;td&gt;4.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Query&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;3.2&lt;/td&gt;
&lt;td&gt;19.9&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Based on the results of the sample test, the point read method appears to be the most efficient in terms of RU consumption, as it only consumes 1 RU to retrieve a single document. However, if the requirement is to retrieve multiple documents, the query method becomes more efficient. It is important to note that the number of documents returned by a query has a significant impact on the RUs consumed. Therefore, when designing queries, consider how many documents will be returned and use indexing and query optimization techniques to minimize the RUs consumed.&lt;/p&gt;

&lt;p&gt;In the last column we can see a test for larger documents that highlights an unexpected result due to a quirk of the Cosmos DB RU charge calculation. Point reads are not actually the cheapest way to retrieve a single document, as generally presented in the documentation: for large documents, retrieving items via a query is actually cheaper.&lt;/p&gt;

&lt;h2&gt;
  
  
  Translating RUs into cost
&lt;/h2&gt;

&lt;p&gt;Now that we have a clear picture of the RU consumption of different operations, we are left with the fundamental question of how RUs translate into cost in Azure Cosmos DB, so that we can optimize our databases for cost efficiency.&lt;/p&gt;

&lt;p&gt;Azure Cosmos DB offers different pricing models, but the most relevant for this analysis are provisioned throughput and serverless.&lt;/p&gt;

&lt;p&gt;Serverless is designed for workloads with variable or unpredictable throughput requirements: you pay for the actual RUs consumed by your database operations instead of pre-allocating throughput capacity. It has, however, limitations on feature availability and on how much your database can grow.&lt;/p&gt;

&lt;p&gt;With provisioned throughput, you allocate a specific number of RUs per second to your database or container. This pre-allocated capacity determines the maximum throughput your database can handle at any given time. You are billed for the provisioned throughput, whether or not you fully utilize it.&lt;/p&gt;

&lt;p&gt;Luckily, you can enable autoscaling on top of provisioned capacity, so that the provisioned throughput instantly scales based on load, up to the specified maximum and down to 10% of it. However, even though the scaling is near-instantaneous, for each wall-clock hour you are charged for the maximum provisioned throughput reached in that hour, so supporting spiky workloads can be expensive.&lt;/p&gt;
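&lt;p&gt;The hourly billing rule can be sketched as follows. The billing logic mirrors the description above (each hour charged at its observed maximum, with a 10% floor); the workload numbers are invented for illustration.&lt;/p&gt;

```python
def autoscale_billed_rus(hourly_max_rus: list, max_provisioned: int) -> int:
    """Sum the RU/s billed over a period under autoscale: each wall-clock
    hour is charged at the highest RU/s reached in that hour, with a floor
    of 10% of the configured maximum throughput."""
    floor = max_provisioned // 10
    return sum(max(observed, floor) for observed in hourly_max_rus)

# A hypothetical day: baseline 1,000 RU/s, with one spike to 10,000 RU/s
# (lasting only a second) in a single hour; autoscale max is 10,000 RU/s.
day = [1000] * 24
day[9] = 10_000
print(autoscale_billed_rus(day, max_provisioned=10_000))
# 23 hours at 1,000 + 1 hour at 10,000 = 33,000 RU/s-hours billed
```

Even a one-second spike makes its whole hour billable at the peak rate, which is why smoothing bursty traffic can pay off.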

&lt;h3&gt;
  
  
  Impact of partitioning and storage amount
&lt;/h3&gt;

&lt;p&gt;One of the limitations of Azure Cosmos DB is that the provisioned throughput of all partitions scales together. While this doesn’t represent an issue for well-distributed workloads, if you have many partitions and experience load spikes in a small subset of them, your required provisioned throughput, and consequently your cost, will be significantly larger than the volume of operations would suggest.&lt;/p&gt;

&lt;p&gt;For example, if you have a large database of 40TB of data, you will probably have about a thousand physical partitions, due to the constraints on partition size. If your data is not particularly well distributed and you have a spiky workload, it could easily happen that a burst of operations hits one of the partitions at least once per hour.&lt;br&gt;
Alternatively, you could have an infrequent unoptimized query that scans many documents and hence consumes lots of RUs. In these cases, it would not be surprising to see a consumption spike of a couple of thousand RUs on specific partitions. However, given that all the partitions have to scale together, this would require a total provisioned capacity of a couple of million RUs! If this happens every hour for just a second, you would constantly be paying for this high number of RUs, even though the actual volume of operations on your database doesn't justify it.&lt;/p&gt;
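&lt;p&gt;The arithmetic of this scenario can be made explicit. This sketch simply mirrors the numbers in the example above:&lt;/p&gt;

```python
def required_total_rus(partition_spike_rus: int, physical_partitions: int) -> int:
    """Provisioned throughput is divided evenly across physical partitions,
    so a spike on one partition forces the total to scale until each
    partition's share covers that spike."""
    return partition_spike_rus * physical_partitions

# A 2,000 RU/s spike on one of ~1,000 physical partitions (a 40TB database)
# forces a total provisioned capacity of 2,000,000 RU/s.
print(required_total_rus(2_000, 1_000))  # 2000000
```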

&lt;h3&gt;
  
  
  Impact of data consistency and high availability on cost
&lt;/h3&gt;

&lt;p&gt;In distributed databases that rely on replication, there is a fundamental trade-off between read consistency, availability, latency, and throughput. The choice of data consistency and high availability settings can also have a substantial impact on the cost of running the database.&lt;/p&gt;

&lt;p&gt;First of all, adding more regions multiplies the cost, as you are essentially paying for each region your data is replicated to, independently of the number of operations it serves. In fact, Cosmos DB provisioned throughput is mirrored across all regions, so even regions not serving any traffic incur the same cost as the active region.&lt;/p&gt;

&lt;p&gt;Azure Cosmos DB also offers the ability to set up a database with multi-region writes, which provides better availability while sacrificing data consistency, but this doubles the cost. So, if for example we configure a database with 3 regions and multi-region writes enabled, we would be paying six times the cost of a single region.&lt;/p&gt;

&lt;p&gt;Regarding the different consistency models: when using the Strong Consistency or Bounded Staleness models to achieve better consistency, at the expense of write latency, read operations consume double the number of RUs, as they require a local minority quorum.&lt;/p&gt;

&lt;p&gt;One final consideration regards the use of availability zones to improve availability within a single region. This introduces a 25% cost increase for that region.&lt;/p&gt;
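&lt;p&gt;Putting the billing multipliers described above together (per-region replication, a 2x factor for multi-region writes, and a 25% uplift for availability zones), a quick sanity check:&lt;/p&gt;

```python
def cost_multiplier(regions: int,
                    multi_region_writes: bool = False,
                    availability_zones: bool = False) -> float:
    """Multiplier applied to the single-region, single-write provisioned cost."""
    multiplier = float(regions)      # each replica region is billed in full
    if multi_region_writes:
        multiplier *= 2              # multi-region writes double the cost
    if availability_zones:
        multiplier *= 1.25           # 25% uplift per zone-redundant region
    return multiplier

# Three regions with multi-region writes: six times the single-region cost.
print(cost_multiplier(3, multi_region_writes=True))  # 6.0
```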

&lt;p&gt;It is important to carefully evaluate the trade-offs between consistency, availability, and cost to ensure an optimal and cost-effective Cosmos DB solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Estimating RUs requirements and predicting cost
&lt;/h2&gt;

&lt;p&gt;Now that we understand the cost model of Azure Cosmos DB, we need to determine the amount of provisioned RUs that would be needed to support our applications, so that we can predict the running cost.&lt;/p&gt;

&lt;p&gt;While there are many factors, as discussed before, that influence how many RUs are required for the operations happening in any given second, if we manage to achieve a well-structured database, in particular one with well-balanced partitions, we can obtain a fairly accurate figure by focusing only on the main aspects: the amount of data, the size of the documents, the distribution of the different operations, and the number of regions we are going to replicate the database to.&lt;/p&gt;

&lt;p&gt;To quickly obtain an estimate based on this simplification, you can leverage the Azure Cosmos DB Capacity Planner, provided for free by Microsoft to assist you in determining your RU needs based on your workload's characteristics.&lt;/p&gt;

&lt;p&gt;You can access the Azure Cosmos DB Capacity Planner tool at the following URL: &lt;a href="https://cosmos.azure.com/capacitycalculator"&gt;Azure Cosmos DB Capacity Calculator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Capacity Planner tool requires you to input your workload details to provide a good estimate of RU consumption and cost. It also lets you indicate the difference in load between peak and quiet times, as well as combine different workloads together. You will also need to specify the number of regions you are going to replicate your database to and whether you intend to use multi-region writes which, as we have seen, has major cost implications.&lt;/p&gt;

&lt;p&gt;The main omission in this tool is that it does not estimate the cost of indexes, so your actual spend may vary based on how many indexed fields you have. As we have seen, these have a major impact on the RU consumption of create and update operations, so the variance can be significant for write-heavy scenarios, which, however, are not ideal use cases for Cosmos DB.&lt;/p&gt;
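&lt;p&gt;One rough way to compensate for this omission is to scale the write portion of the estimate by the indexed versus non-indexed ratio measured earlier (16.2 vs 5.5 RUs for a 1KB insert). The ratio is specific to those sample documents, so treat this purely as a ballpark adjustment.&lt;/p&gt;

```python
# Measured 1KB insert charges from the earlier sample table.
INSERT_RU_NO_INDEX = 5.5
INSERT_RU_ALL_INDEXED = 16.2

def adjust_write_estimate(write_rus: float) -> float:
    """Scale a write-RU estimate that ignores indexing by the measured
    overhead of the default (index-everything) policy."""
    return write_rus * (INSERT_RU_ALL_INDEXED / INSERT_RU_NO_INDEX)

# An un-adjusted estimate of 100 RU/s of writes grows to roughly 294.5 RU/s
# once the default indexing overhead is factored in.
print(round(adjust_write_estimate(100), 1))  # 294.5
```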

&lt;p&gt;The Capacity Planner tool provides a starting point for estimating your RU requirements. However, once you go live with your database, it's essential to monitor and adjust your provisioned throughput settings based on your actual usage patterns and performance metrics in a real-world scenario, and review your operations as needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring RUs usage
&lt;/h2&gt;

&lt;p&gt;Effectively monitoring and tuning Request Unit (RU) usage in Azure Cosmos DB is crucial for maintaining optimal performance and cost efficiency.&lt;/p&gt;

&lt;p&gt;Azure Cosmos DB provides various tools and metrics to monitor your RU usage. The Azure Portal provides built-in metrics to inspect the RU usage of your Cosmos DB account. Key metrics include provisioned throughput, total requests, average RU consumption, and 429 (Too Many Requests) responses that show throttling due to exceeding provisioned throughput. You can also configure Azure Monitor to collect Cosmos DB metrics and create custom dashboards, alerts, and reports to track your RU usage.&lt;/p&gt;

&lt;p&gt;Furthermore, when executing operations in Cosmos DB via the SDK in your application, you can retrieve metrics from the response, which include information on RU consumption, query execution time, and retrieved document count.&lt;/p&gt;

&lt;p&gt;If you need more detailed information, you can enable diagnostic settings in your Cosmos DB account to collect logs and metrics, and send them to a storage account, event hub, or Log Analytics workspace for further analysis and reporting. Please note that there is a cost involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, this blog post has explored the significance of Request Units (RUs) in the context of Azure Cosmos DB, emphasizing their role in managing resources and controlling costs. We delved into the intricacies of query RU consumption, the translation of RUs into cost, estimation of RU requirements, and monitoring of RU usage.&lt;/p&gt;

&lt;p&gt;The key takeaways from this post are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RUs are the primary measure of resource consumption in Cosmos DB and understanding them is crucial for optimizing performance and cost.&lt;/li&gt;
&lt;li&gt;The kind of operation and the document size influence the RU throughput requirements.&lt;/li&gt;
&lt;li&gt;Query RU consumption varies depending on factors like query complexity, indexing policies, and data size.&lt;/li&gt;
&lt;li&gt;Point reads are advertised as the most efficient read, but in practice they consume more RUs than queries when dealing with large documents or when multiple documents need to be retrieved.&lt;/li&gt;
&lt;li&gt;RUs directly impact cost; hence, it's important to have a thorough understanding of the relationship between RUs and pricing.&lt;/li&gt;
&lt;li&gt;Estimating RU requirements and predicting costs is essential for budgeting and capacity planning.&lt;/li&gt;
&lt;li&gt;Regular monitoring of RU usage can help in identifying bottlenecks, optimizing performance, and avoiding unexpected costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We encourage you to make use of the provided resources to calculate and optimize your own RU throughput requirements for Cosmos DB. By taking advantage of this knowledge and the Azure Cosmos DB Capacity Planner tool, you can ensure that your database operations are both efficient and cost-effective. However, don't hesitate to explore further and tailor your Cosmos DB setup to suit your unique requirements. Happy optimizing!&lt;/p&gt;

</description>
      <category>cosmosdb</category>
      <category>cloud</category>
      <category>programming</category>
      <category>database</category>
    </item>
    <item>
      <title>Easily making internal libraries debuggable in .NET</title>
      <dc:creator>Alessio Franceschelli</dc:creator>
      <pubDate>Thu, 15 Dec 2022 14:51:02 +0000</pubDate>
      <link>https://dev.to/newday-technology/easily-making-internal-libraries-debuggable-in-net-2748</link>
      <guid>https://dev.to/newday-technology/easily-making-internal-libraries-debuggable-in-net-2748</guid>
      <description>&lt;h3&gt;
  
  
  Easily making internal libraries debuggable in .NET
&lt;/h3&gt;

&lt;p&gt;At NewDay, we deeply care about software testing and, while we are big fans of TDD, there is no denying that being able to debug software effectively is a crucial aspect of software engineering.&lt;/p&gt;

&lt;p&gt;Favouring a debugging session instead of writing a new unit test is a big topic on its own, so here we will only focus on a specific aspect: being able to step into an internal library from a consuming application or service in .NET.&lt;/p&gt;

&lt;p&gt;.NET has a long history of providing a fully featured debugger via Visual Studio. When working with Microsoft libraries or many open-source ones, we get the ability to step into the code. How is that possible? And how can we achieve the same behaviour for our company's internal libraries?&lt;/p&gt;

&lt;h3&gt;
  
  
  Leveraging debug symbols
&lt;/h3&gt;

&lt;p&gt;In general, to debug a compiled language, on top of having access to the source code, you need the debug symbols, which map the binary to the original source code. In .NET, this role is fulfilled by the Program Database (PDB) file, nowadays in its portable format.&lt;/p&gt;

&lt;p&gt;For Microsoft libraries or many open-source ones, the PDBs are automatically retrieved by Visual Studio from the Microsoft public symbol server or from the &lt;a href="https://nuget.org" rel="noopener noreferrer"&gt;NuGet.org&lt;/a&gt; symbol packages (&lt;em&gt;.snupkg&lt;/em&gt;) feed, respectively.&lt;/p&gt;

&lt;p&gt;When working in an enterprise scenario, we don’t usually have either of those mechanisms available for our internal libraries: symbol servers have never been popular and are tricky to set up. On the other hand, while &lt;em&gt;snupkg&lt;/em&gt; is the standard for modern .NET libraries, it is not supported by the most popular software used to host internal package feeds, like &lt;a href="https://jfrog.com/artifactory/" rel="noopener noreferrer"&gt;JFrog Artifactory&lt;/a&gt; or the &lt;a href="https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-nuget-registry" rel="noopener noreferrer"&gt;GitHub Packages registry&lt;/a&gt;. For this reason, the best approach is to include the PDBs in the library's NuGet package itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Including PDBs in a NuGet package
&lt;/h3&gt;

&lt;p&gt;The common solution to provide debug symbols for private packages used to be to include the PDBs in the NuGet package alongside the &lt;em&gt;dll&lt;/em&gt;.&lt;br&gt;&lt;br&gt;
For example, in early versions of .NET Core you could achieve this by adding an admittedly hard-to-remember line to your &lt;em&gt;csproj&lt;/em&gt;.&lt;/p&gt;


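&lt;p&gt;A sketch of what that line looked like, assuming the commonly used &lt;code&gt;AllowedOutputExtensionsInPackageBuildOutputFolder&lt;/code&gt; property:&lt;/p&gt;

```xml
<PropertyGroup>
  <!-- Include the .pdb next to the .dll inside the NuGet package -->
  <AllowedOutputExtensionsInPackageBuildOutputFolder>$(AllowedOutputExtensionsInPackageBuildOutputFolder);.pdb</AllowedOutputExtensionsInPackageBuildOutputFolder>
</PropertyGroup>
```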


&lt;p&gt;Unfortunately, with changes in .NET Core 3.0 to the project system, &lt;a href="https://github.com/dotnet/sdk/issues/1458" rel="noopener noreferrer"&gt;it is no longer possible to consume the PDB files included in NuGet packages&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Luckily, we can rely on another mechanism that, while deeply different in nature, allows us to achieve the same result: embedding the PDB inside the DLL. It has a few drawbacks, like producing larger packages and not being able to easily strip PDBs out of deployables; however, these problems are rarely relevant for internal libraries, especially when used in internal microservices and APIs.&lt;/p&gt;

&lt;p&gt;In current .NET versions, this is easily achieved by adding the &lt;code&gt;&amp;lt;DebugType&amp;gt;embedded&amp;lt;/DebugType&amp;gt;&lt;/code&gt; property to your library's csproj, which is much easier to remember than the previous technique and may one day become the &lt;a href="https://github.com/dotnet/sdk/issues/2679" rel="noopener noreferrer"&gt;default behaviour&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There is a final aspect we need to mention when talking about including a PDB: making sure it is deterministic. While .NET builds have been deterministic by default for a while, ensuring the same DLL is produced no matter where it is built, to have PDBs with normalized paths to our source files we also need to set the &lt;code&gt;ContinuousIntegrationBuild&lt;/code&gt; property to true, usually with a condition so that the normalization is only applied when building in CI. We will look into an easy way to achieve all of this later on.&lt;/p&gt;


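&lt;p&gt;A minimal csproj sketch combining the embedded symbols with CI-only determinism, assuming the standard &lt;code&gt;ContinuousIntegrationBuild&lt;/code&gt; property and a &lt;code&gt;CI&lt;/code&gt; environment variable set by the build server:&lt;/p&gt;

```xml
<PropertyGroup>
  <!-- Embed the PDB inside the produced DLL -->
  <DebugType>embedded</DebugType>
  <!-- Normalize source paths only when building on a CI server -->
  <ContinuousIntegrationBuild Condition="'$(CI)' == 'true'">true</ContinuousIntegrationBuild>
</PropertyGroup>
```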


&lt;h3&gt;
  
  
  Getting the source code
&lt;/h3&gt;

&lt;p&gt;Now that our library contains the required metadata to map the DLL to the source code, we need to be able to easily retrieve that source code. Nowadays we all use version control systems, so the easiest way is to retrieve it directly from there. The .NET platform provides &lt;a href="https://github.com/dotnet/sourcelink" rel="noopener noreferrer"&gt;&lt;em&gt;Source Link&lt;/em&gt;&lt;/a&gt;, a nifty tool integrated into Visual Studio. It automatically includes metadata in NuGet packages that points to the source code hosted on our version control platform of choice, from which Visual Studio can then automatically download it when stepping into the code. Thanks to the support for multiple authentication mechanisms as well as on-premises solutions like &lt;em&gt;GitHub Enterprise&lt;/em&gt;, this works well in the enterprise scenario, allowing us to leverage the same tool as the open-source community without extra configuration: it is as easy as adding a package reference and a couple of project properties. But can it be even easier to integrate?&lt;/p&gt;

&lt;h3&gt;
  
  
  Putting it all together
&lt;/h3&gt;

&lt;p&gt;As we have seen, by putting a few pieces together and using the right configuration, we can provide a debugging experience for our internal libraries that rivals open-source and Microsoft libraries. Of course, we would not want to replicate a bunch of settings across every project and, while creating shared props or targets files is an option, there is no need for it.&lt;/p&gt;

&lt;p&gt;In fact, there is a .NET Foundation package called &lt;a href="https://github.com/dotnet/reproducible-builds" rel="noopener noreferrer"&gt;DotNet.ReproducibleBuilds&lt;/a&gt; that takes care of all the steps necessary to get Source Link fully working and CI builds deterministic, including logic to &lt;a href="https://github.com/dotnet/reproducible-builds/blob/956ec68ee3572d3c29e62c7d37aaf076647ab8c8/src/DotNet.ReproducibleBuilds/DotNet.ReproducibleBuilds.props#L15-L76" rel="noopener noreferrer"&gt;detect your CI server&lt;/a&gt;. All you need to do is add this package either to your &lt;em&gt;csproj&lt;/em&gt; or, more conveniently if you have multiple library projects in the same solution, to your &lt;code&gt;Directory.Build.props&lt;/code&gt; file.&lt;/p&gt;


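&lt;p&gt;A minimal &lt;code&gt;Directory.Build.props&lt;/code&gt; along these lines would do it (the version number is illustrative):&lt;/p&gt;

```xml
<Project>
  <ItemGroup>
    <!-- Enables Source Link, embedded PDBs and deterministic CI builds -->
    <PackageReference Include="DotNet.ReproducibleBuilds" Version="1.1.1" PrivateAssets="All" />
  </ItemGroup>
</Project>
```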


&lt;h3&gt;
  
  
  Investigating Source Link issues
&lt;/h3&gt;

&lt;p&gt;If you are having issues in navigating to the source code, you should check the logs in &lt;em&gt;Navigate to External Sources&lt;/em&gt; in the Visual Studio’s &lt;em&gt;Output View&lt;/em&gt;.&lt;br&gt;&lt;br&gt;
Also make sure you have &lt;em&gt;Enable Source Link support&lt;/em&gt; enabled and consider enabling the integration with &lt;em&gt;Git Credential Manager&lt;/em&gt; if you experience authentication issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie2g6v1sg4oyqst32n27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie2g6v1sg4oyqst32n27.png" alt="Output from Navigate to External Sources" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwg99vi1u7wlfyze98y3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwg99vi1u7wlfyze98y3v.png" alt="Source Link options in Visual Studio 2022" width="743" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Furthermore, don’t forget that to be able to step into libraries, apart from thrown exceptions, you need to disable &lt;em&gt;Just My Code&lt;/em&gt; and set the &lt;em&gt;Symbols loading&lt;/em&gt; rules appropriately.&lt;/p&gt;

&lt;h3&gt;
  
  
  Extra goodies
&lt;/h3&gt;

&lt;p&gt;Recent versions of Visual Studio 2022 also allow you to use &lt;em&gt;Go To Definition&lt;/em&gt; on references to our internal libraries and magically jump to the source code, relying on the same mechanism described above for debugging. While the feature is still not perfect, it works in most scenarios, and it keeps getting refined with each feature update!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrmmptjbwiy5vldfafwr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrmmptjbwiy5vldfafwr.png" alt="Visual Studio 2022 — Go To Definition" width="538" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  So easy!
&lt;/h3&gt;

&lt;p&gt;Thanks to the IDE's ability to automatically retrieve debug symbols and source code, allowing us both to &lt;em&gt;"step in"&lt;/em&gt; during debugging and to use &lt;em&gt;Go To Definition&lt;/em&gt; when navigating the codebase, working with private libraries has never been so easy!&lt;/p&gt;




</description>
      <category>development</category>
      <category>dotnet</category>
      <category>debug</category>
      <category>engineering</category>
    </item>
    <item>
      <title>Adopting SwiftUI</title>
      <dc:creator>Olivier Rigault</dc:creator>
      <pubDate>Fri, 11 Jun 2021 16:53:32 +0000</pubDate>
      <link>https://dev.to/newday-technology/adopting-swiftui-32go</link>
      <guid>https://dev.to/newday-technology/adopting-swiftui-32go</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;There has been a debate in the iOS developer community about adopting SwiftUI ever since Apple introduced the framework in 2019. Adopting a new technology is always risky and comes with many challenges, but at NewDay we nevertheless decided to embrace SwiftUI very early, in 2020.&lt;/p&gt;

&lt;p&gt;This post explains the reasons why we made this decision, the challenges that we faced, and how we embraced SwiftUI in our iOS development process.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is SwiftUI?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75l98oaatygbdnujd4yp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75l98oaatygbdnujd4yp.png" alt="SwiftUI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;SwiftUI is an innovative, exceptionally simple way to build user interfaces across all Apple platforms with the power of Swift. Build user interfaces for any Apple device using just one set of tools and APIs. With a declarative Swift syntax that’s easy to read and natural to write, SwiftUI works seamlessly with new Xcode design tools to keep your code and design perfectly in sync.&lt;/p&gt;

&lt;p&gt;Apple Inc.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://developer.apple.com/xcode/swiftui/" rel="noopener noreferrer"&gt;SwiftUI&lt;/a&gt; is Apple’s take on &lt;a href="https://en.wikipedia.org/wiki/Declarative_programming" rel="noopener noreferrer"&gt;Declarative UI Programming&lt;/a&gt;, and is a faster, cleaner and more interactive way to code UI on the iOS platform. It also comes with some caveats.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declarative syntax&lt;/li&gt;
&lt;li&gt;Fast, clean and interactive&lt;/li&gt;
&lt;li&gt;Works well with Combine&lt;/li&gt;
&lt;li&gt;UIKit compatibility&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supported only on iOS 13+&lt;/li&gt;
&lt;li&gt;Limited list of UI components&lt;/li&gt;
&lt;li&gt;Lack of support from the developer community&lt;/li&gt;
&lt;li&gt;Tightly coupled views&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Btw, what is Combine? 🤔
&lt;/h3&gt;

&lt;p&gt;It would not be fair to talk about SwiftUI without mentioning &lt;a href="https://developer.apple.com/documentation/combine" rel="noopener noreferrer"&gt;Combine&lt;/a&gt;, which is Apple’s take on &lt;a href="https://en.wikipedia.org/wiki/Reactive_programming" rel="noopener noreferrer"&gt;Reactive Programming&lt;/a&gt;. Like SwiftUI, Combine is only available from iOS 13+.&lt;/p&gt;

&lt;p&gt;Combine is in a position very similar to SwiftUI's: young, and still maturing. It is a strong contender to replace, in the next few years, third-party reactive programming frameworks such as &lt;a href="https://github.com/ReactiveX/RxSwift" rel="noopener noreferrer"&gt;RxSwift&lt;/a&gt; or &lt;a href="https://github.com/ReactiveCocoa/ReactiveSwift" rel="noopener noreferrer"&gt;ReactiveSwift&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Combine can be used without SwiftUI, but SwiftUI is very dependent on Combine.&lt;/p&gt;




&lt;h2&gt;
  
  
  Is SwiftUI Ready for Production?
&lt;/h2&gt;

&lt;p&gt;That's the big question. At NewDay we believed, as early as 2020, that we could push SwiftUI code to production and that doing so would benefit our development and delivery processes.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why did we adopt SwiftUI?
&lt;/h2&gt;

&lt;p&gt;Our decision was based on several factors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;iOS 13 adoption rate&lt;/li&gt;
&lt;li&gt;Pushing new features&lt;/li&gt;
&lt;li&gt;Replacing web views with native code&lt;/li&gt;
&lt;li&gt;Launching new applications&lt;/li&gt;
&lt;li&gt;Adopting the latest technologies&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  High iOS 13 adoption rate
&lt;/h3&gt;

&lt;p&gt;The adoption rate of new versions of mobile operating systems has always been much faster on iOS than on Android, and iOS 13 was no exception. Apple released iOS 13.0 in September 2019, and by January 2020, 80% of iOS users worldwide had already adopted it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz2s1bj3n4i64c75t49r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz2s1bj3n4i64c75t49r.png" alt="iOS 13 Adoption Rate - source: MixPanel"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We assumed that, given time, this rate would climb much higher. We originally planned to push our first SwiftUI code to production in September 2020, very close to the iOS 14 launch date, but we postponed this to later in the year, December 2020. &lt;/p&gt;

&lt;p&gt;As of February 2021, more than 92.5% of our iOS users were using our apps on iOS 14, with about 4% on iOS 13. These numbers seem to confirm that our original assumption, which we had shared with our stakeholders, was correct: the lack of support for older versions of iOS was not a blocker to fully embracing SwiftUI at NewDay.&lt;/p&gt;




&lt;h3&gt;
  
  
  A New Feature - Aqua Coach
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28m79ba9o2dwh90fhp46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28m79ba9o2dwh90fhp46.png" alt="Aqua Coach"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We decided to use SwiftUI to implement new features whenever possible. The very first feature to benefit from this was Aqua Coach, a credit score feature that we added to the Aqua Card app in December 2020.&lt;/p&gt;




&lt;h3&gt;
  
  
  A New Native Screen - Account Summary
&lt;/h3&gt;

&lt;p&gt;Our mobile apps are currently partly hybrid. Some screens simply use web views to display content, which gives users a poor experience. &lt;/p&gt;

&lt;p&gt;We plan to keep these web views as long as our apps support iOS 12 (or under), and present brand new native screens, coded in SwiftUI, for our users using iOS 13+ devices.&lt;/p&gt;

&lt;p&gt;The new Account Summary will be the first of such screens, and will be available soon.&lt;/p&gt;




&lt;h3&gt;
  
  
  A New Application - bip
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa728c1faih5xvslkazwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa728c1faih5xvslkazwb.png" alt="Bip"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have used SwiftUI and Combine to write our brand new &lt;strong&gt;bip&lt;/strong&gt; iOS app. We originally planned to launch the app in September 2020, but decided to postpone to Q2 2021 instead. This decision gave our users even more time to adopt iOS 13 (and now iOS 14).&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://www.bip.credit" rel="noopener noreferrer"&gt;bip&lt;/a&gt; is an innovative product, a mobile app only service, targeting a younger generation of users, who, potentially, own recent phones running on the latest version of the operating system.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;Using SwiftUI to code all the screens helped us tremendously, as the bip mobile app contains numerous screens, particularly across its application forms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.bip.credit" rel="noopener noreferrer"&gt;bip&lt;/a&gt; is now available on the App Store and Google Play Store.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://apps.apple.com/us/app/bip/id1541023957" rel="noopener noreferrer"&gt;&lt;img alt="Download on the App Store" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdk6za6l3ti58e3mklae.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://play.google.com/store/apps/details?id=com.newdaycards.bip&amp;amp;pcampaignid=pcampaignidMKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1" rel="noopener noreferrer"&gt;&lt;img alt="Get it on Google Play" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0028yb79a7pf382lr5nq.png"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Adopting The Latest Technologies
&lt;/h3&gt;

&lt;p&gt;NewDay is a tech-driven company. We want to adopt the best practices and the best technologies in order to deliver better and faster, and to create reliable apps. We also want to attract new talent who share these goals.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;SwiftUI is surely one of the best examples of such tools on the iOS platform.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Future of SwiftUI at NewDay
&lt;/h2&gt;

&lt;p&gt;We strongly believe that SwiftUI and Combine will play an important part in iOS development in the years to come. Even though we admit that these frameworks are still maturing, there is no doubt that Apple will refine and improve them over time.&lt;/p&gt;

&lt;p&gt;Apple being Apple, it is also likely that they will start gradually deprecating UIKit fairly soon, enticing iOS developers to adopt SwiftUI instead.&lt;/p&gt;

</description>
      <category>swift</category>
      <category>swiftui</category>
      <category>ios</category>
    </item>
    <item>
      <title>Measuring performance using BenchmarkDotNet - Part 3 Breaking Builds</title>
      <dc:creator>Tony Knight</dc:creator>
      <pubDate>Sat, 22 May 2021 00:25:13 +0000</pubDate>
      <link>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-3-breaking-builds-36il</link>
      <guid>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-3-breaking-builds-36il</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Previously we discussed the &lt;a href="https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-2-4dof"&gt;absolute bare minimum&lt;/a&gt; to run &lt;a href="https://benchmarkdotnet.org/"&gt;BenchmarkDotNet&lt;/a&gt; in your CI pipeline. Your code builds, benchmarks are taken, and you have to drill down into the numbers. &lt;/p&gt;

&lt;p&gt;But what if bad code is committed? A small change sneaks in, probably under tight deadlines, and suddenly your nice fast code runs like treacle. &lt;/p&gt;

&lt;p&gt;How would you know about - and more importantly &lt;em&gt;stop&lt;/em&gt; - such horrors? That's what we'll try and address in this post.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As before, we're only talking about pure code here, that is, your class methods and algorithms. APIs, services and applications are much more complex, and we haven't considered I/O. So let's keep it simple and just focus on the performance of pure code.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  What this post will cover
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Discuss ways to stop builds upon degraded code performance&lt;/li&gt;
&lt;li&gt;Installing tools in a sandbox environment&lt;/li&gt;
&lt;li&gt;Collecting benchmark data for analysis&lt;/li&gt;
&lt;li&gt;Analysing results and breaking builds&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What you'll need for this post
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;.NET 7 SDK installed on your local machine&lt;/li&gt;
&lt;li&gt;A BenchmarkDotNet solution&lt;/li&gt;
&lt;li&gt;An IDE, such as VS, VS Code or Rider.&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  The simplest possible way to break a build...
&lt;/h1&gt;

&lt;p&gt;...is, surprisingly, not a sledgehammer. It's even simpler than that.&lt;/p&gt;

&lt;p&gt;For the vast majority of build platforms, to stop a build you normally need your script to return a non-zero return code. That age-old trick is simple and effective: it stops bad things dead in their tracks. So let's use that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;We want our benchmark analysis to return 0 on success and 1 on failure&lt;/strong&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Easy! But this leaves us with a trickier problem. &lt;/p&gt;




&lt;h1&gt;
  
  
  How to detect performance has degraded?
&lt;/h1&gt;

&lt;p&gt;You've got the stats from BenchmarkDotNet. You now need to monitor each build's performance results, or more accurately, &lt;em&gt;detect deviance from accepted performance&lt;/em&gt;. &lt;/p&gt;

&lt;h3&gt;
  
  
  What is acceptable performance?
&lt;/h3&gt;

&lt;p&gt;This is a &lt;em&gt;very&lt;/em&gt; broad subject, and it's often difficult to put precise time limits on micro code performance. For much optimisation work, you'll be iteratively changing code so that &lt;em&gt;performance should always improve with each commit&lt;/em&gt;. Therefore, you can stop optimising &lt;em&gt;when the results are good enough&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;So, as we often do not have absolute time requirements and we iteratively improve our performance as a matter of course, we'll take a broad view:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Accepted performance is the best recorded benchmark time&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That leads us onto deviance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deviance from acceptable performance
&lt;/h3&gt;

&lt;p&gt;Why do we want deviance and not absolutes? &lt;em&gt;Because we cannot guarantee that repeated benchmark runs, even with a static codebase and the same infrastructure, will yield exactly the same time measurements over each iteration.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;And as each build feeds many time-critical activities - user acceptance, security validation and the like - we don't want a tiny deviation to choke off this supply of new features. &lt;/p&gt;

&lt;p&gt;How do we know what an appropriate deviance is, and how do we measure it? That's another &lt;em&gt;very&lt;/em&gt; broad subject and depends entirely on your circumstances. For now, let's take a simple (&amp;amp; admittedly crude!) method just to illustrate the key point of stopping slow code from getting into our main codebase.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A new measurement is acceptable when it stays below the baseline plus an allowed deviance percentage: &lt;code&gt;[new measurement] &amp;lt; [baseline measurement] + [deviance%]&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here we're simply allowing some slippage from the best recorded time.&lt;/p&gt;
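&lt;p&gt;To make the rule concrete, here's a minimal sketch of the check, assuming measurements are plain numbers in a common unit (nanoseconds, say). &lt;code&gt;within_tolerance&lt;/code&gt; is a hypothetical helper for illustration only, not part of BDNA:&lt;/p&gt;

```shell
# Hypothetical helper illustrating the acceptance rule above (not part of BDNA).
# Exits 0 (success) when the new measurement stays within the allowed slippage
# over the baseline (best recorded) measurement.
within_tolerance() {
  awk -v b="$1" -v n="$2" -v t="$3" 'BEGIN { exit (n >= b * (1 + t / 100)) }'
}

if within_tolerance 100 104 5; then
  echo "accepted"    # this branch runs: 104 is inside 5% slippage over 100
fi
if within_tolerance 100 112 5; then
  echo "accepted"
else
  echo "degraded"    # this branch runs: 112 breaches the 5% limit of 105
fi
```

&lt;p&gt;In practice BDNA performs this comparison for you via its &lt;code&gt;--tolerance&lt;/code&gt; option; the helper just shows the arithmetic.&lt;/p&gt;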

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Please remember:&lt;/strong&gt; the subject is extremely broad and this article is just an introduction to the subject. But for now, the main take-away point is: &lt;strong&gt;whatever the current performance, keep improving it and never degrade!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h1&gt;
  
  
  It seems we need a tool for this
&lt;/h1&gt;

&lt;p&gt;You could build your own, but here's something from our own stables: a dotnet tool to detect deviance in BenchmarkDotNet results:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarkdotnet.analyser"&gt;
        benchmarkdotnet.analyser
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A tool for analysing BenchmarkDotNet results
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;BenchmarkDotNet.Analyser (BDNA)&lt;/strong&gt; is a tool for iteratively collecting and analysing BenchmarkDotNet data. It's distributed as a dotnet tool, so you can use it locally and on almost any CI platform. You just need .NET 7 installed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;BDNA is in beta and we want to continually improve it. We welcome &lt;a href="https://github.com/NewDayTechnology/benchmarkdotnet.analyser/issues"&gt;bug reports and feature suggestions&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Installing
&lt;/h2&gt;

&lt;p&gt;The latest version is distributed via &lt;a href="https://www.nuget.org/packages/bdna/"&gt;Nuget&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the remainder of this section, I'll lead you through installing BDNA in a sandbox environment, so that if you run into any problems you can simply delete the directory and start again without any side effects.&lt;/p&gt;




&lt;h3&gt;
  
  
  Create a new sandbox
&lt;/h3&gt;

&lt;p&gt;The sandbox will be a directory on your local drive.  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We won't be pushing this directory to source control in this article. But the same steps are used in a cloned local repository.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;mkdir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;c:\projects\scratch\tools&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nx"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;projects\scratch\tools&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a tools manifest
&lt;/h3&gt;

&lt;p&gt;The tools manifest is simply a version list of the repo's tools, to ensure version consistency and stability: just like your own project's package dependencies. As we want these tools installed locally we'll create a new manifest in our directory:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;new&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;tool-manifest&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Globally installed .NET tools are very convenient: you install them once on your machine and update as necessary. But they place nasty dependencies on your build platform, and there's no guarantee your team members will use &lt;em&gt;exactly&lt;/em&gt; the same version. Locally installed tools provide consistency, and are installed into the local repository. &lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  Install BDNA
&lt;/h3&gt;

&lt;p&gt;All that's left now is to download and install BDNA:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This will install the latest non-preview version. If you want a specific version, just specify it, say:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--version&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;0.2.263&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;BDNA packages are &lt;a href="https://www.nuget.org/packages/bdna/"&gt;listed here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Check that BDNA is correctly installed:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;and you will get a list of repo-local tools:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Package Id      Version              Commands      Manifest
-------------------------------------------------------------------------------------------------------
bdna            0.2.263              bdna          projects\scratch\tools\.config\dotnet-tools.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Check that it's up and running:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;and you should be greeted with a banner, like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z2EFq6QW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/giszsu71jom47lu8aukw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z2EFq6QW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/giszsu71jom47lu8aukw.png" alt="alt text" width="587" height="174"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  The installation is done!
&lt;/h3&gt;

&lt;p&gt;You have successfully installed BDNA into your directory, and exactly the same steps will apply in a cloned git repository.&lt;/p&gt;


&lt;h1&gt;
  
  
  Checking benchmarks
&lt;/h1&gt;

&lt;p&gt;What remains now is to get some benchmarks. If you've followed this series, you'll have some demonstration projects that generate benchmarks, such as &lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-2"&gt;
        benchmarking-performance-part-2
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;






&lt;h3&gt;
  
  
  Get some benchmarks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-2/"&gt;Clone the repo&lt;/a&gt; and start building:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;clean&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;restore&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;build&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-c&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Release&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;src\benchmarkdotnetdemo\bin\Release\net7.0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Benchmarkdotnetdemo.dll&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-f&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The results will be found under &lt;code&gt;**\BenchmarkDotNet.Artifacts\results&lt;/code&gt;.&lt;/p&gt;


&lt;h3&gt;
  
  
  Collect the data from your recent BenchmarkDotNet run
&lt;/h3&gt;

&lt;p&gt;BDNA works by aggregating sequential benchmark runs. To aggregate (from the repo's root directory):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;aggregate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-new&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\src\benchmarkdotnetdemo\bin\Release\net7.0\BenchmarkDotNet.Artifacts\results"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-aggs&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\bdna"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-out&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\bdna"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-runs&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;30&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;To see all options try &lt;code&gt;dotnet bdna aggregate -?&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Repeatedly run benchmarks (&lt;code&gt;dotnet Benchmarkdotnetdemo.dll -f *&lt;/code&gt;) and aggregate (&lt;code&gt;dotnet bdna aggregate ...&lt;/code&gt;) to build a dataset. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When benchmarking you'll need points of reference for each datapoint. You can use &lt;code&gt;--build %build_number%&lt;/code&gt; when aggregating each benchmark run to annotate with the build number. Tags are also supported.&lt;/p&gt;
&lt;/blockquote&gt;
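&lt;p&gt;That run-and-aggregate cycle could be scripted as a loop. This is a sketch in POSIX shell rather than the PowerShell used above, with &lt;code&gt;dotnet&lt;/code&gt; stubbed out so the shape of the script can be shown standalone; in a real repository you'd delete the stub and let your CI server supply the build number:&lt;/p&gt;

```shell
# The benchmark-then-aggregate cycle as a loop. 'dotnet' is stubbed out here so
# the script's shape can run anywhere; delete the stub in a real repository and
# let your CI server supply the build number.
dotnet() { echo "would run: dotnet $*"; }

results=".\src\benchmarkdotnetdemo\bin\Release\net7.0\BenchmarkDotNet.Artifacts\results"
for build_number in 101 102 103; do
  dotnet Benchmarkdotnetdemo.dll -f '*'
  dotnet bdna aggregate -new "$results" -aggs ".\bdna" -out ".\bdna" -runs 30 --build "$build_number"
done
```

&lt;p&gt;Each pass appends a new datapoint to the &lt;code&gt;.\bdna&lt;/code&gt; aggregate set, annotated with its build number.&lt;/p&gt;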


&lt;h3&gt;
  
  
  Analyse the data
&lt;/h3&gt;

&lt;p&gt;Now we want to check the dataset for deviances. To see some failures in action, we'll assume a very strict deviance (0%) and allow no errors:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;analyse&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--aggregates&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\bdna"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--tolerance&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--maxerrors&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--verbose&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;dotnet bdna analyse&lt;/code&gt; will send results to the console. If all is well you'll see a nice confirmatory message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YeHHRyPl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8ia50fp9s3j7i5c1gmf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YeHHRyPl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8ia50fp9s3j7i5c1gmf.png" alt="alt text" width="384" height="79"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But if there are degraded benchmarks they'll be listed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kwKvjKWn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ib9vq7a5ng9derzaodx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kwKvjKWn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ib9vq7a5ng9derzaodx6.png" alt="alt text" width="800" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If too many errors are found, the tool's return code will be 1: &lt;strong&gt;your CI script will need to watch for this return code and fail the build accordingly&lt;/strong&gt;. &lt;/p&gt;
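&lt;p&gt;As a sketch, a POSIX-shell CI step might watch that return code like this. &lt;code&gt;run_analysis&lt;/code&gt; is a stub standing in for the real &lt;code&gt;dotnet bdna analyse&lt;/code&gt; call, so that the control flow is visible on its own:&lt;/p&gt;

```shell
# Stand-in for: dotnet bdna analyse --aggregates ".\bdna" --tolerance 0 --maxerrors 0
# Stubbed to fail so the failure branch below is exercised.
run_analysis() { return 1; }

if run_analysis; then
  echo "benchmarks within tolerance"
else
  echo "performance degraded: breaking the build"
  build_failed=1    # a real pipeline script would 'exit 1' here
fi
```

&lt;p&gt;PowerShell pipelines can check &lt;code&gt;$LASTEXITCODE&lt;/code&gt; after running the tool to the same effect.&lt;/p&gt;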


&lt;h3&gt;
  
  
  Reporting on the data
&lt;/h3&gt;

&lt;p&gt;Console logs are often fine for CI pipelines. Wouldn't it be good to get some graphs of performance over time?&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dotnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bdna&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;report&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--aggregates&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\bdna"&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;--verbose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-r&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;csv&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-r&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-f&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-out&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".\bdna_reports"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;For help see &lt;code&gt;dotnet bdna report --help&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;BDNA will build a CSV (and/or JSON) file containing the selected benchmarks. Each benchmark is exported with its namespace, class, method, parameters and annotations (build number, tags, etc.).&lt;/p&gt;

&lt;p&gt;Import the report file into your favourite BI tool and:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OxUx9kjv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sero4h4zjxykgv2l5ea2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OxUx9kjv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sero4h4zjxykgv2l5ea2.png" alt="alt text" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;These measurements were taken from a machine with a lot of background processing going on, so you see peaks and troughs in the measurements. The general trend is flat. This is good, as the code didn't change between builds.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h1&gt;
  
  
  What have we learned?
&lt;/h1&gt;

&lt;p&gt;We've discussed a very simple method of detecting degraded performance: comparing results against a best-known result.&lt;/p&gt;

&lt;p&gt;We've described how to set up local dotnet tools and NuGet configurations.&lt;/p&gt;

&lt;p&gt;We've introduced a tool that can collect, report on &amp;amp; detect performance degradations, and shown how it can be used in a sandbox environment.&lt;/p&gt;


&lt;h1&gt;
  
  
  More reading
&lt;/h1&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarkdotnet.analyser"&gt;
        benchmarkdotnet.analyser
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A tool for analysing BenchmarkDotNet results
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;




&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-2"&gt;
        benchmarking-performance-part-2
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>dotnet</category>
      <category>performance</category>
      <category>ci</category>
      <category>benchmark</category>
    </item>
    <item>
      <title>My sight and accessibility tools</title>
      <dc:creator>Matthew Lane</dc:creator>
      <pubDate>Fri, 07 May 2021 10:47:06 +0000</pubDate>
      <link>https://dev.to/newday-technology/my-sight-and-accessibility-tools-3i69</link>
      <guid>https://dev.to/newday-technology/my-sight-and-accessibility-tools-3i69</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;If I don’t have my glasses on or my contact lenses in, most visual interfaces are hard to use: the nature of my eyesight is such that anything over 20-30cm from my face becomes blurred, out of focus and indistinct. Even with WCAG AA recommendations, the contrast level is often not enough unless I have also pushed up the zoom size. I can easily dig around in the sofa to find my glasses and put them on, but there are many forms of sight degeneration affecting our customers/users that can’t be corrected. A deeper dive into how low vision is described, its incidence and causes can be found in footnote 1.&lt;/p&gt;

&lt;p&gt;I invite you to walk through an experience of this and join me for a three stage experiment at the end.&lt;/p&gt;

&lt;p&gt;Without my glasses the following image is roughly what I see, only the larger and bolder fonts are readable, images have little internal detail and none of the legal compliance text is readable.&lt;/p&gt;




&lt;h1&gt;
  
  
  What I see
&lt;/h1&gt;

&lt;p&gt;This is my world without my glasses. Thankfully there are built in tools in most operating systems to help me with this problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmvcman5flw3dparzvt2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmvcman5flw3dparzvt2.png" alt="The aqua login page, blurred to simulate what a user with low vision might experience."&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What do I do when my glasses are down the back of the sofa?
&lt;/h2&gt;

&lt;p&gt;One of the most used features for users with low vision is zoom, which allows content to be enlarged; if a website is well designed, the content will re-flow so that it is kept within the browser window. Being able to re-size the interface without a loss in functionality ensures that we meet &lt;a href="https://www.w3.org/WAI/WCAG21/Understanding/resize-text.html" rel="noopener noreferrer"&gt;WCAG 2.1 - 1.4.4: Resize text&lt;/a&gt;, a &lt;em&gt;AA standard&lt;/em&gt;. All browsers should support zooming using the Ctrl and plus keys; Ctrl and 0 will reset your zoom.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8uqpq8ygjo71z8nmq6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8uqpq8ygjo71z8nmq6q.png" alt="The aqua login page, zoomed in and blurred to simulate what a user with low vision might experience."&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;A system-wide tool that enables text to be read more clearly is &lt;em&gt;high contrast&lt;/em&gt;, which improves &lt;strong&gt;clarity&lt;/strong&gt; and &lt;strong&gt;readability&lt;/strong&gt;. Whilst this tool is useful, it is system wide, meaning the entire operating system is in high contrast mode.&lt;/p&gt;

&lt;p&gt;Our base level of contrast supports &lt;a href="https://www.w3.org/WAI/WCAG21/Understanding/contrast-minimum.html" rel="noopener noreferrer"&gt;WCAG 2.1 - 1.4.3: Contrast (Minimum)&lt;/a&gt;, a &lt;em&gt;AA standard&lt;/em&gt;, but without a control on our sites to increase contrast we cannot meet &lt;a href="https://www.w3.org/WAI/WCAG21/Understanding/contrast-enhanced.html" rel="noopener noreferrer"&gt;WCAG 2.1 - 1.4.6: Contrast (Enhanced)&lt;/a&gt;, a &lt;strong&gt;&lt;em&gt;AAA&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;standard&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Enabling high contrast for Windows can be found in start → settings → ease of access → high contrast.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvtqbxctzmt42yyikox2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvtqbxctzmt42yyikox2.png" alt="The aqua login page, switched to high contrast and blurred to simulate what a user with low vision might experience."&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Using zoom and high contrast together gives me a much better chance of understanding what’s happening on the login page: it becomes readable (for me without glasses) with no content hidden by horizontal scroll.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4o69hralsds247glm93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4o69hralsds247glm93.png" alt="The aqua login page, switched to high contrast, zoomed in and blurred to simulate what a user with low vision might experience."&gt;&lt;/a&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;The design systems we use at NewDay support our adherence to WCAG 2.1 AA guidelines when implemented to meet these standards.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;A note about magnifiers: whilst zooming in the browser is supported as standard, all major operating systems also include tools which magnify a specific portion of the screen; see footnote 2.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Use case: Focus is important
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsdv8rilyv5mnt17zmpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsdv8rilyv5mnt17zmpt.png" alt="The aqua login form, blurred to simulate what a user with low vision might experience."&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq907bgjnbv6umqfe0ktq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq907bgjnbv6umqfe0ktq.png" alt="The aqua login form with username field focused, blurred to simulate what a user with low vision might experience."&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;All browsers by default provide a focus state that will meet or exceed WCAG level A guidelines. If we change these default states, we must make sure that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“It must be possible for a person to know which element among multiple elements has the keyboard focus. If there is only one keyboard actionable control on the screen, the success criterion would be met because the visual design presents only one keyboard actionable item.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The preceding statement is from the Understanding Success Criterion document and is a Level A standard; see footnote 3.&lt;/p&gt;




&lt;h1&gt;
  
  
  An Experiment in Three Parts
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Part One - The Low Vision Simulator
&lt;/h2&gt;

&lt;p&gt;An experience of low vision can be simulated with a pair of old sunglasses and some sticky-back plastic: simply wrap one or two layers of it around the glasses and pop them on - you now have an approximation of low vision. (If, like me, you are lucky enough to use glasses for near sight, just pop them off; or make your own glasses and experience the world of someone with more limited vision.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Part Two - See what it’s like
&lt;/h2&gt;

&lt;p&gt;With your new vision simulators firmly on your face, open a webpage or download an application that you are unfamiliar with and attempt a login or registration journey without using any assistive tools - zoom, high contrast, screen readers, keyboard navigation etc. It’s important to be conscious of your experience; take notes about which actions you find difficult to achieve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part Three - Explore assistive technologies
&lt;/h2&gt;

&lt;p&gt;Re-visit the same webpage, still with your fancy eyewear on; this time make use of as many accessibility tools as you feel comfortable with. Most mobile and desktop platforms have built-in assistive technologies. Find a mix that enables you to use the application or site you were visiting, take notes about your experience and share them with your colleagues.&lt;/p&gt;




&lt;h1&gt;
  
  
  Footnotes
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.w3.org/WAI/GL/low-vision-a11y-tf/wiki/Overview_of_Low_Vision#Definition_of_Low_Vision" rel="noopener noreferrer"&gt;Overview of Low Vision - Low Vision Accessibility Task Force (w3.org)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pcmag.com/how-to/how-to-use-the-magnifier-tool-on-windows-mac-and-mobile" rel="noopener noreferrer"&gt;How to Use the Magnifier Tool on Windows, Mac, and Mobile&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.w3.org/WAI/WCAG21/Understanding/focus-visible.html" rel="noopener noreferrer"&gt;Understanding Success Criterion 2.4.7: Focus Visible (w3.org)&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>a11y</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Consumer Driven Contract Testing</title>
      <dc:creator>Elham Khani</dc:creator>
      <pubDate>Fri, 16 Apr 2021 12:36:15 +0000</pubDate>
      <link>https://dev.to/newday-technology/consumer-driven-contract-testing-1mni</link>
      <guid>https://dev.to/newday-technology/consumer-driven-contract-testing-1mni</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;At NewDay, we used to create backend services and front-end web apps within the same team (or between teams who worked together closely). As we have grown fast and become more mobile oriented, the system architecture is now developing into &lt;a href="https://samnewman.io/patterns/architectural/bff/"&gt;BFF (Backends for Frontends)&lt;/a&gt;. That means front-end developers must be able to trust APIs created by other teams, and API teams should feel safe to change their APIs.&lt;/p&gt;

&lt;p&gt;Contracts are essential in API based architectures (Microservices or BFF), but what if the contract needs to be changed? Who should be notified of the changes and how should they be notified?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;These are the issues Consumer Driven Contract Testing solves.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s essential that an API team (Provider) and a front-end team (Consumer) agree on a contract and develop their code accordingly. In the BFF model, the idea is that there are many Consumers of an API - for example, mobile and web app teams. When the Provider changes part of the contract, the traditional way of emailing or posting a message in a Slack/Teams channel is not practical - each Consumer may not know exactly which part of the contract they are using at their end.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Consumer driven contract testing is an API test approach which doesn’t test whether the API is working, but checks that the contract is not broken.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UdCWIlg2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gb55rsq3ytpt45w6f6kh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UdCWIlg2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gb55rsq3ytpt45w6f6kh.png" alt="API Contracts"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Example
&lt;/h1&gt;

&lt;p&gt;Let's say we have an API for setting marketing consent, and both mobile and web apps are using it; they are its consumers. The API accepts an object with five Boolean fields:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"post"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"sms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"phone"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"push"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The web app only needs to send four of these fields because "push" (push notifications) is not used in the web app. If the API team decides to change one of the fields "email", "post", "sms" or "phone", they need to notify both the web app team and the mobile team; but if they want to change the "push" field, they don't need to notify the web app team because it isn't using that field.&lt;/p&gt;
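To make that scoping concrete, here is a small illustrative check (the payloads are hypothetical, matching the example above): a change to "push" only matters to a consumer whose request actually contains that field.

```shell
# Hypothetical consumer payloads for the consent API described above.
web_payload='{"email": true, "post": true, "sms": true, "phone": true}'
mobile_payload='{"email": true, "post": true, "sms": true, "phone": true, "push": true}'

# A breaking change to "push" only affects consumers that send it.
affected_by_push() {
  case "$1" in
    *'"push"'*) echo affected ;;
    *)          echo unaffected ;;
  esac
}

echo "web app:    $(affected_by_push "$web_payload")"
echo "mobile app: $(affected_by_push "$mobile_payload")"
```

This is exactly the distinction a consumer-owned contract test encodes: each consumer asserts only on the fields it actually uses, so irrelevant provider changes pass silently.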

&lt;p&gt;In this scenario, mobile and web app teams write their own &lt;em&gt;contract tests&lt;/em&gt; and will run these tests in their own CI pipeline.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This approach allows the consuming team to understand contract changes from the provider, and whether there is an impact on them; i.e. the CI of the mobile / web app teams will fail if the contract tests fail, thereby notifying them, as consumers, of contract changes from the provider.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h1&gt;
  
  
  Tooling
&lt;/h1&gt;

&lt;p&gt;There are different tools to write Contract Tests. The most common ones are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Postman/Newman&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/pact-foundation/pact-net"&gt;Pact&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://spring.io/projects/spring-cloud-contract"&gt;Spring Cloud Contract&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At Newday, we use Postman to write contract tests, Newman to run the tests in CI (TeamCity) and the Newman reporter to see reports in TeamCity.&lt;/p&gt;




&lt;h2&gt;
  
  
  Writing contract tests in Postman
&lt;/h2&gt;

&lt;p&gt;Writing API tests in Postman is explained fully here: &lt;a href="https://www.postman.com/use-cases/api-testing-automation/"&gt;https://www.postman.com/use-cases/api-testing-automation/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that at this point we call our tests “API tests”; this is because Consumer driven contract tests are only an approach - they are basically API tests.&lt;/p&gt;

&lt;p&gt;Here is an example of an API test to validate the schema of our above-mentioned consent API scenario. This is the test written by the web app team. Recall that they are not interested in the “push” field, so they haven’t included it in the schema validation. Clicking “Send” runs the test, and the results are displayed in the “Test Results” tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l0KEOn31--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ttv3vumfi9qrermvbal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l0KEOn31--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ttv3vumfi9qrermvbal.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Save API collection
&lt;/h2&gt;

&lt;p&gt;To be able to run the tests in CI/CD, they need to be in a repository. First, &lt;a href="https://learning.postman.com/docs/getting-started/importing-and-exporting-data/#exporting-postman-data"&gt;export the API collection&lt;/a&gt; as a JSON file and push it to your desired repository. If you are using variables in your collection, export them to a separate JSON file. The variable file will be used when running the collection in the next step.&lt;/p&gt;




&lt;h2&gt;
  
  
  Run the tests locally
&lt;/h2&gt;

&lt;p&gt;The next step is to run the API tests from your local CLI; then we will repeat the same steps in TeamCity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learning.postman.com/docs/running-collections/using-newman-cli/command-line-integration-with-newman/"&gt;Newman&lt;/a&gt; is the CLI companion of Postman and it can execute Postman collections. &lt;/p&gt;

&lt;p&gt;Install Newman globally like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g newman
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and run the saved collection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;newman run mycollection.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the &lt;code&gt;-e&lt;/code&gt; flag to use an environment variable file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;newman run mycollection.json -e dev_environment.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can run your tests locally and see the results. Hopefully all passed!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0aH_j5Kx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mir0pfz9m254tdoxenw3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0aH_j5Kx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mir0pfz9m254tdoxenw3.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Run tests in CI (TeamCity)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/newman-reporter-teamcity"&gt;newman-reporter-teamcity&lt;/a&gt; is a newman report for TeamCity. It’s optional but recommended.  &lt;/p&gt;

&lt;p&gt;Install newman and the above package globally in your build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g newman
npm install -g newman-reporter-teamcity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add another build step to run Newman and generate the report.&lt;/p&gt;

&lt;p&gt;I use two parameters to locate the Postman collection file and the Postman environment file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;newman run %postman.collection.path% -e %postman.environment.path% --suppress-exit-code --reporters teamcity,cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Now you can be sure that whenever there is a change in the API, the contract tests will run and the correct team will be notified if the contract is broken.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h1&gt;
  
  
  Big Questions
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Who owns the contract tests?&lt;/strong&gt;&lt;br&gt;
Consumer driven contract testing can be done in many ways, and you should implement the approach best suited to your products and teams.&lt;/p&gt;

&lt;p&gt;I believe the &lt;em&gt;Consumer&lt;/em&gt; is the owner of the contract tests and &lt;em&gt;Provider&lt;/em&gt; should not change these tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where to put the tests (in this example the postman collection)?&lt;/strong&gt;&lt;br&gt;
I prefer to put them in the same repository as the API (Provider’s repository).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At what stage of the CI/CD process should contract tests be executed?&lt;/strong&gt;&lt;br&gt;
The idea is to notify both the &lt;em&gt;Provider&lt;/em&gt; and the &lt;em&gt;Consumer&lt;/em&gt; when the contract is broken. I prefer to notify the Provider as soon as possible (for example, with a git commit hook that prevents the Provider from pushing breaking changes). If the Provider is sure that the changes are correct, they will need to change the contract tests - but that means the Consumer might never be notified. So running the tests in the Provider’s build, with an automated notification set up for the Consumer, is the earliest point at which to get the best results.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>contract</category>
      <category>testing</category>
      <category>postman</category>
    </item>
    <item>
      <title>Measuring performance using BenchmarkDotNet - Part 2</title>
      <dc:creator>Tony Knight</dc:creator>
      <pubDate>Thu, 01 Apr 2021 16:07:43 +0000</pubDate>
      <link>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-2-4dof</link>
      <guid>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-2-4dof</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Previously we &lt;a href="https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-1-39g3"&gt;discussed what BenchmarkDotNet gives us&lt;/a&gt; and how to write simple benchmarks. As a quick reminder:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We use benchmarks to find code performance&lt;/li&gt;
&lt;li&gt;BenchmarkDotNet is a nuget package&lt;/li&gt;
&lt;li&gt;We use console apps to host and run benchmarks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what's next? We need to run the benchmarks and gather results as easily and frequently as we can.&lt;/p&gt;




&lt;h1&gt;
  
  
  Running Benchmarks locally
&lt;/h1&gt;

&lt;p&gt;We have a sample .NET Core console application coded up and ready to go on GitHub: &lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology" rel="noopener noreferrer"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-2" rel="noopener noreferrer"&gt;
        benchmarking-performance-part-2
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Build and run
&lt;/h3&gt;

&lt;p&gt;Once you've cloned the repo, just run a &lt;code&gt;dotnet publish&lt;/code&gt; from the local repository's root folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet publish -c Release -o publish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;If you're unfamiliar with dotnet's CLI, &lt;code&gt;dotnet publish&lt;/code&gt; will build and package the application, pushing the complete distributable application to the &lt;code&gt;./publish&lt;/code&gt; directory. &lt;a href="https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-publish" rel="noopener noreferrer"&gt;You can read more here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At this point, you've got a benchmarking console application in &lt;code&gt;./publish&lt;/code&gt; that's ready to use. Because I like my command line clean, I'm going to change the working folder:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd publish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;...and we're almost ready to start.&lt;/p&gt;


&lt;h3&gt;
  
  
  Before you run, prepare your machine
&lt;/h3&gt;

&lt;p&gt;Whenever you're measuring CPU performance, you've got to be mindful of what else is running on your machine. Even on a 64-core beast, your OS may interrupt the benchmark execution and skew results. That skew is not easy to measure or counter: it's best to assume that the interrupts and switches always happen.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Whenever you run final benchmarks, make sure the absolute minimum of software is running. Before you start, close down all other applications. Browsers, chat, video, everything!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For now, don't close down everything: we're just exploring BenchmarkDotNet here and you need a browser open to read. But, when capturing real results &lt;strong&gt;always remember to run on idle machines&lt;/strong&gt;.&lt;/p&gt;


&lt;h3&gt;
  
  
  And now to get some benchmarks
&lt;/h3&gt;

&lt;p&gt;To run them all:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet ./benchmarkdotnetdemo.dll -f *
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;-f *&lt;/code&gt; is a BenchmarkDotNet argument to selectively run benchmarks by their fully qualified type name. We've elected to select all of them with the wildcard &lt;code&gt;*&lt;/code&gt;; if we wanted to run only selected benchmarks, we'd use &lt;code&gt;-f benchmarkdotnetdemo.&amp;lt;pattern&amp;gt;&lt;/code&gt;, as all these benchmarks fall in the &lt;code&gt;benchmarkdotnetdemo&lt;/code&gt; namespace. For instance, &lt;code&gt;-f benchmarkdotnetdemo.Simple*&lt;/code&gt; will run all the "Simple" benchmarks.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Every console application built with BenchmarkDotNet has help automatically integrated. Just pass &lt;code&gt;--help&lt;/code&gt; as the argument, and you will get a very comprehensive set of switches.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So now all we have to do is wait, and eventually your console will give you the good news:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ***** BenchmarkRunner: End *****
// ** Remained 0 benchmark(s) to run **
Run time: 00:03:44 (224.56 sec), executed benchmarks: 3

Global total time: 00:08:03 (483.58 sec), executed benchmarks: 15
// * Artifacts cleanup *
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;All good! The result files will have been pushed to the &lt;code&gt;BenchmarkDotNet.Artifacts&lt;/code&gt; folder:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Directory: C:\...\benchmarking-performance-part-2\publish\BenchmarkDotNet.Artifacts


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----          4/1/2021  11:50 AM                results
-a----          4/1/2021  11:50 AM         128042 BenchmarkRun-20210401-114253.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The &lt;code&gt;.log&lt;/code&gt; file is simply the benchmark console echoed to file.&lt;/p&gt;

&lt;p&gt;Within the &lt;code&gt;/results&lt;/code&gt; directory you'll find the actual reports:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Directory: C:\...\benchmarking-performance-part-2\publish\BenchmarkDotNet.Artifacts\results


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a----          4/1/2021  11:47 AM         109014 benchmarkdotnetdemo.FibonacciBenchmark-measurements.csv
-a----          4/1/2021  11:47 AM         103104 benchmarkdotnetdemo.FibonacciBenchmark-report-full.json
-a----          4/1/2021  11:47 AM           3930 benchmarkdotnetdemo.FibonacciBenchmark-report-github.md
-a----          4/1/2021  11:47 AM           6632 benchmarkdotnetdemo.FibonacciBenchmark-report.csv
-a----          4/1/2021  11:47 AM           4484 benchmarkdotnetdemo.FibonacciBenchmark-report.html
-a----          4/1/2021  11:50 AM          83537 benchmarkdotnetdemo.SimpleBenchmark-measurements.csv
-a----          4/1/2021  11:50 AM          53879 benchmarkdotnetdemo.SimpleBenchmark-report-full.json
-a----          4/1/2021  11:50 AM           1215 benchmarkdotnetdemo.SimpleBenchmark-report-github.md
-a----          4/1/2021  11:50 AM           2119 benchmarkdotnetdemo.SimpleBenchmark-report.csv
-a----          4/1/2021  11:50 AM           1881 benchmarkdotnetdemo.SimpleBenchmark-report.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;As you can see, it's a mix of CSV, HTML, Markdown and pure JSON, ready for publication and reading.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;These formats are determined by either the benchmark code or the runtime arguments. I've included them all in the demo repo to give a feel of what's on offer.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  Interpreting the results
&lt;/h3&gt;

&lt;p&gt;We've &lt;a href="https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-1-39g3"&gt;previously discussed&lt;/a&gt; the various reports' contents. But suffice it to say that BenchmarkDotNet runs &amp;amp; reports benchmarks &lt;strong&gt;but does not evaluate them&lt;/strong&gt;. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Evaluating these benchmarks and acting on them is a fairly complex problem: what analysis method should we use? How do we run and capture results? Can we use benchmarks as a PR gateway? This will be the subject of a future post.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But before we run ahead, we'd like benchmarks to run on every git push, right?&lt;/p&gt;


&lt;h1&gt;
  
  
  Running benchmarks in CI
&lt;/h1&gt;

&lt;p&gt;Let's implement the simplest possible approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build benchmarks&lt;/li&gt;
&lt;li&gt;run them&lt;/li&gt;
&lt;li&gt;capture the report files&lt;/li&gt;
&lt;li&gt;present for manual inspection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, benchmarks are built, run, and the results published as workflow artifacts. Anyone with access can download these artifacts.&lt;/p&gt;

&lt;p&gt;Because our repo is on GitHub, and we want to show this in the flesh, we'll be using GitHub Actions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One day, GitHub Actions will support deep artifact linking and one-click reports, just as Jenkins and TeamCity have for years. But until that day dawns, the tedium of download-extract-search is our lot :(&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's a super-simple GitHub Actions workflow:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
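In case the gist embed doesn't render in your reader, here's a sketch of what the workflow boils down to. The step names and commands are taken from the sections that follow; the runner image and the checkout action version are illustrative assumptions, not lifted from the real file:

```yaml
# Sketch of ./.github/workflows/dotnet.yml
# (runner image and checkout version are illustrative assumptions)
name: dotnet

on: [push]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Build a publishable Release-configuration application
      - name: Publish
        run: dotnet publish -c Release --verbosity normal -o ./publish/

      # Keep the binaries in case we want to run them locally
      - name: Archive
        uses: actions/upload-artifact@v2
        with:
          name: benchmarkdotnetdemo
          path: ./publish/*

      # Run every benchmark in the benchmarkdotnetdemo namespace
      - name: Run Benchmarks
        run: dotnet "./publish/benchmarkdotnetdemo.dll" -f "benchmarkdotnetdemo.*"

      # Publish the reports as a downloadable workflow artifact
      - name: Upload benchmark results
        uses: actions/upload-artifact@v2
        with:
          name: Benchmark_Results
          path: ./BenchmarkDotNet.Artifacts/results/*
```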



&lt;blockquote&gt;
&lt;p&gt;If you're unfamiliar with Action workflows, one of the best hands-on introductions is from &lt;a href="https://dev.to/newday-technology/api-s-from-dev-to-production-part-3-7dn"&gt;Pete King&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This workflow file is in the sample Github repository, under &lt;code&gt;./.github/workflows/dotnet.yml&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Looking at the workflow, let's skip past the job's build steps as they're self-explanatory.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;code&gt;Publish&lt;/code&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Publish
      run: dotnet publish -c Release --verbosity normal -o ./publish/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here we prepare a fully publishable .NET Core application. &lt;br&gt;
We must always build with the Release configuration: BenchmarkDotNet will not run properly without normal compiler optimisations. The application and its dependencies, including the code-under-test, are pushed to a &lt;code&gt;./publish/&lt;/code&gt; directory within the job.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One glorious day, both Windows and Linux will finally and completely converge on a single standard for directory path separators. Until that time, please be careful if you're writing these workflows on Windows!&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h4&gt;
  
  
  &lt;code&gt;Archive&lt;/code&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Archive 
      uses: actions/upload-artifact@v2
      with:
        name: benchmarkdotnetdemo
        path: ./publish/*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We're just archiving the binaries here, in case we want to distribute them and run locally.&lt;/p&gt;


&lt;h4&gt;
  
  
  &lt;code&gt;Run Benchmarks&lt;/code&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run Benchmarks    
      run: dotnet "./publish/benchmarkdotnetdemo.dll" -f "benchmarkdotnetdemo.*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is where we run the benchmarks.&lt;/p&gt;

&lt;p&gt;At the time of writing there are no GitHub Actions dedicated to running benchmarks, so all we do here is run the console application itself within the GitHub Actions job. &lt;/p&gt;

&lt;p&gt;We're running all benchmarks in the &lt;code&gt;benchmarkdotnetdemo&lt;/code&gt; namespace, and we expect the results to be pushed to the same working folder.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note the double quotes! On Windows you won't need to quote these arguments, but for GitHub Actions you will. If you don't, you'll see strange command-line parsing errors.&lt;/p&gt;

&lt;p&gt;Previously I remarked that you should only run benchmarks on an idle machine. Here we'll be running them on virtualised hardware, where OS interrupts are an absolutely unavoidable fact of life. Clearly we're trading precision for convenience, and the code-under-test is simple enough that we needn't worry too much about single-tick precision metrics.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h4&gt;
  
  
  &lt;code&gt;Upload benchmark results&lt;/code&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Upload benchmark results
      uses: actions/upload-artifact@v2
      with:
        name: Benchmark_Results
        path: ./BenchmarkDotNet.Artifacts/results/*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is where we present the results for inspection. &lt;/p&gt;

&lt;p&gt;We just zip up the benchmark result files into a single artifact called &lt;code&gt;Benchmark_Results&lt;/code&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  And lastly...
&lt;/h2&gt;

&lt;p&gt;That's it! Every time you push changes to this solution, the benchmarks will run. Performance degradations won't fail the build as we're not analysing the results, and we're certainly not applying quality gates in this solution. But you've got the minimum useful visibility, albeit in a very simple form:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falx667v9j5fv4o03jpfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falx667v9j5fv4o03jpfj.png" alt="GHA-build-results"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h1&gt;
  
  
  What have we learned?
&lt;/h1&gt;

&lt;p&gt;Running benchmarks is very simple on the face of it, but there are considerations when doing so: you don't want to run them while you're rendering videos!&lt;/p&gt;

&lt;p&gt;Incorporating benchmark reporting into a CI pipeline is straightforward, although the lack of build reporting in GitHub Actions is a disappointment.&lt;/p&gt;

&lt;p&gt;We've yet to act on those benchmarks' results. For instance, we don't yet fail the build if our code-under-test is underperforming.&lt;/p&gt;


&lt;h1&gt;
  
  
  Up next
&lt;/h1&gt;

&lt;p&gt;How to fail the build if your code's underperforming.&lt;/p&gt;


&lt;h1&gt;
  
  
  Further Reading
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;Github Actions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/dotnet/core/tools/" rel="noopener noreferrer"&gt;Dotnet CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Demo Github source &amp;amp; Actions
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology" rel="noopener noreferrer"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-2" rel="noopener noreferrer"&gt;
        benchmarking-performance-part-2
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>dotnet</category>
      <category>performance</category>
      <category>ci</category>
      <category>benchmark</category>
    </item>
    <item>
      <title>Measuring performance using BenchmarkDotNet - Part 1</title>
      <dc:creator>Tony Knight</dc:creator>
      <pubDate>Mon, 15 Mar 2021 18:15:31 +0000</pubDate>
      <link>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-1-39g3</link>
      <guid>https://dev.to/newday-technology/measuring-performance-using-benchmarkdotnet-part-1-39g3</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;We all must build fast software, right? Right? It’s true that microservices tend to introduce latencies - stateless functions mean a whole lot more network calls, and you can wave goodbye to data locality. But a microservice still depends on its own code being fast, or at least fast enough. &lt;/p&gt;

&lt;p&gt;In the past we’ve relied on profilers, stopwatches, dedicated performance teams, and sometimes plain old complaints from the field. All of these methods require some form of measurement; unfortunately they tend to capture “big picture” performance that lacks detail - and often without concrete scenarios. This gets very expensive very quickly.&lt;/p&gt;

&lt;p&gt;Very often, you just want to measure the code’s performance without the baggage of dependencies. You might have a critical piece of code that &lt;em&gt;absolutely must&lt;/em&gt; meet certain performance criteria. Measuring such microcode can obviously be done with profilers - dotTrace and ANTS, to name just two. The problem is they bring their own baggage as well, and, worse, can’t easily be relied upon in a CI pipeline. So how can you measure microcode performance in CI? Unit tests are a terrible idea; what else is there? Step forward BenchmarkDotNet.&lt;/p&gt;




&lt;h1&gt;
  
  
  TL;DR
&lt;/h1&gt;

&lt;p&gt;Measure your code’s performance with benchmarks at near-zero cost. All you need is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;.NET 7 SDK&lt;/li&gt;
&lt;li&gt;VS/VSCode&lt;/li&gt;
&lt;li&gt;BenchmarkDotNet from Nuget&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’ll talk about how to write simple benchmarks, how to run them and how to interpret the results.&lt;/p&gt;




&lt;h1&gt;
  
  
  What is BenchmarkDotNet?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://benchmarkdotnet.org/index.html" rel="noopener noreferrer"&gt;BenchmarkDotNet&lt;/a&gt; does what it says on the tin: benchmark .net code. It’s available as a &lt;a href="https://www.nuget.org/packages/BenchmarkDotNet/" rel="noopener noreferrer"&gt;Nuget packaged library&lt;/a&gt; for inclusion into your .net console applications. It is very &lt;a href="https://github.com/dotnet/BenchmarkDotNet#who-use-benchmarkdotnet" rel="noopener noreferrer"&gt;widely used&lt;/a&gt; by all major players in the .net world, including the &lt;a href="https://github.com/dotnet/runtime" rel="noopener noreferrer"&gt;dotnet core runtime project&lt;/a&gt; itself.&lt;/p&gt;

&lt;h1&gt;
  
  
  What does a HelloWorld benchmark look like?
&lt;/h1&gt;

&lt;p&gt;Let’s say you have a very basic Fibonacci implementation - and you want to measure its resource usage growth as more numbers are generated.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By “resource usage” I mean time and memory consumed per method call. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In other words, you'd want to know how it scales. Here's an implementation of "get the first N Fibonacci numbers":&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="n"&gt;IEnumerable&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;GetFibonacci&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;Enumerable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;No prizes are sought for best efficiency here. Please &lt;strong&gt;do not&lt;/strong&gt; take this as a reference implementation of Fibonacci!&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;To answer the scaling question, we would implement a benchmark, run it and analyse the results. Skipping forward, a rendered benchmark report looks something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccoaxassb9omnz96k1ji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccoaxassb9omnz96k1ji.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  What do all the headers actually mean?
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;The column&lt;/th&gt;
&lt;th&gt;What it means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Method&lt;/td&gt;
&lt;td&gt;The name of the code-under-test; a single benchmark may have several methods under test, e.g. for different scenarios. This value is lifted directly from your benchmark code.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Count&lt;/td&gt;
&lt;td&gt;An arbitrary parameter: in this case the number of Fibonacci numbers generated by the method under test.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mean/Error/StdDev/StdError&lt;/td&gt;
&lt;td&gt;Execution time statistics. Note that these can be given down to nanoseconds, depending on how fast your code is. Low is best.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Min/Q1/Median/Q3/Max&lt;/td&gt;
&lt;td&gt;Quartile execution time statistics: note the time units. Low is best.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ops/sec&lt;/td&gt;
&lt;td&gt;The number of operations executed per second for the method/parameter combination. High is good.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rank&lt;/td&gt;
&lt;td&gt;The relative speed ranking of each method/parameter combination. Low is best.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gen 0/1/2&lt;/td&gt;
&lt;td&gt;The number of garbage collections per generation, reported per 1,000 operations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Allocated&lt;/td&gt;
&lt;td&gt;Total bytes allocated per operation, across all generations&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note the header information in the report! It’ll give details of the OS, CPU, .NET version, JIT mode and GC configuration. Always benchmark like-for-like!&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  OK… what do those numbers &lt;em&gt;really&lt;/em&gt; mean?
&lt;/h2&gt;

&lt;p&gt;Let’s look at each value of &lt;code&gt;Count&lt;/code&gt;; we’re using it here to get the first &lt;code&gt;Count&lt;/code&gt; numbers of the Fibonacci sequence.&lt;/p&gt;

&lt;p&gt;Where &lt;code&gt;Count&lt;/code&gt; is 1 the mean execution time is 103.4 nanoseconds. That’s 0.1 microseconds, or 0.0001 milliseconds. I like that: nice and fast. &lt;/p&gt;

&lt;p&gt;Where &lt;code&gt;Count&lt;/code&gt; is 13 (yes, the parameters themselves follow Fibonacci!) the mean time is 407.2 ns: four times the &lt;code&gt;Count=1&lt;/code&gt; time, yet &lt;code&gt;Count&lt;/code&gt; is 13 times bigger. I’ll take that, for now. &lt;/p&gt;

&lt;p&gt;Where &lt;code&gt;Count&lt;/code&gt; is 34 the mean time is 1,077.9 ns, or 1.077 microseconds, or just over 0.001 milliseconds. That’s 2.6 times more time than &lt;code&gt;Count = 13&lt;/code&gt;. Let’s compare against &lt;code&gt;Count = 1&lt;/code&gt;: &lt;code&gt;Count&lt;/code&gt; is 34 times bigger, yet takes only 10 times the time. I’ll take that too. &lt;/p&gt;
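As a quick sanity check, the time ratios behind those comparisons work out as:

```latex
\frac{t(13)}{t(1)} = \frac{407.2\,\text{ns}}{103.4\,\text{ns}} \approx 3.9
  \quad \text{(while } Count \text{ grew } 13\times\text{)}
\qquad
\frac{t(34)}{t(1)} = \frac{1077.9\,\text{ns}}{103.4\,\text{ns}} \approx 10.4
  \quad \text{(while } Count \text{ grew } 34\times\text{)}
```

Both time ratios sit well below the corresponding growth in Count - which is exactly what the flattening gap between the two lines in the chart shows.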

&lt;p&gt;If we plot &lt;code&gt;Count&lt;/code&gt; against the time ratio we see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmaausg26euewio5iowek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmaausg26euewio5iowek.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In other words, time used does not grow in proportion to &lt;code&gt;Count&lt;/code&gt;. If it did, the lines would be parallel.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So the benchmarks are showing that the implementation has reasonably acceptable scaling. It's not constant time, but it’s better than O(n) time: a pleasant surprise.&lt;/p&gt;

&lt;p&gt;If you're not satisfied with the performance results, simply make your changes, re-run the benchmarks &amp;amp; re-analyse. That's it.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  You haven’t mentioned the memory yet, have you?
&lt;/h2&gt;

&lt;p&gt;Trust me, I’m getting to that. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pay particular attention to memory usage. Garbage collections and memory allocations are as important as sheer speed!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Count=1&lt;/code&gt; used 128 bytes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Count=13&lt;/code&gt; used 312 bytes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Count=34&lt;/code&gt; used 744 bytes.&lt;/li&gt;
&lt;/ul&gt;
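The same sanity check applied to the allocation figures:

```latex
\frac{m(13)}{m(1)} = \frac{312\,\text{B}}{128\,\text{B}} \approx 2.4
\qquad
\frac{m(34)}{m(1)} = \frac{744\,\text{B}}{128\,\text{B}} \approx 5.8
```

Again, both ratios are well below the 13x and 34x growth in Count.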

&lt;p&gt;If we plot &lt;code&gt;Count&lt;/code&gt; against the allocation growth ratios, we see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft87hec7nhpqs0z0ultm8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft87hec7nhpqs0z0ultm8.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This means the memory used isn’t constant either: the memory used for &lt;code&gt;Count=34&lt;/code&gt; is greater than the memory used for &lt;code&gt;Count=1&lt;/code&gt;. Again it's better than O(n). To my mind this is OK, but not great: we need more investigation. The allocations are probably incurred by &lt;code&gt;yield return&lt;/code&gt;, but do we want to sacrifice that readability? Probably not - but in any case we’re getting new perspectives on our code. &lt;em&gt;This is a good thing&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  What other rendered reports can you get?
&lt;/h2&gt;

&lt;p&gt;You can output your report as Markdown and in many other formats; the Markdown output is GitHub-flavoured.&lt;/p&gt;

&lt;p&gt;You can use the following attributes to output the many different types of rendered reports:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;JsonExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Full&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;CsvMeasurementsExporter&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;CsvExporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CsvSeparator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Comma&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;HtmlExporter&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;MarkdownExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GitHub&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;An example of the GitHub Markdown report:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8eh79e4i3zd4zoti91b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8eh79e4i3zd4zoti91b.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Charting is supported through &lt;a href="https://www.r-project.org/" rel="noopener noreferrer"&gt;the R project&lt;/a&gt;. As R is a world in itself, I’m going to skip the subject. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want charts, consider importing the rendered results into Excel: the &lt;code&gt;CsvExporter&lt;/code&gt; attribute will generate a CSV with the data you need.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Full code example
&lt;/h2&gt;

&lt;p&gt;What does the benchmark code look like using BenchmarkDotNet? It might surprise you to see how simple it is.&lt;/p&gt;

&lt;p&gt;BenchmarkDotNet relies on declarative code over which it will reflect. Leaving aside the class attributes (more on those later), note the &lt;code&gt;[Params]&lt;/code&gt; attribute over &lt;code&gt;Count&lt;/code&gt; from the report above, likewise &lt;code&gt;[Benchmark]&lt;/code&gt; and &lt;code&gt;Fibonacci()&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Linq&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Attributes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Exporters.Csv&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;benchmarkdotnetdemo&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;InProcess&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;MemoryDiagnoser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;RankColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MinColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MaxColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q1Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q3Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AllStatisticsColumn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;JsonExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Full&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CsvMeasurementsExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;CsvExporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CsvSeparator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Comma&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;HtmlExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MarkdownExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GitHub&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;GcServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FibonacciBenchmark&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;Params&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;21&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;34&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;Count&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Fibonacci&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;xs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetFibonacci&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;ToList&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;You’ll notice that the benchmarks have a return type of &lt;code&gt;void&lt;/code&gt;  and do not have any assertions. Remember: we’re not proving functional correctness here, we’re measuring resource usage.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h1&gt;
  
  
  Show me the code!
&lt;/h1&gt;

&lt;p&gt;I’ve created a simple BenchmarkDotNet implementation here:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/NewDayTechnology" rel="noopener noreferrer"&gt;
        NewDayTechnology
      &lt;/a&gt; / &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-1" rel="noopener noreferrer"&gt;
        benchmarking-performance-part-1
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple demonstration of BenchmarkDotNet
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;There’s only one C# project in there - &lt;code&gt;benchmarkdotnetdemo.csproj&lt;/code&gt; - containing the minimal files.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;BenchmarkDotNet will only work if the console project is built with a &lt;em&gt;Release&lt;/em&gt; configuration, that is with code optimisations applied. Running in &lt;em&gt;Debug&lt;/em&gt; will result in a &lt;em&gt;run-time error&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
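
&lt;p&gt;In practice, that means running the console project with the Release configuration from the command line, for example (the &lt;code&gt;--filter *&lt;/code&gt; argument after the &lt;code&gt;--&lt;/code&gt; separator simply selects every benchmark):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotnet run --configuration Release -- --filter *
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;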

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;This is the &lt;code&gt;Program.cs&lt;/code&gt; file; like all C# console apps, it needs an entry point: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Running&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;benchmarkdotnetdemo&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Program&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;Main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;try&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;BenchmarkSwitcher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FromAssembly&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Program&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;Assembly&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Exception&lt;/span&gt; &lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ForegroundColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ConsoleColor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Red&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WriteLine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ResetColor&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Setting aside the standard entry-point boilerplate, let’s go over it bit by bit.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Running&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;For bootstrapping BenchmarkDotNet, this is the only import you need.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="n"&gt;BenchmarkSwitcher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FromAssembly&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Program&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;Assembly&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This one-liner-to-rule-them-all performs all of the command-line parsing, help output, benchmark execution and report generation. &lt;/p&gt;

&lt;p&gt;One point worth noting is &lt;code&gt;.FromAssembly(typeof(Program).Assembly)&lt;/code&gt; - this tells BenchmarkDotNet where to search for benchmarks. Benchmarks are discovered internally by reflection - you’ll see how soon enough.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NOTE: If you run the project without any command-line arguments, BenchmarkDotNet will launch an interactive CLI. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;.Run(args)&lt;/code&gt; returns a sequence of report objects containing the same data used for the rendered reports; I’ve ignored them here for simplicity. If you want to run benchmarks and fail CI builds when performance dips, they’re the first place to look.&lt;/p&gt;
&lt;/blockquote&gt;
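
&lt;p&gt;As a rough sketch of that idea (this isn’t in the demo project), you could inspect the returned &lt;code&gt;Summary&lt;/code&gt; objects in &lt;code&gt;Main&lt;/code&gt; and fail the build on validation errors:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System.Linq;
using BenchmarkDotNet.Running;

// Sketch only: exit non-zero (failing the CI build) if any benchmark
// summary reported a critical validation error.
var summaries = BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly)
    .Run(args);
return summaries.Any(s =&amp;gt; s.HasCriticalValidationErrors) ? 1 : 0;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;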


&lt;h2&gt;
  
  
  Create a new benchmark
&lt;/h2&gt;

&lt;p&gt;There is a file called &lt;code&gt;SimpleBenchmark.cs&lt;/code&gt;. Let’s have a look.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Attributes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Exporters.Csv&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;benchmarkdotnetdemo&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;InProcess&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;MemoryDiagnoser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;RankColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MinColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MaxColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q1Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q3Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AllStatisticsColumn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;JsonExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Full&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CsvMeasurementsExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;CsvExporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CsvSeparator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Comma&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;HtmlExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MarkdownExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GitHub&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;GcServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SimpleBenchmark&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;NoopTest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;AddTest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxValue&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MinValue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;MultiplyTest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="m"&gt;11&lt;/span&gt; &lt;span class="p"&gt;*&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  FibonacciBenchmark.cs
&lt;/h4&gt;

&lt;p&gt;Just for completeness: note the same declarations as in &lt;code&gt;SimpleBenchmark.cs&lt;/code&gt;. In this case, we’re adding a &lt;code&gt;[Params]&lt;/code&gt; property to support benchmark permutations. &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Linq&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Attributes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;BenchmarkDotNet.Exporters.Csv&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;benchmarkdotnetdemo&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;InProcess&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;MemoryDiagnoser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;RankColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MinColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MaxColumn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q1Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Q3Column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AllStatisticsColumn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;JsonExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Full&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CsvMeasurementsExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;CsvExporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CsvSeparator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Comma&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;HtmlExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MarkdownExporterAttribute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GitHub&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;GcServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FibonacciBenchmark&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;Params&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;21&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;34&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;Count&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Fibonacci&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;xs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetFibonacci&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;ToList&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
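
&lt;p&gt;The &lt;code&gt;GetFibonacci&lt;/code&gt; extension method lives in the repository; a minimal, illustrative sketch of the kind of implementation being exercised (the class name and body here are assumptions - the real code may differ) looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System.Collections.Generic;

namespace benchmarkdotnetdemo
{
    public static class FibonacciExtensions
    {
        // Lazily yields the first `count` Fibonacci numbers.
        public static IEnumerable&amp;lt;int&amp;gt; GetFibonacci(this int count)
        {
            var (a, b) = (0, 1);
            for (var i = 0; i &amp;lt; count; i++)
            {
                yield return a;
                (a, b) = (b, a + b);
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;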

&lt;h3&gt;
  
  
  How are benchmarks executed?
&lt;/h3&gt;

&lt;p&gt;Without going into too much detail, BenchmarkDotNet will run your benchmarks many, many times over in order to settle on stable mean and median values. &lt;/p&gt;

&lt;p&gt;When you run the benchmarks you may at first be confused by just how many iterations are involved, so here’s a simplistic explanation. Modern operating systems are preemptive multitaskers; CPUs have pipelines and caches as well as instruction-reordering features; .NET itself adds the JIT compiler. This means that &lt;em&gt;no single execution of code can be relied upon to give a canonical result&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is part of the reason why unit tests are terrible for benchmarking! They run only once and incur their own (unaccounted-for) overheads.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;BenchmarkDotNet runs warm-up iterations before taking representative measurements. These show up as various stages: OverheadJitting, WorkloadJitting, WorkloadPilot, OverheadWarmup and OverheadActual.&lt;/p&gt;

&lt;p&gt;JIT comes at a cost: the first time any .NET code executes it must first be JIT compiled. The more complex the code, the higher the JIT cost, usually showing up as CPU and elapsed-time overhead. As we’re interested only in steady-state runtime performance, these warm-up steps eliminate JIT costs from the measurements.&lt;/p&gt;

&lt;p&gt;In the same vein, other warm-up steps are run to eliminate other “once only” costs, for instance warming up CPU pipelines and caches. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft434gva68vqgk4nfpd0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft434gva68vqgk4nfpd0m.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;After these steps have completed, BenchmarkDotNet will iterate these operations to yield the final statistics; these are shown as WorkloadActual steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkkshojo5dsobe2ogndi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkkshojo5dsobe2ogndi.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;blockquote&gt;
&lt;p&gt;If you want more detail, please refer to &lt;a href="https://benchmarkdotnet.org/articles/guides/how-it-works.html" rel="noopener noreferrer"&gt;BenchmarkDotNet’s own documentation&lt;/a&gt;. In these code samples we’re using the default &lt;code&gt;Throughput&lt;/code&gt; strategy for microbenchmarking.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  How long does it take?
&lt;/h3&gt;

&lt;p&gt;It depends ;) Simple calculations, such as those in the demo project, will run in under a minute. Adding permutations (such as with &lt;code&gt;[Params]&lt;/code&gt;) will linearly increase the benchmarking time, as each parameter value is benchmarked in its own right.&lt;/p&gt;

&lt;p&gt;With that in mind, it’s quite clear that resource-hungry algorithms, benchmarked with a large variety of parameters, will take a considerable amount of time. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Don’t expect to parallelise BenchmarkDotNet: it runs benchmarks sequentially. Thread context switching is itself a cost and extremely difficult to compensate for.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h1&gt;
  
  
  What have we learned?
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;We’ve seen how to get BenchmarkDotNet &lt;/li&gt;
&lt;li&gt;We’ve seen how to integrate it in a simple console application&lt;/li&gt;
&lt;li&gt;We’ve seen the minimum work needed to build benchmarks&lt;/li&gt;
&lt;li&gt;We’ve had a taste of the reports and inferences we can gain from BenchmarkDotNet&lt;/li&gt;
&lt;/ul&gt;


&lt;h1&gt;
  
  
  Next Steps
&lt;/h1&gt;

&lt;p&gt;How do we incorporate benchmarking into CI?&lt;/p&gt;


&lt;h1&gt;
  
  
  More Information
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Benchmark_(computing)" rel="noopener noreferrer"&gt;What is benchmarking - Wiki&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://benchmarkdotnet.org/index.html" rel="noopener noreferrer"&gt;BenchmarkDotNet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/dotnet/BenchmarkDotNet" rel="noopener noreferrer"&gt;BenchmarkDotNet on Github&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stebet.net/benchmarking-and-performance-optimizations-in-c-using-benchmarkdotnet/" rel="noopener noreferrer"&gt;A real world use case of BenchmarkDotNet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;GitHub repository: &lt;a href="https://github.com/NewDayTechnology/benchmarking-performance-part-1" rel="noopener noreferrer"&gt;NewDayTechnology / benchmarking-performance-part-1&lt;/a&gt; - a simple demonstration of BenchmarkDotNet&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>dotnet</category>
      <category>performance</category>
      <category>metrics</category>
      <category>benchmark</category>
    </item>
  </channel>
</rss>
