<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Stafford Williams</title>
    <description>The latest articles on DEV Community by Stafford Williams (@staff0rd).</description>
    <link>https://dev.to/staff0rd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F167870%2F272c5e9a-dca2-4148-b557-85ca7e56b9fc.jpeg</url>
      <title>DEV Community: Stafford Williams</title>
      <link>https://dev.to/staff0rd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/staff0rd"/>
    <language>en</language>
    <item>
      <title>Building a React Native pipeline for Android and iOS</title>
      <dc:creator>Stafford Williams</dc:creator>
      <pubDate>Wed, 01 Apr 2020 04:17:00 +0000</pubDate>
      <link>https://dev.to/staff0rd/building-a-react-native-pipeline-for-android-and-ios-2bo5</link>
      <guid>https://dev.to/staff0rd/building-a-react-native-pipeline-for-android-and-ios-2bo5</guid>
      <description>&lt;p&gt;Building a React Native pipeline for Android and iOS.&lt;/p&gt;
</description>
      <category>azurepipelines</category>
      <category>reactnative</category>
      <category>ios</category>
      <category>android</category>
    </item>
    <item>
      <title>An Azure Pipeline for Visual Studio Marketplace</title>
      <dc:creator>Stafford Williams</dc:creator>
      <pubDate>Tue, 05 Nov 2019 08:50:49 +0000</pubDate>
      <link>https://dev.to/staff0rd/an-azure-pipeline-for-visual-studio-marketplace-44og</link>
      <guid>https://dev.to/staff0rd/an-azure-pipeline-for-visual-studio-marketplace-44og</guid>
      <description>&lt;p&gt;In a &lt;a href="https://staffordwilliams.com/blog/2019/05/19/publishing-a-custom-azure-pipeline-release-task/"&gt;previous post&lt;/a&gt; I built a custom Azure Pipeline release task and manually uploaded it to the Visual Studio Marketplace. In this post I’ll add new features and have DEV and PROD versions of the extension automatically deploy to the Marketplace on commit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d_7AGHGD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://staffordwilliams.com/assets/tfx-header.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d_7AGHGD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://staffordwilliams.com/assets/tfx-header.png" alt="Purge Cache for Cloudflare - an Azure Pipelines Task"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge"&gt;source for this project is here&lt;/a&gt; and the &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/compare/9976e5366975092149d2b2b064c3c86f6f5ebd1b...f876e6e9807763cf4a8d7738feb3f10ba2a632c3"&gt;diff for changes mentioned in this post is here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development and Release Versions
&lt;/h2&gt;

&lt;p&gt;Initially I followed a &lt;a href="https://blog.raph.ws/2018/03/build-and-release-pipeline-for-your-own-custom-vsts-tasks/"&gt;colleague’s post on setting up a pipeline&lt;/a&gt; and &lt;a href="https://devblogs.microsoft.com/devops/streamlining-azure-devops-extension-development/"&gt;this devblog from Microsoft&lt;/a&gt; in an attempt to have a dev (and prod) version of the release task. However, I later determined that there’s a difference between the &lt;em&gt;extension&lt;/em&gt; that is defined and published by &lt;code&gt;vss-extension.json&lt;/code&gt; and the (one or more) tasks that are packaged with the extension and defined by &lt;code&gt;taskName/task.json&lt;/code&gt;. I could successfully push a unique-id extension to my existing publisher, but the deployment would then fail due to the marketplace already having an existing version of the &lt;em&gt;task&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;As such, the approach was to name and key both &lt;code&gt;vss-extension.json&lt;/code&gt; (now renamed &lt;code&gt;manifest-*.json&lt;/code&gt;) and &lt;code&gt;task.json&lt;/code&gt; specifically for dev/prod. The core changes to achieve this result in a structure like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;azure-pipelines-dev.yml # dev pipeline
azure-pipelines-release.yml # release pipeline
manifest-dev.json # dev extension definition
manifest-release.json # release extension definition
cloudflarePurge
  |-- task.json # dev &amp;amp; release task definition

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;NB: Actually, &lt;code&gt;manifest-dev.json&lt;/code&gt; and &lt;code&gt;manifest-release.json&lt;/code&gt; could be merged and token-replaced similarly to &lt;code&gt;task.json&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The result is two extensions and their associated tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Purge Cache for Cloudflare; and&lt;/li&gt;
&lt;li&gt;Purge Cache for Cloudflare (Development)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first is the existing, already published version; the second is a brand new instance specifically for development and testing, with &lt;code&gt;public: false&lt;/code&gt; set. The production instance continues as before and is now &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/f876e6e9807763cf4a8d7738feb3f10ba2a632c3/azure-pipelines-release.yml#L1-L2"&gt;triggered from&lt;/a&gt; the &lt;code&gt;master&lt;/code&gt; branch, while the new development instance is &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/f876e6e9807763cf4a8d7738feb3f10ba2a632c3/azure-pipelines-dev.yml#L1-L2"&gt;triggered&lt;/a&gt; from the &lt;code&gt;dev&lt;/code&gt; branch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Token Replacement
&lt;/h2&gt;

&lt;p&gt;An immediate issue with my proposed pipeline was the difference in how &lt;code&gt;vss-extension.json&lt;/code&gt; and &lt;code&gt;task.json&lt;/code&gt; deal with &lt;code&gt;version&lt;/code&gt; and how JSON variable substitution works within the standard File Transform task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vss-extension.json&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"version": "1.2.3"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;task.json&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"version": {
    "Major": 1,
    "Minor": 2,
    "Patch": 3
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The solution &lt;a href="https://github.com/microsoft/azure-pipelines-tasks/issues/11344"&gt;proposed by Microsoft&lt;/a&gt; was to split the two variable transforms across jobs. I’ll give this approach a go in the future; however, for now I’ve just used a single job and &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/f876e6e9807763cf4a8d7738feb3f10ba2a632c3/azure-pipelines-release.yml#L30-L35"&gt;token replacement&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This same token replacement is used for setting the &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;id&lt;/code&gt; of the extension and task.&lt;/p&gt;
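&lt;p&gt;As a sketch, the tokenized sections of &lt;code&gt;task.json&lt;/code&gt; might look like the below - note these token names are illustrative, not necessarily the exact ones used in the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"id": "#{TaskId}#",
"name": "#{TaskName}#",
"version": {
    "Major": #{MajorVersion}#,
    "Minor": #{MinorVersion}#,
    "Patch": #{PatchVersion}#
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;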

&lt;h2&gt;
  
  
  Build and deploy
&lt;/h2&gt;

&lt;p&gt;Aside from token replacement, the only steps in both the dev and prod pipelines are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install node&lt;/li&gt;
&lt;li&gt;Install dependencies&lt;/li&gt;
&lt;li&gt;Compile typescript&lt;/li&gt;
&lt;li&gt;Deploy to marketplace&lt;/li&gt;
&lt;/ol&gt;
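&lt;p&gt;A minimal sketch of those steps in pipeline YAML - the task version, paths and variable name here are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
- task: NodeTool@0        # install node
  inputs:
    versionSpec: '10.x'
- script: npm install     # install dependencies next to the task
  workingDirectory: cloudflarePurge
- script: npx tsc         # compile typescript
  workingDirectory: cloudflarePurge
- script: npx tfx-cli extension publish --manifest-globs manifest-dev.json --token $(MarketplacePat)

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;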

&lt;p&gt;We’ll use &lt;code&gt;npm&lt;/code&gt; to install the dependencies and use &lt;code&gt;npx&lt;/code&gt; to &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/f876e6e9807763cf4a8d7738feb3f10ba2a632c3/azure-pipelines-dev.yml#L39"&gt;compile with TypeScript&lt;/a&gt; &amp;amp; &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/f876e6e9807763cf4a8d7738feb3f10ba2a632c3/azure-pipelines-dev.yml#L43"&gt;deploy with tfx-cli&lt;/a&gt;. For the latter we’ll need a PAT from our Azure DevOps tenant; &lt;a href="https://blog.raph.ws/2018/03/build-and-release-pipeline-for-your-own-custom-vsts-tasks/#generate-pat"&gt;instructions can be found here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note that before installing &lt;code&gt;npm&lt;/code&gt; dependencies, it’s important to set the working directory to the task directory. Doing so installs dependencies adjacent to the task and ensures they are shipped with &lt;code&gt;tfx-cli extension publish&lt;/code&gt;. Originally I had not done this, such that the extension was building and shipping without the dependencies, meaning users of the extension needed to &lt;code&gt;npm install&lt;/code&gt; themselves. My thanks to Patrycjusz Jaskurzyński, who reported and helped me diagnose this issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further differences
&lt;/h2&gt;

&lt;p&gt;An issue with the previous version of the extension was that the supplied icon was only visible on the Marketplace; once installed into an Azure DevOps tenant, a default icon was shown instead when adding the task to a release pipeline.&lt;/p&gt;

&lt;p&gt;Where &lt;code&gt;vss-extension.json&lt;/code&gt; defines an &lt;code&gt;icons&lt;/code&gt; property, &lt;code&gt;task.json&lt;/code&gt; does not, and instead by convention checks for &lt;code&gt;icon.png&lt;/code&gt; in the same directory. Additionally, some documentation indicates an icon size of 32x32 is required; however, this results in a lower-quality image once installed, as the icon is scaled up. To resolve this, supply the icon at 128x128.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The last thing I needed to do was actually install the new Development version of the extension. This is an entirely different extension as far as Visual Studio Marketplace is concerned, but I can have both of them installed on my single Azure DevOps tenant. The dev version’s &lt;code&gt;public: false&lt;/code&gt; property in the extension definition and &lt;code&gt;--share-with myUserName&lt;/code&gt; in the &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/f876e6e9807763cf4a8d7738feb3f10ba2a632c3/azure-pipelines-dev.yml#L43"&gt;pipeline definition&lt;/a&gt; mean only my Azure DevOps tenant can see the development version. While developing and testing, I commit directly to the &lt;code&gt;dev&lt;/code&gt; branch and the development version of the extension is deployed to the Visual Studio Marketplace and updated on my Azure DevOps tenant. Once I’m satisfied, I merge &lt;code&gt;dev&lt;/code&gt; into &lt;code&gt;master&lt;/code&gt;, which triggers the production build &amp;amp; deploy.&lt;/p&gt;

&lt;p&gt;As for the extension itself, now you can specify individual URLs instead of purging everything:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oUllOiwR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://staffordwilliams.com/assets/tfx-files-to-purge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oUllOiwR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://staffordwilliams.com/assets/tfx-files-to-purge.png" alt="files to purge"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azuredevops</category>
      <category>azurepipelines</category>
    </item>
    <item>
      <title>Monitoring the Web with Azure for Free</title>
      <dc:creator>Stafford Williams</dc:creator>
      <pubDate>Wed, 25 Sep 2019 14:00:00 +0000</pubDate>
      <link>https://dev.to/staff0rd/monitoring-the-web-with-azure-for-free-198b</link>
      <guid>https://dev.to/staff0rd/monitoring-the-web-with-azure-for-free-198b</guid>
      <description>&lt;p&gt;With the &lt;a href="https://devblogs.microsoft.com/devops/cloud-based-load-testing-service-eol/" rel="noopener noreferrer"&gt;deprecation of Multi-step Web Testing in Visual Studio and Azure&lt;/a&gt;, what approach can we now use to monitor web properties, especially if we don’t own them? In this post, I’ll propose and demonstrate leveraging Azure to solve this problem at little to no cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-step Web Test Deprecation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://devblogs.microsoft.com/devops/cloud-based-load-testing-service-eol/" rel="noopener noreferrer"&gt;Announced here&lt;/a&gt;, this deprecation means that Visual Studio 2019 Enterprise will be the last version of Visual Studio to support Multi-step Web Test projects, and &lt;a href="https://github.com/MicrosoftDocs/azure-docs/issues/26050#issuecomment-468814101" rel="noopener noreferrer"&gt;no replacement is currently being worked on&lt;/a&gt;. This enterprise-only feature was not overly enviable anyways considering these tests are HTTP-based only (no Javascript) and the pricing was pretty out there, for example &lt;a href="https://azure.microsoft.com/en-au/pricing/details/monitor/" rel="noopener noreferrer"&gt;West US 2 / AUD pricing&lt;/a&gt; is shown below.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Free units included&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Multi-step web tests&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;$13.73 per test per month&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ping web tests&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  A Better Solution
&lt;/h2&gt;

&lt;p&gt;I’m looking for a more modern solution with a feature set greater than what’s possible with Multi-step Web Tests. We’ll use Azure to host it, and if we go with the bare minimum this solution can be entirely free. For a few cents a month we can add additional features that complement the idea of monitoring web properties that might not be our own.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Fazure-for-free%2FArchitecture.png%23center" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Fazure-for-free%2FArchitecture.png%23center" alt="Architecture diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This solution is entirely &lt;a href="https://github.com/staff0rd/azure-web-monitor" rel="noopener noreferrer"&gt;opensource&lt;/a&gt; and &lt;a href="https://github.com/staff0rd/azure-web-monitor" rel="noopener noreferrer"&gt;available for free on GitHub&lt;/a&gt;. If you have ideas on how this solution could be improved, please &lt;a href="https://github.com/staff0rd/azure-web-monitor/issues" rel="noopener noreferrer"&gt;let me know&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  Browser Automation
&lt;/h2&gt;

&lt;p&gt;To solve the core requirement of monitoring availability, request times and whether our single page applications are executing JavaScript correctly, we’ll use &lt;a href="https://www.seleniumhq.org/" rel="noopener noreferrer"&gt;Selenium&lt;/a&gt;. We’ll control it with &lt;a href="https://dotnet.microsoft.com/download/dotnet-core" rel="noopener noreferrer"&gt;.NET Core&lt;/a&gt;, using plain old &lt;a href="https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-test" rel="noopener noreferrer"&gt;MSTest&lt;/a&gt; which we (or more likely our automation) can run easily with &lt;code&gt;dotnet test&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;There’s &lt;a href="https://www.google.com/search?q=selenium%20dotnetcore" rel="noopener noreferrer"&gt;plenty of guidance&lt;/a&gt; on writing tests with Selenium, and it’s good practice to use a Page Object Model approach. &lt;a href="https://github.com/staff0rd/azure-web-monitor/blob/master/src/AzureWebMonitor.Test/AzureDotCom_AzureSignalR.cs" rel="noopener noreferrer"&gt;An example I’ve written&lt;/a&gt; loads up azure.com, navigates to the pricing page, and tests some assertions on what it finds using &lt;a href="https://github.com/shouldly/shouldly" rel="noopener noreferrer"&gt;Shouldly&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I’m using &lt;a href="https://chromedriver.chromium.org/" rel="noopener noreferrer"&gt;ChromeDriver&lt;/a&gt; with Selenium, to which we can pass further arguments, for example to &lt;a href="https://github.com/staff0rd/azure-web-monitor/blob/master/src/AzureWebMonitor.Test/WebDriverHelper.cs#L17-L18" rel="noopener noreferrer"&gt;avoid tracking integrations like Google Analytics&lt;/a&gt; and to &lt;a href="https://github.com/staff0rd/azure-web-monitor/blob/master/src/AzureWebMonitor.Test/WebDriverHelper.cs#L19" rel="noopener noreferrer"&gt;run headless&lt;/a&gt; so we can run it on Linux VMs without a graphical environment.&lt;/p&gt;
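&lt;p&gt;In the .NET bindings that setup is only a few lines - the flags below are a sketch, not the exact set used in the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var options = new ChromeOptions();
options.AddArgument("--disable-background-networking"); // illustrative flag
options.AddArgument("--headless");                      // no graphical environment required
using (var driver = new ChromeDriver(options))
{
    driver.Navigate().GoToUrl("https://example.com");
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;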

&lt;h2&gt;
  
  
  Running It In The Cloud
&lt;/h2&gt;

&lt;p&gt;I need something to periodically run the above test project, and &lt;a href="https://dev.azure.com" rel="noopener noreferrer"&gt;Azure DevOps&lt;/a&gt; is the free answer, with 1,800 free hosted build minutes per month. I can implement a &lt;a href="https://azure.microsoft.com/en-au/services/devops/pipelines/" rel="noopener noreferrer"&gt;build pipeline&lt;/a&gt; that will run the tests and email me the results. There’s even a dashboard widget to show how my tests have performed over the last 20 runs, including how long the run took and whether any tests failed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Fazure-for-free%2Fdashboard.PNG%23center" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Fazure-for-free%2Fdashboard.PNG%23center" alt="Azure DevOps dashboard widget"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/staff0rd/azure-web-monitor/blob/master/azure-pipelines.yml" rel="noopener noreferrer"&gt;build pipeline configuration&lt;/a&gt; is pretty trivial. Whenever a build is triggered, a hosted Linux agent will run the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install dotnet core 2.2&lt;/li&gt;
&lt;li&gt;Restore nuget packages&lt;/li&gt;
&lt;li&gt;Build the project&lt;/li&gt;
&lt;li&gt;Run the tests&lt;/li&gt;
&lt;li&gt;Publish the result&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Scheduling the build is also quite trivial, especially now that Azure Pipelines supports &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops&amp;amp;tabs=yaml#scheduled-triggers" rel="noopener noreferrer"&gt;scheduling with cron syntax&lt;/a&gt;. Triggering every hour is as simple as the snippet below. Remember to set &lt;code&gt;always: true&lt;/code&gt; so the build is triggered even when no changes have been committed to source.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;schedules:
- cron: '0 * * * *'
  displayName: Hourly build
  branches:
    include:
    - master
  always: true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cron schedules are fairly new to Azure Pipelines and were &lt;a href="https://developercommunity.visualstudio.com/content/problem/613157/new-cron-schedule-not-working.html" rel="noopener noreferrer"&gt;broken&lt;/a&gt; on my initial implementation. While the fix was coming through I instead used &lt;a href="https://azure.microsoft.com/en-au/services/logic-apps/" rel="noopener noreferrer"&gt;Logic Apps&lt;/a&gt; to schedule the build. Logic Apps already includes an Azure DevOps connector and a two step app was all that was required - a periodic recurrence and a build trigger. Now that Cron schedules are fixed however, the Logic Apps approach is no longer required.&lt;/p&gt;

&lt;p&gt;Additionally, Cron schedules are only reliable if your Azure DevOps tenant is in constant use. The &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops&amp;amp;tabs=yaml#my-build-didnt-run-what-happened" rel="noopener noreferrer"&gt;docs explain&lt;/a&gt; that the tenant goes dormant after everyone has logged out - if this happens to you, a Logic Apps approach will ensure your builds get triggered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Storing the data
&lt;/h2&gt;

&lt;p&gt;A great way for us to both store and query the data we’ll be collecting is to use &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview" rel="noopener noreferrer"&gt;Application Insights&lt;/a&gt;. Normally we’d use the &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core" rel="noopener noreferrer"&gt;Application Insights SDK&lt;/a&gt; to automagically track our own website’s data - however, as we’re monitoring other websites out on the internet, we’ll just include the &lt;a href="https://www.nuget.org/packages/Microsoft.ApplicationInsights/" rel="noopener noreferrer"&gt;Base API&lt;/a&gt; and use &lt;code&gt;TelemetryClient&lt;/code&gt; directly.&lt;/p&gt;

&lt;p&gt;I’ve written &lt;a href="https://github.com/staff0rd/azure-web-monitor/blob/master/src/AzureWebMonitor.Test/ApplicationInsights.cs" rel="noopener noreferrer"&gt;ApplicationInsights.cs&lt;/a&gt; as a wrapper around &lt;code&gt;TelemetryClient&lt;/code&gt;, and it shows we’re sending Availability and Exception telemetry to Application Insights. Integrating this with our Selenium tests and &lt;a href="https://github.com/staff0rd/azure-web-monitor/blob/master/src/AzureWebMonitor.Test/appsettings.json" rel="noopener noreferrer"&gt;configuring the Instrumentation Key&lt;/a&gt; results in our data being available for querying in the Azure Portal.&lt;/p&gt;
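&lt;p&gt;Using &lt;code&gt;TelemetryClient&lt;/code&gt; directly is a sketch like the following - the instrumentation key is shown inline purely for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var configuration = TelemetryConfiguration.CreateDefault();
configuration.InstrumentationKey = "your-instrumentation-key";
var client = new TelemetryClient(configuration);
client.TrackAvailability(new AvailabilityTelemetry
{
    Name = "AzureDotCom_AzureSignalR",
    Success = true,
    Duration = TimeSpan.FromMilliseconds(1234),
    Timestamp = DateTimeOffset.UtcNow
});
client.Flush(); // ensure telemetry is sent before the test run ends

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;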

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Fazure-for-free%2Fappinsights.PNG%23center" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Fazure-for-free%2Fappinsights.PNG%23center" alt="Application Insights query result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can review &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/pricing#limits-summary" rel="noopener noreferrer"&gt;up to 90 days&lt;/a&gt; of data in the portal, using &lt;a href="https://docs.microsoft.com/en-us/azure/kusto/query/" rel="noopener noreferrer"&gt;Kusto&lt;/a&gt; to query and create charts. We can also query the data externally by using the &lt;a href="https://dev.applicationinsights.io" rel="noopener noreferrer"&gt;Application Insights REST API&lt;/a&gt; and we’ll do so below to generate reports that snapshot our data prior to reaching the &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/pricing#limits-summary" rel="noopener noreferrer"&gt;data retention limit&lt;/a&gt;.&lt;/p&gt;
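&lt;p&gt;For example, a Kusto query along these lines charts average test duration over the last week:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;availabilityResults
| where timestamp &amp;gt; ago(7d)
| summarize avg(duration) by bin(timestamp, 1h), name
| render timechart

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;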

&lt;h2&gt;
  
  
  Notifications and Alarms
&lt;/h2&gt;

&lt;p&gt;There are two key events we want to be notified of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Test failures; and&lt;/li&gt;
&lt;li&gt;When the whole system is down and not monitoring.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Test failures are already being emailed by Azure DevOps, but the content of those emails could be a lot better. We can extend the existing test project to build a summary of failures once all tests are complete. Even better, we can &lt;a href="https://github.com/staff0rd/azure-web-monitor/blob/master/src/AzureWebMonitor.Test/WebDriverHelper.cs#L39-L45" rel="noopener noreferrer"&gt;use Selenium to grab screenshots of the browser at the point of failure&lt;/a&gt; and &lt;a href="https://github.com/staff0rd/azure-web-monitor/blob/master/src/AzureWebMonitor.Test/ResultsEmailer.cs" rel="noopener noreferrer"&gt;include these images in the email&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can detect that the whole system is down if our Application Insights instance has not received any new data in x minutes/hours. Originally I used &lt;a href="https://azure.microsoft.com/en-au/services/scheduler/" rel="noopener noreferrer"&gt;Azure Scheduler&lt;/a&gt; for this task, but it is &lt;strong&gt;deprecated and will be retired&lt;/strong&gt; by the end of the year. As such, Logic Apps is our friend: we can use it to query the same &lt;a href="https://dev.applicationinsights.io" rel="noopener noreferrer"&gt;Application Insights REST API&lt;/a&gt; mentioned above and send an email should we detect no data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reporting
&lt;/h2&gt;

&lt;p&gt;Periodically I’d like a summary of how these web properties have been performing, which would basically be a snapshot of the charts we already have generated above using Application Insights. However, Application Insights does not currently offer a way to periodically email these results, so we’ll &lt;a href="https://github.com/staff0rd/azure-web-monitor/tree/master/src/AzureWebMonitor.Report" rel="noopener noreferrer"&gt;write our own&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To do this, we’ll grab our raw telemetry data using the &lt;a href="https://dev.applicationinsights.io" rel="noopener noreferrer"&gt;Application Insights REST API&lt;/a&gt; and we’ll resurrect &lt;a href="https://www.nuget.org/packages/Microsoft.Chart.Controls/" rel="noopener noreferrer"&gt;Microsoft.Chart.Controls&lt;/a&gt; - a WinForms charting control for .NET Framework - to generate charts that look very similar to the results we’ve seen in the portal. We’ll export these charts to .png, reference them in an HTML file we’ll build, and then attach them to a MailMessage for emailing. The result is an HTML-formatted email delivered right to our email client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Fazure-for-free%2Freport.png%23center" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Fazure-for-free%2Freport.png%23center" alt="Periodic report"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Emailing
&lt;/h2&gt;

&lt;p&gt;Throughout the above I’ve mentioned emails that we’re generating, but we’ll need something to actually send them, for example our good friend SMTP. To deliver this functionality for the cool price of free, we’ll use &lt;a href="https://sendgrid.com/" rel="noopener noreferrer"&gt;SendGrid&lt;/a&gt; which lets us send 100 emails/day at no cost - much more than we’ll need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And that about wraps it up - we’ve seen that we can use a range of Azure services and integrations for free or very low cost to implement a fairly robust monitoring system. With the techniques described, we could expand or (totally) change this solution to meet almost any requirements we might have for running workloads in the cloud. I hope this overview has given you some ideas or helped you discover something new - if you can think of an improved way to implement the above or notice any errors, please &lt;a href="https://twitter.com/staff0rd" rel="noopener noreferrer"&gt;reach out&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>selenium</category>
      <category>showdev</category>
      <category>dotnetcore</category>
    </item>
    <item>
      <title>Auto-scaling Azure SignalR Service with Logic Apps</title>
      <dc:creator>Stafford Williams</dc:creator>
      <pubDate>Fri, 12 Jul 2019 14:00:00 +0000</pubDate>
      <link>https://dev.to/staff0rd/auto-scaling-azure-signalr-service-with-logic-apps-269h</link>
      <guid>https://dev.to/staff0rd/auto-scaling-azure-signalr-service-with-logic-apps-269h</guid>
      <description>&lt;p&gt;Azure SignalR Service does not have auto-scale functionality out of the box. In this post I’ll implement my own auto-scaling using Azure Logic Apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling Azure SignalR Service
&lt;/h2&gt;

&lt;p&gt;Microsoft &lt;a href="https://docs.microsoft.com/en-us/azure/azure-signalr/signalr-concept-scale-aspnet-core" rel="noopener noreferrer"&gt;promotes&lt;/a&gt; Azure SignalR Service as a simple way to scale SignalR implementations, but how does the service itself scale? The answer is Units: for every Unit the Standard_S1 SKU accepts another 1,000 concurrent connections. To scale, bump the Unit count to a maximum of 100 (100k concurrent connections) using the Azure Portal, CLI, Azure PowerShell or REST API.&lt;/p&gt;
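&lt;p&gt;For example, scaling via the CLI looks something like this - resource names are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az signalr update --name my-signalr --resource-group my-group --unit-count 5

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;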

&lt;p&gt;Some caveats here are that the service actually only accepts Unit counts of 1, 2, 5, 10, 20, 50 and 100. You can’t set a Unit count of, say, 3 or 60. Additionally, &lt;a href="https://azure.microsoft.com/en-au/pricing/details/signalr-service/" rel="noopener noreferrer"&gt;pricing&lt;/a&gt; includes a per-unit, per-day factor, and a day includes any part thereof, such that if you set a unit count of 100 you’ll be charged 100 x $AUD2.21 for that day, even if you immediately roll back to 1 unit. Pricing here is not consumption based.&lt;/p&gt;

&lt;h2&gt;
  
  
  Will it auto-scale?
&lt;/h2&gt;

&lt;p&gt;I don’t want to scale it myself in the Portal, or by making remote calls; rather, I’d like the service to auto-scale based on concurrent connection count, but the service does not offer this capability out of the box.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It doesn’t currently autoscale. Is it a blocker for you?&lt;/p&gt;

&lt;p&gt;— Anthony Chu (@nthonyChu) &lt;a href="https://twitter.com/nthonyChu/status/1147524475832770560?ref_src=twsrc%5Etfw" rel="noopener noreferrer"&gt;July 6, 2019&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As such, the driving forces of laziness and curiosity within me have led to the creation of a &lt;a href="https://github.com/staff0rd/azure-signalr-autoscale" rel="noopener noreferrer"&gt;Logic App that will auto-scale SignalR Service&lt;/a&gt; for me, even if it’s not a great idea to do so.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automate Azure with Logic Apps
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Flogicapp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Flogicapp.png" alt="logic apps screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/staff0rd/azure-signalr-autoscale" rel="noopener noreferrer"&gt;source and usage is here&lt;/a&gt;, but let’s take a look at what this Logic App is doing.&lt;/p&gt;

&lt;p&gt;Logic Apps doesn’t have a connector available for SignalR Service, so instead we’ll use the &lt;a href="https://docs.microsoft.com/en-us/rest/api/signalr/signalr" rel="noopener noreferrer"&gt;Azure REST API&lt;/a&gt; to query and modify it. There are three REST endpoints we’re interested in:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GET &lt;a href="https://docs.microsoft.com/en-us/rest/api/monitor/metrics/list" rel="noopener noreferrer"&gt;monitor/metrics/list&lt;/a&gt; to query how many concurrent connections there are&lt;/li&gt;
&lt;li&gt;GET &lt;a href="https://docs.microsoft.com/en-us/rest/api/signalr/signalr/get" rel="noopener noreferrer"&gt;signalr/get&lt;/a&gt; to query the current Unit setting and;&lt;/li&gt;
&lt;li&gt;PUT &lt;a href="https://docs.microsoft.com/en-us/rest/api/signalr/signalr/createorupdate" rel="noopener noreferrer"&gt;signalr/createorupdate&lt;/a&gt; to update the unit count.&lt;/li&gt;
&lt;/ol&gt;
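&lt;p&gt;All three calls target the same ARM resource, so their request URLs share a common prefix. As a rough sketch (the subscription, resource group, service name and api-versions below are placeholders I’ve assumed, not values taken from the Logic App):&lt;br&gt;
&lt;/p&gt;

```shell
#!/bin/sh
# Compose the three Azure management REST URLs the Logic App calls.
# Subscription, resource group and service name are placeholders.
SUB="00000000-0000-0000-0000-000000000000"
RG="myResourceGroup"
NAME="mySignalR"
RESOURCE="https://management.azure.com/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.SignalRService/signalR/$NAME"

# 1. GET connection metrics since the last run (append metricnames=ConnectionCount)
echo "GET $RESOURCE/providers/microsoft.insights/metrics?api-version=2018-01-01"
# 2. GET the current resource, which includes its sku capacity (the Unit count)
echo "GET $RESOURCE?api-version=2018-10-01"
# 3. PUT the resource back with an updated sku capacity
echo "PUT $RESOURCE?api-version=2018-10-01"
```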

&lt;p&gt;Everything else in the Logic App is a series of steps that determine whether we need to call the &lt;code&gt;PUT&lt;/code&gt; above and, if we do, what Unit count we’ll set.&lt;/p&gt;

&lt;h2&gt;
  
  
  How many connections until we scale?
&lt;/h2&gt;

&lt;p&gt;The formula I’m using to determine the number of units we need is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(CurrentConnections + BaseConnections + Buffer) / ConnectionsPerUnit

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CurrentConnections&lt;/code&gt; is the maximum connections connected at any time since the last run&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;BaseConnections&lt;/code&gt; reflects that we’ll always need at least 1 unit regardless of connection count&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Buffer&lt;/code&gt; is how close to the maximum connections per unit we’ll get before scaling. I set this to 100 initially so that, for example, reaching 900 connections would add another unit. Testing later showed that the limits are soft and connection counts can run up to 10% over, so this could probably be closer to, or simply, 0.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ConnectionsPerUnit&lt;/code&gt; = 1000&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The formula produces an ideal Unit count; however, Azure SignalR Service only allows Unit counts of 1, 2, 5, 10, 20, 50 or 100, so we’ll choose the one that’s greater than or equal to our ideal count. As a side note, I also found during testing that 0 is a valid unit count; however, setting it blocks both clients and servers from connecting to the service.&lt;/p&gt;
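&lt;p&gt;That selection step can be sketched as a small shell function (the names and the ceiling arithmetic here are illustrative; the Logic App expresses the same logic in Workflow Definition Language):&lt;br&gt;
&lt;/p&gt;

```shell
#!/bin/sh
# Compute the ideal unit count from the formula above, then round it up
# to the nearest valid Azure SignalR Service tier (1, 2, 5, 10, 20, 50, 100).
pick_unit_count() {
  current=$1; base=$2; buffer=$3; per_unit=$4
  # Integer ceiling of (current + base + buffer) / per_unit
  ideal=$(( (current + base + buffer + per_unit - 1) / per_unit ))
  for tier in 1 2 5 10 20 50 100; do
    if [ "$tier" -ge "$ideal" ]; then
      echo "$tier"
      return
    fi
  done
  echo 100  # cap at the largest available tier
}

pick_unit_count 900 0 100 1000   # prints 1
pick_unit_count 1900 0 100 1000  # prints 2
pick_unit_count 2500 0 100 1000  # prints 5
```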

&lt;h2&gt;
  
  
  Scaling up and down
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Flogicapp-end.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fstaffordwilliams.com%2Fassets%2Flogicapp-end.png" alt="set new unit count"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we’ve determined the unit count we need, we’ll compare it to the current SignalR Service unit count. If they’re equal, there’s nothing more to do. If they’re different, we’ll call the &lt;code&gt;createorupdate&lt;/code&gt; endpoint with our new unit count, at which point the service will reload, disconnecting all current connections with the following HubException:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Connection terminated with error: Microsoft.AspNetCore.SignalR.HubException: The server closed the connection with the following error: ServiceReload&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Whilst disconnection sounds alarming, it happens often enough in practice that client applications should already have re-connection logic built in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Parameterising for ARM template deployment
&lt;/h2&gt;

&lt;p&gt;Editing Logic Apps in the Designer is frustrating to say the least, so converting variables first to &lt;a href="https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-workflow-definition-language#parameters" rel="noopener noreferrer"&gt;Workflow Definition Language Parameters&lt;/a&gt; and then into &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates#parameters" rel="noopener noreferrer"&gt;ARM template parameters&lt;/a&gt; is time consuming but lets us pass parameters and update the Logic App from the commandline. Embedding ARM parameters directly into Logic App definitions is &lt;a href="https://pacodelacruzag.wordpress.com/2017/10/11/preparing-azure-logic-apps-for-cicd/" rel="noopener noreferrer"&gt;not a great idea&lt;/a&gt;.&lt;/p&gt;
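&lt;p&gt;Once the template parameters exist, the values can live in a parameters file. A minimal &lt;code&gt;parameters.json&lt;/code&gt; might look something like this (the parameter names are illustrative, not necessarily the exact ones the template defines):&lt;br&gt;
&lt;/p&gt;

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "signalRName": { "value": "mySignalR" },
    "connectionsPerUnit": { "value": 1000 },
    "buffer": { "value": 100 },
    "scaleInterval": { "value": 30 }
  }
}
```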

&lt;p&gt;We can then deploy the Logic App via the CLI and have it run and auto-scale every 30 minutes like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az group deployment create \
    --resource-group yourResourceGroup \
    --template-file template.json \
    --parameters @parameters.json \
    --parameters scaleInterval=30

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Would I do it again?
&lt;/h2&gt;

&lt;p&gt;Now that &lt;a href="https://github.com/staff0rd/azure-signalr-autoscale" rel="noopener noreferrer"&gt;it exists&lt;/a&gt;, I can auto-scale SignalR Service pretty effortlessly, but the lack of granular control over unit count and the absence of consumption pricing make scaling up costly and scaling down before the end of a given day pointless.&lt;/p&gt;

&lt;p&gt;Building this in Logic Apps was a learning experience, but the Designer and &lt;a href="https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-workflow-definition-language" rel="noopener noreferrer"&gt;Workflow Definition Language&lt;/a&gt; had some shortcomings that made progress slow and frustrating. These included wacky workarounds for &lt;a href="https://powerusers.microsoft.com/t5/Building-Flows/Type-Conversion-Errors-decimal-to-int/m-p/289227#M30341" rel="noopener noreferrer"&gt;converting floats to integers&lt;/a&gt; and WDL not having &lt;code&gt;floor&lt;/code&gt; or &lt;code&gt;round&lt;/code&gt; functions. I found myself constantly fighting the Designer when it failed to display dynamic variables, inserted superfluous &lt;code&gt;foreach&lt;/code&gt; steps, blocked renaming of steps once they were referenced by another, and offered no way to edit parameters. These problems meant that, more often than not, I needed to edit the underlying JSON directly.&lt;/p&gt;

&lt;p&gt;It’d be interesting to see how the same functionality could be achieved using Az PowerShell modules inside an Azure Function.&lt;/p&gt;

</description>
      <category>signalr</category>
      <category>azure</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Load testing ASP.NET Core SignalR</title>
      <dc:creator>Stafford Williams</dc:creator>
      <pubDate>Mon, 10 Jun 2019 12:50:17 +0000</pubDate>
      <link>https://dev.to/staff0rd/load-testing-asp-net-core-signalr-4pff</link>
      <guid>https://dev.to/staff0rd/load-testing-asp-net-core-signalr-4pff</guid>
      <description>&lt;p&gt;&lt;a href="https://staffordwilliams.com/blog/2019/03/20/azure-signalr-service/"&gt;Last time around&lt;/a&gt; I messed with SignalR I touched briefly on &lt;a href="https://staffordwilliams.com/blog/2019/03/20/azure-signalr-service/#load-testing"&gt;load testing&lt;/a&gt;.  This time around I'll deep dive into SignalR load testing, specifically to test the tool supplied in source, Crankier, and build my own load testing tools to investigate the limits of SignalR applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;I have a SignalR-based application I'm building that I intend to gradually test with increasing-sized audiences of people.  Prior to these real-human tests I'd like to have confidence and an understanding of what the connection limits and latency expectations are for the application.  An application demo falling over due to load that could have been investigated with robots 🤖 instead of people 🤦‍♀ is an experience I want to skip.&lt;/p&gt;

&lt;p&gt;I started load testing SignalR three months ago and it took me down a crazy rabbit hole of learning - this post will summarise both the journey and the findings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Crankier
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/aspnet/AspNetCore/tree/master/src/SignalR/perf/benchmarkapps/Crankier"&gt;Crankier&lt;/a&gt; is an ASP.NET Core SignalR port of Crank, which was a load testing tool shipped with ASP.NET SignalR.  At the moment the only thing Crankier does is attempt to hold open concurrent connections to a SignalR hub.  There's also a &lt;a href="https://github.com/aspnet/AspNetCore/tree/master/src/SignalR/perf/benchmarkapps/BenchmarkServer"&gt;BenchmarkServer&lt;/a&gt; which we can use to host a SignalR hub for our load testing purposes.&lt;/p&gt;

&lt;p&gt;At the very least, we can clone the &lt;a href="https://github.com/aspnet/AspNetCore"&gt;aspnetcore repo&lt;/a&gt; and run both of these apps as a local test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/aspnet/AspNetCore
&lt;span class="nb"&gt;cd &lt;/span&gt;aspnetcore/src/SignalR/perf/benchmarkapps
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Start the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;BenchmarkServer
dotnet run
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Start crankier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;Crankier
dotnet run &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;local&lt;/span&gt; &lt;span class="nt"&gt;--target-url&lt;/span&gt; http://localhost:5000/echo &lt;span class="nt"&gt;--workers&lt;/span&gt; 10
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I put the server on an Azure App Service and pointed Crankier at it too.  Here are the results:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Server&lt;/th&gt;
&lt;th&gt;Max connections&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Local&lt;/td&gt;
&lt;td&gt;8113&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B1 App Service&lt;/td&gt;
&lt;td&gt;350&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;S1 App Service&lt;/td&gt;
&lt;td&gt;768&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;P1V1 App Service&lt;/td&gt;
&lt;td&gt;768&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These results look alarming, but there's reasoning behind them.  8113 turns out to be the maximum number of HTTP connections my local machine can make, even to itself.  But if that's the case, why can't I get that number with the App Services?  The limit on B1 is stated on &lt;a href="https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits"&gt;Azure subscription service limits&lt;/a&gt;, but the same page notes &lt;em&gt;Unlimited&lt;/em&gt; for &lt;em&gt;Web sockets per instance&lt;/em&gt; on S1 and P1V1.  It turns out (via Azure Support) that 768 is the (undocumented) connection limit per client.  I'll need more clients!&lt;/p&gt;

&lt;h2&gt;
  
  
  Hosting Crankier inside a container
&lt;/h2&gt;

&lt;p&gt;I want to spawn multiple clients to test connection limits, and containers seem like a great way to do this.  Sticking Crankier inside a container is pretty easy; here's the &lt;a href="https://github.com/staff0rd/docker-crankier/blob/master/client/container/Dockerfile"&gt;Dockerfile&lt;/a&gt; I built to do it.  I've pushed this to docker hub, so we can skip building it and run it directly.  The &lt;code&gt;run&lt;/code&gt; command above now becomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run staff0rd/crankier &lt;span class="nt"&gt;--target-url&lt;/span&gt; http://myPath/echo &lt;span class="nt"&gt;--workers&lt;/span&gt; 10
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Using this approach I can push the App Services above 768 concurrent connections, but I still need one client per 768 connections.  I want to chase higher numbers, so I'll swap the App Services out for Virtual Machines, on which I'll run the BenchmarkServer directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hosting BenchmarkServer inside a container
&lt;/h2&gt;

&lt;p&gt;Now that I have multiple clients, it's no longer clear how many concurrent connections I've reached.  I'll extend the benchmark server to echo some important information every second:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Total time we've been running&lt;/li&gt;
&lt;li&gt;Current connection count&lt;/li&gt;
&lt;li&gt;Peak connection count&lt;/li&gt;
&lt;li&gt;New connections in last second&lt;/li&gt;
&lt;li&gt;New disconnections in the last second&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've &lt;a href="https://github.com/staff0rd/AspNetCore/tree/master/src/SignalR/perf/benchmarkapps/BenchmarkServer"&gt;forked aspnetcore here&lt;/a&gt; to implement the above functionality.&lt;/p&gt;

&lt;p&gt;Additionally, I'll put it inside a container so I don't have to install any of its dependencies on the VM, like the dotnet sdk.  Here's the &lt;a href="https://github.com/staff0rd/docker-crankier/blob/master/server/Dockerfile"&gt;Dockerfile&lt;/a&gt;, but as usual it's on docker hub, so we can now start the server like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 crankier staff0rd/crankier-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Container client results
&lt;/h2&gt;

&lt;p&gt;Now we can raise 10-20 (or more!) containers at a time asynchronously with the following command, where &lt;code&gt;XX&lt;/code&gt; is incremented each time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;az container create &lt;span class="nt"&gt;-g&lt;/span&gt; crankier &lt;span class="nt"&gt;--image&lt;/span&gt; staff0rd/crankier &lt;span class="nt"&gt;--cpu&lt;/span&gt; 1 &lt;span class="nt"&gt;--memory&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--command-line&lt;/span&gt; &lt;span class="s2"&gt;"--target-url http://myPath/echo --workers 10 --connections 20000"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--no-wait&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; myContainerXX
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
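&lt;p&gt;Incrementing &lt;code&gt;XX&lt;/code&gt; by hand gets tedious, so the same commands can be generated in a loop. A sketch (it only prints the commands; drop the &lt;code&gt;echo&lt;/code&gt; to run them for real against your subscription):&lt;br&gt;
&lt;/p&gt;

```shell
#!/bin/sh
# Generate one `az container create` command per client container,
# incrementing the container name each time. This only prints the
# commands; remove the echo to actually execute them.
for i in $(seq 1 20); do
  echo az container create -g crankier --image staff0rd/crankier --cpu 1 --memory 1 \
    --command-line \"--target-url http://myPath/echo --workers 10 --connections 20000\" \
    --no-wait --name "myContainer$i"
done
```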



&lt;p&gt;Over many tests I found that the operating system didn't seem to make much of a difference, so I stuck to Linux as it's cheaper.  I didn't detect any difference between running the server inside a container with docker and running it directly via &lt;code&gt;dotnet&lt;/code&gt; installed on the VM, so future tests stick to running the server inside docker only.&lt;/p&gt;

&lt;p&gt;Here are the results:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;VmSize&lt;/th&gt;
&lt;th&gt;Max connections&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Os&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;B2s&lt;/td&gt;
&lt;td&gt;64200&lt;/td&gt;
&lt;td&gt;15m&lt;/td&gt;
&lt;td&gt;Windows Datacenter 2019&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B2s&lt;/td&gt;
&lt;td&gt;64957&lt;/td&gt;
&lt;td&gt;18m&lt;/td&gt;
&lt;td&gt;Windows Datacenter 2019&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B2s&lt;/td&gt;
&lt;td&gt;57436&lt;/td&gt;
&lt;td&gt;&amp;gt; 5m&lt;/td&gt;
&lt;td&gt;Ubuntu 16.04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B2ms&lt;/td&gt;
&lt;td&gt;107944&lt;/td&gt;
&lt;td&gt;7m&lt;/td&gt;
&lt;td&gt;Ubuntu 16.04 (50+ containers)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Overall these results are a bit lower than I was expecting, and two problems still existed.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;By default Crankier only holds connections open for 5 minutes before dropping them, so any test running over 5 minutes was dropping its connections; and&lt;/li&gt;
&lt;li&gt;Some containers were maxing out at only 500 concurrent connections.  If I raised 10 containers, only 1 or 2 of them would crank past 500.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first one is easily solved by passing &lt;code&gt;--send-duration 10000&lt;/code&gt; to hold connections open for 10,000 seconds, but the second would require a move to VMs as clients.&lt;/p&gt;

&lt;h2&gt;
  
  
  Crankier running on VMs
&lt;/h2&gt;

&lt;p&gt;I found that VMs were much more reliable at bringing up many connections, but they weren't as easily automated as containers.  So, I &lt;a href="https://github.com/staff0rd/docker-crankier/tree/master/client/vm"&gt;built the automation myself&lt;/a&gt; with these scripts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/staff0rd/docker-crankier/blob/master/client/vm/clientVM.json"&gt;clientVM.json&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates"&gt;ARM template&lt;/a&gt; that specifies the structure of the VM to bring up per client.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/staff0rd/docker-crankier/blob/master/client/vm/startUpScript.sh"&gt;startUpScript.sh&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Install docker on the VM once it's initialised and &lt;code&gt;docker pull staff0rd/crankier&lt;/code&gt; in preparation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/staff0rd/docker-crankier/blob/master/client/vm/Up.ps1"&gt;Up.ps1&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Asynchronously raise &lt;code&gt;count&lt;/code&gt; VMs on size &lt;code&gt;vmSize&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/staff0rd/docker-crankier/blob/master/client/vm/RunCommand.ps1"&gt;RunCommand.ps1&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Running commands on VMs is not quick, so this script enables faster command running using ssh &amp;amp; powershell jobs.  We can use this to send commands to all the VMs and get the result back.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using the scripts above I quickly found that Azure places a limit of 20 cores per region by default.  As a workaround, I raise ten 2-core VMs per region.  Here's an example of raising 30 VMs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;\Up.ps1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-location&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;australiaeast&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;\Up.ps1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-offset&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-location&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;westus&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;\Up.ps1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-offset&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;20&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-location&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;eastus&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I can monitor the progress of bringing the VMs up with &lt;code&gt;Get-Job&lt;/code&gt; and &lt;code&gt;Get-Job | Receive-Job&lt;/code&gt;.  Once the jobs are completed I can clear them with &lt;code&gt;Get-Job | Remove-Job&lt;/code&gt;.  Because the VMs are all brought up asynchronously it takes about 5 minutes total to bring them all up.  After they're up, we can send commands to them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;\RunCommand.ps1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-command&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker run --name crankier -d staff0rd/crankier --send-duration 10000 --target-url http://mypath/echo --connections 10000 --workers 20"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If we've set the client's &lt;code&gt;target-url&lt;/code&gt; correctly, we should now see the server echoing the incoming connections:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[00:00:00] Current: 178, peak: 178, connected: 160, disconnected: 0, rate: 160/s
[00:00:02] Current: 432, peak: 432, connected: 254, disconnected: 0, rate: 254/s
[00:00:02] Current: 801, peak: 801, connected: 369, disconnected: 0, rate: 369/s
[00:00:03] Current: 1171, peak: 1171, connected: 370, disconnected: 0, rate: 370/s
[00:00:05] Current: 1645, peak: 1645, connected: 474, disconnected: 0, rate: 474/s
[00:00:05] Current: 2207, peak: 2207, connected: 562, disconnected: 0, rate: 562/s
[00:00:06] Current: 2674, peak: 2674, connected: 467, disconnected: 0, rate: 467/s
[00:00:08] Current: 3145, peak: 3145, connected: 471, disconnected: 0, rate: 471/s
[00:00:08] Current: 3747, peak: 3747, connected: 602, disconnected: 0, rate: 602/s
[00:00:10] Current: 4450, peak: 4450, connected: 703, disconnected: 0, rate: 703/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Monitoring client VM connections
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;RunCommand.ps1&lt;/code&gt; lets us send any command we like to every VM, so we can use &lt;code&gt;docker logs&lt;/code&gt; to get the last line logged from every VM and monitor their status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;\RunCommand.ps1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-command&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker logs --tail 1 crankier"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"ConnectingCount":10,"ConnectedCount":8038,"DisconnectedCount":230,"ReconnectingCount":0,"FaultedCount":34,"TargetConnectionCount":10000,"PeakConnections":8038}

{"ConnectingCount":10,"ConnectedCount":8026,"DisconnectedCount":211,"ReconnectingCount":0,"FaultedCount":34,"TargetConnectionCount":10000,"PeakConnections":8026}

{"ConnectingCount":10,"ConnectedCount":7984,"DisconnectedCount":187,"ReconnectingCount":0,"FaultedCount":32,"TargetConnectionCount":10000,"PeakConnections":7986}
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
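&lt;p&gt;Because each client reports its status as a JSON line, the per-VM output can be summed into a fleet-wide total with standard tools. A sketch, using sample lines in place of the real &lt;code&gt;RunCommand.ps1&lt;/code&gt; output:&lt;br&gt;
&lt;/p&gt;

```shell
#!/bin/sh
# Sum ConnectedCount across the JSON status lines each client reports.
# The sample lines below stand in for real output from the fleet.
logs='{"ConnectingCount":10,"ConnectedCount":8038,"PeakConnections":8038}
{"ConnectingCount":10,"ConnectedCount":8026,"PeakConnections":8026}
{"ConnectingCount":10,"ConnectedCount":7984,"PeakConnections":7986}'

echo "$logs" \
  | grep -o '"ConnectedCount":[0-9]*' \
  | cut -d: -f2 \
  | awk '{ total += $1 } END { print total }'   # prints 24048
```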



&lt;p&gt;Here's an example of killing the containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;\RunCommand.ps1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-command&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker rm -f crankier"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;Over the last three months I've raised ~980 VMs, slowly enhancing how I test and capture data.  The lines below represent some of those tests; the later ones also include the full log of the test.&lt;/p&gt;

&lt;h3&gt;
  
  
  Standard_D2s_v3 server
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time from start&lt;/th&gt;
&lt;th&gt;Peak connections&lt;/th&gt;
&lt;th&gt;Logs&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;15:35&lt;/td&gt;
&lt;td&gt;93,100&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;07:38&lt;/td&gt;
&lt;td&gt;100,669&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24:16&lt;/td&gt;
&lt;td&gt;91,541&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24:04&lt;/td&gt;
&lt;td&gt;92,506&lt;/td&gt;
&lt;td&gt;&lt;a href="https://pastebin.com/QPLgDeZt"&gt;https://pastebin.com/QPLgDeZt&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;07:54&lt;/td&gt;
&lt;td&gt;100,730&lt;/td&gt;
&lt;td&gt;&lt;a href="https://pastebin.com/FB9skzJE"&gt;https://pastebin.com/FB9skzJE&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13:31&lt;/td&gt;
&lt;td&gt;91,541&lt;/td&gt;
&lt;td&gt;&lt;a href="https://pastebin.com/sDLdm0bh"&gt;https://pastebin.com/sDLdm0bh&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Average 80% CPU/RAM&lt;/p&gt;

&lt;h3&gt;
  
  
  Standard_D8s_v3 server
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time from start&lt;/th&gt;
&lt;th&gt;Peak connections&lt;/th&gt;
&lt;th&gt;Logs&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;02:34&lt;/td&gt;
&lt;td&gt;107,564&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;05:55&lt;/td&gt;
&lt;td&gt;111,665&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;03:43&lt;/td&gt;
&lt;td&gt;132,175&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;25:33&lt;/td&gt;
&lt;td&gt;210,746&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13:03&lt;/td&gt;
&lt;td&gt;214,025&lt;/td&gt;
&lt;td&gt;&lt;a href="https://pastebin.com/wkttPAaS"&gt;https://pastebin.com/wkttPAaS&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Average 40% CPU/RAM&lt;/p&gt;

&lt;h3&gt;
  
  
  Standard_D32s_v3 server
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time from start&lt;/th&gt;
&lt;th&gt;Peak connections&lt;/th&gt;
&lt;th&gt;Logs&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;11:05&lt;/td&gt;
&lt;td&gt;236,906&lt;/td&gt;
&lt;td&gt;&lt;a href="https://pastebin.com/mm3RZM1y"&gt;https://pastebin.com/mm3RZM1y&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:28&lt;/td&gt;
&lt;td&gt;245,217&lt;/td&gt;
&lt;td&gt;&lt;a href="https://pastebin.com/6kAPJB9R"&gt;https://pastebin.com/6kAPJB9R&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Average 20% CPU/RAM&lt;/p&gt;

&lt;p&gt;The logs tell an interesting story, including the limits on new connections per second, and how long it takes before Kestrel starts slowing down with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Heartbeat took longer than "00:00:01" at "05/21/2019 09:37:19 +00:00"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;and the time until SignalR starts throwing the following exception (&lt;a href="https://github.com/aspnet/AspNetCore/issues/6701"&gt;GitHub issue&lt;/a&gt; - possible &lt;a href="https://github.com/aspnet/AspNetCore/pull/10043"&gt;fix&lt;/a&gt;), &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Failed writing message. Aborting connection.&lt;/p&gt;

&lt;p&gt;System.InvalidOperationException: Writing is not allowed after writer was completed&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Findings
&lt;/h2&gt;

&lt;p&gt;My original target was guided by &lt;a href="https://twitter.com/anurse/status/983406560880701440?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E983406560880701440%7Ctwgr%5E393039363b636f6e74726f6c&amp;amp;ref_url=http%3A%2F%2Fstaffordwilliams.com%2Fblog%2F2019%2F03%2F20%2Fazure-signalr-service%2F"&gt;this tweet&lt;/a&gt; from April 2018, which suggests 236k concurrent connections at 9.5GB.  From the tests above it doesn't look like ASP.NET Core SignalR is currently (&lt;code&gt;dotnet 3.0.100-preview6-011744&lt;/code&gt;) capable of such a number with so little memory.  &lt;code&gt;B2ms&lt;/code&gt;, which has 8GB, peaked at 107k, with &lt;code&gt;D2s_v3&lt;/code&gt; similar.  However, with &lt;code&gt;D8s_v3&lt;/code&gt; and &lt;code&gt;D32s_v3&lt;/code&gt; peaking at 214k and 245k respectively, it's clear that CPU and memory are not currently the limiting factor.  With the tools I've created to automate the deployment of both server and clients, once .NET Core 3 reaches &lt;a href="https://en.wikipedia.org/wiki/Software_release_life_cycle#Release_to_manufacturing_(RTM)"&gt;RTM&lt;/a&gt; it will be relatively trivial to re-test at a later date.&lt;/p&gt;

&lt;h2&gt;
  
  
  Taking it further
&lt;/h2&gt;

&lt;p&gt;I've sunk quite a bit of time into this load testing project. It's resulted in &lt;a href="https://hub.docker.com/r/staff0rd/pastebin"&gt;three&lt;/a&gt; &lt;a href="https://hub.docker.com/r/staff0rd/crankier"&gt;new&lt;/a&gt; &lt;a href="https://hub.docker.com/r/staff0rd/crankier-server"&gt;containers&lt;/a&gt; and a few &lt;a href="https://github.com/aspnet/AspNetCore/pulls?q=is%3Apr+author%3Astaff0rd+is%3Aclosed"&gt;merges to aspnet/aspnetcore&lt;/a&gt;.  Even so, there's still things to do.&lt;/p&gt;

&lt;p&gt;The functionality from the forked BenchmarkServer should instead be &lt;a href="https://github.com/aspnet/AspNetCore/pull/9264#issuecomment-494565663"&gt;moved in to Crankier itself&lt;/a&gt;.  The server logs are also missing important metrics (total CPU and memory usage); however, there doesn't seem to be a nice way to grab these in current .NET Core (for now I monitor &lt;code&gt;top&lt;/code&gt; in another ssh session to the server).  Finally, one could leverage Application Insights and, along with echoing to std out, also push telemetry to App Insights via &lt;code&gt;TelemetryClient&lt;/code&gt;; this would give pleasant graphs and log querying rather than pastebin log dumps.&lt;/p&gt;

&lt;h2&gt;
  
  
  A final note
&lt;/h2&gt;

&lt;p&gt;Having become acquainted with Crankier I can appreciate its own limits.  The current implementation only tests concurrent connections and not messaging between client and server, without which it does not reflect "real" load on a SignalR application.  To test your own application, not only should messaging be tested, but specifically the messaging that your &lt;code&gt;Hub&lt;/code&gt; implementations expect.  Instead of extending Crankier to test your own &lt;code&gt;Hub&lt;/code&gt; methods, it's much easier to use &lt;a href="https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client/"&gt;Microsoft.AspNetCore.SignalR.Client&lt;/a&gt; to write your own class that uses &lt;code&gt;HubConnection&lt;/code&gt; to call your application's &lt;code&gt;Hub&lt;/code&gt; methods directly, acting as an automated user specific to your application.&lt;/p&gt;

&lt;p&gt;Chasing concurrent connection counts in this manner has been fun, but doesn't reflect what production should look like.  Ramping VM size to achieve higher connection counts ignores that one VM is one single point of failure for your application.  In production, using something like &lt;a href="https://staffordwilliams.com/blog/2019/03/20/azure-signalr-service/"&gt;Azure SignalR Service&lt;/a&gt; would be a better approach to scaling concurrent connections.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>signalr</category>
      <category>azure</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Docker containers aren't just for long running tasks</title>
      <dc:creator>Stafford Williams</dc:creator>
      <pubDate>Mon, 27 May 2019 12:03:59 +0000</pubDate>
      <link>https://dev.to/staff0rd/docker-containers-aren-t-just-for-long-running-tasks-3kha</link>
      <guid>https://dev.to/staff0rd/docker-containers-aren-t-just-for-long-running-tasks-3kha</guid>
      <description>&lt;p&gt;Docker makes it easy to run long-running services like &lt;a href="https://hub.docker.com/_/microsoft-mssql-server"&gt;sql server&lt;/a&gt; or &lt;a href="https://hub.docker.com/r/linuxserver/nzbget/"&gt;nzbget&lt;/a&gt;, but did you know that docker is also an excellent option for executing shorter workloads? This could be &lt;a href="https://hub.docker.com/_/microsoft-dotnet-core"&gt;compiling a dotnet core program&lt;/a&gt; without installing the sdk, or &lt;a href="https://hub.docker.com/r/jekyll/jekyll/"&gt;building your jekyll blog&lt;/a&gt; without installing ruby. &lt;/p&gt;

&lt;p&gt;To demonstrate both a short-task use-case and the ease of creating your own docker containers, in this post I'll build a container that lets you, and me, push log files or any other text content to pastebin.com via the command line.&lt;/p&gt;

&lt;h2&gt;
  
  
  Publishing logs from any VM
&lt;/h2&gt;

&lt;p&gt;Pastebin lets you quickly post and share text files online.  They offer an &lt;a href="https://pastebin.com/api"&gt;API&lt;/a&gt; so you can automate this process and have &lt;a href="https://pastebin.com/api#2"&gt;examples of using it written in PHP&lt;/a&gt;.  In my use-case, I need to publish logs created on transient VMs in Azure, but I don't want to install PHP or its dependencies, nor the scripts required to post the logs.  These VMs already have docker available, so my intention is to package Pastebin's PHP code into a container and push the container to docker hub.&lt;/p&gt;

&lt;p&gt;This approach will let me use &lt;code&gt;docker run staff0rd/pastebin&lt;/code&gt; on any VM, and the image download and container execution will be taken care of automagically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it now
&lt;/h2&gt;

&lt;p&gt;Now that I've built and published this container, you also have access to this functionality without having to go through any of the following effort.  If you've got docker installed, you can try this right now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; staff0rd/pastebin &lt;span class="nt"&gt;-k&lt;/span&gt; &amp;lt;yourDevApiKey&amp;gt; &lt;span class="s2"&gt;"post this content to pastebin"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can extend, modify, and use the source to create your own if you wish, or, &lt;a href="https://github.com/staff0rd/pastebin/pulls"&gt;open a PR and collaborate&lt;/a&gt; on this one.  The &lt;a href="https://hub.docker.com/r/staff0rd/pastebin"&gt;image is on dockerhub&lt;/a&gt; and the &lt;a href="https://www.github.com/staff0rd/pastebin"&gt;source is on github&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Dockerfile
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/staff0rd/pastebin/blob/master/Dockerfile"&gt;Dockerfile&lt;/a&gt; represents the image we're going to &lt;code&gt;docker build&lt;/code&gt;, and is a great example of how trivial it is to create your own containers leveraging the hard work others have already done for you.  If I want to host or run a PHP script, I'm going to base my &lt;code&gt;Dockerfile&lt;/code&gt; on the &lt;a href="https://hub.docker.com/_/php"&gt;official php image&lt;/a&gt;.  In this case, and as we'll see below, I ended up pulling in a php dependency manager, so I'll base my image on &lt;a href="https://hub.docker.com/_/composer"&gt;that image&lt;/a&gt;, which in turn is based on the php image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM composer:1.8
COPY ./src /app
WORKDIR /app
RUN composer install
ENTRYPOINT [ "php", "./pastebin.php" ]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the five lines above, I've set the base image for my container, copied my script and set the working directory, executed the install command for the dependency manager, and told the container to run the script when it starts.  I can run the following command and the container is ready to be used on the machine I built it on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; staff0rd/pastebin &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then I can run this command, and the container can be &lt;code&gt;pull&lt;/code&gt;ed or &lt;code&gt;run&lt;/code&gt; from any machine on the internet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker push staff0rd/pastebin
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Building the script
&lt;/h2&gt;

&lt;p&gt;The larger part of this exercise is building &lt;a href="https://github.com/staff0rd/pastebin/blob/master/src/pastebin.php"&gt;the script that gets copied inside the container&lt;/a&gt;.  For the actual execution I'll just use the &lt;a href="https://pastebin.com/api#2"&gt;example code provided by pastebin&lt;/a&gt;, but because this script will be encapsulated within the container, the parameters it needs are not obvious outside the container.  To solve this, I'll use &lt;a href="https://github.com/nategood/commando"&gt;commando&lt;/a&gt; to build a nice CLI helper.  As a dependency, commando can be installed via &lt;a href="https://getcomposer.org/"&gt;Composer&lt;/a&gt;, a php dependency manager, and hence I based the Dockerfile on Composer rather than php directly.&lt;/p&gt;

&lt;p&gt;At the very least, commando gives me a nice &lt;code&gt;--help&lt;/code&gt; output, so the options implemented by the script within the container are no longer a secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;λ docker run &lt;span class="nt"&gt;-it&lt;/span&gt; staff0rd/pastebin &lt;span class="nt"&gt;--help&lt;/span&gt;

Examples:

docker run &lt;span class="nt"&gt;-it&lt;/span&gt; staff0rd/pastebin &lt;span class="nt"&gt;-k&lt;/span&gt; &amp;lt;devKey&amp;gt; &lt;span class="s2"&gt;"paste this text to pastebin!"&lt;/span&gt;
    Paste the given text to pastebin

&lt;span class="nb"&gt;cat &lt;/span&gt;myfile.log | docker run &lt;span class="nt"&gt;-i&lt;/span&gt; staff0rd/pastebin &lt;span class="nt"&gt;-k&lt;/span&gt; &amp;lt;devKey&amp;gt;
    Paste the contents of myfile.log to pastebin

docker run &lt;span class="nt"&gt;-it&lt;/span&gt; staff0rd/pastebin &lt;span class="nt"&gt;-k&lt;/span&gt; &amp;lt;devKey&amp;gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &amp;lt;userName&amp;gt; &lt;span class="nt"&gt;--password&lt;/span&gt; &amp;lt;password&amp;gt;
    Retrieve a userKey to use associate pastes with your user

arg 0
     Content to to &lt;span class="nb"&gt;paste&lt;/span&gt;

&lt;span class="nt"&gt;-k&lt;/span&gt;/--devkey &amp;lt;argument&amp;gt;
     Required. Your api developer key

&lt;span class="nt"&gt;--help&lt;/span&gt;
     Show the &lt;span class="nb"&gt;help &lt;/span&gt;page &lt;span class="k"&gt;for &lt;/span&gt;this command.

&lt;span class="nt"&gt;-j&lt;/span&gt;/--userkey &amp;lt;argument&amp;gt;
     Your user key

&lt;span class="nt"&gt;-n&lt;/span&gt;/--name &amp;lt;argument&amp;gt;
     Name or title of your &lt;span class="nb"&gt;paste&lt;/span&gt;

&lt;span class="nt"&gt;-p&lt;/span&gt;/--public
     Make this &lt;span class="nb"&gt;paste &lt;/span&gt;public.  Default is unlisted.

&lt;span class="nt"&gt;--password&lt;/span&gt; &amp;lt;argument&amp;gt;
     Your pastebin password

&lt;span class="nt"&gt;-u&lt;/span&gt;/--username &amp;lt;argument&amp;gt;
     Your pastebin user name
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrapping it up
&lt;/h2&gt;

&lt;p&gt;I've pushed the &lt;a href="https://hub.docker.com/r/staff0rd/pastebin"&gt;image&lt;/a&gt; to docker hub and enabled automatic builds, so any updates to the &lt;a href="https://github.com/staff0rd/pastebin"&gt;git repo&lt;/a&gt; will result in a new image being built on docker hub.&lt;/p&gt;

&lt;p&gt;Otherwise, I can (and so can you, because docker) send any file to pastebin on any docker-enabled host with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;sendThisFile.txt | docker run &lt;span class="nt"&gt;-i&lt;/span&gt; staff0rd/pastebin &lt;span class="nt"&gt;-k&lt;/span&gt; thisIsMyDevKey
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I hope this post has illustrated both how easy it is to create your own docker containers and how docker is also a great tool for executing, distributing, and deploying short-running tasks.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Publishing a custom Azure Pipelines release task</title>
      <dc:creator>Stafford Williams</dc:creator>
      <pubDate>Sat, 18 May 2019 14:00:00 +0000</pubDate>
      <link>https://dev.to/staff0rd/publishing-a-custom-azure-pipelines-release-task-2pd</link>
      <guid>https://dev.to/staff0rd/publishing-a-custom-azure-pipelines-release-task-2pd</guid>
      <description>&lt;p&gt;I wanted to clear my Cloudflare cache during an Azure Pipeline deployment. Cloudflare offers a REST API for this, but rather than poke around with scripts I decided to see how I could instead write a custom task and publish it on the Visual Studio Marketplace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get the extension
&lt;/h2&gt;

&lt;p&gt;The extension is &lt;a href="https://marketplace.visualstudio.com/items?itemName=staff0rd.tfx-cloudflare-purge"&gt;free on Visual Studio Marketplace&lt;/a&gt; and it’s &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge"&gt;open source on github&lt;/a&gt;. Here’s a screenshot of how it looks in Azure DevOps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qmP-tmB---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://staffordwilliams.com/assets/purge-for-cloudflare.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qmP-tmB---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://staffordwilliams.com/assets/purge-for-cloudflare.png" alt="Screenshot of Purge Cache for Cloudflare release task"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I built it
&lt;/h2&gt;

&lt;p&gt;I followed a great &lt;a href="https://docs.microsoft.com/en-us/azure/devops/extend/develop/add-build-task?view=azure-devops"&gt;walk-through on docs.microsoft&lt;/a&gt;. I like that the instructions promote typescript and testing, though I’m not going to write tests for such a small task.&lt;/p&gt;

&lt;p&gt;Another netizen &lt;a href="https://www.david-tec.com/2018/06/clear-the-cloudflare-cache-as-part-of-a-release-in-visual-studio-team-services-vsts/"&gt;already pointed out&lt;/a&gt; that I would need to first query the Cloudflare REST API to translate a zone name into a zone id, and then request the purge. I turned those instructions into &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/master/cloudflarePurge/index.ts"&gt;typescript&lt;/a&gt;, compiled to javascript, and bundled the javascript as a vsix extension using &lt;code&gt;tfx-cli&lt;/code&gt;.&lt;/p&gt;
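&lt;p&gt;Outside the task, those two Cloudflare v4 API calls can be sketched with &lt;code&gt;curl&lt;/code&gt;. This is a hedged sketch, not the task’s actual code: the zone name, email, and &lt;code&gt;CF_API_KEY&lt;/code&gt; values are placeholders, and the &lt;code&gt;sed&lt;/code&gt; extraction of the id is illustrative:&lt;/p&gt;

```shell
# Sketch of the two Cloudflare v4 API calls (all credential values are placeholders).
# 1) Resolve the zone name to a zone id.
ZONE_ID=$(curl -s "https://api.cloudflare.com/client/v4/zones?name=example.com" \
  -H "X-Auth-Email: you@example.com" \
  -H "X-Auth-Key: $CF_API_KEY" \
  | sed -n 's/.*"id":"\([a-f0-9]*\)".*/\1/p' | head -n 1)

# 2) Request a full cache purge for that zone.
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/purge_cache" \
  -H "X-Auth-Email: you@example.com" \
  -H "X-Auth-Key: $CF_API_KEY" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'
```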

&lt;p&gt;Overall, &lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge"&gt;the code&lt;/a&gt; is very simple. The structure is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;/&lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/master/vss-extension.json"&gt;vss-extension.json&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Extension manifest; defines basic information about what the extension does. &lt;a href="https://docs.microsoft.com/en-us/azure/devops/extend/develop/manifest?view=azure-devops"&gt;Reference docs&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;/images/extension-icon.png 

&lt;ul&gt;
&lt;li&gt;Icon representing the extension&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;/cloudflarePurge/&lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/master/cloudflarePurge/index.ts"&gt;index.ts&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;The code that the task will execute. I haven’t written TypeScript since before v2, so I suspect the error handling could be written more elegantly with async/await or similar.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;/cloudflarePurge/&lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/master/cloudflarePurge/task.json"&gt;task.json&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Defines how the task’s configuration options will be rendered and which scripts will execute at build time. &lt;a href="https://github.com/Microsoft/azure-pipelines-task-lib/blob/master/tasks.schema.json"&gt;Reference docs&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we were just using javascript with no dependencies, that would be it. However, since we’re using typescript, we also have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;/cloudflarePurge/index.js 

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;tsc&lt;/code&gt; output (generated)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;/cloudflarePurge/&lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/master/cloudflarePurge/package.json"&gt;package.json&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;nodejs dependencies&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;/cloudflarePurge/&lt;a href="https://github.com/staff0rd/tfx-cloudflare-purge/blob/master/cloudflarePurge/tsconfig.json"&gt;tsconfig.json&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;typescript config&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once this is in place I packaged the extension with:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx tfx-cli extension create --rev-version --manifest-globs vss-extension.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Publishing
&lt;/h2&gt;

&lt;p&gt;Publishing was a little odd - to publish publicly, your Visual Studio Marketplace account needs to be approved as a publisher by someone at Microsoft. This was accomplished by entering some details about who I was and clicking a Request Approval (or similar) button. About two days later I was approved, and received a “you’re now approved” email from someone at Microsoft.&lt;/p&gt;

&lt;p&gt;Prior to approval, the marketplace will block the upload of any extension that has &lt;code&gt;public:true&lt;/code&gt; set. Post-approval, the uploader pointed out that a public extension needs to include an &lt;code&gt;overview.md&lt;/code&gt;. However, you can point content/details in your &lt;code&gt;vss-extension.json&lt;/code&gt; at, say, &lt;code&gt;README.md&lt;/code&gt; to resolve this issue:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"content": {
    "details": {
        "path": "README.md"
    } 
},
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  That’s it!
&lt;/h2&gt;

&lt;p&gt;Overall it was pretty simple to execute custom code within an Azure Pipeline release and package it into an extension. From here I’d like to look into how such a task can be called from an Azure Pipelines YAML build, and into auto-deploying updates to this task to the marketplace on commit - a colleague of mine has &lt;a href="https://blog.raph.ws/2018/03/build-and-release-pipeline-for-your-own-custom-vsts-tasks/"&gt;a post on this&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>azurepipelines</category>
      <category>cloudflare</category>
      <category>devops</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
