<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gerade Geldenhuys</title>
    <description>The latest articles on DEV Community by Gerade Geldenhuys (@raidzen10).</description>
    <link>https://dev.to/raidzen10</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F12862%2F7b506a74-db66-4b81-a0e9-42a5a1624ecb.jpg</url>
      <title>DEV Community: Gerade Geldenhuys</title>
      <link>https://dev.to/raidzen10</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/raidzen10"/>
    <language>en</language>
    <item>
      <title>Test Your ARM Templates</title>
      <dc:creator>Gerade Geldenhuys</dc:creator>
      <pubDate>Mon, 16 Nov 2020 08:18:38 +0000</pubDate>
      <link>https://dev.to/raidzen10/test-your-arm-templates-4nim</link>
      <guid>https://dev.to/raidzen10/test-your-arm-templates-4nim</guid>
      <description>&lt;p&gt;In my &lt;a href="https://dev.to/raidzen10/azure-resource-manager-arm-template-tips-26c"&gt;previous post&lt;/a&gt;, I talked about best practices for ARM templates. Now I want to share with you a neat little tool that we can use for testing these templates.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the ARM template test toolkit?
&lt;/h3&gt;

&lt;p&gt;The ARM template test toolkit is a tool we can use to test our deployment templates and ensure that we are developing templates that adhere to best practices. If our templates do not follow these recommended practices, the tool gives us warnings with the suggested changes to make.&lt;/p&gt;

&lt;p&gt;This tool comes with a set of default tests. Remember, these tests are recommendations, meaning you can pick and choose which ones you want to run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing the Test Toolkit
&lt;/h3&gt;

&lt;p&gt;I am assuming the toolkit (arm-ttk) is still in preview, which is why it hasn’t been added to the PowerShell Gallery as of today. Don’t stress, though; it takes less than two minutes to get it up and running. First, download &lt;a href="https://aka.ms/arm-ttk-latest"&gt;this zip file&lt;/a&gt;. Once it has been extracted, navigate to the 'arm-ttk' folder in PowerShell.&lt;/p&gt;

&lt;p&gt;We need to manually add the test toolkit PowerShell Module to our current session, which we do by running the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Import-Module .\arm-ttk.psd1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command will add the test toolkit module which we can then go ahead and use to test our deployment templates.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Extract the zip file to the PowerShell modules folder (%ProgramFiles%\WindowsPowerShell\Modules). This way, you don’t have to import the test toolkit module as we are doing above; we can call the test command from anywhere because this path is already part of the ‘PSModulePath’.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Test Template
&lt;/h3&gt;

&lt;p&gt;To test a template, we use the &lt;code&gt;Test-AzTemplate&lt;/code&gt; command and provide it with the path to our template. This command will run all the default test cases against our template. Below is an example of what the results look like: the results in green are tests that passed, and the ones in red failed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lEavaG-W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://geradewebsitestorage.blob.core.windows.net/website/arm-test-results.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lEavaG-W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://geradewebsitestorage.blob.core.windows.net/website/arm-test-results.PNG" alt="Test Results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it. Now we can ensure that our templates follow recommended best practices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous Integration
&lt;/h3&gt;

&lt;p&gt;We could even add a step to our build pipeline on Azure DevOps to ensure our templates remain in a good state. All you need to do is install the extension from the Visual Studio Marketplace and add the task to your build.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
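&lt;p&gt;As a rough sketch, the pipeline step could look something like the YAML below. The task name and input names come from the extension you install, so treat these as placeholders:&lt;/p&gt;

```yaml
# Illustrative only: task name and inputs depend on the marketplace extension
- task: RunARMTTKTests@1
  inputs:
    templatelocation: '$(System.DefaultWorkingDirectory)\templates'
    resultLocation: '$(System.DefaultWorkingDirectory)\results'
```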


&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This is a great tool, especially for beginners starting out with ARM templates. It will guide you in the right direction. When I first started developing ARM templates, I felt overwhelmed by everything and never knew whether my templates were correct until I ran a deployment and worked through all the errors.&lt;/p&gt;

&lt;p&gt;I hope this helps you, and happy deploying - just not on Fridays, please. :-)&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>infraascode</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Azure Resource Manager (ARM) Template Tips</title>
      <dc:creator>Gerade Geldenhuys</dc:creator>
      <pubDate>Tue, 10 Nov 2020 19:04:24 +0000</pubDate>
      <link>https://dev.to/raidzen10/azure-resource-manager-arm-template-tips-26c</link>
      <guid>https://dev.to/raidzen10/azure-resource-manager-arm-template-tips-26c</guid>
      <description>&lt;h3&gt;
  
  
  What are ARM templates?
&lt;/h3&gt;

&lt;p&gt;An Azure Resource Manager template is a declarative JSON file that defines the infrastructure and configuration of your project. In the agile world we live in, where we adopt the process of &lt;a href="https://devops.com/devops-shift-left-avoid-failure/"&gt;“shifting left”&lt;/a&gt; and automate our deployments, we want to bring our infrastructure into the Software Development Lifecycle as well. This is known as Infrastructure as Code: the practice of defining and versioning our infrastructure in code just as we do our application, further reducing the gap between development and operations.&lt;/p&gt;

&lt;p&gt;Now that we have a basic understanding of the purpose of ARM templates, I want to share a couple of tips and best practices for using them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Functions
&lt;/h3&gt;

&lt;p&gt;ARM templates provide you with the ability to create your own functions within a template. These user-defined functions can simplify the process of creating your resources in Azure. For example, say all the resources for a template belong to the finance department; you probably want to prefix these resources with something like 'fin' or 'financeDept'. With user-defined functions, you can do just that. See the sample below:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
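&lt;p&gt;A minimal sketch of such a function (the namespace and parameter names here are my own) could look like this:&lt;/p&gt;

```json
"functions": [
  {
    "namespace": "contoso",
    "members": {
      "prefixName": {
        "parameters": [
          {
            "name": "resourceName",
            "type": "string"
          }
        ],
        "output": {
          "type": "string",
          "value": "[concat('financeDept-', parameters('resourceName'))]"
        }
      }
    }
  }
]
```

&lt;p&gt;You would then name a resource with &lt;code&gt;[contoso.prefixName(parameters('storageName'))]&lt;/code&gt;.&lt;/p&gt;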


&lt;p&gt;The function defined above will prefix 'financeDept-' to the name of a resource. Say we are creating a storage account for the finance department and name it “storage2020”, the resource will be named 'financeDept-storage2020' when the template is deployed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nested Templates
&lt;/h3&gt;

&lt;p&gt;Next up we have nested templates. An ARM template tends to grow in size pretty quickly, and nobody wants to wade through a JSON file that is thousands of lines long. This is where we can split our infrastructure into different templates that are then called from one single trigger template.&lt;/p&gt;

&lt;p&gt;Sticking with the finance department example above, let’s say we need to create storage accounts, VMs, a load balancer, configure the networking, and so on. Doing all of this in a single template can easily bloat it. In this scenario, we can create a template for each individual resource and call them from the main template. Below is an example of nesting a template:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
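&lt;p&gt;A trimmed-down sketch of such a deployment resource (the name and API version are examples) could look like this:&lt;/p&gt;

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2019-10-01",
  "name": "storageDeployment",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[variables('templateLink')]",
      "contentVersion": "1.0.0.0"
    }
  }
}
```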


&lt;p&gt;The above template creates a new storage account. The storage account itself is defined in a separate template. We create a deployment resource in our main template and specify the location of the template we want to deploy, which is set via the 'templateLink' variable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resource groups &amp;amp; subscriptions
&lt;/h3&gt;

&lt;p&gt;You would typically create a template that deploys resources to a single resource group, but what if you need to deploy resources across multiple resource groups or subscriptions? Below are some things to keep in mind when creating such a template:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A single deployment is limited to five resource groups&lt;/li&gt;
&lt;li&gt;If no resource group is specified, the values from the parent template are used&lt;/li&gt;
&lt;li&gt;The account performing the deployment needs permission on the target subscription&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conditional deployments
&lt;/h3&gt;

&lt;p&gt;Sometimes you might have to skip certain deployments, for example when the resources already exist. Use the condition element to specify whether a resource is deployed: true if you want the resource to be deployed, and false if not.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
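&lt;p&gt;A condensed sketch of such a conditional resource (parameter names, API version and SKU are examples) could look like this:&lt;/p&gt;

```json
{
  "condition": "[equals(parameters('newOrExisting'), 'New')]",
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2019-06-01",
  "name": "[parameters('storageAccountName')]",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Standard_LRS"
  },
  "kind": "StorageV2"
}
```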


&lt;p&gt;Above we are adding a condition to the deployment of a storage account using the &lt;strong&gt;newOrExisting&lt;/strong&gt; parameter. If the parameter is set to 'New', the storage account will be deployed; if it is set to anything else, the deployment is skipped.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment modes
&lt;/h3&gt;

&lt;p&gt;There are two deployment modes for ARM templates: the default, &lt;strong&gt;incremental mode&lt;/strong&gt;, which is pretty self-explanatory, and &lt;strong&gt;complete mode&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The traits of incremental deployments are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All resources defined in the template are created in the specified resource group&lt;/li&gt;
&lt;li&gt;Resources whose configuration differs from what is defined in the template are updated&lt;/li&gt;
&lt;li&gt;Resources in the resource group that are not part of the template are left untouched&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The difference between an &lt;strong&gt;incremental deployment&lt;/strong&gt; and a &lt;strong&gt;complete deployment&lt;/strong&gt; hinges mostly on the third trait above. If you deploy in complete mode, your resource group will contain exactly what is in your template, meaning any existing resource in your resource group that is not specified in your template will be deleted.&lt;/p&gt;
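&lt;p&gt;The mode is chosen at deployment time. For example, with Azure PowerShell (the resource group and file names are examples):&lt;/p&gt;

```powershell
# Incremental is the default; -Mode Complete removes resources not in the template
New-AzResourceGroupDeployment -ResourceGroupName "finance-rg" `
    -TemplateFile .\azuredeploy.json `
    -Mode Complete
```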

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Above are a couple of tips that I find very helpful when working with ARM templates, especially when your project grows in complexity and minimizing time and effort becomes most important.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>architecture</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Integrate Your CI/CD With AppCenter</title>
      <dc:creator>Gerade Geldenhuys</dc:creator>
      <pubDate>Tue, 27 Oct 2020 19:48:16 +0000</pubDate>
      <link>https://dev.to/raidzen10/integrate-your-ci-cd-with-appcenter-7ae</link>
      <guid>https://dev.to/raidzen10/integrate-your-ci-cd-with-appcenter-7ae</guid>
      <description>&lt;p&gt;Recently I was tasked with automating the build and deployment process for the in-house mobile app that we develop for our client. We have a dedicated build box with GitLab runner set up to handle our CI needs. We decided to use Microsoft’s AppCenter to handle the building and distribution of our mobile app, but having to manually start builds to kick off the process is not really doing DevOps, is it?&lt;/p&gt;

&lt;p&gt;To overcome this problem, I did a little research and came across the AppCenter CLI. This tool was exactly what we needed to enable communication between our build box and AppCenter to initialize the process of building and deploying our app to the testers or even the Play Store.&lt;/p&gt;

&lt;h3&gt;
  
  
  Starting a build
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/microsoft/appcenter-cli"&gt;AppCenter CLI&lt;/a&gt; is pretty comprehensive, with one drawback: on AppCenter you first have to configure a build for a branch. This means setting the build configuration (Debug or Release), the version of Xamarin to use when building your app, and a couple of other options, like deploying to the Play Store and whether you want to build this branch on every push.&lt;/p&gt;

&lt;p&gt;With that being said, it’s really not that much of an effort to configure an app on AppCenter. Once it’s been configured you can use the &lt;code&gt;appcenter build queue&lt;/code&gt; command to queue a new build.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;appcenter build queue --app &amp;lt;appName&amp;gt; -b &amp;lt;branchName&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That’s it. Through just that one command you can have your app built and deployed to your testers or the Play Store in minutes, all triggered by a push to your repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Distribution
&lt;/h3&gt;

&lt;p&gt;Once our app has been successfully built, we want to distribute it to notify our testers that a new version is available. Not just that, when we merge and build a new app on the Main branch we want to automatically deploy it to the Play Store.&lt;/p&gt;

&lt;p&gt;For this, we can turn to the &lt;code&gt;appcenter distribute releases add-destination&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;appcenter distribute releases add-destination -d "Collaborators" -t "group" -r &amp;lt;releaseId&amp;gt; -a &amp;lt;appName&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In the command above we are requesting AppCenter to notify all the users in the &lt;strong&gt;Collaborators group&lt;/strong&gt; of a new release that was just built for our app by specifying the &lt;strong&gt;ReleaseId&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Our testers will then get an email with a link to install/update the app. &lt;/p&gt;

&lt;h3&gt;
  
  
  Keep me informed
&lt;/h3&gt;

&lt;p&gt;This is all well and neat, but for my team, I didn’t want us to move between two different applications when building our projects. We use GitLab, and I want it to be our source of truth for everything CI related.&lt;/p&gt;

&lt;p&gt;Once a build has been queued, I don’t want us to go to AppCenter to monitor the status of our builds. I want to stay in GitLab and still know these things without having to open a new tab.&lt;/p&gt;

&lt;p&gt;To accomplish this, we can use the &lt;code&gt;appcenter build branches show -a &amp;lt;appName&amp;gt; -b &amp;lt;branchName&amp;gt;&lt;/code&gt; command to get the latest build status of a branch.&lt;/p&gt;

&lt;p&gt;With this command, you can periodically ask AppCenter for the status of a build and output the result on GitLab whether the build is still in progress, succeeded or failed.&lt;/p&gt;

&lt;p&gt;I did all this in PowerShell, below is a sample of said script that you can use in your CI setup, whether it be GitLab, GitHub, Azure DevOps, etc.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
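&lt;p&gt;As a rough sketch (the app and branch names are placeholders, and the exact fields on the CLI’s JSON output may differ from your version), the polling loop could look like this:&lt;/p&gt;

```powershell
$app    = "MyOrg/MyApp"
$branch = "main"

do {
    Start-Sleep -Seconds 30
    # Ask AppCenter for the latest build on the branch
    $build = appcenter build branches show -a $app -b $branch --output json | ConvertFrom-Json
    Write-Host "Build $($build.id) is $($build.status)"
} while ($build.status -ne "completed")

if ($build.result -ne "succeeded") {
    Write-Error "AppCenter build $($build.id) failed"
    exit 1
}
```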


&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;I hope this script helps your team as much as it did mine. Just the thought of installing the Android SDK and configuring it to build our app on our build machine was daunting in itself, and that’s without taking into account the effort it would take to notify our testers or deploy to the Play Store.&lt;/p&gt;

</description>
      <category>appcenter</category>
      <category>powershell</category>
      <category>devops</category>
    </item>
    <item>
      <title>Fun With Azure Functions And GitHub Webhooks</title>
      <dc:creator>Gerade Geldenhuys</dc:creator>
      <pubDate>Wed, 29 Jul 2020 13:44:17 +0000</pubDate>
      <link>https://dev.to/raidzen10/fun-with-azure-functions-and-github-webhooks-1i0i</link>
      <guid>https://dev.to/raidzen10/fun-with-azure-functions-and-github-webhooks-1i0i</guid>
      <description>&lt;p&gt;Part of my blogging process since I started was to create card images for each post that I could use for my meta tags. For all the benefits it brings like personalisation when I cross-post my articles on Dev.to I should mention it was not fun having to create these images using Gimp every time I wrote up a blog post. To solve this, I turned to my motto to Automate everything.&lt;/p&gt;

&lt;p&gt;I wanted to automate the process of generating these images whenever I added a new blog post. Seeing as I host all my images on Azure Blob Storage, having the images generated by an Azure Function was an obvious design decision. Secondly, I wanted to trigger this process automatically as soon as I pushed the new blog post to my repository on &lt;a href="https://github.com/GeradeDev"&gt;GitHub&lt;/a&gt;; for this, a webhook calling my HTTP-triggered Azure Function would be enough.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yDk9Ghke--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://geradewebsitestorage.blob.core.windows.net/website/blog-post-sample-card.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yDk9Ghke--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://geradewebsitestorage.blob.core.windows.net/website/blog-post-sample-card.PNG" alt="Sample"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure Function
&lt;/h3&gt;

&lt;p&gt;For the Azure Function side of this process, what I wanted to do was generate an image that I could use as the card when posting my article link to twitter, like the image above. For this, I already had my placeholder image without the description of said article.&lt;/p&gt;

&lt;p&gt;All that was left for me to do was load the placeholder image and get the description of the blog post from my repository. Below is how I did this:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
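&lt;p&gt;A condensed sketch of that lookup (the repository name, branch and error handling are assumed or omitted):&lt;/p&gt;

```csharp
// Fetch the latest commit, including the list of changed files (repo name assumed)
var http = new HttpClient();
http.DefaultRequestHeaders.UserAgent.ParseAdd("card-generator");
var json = await http.GetStringAsync("https://api.github.com/repos/GeradeDev/website/commits/master");

using var doc = JsonDocument.Parse(json);
foreach (var file in doc.RootElement.GetProperty("files").EnumerateArray())
{
    var name = file.GetProperty("filename").GetString();
    if (file.GetProperty("status").GetString() != "added")
        continue;
    if (name.StartsWith("contents/static/api/post/"))
    {
        // Load the post's JSON file here and pull out its id and description
    }
}
```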


&lt;p&gt;In the code above we are making use of the &lt;a href="https://docs.github.com/en/rest"&gt;GitHub API&lt;/a&gt;. I use the API to get the latest commit for the repository and iterate through the new files that were added in it. I then check whether any new file was added to the &lt;code&gt;contents/static/api/post/&lt;/code&gt; directory, which holds all my articles. My website is a static file application, so next I load the JSON file with the blog post details and extract the post id and description I will use on the image.&lt;/p&gt;

&lt;p&gt;Now that I have the description to use on the image, I can go ahead and generate the image.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
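&lt;p&gt;A simplified sketch of the drawing step (file names, font and coordinates are examples, and &lt;code&gt;description&lt;/code&gt; is assumed to hold the post description):&lt;/p&gt;

```csharp
using System.Drawing;

// Load the placeholder card and draw the post description onto it
var card = Image.FromFile("placeholder.png");
using (var g = Graphics.FromImage(card))
{
    var font   = new Font("Segoe UI", 24);
    var layout = new RectangleF(60, 140, 680, 220); // start position and text area
    g.DrawString(description, font, Brushes.White, layout);
}
card.Save("card.png");
```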


&lt;p&gt;The code above loads the placeholder image, which is just the example image above without the text and the curly brackets. I then get the start position on the image and use the &lt;code&gt;DrawString&lt;/code&gt; method in &lt;code&gt;System.Drawing&lt;/code&gt; to plot the text on the image.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
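&lt;p&gt;The upload itself (sketched with the Azure.Storage.Blobs SDK; the container name and the &lt;code&gt;card&lt;/code&gt;, &lt;code&gt;connectionString&lt;/code&gt; and &lt;code&gt;postId&lt;/code&gt; variables are assumed) boils down to:&lt;/p&gt;

```csharp
using Azure.Storage.Blobs;

// Save the generated card to a stream and upload it named after the post id
var container = new BlobContainerClient(connectionString, "website");
using var stream = new MemoryStream();
card.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
stream.Position = 0;
await container.UploadBlobAsync($"{postId}.png", stream);
```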


&lt;p&gt;Once I have the image generated, I upload it to Blob storage, saving it with its name set to the post id we got earlier from the GitHub API. This way, setting the image URL in the blog post’s meta tag to &lt;a href="https://storage-account-url/website/post-id.png"&gt;https://storage-account-url/website/post-id.png&lt;/a&gt; will render the image for that post.&lt;/p&gt;

&lt;p&gt;That concludes the Azure Function, but we still need to invoke the function. For that, we’re going to use a &lt;a href="https://docs.github.com/en/developers/webhooks-and-events/webhooks"&gt;GitHub webhook&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Webhook
&lt;/h3&gt;

&lt;p&gt;Webhooks are automated messages sent from apps when something happens. In the case of GitHub, that would be when pushing new code, when a repo is forked and so on. For my scenario, we want to make use of the Push event. This would ensure every time I push to my repo, my Azure Function would be called and if a new blog post was added a new image would be created for it.&lt;/p&gt;

&lt;p&gt;Creating a webhook takes less than a minute. Navigate to the settings of your GitHub repo and select the Webhooks menu item. On the webhooks page, click the Add button and complete the following fields:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Payload URL&lt;/strong&gt;: This is where the payload of the GitHub event is sent. In my case, that is the URL of my Azure Function, which receives the payload of the Push event.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This payload is of no use to me, because it does not include the inner details of the blog post that was added. Instead, I use the GitHub API to get everything I need.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Content type&lt;/strong&gt;: The content type of the payload. I selected application/json.&lt;/p&gt;

&lt;p&gt;The last thing you need to do is select the event you want this Webhook to be triggered on. In my case, it was the Push event.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;There you have it. I have automated the process of generating the meta tag images for each article. Now all I do is write my posts and push to GitHub. I host my website on &lt;a href="https://www.netlify.com/"&gt;Netlify&lt;/a&gt;, which also uses a webhook to publish my website after I push to GitHub, so by the time my article is published to my website the image has already been created and saved to my storage account on Azure.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>github</category>
      <category>webhooks</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Service Discovery Using Consul And ASP.NET Core</title>
      <dc:creator>Gerade Geldenhuys</dc:creator>
      <pubDate>Mon, 29 Jun 2020 06:59:25 +0000</pubDate>
      <link>https://dev.to/raidzen10/service-discovery-using-consul-and-asp-net-core-2c4b</link>
      <guid>https://dev.to/raidzen10/service-discovery-using-consul-and-asp-net-core-2c4b</guid>
      <description>&lt;p&gt;Back when I embarked on my Domain Driven Design/Microservice/Distributed systems journey I came across this neat little library to make REST calls between my services for &lt;a href="https://github.com/Med-Park" rel="noopener noreferrer"&gt;MedPark&lt;/a&gt;, a sample distributed system project I am using to implement everything new I learn that is DDD related. I wrote a post on this library back when I discovered it &lt;a href="https://geradegeldenhuys.net/read/trivializing-microservice-communication/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While &lt;a href="https://github.com/canton7/RestEase" rel="noopener noreferrer"&gt;RestEase&lt;/a&gt; works great at what it does, once you start doing distributed systems in every sense of the word it becomes a little trickier. When running multiple instances of an application, hardcoding a particular service’s endpoint in your configuration defeats the purpose of scaling, doesn’t it? For this, we can implement some kind of service discovery.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Discovery
&lt;/h3&gt;

&lt;p&gt;Service discovery automatically locates services on a network, removing the need for a long configuration and setup process. It works by having devices and services communicate through a common protocol on the network, allowing them to find and connect to each other without any manual intervention (e.g. Kubernetes service discovery, AWS service discovery).&lt;/p&gt;

&lt;p&gt;There are two types of service discovery: server-side and client-side. Server-side service discovery lets client applications find services through a router or a load balancer. Client-side service discovery lets client applications find services by querying a service registry, which holds the instances and endpoints of every service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario
&lt;/h3&gt;

&lt;p&gt;Let’s say we want to query the Catalog service to find out if a particular product is in stock. We would send this request to our API gateway, which relays the query to the Catalog service. When relaying the request, we could look up the Catalog service in some kind of registry and send our request to any healthy, available instance of the Catalog service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Consul
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.consul.io/" rel="noopener noreferrer"&gt;Consul&lt;/a&gt;, by HashiCorp, is a centralized service registry that enables services to discover each other by storing location information (like IP addresses) in a single registry. We will be using this service for looking up our services in a registry when communicating between services.&lt;/p&gt;

&lt;p&gt;For local development and testing, we are using the Consul Docker image.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker run -d -p 8500:8500 consul


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;Above are two extension methods that I am using to register my service with the Consul registry. The &lt;code&gt;AddConsul&lt;/code&gt; method adds the Consul client and its options to the service collection, along with a custom HTTP client that will be used to make our requests between services. The &lt;code&gt;UseConsul&lt;/code&gt; method registers our application with the Consul service registry. This requires four parameters:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Name&lt;/td&gt;
&lt;td&gt;The name of the service (i.e. Catalog-service)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ID&lt;/td&gt;
&lt;td&gt;The ID of the service. Usually, the name of the service with a unique Id (Guid) appended to it.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Address&lt;/td&gt;
&lt;td&gt;The address where the service will be running&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Port&lt;/td&gt;
&lt;td&gt;The port the service will be running on&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
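&lt;p&gt;Under the hood, the registration boils down to something like this (sketched with the Consul .NET client; the values are examples matching the table above, and &lt;code&gt;consulClient&lt;/code&gt; is assumed to be an injected &lt;code&gt;ConsulClient&lt;/code&gt;):&lt;/p&gt;

```csharp
// Register this instance with the Consul agent (values are examples)
var registration = new AgentServiceRegistration
{
    Name    = "catalog-service",
    ID      = $"catalog-service-{Guid.NewGuid()}",
    Address = "localhost",
    Port    = 5005
};
await consulClient.Agent.ServiceRegister(registration);
```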

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgeradewebsitestorage.blob.core.windows.net%2Fwebsite%2Fconsul-services.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgeradewebsitestorage.blob.core.windows.net%2Fwebsite%2Fconsul-services.PNG" alt="Consul Service Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Sending Requests
&lt;/h3&gt;

&lt;p&gt;Once we have our services registered on Consul, all we need to request data between services are the service names. As displayed in the screenshot above, we have 7 instances of the Basket service registered. When making a request from the API gateway to the Basket service, the API gateway does not have to know which instance to call; all we want is for Consul to point us to an instance that is healthy. Let’s look at how we can accomplish this.&lt;/p&gt;

&lt;h3&gt;
  
  
  Consul HTTP Client
&lt;/h3&gt;

&lt;p&gt;I implemented an HTTP client that looks up the service we want to call in the Consul service registry and sends our request along. See an overview of the client implementation below:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
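&lt;p&gt;In condensed form (sketched with the Consul .NET client; a shared &lt;code&gt;consulClient&lt;/code&gt; and &lt;code&gt;httpClient&lt;/code&gt; and the request path are assumed), the lookup looks something like this:&lt;/p&gt;

```csharp
// Ask Consul for all registered instances of the named service
var result = await consulClient.Catalog.Service("basket-service");
var instances = result.Response;

// Pick a random instance and build the request URI from its address and port
var instance = instances[new Random().Next(instances.Length)];
var uri = $"http://{instance.ServiceAddress}:{instance.ServicePort}/api/basket";
var response = await httpClient.GetAsync(uri);
```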


&lt;p&gt;To better explain the code in the gist above: say we want to get a customer’s basket from the Basket service. All we need to do is pass the name of the service, &lt;code&gt;basket-service&lt;/code&gt;, along to this Consul HTTP client. In the code above, we ask Consul for all the service instances registered under that name, then select a random instance from the registry to send the request to. This is how we could call the Basket service from our controller:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;The gists on this post have been chopped and screwed in a way to accommodate this article because some of it was not within the scope of giving you a high-level introduction to implementing Service discovery in your microservices-based application. You can have a look at the &lt;a href="https://github.com/Med-Park" rel="noopener noreferrer"&gt;MedPark&lt;/a&gt; repository for the full implementation, which also uses HealthChecks for monitoring the healthiness of a service and then deregistering the service if it is unable to handle requests.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;In closing, implementing service discovery in your distributed system’s architecture reduces the technical complexity of discovering connected services and, most importantly (for me anyway), automates the process.&lt;/p&gt;

&lt;p&gt;Thank you for reading.&lt;/p&gt;

</description>
      <category>dotnet</category>
    </item>
    <item>
      <title>Reduce Your Docker Image Build Time</title>
      <dc:creator>Gerade Geldenhuys</dc:creator>
      <pubDate>Mon, 22 Jun 2020 07:12:27 +0000</pubDate>
      <link>https://dev.to/raidzen10/reduce-your-docker-image-build-time-412k</link>
      <guid>https://dev.to/raidzen10/reduce-your-docker-image-build-time-412k</guid>
      <description>&lt;p&gt;When building Docker images, it would be awesome if we could mount/point to the cached feed of NuGet packages we have on our developer machines, but since that is not possible, Docker goes and downloads all packages for a project from NuGet every time we run Docker build. That’s if we don’t make use of caching our packages every time we update our dependencies.&lt;/p&gt;

&lt;p&gt;To cache our dependencies when building our Docker images, update your Dockerfile to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY *.csproj .
RUN dotnet restore

# Copy everything else and build
COPY . .
RUN dotnet publish -c Release -o out
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Applying the small change above ensures that our package cache is generated the first time we build our image; thereafter, the cached layer is reused in subsequent builds. Unless we change the project’s dependencies, the cache remains valid and speeds up our build. Using this approach, my build time went from around 1 minute to roughly 5 seconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other useful tips
&lt;/h3&gt;

&lt;p&gt;Some other tips at your disposal to reduce your image build times are the popular techniques of using a Docker ignore file, to exclude everything you don’t want in your image while also reducing its size, and using multi-stage builds, which I’m sure you are already doing.&lt;/p&gt;
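&lt;p&gt;A typical &lt;code&gt;.dockerignore&lt;/code&gt; for a .NET project (the entries are illustrative) might contain:&lt;/p&gt;

```
# Build output and local artifacts we never want in the image
bin/
obj/
.git/
.vs/
*.md
Dockerfile
```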

&lt;p&gt;I hope this neat little trick that packs a punch comes in handy for you as it did for me.&lt;/p&gt;

&lt;p&gt;Thanks for reading.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>docker</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Distributed Caching in ASP.Net Core</title>
      <dc:creator>Gerade Geldenhuys</dc:creator>
      <pubDate>Wed, 04 Mar 2020 15:02:23 +0000</pubDate>
      <link>https://dev.to/raidzen10/distributed-caching-in-asp-net-core-4fk4</link>
      <guid>https://dev.to/raidzen10/distributed-caching-in-asp-net-core-4fk4</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post was originally posted on my website at &lt;a href="//GeradeGeldenhuys.net/read/distributed-caching-aspnet-core"&gt;GeradeGeldenhuys.net&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Caching
&lt;/h3&gt;

&lt;p&gt;Caching is one of those things that, in most cases, is probably inevitable in any project, especially if your project is a web application. It is the process of storing frequently accessed information in a cache: a temporary storage area, usually in memory. This sounds pretty trivial if your application runs on a single server, but in the cloud-first, auto-scale landscape we are living in, suddenly it’s not as simple. For this, we look to a related solution: what we call distributed caching.&lt;/p&gt;

&lt;p&gt;The cache is structured around keys and values – there’s a cached entry for each key. When you want to load something from the database, you first check whether the cache already has an entry for that key (based on the ID of your database record, for example). If the key exists in the cache, you skip the database query.&lt;/p&gt;
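
&lt;p&gt;That lookup is the classic cache-aside pattern. As a rough sketch in C# (the &lt;code&gt;Cart&lt;/code&gt;, &lt;code&gt;_cache&lt;/code&gt; and &lt;code&gt;_repository&lt;/code&gt; names here are illustrative, not taken from the code in this post):&lt;/p&gt;

```csharp
// Cache-aside: check the cache first, fall back to the database on a miss.
public async Task<Cart> GetCartAsync(string userId)
{
    var key = $"cart:{userId}";

    var cached = await _cache.GetStringAsync(key);
    if (cached != null)
        return JsonSerializer.Deserialize<Cart>(cached);   // cache hit: no database query

    var cart = await _repository.GetCartFromDbAsync(userId);   // cache miss: query the database
    await _cache.SetStringAsync(key, JsonSerializer.Serialize(cart));   // populate for next time
    return cart;
}
```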

&lt;h3&gt;
  
  
  What is a Distributed Cache?
&lt;/h3&gt;

&lt;p&gt;A distributed cache is, as its name suggests, a cache with the added benefit of being distributed across multiple servers. The advantage is being comfortable in knowing that your data is coherent – consistent across all nodes of your application. It also ensures that an app restart, or having to restart your app server, will not result in the loss of your cached data. In the cloud-crazed world we live in today, this is a no-brainer for any application looking to implement a reliable caching strategy.&lt;/p&gt;

&lt;p&gt;The alternative is keeping all your data on a single server with tons of memory that is liable to die on you at a moment's notice and throw your smooth-running application into disarray. Your choice.&lt;/p&gt;

&lt;p&gt;There are many different ways to implement this in our ASP.NET microservices (Memcached, Redis, Cassandra, ElastiCache, etc.). For this post, I will be using Redis. Regardless of which implementation you choose, the app interacts with the cache through the &lt;code&gt;IDistributedCache&lt;/code&gt; interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Redis
&lt;/h3&gt;

&lt;p&gt;With the rise of Docker, we no longer have to hand-install the third-party applications our own applications depend on. In the past, we would have to download and install Redis and go through the entire process of setting it up. Today, it is as easy as pulling a Docker image and running it. Yes, two simple commands, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;C:\&amp;gt; docker pull redis&lt;/code&gt;&lt;br&gt;
&lt;code&gt;C:\&amp;gt; docker run -p 6379:6379 redis&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once we have Redis up and running, we want to be able to interact with it. &lt;a href="https://redisdesktop.com/"&gt;Redis Desktop Manager&lt;/a&gt; is one way of doing so, with the limitation that it sits behind a paywall. If that is not an option for you, try out &lt;a href="http://joeferner.github.io/redis-commander/"&gt;Redis Commander&lt;/a&gt;, a free-to-use Redis management tool written in Node. You can also run Redis Commander in a Docker container.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;C:\&amp;gt; docker pull rediscommander/redis-commander&lt;/code&gt;&lt;br&gt;
&lt;code&gt;C:\&amp;gt; docker run -p 8081:8081 -e REDIS_HOSTS=local:host.docker.internal:6379 rediscommander/redis-commander:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;REDIS_HOSTS&lt;/code&gt; environment variable tells Redis Commander where to find your Redis instance. Once it is up and running, go ahead and navigate to it in your browser at port &lt;code&gt;8081&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Use Case
&lt;/h3&gt;

&lt;p&gt;In MedPark 360, an application I am developing, we give patients the ability to order medication online and have it delivered to their address. This requires a product catalogue and everything that goes with it. One such feature is a cart – your everyday, run-of-the-mill cart you can add products to before checking out. Nothing special about it. Currently, when users request their cart, we query the database to retrieve it, meaning every single request incurs a database hit. This is not optimal.&lt;/p&gt;

&lt;p&gt;So, what we can do here is implement caching to reduce the calls to our database and improve the performance and responsiveness of our application.&lt;/p&gt;
&lt;h3&gt;
  
  
  Implementation
&lt;/h3&gt;

&lt;p&gt;We first need to add Redis to our application, then set up a service to handle saving and retrieving our cached data from Redis. Once we have that service, we can create a filter, in the form of an attribute, to handle the response caching. I created the following extension method to add Redis to the Basket service.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
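
&lt;p&gt;A minimal sketch of such an extension method, assuming the &lt;code&gt;Microsoft.Extensions.Caching.StackExchangeRedis&lt;/code&gt; package and an illustrative &lt;code&gt;RedisSettings&lt;/code&gt; options type (the section name and property names are assumptions, not the exact MedPark 360 code):&lt;/p&gt;

```csharp
public class RedisSettings
{
    public string ConnectionString { get; set; }
    public bool Enabled { get; set; }
}

public static class RedisExtensions
{
    public static IServiceCollection AddRedis(this IServiceCollection services,
        IConfiguration configuration)
    {
        // Bind a "redis" config section, e.g. { "connectionString": "localhost:6379", "enabled": true }
        var settings = configuration.GetSection("redis").Get<RedisSettings>();
        services.AddSingleton(settings);

        // Registers an IDistributedCache backed by Redis.
        services.AddStackExchangeRedisCache(options =>
            options.Configuration = settings.ConnectionString);

        services.AddScoped<IResponseCacheService, ResponseCacheService>();
        return services;
    }
}
```

&lt;p&gt;In &lt;code&gt;ConfigureServices&lt;/code&gt; this then becomes a single &lt;code&gt;services.AddRedis(Configuration)&lt;/code&gt; call.&lt;/p&gt;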


&lt;p&gt;Once that is set up in our service, we want to implement a service that will be responsible for interacting with Redis. This is the &lt;code&gt;IResponseCacheService&lt;/code&gt; I added to DI when adding Redis above. Below is the implementation:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
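
&lt;p&gt;A sketch of such a service built on &lt;code&gt;IDistributedCache&lt;/code&gt; (the method names here are illustrative assumptions):&lt;/p&gt;

```csharp
public interface IResponseCacheService
{
    Task CacheResponseAsync(string cacheKey, object response, TimeSpan timeToLive);
    Task<string> GetCachedResponseAsync(string cacheKey);
}

public class ResponseCacheService : IResponseCacheService
{
    private readonly IDistributedCache _cache;

    public ResponseCacheService(IDistributedCache cache) => _cache = cache;

    // Cache a response only if it is not null, with an absolute expiry (the TTL).
    public async Task CacheResponseAsync(string cacheKey, object response, TimeSpan timeToLive)
    {
        if (response == null) return;

        var serialized = JsonSerializer.Serialize(response);
        await _cache.SetStringAsync(cacheKey, serialized, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = timeToLive
        });
    }

    // Search the cache for a value matching the key; null means a cache miss.
    public async Task<string> GetCachedResponseAsync(string cacheKey)
    {
        var cached = await _cache.GetStringAsync(cacheKey);
        return string.IsNullOrEmpty(cached) ? null : cached;
    }
}
```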


&lt;p&gt;This service is pretty straightforward. It has one method to search our cache for a value that matches the key we pass in, and another that caches a response if it is not null.&lt;/p&gt;

&lt;p&gt;Now that we have our service to handle the interaction with Redis, we need to invoke it. As a refresher: when a user requests their cart, we want to retrieve it from the cache; if it doesn’t exist there, we are more than happy to hit up our database to get it. But once we have it, we want to store it in the cache so that subsequent requests can get it from the cache and avoid asking the database for it. This is where the filter comes into play.&lt;/p&gt;

&lt;p&gt;The general idea of a filter is that we want some custom code to run before and/or after specific stages in our request pipeline. Below we have an attribute we can apply to the endpoints on our controllers to take advantage of caching.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
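
&lt;p&gt;A hedged sketch of what such an attribute can look like as an &lt;code&gt;IAsyncActionFilter&lt;/code&gt; (the type names and key-generation scheme are illustrative, not the exact code from this post):&lt;/p&gt;

```csharp
public class CachedAttribute : Attribute, IAsyncActionFilter
{
    private readonly int _timeToLiveSeconds;

    public CachedAttribute(int timeToLiveSeconds) => _timeToLiveSeconds = timeToLiveSeconds;

    public async Task OnActionExecutionAsync(ActionExecutingContext context,
        ActionExecutionDelegate next)
    {
        // Has caching been enabled for this service? If not, just continue the pipeline.
        var settings = context.HttpContext.RequestServices.GetRequiredService<RedisSettings>();
        if (!settings.Enabled) { await next(); return; }

        var cacheService =
            context.HttpContext.RequestServices.GetRequiredService<IResponseCacheService>();
        var cacheKey = GenerateCacheKey(context.HttpContext.Request);

        var cachedResponse = await cacheService.GetCachedResponseAsync(cacheKey);
        if (!string.IsNullOrEmpty(cachedResponse))
        {
            // Cache hit: short-circuit the pipeline and return the cached body.
            context.Result = new ContentResult
            {
                Content = cachedResponse,
                ContentType = "application/json",
                StatusCode = 200
            };
            return;
        }

        // Cache miss: run the controller action, then store its result.
        var executedContext = await next();
        if (executedContext.Result is OkObjectResult ok)
            await cacheService.CacheResponseAsync(cacheKey, ok.Value,
                TimeSpan.FromSeconds(_timeToLiveSeconds));
    }

    private static string GenerateCacheKey(HttpRequest request)
    {
        // Build a unique key from the path plus the sorted query string.
        var keyBuilder = new StringBuilder(request.Path);
        foreach (var (key, value) in request.Query.OrderBy(q => q.Key))
            keyBuilder.Append($"|{key}-{value}");
        return keyBuilder.ToString();
    }
}
```

&lt;p&gt;On a controller endpoint this would then be applied as, for example, &lt;code&gt;[Cached(600)]&lt;/code&gt; to cache responses for ten minutes.&lt;/p&gt;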


&lt;p&gt;The above code is pretty straightforward. We initialize the attribute with a TTL (Time To Live) for the cached value, which indicates how long we want the data to be cached before it goes stale and expires. Next, we get the Redis settings for the particular service, in this case the Basket service, and check whether caching has been enabled for it. If not, we return and continue down the request pipeline. If caching is enabled, we fire up the service we created for interacting with Redis and generate a unique key based on the request. Once we have the key, we know what to look for in the cache, so we use it to fetch the data from Redis. If the data is there and has not expired, we can be certain it is still correct and return it to the user.&lt;/p&gt;

&lt;p&gt;If the information does not exist in Redis, we continue down the request pipeline to the controller. The controller method then requests the data from the database and returns it. At this point, we return to our filter (the attribute) and save the response to Redis using the key we generated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, caching is a very simple way to improve the performance of your services. For the scenario above, without caching, the request gets handled in about 40ms on my machine. Once I enabled caching that number dropped down to around 10ms.&lt;/p&gt;

&lt;p&gt;The source material for this post can be found on &lt;a href="https://github.com/GeradeDev/med-park-360"&gt;GitHub&lt;/a&gt;. This is an application I am actively developing, so if the source code for this post is not there, please bear with me as it probably has not been merged yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Further Reading
&lt;/h3&gt;

&lt;p&gt;If you would like to read up on more Caching strategies, I would suggest &lt;a href="https://nickcraver.com/blog/2019/08/06/stack-overflow-how-we-do-app-caching/"&gt;this&lt;/a&gt; blog post by &lt;a href="https://twitter.com/Nick_Craver"&gt;Nick Craver&lt;/a&gt;, the Architecture Lead at &lt;a href="https://stackoverflow.com/"&gt;Stack Overflow&lt;/a&gt; and the rest of Stack Exchange on how they deal with app caching on such a huge network of services.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>aspnetcore</category>
      <category>redis</category>
      <category>caching</category>
    </item>
    <item>
      <title>Package versioning on Azure DevOps</title>
      <dc:creator>Gerade Geldenhuys</dc:creator>
      <pubDate>Tue, 18 Feb 2020 15:45:48 +0000</pubDate>
      <link>https://dev.to/raidzen10/package-versioning-on-azure-devops-1cgb</link>
      <guid>https://dev.to/raidzen10/package-versioning-on-azure-devops-1cgb</guid>
      <description>&lt;p&gt;This blog was originally posted on &lt;a href="https://geradegeldenhuys.net/"&gt;my website&lt;/a&gt; and I thought I might share it here.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Scenario&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The scenario for this post is that I have a library hosted on NuGet, and whenever I deploy a new package I want to increment the version of the package. But I don’t want to update the assembly information in my project or manually update the version in my build pipeline every time I deploy. I want to automate this and ensure that every time I deploy a new version of my library to NuGet, the version gets incremented by one – all inside my pipeline on &lt;a href="https://dev.azure.com/"&gt;Azure DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For this, I will be creating a new build task that I can plug into my pipeline on Azure DevOps.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Creating Task&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In a &lt;a href="https://geradegeldenhuys.net/read/create-custom-vsts-build-task---part-1"&gt;previous post&lt;/a&gt; I showed you how to create your own custom build tasks. I will be doing the same for this task, except that this time around I’ll be using PowerShell, as opposed to Node. There is no particular reason for using PowerShell other than the fact that I am currently learning it. Using PowerShell does come with one downside, though: unlike Node tasks, it is not cross-platform.&lt;/p&gt;


&lt;blockquote&gt;The VstsTask SDK does not support PowerShell Core, and does not seem likely to in the near future. You are advised to use Node if you want to create cross-platform tasks.&lt;/blockquote&gt;

&lt;p&gt;So, once we have completed Part 1 of creating our own task, we can implement the logic. What we want is to get the current version of the library on NuGet.org and increment that value before we pack and publish our library.&lt;/p&gt;

&lt;p&gt;Pretty straightforward right?&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Implementation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For this version of the task, we are going to require three inputs from the user. The first is the name of the package we will be working with. Second, we want to increment the Patch version number, but we do not want to increment it infinitely; so we require the user to set an upper limit the Patch version can be incremented to before it is reset to 0 and the Minor version is incremented. We will come back to the third required input later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RRKsvgqL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://geradewebsitestorage.blob.core.windows.net/website/task-setting-version-updater.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RRKsvgqL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://geradewebsitestorage.blob.core.windows.net/website/task-setting-version-updater.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we have our setup complete we can go ahead and implement the core functionality for this task.&lt;/p&gt;

&lt;p&gt;We will use the &lt;code&gt;Find-Package&lt;/code&gt; command to find the package on NuGet.org, or on your own locally hosted feed. Before we do that, we need to register the NuGet package source: the &lt;code&gt;Register-PackageSource&lt;/code&gt; command must be called before &lt;code&gt;Find-Package&lt;/code&gt;, because the latter searches the registered package sources for your package. Once we have our package and its version, we can go ahead and update it.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
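
&lt;p&gt;The logic amounts to something like the following PowerShell sketch. The input names and the NuGet source URL are illustrative; &lt;code&gt;Get-VstsInput&lt;/code&gt; comes from the VstsTaskSdk module used by custom build tasks.&lt;/p&gt;

```powershell
# Read the task inputs defined in task.json (names here are illustrative)
$packageName = Get-VstsInput -Name "packageName" -Require
$patchLimit  = [int](Get-VstsInput -Name "patchLimit" -Require)

# Find-Package searches registered sources, so register NuGet.org first
Register-PackageSource -Name NuGet -Location "https://www.nuget.org/api/v2" -ProviderName NuGet -Force

# Get the current published version of the library
$package = Find-Package -Name $packageName -Source NuGet
$version = [version]$package.Version

# Increment Patch; roll over into Minor when the upper limit is reached
if ($version.Build -ge $patchLimit) {
    $newVersion = "{0}.{1}.0" -f $version.Major, ($version.Minor + 1)
} else {
    $newVersion = "{0}.{1}.{2}" -f $version.Major, $version.Minor, ($version.Build + 1)
}

# Hand the new version to the rest of the pipeline via a logging command
Write-Host "##vso[task.setvariable variable=packageVersion]$newVersion"
```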


&lt;p&gt;Once we have our new version set, we need to pass it on to the next task in our pipeline, the NuGet Pack command.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;NuGet Pack&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When packing a NuGet package, the version of the package is required, and NuGet does not allow you to publish the same version of a package more than once. This brings us to the third required input for our task. I already have a variable in the pipeline for the version of the package, so I figured why not use the same variable. For this to work, I update that variable from my custom task using the &lt;code&gt;task.setvariable&lt;/code&gt; logging command, which sets &lt;code&gt;packageVersion&lt;/code&gt; to the newly calculated version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1klyMIrh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://geradewebsitestorage.blob.core.windows.net/website/pipeline-screenshot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1klyMIrh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://geradewebsitestorage.blob.core.windows.net/website/pipeline-screenshot.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above is a screenshot of the pipeline for my library. Before the packing stage, I generate the next version number based on the current version of the package on NuGet and update the &lt;code&gt;packageVersion&lt;/code&gt; value, which is then used as the version during the packing stage that follows.&lt;/p&gt;

&lt;p&gt;And there we have it, every time we want to publish our library to NuGet the correct version number will be used.&lt;/p&gt;

&lt;p&gt;The source code for this build task can be found at &lt;a href="https://github.com/GeradeDev/utils"&gt;this repository&lt;/a&gt;. Stay tuned to this repository as I will be adding more tasks in the future.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>powershell</category>
      <category>tfxcli</category>
    </item>
  </channel>
</rss>
