<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: axurcio</title>
    <description>The latest articles on DEV Community by axurcio (@lacjamm).</description>
    <link>https://dev.to/lacjamm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F612676%2Fef0e282a-f677-4bbb-8ce5-b3779135b232.jpg</url>
      <title>DEV Community: axurcio</title>
      <link>https://dev.to/lacjamm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lacjamm"/>
    <language>en</language>
    <item>
      <title>Custom code components - go beyond power pages limitation</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Tue, 19 Jul 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/custom-code-components-go-beyond-power-pages-limitation-4g23</link>
      <guid>https://dev.to/lacjamm/custom-code-components-go-beyond-power-pages-limitation-4g23</guid>
      <description>&lt;h2&gt;
  
  
  Life without code components
&lt;/h2&gt;

&lt;p&gt;Power Platform has been evolving rapidly as Microsoft continues to invest in the area. This article discusses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Power Apps Portal, recently renamed to Power Pages.&lt;/li&gt;
&lt;li&gt;Power Apps component framework (PCF), also known as custom code components.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More and more enterprise and government customers seek to modernise their legacy applications and bring everything together into one platform. Websites and portals are no exception. Microsoft acquired Adxstudio Portals in 2015 and rebranded it as the portal capability tied to Dynamics 365.&lt;/p&gt;

&lt;p&gt;jQuery and JavaScript have been the main tools for portal customisation. Microsoft also introduced Liquid templates for server-side processing, to move away from developing custom server-side (MVC) code. However, as with any low-code platform, client requirements sometimes demand more than the platform is capable of.&lt;/p&gt;

&lt;h2&gt;
  
  
  PCF / Custom code components
&lt;/h2&gt;

&lt;p&gt;Integration with other systems can quickly become a challenge, and Microsoft acknowledged the need for a better tool for developing custom code, especially one that works across the different Power Platform interfaces, not just the portal.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iTfyalis--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/1879280/179398772-563a1f86-9d8d-4049-888f-042785e40f68.png" alt="image" width="880" height="489"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://docs.microsoft.com/en-us/power-apps/developer/component-framework/custom-controls-overview"&gt;Power Apps Component Framework&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Developers have been incorporating modern JavaScript frameworks manually into the portal JavaScript section to work around these challenges, but fortunately Microsoft introduced the Power Apps component framework to address the situation better, with React chosen as the recommended framework moving forward.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://powerapps.microsoft.com/en-us/blog/virtual-code-components-for-power-apps-using-react-and-fluent-ui-react-platform-libraries/"&gt;Going forward&lt;/a&gt;: At GA, React and Fluent UI will be the recommended and default way to create all code components. We recommend starting to evaluating React + Fluent controls for building Power Apps code components.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;PCF is designed to work across all Power Platform interfaces, such as model-driven apps, canvas apps, and the portal, albeit with differences in what is supported for each interface.&lt;/p&gt;

&lt;p&gt;PCF for the portal used to support only field-bound forms. With the new ability to render PCF via Liquid templates, this opens up many possibilities, as components can live outside portal forms and be rendered anywhere on the site.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use cases examples
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Payment gateway integration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;External users need to be able to make payments via the chosen payment gateway while maintaining PCI compliance.&lt;/li&gt;
&lt;li&gt;Back-office staff need to be able to make payments on a user&#8217;s behalf in the model-driven app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XVPOu9YM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/1879280/179401121-72dd781f-9274-48f8-9d1d-fee092f69c3a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XVPOu9YM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/1879280/179401121-72dd781f-9274-48f8-9d1d-fee092f69c3a.png" alt="image" width="880" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without PCF, custom JavaScript code would need to be developed and maintained separately for each interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  Large file upload integration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;SharePoint document integration has a limit of 25 MB size per attachment.&lt;/li&gt;
&lt;li&gt;Azure Blob storage has a limit of 125 MB size per attachment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the requirements cannot be satisfied easily (e.g. larger sizes or more file security), developing a PCF component can help meet them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vNaZCdye--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/1879280/179401093-de14db17-239e-423b-a2f1-ab9ec3973923.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vNaZCdye--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/1879280/179401093-de14db17-239e-423b-a2f1-ab9ec3973923.png" alt="image" width="880" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Considerations
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;With great power comes great responsibility&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Security must always be at the back of the developer&#8217;s mind while building PCF. All code inside a PCF component is your own code, so Microsoft is unlikely to be liable for any security issues that stem from it.&lt;/li&gt;
&lt;li&gt;As PCF is just a wrapper around your client-side code, everything is visible in the browser. Any sensitive information or processing should be protected (e.g. using an API integration that keeps the functionality out of the browser).&lt;/li&gt;
&lt;li&gt;PCF is meant to work across all Power Platform interfaces, so avoid unsupported workarounds such as modifying things outside the component. Microsoft has produced &lt;a href="https://docs.microsoft.com/en-us/power-apps/developer/component-framework/code-components-best-practices"&gt;best practices&lt;/a&gt; around PCF development.&lt;/li&gt;
&lt;li&gt;As PCF is still evolving, expect some things to pose a challenge, mainly integration with Dataverse (e.g. lookup fields cannot yet be bound). However, with custom code a workaround can usually be found.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;For developers who come from a custom development background, this is a much-needed and welcome feature of Power Platform. Low-code platforms are great for rapid development and realising business benefits, but sometimes they still cannot meet complex requirements easily.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/power-apps/guidance/fusion-dev-ebook/01-what-is-fusion-dev-approach"&gt;Fusion development&lt;/a&gt; would be the reality moving forward and as Power Platform becomes more mature, there is no doubt it will be Microsoft’s greatest investment to come.&lt;/p&gt;

</description>
      <category>powerplatform</category>
      <category>pcf</category>
      <category>react</category>
    </item>
    <item>
      <title>Multi-environment Cloud Infrastructure with Terraform &amp; Azure DevOps</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Wed, 13 Jul 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/multi-environment-cloud-infrastructure-with-terraform-azure-devops-47jp</link>
      <guid>https://dev.to/lacjamm/multi-environment-cloud-infrastructure-with-terraform-azure-devops-47jp</guid>
      <description>&lt;p&gt;We worked on a recent project where Terraform was used to provision the Azure infrastructure and I will explain the process, and how we advance towards creating a ‘Multi-environment Cloud Infrastructure’.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-environment
&lt;/h2&gt;

&lt;p&gt;Working on large projects requires several environments, such as DEV, SIT, UAT, and PROD, and it is always good to keep non-prod and prod as alike as possible. This helps identify failures early and reduces the blast radius. A single pipeline that manages these environments with dependencies is a good way to achieve that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; is rapidly becoming a matter of fact for creating and managing cloud infrastructures by using declarative code. It has many ways for managing multiple environments, maintaining code re-usability, resource state management, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monolithic vs Modular Terraform Configurations
&lt;/h3&gt;

&lt;p&gt;Monolithic story&lt;/p&gt;

&lt;p&gt;We started with a single &lt;code&gt;main.tf&lt;/code&gt; file creating all the resources inside the root directory, managed with a single state file. That worked well while we concentrated on making things work: creating the workspace, pipelines, variables, Azure authentication, etc. All went great; our development and test environments were up and running, automated through Azure Pipelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fmono1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fmono1.png" title="Terraform Monolithic structure" alt="Terraform Monolithic structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2FMonolithic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2FMonolithic.png" title="Terraform Monolithic workspace" alt="Terraform Monolithic workspace"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you look at the source above, you can see two files, &lt;code&gt;dev.tfvars&lt;/code&gt; and &lt;code&gt;test.tfvars&lt;/code&gt;, responsible for providing separation between the environments. You provide &lt;code&gt;dev.tfvars&lt;/code&gt; when provisioning your development environment and &lt;code&gt;test.tfvars&lt;/code&gt; for the test environment. They supply environment-specific values for the variables declared in &lt;code&gt;variables.tf&lt;/code&gt;, such as resource groups, environment tags, prefixes, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fdev-tfvars.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fdev-tfvars.png" title="Dev TFVars" alt="Dev TFVars"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your terraform plan and apply commands will look like the below for the development environment, and you have to configure the pipeline to pass the environment-specific &lt;code&gt;.tfvars&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -var-file="./dev.tfvars"
terraform apply -var-file="./dev.tfvars"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pros&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can just copy the &lt;code&gt;.tfvars&lt;/code&gt; file to create any number of environments; for UAT, you just need to create a &lt;code&gt;uat.tfvars&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complex to handle when resources differ between environments; for example, you may create a single App Service plan in development but need multiple in production.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: you will have to keep a separate remote state per environment, which can be selected through the pipelines.&lt;/p&gt;
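One common way to point each environment at its own remote state, sketched here under the assumption of an Azure Storage backend (the account, container, and key names are illustrative), is partial backend configuration at init time:

```shell
# Hypothetical backend settings; one state key per environment,
# supplied by the pipeline for the environment being deployed.
terraform init \
  -backend-config="storage_account_name=stterraformstate" \
  -backend-config="container_name=tfstate" \
  -backend-config="key=dev.terraform.tfstate"
```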

&lt;h3&gt;
  
  
  Modular Terraform Configurations
&lt;/h3&gt;

&lt;p&gt;So, that was our monolithic story, and we always intended to restructure the Terraform configuration to achieve the &#8216;multi-environment cloud infrastructure&#8217;. A single &lt;code&gt;main.tf&lt;/code&gt; becomes hard to manage as you add more environments, and there are various ways to split it, a few of which are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.terraform.io/cloud-docs/workspaces" rel="noopener noreferrer"&gt;Workspaces&lt;/a&gt; - Terraform CLI way to toggle between multiple environments. We didn’t choose this method, as there were a few risks including, the chances of accidentally performing operations in the wrong environments, cannot handle if environments are not like each other, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://terragrunt.gruntwork.io/" rel="noopener noreferrer"&gt;Terragrunt&lt;/a&gt;Terragrunt is a popular terraform wrapper library that solves some of these issues, keeping IaC &lt;em&gt;DRY&lt;/em&gt; and we kept it as our secondary option for this project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Modules &amp;amp; directories - we chose this method, creating separate directories for each environment with reusable modules, which gave us a lot of confidence in maintaining flexibility between the environments.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Modules &amp;amp; Directories
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.terraform.io/language/modules" rel="noopener noreferrer"&gt;Modules&lt;/a&gt; are terraform ways to reuse your code, and keep it DRY, helping you to create multi-environment infrastructures where code is more clean, readable, and isolated. They are directories, a block of terraform that can source directory stored “.tf” configurations and can pass variables. &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fmodule.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fmodule.png" title="Module" alt="Module"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We combined the concept of directories and modules to create our multi-environment cloud infrastructure where ‘each environment is isolated by directory’ and the environments can access the modules to create the required infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fdirectory-module.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fdirectory-module.png" title="Directory Structure" alt="Directory Structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image below shows how a &lt;code&gt;main.tf&lt;/code&gt; looks inside your network module folder. &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fmodule-tf-small.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fmodule-tf-small.png" title="main file" alt="module file"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inside your development folder, you can see a &lt;code&gt;main.tf&lt;/code&gt; file, where you call the required modules and pass the variables used to create them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fmain-dev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fmain-dev.png" title="main file" alt="main file"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We kept a &lt;code&gt;.tfvars&lt;/code&gt; file per environment. This looked cleaner than hardcoding resource names in the main files and made it easy to replicate environments (just copy your &lt;code&gt;main.tf&lt;/code&gt; file). The major advantage was that, if you have two or more similar environments, you can keep a single &lt;code&gt;main.tf&lt;/code&gt; with environment-specific &lt;code&gt;.tfvars&lt;/code&gt; files and configure your pipeline to pass those variables to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipelines
&lt;/h2&gt;

&lt;p&gt;All of the above explains how Terraform is structured to handle multiple environments; next comes the pipeline, which plays the major role in handling builds and deployments into those environments.&lt;/p&gt;

&lt;p&gt;We have used &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/pipeline?view=azure-pipelines" rel="noopener noreferrer"&gt;Azure DevOps Yaml Pipeline&lt;/a&gt; to deploy the resources, and the basic steps required in your Terraform YAML CI/CD pipeline are below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Init &#8211; terraform init, validate, lint, tfsec, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build – terraform plan&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy – terraform apply&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
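The three steps above can be sketched as an Azure DevOps YAML stage. This is a hedged outline only; the stage name, working directory, and step layout are illustrative, and a real pipeline would repeat or template this per environment:

```yaml
# Hypothetical stage; duplicate (or template) per environment.
stages:
  - stage: dev
    jobs:
      - job: terraform
        steps:
          - script: terraform init
            workingDirectory: environments/dev
            displayName: Init
          - script: terraform plan -out=tfplan
            workingDirectory: environments/dev
            displayName: Build
          - script: terraform apply -auto-approve tfplan
            workingDirectory: environments/dev
            displayName: Deploy
```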

&lt;p&gt;We have split our YAML files to make them more readable and reusable, and combined them using &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops" rel="noopener noreferrer"&gt;Azure DevOps YAML templates&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fpipeline-stages.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fpipeline-stages.png" title="Pipeline stages" alt="Pipeline Stages"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image below shows a sample terraform plan &amp;amp; apply pipeline deploying into multiple environments. You can configure stage dependencies, approvals, triggers, etc. in your pipelines based on your requirements in the YAML scripts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fstages-pipeline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finsight-services-apac.github.io%2Fassets%2Fimages%2Fterraform-environments%2Fstages-pipeline.png" title="Build stages" alt="Build Stages"&gt;&lt;/a&gt;&lt;/p&gt;




</description>
      <category>posts</category>
      <category>terraform</category>
      <category>iac</category>
      <category>azure</category>
    </item>
    <item>
      <title>A Guide to Azure Site Recovery - Part 2</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Mon, 20 Jun 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/a-guide-to-azure-site-recovery-part-2-13cb</link>
      <guid>https://dev.to/lacjamm/a-guide-to-azure-site-recovery-part-2-13cb</guid>
      <description>&lt;p&gt;In the previous &lt;a href="https://dev.to/insighttechapac/a-guide-to-azure-site-recovery-part-1-1fbl"&gt;blog post&lt;/a&gt; I have elaborated the significance of Azure Site Recovery and various under the hood components that make up an Azure Site Recovery. As a continuation to it, I will cover the Onboarding of Virtual Machines to ASR and usage of Recovery Plans along with the Infrastructure as a Code practices and challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  Onboarding Virtual Machines for ASR
&lt;/h2&gt;

&lt;p&gt;In a typical migration journey, ASR onboarding can be performed once all the technical and business validations are complete and signed off, and when no rollback is involved. It is critical to understand and categorise the RTO and RPO levels of the virtual machine and the services on it before onboarding into ASR. The initial replication and subsequent snapshots begin according to your replication policy as soon as you associate the VM with ASR.&lt;/p&gt;

&lt;p&gt;Switching to a different replication policy is only possible after turning off replication, which deletes all snapshots taken under the previous policy. Hence, the initial onboarding has to be performed with the right rationale. On most occasions, the recovery configuration of the VM is the same as its primary configuration; however, with ASR this is configurable too.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Recovery Plan?
&lt;/h2&gt;

&lt;p&gt;Recovery plans group VMs into recovery groups through which you can plan and define your failover. Recovery plan groups define the order of failover and can run tasks before or after the failover. This is similar to an on-premises DR playbook maintained by infrastructure and product teams. While migrating your workloads, it is essential to transform these runbooks and incorporate them into recovery plans.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automation using Recovery Plans
&lt;/h2&gt;

&lt;p&gt;Group actions in recovery plans help us automate tasks, which can reduce the overall RTO. There are two types of group actions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Manual actions are lists of steps that need to be performed in Azure or elsewhere before or after a group of VMs fails over or fails back. When the step is reached, a user prompt (with a preconfigured description, if any) allows the administrator to complete the manual tasks and awaits acknowledgement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Runbook actions are tasks that integrate with Azure Automation account runbooks. These can execute before or after a group of VMs fails over or fails back, for example running scripts on the VMs such as updating config files or making registry changes. Runbooks have visibility of the recovery process through the recovery plan context passed in by ASR.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s4hO8oiY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-site-recovery/runbook.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s4hO8oiY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-site-recovery/runbook.png" alt="Sample Runbook with Recovery Plan Context" title="ASR Sample Runbook" width="880" height="473"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "RecoveryPlanName":"Test-RecoveryPlan",
    "FailoverType":"Test",
    "FailoverDirection":"PrimaryToSecondary",
    "GroupId":"Group2",
    "VmMap":{
        "d8daf0e6-34a7-4608-b09d-6a3251fe5ac5":{
            "SubscriptionId":"nnnnnn-nnnnn-nnnnn",
            "ResourceGroupName":"yyy-yyy-yyy-yy",
            "CloudServiceName":null,
            "RoleName":"VM-Name",
            "RecoveryPointId":"a53eea11-4e14-462a-b5ec-e18e455dada5",
            "RecoveryPointTime":"/Date(1636597591395)/"
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
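A runbook can consume this context as a parameter. The following is a hedged PowerShell sketch only; the post-failover action itself is a placeholder, and the property names follow the sample context shown above:

```powershell
# Hypothetical runbook skeleton; ASR passes the recovery plan context in.
param([Object] $RecoveryPlanContext)

foreach ($vmId in $RecoveryPlanContext.VmMap.PSObject.Properties.Name)
{
    $vm = $RecoveryPlanContext.VmMap.$vmId
    Write-Output ("Post-failover step for " + $vm.RoleName + " in resource group " + $vm.ResourceGroupName)
    # Placeholder: update config files, registry entries, etc. on the VM here.
}
```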



&lt;h2&gt;
  
  
  ASR via Infrastructure as a Code
&lt;/h2&gt;

&lt;p&gt;The Recovery Services vault can be a single pane of control for ASR; however, under the hood there are plenty of components to configure when it is set up and maintained via code. Typically the Recovery Services vault configuration is part of the Azure foundations, which is the best place to configure these components.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Resource&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Provider&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Significance&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Replication Policy&lt;/td&gt;
&lt;td&gt;Microsoft.RecoveryServices/vaults/replicationPolicies&lt;/td&gt;
&lt;td&gt;Configuration that details the frequency of recovery snapshots and retention of those snapshots&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Replication Fabric&lt;/td&gt;
&lt;td&gt;Microsoft.RecoveryServices/vaults/replicationFabrics&lt;/td&gt;
&lt;td&gt;Source and Target Regions are represented as Fabrics&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Replication Protection Container&lt;/td&gt;
&lt;td&gt;Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers&lt;/td&gt;
&lt;td&gt;Logical containers underneath Fabric to group Virtual Machines for Source and Target regions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Replication Protection Container Mappings&lt;/td&gt;
&lt;td&gt;Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectionContainerMappings&lt;/td&gt;
&lt;td&gt;Associates the protection containers with a replication policy. Ideally this is performed for every replication policy we intend to use.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Replication Network Mappings&lt;/td&gt;
&lt;td&gt;Microsoft.RecoveryServices/vaults/replicationFabrics/replicationNetworks/replicationNetworkMappings&lt;/td&gt;
&lt;td&gt;Maps the Source and Target Networks and vice versa&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  ASR Onboarding
&lt;/h3&gt;

&lt;p&gt;Some common pain points in ASR onboarding are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configuration such as disks, specifications, and resource groups can vary for every virtual machine.&lt;/li&gt;
&lt;li&gt;Complex parameter files that are hard to maintain.&lt;/li&gt;
&lt;li&gt;If extensive parameters are not supplied, building logic in ARM, Bicep, or Terraform to fetch details from the VMs is challenging and complex to write and maintain.&lt;/li&gt;
&lt;li&gt;Maintaining the source repository. Most of the foundations code comprises the Recovery Services vault, and teams generally do not mix non-foundations components into it. The recommendation is to maintain the ASR components, excluding the foundations, in a separate repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To solve the above pain points, I found that using an Azure PowerShell script as a wrapper can be highly beneficial. This preprocessing logic dynamically fetches the VM details and enables replication for the chosen VMs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A simple CSV file listing the virtual machines can be used as input to this preprocessing logic.&lt;/li&gt;
&lt;li&gt;CSVs are easily configurable and maintainable.&lt;/li&gt;
&lt;li&gt;PowerShell integrates well with the Azure providers.&lt;/li&gt;
&lt;li&gt;Logic can be developed to read the specifications and disk details and formulate what is required to enable replication.&lt;/li&gt;
&lt;li&gt;Finally, this preprocessing logic can either produce the complex parameter JSON file used to deploy via ARM templates or Bicep, or run Azure cmdlets to enable replication directly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Virtual Machines CSV Layouts
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Column Name&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;vmName&lt;/td&gt;
&lt;td&gt;Name of the Virtual Machine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;replicationPolicy&lt;/td&gt;
&lt;td&gt;Name of the replication policy (e.g. Platinum, Gold, Silver, Bronze, NonProd) depending on RTO and RPO&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;resourceGroup&lt;/td&gt;
&lt;td&gt;Name of Resource Group&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
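A hedged example of an input file following this layout (the VM, policy, and resource group names are illustrative):

```csv
vmName,replicationPolicy,resourceGroup
vm-app-01,Gold,rg-workload-prod
vm-db-01,Platinum,rg-workload-prod
```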

&lt;h4&gt;
  
  
  ASR Onboarding Snippet
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  #import CSV from vmCsvPath
  $vmCsv = import-csv $vmCsvPath
  $enableReplicationJobs = New-Object System.Collections.ArrayList

  #Enable Replication for Each VM
  foreach ($vm in $vmCsv)
  { 
   Write-output ("Processing VM: "+$vm.vmName)
   $vmName = $vm.vmName
   $sourceResourceGroup = $vm.resourceGroup
   $replicationPolicy = $vm.replicationPolicy

   Enable-Replication -vmName $vmName -replicationPolicy $replicationPolicy -sourceRg $sourceResourceGroup -targetRg $asrResourceGroup -rsvVault $rsvVault
  } 

  #Adding Os Disk
  $osDisk = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -DiskId $vmDetails.StorageProfile.OsDisk.ManagedDisk.Id `
            -LogStorageAccountId $primaryASRStorageAccountId -ManagedDisk -RecoveryReplicaDiskAccountType $vmDetails.StorageProfile.OsDisk.ManagedDisk.StorageAccountType `
            -RecoveryResourceGroupId $targetResourceGroupId -RecoveryTargetDiskAccountType $vmDetails.StorageProfile.OsDisk.ManagedDisk.StorageAccountType 

  #Adding Data Disk
  foreach($dataDisk in $vmDetails.StorageProfile.DataDisks)
  { 
    write-output "Adding Data disks for Replication"
    $disk = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -DiskId $dataDisk.ManagedDisk.Id `
             -LogStorageAccountId $primaryASRStorageAccountId -ManagedDisk -RecoveryReplicaDiskAccountType $dataDisk.ManagedDisk.StorageAccountType `
             -RecoveryResourceGroupId $targetResourceGroupId -RecoveryTargetDiskAccountType $dataDisk.ManagedDisk.StorageAccountType
    $rc = $diskList.Add($disk)
  }

  #Enabling Replication
  $job = New-AzRecoveryServicesAsrReplicationProtectedItem -AzureToAzure -Name $vmName -RecoveryVmName $vmName -ProtectionContainerMapping $primaryProtectionContainerMapping ` 
         -AzureVmId $vmDetails.ID -AzureToAzureDiskReplicationConfiguration $diskList -RecoveryResourceGroupId $TargetResourceGroupId `
         -RecoveryAzureSubnetName $targetSubnetName -RecoveryAzureNetworkId $targetVirtualNetworkId

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Recovery Plans
&lt;/h3&gt;

&lt;p&gt;Recovery plans can be simple or complex, depending on your virtual machine footprint and your appetite for automation. A complex recovery plan can have multiple groups, each group comprising a set of virtual machines. Each group can have pre and post actions, which can be either manual tasks or automated tasks implemented with runbooks.&lt;/p&gt;

&lt;p&gt;All of this can become overwhelming if the configuration is maintained as parameters in JSON files. As with the pain points in ASR onboarding, striking the right balance between logic and flexibility in the parameters is challenging.&lt;/p&gt;

&lt;p&gt;Preprocessing logic written in Azure PowerShell can again be used to consume simple parameters in the form of CSV files and build the complex JSON required for ARM template deployment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The script iterates over the VM and recovery plan CSV files for the recovery plan being processed.&lt;/li&gt;
&lt;li&gt;It identifies the grouping of the VMs and builds the groups by referencing the pre and post actions in the group actions CSV file.&lt;/li&gt;
&lt;li&gt;Finally, a JSON parameter file is built by this preprocessing script and can then be used to deploy via ARM templates or Bicep.&lt;/li&gt;
&lt;/ul&gt;
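
&lt;p&gt;The grouping step can be sketched with PowerShell's &lt;code&gt;Group-Object&lt;/code&gt;. This is a simplified illustration rather than the full preprocessing script; the column names follow the CSV layouts described in this post:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  #Filter the VM CSV down to the recovery plan being processed
  $vmCsv = Import-Csv $vmCsvPath
  $planVms = $vmCsv | Where-Object { $_.recoveryPlan -eq $recoveryPlan }

  #Build the groups by group number, in boot order
  $vmGroups = $planVms | Group-Object -Property group | Sort-Object { [int]$_.Name }

  foreach ($vmGroup in $vmGroups)
  {
    Write-Output ("Group " + $vmGroup.Name + ": " + ($vmGroup.Group.vmName -join ", "))
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;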

&lt;h4&gt;
  
  
  Recovery Plan CSV Layouts
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Column Name&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;vmName&lt;/td&gt;
&lt;td&gt;Name of the Virtual Machine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;recoveryPlan&lt;/td&gt;
&lt;td&gt;Name of the Recovery Plan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;group&lt;/td&gt;
&lt;td&gt;Group number in the Recovery Plan - 1, 2, 3, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Group Action CSV Layouts
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Column Name&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;recoveryPlan&lt;/td&gt;
&lt;td&gt;Name of the Recovery Plan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;group&lt;/td&gt;
&lt;td&gt;Group number in the Recovery Plan - 1, 2, 3, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;startAction&lt;/td&gt;
&lt;td&gt;Type of Start Action - Manual, Runbook&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;startActionName&lt;/td&gt;
&lt;td&gt;Name of the Start Action&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;startActionDescription&lt;/td&gt;
&lt;td&gt;Description of Start Action&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;endAction&lt;/td&gt;
&lt;td&gt;Type of End Action - Manual, Runbook&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;endActionName&lt;/td&gt;
&lt;td&gt;Name of the End Action&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;endActionDescription&lt;/td&gt;
&lt;td&gt;Description of End Action&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;failoverType&lt;/td&gt;
&lt;td&gt;Type of Failover - TestFailover, PlannedFailover&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;failoverDirections&lt;/td&gt;
&lt;td&gt;Direction of Failover - PrimaryToRecovery, RecoveryToPrimary&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
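
&lt;p&gt;As an illustration, a group action row might look like the following (the plan, runbook and action names are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;recoveryPlan,group,startAction,startActionName,startActionDescription,endAction,endActionName,endActionDescription,failoverType,failoverDirections
AppTier,1,Runbook,Start-AppServices,Starts application services,Manual,ValidateApp,Validate application health,TestFailover,PrimaryToRecovery
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;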

&lt;h4&gt;
  
  
  Recovery Plan Snippet
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
        #Create the replication protected items array for the plan's VMs
        $replicationProtArray = $vmSubset | ForEach-Object {
            $primaryFabric = Get-ASRFabric | Where-Object {$_.FabricSpecificDetails.Location -like $primaryRegion}
            $primaryContainer = Get-ASRProtectionContainer -Name $PrimaryContainerName -Fabric $primaryFabric
            $protDetails = Get-AzRecoveryServicesAsrReplicationProtectedItem -Name $_.vmName -ProtectionContainer $primaryContainer
            $protId = $protDetails.Id
            $vmDetails = Get-AzVM -ResourceGroupName $_.resourceGroup -Name $_.vmName
            $vmId = $vmDetails.Id
            [PSCustomObject]@{
                id = $protId
                virtualMachineId = $vmId
            }
        }

        ###Start Group Action
        #Transform to a Manual action
        #(Split-StringObject is a custom helper that splits a delimited string into an array)
        if ($action.startAction -eq 'Manual')
        {
            $startCustomDetails = [PSCustomObject]@{
                instanceType = 'ManualActionDetails'
                description = $action.startActionDescription
            }
            $finalStartAction = [PSCustomObject]@{
                actionName = $action.startActionName
                failoverTypes = [string[]] (Split-StringObject $action.failoverType)
                failoverDirections = [string[]] (Split-StringObject $action.failoverDirections)
                customDetails = $startCustomDetails
            }
        }
        #Transform to a Runbook action
        elseif ($action.startAction -eq 'Runbook') {
            $runbookName = $action.startActionName
            $runbookId = ($automationAccountId + "/runbooks/" + $runbookName)
            $startCustomDetails = [PSCustomObject]@{
                instanceType = 'AutomationRunbookActionDetails'
                runbookId = $runbookId
                description = $action.startActionDescription
                fabricLocation = 'Primary'
            }
            $finalStartAction = [PSCustomObject]@{
                actionName = $action.startActionName
                failoverTypes = [string[]] (Split-StringObject $action.failoverType)
                failoverDirections = [string[]] (Split-StringObject $action.failoverDirections)
                customDetails = $startCustomDetails
            }
        }
        #No start action defined for this group
        elseif (!$action) {
            $finalStartAction = @()
        }

        #End group actions ($finalEndAction) are built the same way from the endAction columns

        #Create the recovery group and add it to the plan's group array
        #($recoveryGroupsArray is initialised as an empty array before the group loop)
        $recoveryGroup = [PSCustomObject]@{
                groupType = "Boot"
                replicationProtectedItems = [array] $replicationProtArray
                startGroupActions = [array] $finalStartAction
                endGroupActions = [array] $finalEndAction
        }
        $recoveryGroupsArray += $recoveryGroup

        #Create the finalised recovery plan parameter file with the group array
        $recoveryPlanfile.parameters.recoveryVaultName.value = $rsvVault

        $recoveryPlanfile.parameters.recoveryPlanName.value = "RecoveryPlan-$recoveryPlan"

        $recoveryPlanfile.parameters.recoveryGroups.value = [array] $recoveryGroupsArray

        #Convert to JSON parameters
        $recoveryPlanJson = ConvertTo-Json -InputObject $recoveryPlanfile -Depth 10

        $recoveryPlanJson | Set-Content "$baseTemplatePath\RecoveryPlan-$recoveryPlan.parameters.json"

        #Deploy the recovery plan via the ARM template
        $DeploymentInputs = @{
                     Name = "RecoveryPlan-$recoveryPlan-$(Get-Date -Format yyyyMMdd)"
                     TemplateFile = $armTemplateFile
                     TemplateParameterFile = "$baseTemplatePath\RecoveryPlan-$recoveryPlan.parameters.json"
                     Verbose = $true
                     ErrorAction = "Stop"
                   }

        New-AzResourceGroupDeployment @DeploymentInputs -ResourceGroupName $rsvRg

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This marks the completion of onboarding virtual machines into ASR and recovery plans; everything is now ready for failover as part of DR drills or a real-life disaster. In the next blog post in this series, I will explain the failover scenarios and steps, along with day-to-day operations in Azure Site Recovery.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-support-matrix"&gt;Site Recovery Support Matrix&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/site-recovery/recovery-plan-overview"&gt;Recovery Plans&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-runbook-automation"&gt;Automation Runbooks in Recovery Plans&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>posts</category>
      <category>azure</category>
      <category>site</category>
      <category>recovery</category>
    </item>
    <item>
      <title>A Guide to Azure Site Recovery - Part 1</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Fri, 27 May 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/a-guide-to-azure-site-recovery-part-1-1fbl</link>
      <guid>https://dev.to/lacjamm/a-guide-to-azure-site-recovery-part-1-1fbl</guid>
      <description>&lt;h2&gt;
  
  
  What is Azure Site Recovery?
&lt;/h2&gt;

&lt;p&gt;Azure Site Recovery is a Disaster Recovery as a Service (DRaaS) offering from Azure that contributes to a Business Continuity and DR strategy by replicating IaaS workloads between Azure regions, and by replicating on-premises physical servers and virtual machines to Azure.&lt;/p&gt;

&lt;p&gt;In this blog post, I will selectively cover the Azure to Azure Site Recovery option and discuss the various under-the-hood components that make up a Site Recovery deployment. I will also explain the different phases of setting up Azure Site Recovery for disaster recovery.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Azure Site Recovery in a PaaS and Containers world?
&lt;/h2&gt;

&lt;p&gt;In a world fast moving towards containers and PaaS services, one might wonder why we would still need Azure Site Recovery; here are some supporting facts. Yes, many customers and product teams are moving away from IaaS-style workloads to PaaS services and containers in the form of Azure Kubernetes Service. However, we still have many customers and businesses who are just beginning their cloud journey, with plenty of applications and services running on virtual machines in data centres.&lt;/p&gt;

&lt;p&gt;Cloud enablement for these on-premises virtual machines quite often starts with migration, i.e. lift and shift. The percentage of these applications being modernised is still low. Most organisations decide to migrate as their first phase and eventually modernise or decommission the workloads in subsequent phases. Until this happens, it is important that these services are provided with a Business Continuity and DR strategy. There is also the scenario where businesses choose IaaS workloads over PaaS and containers because of security concerns and other requirements. In my humble opinion, virtual machines are here to stay for at least another 10 years, they definitely need a DR strategy, and Azure Site Recovery is a cloud-native offering that can meet organisational DR needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling Azure Site Recovery
&lt;/h2&gt;

&lt;p&gt;Azure Site Recovery is offered through a Recovery Services vault. Depending on the governance model, the Recovery Services vault can be provisioned in a Hub subscription if the business wants to manage it centrally, or in a Spoke subscription for a more distributed style of management. Site Recovery still has to be enabled, and several components need to be configured to complete the Site Recovery infrastructure. Let's discuss these in the sections below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the Foundations
&lt;/h2&gt;

&lt;p&gt;As part of a broader cloud enablement and DR strategy, businesses typically choose their primary and secondary regions as part of their cloud foundations. It is vital to define the source and target networks in these regions before enabling ASR. Along with the networking, there are a few other components required, which are explained below.&lt;/p&gt;

&lt;p&gt;Some of the key factors to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Existing or Dedicated Target network/subnets for ASR workloads.&lt;/li&gt;
&lt;li&gt;Reserving enough IP address space in the target virtual network/subnets to accommodate failed-over workloads.&lt;/li&gt;
&lt;li&gt;Ensuring networking is consistent between regions, such as NSGs and firewall rules that always accommodate both primary and secondary networks, and inter-connectivity between tiers if any (eg: Web to Data subnet in both primary and secondary).&lt;/li&gt;
&lt;li&gt;Firewall and NSG rules to enable access to the PaaS services that support ASR, such as Storage, Automation and Key Vault.&lt;/li&gt;
&lt;li&gt;Resource Groups in Target Regions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A dedicated cache storage account will also have to be created in the primary region; Azure Site Recovery uses it as a staging point during replication.&lt;/p&gt;
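
&lt;p&gt;As a sketch, the cache storage account can be created with Azure PowerShell; the account, resource group and region names below are hypothetical examples:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  #Create a cache storage account in the primary region for ASR replication
  New-AzStorageAccount -ResourceGroupName "rg-asr-prod" -Name "stasrcacheprimary" `
      -Location "australiaeast" -SkuName Standard_LRS -Kind StorageV2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;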

&lt;h2&gt;
  
  
  Define your Replication Policies
&lt;/h2&gt;

&lt;p&gt;It is vital to understand and frame the RTO and RPO of your existing workloads through a detailed assessment as part of your broader DR strategy.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are RTO and RPO?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Recovery Time Objective&lt;/strong&gt; is the amount of time within which a system or service must be restored after a disaster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recovery Point Objective&lt;/strong&gt; is the maximum amount of data loss, measured in time, that is acceptable during an outage of a system or service before it significantly impacts the business.&lt;/p&gt;

&lt;p&gt;Understanding the existing RTO and RPO and translating to what ASR and Azure can offer is important in framing the replication policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Crash Consistency Vs App Consistency
&lt;/h3&gt;

&lt;p&gt;Similar to other backup and recovery services, ASR offers two types of snapshots. Crash-consistent snapshots are taken every 5 minutes by default, and this frequency cannot be modified. A crash-consistent snapshot captures the data that is on disk at the time the snapshot is taken. Most applications are capable of recovering from these snapshots.&lt;/p&gt;

&lt;p&gt;Application-consistent snapshots offer stronger data consistency by additionally capturing in-memory data and any in-progress transactions, on top of what crash-consistent snapshots capture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Snapshot Retention
&lt;/h3&gt;

&lt;p&gt;Snapshot retention is another important factor, as there are costs in maintaining the snapshots in the form of storage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tiered Replication Policy
&lt;/h3&gt;

&lt;p&gt;Based on the above factors, it is essential to create the replication policies. Businesses often fall into a tiered model such as the one below; the numbers are approximate and given as an example.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Replication Policy&lt;/th&gt;
&lt;th&gt;Snapshot Frequency (in hours)&lt;/th&gt;
&lt;th&gt;Snapshot Retention (in days)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Gold&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Silver&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bronze&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
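
&lt;p&gt;As a sketch, a tiered policy such as Gold above can be created with Azure PowerShell. This assumes the vault context has already been set with &lt;code&gt;Set-AzRecoveryServicesAsrVaultContext&lt;/code&gt;, and the policy name is an example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  #Create the Gold replication policy: hourly app-consistent snapshots, 2-day (48-hour) retention
  New-AzRecoveryServicesAsrPolicy -AzureToAzure -Name "Gold" `
      -ApplicationConsistentSnapshotFrequencyInHours 1 -RecoveryPointRetentionInHours 48
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;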

&lt;p&gt;This marks the completion of the Recovery Services vault configuration, which is now ready for onboarding virtual machines for replication. In the upcoming blog posts in this series, I will explain the onboarding of virtual machines into ASR, failovers, and Infrastructure as Code practices and challenges for Azure Site Recovery.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-overview"&gt;Site Recovery Overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-architecture"&gt;Azure to Azure Site Recovery Architecture&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>posts</category>
      <category>azure</category>
      <category>asr</category>
    </item>
    <item>
      <title>How to align your azure environment using right tiering strategy</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Thu, 05 May 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/how-to-align-your-azure-environment-using-right-tiering-strategy-1100</link>
      <guid>https://dev.to/lacjamm/how-to-align-your-azure-environment-using-right-tiering-strategy-1100</guid>
      <description>&lt;p&gt;Microsoft Azure is one of the major cloud providers in the technology space and many enterprise and small scale customers have migrated to the azure cloud as part of their digital transformation journeys. As Azure cloud provides a bundle of services aligning to the organisations existing principles and methodologies, it’s important to align these strategies during the foundation stage for each workload to better place them in the right tier.&lt;/p&gt;

&lt;p&gt;Every workload hosted in an organisation has a story about how it should be provisioned, based on factors such as availability, resiliency, fault tolerance and disaster recovery, and additionally in terms of Recovery Point Objective (RPO) and Recovery Time Objective (RTO). When deployed, each workload must meet certain criteria based on its criticality and impact without compromising on cost, so placing it in the right tier is equally important.&lt;/p&gt;

&lt;p&gt;Most legacy workloads, and a few of the latest ones, depend on running on virtual machines (VMs) due to factors such as supportability, frameworks, operating systems and many others. Hence, the majority of workloads end up running as IaaS before organisations invest in modernising them.&lt;/p&gt;

&lt;p&gt;These criteria focus mainly on Infrastructure as a Service (IaaS) and, to some extent, can be applied to Platform as a Service (PaaS) offerings too.&lt;/p&gt;

&lt;p&gt;Microsoft Azure provides the following services to host these workloads based on the needs of the organisation/applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual Machines(VMs)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure offers. An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it.&lt;/li&gt;
&lt;li&gt;Single Instance Virtual Machine SLA varies based on the Managed disk SKU used for Operating System and Data Disk.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Managed Disks
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Azure managed disks are block-level storage volumes that are managed by Azure and used with Azure Virtual Machines. Managed disks are like a physical disk in an on-premises server but virtualized.&lt;/li&gt;
&lt;li&gt;Azure managed disks offer two storage redundancy options: locally redundant storage (LRS, which is supported in all Azure regions and for all disk types such as HDD, SSD and Ultra Disks) and zone-redundant storage (ZRS, which is limited to specific regions and currently supports only premium disks).&lt;/li&gt;
&lt;li&gt;Managed Disks do not have a financially backed SLA themselves. The availability of managed disks is based on the SLA of the underlying storage and of the virtual machine to which they are attached, but they are designed for 99.999% availability.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Disk SKU&lt;/th&gt;
&lt;th&gt;Operating System&lt;/th&gt;
&lt;th&gt;Data Disk&lt;/th&gt;
&lt;th&gt;SLA&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Premium SSD or Ultra Disk&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;99.9%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Standard SSD&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;99.5%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Standard HDD&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;95%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Azure Backup(AB)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The Azure Backup service provides simple, secure, and cost-effective solutions to back up your data and recover it from the Microsoft Azure cloud.&lt;/li&gt;
&lt;li&gt;Azure Backup supports multiple services such as virtual machines, Managed Disks, Azure File Shares and many others.&lt;/li&gt;
&lt;li&gt;It guarantees at least 99.9% availability of the backup and restore functionality of the Azure Backup service.&lt;/li&gt;
&lt;li&gt;It supports multiple replication types to keep your storage/data highly available: locally redundant storage (LRS, which creates 3 copies of the data within a single datacenter), geo-redundant storage (GRS, which replicates your data to a secondary region) and zone-redundant storage (ZRS, which replicates your data across availability zones, guaranteeing data residency and resiliency in the same region).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Azure Site Recovery(ASR)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on virtual machines (VMs) from a primary site to a secondary location or within a region between zones.&lt;/li&gt;
&lt;li&gt;For each Protected Instance configured for Azure-to-Azure Failover, it guarantees a two-hour Recovery Time Objective.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Availability Sets
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Spreading virtual machines across an availability set provides redundancy against hardware, network and storage failures within the same datacenter. If a disaster occurs at the datacenter level, the service will not be available.&lt;/li&gt;
&lt;li&gt;All Azure regions support availability sets, unlike availability zones.&lt;/li&gt;
&lt;li&gt;An availability set needs two or more virtual machines to provide any benefit, and the instances need not be identical copies of the application.&lt;/li&gt;
&lt;li&gt;Active Directory Domain Services (on-premises AD) is a good example, where multiple instances run in the same region.&lt;/li&gt;
&lt;li&gt;For all virtual machines that have two or more instances deployed in the same availability set or in the same Dedicated Host Group, Microsoft guarantees Virtual Machine Connectivity to at least one instance at least 99.95% of the time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Availability Zones
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Spreading virtual machines across availability zones provides redundancy against power, cooling and networking infrastructure failures across datacenters (zones are made up of one or more datacenters). If a disaster occurs at the zone level, service will be provided by the other active zones.&lt;/li&gt;
&lt;li&gt;Unlike availability sets, not all Azure regions support availability zones. It’s important to finalise a location before provisioning the applications.&lt;/li&gt;
&lt;li&gt;Availability zones are meant for applications which require disaster recovery capabilities within the region, or which run multiple instances across zones for high availability.&lt;/li&gt;
&lt;li&gt;Multiple instances of a web server running in each zone is a good example of providing availability to the application. For all virtual machines that have two or more instances deployed across two or more availability zones in the same Azure region, Microsoft guarantees Virtual Machine Connectivity to at least one instance at least 99.99% of the time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tiering Categories
&lt;/h2&gt;

&lt;p&gt;Below are the tiering category definitions based on tags. These can be extended further based on requirements, and the categories below can be used as a good starting point to tier your services based on organisational needs. It is recommended to tag the workloads and their dependencies for easier tracking.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tier Category&lt;/th&gt;
&lt;th&gt;Tag Name&lt;/th&gt;
&lt;th&gt;Tag Value&lt;/th&gt;
&lt;th&gt;Service Availability&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;High Availability + Backup&lt;/td&gt;
&lt;td&gt;Tier&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;~ Zero RPO and RTO&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Disaster Recovery + Backup&lt;/td&gt;
&lt;td&gt;Tier&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&amp;lt; 5mins of RPO and &amp;lt; 2hrs of RTO&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Backup&lt;/td&gt;
&lt;td&gt;Tier&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&amp;lt; 24hrs of RPO and &amp;lt; 2days of RTO&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No (Disaster Recovery + Backup)&lt;/td&gt;
&lt;td&gt;Tier&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Based on demand&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
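
&lt;p&gt;As a sketch, the tier tag can be applied to a workload with Azure PowerShell; the resource group and VM names below are hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  #Merge a Tier tag onto an existing virtual machine without disturbing its other tags
  $vm = Get-AzVM -ResourceGroupName "rg-app-prod" -Name "app-vm-01"
  Update-AzTag -ResourceId $vm.Id -Tag @{ Tier = "1" } -Operation Merge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;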

&lt;h3&gt;
  
  
  Tier Category 0
&lt;/h3&gt;

&lt;p&gt;This tier is meant for services which are critical, require high availability, and provide the foundation for the later tiers. There are multiple ways to achieve this in Azure, such as using availability sets, availability zones, or hosting multiple instances of the service across regions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Possibilities: 

&lt;ul&gt;
&lt;li&gt;Virtual Machines - host one or more instances in each region, using an availability set or availability zone&lt;/li&gt;
&lt;li&gt;Managed Disks - recommended to use LRS, or ZRS where possible, within each region&lt;/li&gt;
&lt;li&gt;Azure Backup - recommended to use LRS storage, or ZRS where possible; GRS is not required as a secondary instance is already running in the other region&lt;/li&gt;
&lt;li&gt;Azure Site Recovery - not recommended, as the service already runs multiple instances across regions or zones&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tier Category 1
&lt;/h3&gt;

&lt;p&gt;This tier is meant for services which are critical or categorised under a production environment with challenging RPO and RTO requirements. This covers the scenario where workloads are not hosted in high-availability mode and instead rely on disaster recovery mechanisms to recover from a human-made or natural disaster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Possibilities: 

&lt;ul&gt;
&lt;li&gt;Virtual Machines - host a single instance, or multiple instances in one region only, using an availability set or availability zone&lt;/li&gt;
&lt;li&gt;Managed Disks - recommended to use LRS, or ZRS where possible, within one region&lt;/li&gt;
&lt;li&gt;Azure Backup - recommended to use LRS or ZRS where possible if backup is not required in the secondary region, or GRS-based storage for all instances if backup data is required there&lt;/li&gt;
&lt;li&gt;Azure Site Recovery - recommended to enable for all applicable instances and create a disaster recovery plan based on application criteria&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tier Category 2
&lt;/h3&gt;

&lt;p&gt;This tier is meant for services which are categorised as staging, testing and development instances and considered non-critical. This covers the scenario where workloads are not replicated within the same or another region in case a disaster occurs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Possibilities: 

&lt;ul&gt;
&lt;li&gt;Virtual Machines - host a single instance, or multiple instances in one region only, using an availability set or availability zone&lt;/li&gt;
&lt;li&gt;Managed Disks - recommended to use LRS, or ZRS where possible, within one region&lt;/li&gt;
&lt;li&gt;Azure Backup - recommended to use LRS storage, or ZRS where possible&lt;/li&gt;
&lt;li&gt;Azure Site Recovery - not recommended to enable replication, as these are non-critical workloads&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tier Category 3
&lt;/h3&gt;

&lt;p&gt;This tier is meant for services which are categorised as Proof of Concept (PoC), short-term testing or development workloads. This covers the scenario where workloads are neither backed up nor replicated, on the expectation that an outage has no significant impact and the workloads can be redeployed if required.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Possibilities: 

&lt;ul&gt;
&lt;li&gt;Virtual Machines - host a single instance, or multiple instances in one region only, using an availability set or availability zone&lt;/li&gt;
&lt;li&gt;Managed Disks - recommended to use LRS, or ZRS where possible, within one region&lt;/li&gt;
&lt;li&gt;Azure Backup - not recommended to use any replication type&lt;/li&gt;
&lt;li&gt;Azure Site Recovery - not recommended to enable replication, as these are non-critical workloads&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Addendum
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/"&gt;Virtual Machines&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview"&gt;Managed Disks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/backup/backup-overview"&gt;Azure Backup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-overview"&gt;Azure Site Recovery&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/availability-set-overview"&gt;Availability Sets&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/availability-zones/az-overview"&gt;Availability Zones&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/High_availability"&gt;High-Availability SLAs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Availability %&lt;/th&gt;
&lt;th&gt;Downtime per year&lt;/th&gt;
&lt;th&gt;Downtime per quarter&lt;/th&gt;
&lt;th&gt;Downtime per month&lt;/th&gt;
&lt;th&gt;Downtime per week&lt;/th&gt;
&lt;th&gt;Downtime per day (24 hours)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;99% (“two nines”)&lt;/td&gt;
&lt;td&gt;3.65 days&lt;/td&gt;
&lt;td&gt;21.9 hours&lt;/td&gt;
&lt;td&gt;7.31 hours&lt;/td&gt;
&lt;td&gt;1.68 hours&lt;/td&gt;
&lt;td&gt;14.40 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99.5% (“two and a half nines”)&lt;/td&gt;
&lt;td&gt;1.83 days&lt;/td&gt;
&lt;td&gt;10.98 hours&lt;/td&gt;
&lt;td&gt;3.65 hours&lt;/td&gt;
&lt;td&gt;50.40 minutes&lt;/td&gt;
&lt;td&gt;7.20 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99.9% (“three nines”)&lt;/td&gt;
&lt;td&gt;8.77 hours&lt;/td&gt;
&lt;td&gt;2.19 hours&lt;/td&gt;
&lt;td&gt;43.83 minutes&lt;/td&gt;
&lt;td&gt;10.08 minutes&lt;/td&gt;
&lt;td&gt;1.44 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99.95% (“three and a half nines”)&lt;/td&gt;
&lt;td&gt;4.38 hours&lt;/td&gt;
&lt;td&gt;65.7 minutes&lt;/td&gt;
&lt;td&gt;21.92 minutes&lt;/td&gt;
&lt;td&gt;5.04 minutes&lt;/td&gt;
&lt;td&gt;43.20 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99.99% (“four nines”)&lt;/td&gt;
&lt;td&gt;52.60 minutes&lt;/td&gt;
&lt;td&gt;13.15 minutes&lt;/td&gt;
&lt;td&gt;4.38 minutes&lt;/td&gt;
&lt;td&gt;1.01 minutes&lt;/td&gt;
&lt;td&gt;8.64 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99.999% (“five nines”)&lt;/td&gt;
&lt;td&gt;5.26 minutes&lt;/td&gt;
&lt;td&gt;1.31 minutes&lt;/td&gt;
&lt;td&gt;26.30 seconds&lt;/td&gt;
&lt;td&gt;6.05 seconds&lt;/td&gt;
&lt;td&gt;864.00 milliseconds&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
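&lt;p&gt;The figures above follow directly from the availability percentage: downtime = (1 - availability) × period length. As a quick sketch (my addition, not from any Azure SLA document; it assumes an average year of 8,766 hours, i.e. 365.25 days, and a 730.5-hour average month, matching the table):&lt;/p&gt;

```python
def downtime_minutes(availability_pct: float, period_hours: float) -> float:
    """Downtime in minutes over a period, given an availability percentage."""
    return (1 - availability_pct / 100) * period_hours * 60

# "Three nines" over an average month (730.5 hours)
print(round(downtime_minutes(99.9, 730.5), 2))   # 43.83 minutes
# "Four nines" over an average year (8766 hours)
print(round(downtime_minutes(99.99, 8766), 2))   # 52.6 minutes
```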

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;It is important to have a tiering strategy in place before migrating or deploying an application. A tiering strategy doesn’t just help in aligning services to the right tier; it also plays a vital role in delivering the service according to its criticality. Choosing the tier based on factors such as redundancy and replication options also helps optimise the cost of the overall solution. Hope it helps!&lt;/p&gt;
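&lt;p&gt;As a back-of-the-envelope illustration of how redundancy feeds into tiering decisions, the availability of n independent redundant instances can be estimated as 1 - (1 - a)^n. This is a simplified model (my addition, not an Azure SLA formula) that ignores shared failure modes such as a zone-wide outage:&lt;/p&gt;

```python
def combined_availability(single_pct: float, instances: int) -> float:
    """Estimated availability (%) of n independent redundant instances:
    the system is down only when every instance fails at once."""
    all_fail = (1 - single_pct / 100) ** instances
    return (1 - all_fail) * 100

# Two independent 99% instances behind a load balancer
print(round(combined_availability(99.0, 2), 4))   # 99.99
```

Under these idealised assumptions, doubling up two 99% ("two nines") instances yields roughly a four-nines system, which is why redundancy options such as Availability Sets and Zones matter when targeting a tier.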

</description>
      <category>azure</category>
      <category>tiering</category>
      <category>strategy</category>
    </item>
    <item>
      <title>The Cloud Doctor</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Tue, 05 Apr 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/the-cloud-doctor-3pjj</link>
      <guid>https://dev.to/lacjamm/the-cloud-doctor-3pjj</guid>
      <description>&lt;p&gt;In the past 2 years, the pandemic has taught us that keeping our immunity and health at an all time high can avoid the risk of getting the virus. However we are humans and are not perfect at all times. And we fail to do the right thing to stay immune. And so whenever we feel sick we go to the doctor and get ourselves diagnosed. How good would it be if we knew how to fix our health before it becomes an issue?&lt;/p&gt;

&lt;p&gt;It would be so nice to wake up one day and have Siri, your Apple Watch or Alexa say, “You are recommended to drink 12 glasses of water today because you drank only 4 glasses yesterday”. Perhaps that sort of artificial intelligence is still in the making, where your Apple Watch could start making recommendations to you.&lt;/p&gt;

&lt;p&gt;A few years ago, when things were mostly on premises, a health check of a database was a standard service offered by consultancies. It had to be performed by a consultant going to the actual premises, accessing the server, running the scripts and then obtaining the results. The process was cumbersome, time consuming and required manual effort. If an organization is too busy to get someone to perform this health check, and meanwhile the health of the database deteriorates and the DB crashes, then who is responsible for the loss of data, the time to restore the DB state and the cost?&lt;/p&gt;

&lt;p&gt;During the pandemic, many organizations fortunately chose to migrate to the cloud and removed the need to host physical servers. With that choice, gone are the days when health checks were performed manually on the database. Thankfully we now have something that looks after our platform before a problem turns into a disaster!&lt;/p&gt;

&lt;p&gt;In this article we will look at the Azure offerings that together constitute a virtual cloud doctor. When we visit a doctor for a health check, they review our vital signs such as blood sugar levels, blood pressure and temperature, and provide recommendations based on these statistics. We can draw parallels between our doctor and three Azure services: Azure Advisor, Azure Monitoring and Azure Health.&lt;/p&gt;

&lt;p&gt;Leveraging these three services together within your Azure platform provides an unrivalled trio that gives you robust monitoring, alerting capabilities and customized recommendations based on best practices! Not just that, it puts you on top of everything happening in your platform, so that you have adequate time to respond to an issue rather than panic in haste. The trio monitors the health of your platform, diagnoses issues and makes recommendations based on the diagnosis. It monitors availability, performance, security and pricing, and discovers risks so that you can mitigate them pro-actively. The following sections look at each one in detail.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Azure Monitoring
&lt;/h3&gt;

&lt;p&gt;Just as an electrocardiogram (ECG) monitors your heart rate, Azure Monitoring is a monitoring service that provides a single pipeline for monitoring across all Azure resource types, enabling you to easily monitor, diagnose, alert on and be notified of problems in your cloud infrastructure. It provides platform metrics at one-minute granularity by default. The tool monitors both applications and data, so it can be a DBA’s or Azure admin’s best friend! Figure 1 shows how the Monitoring options show up when you search for them in the left panel of the Azure SQL resource page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lAjWJvy---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-monitoring/Figure1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lAjWJvy---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-monitoring/Figure1.png" alt="Figure 1" width="533" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1&lt;/p&gt;

&lt;p&gt;Azure Monitoring produces metrics of all different kinds! Figure 2 shows the Azure Monitoring service monitoring the CPU utilization, input/output and logs of an Azure SQL Database. This gives an idea of the peak times when a database is queried and when the resource was paused, unavailable or underutilized, which can be drilled down into further.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--voEhKug2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-monitoring/Figure2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--voEhKug2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-monitoring/Figure2.png" alt="Figure 2" width="880" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2&lt;/p&gt;

&lt;p&gt;The “&lt;strong&gt;Line chart&lt;/strong&gt;” option allows you to visualize this data differently. It also allows you to create an alert rule to notify you if resources are overutilized or unavailable. Later in this article we will look at some more sophisticated visualization options. Azure Monitoring not only keeps a record of the metrics at a high level, but can also tell us which queries are causing high CPU/IO utilization. This feature is really useful for finding out whether a user has left a query running on their machine and forgotten about it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gDe0_4yh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-monitoring/Figure3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gDe0_4yh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-monitoring/Figure3.png" alt="Figure 2" width="880" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Azure Health
&lt;/h3&gt;

&lt;p&gt;We fear our health being compromised, and so we take all the vitamins required to stay fit. Similarly, when there are hundreds of pipelines in an Azure environment processing terabytes of data, and hundreds of reports rely on that data, the health of each resource, whether it is an Azure database or a storage account, is of paramount importance. It is the responsibility of Azure Health to keep you informed about the health of your services and resources, whether it is maintenance, outages or other types of issues that may impact your organisation. Azure Health offers two sub-services: Resource Health, which monitors a specific resource, and Service Health, which monitors the wider Azure environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resource Health
&lt;/h4&gt;

&lt;p&gt;Resource Health provides details about the health of individual resources in your environment, such as a virtual machine. By setting up alerts on Resource Health you can stay informed about the availability of your resources and quickly respond to any problems. Azure Resource Health helps you diagnose and get support for service problems that affect your Azure resources, and it reports on their current and past health. Resource Health runs checks, minute by minute, across your resources and makes the information available to you. It is available through the &lt;strong&gt;Support + troubleshooting&lt;/strong&gt; blade in the Azure portal, for the supported resource types on Azure.&lt;/p&gt;

&lt;h4&gt;
  
  
  Service Health
&lt;/h4&gt;

&lt;p&gt;Service Health looks after the health of the Azure services and regions currently in use by your workloads. To stay informed about your Azure services, you need to set up alerts that notify you via your preferred communication channels. Azure Service Health is what you will use to get information on outages, planned maintenance, health advisories and security advisories.&lt;/p&gt;

&lt;p&gt;It allows you to create customized views, filtering among subscription, region, and services. The level of details will include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Issue Name&lt;/li&gt;
&lt;li&gt;Subscription, service, and region impacted&lt;/li&gt;
&lt;li&gt;Start time&lt;/li&gt;
&lt;li&gt;Summary and issue updates&lt;/li&gt;
&lt;li&gt;Root cause analysis&lt;/li&gt;
&lt;li&gt;Downloadable PDF with explanations&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Azure Advisor
&lt;/h3&gt;

&lt;p&gt;In real life, doctors not only diagnose an issue but also provide medical consultation. For example, your dentist will recommend a suitable toothbrush and toothpaste depending on the condition of your teeth. Similarly, for the Azure environment, Azure Advisor acts as your personalized cloud consultant, providing recommendations across a suite of categories from cost to performance to security. Advisor recommendations are a pro-active way to prevent issues before they happen: the service analyses your Azure configuration and matches it against best practices to come up with a list of recommendations. Figure 4 shows the list of recommendations for the admin to action.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AKx0F4qY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-monitoring/Figure4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AKx0F4qY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-monitoring/Figure4.png" alt="Figure 4" width="476" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 4&lt;/p&gt;

&lt;p&gt;There are many more features of Azure Advisor, Azure Monitoring and Azure Health that require an in-depth explanation and would be a bit of a stretch for this blog! Before ending, though, let’s quickly see how the recommendations can best be viewed. Figure 5 shows recommendations from Azure Advisor visualized in Power BI. Since Power BI is one of the finest visualization tools and works very well with Azure as a data source, you can view all recommendations in one place with filters, slicers and colour codes. You can also get aggregates such as “Total Savings” if the recommendations were followed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cMEfpnxe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-monitoring/Figure5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cMEfpnxe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/azure-monitoring/Figure5.png" alt="Figure 5" width="602" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 5&lt;/p&gt;

&lt;p&gt;I hope this blog has given you an idea of how the Cloud Doctor works. If you need advice or assistance with Azure, please feel free to approach Insight Enterprises.&lt;/p&gt;

</description>
      <category>monitor</category>
      <category>azure</category>
    </item>
    <item>
      <title>Getting started with Blazor</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Mon, 28 Mar 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/getting-started-with-blazor-2dbb</link>
      <guid>https://dev.to/lacjamm/getting-started-with-blazor-2dbb</guid>
      <description>&lt;p&gt;Blazor is a &lt;a href="https://en.wikipedia.org/wiki/Single-page_application"&gt;Single Page Application&lt;/a&gt; development framework. The name Blazor comes from the words Browser and &lt;a href="https://en.wikipedia.org/wiki/ASP.NET_Razor"&gt;Razor&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
Blazor runs either server-side or client-side. With the server-side model it executes on a server and sends the rendered HTML and CSS to the browser. In the Blazor Server model, UI events are sent back to the server using SignalR, then the UI changes are sent to the client and merged into the DOM. The Blazor client-side model runs in the browser by utilising &lt;a href="https://blazor-university.com/overview/what-is-webassembly"&gt;WebAssembly&lt;/a&gt;. It does not require any plugin to be installed in the browser. Since WebAssembly is a web standard, it is supported on all major browsers and devices. Code running in the browser executes in the same security sandbox as JavaScript frameworks.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why use Blazor
&lt;/h3&gt;

&lt;p&gt;The first question you would ask when you hear about Blazor is: why Blazor, and why not another JavaScript framework like React or Angular? This is not an easy question to answer. JavaScript frameworks have been around for a while and have gained a lot of popularity and success. I’m not trying to convince you to choose Blazor over other UI frameworks here, but I’m going to list the features and capabilities that attract most people to Blazor and that might be good reasons for you to start using it too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.NET Experience&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;This might be the primary selling point of Blazor. If you are a .NET developer with little or no JavaScript experience, Blazor allows you to start developing web applications without having to learn a new technology. You can use the libraries or frameworks that you already use with your .NET projects, as long as they are compatible with &lt;a href="https://docs.microsoft.com/en-us/dotnet/standard/net-standard?tabs=net-standard-1-0"&gt;.NET Standard&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WebAssembly&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;WebAssembly (abbreviated &lt;em&gt;Wasm&lt;/em&gt;) is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications. In November 2017, Mozilla declared support in all major browsers after WebAssembly was enabled by default in Edge 16. The support includes mobile web browsers for iOS and Android.&lt;/p&gt;

&lt;p&gt;JavaScript is a high-level language, flexible and expressive enough to write web applications and has a huge ecosystem that provides powerful frameworks, libraries, and other tools, whereas WebAssembly is a low-level assembly-like language with a compact binary format that runs in the browser with near-native performance.&lt;/p&gt;

&lt;p&gt;Blazor supports JavaScript interoperability. Using Blazor does not mean “No JavaScript”. You can still call JavaScript functions from .NET apps and vice versa.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Reusability&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;With Blazor you are using .NET throughout your different application layers. Developers in your team can easily work with backend or frontend. This means that you can reuse code between your backend and frontend, use your common libraries across application layers and share API models. Think about utilities like string formatters, DateTime helpers, cryptography and a lot more. You can now have one library that you can use across all different projects - backend or frontend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server-side Rendering&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;One of the primary requirements that you would frequently be asked for when developing SEO friendly websites is server-side rendering. Server-side rendering is important when you build applications intended to be crawled by search engines as it allows bots to crawl your website contents without having to execute JavaScript code. By using Blazor, server-side rendering is included as standard.&lt;/p&gt;
&lt;h3&gt;
  
  
  Blazor WebAssembly vs. Blazor Server
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Blazor WebAssembly&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;In Blazor WebAssembly (or the client-side hosting model) the application runs directly in the browser on WebAssembly. The compiled application code, its dependencies and the .NET runtime are all downloaded to the browser, so the application needs some time at start-up to download and load these assemblies. On the other hand, the app remains functional if the server goes offline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SFnedodu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/first-blazor-app/wasm-hosting-model.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SFnedodu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/first-blazor-app/wasm-hosting-model.png" alt="Blazor WebAssembly Hosting Model" width="564" height="476"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Reference: &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/blazor/hosting-models"&gt;ASP.NET Core Blazor hosting models&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blazor Server&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;In Blazor Server (or the server hosting model) the application is executed on the server from within an ASP.NET Core application. A SignalR connection is established between the client and the server. When an event occurs on the client, such as a button click, the information about the event is sent to the server over the SignalR connection. The server handles the event and re-renders the HTML. The entire HTML is not sent back to the client; only the diff is sent over the established SignalR connection, and the browser then updates the UI.&lt;br&gt;&lt;br&gt;
Because the code runs on the server, the initial load is much faster than Blazor WebAssembly. The application can take advantage of server capabilities such as the full .NET Core APIs, and you can use existing tooling such as debugging. However, the main concern with this model is latency: every user interaction requires a network hop, which also means there is no offline support.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TS3W9Xw7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/first-blazor-app/server-hosting-model.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TS3W9Xw7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/first-blazor-app/server-hosting-model.png" alt="Blazor Server Hosting Model" width="554" height="350"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Reference: &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/blazor/hosting-models"&gt;ASP.NET Core Blazor hosting models&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create your first app
&lt;/h3&gt;

&lt;p&gt;In this article, we’ll go through the steps of how to create and deploy a Blazor WebAssembly app. There are two types of Blazor WebAssembly apps, stand-alone and hosted models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standalone&lt;/strong&gt; is suitable when you intend to deploy your app without an ASP.NET Core app to serve it, for example when hosting the app on an IIS server or Azure App Service (Windows).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hosted&lt;/strong&gt; means the app uses an ASP.NET Core server to host the client app, which gives you the flexibility to host it in Azure App Service (Windows or Linux) and allows you to share code between client and server apps.&lt;/p&gt;

&lt;p&gt;In this article, we’ll be using hosted Blazor WebAssembly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create the project&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Visual Studio Code&lt;/p&gt;

&lt;p&gt;From the command line, run the following:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;dotnet new blazorwasm --hosted -o FirstBlazorApp&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Visual Studio&lt;/p&gt;

&lt;p&gt;From Visual Studio, choose the Blazor WebAssembly App template and make sure the ASP.NET Core hosted option is selected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nzXsHKVT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/first-blazor-app/vs-create-blazor-app.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nzXsHKVT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/first-blazor-app/vs-create-blazor-app.png" alt="VS Create Blazor App" width="880" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now open the project in your favourite editor and you will get the following folder structure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JdcQ7q5s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/first-blazor-app/blazor-hosted-structure.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JdcQ7q5s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/first-blazor-app/blazor-hosted-structure.png" alt="Blazor Hosted Folder Structure" width="566" height="1296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will have three projects created in your solution.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shared&lt;/strong&gt;: a class library project referenced by both Client and Server. Its purpose is to hold all the code shared between the client and server, such as API models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client&lt;/strong&gt;: the Blazor WASM project. It has all the code required to run your Blazor client app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server&lt;/strong&gt;: an ASP.NET Core project. You can create your APIs in this project, and they will run in the same environment as your client app. This app also acts as the host server for your client app, which is enabled in the &lt;code&gt;Program.cs&lt;/code&gt; file by a single line:
&lt;code&gt;app.UseBlazorFrameworkFiles();&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Run the project
&lt;/h3&gt;

&lt;p&gt;To run the project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the Client folder (if you want to run the standalone client without the server) or the Server folder (if you want to run both server and client).&lt;/li&gt;
&lt;li&gt;Run this command to start the app &lt;code&gt;dotnet run&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;From the browser, navigate to the URL shown in the console output, for example &lt;a href="http://localhost:5173/"&gt;http://localhost:5173/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Walkthrough
&lt;/h3&gt;

&lt;p&gt;Under Client folder we have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;_Imports.razor&lt;/code&gt; contains all the namespaces imported and used by the project.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;App.razor&lt;/code&gt; is the root component of the app. It contains the &lt;code&gt;Router&lt;/code&gt; component, which loads pages inside the &lt;code&gt;RouteView&lt;/code&gt; component. If the URL is not found, the app displays the template in the &lt;code&gt;NotFound&lt;/code&gt; component.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Router AppAssembly="@typeof(App).Assembly"&amp;gt;
  &amp;lt;Found Context="routeData"&amp;gt;
    &amp;lt;RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" /&amp;gt;
    &amp;lt;FocusOnNavigate RouteData="@routeData" Selector="h1" /&amp;gt;
  &amp;lt;/Found&amp;gt;
  &amp;lt;NotFound&amp;gt;
    &amp;lt;PageTitle&amp;gt;Not found&amp;lt;/PageTitle&amp;gt;
    &amp;lt;LayoutView Layout="@typeof(MainLayout)"&amp;gt;
      &amp;lt;p role="alert"&amp;gt;Sorry, there's nothing at this address.&amp;lt;/p&amp;gt;
    &amp;lt;/LayoutView&amp;gt;
  &amp;lt;/NotFound&amp;gt;
&amp;lt;/Router&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;wwwroot&lt;/code&gt; has the client-side assets and styles. In Blazor WebAssembly &lt;code&gt;index.html&lt;/code&gt; is the main entry point to the application. You can add any CSS or JavaScript references to this page. You can also redesign how the loading page should look during the initial load.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
  &amp;lt;head&amp;gt;
    &amp;lt;meta charset="utf-8" /&amp;gt;
    &amp;lt;meta
      name="viewport"
      content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"
    /&amp;gt;
    &amp;lt;title&amp;gt;FirstBlazorApp&amp;lt;/title&amp;gt;
    &amp;lt;base href="/" /&amp;gt;
    &amp;lt;link href="css/bootstrap/bootstrap.min.css" rel="stylesheet" /&amp;gt;
    &amp;lt;link href="css/app.css" rel="stylesheet" /&amp;gt;
    &amp;lt;link href="FirstBlazorApp.Client.styles.css" rel="stylesheet" /&amp;gt;
  &amp;lt;/head&amp;gt;

  &amp;lt;body&amp;gt;
    &amp;lt;div id="app"&amp;gt;Loading...&amp;lt;/div&amp;gt;

    &amp;lt;div id="blazor-error-ui"&amp;gt;
      An unhandled error has occurred.
      &amp;lt;a href="" class="reload"&amp;gt;Reload&amp;lt;/a&amp;gt;
      &amp;lt;a class="dismiss"&amp;gt;🗙&amp;lt;/a&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;script src="_framework/blazor.webassembly.js"&amp;gt;&amp;lt;/script&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Shared&lt;/code&gt; Under Shared folder, you will find the &lt;code&gt;MainLayout.razor&lt;/code&gt; and all other shared components. MainLayout has been set as the DefaultLayout in the &lt;code&gt;App.razor&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Pages&lt;/code&gt; Under pages, you will find the components which will be rendered inside the MainLayout. In the page component, there are three main pieces you need to understand: 

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;@page&lt;/code&gt; directive, which defines the route URL for the current page&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PageTitle&lt;/code&gt; overrides the application page title.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@code&lt;/code&gt; block, which allows you to write your code in the same file as the Razor template.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@page "/counter"

&amp;lt;PageTitle&amp;gt;Counter&amp;lt;/PageTitle&amp;gt;

&amp;lt;h1&amp;gt;Counter&amp;lt;/h1&amp;gt;

&amp;lt;p role="status"&amp;gt;Current count: @currentCount&amp;lt;/p&amp;gt;

&amp;lt;button class="btn btn-primary" @onclick="IncrementCount"&amp;gt;Click me&amp;lt;/button&amp;gt;

@code { 
    private int currentCount = 0; 

    private void IncrementCount() 
    {
        currentCount++; 
    } 
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Code-behind
&lt;/h3&gt;

&lt;p&gt;As you can see in the previous example, both the HTML markup and the C# code exist in the same file, which makes it simple to build components in Blazor. However, as the application grows and requirements get bigger and more complex, splitting the code into a code-behind file might be a good idea to keep the code clean and easy to maintain.&lt;/p&gt;

&lt;p&gt;Splitting Blazor’s component code into a separate file is easy as all &lt;code&gt;.razor&lt;/code&gt; files become classes when the project is compiled. For example, the code inside the &lt;code&gt;Counter.razor&lt;/code&gt; file gets extracted into a class called &lt;code&gt;Counter&lt;/code&gt; when compiled.&lt;/p&gt;

&lt;p&gt;There are two ways to move the code inside a component into a separate class:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Partial Class&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;To create a partial class for the Counter component, create a new class file, call it &lt;code&gt;Counter.razor.cs&lt;/code&gt;, then add the following code to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace FirstBlazorApp.Client.Pages;
public partial class Counter
{
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can remove the code block from &lt;code&gt;Counter.razor&lt;/code&gt;, build the project, and you’ll find that it builds successfully with no errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inheriting ComponentBase class&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Instead of creating a partial class for your components, you can create a class that inherits from &lt;code&gt;ComponentBase&lt;/code&gt; class. You can then remove the code block from the &lt;code&gt;.razor&lt;/code&gt; file and use the &lt;code&gt;@inherits&lt;/code&gt; directive to inherit from the class you have just created.&lt;br&gt;&lt;br&gt;
Note that with this method you need to change the access modifiers of the fields and methods from &lt;code&gt;private&lt;/code&gt; to &lt;code&gt;protected&lt;/code&gt; so they can be accessed from the &lt;code&gt;.razor&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;Because the class inherits from &lt;code&gt;ComponentBase&lt;/code&gt;, another benefit you get is that you have access to the component’s lifecycle methods such as &lt;code&gt;OnInitializedAsync()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can find more details about component lifecycle in the following article: &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/blazor/components/lifecycle?view=aspnetcore-6.0"&gt;ASP.NET Core Razor component lifecycle&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Counter.razor.cs&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.AspNetCore.Components;
namespace FirstBlazorApp.Client.Pages;
public class CounterComponent : ComponentBase
{
    protected int currentCount = 0;

    protected void IncrementCount()
    {
        currentCount++;
    }

    protected override Task OnInitializedAsync()
    {
        return base.OnInitializedAsync();
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Counter.razor&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@page "/counter"
@inherits CounterComponent

&amp;lt;PageTitle&amp;gt;Counter&amp;lt;/PageTitle&amp;gt;

&amp;lt;h1&amp;gt;Counter&amp;lt;/h1&amp;gt;

&amp;lt;p role="status"&amp;gt;Current count: @currentCount&amp;lt;/p&amp;gt;

&amp;lt;button class="btn btn-primary" @onclick="IncrementCount"&amp;gt;Click me&amp;lt;/button&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Scoped styles
&lt;/h3&gt;

&lt;p&gt;Blazor supports scoping CSS to Razor components, which can simplify CSS and avoid collisions with other components or libraries.&lt;/p&gt;

&lt;p&gt;Enabling CSS isolation is simple: create a &lt;code&gt;.razor.css&lt;/code&gt; file matching the name of the &lt;code&gt;.razor&lt;/code&gt; component file, in the same folder. For example, to create isolated CSS for the &lt;code&gt;Counter&lt;/code&gt; component, create a file called &lt;code&gt;Counter.razor.css&lt;/code&gt; in the same folder as the &lt;code&gt;Counter.razor&lt;/code&gt; file.&lt;br&gt;&lt;br&gt;
The styles defined in &lt;code&gt;Counter.razor.css&lt;/code&gt; are only applied to the rendered HTML of the &lt;code&gt;Counter&lt;/code&gt; component.&lt;/p&gt;
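
&lt;p&gt;As an illustrative sketch (the selectors and values here are arbitrary, not from the project template), &lt;code&gt;Counter.razor.css&lt;/code&gt; could contain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Only affects the h1 rendered by the Counter component */
h1 {
    color: steelblue;
}

/* Only affects buttons rendered by the Counter component */
button {
    font-weight: bold;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;At build time, Blazor rewrites these selectors to include a component-specific attribute, so they match only the HTML rendered by the &lt;code&gt;Counter&lt;/code&gt; component.&lt;/p&gt;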

&lt;p&gt;For more details about Blazor CSS isolation see the following article:&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/aspnet/core/blazor/components/css-isolation?view=aspnetcore-6.0"&gt;ASP.NET Core Blazor CSS isolation&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Writing unit tests
&lt;/h3&gt;

&lt;p&gt;In unit tests, only the Razor component (Razor/C#) is involved. External dependencies, such as services and JS interop, must be mocked.&lt;br&gt;&lt;br&gt;
There’s no official Microsoft testing framework for Blazor, but Microsoft &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/blazor/test?view=aspnetcore-6.0#test-components-with-bunit"&gt;recommends&lt;/a&gt; using &lt;code&gt;bUnit&lt;/code&gt;, which provides a simple and easy way to unit test Razor components.&lt;/p&gt;

&lt;p&gt;The good news is that &lt;code&gt;bUnit&lt;/code&gt; works with common testing frameworks, such as &lt;code&gt;MSTest&lt;/code&gt;, &lt;code&gt;NUnit&lt;/code&gt;, and &lt;code&gt;xUnit&lt;/code&gt;, which makes bUnit tests feel like regular unit tests.&lt;/p&gt;

&lt;p&gt;Let’s go through the steps to start writing unit tests with bUnit using the xUnit testing framework:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an xUnit test project by running:
&lt;code&gt;dotnet new xunit -o FirstBlazorAppTests&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Under FirstBlazorAppTests, add a reference to the FirstBlazorApp.Client project:
&lt;code&gt;dotnet add reference ../Client/FirstBlazorApp.Client.csproj&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add the following NuGet packages:
&lt;code&gt;dotnet add package bunit.web&lt;/code&gt;
&lt;code&gt;dotnet add package bunit.xunit&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Now you can start writing unit tests as you would with any .NET project.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The following is an example of a test class for the &lt;code&gt;Counter&lt;/code&gt; component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Bunit;
using Xunit;
using FirstBlazorApp.Client.Pages;

namespace FirstBlazorApp.Tests;

public class CounterComponentTests
{
    [Fact]
    public void CounterComponentTest()
    {
        // Arrange: Render the Counter component
        using var ctx = new TestContext();
        var cut = ctx.RenderComponent&amp;lt;Counter&amp;gt;();

        // Act: Find and click the &amp;lt;button&amp;gt; element
        cut.Find("button").Click();

        // Assert: Find the &amp;lt;p&amp;gt; element, then verify its content
        cut.Find("p").MarkupMatches(@"&amp;lt;p role=""status""&amp;gt;Current count: 1&amp;lt;/p&amp;gt;");
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more details about bUnit see the official bUnit documentation:&lt;br&gt;&lt;br&gt;
&lt;a href="https://bunit.dev/docs/getting-started/writing-tests.html?tabs=xunit"&gt;Testing Blazor components&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy the Blazor app
&lt;/h3&gt;

&lt;p&gt;In the Blazor Server model, a web server capable of hosting an ASP.NET Core app is required in order to run your Blazor app. In Blazor WebAssembly, however, there are two deployment strategies:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Standalone deployment&lt;/strong&gt;
The app is hosted on a static web server where .NET is not used to serve the app. Any static file server can host the Blazor app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hosted deployment&lt;/strong&gt;
The app is served by an ASP.NET Core app that runs on a web server. As you can see in the previous example, we have two main projects (Client and Server). The Client project produces the static files required to run the app, whereas the Server project contains the ASP.NET Core app that acts as a small server hosting the client app.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Deploy to Azure App Service&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;You can publish and deploy the Blazor app directly from VS Code which is a convenient and efficient way to deploy Blazor apps during the development process. However, for production, you should always consider using Continuous Deployment to deploy your apps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Deploy from VS Code&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s go through the steps to deploy the FirstBlazorApp we have just created to Azure App Service using VS Code.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Azure App Service extension and configure it: &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice"&gt;Azure App Service VS Code Extension&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Publish the app. When you deploy Blazor WASM to Azure App Service on Windows you can use either strategy (Standalone or Hosted); however, if you wish to host the app on Azure App Service on Linux you won’t be able to use the Standalone strategy. 

&lt;ol&gt;
&lt;li&gt;Standalone deployment (Windows):
Publish the Client app to generate the deployment package:
&lt;code&gt;dotnet publish Client -c Release -o ./publish&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Hosted deployment (Windows or Linux):
&lt;code&gt;dotnet publish Server -c Release -r linux-x64 -o ./publish&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Now right-click the &lt;code&gt;publish&lt;/code&gt; folder that has just been created and select &lt;code&gt;Deploy to Web App…&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Deploy from Azure DevOps&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, we’ll go through the steps to configure Azure DevOps Pipelines to build, test and deploy our FirstBlazorApp to Azure App Service:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, push your code to a new repository.&lt;/li&gt;
&lt;li&gt;From the Azure Portal create an Azure App Service and set the runtime stack to &lt;code&gt;.NET 6&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Go to Azure DevOps and create a new pipeline.&lt;/li&gt;
&lt;li&gt;Select your code repo and proceed to the Review step.&lt;/li&gt;
&lt;li&gt;Go to &lt;code&gt;Project Settings &amp;gt; Service connections&lt;/code&gt; and create a new service connection: select the &lt;code&gt;Azure Resource Manager&lt;/code&gt; type, choose &lt;code&gt;Service Principal (automatic)&lt;/code&gt;, and set the scope level to subscription.&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;Variables&lt;/code&gt; and add the required variables:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;AzureServiceConnection&lt;/code&gt;: the name of the service connection created in the previous step.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ResourceGroup&lt;/code&gt;: the name of the resource group where your App Service is created.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WebAppName&lt;/code&gt;: the name of the App Service to deploy the app to.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Paste the following YAML then click &lt;code&gt;Save and Run&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trigger:
  - main

pool:
  vmImage: ubuntu-20.04

variables:
  buildConfiguration: "Release"
  dotNetVersion: "6.0.x"
  location: "australiaeast"
  dotNetFramework: "net6.0"
  webAppDir: "$(Build.SourcesDirectory)/Server"
  webAppTestDir: "$(Build.SourcesDirectory)/Tests"

steps:
  - task: UseDotNet@2
    displayName: Use .NET 6.0
    inputs:
      packageType: "sdk"
      version: "6.0.x"

  - script: dotnet build --runtime linux-x64 --configuration $(buildConfiguration)
    workingDirectory: "$(webAppDir)"
    displayName: "Building App..."

  - task: DotNetCoreCLI@2
    displayName: "Testing App..."
    inputs:
      command: test
      projects: "$(webAppTestDir)/*.csproj"
      arguments: "--configuration $(buildConfiguration)"
      publishTestResults: true

  - task: DotNetCoreCLI@2
    displayName: "Publishing App..."
    inputs:
      command: publish
      workingDirectory: "$(webAppDir)"
      publishWebProjects: false
      zipAfterPublish: True
      arguments: "--runtime linux-x64 --configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)"

  - task: AzureRmWebAppDeployment@4
    displayName: "Azure App Service Deploy"
    inputs:
      azureSubscription: "$(AzureServiceConnection)"
      ResourceGroupName: "$(ResourceGroup)"
      appType: webAppLinux
      WebAppName: "$(WebAppName)"
      Package: "$(Build.ArtifactStagingDirectory)/**/*.zip"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://docs.microsoft.com/en-us/aspnet/core/blazor/?view=aspnetcore-6.0"&gt;ASP.NET Core Blazor&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://bunit.dev"&gt;bUnit&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.mozilla.org/en-US/docs/WebAssembly"&gt;WebAssembly&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Single-page_application"&gt;Single-page application&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/ASP.NET_Razor"&gt;ASP.NET Razor&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.microsoft.com/en-us/aspnet/core/blazor/host-and-deploy/webassembly?view=aspnetcore-6.0"&gt;Host and deploy ASP.NET Core Blazor WebAssembly&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>blazor</category>
      <category>webassembly</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Managing the Azure Virtual Desktop Gold Image - Virtual Desktop Series Post 1</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Mon, 28 Mar 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/managing-the-azure-virtual-desktop-gold-image-virtual-desktop-series-post-1-5h52</link>
      <guid>https://dev.to/lacjamm/managing-the-azure-virtual-desktop-gold-image-virtual-desktop-series-post-1-5h52</guid>
      <description>&lt;p&gt;Azure Virtual Desktop is a Desktop-as-a-Service solution provided by Microsoft running on Azure. The Azure Virtual Desktop (AVD) session management control plane is entirely managed by Microsoft. As this is often the most complex aspect of implementing a Virtual Desktop Infrastructure (VDI) solution, AVD dramatically lowers the technical and financial barriers to entry for those who are on the fence about investing in a VDI solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MVWBpf_1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/avd-gold-image/vdi-vs-avd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MVWBpf_1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/avd-gold-image/vdi-vs-avd.png" alt="VDI vs AVD - Who manages what?" title="VDI vs AVD - Who manages what?" width="880" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Typically, solution architects and IT administrators begin planning a VDI deployment working out how they will network, configure and deploy all of the infrastructure and services in order to implement a functional solution. AVD enables IT administrators to create a functional host pool from a Virtual Machine (VM) image from a single wizard. This is a huge time and cost saving, however there is still one technical aspect of an AVD deployment that needs to be considered prior to deploying a host pool: the Gold Image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VDnnlSyC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/avd-gold-image/azure-virtual-desktop-highlight.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VDnnlSyC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/avd-gold-image/azure-virtual-desktop-highlight.png" alt="The AVD Gold Image is in your subscription" title="The AVD Gold Image is in your subscription" width="880" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a Gold Image?
&lt;/h3&gt;

&lt;p&gt;A VDI Gold Image contains all software and profile-management pre-configuration needed to deploy several identical VMs for users to log into. In this article I will describe the process of creating a VM to be used as a Gold Image, capturing that VM to an image stored in a Compute Gallery, and finally referencing the new Gold Image from the Host Pool configuration wizard.&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure Compute Gallery
&lt;/h3&gt;

&lt;p&gt;The first step is to create the Azure object that can hold VM Images. While these can be stored in a Storage Account, it is highly recommended to make use of the Azure Compute Gallery. The Azure Compute Gallery lets you share custom VM images and application packages with others in an organization, within or across regions, or within an AAD tenant. Images in the Compute Gallery are version controlled. Image replicas (single or cross-region) reduce throttling during VM creation where multiple simultaneous deployments or cross-region deployments would overload a single replica. The Compute Gallery is free to create, however storage costs of each image replica, and for network egress to replicate the image to other regions are chargeable. Source: &lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/shared-image-galleries"&gt;Store and share images in an Azure Compute Gallery&lt;/a&gt;&lt;/p&gt;
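
<p></p>&lt;p&gt;As a quick sketch (reusing the resource names that appear in the samples later in this post), a Compute Gallery can be created with the Azure CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Create the resource group and the Compute Gallery that will hold the Gold Image versions
az group create --name "AVD-DEMO" --location "australiaeast"
az sig create --resource-group "AVD-DEMO" --gallery-name "AVD_GALLERY"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;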

&lt;p&gt;There are limits, per subscription, for deploying resources using Azure Compute Galleries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;100 galleries&lt;/strong&gt;, per subscription, per region&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1,000 image definitions&lt;/strong&gt;, per subscription, per region&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10,000 image versions&lt;/strong&gt;, per subscription, per region&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10 image version replicas&lt;/strong&gt;, per subscription, per region&lt;/li&gt;
&lt;li&gt;Any disk attached to the image must be &lt;strong&gt;less than or equal to 1TB&lt;/strong&gt; in size&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Source: &lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/shared-image-galleries#limits"&gt;Store and share images in an Azure Compute Gallery - Limits&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Gold Image VM
&lt;/h3&gt;

&lt;p&gt;Secondly, create a VM in an Azure Resource Group; this VM will be converted into the Gold Image. The type of AVD deployment being considered determines which Azure Marketplace image is used to create the Gold Image VM.&lt;/p&gt;

&lt;p&gt;The key consideration is the difference between Personal and Pooled Desktops:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Personal Desktops&lt;/strong&gt; - A VM allocated per user&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ideal for &lt;strong&gt;single-session&lt;/strong&gt; users with &lt;strong&gt;heavy performance requirements&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose the right VM to run robust apps like CAD, SAP and others&lt;/li&gt;
&lt;li&gt;Always-on experience and single state retention&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pooled Desktops&lt;/strong&gt; - Multiple users allocated to the same VM&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ideal for users with &lt;strong&gt;light to medium&lt;/strong&gt; workloads with basic business requirements&lt;/li&gt;
&lt;li&gt;Choose the right VM to run most business apps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pooled Desktops allow multiple users to share a single VM’s resources. A Pooled Desktop Gold Image will need to use the Multi-Session Windows 10 Marketplace image. For high-performance users on Personal Desktops with dedicated VM resources, the standard Windows 10 Marketplace image can be used.&lt;/p&gt;

&lt;p&gt;See below for an ARM Template excerpt for creating a Multi-Session Windows 10 VM to use as your Gold Image. Note the imageReference SKU below, &lt;strong&gt;“win10-21h2-avd-g2”&lt;/strong&gt;. For personal desktops, use the &lt;strong&gt;“21h1-ent-g2”&lt;/strong&gt; SKU instead. If you use Generation 2 VM images or Accelerated Networking, ensure those parameters are reflected across all script blocks in this post.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "name": "[parameters('virtualMachineName')]",
    "type": "Microsoft.Compute/virtualMachines",
    "apiVersion": "2021-07-01",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[concat('Microsoft.Network/networkInterfaces/', parameters('networkInterfaceName'))]"
    ],
    "properties": {
        "hardwareProfile": {
            "vmSize": "[parameters('virtualMachineSize')]"
        },
        "storageProfile": {
            "osDisk": {
                "createOption": "fromImage",
                "managedDisk": {
                    "storageAccountType": "[parameters('osDiskType')]"
                },
                "deleteOption": "[parameters('osDiskDeleteOption')]"
            },
            "imageReference": {
                "publisher": "MicrosoftWindowsDesktop",
                "offer": "Windows-10",
                "sku": "win10-21h2-avd-g2",
                "version": "latest"
            }
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to log into the VM and run updates, install corporate software, configure licensing, and set up your profile management solution (FSLogix configuration for AppMasking and Profile Management will be covered in a future post). Tools such as SCCM and Azure Automation can be used to deploy software to the VM in an automated fashion, and Azure DevOps Pipelines can be used to trigger an Automation runbook for automated software deployment. Once ready, shut down and deallocate the VM.&lt;/p&gt;
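
&lt;p&gt;As a lightweight alternative to SCCM or Automation, software can also be pushed to the VM with &lt;code&gt;az vm run-command&lt;/code&gt; (the install-script path below is a hypothetical placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Run a software-install script on the Gold Image VM (script path is a placeholder)
az vm run-command invoke --command-id "RunPowerShellScript" --name "AVD_GOLD_IMAGE" -g "AVD-DEMO" --scripts "@C:\path\to\install-apps.ps1"

#Shut down and deallocate the VM once it is ready
az vm deallocate -g "AVD-DEMO" -n "AVD_GOLD_IMAGE"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;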

&lt;h3&gt;
  
  
  Prepare the Gold Image VM
&lt;/h3&gt;

&lt;p&gt;When an image is created using PowerShell, the &lt;strong&gt;Operating system state&lt;/strong&gt; can be set to either &lt;strong&gt;Generalized&lt;/strong&gt; or &lt;strong&gt;Specialized&lt;/strong&gt;. Generalizing clears the hostname, user and osProfile configuration on the VM. This means that when multiple VMs are created from the one image, their admin user and hostname are configured during deployment, which is what we need for an AVD Gold Image. After generalizing a VM, the VM becomes unusable. Before generalizing, you can take a &lt;strong&gt;Snapshot&lt;/strong&gt; of the VM or take a &lt;strong&gt;backup using a Recovery Services Vault&lt;/strong&gt;. You can then restore the VM from the Snapshot or backup to make edits to your Gold Image for a future image version. If you joined your Gold Image VM to a domain, you should &lt;strong&gt;remove it from the domain before generalizing&lt;/strong&gt;. Create a PowerShell .ps1 file with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;If (Test-Path -Path C:\Windows\Panther) {
  Remove-Item C:\Windows\Panther -Recurse -Force
}
cmd.exe /c "start /b %WINDIR%\system32\sysprep\sysprep.exe /generalize /shutdown /oobe"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following Azure CLI script takes a Snapshot of the VM and then generalizes it. The PowerShell script you have just created is referenced by the &lt;strong&gt;--scripts&lt;/strong&gt; parameter below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Set the subscription Id
az account set --subscription $subscriptionId

#Get the disk Id
$diskId=$(az disk show --name "AVD_GOLD_IMAGE_0_0_1" --resource-group "AVD-DEMO" --query [id] -o tsv)

#Create the VM Snapshot
az snapshot create -g "AVD-DEMO" -n "AVD_GOLD_IMAGE_0_0_1" --source $diskId --hyper-v-generation "v2" --sku "Premium_LRS"

#Generalize the VM before creating an Image
az vm run-command invoke --command-id "RunPowerShellScript" --name "AVD_GOLD_IMAGE" -g "AVD-DEMO" --scripts "@C:\path\to\script.ps1"

#Deallocate VM
az vm deallocate -g "AVD-DEMO" -n "AVD_GOLD_IMAGE"

#Mark VM as Generalized
az vm generalize -g "AVD-DEMO" -n "AVD_GOLD_IMAGE"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Restore the Gold Image VM for image versioning
&lt;/h3&gt;

&lt;p&gt;Use the following Azure CLI script to create a new Gold Image VM from the snapshot when you need to update your Gold Image. Ensure that the &lt;strong&gt;--size-gb&lt;/strong&gt; parameter is the same size as or greater than your Gold Image disk.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Set the subscription Id
az account set --subscription $subscriptionId

#Get the snapshot Id
$snapshotId=$(az snapshot show --name "AVD_GOLD_IMAGE_0_0_1" --resource-group "AVD-DEMO" --query [id] -o tsv)

#Create a new Managed Disk using the snapshot Id - MAKE SURE SIZE MATCHES SNAPSHOT
az disk create --resource-group "AVD-DEMO" --name "AVD_GOLD_IMAGE_OsDisk_1" --sku "Premium_LRS" --size-gb 128 --hyper-v-generation "v2" --source $snapshotId 

#Create a VM by attaching the managed disk just created as its OS disk
az vm create --name "AVD_GOLD_IMAGE" --resource-group "AVD-DEMO" --attach-os-disk "AVD_GOLD_IMAGE_OsDisk_1" --os-type "Windows"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Capture the Gold Image VM to Compute Gallery
&lt;/h3&gt;

&lt;p&gt;The final step is to create a Shared Image stored in the Compute Gallery. The information required for creating an image is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The name of the Image&lt;/li&gt;
&lt;li&gt;The name of the Compute Gallery&lt;/li&gt;
&lt;li&gt;The name of the Resource Group the Compute Gallery resides in&lt;/li&gt;
&lt;li&gt;The resource ID being used to create the image&lt;/li&gt;
&lt;li&gt;The OS Type (Windows or Linux)&lt;/li&gt;
&lt;li&gt;The OS State for the image (Specialized or Generalized)&lt;/li&gt;
&lt;li&gt;The user-defined version number for the image&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best practice version number convention for images is &lt;strong&gt;MajorVersion.MinorVersion.Patch&lt;/strong&gt;. A VM image can also be replicated to multiple regions for redundancy and faster provisioning of desktops in that region.&lt;/p&gt;

&lt;p&gt;See below for an Azure CLI sample for creating a Compute Gallery image from a VM. Be sure to set &lt;strong&gt;--hyper-v-generation&lt;/strong&gt; to V2 for Generation 2 VMs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az sig image-definition create --resource-group "AVD-DEMO" \
--gallery-name "AVD_GALLERY" --gallery-image-definition "AVD_GOLD_IMAGE" \
--publisher "MicrosoftWindowsDesktop" --offer "Windows-10" --sku "win10-21h2-avd-g2" \
--os-type "Windows" --os-state "Generalized" --hyper-v-generation V2

az sig image-version create --resource-group "AVD-DEMO" \
--gallery-name "AVD_GALLERY" --gallery-image-definition "AVD_GOLD_IMAGE" \
--gallery-image-version 0.0.1 \
--managed-image "/subscriptions/xxx/resourceGroups/AVD-DEMO/providers/Microsoft.Compute/virtualMachines/AVD_GOLD_IMAGE" \
--target-regions "australiaeast=premium_lrs australiasoutheast=premium_lrs" --storage-account-type "Premium_LRS"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See below for a Terraform sample for creating a Compute Gallery image from a VM. Use &lt;strong&gt;terraform import&lt;/strong&gt; to import the Resource Group, Virtual Network, Subnet, Network Interface and Virtual Machine if they have already been created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_shared_image_gallery" "example" {
  name = "AVD_GALLERY"
  resource_group_name = azurerm_resource_group.example.name
  location = azurerm_resource_group.example.location
}

resource "azurerm_shared_image" "example" {
  name = "AVD-GOLD-IMAGE"
  gallery_name = azurerm_shared_image_gallery.example.name
  resource_group_name = azurerm_resource_group.example.name
  location = azurerm_resource_group.example.location
  specialized = false
  os_type = "Windows"
  hyper_v_generation = "V2"

  identifier {
    publisher = "MicrosoftWindowsDesktop"
    offer = "Windows-10"
    sku = "win10-21h2-avd-g2"
  }
}

resource "azurerm_shared_image_version" "example" {
  name = "0.0.1"
  gallery_name = azurerm_shared_image_gallery.example.name
  image_name = azurerm_shared_image.example.name
  resource_group_name = azurerm_resource_group.example.name
  location = azurerm_resource_group.example.location
  managed_image_id = azurerm_virtual_machine.example.id

  target_region {
    name = azurerm_resource_group.example.location
    regional_replica_count = 1
    storage_account_type = "Standard_LRS"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, incorporate the Compute Gallery image into the &lt;strong&gt;Host Pool ARM template&lt;/strong&gt;. See below for the relevant configuration changes to the &lt;strong&gt;sessionHostConfigurationImageCustomInfoProps&lt;/strong&gt; variable and the &lt;strong&gt;imageInfo&lt;/strong&gt; section within the &lt;strong&gt;sessionHostConfigurations&lt;/strong&gt; section of the Host Pool ARM template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"sessionHostConfigurationImageCustomInfoProps": {
    "resourceId": "/subscriptions/xxx/resourceGroups/AVD-DEMO/providers/Microsoft.Compute/galleries/AVD_GALLERY/images/AVD-GOLD-IMAGE/versions/0.0.1"
}


{
    "name": "default",
    "apiVersion": "[parameters('apiVersion')]",
    "type": "sessionHostConfigurations",
    "dependsOn": [
        "[resourceId('Microsoft.DesktopVirtualization/hostpools/', parameters('hostpoolName'))]"
    ],
    "properties": {
        "vmSizeId": "[parameters('vmSize')]",
        "diskInfo": {
            "type": "[parameters('vmDiskType')]"
        },
        "customConfigurationTemplateUrl": "",
        "customConfigurationParameterUrl": "",
        "imageInfo": {
            "type": "CustomImage",
            "marketPlaceInfo": "",
            "customInfo": "[variables('sessionHostConfigurationImageCustomInfoProps')]"
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Gold Image, with all preconfigured software, will now be used to create the session hosts! Gold Images created in this way can also be used for Windows 365 Enterprise Cloud PC.&lt;/p&gt;

&lt;p&gt;In future posts, I will cover FsLogix Profile Management and creating Host Pools and Application Groups using Azure DevOps.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>virtualdesktop</category>
      <category>avd</category>
    </item>
    <item>
      <title>How to enable sensitivity labels for containers</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Mon, 07 Feb 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/how-to-enable-sensitivity-labels-for-containers-26d2</link>
      <guid>https://dev.to/lacjamm/how-to-enable-sensitivity-labels-for-containers-26d2</guid>
      <description>&lt;p&gt;In this day and age, users have to collaborate with others both inside and outside the organization to achieve their daily tasks. This can present challenges around privacy, access and external sharing as the content no longer stays on the local network, and is likely being shared with guests. When this happens, you want it to do so in a secure, protected way that is within your organization’s risk appetite.&lt;/p&gt;

&lt;p&gt;The Microsoft Information Protection (MIP) framework lets you discover, classify, protect and monitor your organization’s data across all endpoints, applications and services using predefined sensitivity labels, while making sure that user productivity and the ability to collaborate aren’t hindered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WfYUBM5F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/sensitivity-labels-for-containers/MIP.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WfYUBM5F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/sensitivity-labels-for-containers/MIP.png" alt="MIP" title="MIP" width="880" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most organisations are familiar with sensitivity labels being applied to documents and emails; however, their functionality can also be extended to container-level classification and protection. A container is your typical Microsoft Teams site, Microsoft 365 group or SharePoint site.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Opbm5QbB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/sensitivity-labels-for-containers/define-scope.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Opbm5QbB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/sensitivity-labels-for-containers/define-scope.png" alt="Define scope" title="Define scope" width="880" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By enabling this feature in your Azure AD organization, the following label configurations will be available to you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Privacy (public or private) of teams sites and Microsoft 365 groups&lt;/li&gt;
&lt;li&gt;External user access&lt;/li&gt;
&lt;li&gt;External sharing from SharePoint sites&lt;/li&gt;
&lt;li&gt;Access from unmanaged devices&lt;/li&gt;
&lt;li&gt;Authentication contexts (in preview)&lt;/li&gt;
&lt;li&gt;Default sharing link for a SharePoint site (PowerShell-only configuration)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This not only enriches the MIP reporting capability but, more importantly, adjusts the relevant tenant settings, allowing for more granular control. The content in these containers, however, won’t inherit the label’s classification or the settings used for emails and documents, such as visual markings and encryption.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DU60hKFM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/sensitivity-labels-for-containers/define-protection.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DU60hKFM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/sensitivity-labels-for-containers/define-protection.png" alt="Define protection" title="Define protection" width="880" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nHvewxXh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/sensitivity-labels-for-containers/define-privacy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nHvewxXh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/sensitivity-labels-for-containers/define-privacy.png" alt="Define privacy" title="Define privacy" width="880" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UxySoPm_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/sensitivity-labels-for-containers/define-sharing.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UxySoPm_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/sensitivity-labels-for-containers/define-sharing.png" alt="Define sharing" title="Define sharing" width="880" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To configure the sensitivity labelling for containers, the following prerequisites must be met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Active Azure Active Directory Premium P1 licensing&lt;/li&gt;
&lt;li&gt;Global administrator role to run the below PowerShell
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Enable sensitivity label support
Install-Module AzureADPreview
Import-Module AzureADPreview
Connect-AzureAD
#Fetch the current group settings for the Azure AD organization
$setting = (Get-AzureADDirectorySetting | where -Property DisplayName -Value "Group.Unified" -EQ)
$template = Get-AzureADDirectorySettingTemplate -Id 62375ab9-6b52-47ed-826b-58e47e0e304b
$setting = $template.CreateDirectorySetting()
#Enable the feature
$Setting["EnableMIPLabels"] = "True"
#Check the new applied value
$Setting.Values
![Check values](/assets/images/sensitivity-labels-for-containers/ps-values.png "Check values")
#Create settings at the directory level
New-AzureADDirectorySetting -DirectorySetting $Setting
$Setting.Values
$Setting = Get-AzureADDirectorySetting | ? { $_.DisplayName -eq "Group.Unified"}
Set-AzureADDirectorySetting -Id $Setting.Id -DirectorySetting $Setting
#Enable sensitivity labels for containers and synchronize labels
Install-Module ExchangeOnlineManagement
Connect-IPPSSession -UserPrincipalName
Execute-AzureAdLabelSync

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Microsoft Information Protection is a powerful framework that is always evolving to reflect your organization’s needs around classification and protection of the sensitive data created and shared by your users daily. To help with the underlying privacy, external user access and sharing challenges, one can enable sensitivity labelling for containers (groups &amp;amp; sites). When such a label is applied to a supported container, it automatically applies the classification and protection settings to the site or group and adjusts the relevant tenant settings, allowing for more granular control. Microsoft provides a large amount of &lt;a href="https://docs.microsoft.com/en-us/microsoft-365/compliance/sensitivity-labels-teams-groups-sites?view=o365-worldwide#how-to-enable-sensitivity-labels-for-containers-and-synchronize-labels"&gt;detailed information&lt;/a&gt; on how to &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/groups-assign-sensitivity-labels#enable-sensitivity-label-support-in-powershell"&gt;enable sensitivity labels for containers via PowerShell&lt;/a&gt;, which I have used to help guide this article.&lt;/p&gt;

</description>
      <category>mip</category>
      <category>sensitivity</category>
      <category>labels</category>
    </item>
    <item>
      <title>Taking Azure AD B2C ‘Seamless Migration’ for a spin</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Mon, 07 Feb 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/taking-azure-ad-b2c-seamless-migration-for-a-spin-4ja6</link>
      <guid>https://dev.to/lacjamm/taking-azure-ad-b2c-seamless-migration-for-a-spin-4ja6</guid>
      <description>&lt;p&gt;For many organisations, cloud is not simply heading off to fresh pastures, but instead entails complex migrations out of ‘on-prem’ data centres.&lt;/p&gt;

&lt;p&gt;A core part of this migration, especially for customer-facing organisations, is moving users. Good customer experience demands a frictionless approach, à la ‘Seamless Migration’, but migrating user passwords that you (hopefully!) do not have in plain text, or do not know the hashing algorithm for, presents a challenge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure AD B2C
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/active-directory-b2c/overview"&gt;Azure AD B2C&lt;/a&gt; is Azure’s Identity as a Service (IDaaS) customer identity access management service.&lt;/p&gt;

&lt;p&gt;AD B2C uses standards-based authentication protocols including OpenID Connect and OAuth 2.0. By serving as the central authentication authority for your web applications, mobile apps, and APIs, Azure AD B2C enables you to build a single sign-on (SSO) solution for them all.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cUMT7FQ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/seamless-migration/azureadb2c-overview.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cUMT7FQ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/seamless-migration/azureadb2c-overview.png" alt="AD B2C Overview" title="AD B2C Overview" width="880" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Its main competitors in the IDaaS space are Okta, probably the leader in the space (at least according to Gartner) but more expensive, and Ping. &lt;a href="https://www.okta.com/resources/whitepaper/not-all-identity-clouds-are-created-equal/"&gt;This Okta blog post&lt;/a&gt;, comparing themselves to Azure AD B2C, is insightful, but should of course be read with a grain of salt.&lt;/p&gt;

&lt;p&gt;There are a variety of open-source options in the self-hosted space, including Duende IdentityServer which is popular in the .NET ecosystem.&lt;/p&gt;

&lt;p&gt;Importantly however, a lot of cloud migrations have core principles of utilising managed services, and of leveraging as much of their chosen cloud vendor as possible, so for anyone leaning towards Azure it makes sense to start by evaluating AD B2C.&lt;/p&gt;

&lt;h2&gt;
  
  
  Seamless Migration
&lt;/h2&gt;

&lt;p&gt;The AD B2C docs do have &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory-b2c/user-migration"&gt;a page&lt;/a&gt; on migrating users to AD B2C, which discusses Seamless Migration.&lt;/p&gt;

&lt;p&gt;There are two main components - first, an initial ‘pre-migration’ of all user accounts into AD B2C is performed. This is followed by creating a callback to your existing identity solution to validate a user’s password.&lt;/p&gt;

&lt;p&gt;Then, using an AD B2C ‘custom policy’, each time a user logs in a call is triggered to the legacy system. If it confirms the password is correct, AD B2C stores the password, marks the user as migrated, and no longer needs to call back for that user.&lt;/p&gt;

&lt;p&gt;If your legacy solution is not accessible via an API call then you are out of luck.&lt;/p&gt;
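&lt;p&gt;The per-login decision above can be sketched as plain logic. This is only an illustrative sketch with hypothetical function and property names (trySignIn, requiresMigration, storedPassword); the real implementation is an AD B2C custom policy invoking a REST API, not application code.&lt;/p&gt;

```javascript
// Hedged sketch of the seamless-migration sign-in decision.
// legacyValidate is the callback into the existing identity system.
function trySignIn(user, password, legacyValidate) {
  if (user.requiresMigration) {
    // Not yet migrated: defer to the legacy identity provider.
    if (!legacyValidate(user.username, password)) {
      return { success: false };
    }
    // Password was correct: store it in B2C and mark the user migrated,
    // so future sign-ins never touch the legacy system again.
    user.storedPassword = password;
    user.requiresMigration = false;
    return { success: true, migratedNow: true };
  }
  // Already migrated: validate against the password held in B2C.
  return { success: user.storedPassword === password };
}
```

&lt;p&gt;A failed legacy validation leaves the user unmigrated, so the next sign-in attempt goes through the legacy callback again.&lt;/p&gt;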

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dip3CDb3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/seamless-migration/seamless-migration.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dip3CDb3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/seamless-migration/seamless-migration.png" alt="Diagram Illustrating Seamless Migration" title="AD B2C Seamless Migration" width="880" height="880"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For me, the existence of this page in the official documentation initially implied that Seamless Migration was ‘built-in’ functionality. However, the docs eventually link you to &lt;a href="https://github.com/azure-ad-b2c/user-migration/tree/master/seamless-account-migration"&gt;a GitHub repo&lt;/a&gt; which carries a rather ominous disclaimer:&lt;/p&gt;

&lt;p&gt;“The migration application is developed and managed by the open-source community in GitHub. The application is not part of Azure AD B2C product and it’s not supported under any Microsoft standard support program or service. This migration app is provided AS IS without warranty of any kind.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Let’s try it out
&lt;/h2&gt;

&lt;p&gt;With the exception of being slightly out of date, the repository readme does an excellent job of guiding you through what you need to do to perform Seamless Migration.&lt;/p&gt;

&lt;p&gt;In order to track whether a user has been migrated, a &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory-b2c/user-flow-custom-attributes"&gt;custom attribute&lt;/a&gt; is created in Azure AD B2C. It is great that the ability to create custom attributes exists; however, it would be really helpful to be able to see the value of a custom attribute for a user from within the portal, and to be able to sort/filter all users on a custom attribute. I ended up creating my own management web app to display this value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QS_abqMk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/seamless-migration/custom-attribute.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QS_abqMk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/seamless-migration/custom-attribute.png" alt="Custom Attribue" title="AD B2C Custom Attribute" width="880" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pre-migration step was relatively straightforward, utilising the user-friendly Microsoft Graph APIs to create a bunch of users. The name of the custom attribute used when accessing it programmatically is a bit odd, but not a big deal.&lt;/p&gt;
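&lt;p&gt;For the curious, the odd programmatic name follows the Microsoft Graph naming convention for directory extension attributes: extension_ followed by the application id of the tenant’s b2c-extensions-app with the dashes removed, then the attribute name. A tiny helper illustrates this (the GUID below is a made-up example):&lt;/p&gt;

```javascript
// Build the programmatic name Microsoft Graph uses for a B2C custom
// attribute: "extension_" + the b2c-extensions-app application id with
// its dashes removed + "_" + the attribute name.
function graphExtensionName(extensionsAppId, attributeName) {
  return `extension_${extensionsAppId.replace(/-/g, '')}_${attributeName}`;
}

// e.g. graphExtensionName('25883231-668a-43a7-80b2-5255bd559c9a', 'requiresMigration')
// returns 'extension_25883231668a43a780b25255bd559c9a_requiresMigration'
```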

&lt;p&gt;The biggest challenge came with Azure AD B2C &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory-b2c/custom-policy-overview_"&gt;Custom Policies&lt;/a&gt;. I understand that adding extensibility points within products is hard, but this felt like customising a service that was not designed for customisation.&lt;/p&gt;

&lt;p&gt;B2C is explicit that you are entering ‘identity pro’ land, but I am not convinced that slapping a warning on something gets you out of delivering a good product experience! This admission is tellingly called out in the above-mentioned Okta blog post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xKNxXAOg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/seamless-migration/custom-policies.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xKNxXAOg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/seamless-migration/custom-policies.png" alt="Custom Policies Warning" title="AD B2C Custom Policies" width="880" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is a large amount of boilerplate point-and-click to enable custom policies in B2C, outside of authoring the policies themselves. There is a &lt;a href="https://b2ciefsetupapp.azurewebsites.net/"&gt;community app&lt;/a&gt; to automate this process, but it would be great to see this process automated within B2C itself.&lt;/p&gt;

&lt;p&gt;Likewise, there is a (somewhat out of date) &lt;a href="https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack"&gt;community repo&lt;/a&gt; full of custom policy samples, but no canonical ‘starting point’, which I believe would be useful. Taking the sample policies and updating them according to the readme results in policies that fail validation, which is not great.&lt;/p&gt;

&lt;p&gt;Authoring the policies was challenging. The policies are written in XML, and while there are &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory-b2c/trustframeworkpolicy_"&gt;fairly strong docs&lt;/a&gt;, a lot of the terms used are not particularly user-friendly. Being honest, I would say I cut and pasted sections from blogs/tutorials/samples more than I actually wrote much policy.&lt;/p&gt;

&lt;p&gt;Lastly, uploading policies triggers a validation process that, in the case of failure (i.e. quite often), resulted in strangely formatted error messages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L-ZorAkq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/seamless-migration/custom-policies-validation-error.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L-ZorAkq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/seamless-migration/custom-policies-validation-error.png" alt="Custom Policies Validatiom Error" title="AD B2C Custom Policies Validation Error" width="880" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Combining determination and my knowledge of the domain eventually resulted in policies that both passed validation and worked as desired! I wouldn’t say I deeply understand how they are working - I may return and figure it out, I may not.&lt;/p&gt;




&lt;p&gt;In summary, I achieved what I was looking to achieve, but spent a little longer doing so than I would have liked. I don’t particularly value time spent learning how to configure SaaS tools, and feel Azure AD B2C would benefit from a little more investment in the customer experience for some of these more advanced scenarios.&lt;/p&gt;

</description>
      <category>posts</category>
      <category>idaas</category>
      <category>b2c</category>
      <category>identity</category>
    </item>
    <item>
      <title>Interactive rebase</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Thu, 21 Oct 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/interactive-rebase-4hg2</link>
      <guid>https://dev.to/lacjamm/interactive-rebase-4hg2</guid>
      <description>&lt;p&gt;The purpose of this article is to document the steps involved in performing an interactive rebase as an easy reference for clients without paid tools to simplify the process. By the end a git novice should be able to follow along unassisted.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-requisites
&lt;/h3&gt;

&lt;p&gt;Rebase can be executed from Visual Studio Code, many IDEs or the command line. The following tools can be used to simplify the process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Git graph viewer: VS Code with the &lt;a href="https://marketplace.visualstudio.com/items?itemName=mhutchie.git-graph"&gt;mhutchie.git-graph extension&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Some means of running git commands from the CLI, available &lt;a href="https://git-scm.com/downloads"&gt;from here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What is Rebase
&lt;/h3&gt;

&lt;p&gt;A rebase is a way of integrating work into branches, as an alternative to a standard merge. The difference between the two is that, as time goes on, normal merges can lead to a very messy git graph and make it difficult or impossible to see how work was merged. Rebase simplifies this by actually rewriting the history, so that it looks as though the changes were made right on top of some later commit.&lt;/p&gt;

&lt;p&gt;Unfortunately, rebases can be more complicated, making many people scared of them despite the advantages. But a rebase can be as simple as a couple of commands, which I’ll show below before getting into the more complicated topic of the interactive rebase. To do a standard rebase:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Checkout the branch to be rebased&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;git rebase origin/target-branch&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;If there are no conflicts and the process completes run &lt;code&gt;git push origin --force&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A lot of the time those two commands are all that is required. However many commits you have made, they are replayed on top of the target branch. When complete, it will look like you made your changes starting from the last commit of the target branch rather than from some earlier point in the history.&lt;/p&gt;

&lt;h3&gt;
  
  
  Guide to interactive rebase
&lt;/h3&gt;

&lt;p&gt;Chances are that you will try to do it the easy way first, as above, and run into unexpected conflicts. If so, the first thing to do is cancel the rebase using &lt;code&gt;git rebase --abort&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The next thing you should do is check the graph to see what work is actually yours. It may take time to learn this part but eventually you should be able to quickly glance through the nodes of your branch and determine what is your work vs. stuff that you have merged in.&lt;/p&gt;

&lt;p&gt;Now comes the fun bit. You run &lt;code&gt;git rebase --interactive origin/target-branch&lt;/code&gt; while on your branch. This will open a &lt;em&gt;vim&lt;/em&gt; editor in your terminal. To make life easier here are some commands that will help make use of it.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Vim commands&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;i&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Press the &lt;em&gt;i&lt;/em&gt; key while in normal mode to enter insert mode and edit the commits list.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Esc&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Press the &lt;em&gt;Esc&lt;/em&gt; key while in insert mode to return to normal mode and use the other key commands.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;:wq&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;While in normal mode (&lt;strong&gt;NOT&lt;/strong&gt; insert mode) type this key sequence to save changes and close the vim editor.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;:q!&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;While in normal mode (&lt;strong&gt;NOT&lt;/strong&gt; insert mode) type this key sequence to close vim without saving changes. This is useful in case the edits do not look correct.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In the vim editor that opens you will see something like the following example. It already tells you how to do a few interesting things, but your changes will ultimately come down to putting a &lt;em&gt;d&lt;/em&gt; or &lt;em&gt;drop&lt;/em&gt; next to the commits that you don’t want rebased. &lt;strong&gt;NOTE&lt;/strong&gt;: when you run the rebase there will be instructions as well, which I’ve omitted for brevity; read them through for a full understanding of what can be done in an interactive rebase.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pick 442a48288 Adding publication form
pick 9e4588745 Publication date component
pick b503173d8 Test updates for form
pick 460c79b2f Portal settings integration with publication dates service

# Rebase e7403349a..460c79b2f onto e7403349a (4 commands)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the example above the branch being rebased consists of 4 commits, all of which will be rebased by default. But if the target branch already contained the final commit, I could do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start the interactive rebase from the branch to rebase. This will open a vim session in the CLI&lt;/li&gt;
&lt;li&gt;You will see something resembling the example above. Press &lt;em&gt;i&lt;/em&gt; to enter insert mode&lt;/li&gt;
&lt;li&gt;Update the commits that are no longer required with &lt;em&gt;drop&lt;/em&gt; e.g. &lt;code&gt;drop 460c79b2f Portal settings integration with publication dates service&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Press &lt;em&gt;Esc&lt;/em&gt; to return to normal mode&lt;/li&gt;
&lt;li&gt;Type &lt;em&gt;:wq&lt;/em&gt; to save the changes and close the editor&lt;/li&gt;
&lt;li&gt;The rebase will now continue automatically&lt;/li&gt;
&lt;li&gt;If there are conflicts you will need to resolve them&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As many commits as necessary can be dropped from the rebase. Identifying those that are already committed in the target should be fairly trivial in a lot of cases. To help, use a git graph tool to look at the commits and identify those that can be dropped without issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  What about conflicts
&lt;/h3&gt;

&lt;p&gt;If you have conflicts then they will need to be resolved. To do this you need some tool to compare the changes in your branch and the target and decide how to resolve each one. Broadly, the options to resolve a conflict are described below.&lt;/p&gt;

&lt;p&gt;When you have a conflict in a file you should be able to see it in the git tab of VS Code. If you click on the file there you will see a diff of the file with the conflicts highlighted in blocks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---gDSb-Wx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/interactive-rebase/conflict_top.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---gDSb-Wx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/interactive-rebase/conflict_top.png" alt="Top of the conflict block" title="Top of the conflict block" width="880" height="53"&gt;&lt;/a&gt;The current commit of the rebased branch’s changes will be here.&lt;/p&gt;

&lt;p&gt;The current state of the target branch’s changes including previous rebased commits. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3b94bG25--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/interactive-rebase/conflict_bottom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3b94bG25--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://insight-services-apac.github.io/assets/images/interactive-rebase/conflict_bottom.png" alt="Bottom of the conflict block" title="Bottom of the conflict block" width="880" height="31"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Command option&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Accept Current Change&lt;/td&gt;
&lt;td&gt;Take just the change from the target branch’s changes. &lt;strong&gt;Remember&lt;/strong&gt; that this will include any rebased commits that are already done&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Accept Incoming Change&lt;/td&gt;
&lt;td&gt;Take just the change from the current commit being rebased&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Accept Both Changes&lt;/td&gt;
&lt;td&gt;Take both of the above changes. &lt;strong&gt;NOTE&lt;/strong&gt; this will often require additional editing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can also use git commands to progress the rebase. Typically you will run the continue command after resolving each conflict with one of the options above; the other commands are listed below. Continuing will open a vim editor with that commit’s message, and if you aren’t changing it just use the vim close command.&lt;/p&gt;

&lt;p&gt;Below are commands that can be typed in the git rebase terminal to continue actions.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;git rebase --continue&lt;/td&gt;
&lt;td&gt;Continue to the next commit of the branch to be rebased. All conflicts must be resolved for this to succeed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;git rebase --skip&lt;/td&gt;
&lt;td&gt;Skips the current commit from the branch being rebased&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;git rebase --abort&lt;/td&gt;
&lt;td&gt;Abort the rebase and return the branch to its pre-rebase state&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  When not to Rebase
&lt;/h3&gt;

&lt;p&gt;Just because you can do something doesn’t mean that you should. 99% of the time you should &lt;strong&gt;NOT&lt;/strong&gt; rebase a branch that is used by multiple people. The reason is that rewriting the history makes it as though the branches diverged, making the other developers’ lives more difficult as a result. If you only rebase your own branches then you should never run into issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  How can it go wrong?
&lt;/h3&gt;

&lt;p&gt;Occasionally you will have to pull a change from another branch into yours, such as some API updates, possibly before they have been completed and merged to the target branch. When you eventually go to PR your changes, if those changes have already been merged you will get conflicts. But this can be avoided using the interactive rebase.&lt;/p&gt;

&lt;p&gt;Even if you haven’t merged from another branch sometimes someone will happen to alter code in the same area as you.&lt;/p&gt;

&lt;p&gt;So you do a rebase and you get conflicts. If you know that you have merged other changes into your branch that have already made their way into your target, you are better off doing an interactive rebase to at least avoid those conflicts. Otherwise you can resolve the conflicts as normal.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;If you would like to read further on the above topics you can find further info at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/docs/git-rebase"&gt;git rebase&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>git</category>
      <category>rebase</category>
    </item>
    <item>
      <title>Azure Maps</title>
      <dc:creator>axurcio</dc:creator>
      <pubDate>Wed, 28 Jul 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/lacjamm/azure-maps-2m4e</link>
      <guid>https://dev.to/lacjamm/azure-maps-2m4e</guid>
      <description>&lt;p&gt;As the topic suggests, &lt;a href="https://azure.microsoft.com/en-us/services/azure-maps/"&gt;Azure Maps&lt;/a&gt; is the geospatial Platform-as-a-Service (PaaS) service provided by &lt;a href="https://www.microsoft.com/"&gt;Microsoft&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is part of their cloud computing offering, &lt;a href="https://azure.microsoft.com/"&gt;Azure&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pricing
&lt;/h3&gt;

&lt;p&gt;With the Microsoft cloud offering, the &lt;a href="https://www.myaccountingcourse.com/accounting-dictionary/operating-expenses"&gt;OPEX&lt;/a&gt; model applies. More on their pricing model &lt;a href="https://azure.microsoft.com/en-us/pricing/details/azure-maps/"&gt;here&lt;/a&gt;. The article also describes the Azure Maps functionality enabled at each pricing tier.&lt;/p&gt;

&lt;h3&gt;
  
  
  Technical offering
&lt;/h3&gt;

&lt;p&gt;The Azure Maps product team at Microsoft has extensive code &lt;a href="https://azuremapscodesamples.azurewebsites.net/"&gt;samples&lt;/a&gt; available.&lt;/p&gt;

&lt;p&gt;The following are nifty bits of information worth knowing when working with Azure Maps.&lt;/p&gt;

&lt;p&gt;Out of the box, pin aggregation aka clustering only happens when there are 2 or more pins.&lt;/p&gt;

&lt;p&gt;In order to have a single pin show up as a cluster, use the Bubble Layer and apply data-driven layer styling.&lt;/p&gt;

&lt;p&gt;The samples do not provide a solution for this, so after some reading of the &lt;a href="https://docs.microsoft.com/en-us/azure/azure-maps/data-driven-style-expressions-web-sdk"&gt;documentation&lt;/a&gt;, here is how to do it.&lt;/p&gt;

&lt;p&gt;In order to enable aggregation, set the cluster property of the atlas.source.DataSource object to true.&lt;/p&gt;

&lt;p&gt;The data source must contain a property that you want to aggregate on. I have used the isCheapest property from my data item in the example below.&lt;/p&gt;

&lt;p&gt;Incrementing the Cheapest property within the clusterProperties of the DataSource object accounts for the scenario where 2 or more cheapest pins are within close proximity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
cluster: true,
clusterProperties: {
    Cheapest: ['+', ['case', ['==', ['get', 'isCheapest'], true], 1, 0]]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will then create a Bubble Layer object and alter its properties.&lt;/p&gt;

&lt;p&gt;The example below shows the colour property of the Bubble Layer being set to red when there is a cluster of 1 or more cheapest pins &lt;strong&gt;or&lt;/strong&gt; when there is only 1 pin and the isCheapest property of the data item is true. Otherwise, the bubble shown will be black in colour.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
color: [
        'case',
        ['all', ['has', 'Cheapest'], ['&amp;gt;', ['get', 'Cheapest'], 0]],
        '#ff0000',
        ['all', ['has', 'myDataItem'], ['==', ['get', 'isCheapest'], true]],
        '#ff0000',
        '#000000'
    ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another nifty thing to keep an eye out for is disabling pan, rotate and tilt in Azure Maps. By default the map allows panning, rotating and tilting, so to disable them you will have to set the following map properties.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;dragRotate : this covers the drag event as well as right-click. The _pitchWithRotate property will disable pitch when rotating, and there is also a disable function to prevent dragging and rotating.&lt;/li&gt;
&lt;li&gt;touchZoomRotate : this is for touch-enabled devices. There is a disableRotation function that you can invoke.&lt;/li&gt;
&lt;li&gt;touchPitch : this is also for touch-enabled devices. Use the disable function to prevent pitch.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The map also has a setMaxPitch function that you can invoke to set the maximum desired pitch; passing in 0 means the map has no pitch. However, I still encountered undesired map rendering. To completely nullify pitch, you can invoke the preventDefault function on the map’s pitch events. A full list of map events can be found &lt;a href="https://docs.microsoft.com/en-us/azure/azure-maps/map-events"&gt;here&lt;/a&gt;.&lt;/p&gt;
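&lt;p&gt;Put together, the calls look roughly like the sketch below. The handler and function names are the ones discussed above; treat them as assumptions and verify them against the SDK version you are using:&lt;/p&gt;

```javascript
// Sketch: lock the orientation of a map instance by disabling rotate
// and pitch. The handler objects (dragRotate, touchZoomRotate, touchPitch)
// and setMaxPitch are the names discussed in the text, not guaranteed API.
function lockMapOrientation(map) {
  map.dragRotate.disable();              // no rotation via drag or right-click
  map.touchZoomRotate.disableRotation(); // touch devices: zoom stays, rotation off
  map.touchPitch.disable();              // touch devices: no pitch
  map.setMaxPitch(0);                    // clamp pitch to 0 as a final safeguard
}
```

&lt;p&gt;Wrapping the calls in one function keeps the interaction lockdown in a single place, which makes it easy to apply right after the map’s ready event fires.&lt;/p&gt;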

&lt;p&gt;Consideration also has to be taken when rendering pins. There will be a &lt;a href="https://azure.microsoft.com/en-us/blog/data-driven-styling-and-more-in-the-latest-azure-maps-web-sdk-update/"&gt;performance hit&lt;/a&gt; with liberal use of the map’s addPins function. The recommended approach is to have 1 atlas.source.DataSource object, adding items to it by invoking the add function or removing items from it by invoking the remove function.&lt;/p&gt;
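&lt;p&gt;The recommended pattern can be sketched as follows. Here ds stands in for the single atlas.source.DataSource instance; only the add and remove calls are the SDK’s own function names, while refreshPins and the shape arrays are hypothetical illustration:&lt;/p&gt;

```javascript
// Sketch: reuse one DataSource for the lifetime of the map and mutate it,
// rather than repeatedly adding pins to the map. `ds` stands in for an
// atlas.source.DataSource; add/remove match the function names in the text.
function refreshPins(ds, currentShapes, newShapes) {
  // Remove shapes that are no longer needed...
  for (const shape of currentShapes) {
    if (!newShapes.includes(shape)) ds.remove(shape);
  }
  // ...and add only the genuinely new ones.
  for (const shape of newShapes) {
    if (!currentShapes.includes(shape)) ds.add(shape);
  }
}
```

&lt;p&gt;Diffing like this keeps the number of mutations per refresh to a minimum, which is the point of the recommendation above.&lt;/p&gt;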

&lt;p&gt;Happy Azure Mapping!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azuremaps</category>
    </item>
  </channel>
</rss>
