<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Carlos Mendible</title>
    <description>The latest articles on DEV Community by Carlos Mendible (@cmendibl3).</description>
    <link>https://dev.to/cmendibl3</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F17494%2F54a3b4d1-c921-473f-bf1f-1b32b3378993.jpg</url>
      <title>DEV Community: Carlos Mendible</title>
      <link>https://dev.to/cmendibl3</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cmendibl3"/>
    <language>en</language>
    <item>
      <title>Azure Quick Review (azqr) v2.9.0 is out!</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Fri, 19 Sep 2025 21:50:58 +0000</pubDate>
      <link>https://dev.to/cmendibl3/azure-quick-review-azqr-v290-is-out-32nc</link>
      <guid>https://dev.to/cmendibl3/azure-quick-review-azqr-v290-is-out-32nc</guid>
      <description>&lt;p&gt;🚀 Azure Quick Review (azqr) v2.9.0 is out!&lt;br&gt;
🛡️ New: Azure Policy scanning &amp;amp; reporting &lt;br&gt;
📂 Fix: Nested management group retrieval&lt;br&gt;
✅ Fix: Resource Group ID format validation&lt;/p&gt;

&lt;p&gt;👉 For sharper insights, get the latest: &lt;a href="https://github.com/Azure/azqr" rel="noopener noreferrer"&gt;https://github.com/Azure/azqr&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azqr</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>AKS: Windows node pool with spot virtual machines and ephemeral disks</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Mon, 13 Sep 2021 10:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/aks-windows-node-pool-with-spot-virtual-machines-and-ephemeral-disks-4807</link>
      <guid>https://dev.to/cmendibl3/aks-windows-node-pool-with-spot-virtual-machines-and-ephemeral-disks-4807</guid>
      <description>&lt;p&gt;Some months ago a customer asked me if there was a way to deploy a Windows node pool with spot virtual machines and ephemeral disks in Azure Kubernetes Service (AKS).&lt;/p&gt;

&lt;p&gt;The idea was to create a cluster that could be used to run Windows batch workloads and minimize costs by deploying the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AKS cluster with two Linux nodes and ephemeral disks as the default node pool configuration.&lt;/li&gt;
&lt;li&gt;A Windows node pool with Spot Virtual Machines, ephemeral disks and auto-scaling enabled.&lt;/li&gt;
&lt;li&gt;The Windows node pool minimum count and initial node count set to 0.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To create a cluster with the desired configuration using terraform, follow the steps below:&lt;/p&gt;

&lt;h2&gt;
  
  
  Define the terraform providers to use
&lt;/h2&gt;

&lt;p&gt;Create a &lt;em&gt;providers.tf&lt;/em&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt; 0.12"
  required_providers {
    azurerm = {
      source  = "azurerm"
      version = "~&amp;gt; 2.26"
    }
  }
}

provider "azurerm" {
  features {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Define the variables
&lt;/h2&gt;

&lt;p&gt;Create a &lt;em&gt;variables.tf&lt;/em&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "resource_group_name" {
  default = "aks-win"
}

variable "location" {
  default = "West Europe"
}

variable "cluster_name" {
  default = "aks-win"
}

variable "dns_prefix" {
  default = "aks-win"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Define the resource group
&lt;/h2&gt;

&lt;p&gt;Create a &lt;em&gt;main.tf&lt;/em&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create Resource Group
resource "azurerm_resource_group" "rg" {
  name     = var.resource_group_name
  location = var.location
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Define the VNET for the cluster
&lt;/h2&gt;

&lt;p&gt;Create a &lt;em&gt;vnet-server.tf&lt;/em&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_virtual_network" "vnet" {
  name                = "aks-vnet"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "aks-subnet" {
  name                 = "aks-subnet"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Define the AKS cluster
&lt;/h2&gt;

&lt;p&gt;Create an &lt;em&gt;aks-server.tf&lt;/em&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Deploy Kubernetes
resource "azurerm_kubernetes_cluster" "k8s" {
  name                = var.cluster_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = var.dns_prefix

  default_node_pool {
    name                = "default"
    node_count          = 2
    vm_size             = "Standard_D2s_v3"
    os_disk_size_gb     = 30
    os_disk_type        = "Ephemeral"
    vnet_subnet_id      = azurerm_subnet.aks-subnet.id
    max_pods            = 15
    enable_auto_scaling = false
  }

  # Using Managed Identity
  identity {
    type = "SystemAssigned"
  }

  network_profile {
    # The --service-cidr is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
    service_cidr = "172.0.0.0/16"
    # The --dns-service-ip address should be the .10 address of your service IP address range.
    dns_service_ip = "172.0.0.10"
    # The --docker-bridge-address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network.
    docker_bridge_cidr = "172.17.0.1/16"
    network_plugin     = "azure"
    network_policy     = "calico"
  }

  role_based_access_control {
    enabled = true
  }

  addon_profile {
    kube_dashboard {
      enabled = false
    }
  }
}

resource "azurerm_kubernetes_cluster_node_pool" "windows" {
  kubernetes_cluster_id = azurerm_kubernetes_cluster.k8s.id
  name                  = "win"
  priority              = "Spot"
  eviction_policy       = "Delete"
  spot_max_price        = -1 # The VMs will not be evicted for pricing reasons.
  os_type               = "Windows"
  # "The virtual machine size Standard_D2s_v3 has a cache size of 53687091200 bytes, but the OS disk requires 137438953472 bytes. Use a VM size with larger cache or disable ephemeral OS."
  # https://docs.microsoft.com/en-us/azure/virtual-machines/ephemeral-os-disks#size-requirements
  vm_size             = "Standard_DS3_v2"
  os_disk_type        = "Ephemeral"
  node_count          = 0
  enable_auto_scaling = true
  max_count           = 3
  min_count           = 0
}

data "azurerm_resource_group" "node_resource_group" {
  name = azurerm_kubernetes_cluster.k8s.node_resource_group
}

# Assign the Contributor role to the AKS kubelet identity
resource "azurerm_role_assignment" "kubelet_contributor" {
  scope                = data.azurerm_resource_group.node_resource_group.id
  role_definition_name = "Contributor" #"Virtual Machine Contributor"?
  principal_id         = azurerm_kubernetes_cluster.k8s.kubelet_identity[0].object_id
}

resource "azurerm_role_assignment" "kubelet_network_contributor" {
  scope                = azurerm_virtual_network.vnet.id
  role_definition_name = "Network Contributor"
  principal_id         = azurerm_kubernetes_cluster.k8s.identity[0].principal_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy the AKS cluster
&lt;/h2&gt;

&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get the credentials for the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RESOURCE_GROUP="aks-win"
CLUSTER_NAME="aks-win"
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify that there are no Windows VMs running, execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;you should see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-default-36675761-vmss000000 Ready agent 80m v1.20.9 10.0.1.4 &amp;lt;none&amp;gt; Ubuntu 18.04.5 LTS 5.4.0-1056-azure containerd://1.4.8+azure
aks-default-36675761-vmss000001 Ready agent 80m v1.20.9 10.0.1.20 &amp;lt;none&amp;gt; Ubuntu 18.04.5 LTS 5.4.0-1056-azure containerd://1.4.8+azure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy a Windows workload
&lt;/h2&gt;

&lt;p&gt;To deploy a Windows workload, create a &lt;em&gt;windows_deployment.yaml&lt;/em&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: servercore
  labels:
    app: servercore
spec:
  replicas: 1
  template:
    metadata:
      name: servercore
      labels:
        app: servercore
    spec:
      nodeSelector:
        "kubernetes.azure.com/scalesetpriority": "spot"
      containers:
      - name: servercore
        image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
        resources:
          limits:
            cpu: 1
            memory: 800M
          requests:
            cpu: .1
            memory: 150M
        ports:
          - containerPort: 80
      tolerations:
        - key: "kubernetes.azure.com/scalesetpriority"
          operator: "Equal"
          value: "spot"
          effect: "NoSchedule"
  selector:
    matchLabels:
      app: servercore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and deploy it to your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f windows_deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;em&gt;kubernetes.azure.com/scalesetpriority&lt;/em&gt; label is used in the &lt;em&gt;nodeSelector&lt;/em&gt; to ensure that the workload is scheduled on a spot node.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;tolerations&lt;/em&gt; are required so that the workload can tolerate the taint that AKS applies to spot nodes.&lt;/li&gt;
&lt;li&gt;Deployment will take a while (&amp;gt; 5 minutes) since the Windows node pool must scale up to fulfill the request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now check the nodes again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;this time you should see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-default-36675761-vmss000000 Ready agent 91m v1.20.9 10.0.1.4 &amp;lt;none&amp;gt; Ubuntu 18.04.5 LTS 5.4.0-1056-azure containerd://1.4.8+azure
aks-default-36675761-vmss000001 Ready agent 91m v1.20.9 10.0.1.20 &amp;lt;none&amp;gt; Ubuntu 18.04.5 LTS 5.4.0-1056-azure containerd://1.4.8+azure
akswin000000 Ready agent 102s v1.20.9 10.0.1.36 &amp;lt;none&amp;gt; Windows Server 2019 Datacenter 10.0.17763.2114 docker://20.10.6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you check the pod events you’ll find that the workload triggered a scale up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe $(kubectl get po -l "app=servercore" -o name)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’ll let you test what happens if you delete the deployment.&lt;/p&gt;

&lt;p&gt;Hope it helps!!!&lt;/p&gt;

&lt;p&gt;Please find the complete &lt;strong&gt;terraform&lt;/strong&gt; configuration &lt;a href="https://github.com/cmendible/azure.samples/tree/main/aks_windows_spot_ephemeral_autoscaler/deploy" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aks</category>
      <category>windows</category>
      <category>spot</category>
      <category>azure</category>
    </item>
    <item>
      <title>AKS: Persistent Volume Claim with an Azure File Storage protected with a Private Endpoint</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Mon, 02 Aug 2021 10:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/aks-persistent-volume-claim-with-an-azure-file-storage-protected-with-a-private-endpoint-49hi</link>
      <guid>https://dev.to/cmendibl3/aks-persistent-volume-claim-with-an-azure-file-storage-protected-with-a-private-endpoint-49hi</guid>
      <description>&lt;p&gt;This post will show you the steps you’ll have to take to deploy an Azure Files Storage with a Private Endpoint and use it to create volumes for an Azure Kubernetes Service cluster:&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a &lt;strong&gt;bicep&lt;/strong&gt; file to declare the Azure resources
&lt;/h2&gt;

&lt;p&gt;You’ll have to declare the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A VNET with 2 subnets. One for the private endpoint and the other for the AKS cluster.&lt;/li&gt;
&lt;li&gt;An Azure Files storage.&lt;/li&gt;
&lt;li&gt;A Private Endpoint for the storage.&lt;/li&gt;
&lt;li&gt;A Private DNS Zone and Private DNS Zone Group.&lt;/li&gt;
&lt;li&gt;A link between the Private DNS Zone and the VNET.&lt;/li&gt;
&lt;li&gt;An AKS cluster.&lt;/li&gt;
&lt;li&gt;A role assignment to add the kubelet identity of the cluster as a Contributor to the Storage Account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;in a &lt;em&gt;main.bicep&lt;/em&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;param sa_name string = 'akscsisa'
param aks_name string = 'akscsimsft'

// Create the VNET
resource vnet 'Microsoft.Network/virtualNetworks@2020-11-01' = {
  name: 'private-network'
  location: 'westeurope'
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/8'
      ]
    }
    subnets: [
      {
        name: 'endpoint'
        properties: {
          addressPrefix: '10.241.0.0/16'
          serviceEndpoints: []
          delegations: []
          privateEndpointNetworkPolicies: 'Disabled'
          privateLinkServiceNetworkPolicies: 'Enabled'
        }
      }
      {
        name: 'aks'
        properties: {
          addressPrefix: '10.240.0.0/16'
          serviceEndpoints: []
          delegations: []
          privateEndpointNetworkPolicies: 'Enabled'
          privateLinkServiceNetworkPolicies: 'Enabled'
        }
      }
    ]
    enableDdosProtection: false
  }
}

// Create the File Storage Account
resource sa 'Microsoft.Storage/storageAccounts@2021-01-01' = {
  name: sa_name
  location: 'westeurope'
  sku: {
    name: 'Premium_LRS'
    tier: 'Premium'
  }
  kind: 'FileStorage'
  properties: {
    minimumTlsVersion: 'TLS1_0'
    allowBlobPublicAccess: false
    isHnsEnabled: false
    networkAcls: {
      bypass: 'AzureServices'
      virtualNetworkRules: []
      ipRules: []
      defaultAction: 'Deny'
    }
    supportsHttpsTrafficOnly: true
    accessTier: 'Hot'
  }
}

// Create the Private Endpoint
resource private_endpoint 'Microsoft.Network/privateEndpoints@2020-11-01' = {
  name: 'sa-endpoint'
  location: 'westeurope'
  properties: {
    privateLinkServiceConnections: [
      {
        name: 'sa-privateserviceconnection'
        properties: {
          privateLinkServiceId: sa.id
          groupIds: [
            'file'
          ]
        }
      }
    ]
    subnet: {
      id: '${vnet.id}/subnets/endpoint'
    }
  }
}

// Create the Private DNS Zone
resource dns 'Microsoft.Network/privateDnsZones@2018-09-01' = {
  name: 'privatelink.file.core.windows.net'
  location: 'global'
}

// Link the Private DNS Zone with the VNET
resource vnet_dns_link 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2018-09-01' = {
  name: '${dns.name}/test'
  location: 'global'
  properties: {
    registrationEnabled: false
    virtualNetwork: {
      id: vnet.id
    }
  }
}

// Create Private DNS Zone Group 
resource dns_group 'Microsoft.Network/privateEndpoints/privateDnsZoneGroups@2020-11-01' = {
  name: '${private_endpoint.name}/default'
  properties: {
    privateDnsZoneConfigs: [
      {
        name: 'privatelink-file-core-windows-net'
        properties: {
          privateDnsZoneId: dns.id
        }
      }
    ]
  }
}

// Create AKS cluster
resource aks 'Microsoft.ContainerService/managedClusters@2021-02-01' = {
  name: aks_name
  location: 'westeurope'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    kubernetesVersion: '1.19.9'
    dnsPrefix: aks_name
    agentPoolProfiles: [
      {
        name: 'default'
        count: 1
        vmSize: 'Standard_D2s_v3'
        osDiskSizeGB: 30
        osDiskType: 'Ephemeral'
        vnetSubnetID: '${vnet.id}/subnets/aks'
        type: 'VirtualMachineScaleSets'
        orchestratorVersion: '1.19.9'
        osType: 'Linux'
        mode: 'System'
      }
    ]
    servicePrincipalProfile: {
      clientId: 'msi'
    }
    addonProfiles: {
      kubeDashboard: {
        enabled: false
      }
    }
    enableRBAC: true
    networkProfile: {
      networkPlugin: 'kubenet'
      networkPolicy: 'calico'
      loadBalancerSku: 'standard'
      podCidr: '10.244.0.0/16'
      serviceCidr: '10.0.0.0/16'
      dnsServiceIP: '10.0.0.10'
      dockerBridgeCidr: '172.17.0.1/16'
    }
    apiServerAccessProfile: {
      enablePrivateCluster: false
    }
  }
}

// Built-in Role Definition IDs
// https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles
var contributor = '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c'

// Set AKS kubelet Identity as SA Contributor
resource aks_kubelet_sa_contributor 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid('${aks_name}_kubelet_sa_contributor')
  scope: sa
  properties: {
    principalId: reference(aks.id, '2021-02-01', 'Full').properties.identityProfile['kubeletidentity'].objectId
    roleDefinitionId: contributor
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy the Azure resources
&lt;/h2&gt;

&lt;p&gt;Run the following commands to deploy the Azure resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az group create -n private-pvc-test -l westeurope
az deployment group create -f ./main.bicep -g private-pvc-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After 10 minutes or so you’ll have all resources up and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Azure CSI Driver
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/kubernetes-sigs/azurefile-csi-driver" rel="noopener noreferrer"&gt;Azure Files Container Storage Interface (CSI) driver&lt;/a&gt; can be installed on the cluster so Azure Kubernetes Service (AKS) can manage the lifecycle of Azure Files shares.&lt;/p&gt;

&lt;p&gt;To install the driver run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az aks get-credentials -n akscsimsft -g private-pvc-test --overwrite-existing
curl -skSL https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/v1.5.0/deploy/install-driver.sh | bash -s v1.5.0 --
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and check the pods status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n kube-system get pod -o wide --watch -l app=csi-azurefile-controller
kubectl -n kube-system get pod -o wide --watch -l app=csi-azurefile-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should find instances of &lt;em&gt;csi-azurefile-controller&lt;/em&gt; and &lt;em&gt;csi-azurefile-node&lt;/em&gt; running without issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Storage Class
&lt;/h2&gt;

&lt;p&gt;Create a file named: &lt;em&gt;storageclass-azurefile-csi.yaml&lt;/em&gt; with the following yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  resourceGroup: &amp;lt;resourceGroup&amp;gt;  # optional, only set this when storage account is not in the same resource group as agent node
  storageAccount: &amp;lt;storageAccountName&amp;gt;
  # Check driver parameters here:
  # https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md
  server: &amp;lt;storageAccountName&amp;gt;.privatelink.file.core.windows.net 
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict  # https://linux.die.net/man/8/mount.cifs
  - nosharesock  # reduce probability of reconnect race
  - actimeo=30  # reduce latency for metadata-heavy workload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember to replace the values of &lt;code&gt;&amp;lt;resourceGroup&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;storageAccountName&amp;gt;&lt;/code&gt; with the ones used in the previous deployment. (i.e. &lt;em&gt;private-pvc-test&lt;/em&gt; and &lt;em&gt;akscsisa&lt;/em&gt;)&lt;/p&gt;

&lt;p&gt;Now deploy the Storage Class to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f storageclass-azurefile-csi.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create a Persistent Volume Claim
&lt;/h2&gt;

&lt;p&gt;Create a Persistent Volume Claim that uses the storage class. Create a &lt;em&gt;pvc.yaml&lt;/em&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 100Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy the Persistent Volume Claim to the cluster and check that everything is ok:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pvc.yaml
kubectl get pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now feel free to try and mount a volume using the Persistent Volume Claim: &lt;em&gt;my-azurefile&lt;/em&gt;&lt;/p&gt;
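For instance, a minimal pod mounting the claim could look like the following (a hypothetical manifest for illustration; the pod name, image and mount path are my own choices, not part of the original sample):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: azurefile-test
spec:
  containers:
    - name: test
      image: mcr.microsoft.com/azure-cli
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/azurefile  # anything written here lands on the Azure Files share
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-azurefile  # the PVC created above
```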

&lt;p&gt;Hope it helps!!!&lt;/p&gt;

&lt;p&gt;Please find a &lt;strong&gt;bicep&lt;/strong&gt; based sample &lt;a href="https://github.com/cmendible/azure.samples/tree/main/aks_csi_sa_private_enpoint/bicep" rel="noopener noreferrer"&gt;here&lt;/a&gt; or if you prefer &lt;strong&gt;terraform&lt;/strong&gt; &lt;a href="https://github.com/cmendible/azure.samples/tree/main/aks_csi_sa_private_enpoint/terraform" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/aks/azure-files-csi" rel="noopener noreferrer"&gt;Use Azure Files Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/azurefile-csi-driver" rel="noopener noreferrer"&gt;Azure Files Container Storage Interface (CSI) driver&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aks</category>
      <category>pvc</category>
      <category>azurefiles</category>
      <category>azure</category>
    </item>
    <item>
      <title>Plan IP addressing for AKS configured with Azure CNI Networking</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Fri, 09 Jul 2021 10:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/plan-ip-addressing-for-aks-configured-with-azure-cni-networking-41id</link>
      <guid>https://dev.to/cmendibl3/plan-ip-addressing-for-aks-configured-with-azure-cni-networking-41id</guid>
<description>&lt;p&gt;When configuring Azure Kubernetes Service with Azure Container Network Interface (CNI) networking, every pod gets an IP address from the subnet you’ve configured.&lt;/p&gt;

&lt;p&gt;So how do you plan your address space? What factors should you consider?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Each node consumes one IP.&lt;/li&gt;
&lt;li&gt;Each pod consumes one IP.&lt;/li&gt;
&lt;li&gt;Each internal LoadBalancer Service you anticipate consumes one IP.&lt;/li&gt;
&lt;li&gt;Azure reserves 5 IP addresses within each subnet.&lt;/li&gt;
&lt;li&gt;The maximum number of pods per node is 250.&lt;/li&gt;
&lt;li&gt;The minimum number of pods per node is 10.&lt;/li&gt;
&lt;li&gt;The minimum number of pods per cluster is 30.&lt;/li&gt;
&lt;li&gt;Max nodes per cluster is 1000.&lt;/li&gt;
&lt;li&gt;When a cluster is upgraded, a new node is added as part of the process, which requires a minimum of one additional block of IP addresses to be available. Your node count is then n + 1.&lt;/li&gt;
&lt;li&gt;When you scale a cluster, additional nodes are added. Your node count is then n + number-of-additional-scaled-nodes-you-anticipate + 1.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With all that in mind the formula to calculate the number of IPs required for your cluster should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;requiredIPs = (nodes + 1 + scale) + ((nodes + 1 + scale) * maxPods) + isvc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;nodes: Number of nodes (default 3)&lt;/li&gt;
&lt;li&gt;maxPods: Max pods per node (default 30)&lt;/li&gt;
&lt;li&gt;scale: Number of expected scale nodes&lt;/li&gt;
&lt;li&gt;isvc: Number of expected internal LoadBalancer services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To help you with this I’ve created a small console program written in golang: &lt;strong&gt;&lt;a href="https://github.com/cmendible/aksip" rel="noopener noreferrer"&gt;aksip&lt;/a&gt;&lt;/strong&gt; which performs the necessary validations and calculations for you.&lt;/p&gt;

&lt;p&gt;Let’s say you want a 50-node cluster with one internal load balancer that also includes provision to scale up an additional 10 nodes:&lt;/p&gt;

&lt;p&gt;Just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aksip -n 50 -s 10

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will show that you’ll need 1892 IP addresses and therefore a /21 subnet or larger:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"nodes": 50,
"scale": 10,
"maxPods": 30,
"isvc": 1,
"requiredIPs": 1892,
"cidr": "/21"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hope it helps!!!&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni" rel="noopener noreferrer"&gt;Configure Azure CNI networking in Azure Kubernetes Service (AKS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni#configure-maximum---new-clusters" rel="noopener noreferrer"&gt;Configure maximum - new clusters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-faq#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets" rel="noopener noreferrer"&gt;Are there any restrictions on using IP addresses within these subnets&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aks</category>
      <category>kubernetes</category>
      <category>azure</category>
      <category>cni</category>
    </item>
    <item>
      <title>Running k3s inside WSL2 on a Surface Pro X</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Sun, 09 May 2021 17:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/running-k3s-inside-wsl2-on-a-surface-pro-x-gfk</link>
      <guid>https://dev.to/cmendibl3/running-k3s-inside-wsl2-on-a-surface-pro-x-gfk</guid>
<description>&lt;p&gt;I’m a proud owner of a Surface Pro X SQ2, which is an ARM64 device. If you’ve been reading me, you know I like to tinker with kubernetes and therefore I needed a solution for this device.&lt;/p&gt;

&lt;p&gt;I remembered reading about &lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;k3s&lt;/a&gt;, a lightweight kubernetes distro built for IoT &amp;amp; Edge computing, and decided to give it a try.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing k3s in WSL2
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Download the binaries
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/k3s-io/k3s/releases/download/v1.19.10%2Bk3s1/k3s-arm64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Rename the file
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mv k3s-arm64 k3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Make the file executable
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x k3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Move the file to &lt;code&gt;/usr/local/bin&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv k3s /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Copy the k3s config file to &lt;code&gt;$HOME/.kube&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Make current user owner of the k3s config file
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chown $USER $HOME/.kube/k3s.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running k3s
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Run the k3s kubernetes server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo k3s server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Test k3s
&lt;/h2&gt;

&lt;p&gt;From another WSL2 console window:&lt;/p&gt;

&lt;h3&gt;
  
  
  Add the k3s config file to KUBECONFIG environment variable
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/k3s.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
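&lt;p&gt;Note that &lt;code&gt;export&lt;/code&gt; only lasts for the current session. A minimal sketch to persist it, assuming a bash profile (adjust the path for your shell):&lt;/p&gt;

```shell
#!/bin/sh
# Append the KUBECONFIG export to a profile file, only once (idempotent).
LINE='export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/k3s.yaml'

append_once() {
  profile="$1"
  # -x: whole-line match, -F: fixed string; append only if missing.
  grep -qxF "$LINE" "$profile" 2>/dev/null || echo "$LINE" >> "$profile"
}

# PROFILE is an assumption; use .zshrc, .profile, etc. as appropriate.
PROFILE="${PROFILE:-$HOME/.bashrc}"
# append_once "$PROFILE"   # uncomment to persist the setting
```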



&lt;h3&gt;
  
  
  Use k3s context
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config use-context default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Check all running pods
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get po --all-namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get output similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system helm-install-traefik-mhhfz 0/1 Completed 0 20d
kube-system metrics-server-7b4f8b595-7cfsb 1/1 Running 1 20d
kube-system svclb-traefik-pqn56 2/2 Running 2 20d
kube-system coredns-66c464876b-pnpgm 1/1 Running 1 20d
kube-system local-path-provisioner-7ff9579c6-nhnbj 1/1 Running 6 20d
kube-system traefik-5dd496474-94lt5 1/1 Running 1 20d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: k3s uses the &lt;strong&gt;local-path-provisioner&lt;/strong&gt; and saves volume data in the &lt;strong&gt;/var/lib/rancher/k3s/data&lt;/strong&gt; folder.&lt;/p&gt;
&lt;/blockquote&gt;
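&lt;p&gt;Because of this, any PersistentVolumeClaim you create is served from that folder. A minimal sketch of a claim targeting the bundled provisioner (the claim name &lt;code&gt;demo-pvc&lt;/code&gt; is made up; &lt;code&gt;local-path&lt;/code&gt; is k3s’s default storage class):&lt;/p&gt;

```shell
#!/bin/sh
# Write a minimal PVC manifest for k3s's bundled local-path provisioner.
# The claim name "demo-pvc" is made up for illustration.
MANIFEST="$(mktemp)"
printf '%s\n' \
  'apiVersion: v1' \
  'kind: PersistentVolumeClaim' \
  'metadata:' \
  '  name: demo-pvc' \
  'spec:' \
  '  accessModes:' \
  '    - ReadWriteOnce' \
  '  storageClassName: local-path' \
  '  resources:' \
  '    requests:' \
  '      storage: 1Gi' > "$MANIFEST"
echo "manifest written to $MANIFEST"
# kubectl apply -f "$MANIFEST"   # uncomment inside WSL2 to create the claim
```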

&lt;p&gt;Hope it helps!!!&lt;/p&gt;

</description>
      <category>k3s</category>
      <category>wsl2</category>
      <category>surfaceprox</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Deploy AKS + Kubecost with Terraform</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Fri, 30 Apr 2021 19:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/deploy-aks-kubecost-with-terraform-1m16</link>
      <guid>https://dev.to/cmendibl3/deploy-aks-kubecost-with-terraform-1m16</guid>
      <description>&lt;p&gt;This morning I saw this tweet from Mr Brendan Burns:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AKS Cost Monitoring and Governance With Kubecost &lt;a href="https://t.co/OStwIBsuPp" rel="noopener noreferrer"&gt;https://t.co/OStwIBsuPp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— brendandburns (@brendandburns) &lt;a href="https://twitter.com/brendandburns/status/1387933511433154564?ref_src=twsrc%5Etfw" rel="noopener noreferrer"&gt;April 30, 2021&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once you read through it, you’ll see that several steps are needed to achieve &lt;a href="http://blog.kubecost.com/blog/aks-cost/" rel="noopener noreferrer"&gt;AKS Cost Monitoring and Governance With Kubecost&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I’m going to try to save you some time by providing a basic Terraform configuration that gets you up and running in a breeze.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to learn more about &lt;a href="https://www.kubecost.com/" rel="noopener noreferrer"&gt;Kubecost&lt;/a&gt; in the context of AKS and Azure please read the &lt;a href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/aks/eslz-security-governance-and-compliance#cost-governance" rel="noopener noreferrer"&gt;Cost Governance&lt;/a&gt; section of the &lt;a href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/aks/eslz-security-governance-and-compliance#cost-governance" rel="noopener noreferrer"&gt;AKS enterprise-scale platform security governance and compliance&lt;/a&gt; guidelines.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Deploy AKS and &lt;a href="https://www.kubecost.com/" rel="noopener noreferrer"&gt;Kubecost&lt;/a&gt; with Terraform
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Create a &lt;code&gt;provider.tf&lt;/code&gt; file with the following contents:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt; 0.14"
  required_providers {
    azurerm = {
      version = "= 2.57.0"
    }
    azuread = {
      version = "= 1.4.0"
    }
    kubernetes = {
      version = "= 2.1.0"
    }
    helm = {
      version = "= 2.1.2"
    }
  }
}

provider "azurerm" {
  features {}
}

# Configuring the kubernetes provider
# AKS resource name is aks: azurerm_kubernetes_cluster.aks
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}

# Configuring the helm provider
# AKS resource name is aks: azurerm_kubernetes_cluster.aks
provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that you’ll be using &lt;code&gt;azurerm&lt;/code&gt; to deploy Azure services, &lt;code&gt;azuread&lt;/code&gt; to create a Service Principal, and the &lt;code&gt;kubernetes&lt;/code&gt; and &lt;code&gt;helm&lt;/code&gt; providers to install &lt;a href="https://www.kubecost.com/" rel="noopener noreferrer"&gt;Kubecost&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Create a &lt;code&gt;variables.tf&lt;/code&gt; file with the following contents:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Location of the services
variable "location" {
  default = "west europe"
}

# Resource Group Name
variable "resource_group" {
  default = "aks-kubecost"
}

# Name of the AKS cluster
variable "aks_name" {
  default = "aksmsftkubecost"
}

# Name of the Service Principal used by Kubecost
variable "kubecost_sp_name" {
  default = "kubecost"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: Replace the default values with your desired location and names.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Create a &lt;code&gt;main.tf&lt;/code&gt; file with the following contents:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create Resource Group
resource "azurerm_resource_group" "rg" {
  name     = var.resource_group
  location = var.location
}

# Create VNET for AKS
resource "azurerm_virtual_network" "vnet" {
  name                = "private-network"
  address_space       = ["10.0.0.0/8"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

# Create the Subnet for AKS.
resource "azurerm_subnet" "aks" {
  name                 = "aks"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.240.0.0/16"]
}

# Create the AKS cluster.
# Cause this is a test node_count is set to 1 
resource "azurerm_kubernetes_cluster" "aks" {
  name                = var.aks_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = var.aks_name

  default_node_pool {
    name            = "default"
    node_count      = 1
    vm_size         = "Standard_D2s_v3"
    os_disk_size_gb = 30
    os_disk_type    = "Ephemeral"
    vnet_subnet_id  = azurerm_subnet.aks.id
  }

  # Using Managed Identity
  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin = "kubenet"
    network_policy = "calico"
  }

  role_based_access_control {
    enabled = true
  }

  addon_profile {
    kube_dashboard {
      enabled = false
    }
  }
}

# Create Application registration for Kubecost
resource "azuread_application" "kubecost" {
  display_name               = var.kubecost_sp_name
  identifier_uris            = ["http://${var.kubecost_sp_name}"]
}

# Create Service principal for kubecost
resource "azuread_service_principal" "kubecost" {
  application_id = azuread_application.kubecost.application_id
}

# Generate password for the Service Principal
resource "random_password" "passwd" {
  length      = 32
  min_upper   = 4
  min_lower   = 2
  min_numeric = 4
  keepers = {
    aks_app_id = azuread_application.kubecost.id
  }
}

# Create kubecost's Service principal password
resource "azuread_service_principal_password" "main" {
  service_principal_id = azuread_service_principal.kubecost.id
  value                = random_password.passwd.result
  end_date             = "2099-01-01T00:00:00Z"
}

# Get current Subscription
data "azurerm_subscription" "current" {
}

# Create kubecost custom role
resource "azurerm_role_definition" "kubecost" {
  name        = "kubecost_rate_card_query"
  scope       = data.azurerm_subscription.current.id
  description = "kubecost Rate Card query role"

  permissions {
    actions     = [
      "Microsoft.Compute/virtualMachines/vmSizes/read",
      "Microsoft.Resources/subscriptions/locations/read",
      "Microsoft.Resources/providers/read",
      "Microsoft.ContainerService/containerServices/read",
      "Microsoft.Commerce/RateCard/read",
    ]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.current.id
  ]
}

# Assign kubecost's custom role at the subscription level
resource "azurerm_role_assignment" "kubecost" {
  scope                = data.azurerm_subscription.current.id
  role_definition_name = azurerm_role_definition.kubecost.name
  principal_id         = azuread_service_principal.kubecost.object_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Create a &lt;code&gt;kubecost.tf&lt;/code&gt; file with the following contents:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the kubecost namespace
resource "kubernetes_namespace" "kubecost" {
  metadata {
    name = "kubecost"
  }
}

# Install kubecost using the helm chart
resource "helm_release" "kubecost" {
  name       = "kubecost"
  chart      = "cost-analyzer"
  namespace  = "kubecost"
  version    = "1.79.1"
  repository = "https://kubecost.github.io/cost-analyzer/"

  # Set the cluster name
  set {
    name  = "kubecostProductConfigs.clusterName"
    value = var.aks_name
  }

  # Set the currency
  set {
    name  = "kubecostProductConfigs.currencyCode"
    value = "EUR"
  }

  # Set the region
  set {
    name  = "kubecostProductConfigs.azureBillingRegion"
    value = "NL"
  }

  # Generate a secret based on the Azure configuration provided below
  set {
    name  = "kubecostProductConfigs.createServiceKeySecret"
    value = true
  }

  # Azure Subscription ID
  set {
    name  = "kubecostProductConfigs.azureSubscriptionID"
    value = data.azurerm_subscription.current.id
  }

  # Azure Client ID
  set {
    name  = "kubecostProductConfigs.azureClientID"
    value = azuread_application.kubecost.application_id
  }

  # Azure Client Password
  set {
    name  = "kubecostProductConfigs.azureClientPassword"
    value = random_password.passwd.result
  }

  # Azure Tenant ID
  set {
    name  = "kubecostProductConfigs.azureTenantID"
    value = data.azurerm_subscription.current.tenant_id
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration in the previous file installs &lt;a href="https://www.kubecost.com/" rel="noopener noreferrer"&gt;Kubecost&lt;/a&gt; in the AKS cluster. If you want to learn more about the available configuration options please check the following file: &lt;a href="https://github.com/kubecost/cost-analyzer-helm-chart/blob/master/cost-analyzer/values.yaml" rel="noopener noreferrer"&gt;values.yaml&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Deploy the solution:
&lt;/h3&gt;

&lt;p&gt;Run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan -out tf.plan
terraform apply ./tf.plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Test and browse Kubecost:
&lt;/h3&gt;

&lt;p&gt;To check the status of the kubecost pods run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az aks get-credentials -g aks-kubecost -n aksmsftkubecost
kubectl get pods -n kubecost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward -n kubecost svc/kubecost-cost-analyzer 9090:9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and browse to &lt;a href="http://localhost:9090" rel="noopener noreferrer"&gt;http://localhost:9090&lt;/a&gt; so you can start learning!&lt;/p&gt;

&lt;p&gt;Hope it helps! Please find the complete code &lt;a href="https://github.com/cmendible/azure.samples/tree/main/aks_kubecost" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>kubernetes</category>
      <category>kubecost</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Deploy a Private Azure Cloud Shell with Terraform</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Mon, 12 Apr 2021 10:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/deploy-a-private-azure-cloud-shell-with-terraform-5727</link>
      <guid>https://dev.to/cmendibl3/deploy-a-private-azure-cloud-shell-with-terraform-5727</guid>
      <description>&lt;p&gt;By default Cloud Shell sessions run inside a container inside a Microsoft network separate from any resources you may have deployed in Azure. So what happens when you want to access services you have deployed inside a Virtual Network such as a private AKS cluster, a Virtual Machine or Private Endpoint enabled services?&lt;/p&gt;

&lt;p&gt;Well, as you can imagine, the solution is to deploy Cloud Shell into an Azure Virtual Network so that the container it runs on can access your private resources. This solution also protects the backing Storage Account that Cloud Shell uses for user profiles and data, so you end up with a locked-down environment.&lt;/p&gt;

&lt;p&gt;The following diagram shows the solution architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6glqppsh0ba4x803l1oy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6glqppsh0ba4x803l1oy.png" alt="Cloud Shell in an Azure Virtual Network Architecture Diagram" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you need more information about the solution, the services and any limitations please check the documentation here: &lt;a href="https://docs.microsoft.com/en-us/azure/cloud-shell/private-vnet" rel="noopener noreferrer"&gt;Cloud Shell in an Azure Virtual Network&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now let’s see how we can use Terraform to make your Cloud Shell private!&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy a Private Azure Cloud Shell with Terraform
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;If Cloud Shell has been used in the past, the existing clouddrive must be unmounted. To do this run &lt;code&gt;clouddrive unmount&lt;/code&gt; from an active Cloud Shell session.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  1. Create a &lt;code&gt;provider.tf&lt;/code&gt; file with the following contents:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 0.13.5"
}

provider "azurerm" {
  version = "= 2.46.1"
  features {}
}

provider "azuread" {
  version = "= 1.3.0"
}

provider "http" {
  version = "= 2.0.0"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we are using &lt;code&gt;azurerm&lt;/code&gt; to deploy Azure services, &lt;code&gt;azuread&lt;/code&gt; to get some Service Principal information and &lt;code&gt;http&lt;/code&gt; to get your current public ip address so that only you can reach your Cloud Shell.&lt;/p&gt;
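&lt;p&gt;Outside Terraform you can reproduce that lookup yourself. A minimal sketch, assuming the compact JSON shape ipinfo.io returns (the sed-based parse is a simplification; jq would be more robust):&lt;/p&gt;

```shell
#!/bin/sh
# Extract the "ip" field from an ipinfo.io-style JSON body,
# mirroring what jsondecode(...).ip does in the Terraform config.
extract_ip() {
  printf '%s' "$1" | sed -n 's/.*"ip"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# BODY="$(curl -s http://ipinfo.io/json)"   # uncomment for a live lookup
BODY='{"ip":"203.0.113.7","city":"Madrid"}'  # sample response shape
extract_ip "$BODY"
```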

&lt;h3&gt;
  
  
  2. Create a &lt;code&gt;variables.tf&lt;/code&gt; file with the following contents:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable location {
  default = "west europe"
}

variable resource_group {
  default = "&amp;lt;resource group name&amp;gt;"
}

variable "vnet_name" {
  default = "&amp;lt;vnet name&amp;gt;"
}

variable sa_name {
  default = "&amp;lt;Backing Storage Account name&amp;gt;"
}

variable relay_name {
  default = "&amp;lt;Azure Relay name&amp;gt;"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you replace the placeholders with the default values you want to use.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Create a &lt;code&gt;main.tf&lt;/code&gt; file with the following contents:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get Azure Container Instance Service Principal. Amazing right? Cloud Shell uses this Service Principal!
data "azuread_service_principal" "container" {
  display_name = "Azure Container Instance Service"
}

# Create a Resource Group to hold the resources.
resource "azurerm_resource_group" "rg" {
  name     = var.resource_group
  location = var.location
}

# Create a VNET.
resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

# Create a Containers Subnet. Here is where Cloud Shell will run.
resource "azurerm_subnet" "containers" {
  name                 = "cloudshell-containers"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.0.0/24"]

  # Delegate the subnet to "Microsoft.ContainerInstance/containerGroups".
  delegation {
    name = "cloudshell-delegation"

    service_delegation {
      name = "Microsoft.ContainerInstance/containerGroups"
    }
  }

  # Add a service endpoint so Cloud Shell can reach Storage Accounts. At the moment the solution does not work with Private Endpoints for the Storage Account.
  service_endpoints = ["Microsoft.Storage"]
}

# Create a subnet to host Azure Relay service.
resource "azurerm_subnet" "relay" {
  name                 = "relay"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]

  enforce_private_link_endpoint_network_policies = true
}

# Create a network profile for the Cloud Shell containers.
resource "azurerm_network_profile" "networkprofile" {
  name                = "cloudshell"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  container_network_interface {
    name = "cloudshell-containers"

    ip_configuration {
      name      = "ipconfig"
      subnet_id = azurerm_subnet.containers.id
    }
  }
}

# Assign Network Contributor to the Azure Container Instance Service Principal.
resource "azurerm_role_assignment" "network_contributor" {
  scope                = azurerm_network_profile.networkprofile.id
  role_definition_name = "Network Contributor"
  principal_id         = data.azuread_service_principal.container.object_id
}

# Create an Azure Relay namespace.
resource "azurerm_relay_namespace" "relay" {
  name                = var.relay_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  sku_name = "Standard"
}

# Add a private endpoint to the Azure Relay namespace.
resource "azurerm_private_endpoint" "endpoint" {
  name                = "cloudshell-privateendpoint"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  subnet_id           = azurerm_subnet.relay.id

  private_service_connection {
    name                           = "privateendpoint"
    private_connection_resource_id = azurerm_relay_namespace.relay.id
    is_manual_connection           = false
    subresource_names              = ["namespace"]
  }
}

# Assign Contributor to the Azure Container Instance Service Principal.
resource "azurerm_role_assignment" "contributor" {
  scope                = azurerm_relay_namespace.relay.id
  role_definition_name = "Contributor"
  principal_id         = data.azuread_service_principal.container.object_id
}

# Create the Storage Account to hold the Cloud Shell profiles.
resource "azurerm_storage_account" "sa" {
  name                      = var.sa_name
  resource_group_name       = azurerm_resource_group.rg.name
  location                  = azurerm_resource_group.rg.location
  account_tier              = "Standard"
  account_replication_type  = "GRS"
  enable_https_traffic_only = true
}

# Create a file share to hold the user profiles.
resource "azurerm_storage_share" "share" {
  name                 = "profile"
  storage_account_name = azurerm_storage_account.sa.name
  quota                = 6
}

# Get your current public IP.
data "http" "current_public_ip" {
  url = "http://ipinfo.io/json"
  request_headers = {
    Accept = "application/json"
  }
}

# Protect the Storage Account setting the firewall.
# This is done only after the file share is created.
resource "azurerm_storage_account_network_rules" "sa_rules" {
  resource_group_name  = azurerm_resource_group.rg.name
  storage_account_name = azurerm_storage_account.sa.name

  default_action             = "Deny"
  virtual_network_subnet_ids = [azurerm_subnet.containers.id]

  # ip_rules = [
  #   jsondecode(data.http.current_public_ip.body).ip
  # ]

  depends_on = [
    azurerm_storage_share.share
  ]
}

# Create DNS Zone for Relay
resource "azurerm_private_dns_zone" "private" {
  name                = "privatelink.servicebus.windows.net"
  resource_group_name = azurerm_resource_group.rg.name
}

# Create A record for the Relay
resource "azurerm_private_dns_a_record" "relay" {
  name                = var.relay_name
  zone_name           = azurerm_private_dns_zone.private.name
  resource_group_name = azurerm_resource_group.rg.name
  ttl                 = 3600
  records             = [azurerm_private_endpoint.endpoint.private_service_connection[0].private_ip_address]
}

# Link the Private Zone with the VNet
resource "azurerm_private_dns_zone_virtual_network_link" "relay" {
  name                  = "relay"
  resource_group_name   = azurerm_resource_group.rg.name
  private_dns_zone_name = azurerm_private_dns_zone.private.name
  virtual_network_id    = azurerm_virtual_network.vnet.id
}

# Open the relay firewall to local IP
resource "null_resource" "open_relay_firewall" {
  provisioner "local-exec" {
    interpreter = ["powershell"]
    command = "az rest --method put --uri '${azurerm_relay_namespace.relay.id}/networkrulesets/default?api-version=2017-04-01' --body '{\"properties\":{\"defaultAction\":\\\"Deny\\\",\"ipRules\":[{\"ipMask\":\\\"${jsondecode(data.http.current_public_ip.body).ip}\\\"}],\"virtualNetworkRules\":[],\"trustedServiceAccessEnabled\":false}}'"
  }
  depends_on = [
    data.http.current_public_ip,
    azurerm_relay_namespace.relay
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Deploy the solution:
&lt;/h3&gt;

&lt;p&gt;Run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan -out tf.plan
terraform apply ./tf.plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hope it helps! Please find the complete code &lt;a href="https://github.com/cmendible/azure.samples/tree/main/private.cloud.shell" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloudshell</category>
      <category>terraform</category>
      <category>private</category>
    </item>
    <item>
      <title>ASP.NET Core OpenTelemetry Logging</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Fri, 08 Jan 2021 10:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/asp-net-core-opentelemetry-logging-24c6</link>
      <guid>https://dev.to/cmendibl3/asp-net-core-opentelemetry-logging-24c6</guid>
      <description>&lt;p&gt;As you may know I’ve been collaborating with &lt;a href="https://dapr.io/" rel="noopener noreferrer"&gt;Dapr&lt;/a&gt; and I’ve learned that one of the things it enables you to do is to collect traces with the use of the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector" rel="noopener noreferrer"&gt;OpenTelemetry Collector&lt;/a&gt; and push the events to &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview?WT.mc_id=AZ-MVP-5002618" rel="noopener noreferrer"&gt;Azure Application Insights&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After some reading, I checked whether I could also make my ASP.NET Core applications log using the OpenTelemetry Log and Event record definition:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field Name&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Timestamp&lt;/td&gt;
&lt;td&gt;Time when the event occurred.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TraceId&lt;/td&gt;
&lt;td&gt;Request trace id.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SpanId&lt;/td&gt;
&lt;td&gt;Request span id.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TraceFlags&lt;/td&gt;
&lt;td&gt;W3C trace flag.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SeverityText&lt;/td&gt;
&lt;td&gt;The severity text (also known as log level).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SeverityNumber&lt;/td&gt;
&lt;td&gt;Numerical value of the severity.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Name&lt;/td&gt;
&lt;td&gt;Short event identifier.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Body&lt;/td&gt;
&lt;td&gt;The body of the log record.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource&lt;/td&gt;
&lt;td&gt;Describes the source of the log.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Attributes&lt;/td&gt;
&lt;td&gt;Additional information about the event.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So here is how you can do it:&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an ASP.NET Core application
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet new webapi -n aspnet.opentelemetry
cd aspnet.opentelemetry
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add a reference to the OpenTelemetry libraries
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet add package OpenTelemetry.Extensions.Hosting --prerelease
dotnet add package OpenTelemetry.Exporter.Console --prerelease
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Modify the CreateHostBuilder method in Program.cs
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static IHostBuilder CreateHostBuilder(string[] args) =&amp;gt;
  Host.CreateDefaultBuilder(args)
      .ConfigureLogging(logging =&amp;gt;
      {
          logging.ClearProviders();
          logging.AddOpenTelemetry(options =&amp;gt;
          {
              options.AddProcessor(new SimpleExportProcessor&amp;lt;LogRecord&amp;gt;(new ConsoleExporter&amp;lt;LogRecord&amp;gt;(new ConsoleExporterOptions())));
          });
      })
      .ConfigureWebHostDefaults(webBuilder =&amp;gt;
      {
          webBuilder.UseStartup&amp;lt;Startup&amp;gt;();
      });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code first clears all logging providers and then adds OpenTelemetry using the &lt;code&gt;SimpleExportProcessor&lt;/code&gt; and &lt;code&gt;ConsoleExporter&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Please check the &lt;a href="https://github.com/open-telemetry/opentelemetry-dotnet" rel="noopener noreferrer"&gt;OpenTelemetry .NET API&lt;/a&gt; repo to learn more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modify the log settings
&lt;/h2&gt;

&lt;p&gt;Edit the &lt;code&gt;appsettings.Development.json&lt;/code&gt; in order to configure the default log settings using the OpenTelemetry provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Test the program
&lt;/h2&gt;

&lt;p&gt;Run the following command to test the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the logs, formatted as OpenTelemetry log records, written to the console:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;LogRecord.TraceId:&lt;/td&gt;
&lt;td&gt;00000000000000000000000000000000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LogRecord.SpanId:&lt;/td&gt;
&lt;td&gt;0000000000000000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LogRecord.Timestamp:&lt;/td&gt;
&lt;td&gt;2021-01-08T10:36:26.1338054Z&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LogRecord.EventId:&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LogRecord.CategoryName:&lt;/td&gt;
&lt;td&gt;Microsoft.Hosting.Lifetime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LogRecord.LogLevel:&lt;/td&gt;
&lt;td&gt;Information&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LogRecord.TraceFlags:&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LogRecord.State:&lt;/td&gt;
&lt;td&gt;Application is shutting down…&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Both &lt;code&gt;TraceId&lt;/code&gt; and &lt;code&gt;SpanId&lt;/code&gt; are filled in when your application handles a request. If the call is made by Dapr, the values it sends are respected, so you can correlate traces and logs and improve the observability of your solution.&lt;/p&gt;
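&lt;p&gt;Those fields come from the incoming W3C &lt;code&gt;traceparent&lt;/code&gt; header. A hedged sketch that builds a valid header and sends it to the sample API (the ids below are made up, and the &lt;code&gt;/weatherforecast&lt;/code&gt; route assumes the default webapi template):&lt;/p&gt;

```shell
#!/bin/sh
# Build a W3C traceparent header: version-traceid-spanid-flags
# (2 + 32 + 16 + 2 lowercase hex characters). The ids are made up.
TRACE_ID="4bf92f3577b34da6a3ce929d0e0e4736"
SPAN_ID="00f067aa0ba902b7"
TRACEPARENT="00-${TRACE_ID}-${SPAN_ID}-01"
echo "$TRACEPARENT"

# Send it to the running sample (uncomment while `dotnet run` is active):
# curl -k -H "traceparent: $TRACEPARENT" https://localhost:5001/weatherforecast
```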

&lt;p&gt;Hope it helps! and please find the complete code &lt;a href="https://github.com/cmendible/dotnetcore.samples/tree/main/aspnet.opentelemetry" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aspnetcore</category>
      <category>opentelemetry</category>
      <category>dotnet</category>
      <category>logging</category>
    </item>
    <item>
      <title>Dapr: Reading local secrets with .NET 5</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Wed, 18 Nov 2020 10:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/dapr-reading-local-secrets-with-net-5-454d</link>
      <guid>https://dev.to/cmendibl3/dapr-reading-local-secrets-with-net-5-454d</guid>
      <description>&lt;p&gt;Now that &lt;a href="https://dapr.io/" rel="noopener noreferrer"&gt;Dapr&lt;/a&gt; is about to hit version 1.0.0, let me show you how easy it is to read secrets with a &lt;a href="https://dotnet.microsoft.com/?WT.mc_id=AZ-MVP-5002618" rel="noopener noreferrer"&gt;.NET 5&lt;/a&gt; console application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a console application
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet new console -n DaprSecretSample
cd DaprSecretSample
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add a reference to the Dapr.Client library
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet add package Dapr.Client --prerelease
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create a Secret Store component
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;components&lt;/code&gt; folder and inside place a file named &lt;code&gt;secretstore.yaml&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: starwarssecrets
spec:
  type: secretstores.local.file
  metadata:
    - name: secretsFile
      value: ./secrets.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This component will enable Dapr, and therefore your application, to read secrets from a &lt;code&gt;secrets.json&lt;/code&gt; file.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Create a Secrets file
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;secrets.json&lt;/code&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "mandalorianSecret": "this is the way!"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Replace the contents of Program.cs
&lt;/h2&gt;

&lt;p&gt;Replace the contents of &lt;code&gt;Program.cs&lt;/code&gt; with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System;
using Dapr.Client;

var client = new DaprClientBuilder().Build();
var secret = await client.GetSecretAsync("starwarssecrets", "mandalorianSecret");
Console.WriteLine($"Secret from local file: {secret["mandalorianSecret"]}");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
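<p>As a side note, <code>GetSecretAsync</code> returns the secret as a dictionary keyed by the secret name, which is why the sample indexes the result with <code>secret["mandalorianSecret"]</code>. The sketch below (a stand-in <code>Dictionary</code> simulates the response; no Dapr sidecar is involved) shows a safer lookup than indexing directly:</p>

```csharp
using System;
using System.Collections.Generic;

// Sketch (not the Dapr API itself): simulate the dictionary that
// GetSecretAsync would return and read the value defensively.
class SecretLookupSketch
{
    static void Main()
    {
        // Stand-in for the response of GetSecretAsync("starwarssecrets", "mandalorianSecret").
        var secret = new Dictionary<string, string>
        {
            ["mandalorianSecret"] = "this is the way!"
        };

        // TryGetValue avoids a KeyNotFoundException if the key is missing.
        if (secret.TryGetValue("mandalorianSecret", out var value))
        {
            Console.WriteLine($"Secret from local file: {value}");
        }
        else
        {
            Console.WriteLine("Secret not found.");
        }
    }
}
```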



&lt;h2&gt;
  
  
  Test the program
&lt;/h2&gt;

&lt;p&gt;Run the following command to test the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dapr run --app-id secretapp --components-path .\components\ -- dotnet run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Experiment
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;One of the amazing things about Dapr is that your code stays the same even if you change the underlying secret store.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In your local environment you can also try reading “secrets” from environment variables. To do so, replace the contents of the &lt;code&gt;./components/secretstore.yaml&lt;/code&gt; file with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: starwarssecrets
spec:
  type: secretstores.local.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be sure to set, in your system, an environment variable named &lt;code&gt;mandalorianSecret&lt;/code&gt;, for instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export mandalorianSecret="May The Force be with you"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and run the application again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dapr run --app-id secretapp --components-path .\components\ -- dotnet run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; I recommend using the secret stores shown in this post only for development or test scenarios. For a complete list of supported secret stores check the following repo: &lt;a href="https://github.com/dapr/components-contrib/tree/master/secretstores" rel="noopener noreferrer"&gt;https://github.com/dapr/components-contrib/tree/master/secretstores&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
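<p>To illustrate how little changes across stores: if you later run Dapr inside a Kubernetes cluster, pointing the same application at the built-in Kubernetes secret store only requires swapping the component definition (a sketch; keeping the component name <code>starwarssecrets</code> means the application code is untouched):</p>

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: starwarssecrets
spec:
  type: secretstores.kubernetes
```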

&lt;p&gt;Hope it helps!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>dapr</category>
      <category>dotnet</category>
      <category>secrets</category>
    </item>
    <item>
      <title>What I Learned From Hacktoberfest 2020</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Sun, 18 Oct 2020 10:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/what-i-learned-from-hacktoberfest-2020-9fe</link>
      <guid>https://dev.to/cmendibl3/what-i-learned-from-hacktoberfest-2020-9fe</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5haps693rfgaa1t0eg08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5haps693rfgaa1t0eg08.png" alt="hacktoberfest" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hacktoberfest.digitalocean.com/" rel="noopener noreferrer"&gt;Hacktoberfest®&lt;/a&gt; is an open global event where people all around de globe contribute to open source projects.&lt;/p&gt;

&lt;p&gt;The idea behind &lt;a href="https://hacktoberfest.digitalocean.com/" rel="noopener noreferrer"&gt;Hacktoberfest®&lt;/a&gt; is great: in my opinion it encourages and motivates contributions, especially from those who don’t know where to start with OSS. Sadly, what we saw this year was many people, let’s call them trolls, spamming repos with useless pull requests in order to claim the nice tee. The &lt;a href="https://hacktoberfest.digitalocean.com/" rel="noopener noreferrer"&gt;Hacktoberfest®&lt;/a&gt; organization reacted quickly to fix the situation and the rules of the game have been changed: &lt;a href="https://hacktoberfest.digitalocean.com/hacktoberfest-update" rel="noopener noreferrer"&gt;the event is now officially opt-in only for projects and maintainers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now let me tell you a bit of what I did:&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned From Hacktoberfest
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Azure Arc enabled Kubernetes:
&lt;/h3&gt;

&lt;p&gt;I learned how to enable container monitoring on &lt;a href="https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/overview?WT.mc_id=AZ-MVP-5002618" rel="noopener noreferrer"&gt;Azure Arc enabled Kubernetes&lt;/a&gt;, and in the process I noticed that a PowerShell command in the documentation was using &lt;code&gt;curl&lt;/code&gt; instead of &lt;code&gt;Invoke-WebRequest&lt;/code&gt;, so I submitted the following PR: &lt;a href="https://github.com/MicrosoftDocs/azure-docs/pull/63625" rel="noopener noreferrer"&gt;Replacing wget with iwr in the powershell sample&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  K8Spin
&lt;/h3&gt;

&lt;p&gt;I continued learning about &lt;a href="https://github.com/k8spin/k8spin-operator" rel="noopener noreferrer"&gt;K8Spin&lt;/a&gt;: a nice OSS project which adds three new hierarchy concepts (Organizations, Tenants, and Spaces) to your Kubernetes clusters in order to enable multi-tenancy.&lt;/p&gt;

&lt;p&gt;Collaborating with the &lt;a href="https://github.com/k8spin/k8spin-operator" rel="noopener noreferrer"&gt;K8Spin&lt;/a&gt; team was great and I fixed two issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A broken link in one of their repos: &lt;a href="https://github.com/k8spin/oneinfra-worker-modules/pull/3" rel="noopener noreferrer"&gt;Fixing Issue#1: oneinfra link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Added a Helm chart to install the &lt;a href="https://github.com/k8spin/k8spin-operator" rel="noopener noreferrer"&gt;K8Spin&lt;/a&gt; operator. I had created some charts in the past, but for this one I had to learn about &lt;a href="https://helm.sh/docs/chart_best_practices/" rel="noopener noreferrer"&gt;chart best practices&lt;/a&gt; and apply them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a bonus I also submitted a PR adding a developer container to their repo, in order to enable &lt;a href="https://github.com/features/codespaces" rel="noopener noreferrer"&gt;Codespaces&lt;/a&gt; and help newcomers start contributing without having to set up Python and other dependencies: &lt;a href="https://github.com/k8spin/k8spin-operator/pull/21" rel="noopener noreferrer"&gt;Adding dev container definition&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Bonus
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;I continued my journey learning and collaborating with Dapr, and this time I submitted a PR to remove &lt;a href="https://github.com/dapr/components-contrib/pull/496" rel="noopener noreferrer"&gt;a component from Dapr&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;I had some SEO issues with my blog, and it turns out that the theme I use was not adding the post title to the corresponding tag pages (&lt;a href="https://github.com/avianto/hugo-kiera/pull/41" rel="noopener noreferrer"&gt;https://github.com/avianto/hugo-kiera/pull/41&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Glad to Help!&lt;/p&gt;

</description>
      <category>hacktoberfest</category>
    </item>
    <item>
      <title>Managing Terraform Cloud with .NET Core</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Wed, 09 Sep 2020 10:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/managing-terraform-cloud-with-net-core-2d48</link>
      <guid>https://dev.to/cmendibl3/managing-terraform-cloud-with-net-core-2d48</guid>
      <description>&lt;p&gt;Today I’m going to show you how to manage &lt;a href="https://app.terraform.io/" rel="noopener noreferrer"&gt;Terraform Cloud&lt;/a&gt; with .NET Core using the &lt;a href="https://github.com/everis-technology/Tfe.NetClient" rel="noopener noreferrer"&gt;Tfe.NetClient&lt;/a&gt; library.&lt;/p&gt;

&lt;p&gt;The idea is to create a simple console application that will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add GitHub as a &lt;a href="https://www.terraform.io/docs/cloud/vcs/index.html" rel="noopener noreferrer"&gt;VCS Provider&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create a &lt;a href="https://www.terraform.io/docs/cloud/workspaces/index.html" rel="noopener noreferrer"&gt;Workspace&lt;/a&gt; conected to a GitHub repo where your Terraform files live.&lt;/li&gt;
&lt;li&gt;Create a &lt;a href="https://www.terraform.io/docs/cloud/workspaces/variables.html" rel="noopener noreferrer"&gt;variable&lt;/a&gt; in the workspace.&lt;/li&gt;
&lt;li&gt;Create a Run (Plan) based on the Terraform files.&lt;/li&gt;
&lt;li&gt;Apply the Run.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/everis-technology/Tfe.NetClient" rel="noopener noreferrer"&gt;Tfe.NetClient&lt;/a&gt; is still in alpha and not every Terraform Cloud API or feature is present. Please feel free to submit any issues, bugs or pull requests.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;a href="https://app.terraform.io/" rel="noopener noreferrer"&gt;Terraform Cloud&lt;/a&gt; Account (You can create a &lt;a href="https://www.terraform.io/docs/cloud/paid.html#free-organizations" rel="noopener noreferrer"&gt;Free Organization&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://app.terraform.io/" rel="noopener noreferrer"&gt;Terraform Cloud&lt;/a&gt; Organization&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://app.terraform.io/" rel="noopener noreferrer"&gt;Terraform Cloud&lt;/a&gt; &lt;a href="https://www.terraform.io/docs/cloud/users-teams-organizations/api-tokens.html#organization-api-tokens" rel="noopener noreferrer"&gt;Organization Token&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://app.terraform.io/" rel="noopener noreferrer"&gt;Terraform Cloud&lt;/a&gt; &lt;a href="https://www.terraform.io/docs/cloud/users-teams-organizations/api-tokens.html#team-api-tokens" rel="noopener noreferrer"&gt;Team Token&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token" rel="noopener noreferrer"&gt;GitHub Personal Access Token&lt;/a&gt; with the following permissions:

&lt;ul&gt;
&lt;li&gt;repo: repo:status Access commit status&lt;/li&gt;
&lt;li&gt;repo: repo_deployment Access deployment status&lt;/li&gt;
&lt;li&gt;repo: public_repo Access public repositories&lt;/li&gt;
&lt;li&gt;repo: repo:invite Access repository invitations&lt;/li&gt;
&lt;li&gt;repo: security_events Read and write security events&lt;/li&gt;
&lt;li&gt;workflow&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;A GitHub Repo with the Terraform script you want to run.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. Create a folder for your new project
&lt;/h2&gt;




&lt;p&gt;Open a command prompt and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir TerraformCloud
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Create the project
&lt;/h2&gt;






&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd TerraformCloud
dotnet new console
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Add a reference to Tfe.NetClient
&lt;/h2&gt;






&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet add package Tfe.NetClient -v 0.1.0
dotnet restore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Replace the contents of Program.cs with the following code
&lt;/h2&gt;






&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace TerraformCloud
{
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Tfe.NetClient;
    using Tfe.NetClient.OAuthClients;
    using Tfe.NetClient.Runs;
    using Tfe.NetClient.Workspaces;
    using Tfe.NetClient.WorkspaceVariables;

    class Program
    {
        static async Task Main(string[] args)
        {
            // The values of these variables are hardcoded here for simplicity; in a real application they should come from configuration.
            var organizationName = "&amp;lt;organizationName&amp;gt;";
            var organizationToken = "&amp;lt;organizationToken&amp;gt;";
            var teamToken = "&amp;lt;teamToken&amp;gt;";
            var gitHubToken = "&amp;lt;GitHub Personal Access Token&amp;gt;";
            var gitHubRepo = "&amp;lt;github user or organization name&amp;gt;/&amp;lt;repo name&amp;gt;"; // i.e. cmendible/terraform-hello-world

            // Create an HttpClient
            var httpClient = new HttpClient();

            // Create the configuration used by the TFE client.
            // For management tasks you'll need to connect to Terraform Cloud using an Organization Token.
            var config = new TfeConfig(organizationToken, httpClient);

            // Create the TFE client.
            var client = new TfeClient(config);

            // Connect Terraform Cloud and GitHub adding GitHub as a VCS Provider.
            var oauthClientsRequest = new OAuthClientsRequest();
            oauthClientsRequest.Data.Attributes.ServiceProvider = "github";
            oauthClientsRequest.Data.Attributes.HttpUrl = new Uri("https://github.com");
            oauthClientsRequest.Data.Attributes.ApiUrl = new Uri("https://api.github.com");
            oauthClientsRequest.Data.Attributes.OAuthTokenString = gitHubToken; // Use the GitHub Personal Access Token
            var oauthResult = await client.OAuthClient.CreateAsync(organizationName, oauthClientsRequest);

            // Get the OAuthToken.
            var oauthTokenId = oauthResult.Data.Relationships.OAuthTokens.Data[0].Id;

            // Create a Workspace connected to a GitHub repo.
            var workspacesRequest = new WorkspacesRequest();
            workspacesRequest.Data.Attributes.Name = "my-workspace";
            workspacesRequest.Data.Attributes.VcsRepo = new RequestVcsRepo();
            workspacesRequest.Data.Attributes.VcsRepo.Identifier = gitHubRepo; // Use the GitHub Repo
            workspacesRequest.Data.Attributes.VcsRepo.OauthTokenId = oauthTokenId;
            workspacesRequest.Data.Attributes.VcsRepo.Branch = "";
            workspacesRequest.Data.Attributes.VcsRepo.DefaultBranch = true;
            var workspaceResult = await client.Workspace.CreateAsync(organizationName, workspacesRequest);

            // Get the Workspace Id so we can add variables or request a plan or apply.
            var workspaceId = workspaceResult.Data.Id;

            // Create a variable in the workspace.
            // You can hide the value by setting the Sensitive attribute to true.
            // If you want to set an environment variable, change the Category attribute to "env".
            // You'll have to create as many variables as your script needs.
            var workspaceVariablesRequest = new WorkspaceVariablesRequest();
            workspaceVariablesRequest.Data.Attributes.Key = "variable_1";
            workspaceVariablesRequest.Data.Attributes.Value = "variable_1_value";
            workspaceVariablesRequest.Data.Attributes.Description = "variable_1 description";
            workspaceVariablesRequest.Data.Attributes.Category = "terraform";
            workspaceVariablesRequest.Data.Attributes.Hcl = false;
            workspaceVariablesRequest.Data.Attributes.Sensitive = false;
            var variableResult = await client.WorkspaceVariable.CreateAsync(workspaceId, workspaceVariablesRequest);

            // Get the workspace by name.
            var workspace = await client.Workspace.ShowAsync(organizationName, "my-workspace");

            // To create Runs and apply them you need to use a Team Token,
            // so create a new TfeClient.
            var runsClient = new TfeClient(new TfeConfig(teamToken, new HttpClient()));

            // Create the Run.
            // This is the equivalent of running: terraform plan.
            var runsRequest = new RunsRequest();
            runsRequest.Data.Attributes.IsDestroy = false;
            runsRequest.Data.Attributes.Message = "Triggered by .NET Core";
            runsRequest.Data.Relationships.Workspace.Data.Type = "workspaces";
            runsRequest.Data.Relationships.Workspace.Data.Id = workspace.Data.Id;
            var runsResult = await runsClient.Run.CreateAsync(runsRequest);

            // Get the Run Id. You'll need it to check the state of the Run and apply it if possible.
            var runId = runsResult.Data.Id;

            var ready = false;
            while (!ready)
            {
                // Wait for the Run to be planned.
                await Task.Delay(5000);
                var run = await client.Run.ShowAsync(runId);
                ready = run.Data.Attributes.Status == "planned";

                // Throw an exception if the Run status is: errored.
                if (run.Data.Attributes.Status == "errored") {
                    throw new Exception("Plan failed...");
                }
            }

            // If the Run is planned then Apply your configuration.
            if (ready)
            {
                await runsClient.Run.ApplyAsync(runId, null);
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/everis-technology/Tfe.NetClient" rel="noopener noreferrer"&gt;Tfe.NetClient&lt;/a&gt; is still in early stages of development and the resulting code is very verbose and prone to errors. We will address this in a future relases introducing the use of enums and perhaps a fluent API.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  5. Run the program
&lt;/h2&gt;




&lt;p&gt;Run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Check the results
&lt;/h2&gt;




&lt;p&gt;Log In to &lt;a href="https://app.terraform.io/" rel="noopener noreferrer"&gt;Terraform Cloud&lt;/a&gt; and check the status of the new workspace.&lt;/p&gt;

&lt;p&gt;Hope it helps!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>dotnetcore</category>
    </item>
    <item>
      <title>Azure Functions: use Blob Trigger with Private Endpoint</title>
      <dc:creator>Carlos Mendible</dc:creator>
      <pubDate>Mon, 18 May 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/cmendibl3/azure-functions-use-blob-trigger-with-private-endpoint-469l</link>
      <guid>https://dev.to/cmendibl3/azure-functions-use-blob-trigger-with-private-endpoint-469l</guid>
      <description>&lt;p&gt;The intent of this post is to help you understand how to connect an Azure Function to a Storage Account privately so all traffic flows through a VNet therefore enhancing the security of your solutions and blobs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Case:
&lt;/h2&gt;

&lt;p&gt;Suppose you have the following Azure Function written in C#, which simply copies a blob from one container to another:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace Secured.Function
{
    public static class SecureCopy
    {
        [FunctionName("SecureCopy")]
        public static void Run(
        [BlobTrigger("input/{name}", Connection = "privatecfm_STORAGE")] Stream myBlob,
        [Blob("output/{name}", FileAccess.Write, Connection = "privatecfm_STORAGE")] Stream copy,
        ILogger log)
        {
            myBlob.CopyTo(copy);
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In this case the storage account used for the blob trigger and the output binding has a public endpoint exposed to the internet. You can secure it using features such as the Storage Account firewall and the new &lt;strong&gt;private endpoints&lt;/strong&gt;, which allow clients on a virtual network (VNet) to securely access data over a &lt;a href="https://docs.microsoft.com/en-us/azure/private-link/private-link-overview"&gt;Private Link&lt;/a&gt;. The private endpoint uses an IP address from the VNet address space for your storage account service. [1]&lt;/p&gt;

&lt;p&gt;With those features we can lockdown all inbound traffic to the Storage Account to only accept calls from inside a VNet, so the next step is to enable a feature for Azure Functions that will give your function app access to the resources in the VNet: &lt;strong&gt;Azure App Service VNet Integration feature&lt;/strong&gt; [2].&lt;/p&gt;

&lt;p&gt;The following sketch shows how this works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z4ZQTFws--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://carlos.mendible.com/assets/img/posts/function_private_endpoint_sa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z4ZQTFws--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://carlos.mendible.com/assets/img/posts/function_private_endpoint_sa.png" alt="Solution Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Azure Function is integrated with a VNet using Regional VNet Integration (blue line).&lt;/li&gt;
&lt;li&gt;The Storage Account (shown on the right) has a Private Endpoint which assigns a private IP to the Storage Account.&lt;/li&gt;
&lt;li&gt;Traffic (red line) from the Azure Function flows through the VNet, the Private Endpoint and reaches the Storage Account.&lt;/li&gt;
&lt;li&gt;The Storage Account shown on the left is used for the core services of the Azure Function and, at the time of writing, can’t be protected using private endpoints.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But wait, there is one more thing: you will need to add an Azure Private DNS Zone so the Azure Function can resolve the name of the Storage Account to its private IP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The solution requires the PremiumV2 or Elastic Premium pricing plan for the Azure Function.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying the Infrastructure with Terraform
&lt;/h2&gt;

&lt;p&gt;We’ll be using Terraform (version &amp;gt; 0.12) to deploy the solution. Start by creating a &lt;strong&gt;providers.tf&lt;/strong&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt; 0.12"
}

provider "azurerm" {
  version = "&amp;gt;= 2.0"
  features {}
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Define the following variables in a &lt;strong&gt;variables.tf&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Azure Resource Location
variable location {
  default = "west europe"
}

# Azure Resource Group Name
variable resource_group {
  default = "private-endpoint"
}

# Name of the Storage Account you'll expose through the private endpoint
variable sa_name {
  default = "privatecfm"
}

# Name of the Storage Account backing the Azure Function
variable function_required_sa {
  default = "privatecfmfunc"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Create a &lt;strong&gt;main.tf&lt;/strong&gt; file with the following contents (make sure you read the comments to understand the manifest):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create Resource Group
resource "azurerm_resource_group" "rg" {
  name     = var.resource_group
  location = var.location
}

# Create VNet
resource "azurerm_virtual_network" "vnet" {
  name                = "private-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  # Use the Private DNS Zone. That's right, we have to add this magical IP (Azure's virtual DNS server) here.
  dns_servers = ["168.63.129.16"]
}

# Create the Subnet for the Azure Function. This is the subnet where we'll enable VNet Integration.
resource "azurerm_subnet" "service" {
  name                 = "service"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]

  enforce_private_link_service_network_policies = true

  # Delegate the subnet to "Microsoft.Web/serverFarms"
  delegation {
    name = "acctestdelegation"

    service_delegation {
      name    = "Microsoft.Web/serverFarms"
      actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
    }
  }
}

# Create the Subnet for the private endpoints. This is where the IP of the private endpoint will live.
resource "azurerm_subnet" "endpoint" {
  name                 = "endpoint"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.2.0/24"]

  enforce_private_link_endpoint_network_policies = true
}

# Get the current public IP. We'll need it so we can access the Storage Account from our PC.
data "http" "current_public_ip" {
  url = "http://ipinfo.io/json"
  request_headers = {
    Accept = "application/json"
  }
}

# Create the "private" Storage Account.
resource "azurerm_storage_account" "sa" {
  name                      = var.sa_name
  resource_group_name       = azurerm_resource_group.rg.name
  location                  = azurerm_resource_group.rg.location
  account_tier              = "Standard"
  account_replication_type  = "GRS"
  enable_https_traffic_only = true
  # Enable the firewall, allowing traffic only from our PC's public IP.
  network_rules {
    default_action             = "Deny"
    virtual_network_subnet_ids = []
    ip_rules = [
      jsondecode(data.http.current_public_ip.body).ip
    ]
  }
}

# Create input container
resource "azurerm_storage_container" "input" {
  name                  = "input"
  container_access_type = "private"
  storage_account_name  = azurerm_storage_account.sa.name
}

# Create output container
resource "azurerm_storage_container" "output" {
  name                  = "output"
  container_access_type = "private"
  storage_account_name  = azurerm_storage_account.sa.name
}

# Create the Private endpoint. This is where the Storage account gets a private IP inside the VNet.
resource "azurerm_private_endpoint" "endpoint" {
  name                = "sa-endpoint"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  subnet_id           = azurerm_subnet.endpoint.id

  private_service_connection {
    name                           = "sa-privateserviceconnection"
    private_connection_resource_id = azurerm_storage_account.sa.id
    is_manual_connection           = false
    subresource_names              = ["blob"]
  }
}

# Create the blob.core.windows.net Private DNS Zone
resource "azurerm_private_dns_zone" "private" {
  name                = "blob.core.windows.net"
  resource_group_name = azurerm_resource_group.rg.name
}

# Create an A record pointing to the Storage Account private endpoint
resource "azurerm_private_dns_a_record" "sa" {
  name                = var.sa_name
  zone_name           = azurerm_private_dns_zone.private.name
  resource_group_name = azurerm_resource_group.rg.name
  ttl                 = 3600
  records             = [azurerm_private_endpoint.endpoint.private_service_connection[0].private_ip_address]
}

# Link the Private Zone with the VNet
resource "azurerm_private_dns_zone_virtual_network_link" "sa" {
  name                  = "test"
  resource_group_name   = azurerm_resource_group.rg.name
  private_dns_zone_name = azurerm_private_dns_zone.private.name
  virtual_network_id    = azurerm_virtual_network.vnet.id
}

# Create the Storage Account required by Azure Functions.
resource "azurerm_storage_account" "function_required_sa" {
  name                      = var.function_required_sa
  resource_group_name       = azurerm_resource_group.rg.name
  location                  = azurerm_resource_group.rg.location
  account_tier              = "Standard"
  account_replication_type  = "GRS"
  enable_https_traffic_only = true
}

# Create a container to hold the Azure Function Zip
resource "azurerm_storage_container" "functions" {
  name                  = "function-releases"
  storage_account_name  = azurerm_storage_account.function_required_sa.name
  container_access_type = "private"
}

# Create a blob with the Azure Function zip
resource "azurerm_storage_blob" "function" {
  name                   = "securecopy.zip"
  storage_account_name   = azurerm_storage_account.function_required_sa.name
  storage_container_name = azurerm_storage_container.functions.name
  type                   = "Block"
  source                 = "./securecopy.zip"
}

# Create a SAS token so the Function can access the blob and deploy the zip
data "azurerm_storage_account_sas" "sas" {
  connection_string = azurerm_storage_account.function_required_sa.primary_connection_string
  # The storage account enforces HTTPS-only traffic, so the SAS can be HTTPS-only too
  https_only        = true
  resource_types {
    service   = false
    container = false
    object    = true
  }
  services {
    blob  = true
    queue = false
    table = false
    file  = false
  }
  start  = "2020-05-18"
  expiry = "2025-05-18"
  permissions {
    read    = true
    write   = false
    delete  = false
    list    = false
    add     = false
    create  = false
    update  = false
    process = false
  }
}

# Create the Azure Function plan (Elastic Premium) 
resource "azurerm_app_service_plan" "plan" {
  name                = "azure-functions-test-service-plan"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  kind = "elastic"
  sku {
    tier     = "ElasticPremium"
    size     = "EP1"
    capacity = 1
  }
}

# Create Application Insights
resource "azurerm_application_insights" "ai" {
  name                = "func-pe-test"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  application_type    = "web"
  retention_in_days   = 90
}

# Create the Azure Function App
resource "azurerm_function_app" "func_app" {
  name                       = "func-pe-test"
  location                   = azurerm_resource_group.rg.location
  resource_group_name        = azurerm_resource_group.rg.name
  app_service_plan_id        = azurerm_app_service_plan.plan.id
  storage_account_name       = azurerm_storage_account.function_required_sa.name
  storage_account_access_key = azurerm_storage_account.function_required_sa.primary_access_key
  version                    = "~3"
  # https_only is a top-level argument of the resource, not an app setting
  https_only                 = true

  app_settings = {
    APPINSIGHTS_INSTRUMENTATIONKEY = azurerm_application_insights.ai.instrumentation_key
    privatecfm_STORAGE             = azurerm_storage_account.sa.primary_connection_string
    # With this setting we'll force all outbound traffic through the VNet
    WEBSITE_VNET_ROUTE_ALL = "1"
    # Properties used to deploy the zip
    HASH            = filesha256("./securecopy.zip")
    WEBSITE_USE_ZIP = "https://${azurerm_storage_account.function_required_sa.name}.blob.core.windows.net/${azurerm_storage_container.functions.name}/${azurerm_storage_blob.function.name}${data.azurerm_storage_account_sas.sas.sas}"
  }
}

# Enable Regional VNet integration. Function --&amp;gt; service Subnet 
resource "azurerm_app_service_virtual_network_swift_connection" "vnet_integration" {
  app_service_id = azurerm_function_app.func_app.id
  subnet_id      = azurerm_subnet.service.id
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
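&lt;p&gt;Note that regional VNet integration requires the integration subnet to be delegated to &lt;strong&gt;Microsoft.Web/serverFarms&lt;/strong&gt;. If the &lt;strong&gt;service&lt;/strong&gt; subnet declared earlier doesn’t already include it, the delegation looks roughly like this (a sketch; the subnet name and address space are assumptions):&lt;/p&gt;

```hcl
# Sketch: the subnet used for regional VNet integration must be
# delegated to Microsoft.Web/serverFarms (name and CIDR are assumptions).
resource "azurerm_subnet" "service" {
  name                 = "service"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]

  delegation {
    name = "appservice"
    service_delegation {
      name    = "Microsoft.Web/serverFarms"
      actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
    }
  }
}
```

&lt;p&gt;Without this delegation the &lt;strong&gt;azurerm_app_service_virtual_network_swift_connection&lt;/strong&gt; resource will fail to deploy.&lt;/p&gt;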



&lt;p&gt;Now download, into your working folder, the &lt;strong&gt;&lt;a href="https://github.com/cmendible/azure.samples/blob/master/function_sa_private_endpoint/deploy/securecopy.zip"&gt;securecopy.zip&lt;/a&gt;&lt;/strong&gt; file containing the sample Azure Function or create a zip, with the same name, containing your own code.&lt;/p&gt;

&lt;p&gt;Deploy the solution running the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
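&lt;p&gt;The configuration expects values for the variables used above (&lt;strong&gt;sa_name&lt;/strong&gt; and &lt;strong&gt;function_required_sa&lt;/strong&gt;); you can pass them on the command line, for example (the values below are placeholders, and storage account names must be globally unique):&lt;/p&gt;

```shell
# Sketch: supply the storage account names as variables (example values).
terraform apply \
  -var "sa_name=mysaname" \
  -var "function_required_sa=myfuncsaname"
```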



&lt;h2&gt;
  
  
  Test the solution
&lt;/h2&gt;

&lt;p&gt;Use the Azure portal or Storage Explorer to upload a file to the &lt;strong&gt;input&lt;/strong&gt; container. After a few seconds you should find a copy of the file in the &lt;strong&gt;output&lt;/strong&gt; container.&lt;/p&gt;
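&lt;p&gt;You can also run the test from the Azure CLI (a sketch; &lt;strong&gt;SA_NAME&lt;/strong&gt; is a placeholder for the value you used for &lt;strong&gt;sa_name&lt;/strong&gt;, and your account needs a data-plane role such as Storage Blob Data Contributor for &lt;strong&gt;--auth-mode login&lt;/strong&gt; to work):&lt;/p&gt;

```shell
# Sketch: upload a test file to the input container, then check
# the output container for the copy (SA_NAME is a placeholder).
SA_NAME="mysaname"

echo "hello" > test.txt

# Upload to the input container
az storage blob upload \
  --account-name "$SA_NAME" \
  --container-name input \
  --name test.txt \
  --file test.txt \
  --auth-mode login

# After a few seconds, the copy should appear in the output container
az storage blob list \
  --account-name "$SA_NAME" \
  --container-name output \
  --auth-mode login \
  --query "[].name" -o tsv
```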

&lt;h2&gt;
  
  
  VNET Integration Name Resolution Test
&lt;/h2&gt;

&lt;p&gt;If the previous test didn’t work, connect to the Azure Function through Kudu or the portal Console and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nameresolver &amp;lt;name of the storage account&amp;gt;.blob.core.windows.net
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The output of the command should show &lt;strong&gt;10.0.2.4&lt;/strong&gt; (the private endpoint IP) as the resolved address. If it returns the storage account’s public IP instead, review the Private DNS Zone, the A record, and the VNet link configuration. [3]&lt;/p&gt;

&lt;p&gt;Hope it helps!&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[1] &lt;a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-private-endpoints"&gt;Use private endpoints for Azure Storage&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;[2] Integrate your app with an Azure virtual network: &lt;a href="https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet#regional-vnet-integration"&gt;Regional VNET Integration&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;[3] Integrate your app with an Azure virtual network: &lt;a href="https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet#troubleshooting"&gt;Troubleshooting&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>functions</category>
      <category>privateendpoint</category>
      <category>blob</category>
    </item>
  </channel>
</rss>
