<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nico Meisenzahl</title>
    <description>The latest articles on DEV Community by Nico Meisenzahl (@nmeisenzahl).</description>
    <link>https://dev.to/nmeisenzahl</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F408865%2F13f13296-8bea-4e3c-a27d-1e2dbcd459b3.png</url>
      <title>DEV Community: Nico Meisenzahl</title>
      <link>https://dev.to/nmeisenzahl</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nmeisenzahl"/>
    <language>en</language>
    <item>
      <title>Azure Kubernetes Service — Next level persistent storage with Azure Disk CSI driver</title>
      <dc:creator>Nico Meisenzahl</dc:creator>
      <pubDate>Wed, 13 Oct 2021 17:01:06 +0000</pubDate>
      <link>https://dev.to/nmeisenzahl/azure-kubernetes-service-next-level-persistent-storage-with-azure-disk-csi-driver-537k</link>
      <guid>https://dev.to/nmeisenzahl/azure-kubernetes-service-next-level-persistent-storage-with-azure-disk-csi-driver-537k</guid>
      <description>&lt;p&gt;When talking about persistent storage with Kubernetes Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) are our tools of choice. We can use them with Azure Disks and Azure Files for a long time now. But now it is time to bring them to the next level!&lt;/p&gt;

&lt;p&gt;This post mainly focuses on Azure Disks (don’t forget to review the links below to get more details on Azure Files).&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s new?
&lt;/h2&gt;

&lt;p&gt;With the Container Storage Interface (CSI) driver for Azure Disk, we are now getting some great features that help us run our stateful services much more smoothly on Azure Kubernetes Service:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zone Redundant Storage (ZRS) Azure Disks&lt;/li&gt;
&lt;li&gt;ReadWriteMany with Azure Disks&lt;/li&gt;
&lt;li&gt;Kubernetes-native volume snapshots&lt;/li&gt;
&lt;li&gt;Volume resizing&lt;/li&gt;
&lt;li&gt;Volume cloning&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Azure Disk Container Storage Interface driver
&lt;/h2&gt;

&lt;p&gt;Since Kubernetes 1.21 on Azure Kubernetes Service, we can use the new Container Storage Interface implementation, which is available for Azure Disks and Azure Files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wait, what is Container Storage Interface?
&lt;/h3&gt;

&lt;p&gt;The Container Storage Interface (CSI) is an abstraction layer that allows third-party storage providers to write plugins exposing new storage systems in Kubernetes. CSI has been generally available for some time now and replaces the previous in-tree volume plugin system. With CSI, third-party providers can develop and maintain their plugins outside of the Kubernetes project lifecycle, which gives them more flexibility. CSI is the future for any storage integration and is already used in many scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to use the Azure Disk CSI driver with Azure Kubernetes Service?
&lt;/h3&gt;

&lt;p&gt;You will need Kubernetes 1.21 or newer to be able to use the Azure Disk Container Storage Interface driver. Besides this, you will also need to enable some Azure feature gates to be able to use all of the above-mentioned features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az feature register &lt;span class="nt"&gt;--name&lt;/span&gt; SsdZrsManagedDisks &lt;span class="nt"&gt;--namespace&lt;/span&gt; Microsoft.Compute
az feature register &lt;span class="nt"&gt;--name&lt;/span&gt; EnableAzureDiskFileCSIDriver &lt;span class="nt"&gt;--namespace&lt;/span&gt; Microsoft.ContainerService
az feature list &lt;span class="nt"&gt;-o&lt;/span&gt; table &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"[?contains(name, 'Microsoft.Compute/SsdZrsManagedDisks')].{Name:name,State:properties.state}"&lt;/span&gt;
az feature list &lt;span class="nt"&gt;-o&lt;/span&gt; table &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"[?contains(name, 'Microsoft.ContainerService/EnableAzureDiskFileCSIDriver')].{Name:name,State:properties.state}"&lt;/span&gt;
az provider register &lt;span class="nt"&gt;--namespace&lt;/span&gt; Microsoft.Compute
az provider register &lt;span class="nt"&gt;-n&lt;/span&gt; Microsoft.ContainerService
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will then need to spin up an Azure Kubernetes Cluster with the CSI driver feature enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az group create &lt;span class="nt"&gt;-g&lt;/span&gt; aks-1-22 &lt;span class="nt"&gt;-l&lt;/span&gt; westeurope
az aks create &lt;span class="nt"&gt;-n&lt;/span&gt; aks-1-22 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-g&lt;/span&gt; aks-1-22 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-l&lt;/span&gt; westeurope &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-s&lt;/span&gt; Standard_B2ms &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--node-zones&lt;/span&gt; 1 2 3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--aks-custom-headers&lt;/span&gt; &lt;span class="nv"&gt;EnableAzureDiskFileCSIDriver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true
&lt;/span&gt;az aks get-credentials &lt;span class="nt"&gt;-n&lt;/span&gt; aks-1-22 &lt;span class="nt"&gt;-g&lt;/span&gt; aks-1-22
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Furthermore, you will need to create a &lt;code&gt;StorageClass&lt;/code&gt; and &lt;code&gt;VolumeSnapshotClass&lt;/code&gt; to use all mentioned features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuredisk-csi-premium-zrs
provisioner: disk.csi.azure.com
parameters:
  skuname: Premium_ZRS
  maxShares: "3"
  cachingMode: None
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: azuredisk-csi-vsc
driver: disk.csi.azure.com
deletionPolicy: Delete
parameters:
  incremental: "true"
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;StorageClass&lt;/code&gt; is configured to use the new Premium_ZRS Azure Disks, which allow mounting the disk to multiple Pods across multiple Availability Zones.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;VolumeSnapshotClass&lt;/code&gt; allows us to use the Kubernetes-native volume snapshot feature.&lt;/p&gt;

&lt;p&gt;We are now ready to create a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; based on the above &lt;code&gt;StorageClass&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk-csi-premium-zrs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 256Gi
  volumeMode: Block
  storageClassName: azuredisk-csi-premium-zrs
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Kubernetes-native volume snapshots
&lt;/h2&gt;

&lt;p&gt;With the Azure Disk CSI driver, we are able to create volume snapshots. You will find more details on the volume snapshot feature itself &lt;a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots"&gt;here&lt;/a&gt;. Use the following code snippet to create a snapshot of the above-created PVC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: azuredisk-volume-snapshot
spec:
  volumeSnapshotClassName: azuredisk-csi-vsc
  source:
    persistentVolumeClaimName: pvc-azuredisk-csi-premium-zrs
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now verify the snapshot with the following &lt;code&gt;kubectl&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe volumesnapshot azuredisk-volume-snapshot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Volume cloning
&lt;/h2&gt;

&lt;p&gt;Volume snapshots are great on their own, but we can also use them to clone an existing PVC, for example to attach it to another workload. This can be done with the following manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk-snapshot-restored
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azuredisk-csi-premium-zrs
  resources:
    requests:
      storage: 256Gi
  dataSource:
    name: azuredisk-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once again you can review it with &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pvc,pv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Volume resizing
&lt;/h2&gt;

&lt;p&gt;The CSI driver also supports resizing an existing PVC. Some important notes on this one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The PVC needs to be unmounted for resizing&lt;/li&gt;
&lt;li&gt;The PV will be resized immediately; the PVC will reflect the new size once it is mounted again&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can resize the above-cloned PVC via kubectl by executing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch pvc pvc-azuredisk-snapshot-restored &lt;span class="nt"&gt;--type&lt;/span&gt; merge &lt;span class="nt"&gt;--patch&lt;/span&gt; &lt;span class="s1"&gt;'{"spec": {"resources": {"requests": {"storage": "512Gi"}}}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ReadWriteMany across Availability Zones
&lt;/h2&gt;

&lt;p&gt;As already mentioned above, we are now also able to mount our PVC to multiple workloads via ReadWriteMany. With the new ZRS Azure Disks, we can even use them across Availability Zones.&lt;/p&gt;

&lt;p&gt;Let’s test these new features with the below &lt;code&gt;Deployment&lt;/code&gt; manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeDevices:
            - name: azuredisk
              devicePath: /dev/sdx
      volumes:
        - name: azuredisk
          persistentVolumeClaim:
            claimName: pvc-azuredisk-csi-premium-zrs
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see above, we are spinning up a Deployment with three replicas spread across our Kubernetes nodes, all mounting the same Azure Disk with read-write access. You can review it by executing the following &lt;code&gt;kubectl&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods,volumeattachments &lt;span class="nt"&gt;-owide&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That said (as you may have already noticed), ZRS Azure Disks only support the Block volume mode so far.&lt;/p&gt;

&lt;h2&gt;
  
  
  More details
&lt;/h2&gt;

&lt;p&gt;Don’t forget to review the following links to get some more insights and further documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes-csi.github.io/docs"&gt;https://kubernetes-csi.github.io/docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/azuredisk-csi-driver"&gt;https://github.com/kubernetes-sigs/azuredisk-csi-driver&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/azurefile-csi-driver"&gt;https://github.com/kubernetes-sigs/azurefile-csi-driver&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/azure/aks/csi-storage-drivers"&gt;https://docs.microsoft.com/azure/aks/csi-storage-drivers&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>devops</category>
    </item>
    <item>
      <title>Your Attackers Won't Be Happy — How GitLab Can Help You Secure Your Cloud-Native Applications!</title>
      <dc:creator>Nico Meisenzahl</dc:creator>
      <pubDate>Sat, 26 Sep 2020 21:22:18 +0000</pubDate>
      <link>https://dev.to/nmeisenzahl/your-attackers-won-t-be-happy-how-gitlab-can-help-you-secure-your-cloud-native-applications-17ab</link>
      <guid>https://dev.to/nmeisenzahl/your-attackers-won-t-be-happy-how-gitlab-can-help-you-secure-your-cloud-native-applications-17ab</guid>
      <description>&lt;p&gt;In the cloud-native ecosystem, decisions and changes are made on a rapid basis. Applications get adapted and deployed multiple times a week or even day. Microservices get developed decentralized with different peoples and teams involved. In such an environment, it is crucial to ensure that applications are developed and operated safely. This can be done by shifting security left into the developer lifecycle but also by using DevSecOps to empower operations with enhanced monitoring and protection for the application runtime.&lt;/p&gt;

&lt;p&gt;In this article, I would like to show you how GitLab can help you streamline your application security from a code and operations point of view by providing you with real-world examples. Before we dive into the example, let me first introduce you to the &lt;a href="https://about.gitlab.com/stages-devops-lifecycle/secure/" rel="noopener noreferrer"&gt;GitLab Secure&lt;/a&gt; and &lt;a href="https://about.gitlab.com/stages-devops-lifecycle/defend/" rel="noopener noreferrer"&gt;GitLab Defend&lt;/a&gt; product portfolios, which are the foundation for this. GitLab Secure helps developers enable accurate, automated, and continuous assessment of their applications by proactively identifying vulnerabilities and weaknesses and therefore minimizing security risk. GitLab Defend, on the other hand, supports operations in proactively protecting environments and cloud-native applications by providing context-aware technologies to reduce overall security risk. Both are backed by leading open-source projects that have been fully integrated into developer and operations processes and the GitLab user interface (UI).&lt;/p&gt;

&lt;h1&gt;
  
  
  The attack
&lt;/h1&gt;

&lt;p&gt;Let’s assume we have an application hosting a web interface that allows a user to provide some input. The application is written in &lt;a href="https://golang.org/" rel="noopener noreferrer"&gt;Golang&lt;/a&gt; and executes the input as part of an external operating system command (&lt;a href="https://golang.org/pkg/os/exec/" rel="noopener noreferrer"&gt;os/exec&lt;/a&gt;). The application does not contain any validation or security features to validate the input, which allows us to inject additional commands that are also executed in the application environment.&lt;/p&gt;

&lt;p&gt;The application is running as containerized microservices in a Kubernetes cluster. The Kubernetes cluster is shared across multiple teams and projects, allowing us to inject and read data in other applications running next to ours. In our example, we will connect to an unsecured Redis instance in a different Namespace and read/write data.&lt;/p&gt;
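The root of the problem can be sketched in a couple of shell lines: once attacker-controlled input is pasted into a command string, a `;` in that input smuggles in a second command. The variable and commands below are purely illustrative and are not taken from the actual application.

```shell
# Hypothetical sketch of a command injection: user input is pasted
# into a command string that is handed to a shell unmodified.
user_input='world; echo INJECTED'   # attacker-controlled form value
sh -c "echo Hello $user_input"      # executes BOTH echo commands
```

In the real attack, the smuggled command was a redis-cli call against the unsecured Redis instance rather than a harmless `echo`.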

&lt;p&gt;Now let us take a closer look at how GitLab can help us detect the attack, prevent its execution, and finally help us find and fix the root cause in our code.&lt;/p&gt;

&lt;h1&gt;
  
  
  Container Host Security
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://about.gitlab.com/stages-devops-lifecycle/defend/" rel="noopener noreferrer"&gt;Container Host Security&lt;/a&gt; helps us to detect an attack in real-time by monitoring the pod for any unusual activity. It can then alert operations with detailed information on the attack itself.&lt;/p&gt;

&lt;p&gt;Container Host Security is powered by &lt;a href="https://falco.org/" rel="noopener noreferrer"&gt;Falco&lt;/a&gt;, an open-source runtime security tool that listens to the Linux kernel using eBPF. Falco parses system calls and asserts the stream against a configurable rules engine in real-time. The Falco deployment used by Container Host Security can be deployed and fully managed using &lt;a href="https://docs.gitlab.com/ee/user/clusters/applications.html#install-falco-using-gitlab-cicd" rel="noopener noreferrer"&gt;GitLab Managed Apps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In our example, Falco detects the injected redis-cli command, which is used to read/write data into the unsecured Redis instance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F11c3abrbkmhcz3sg8g36.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F11c3abrbkmhcz3sg8g36.png" alt="Container Host Security"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Falco can now alert operations who can use those valuable insights to define and execute further steps. &lt;/p&gt;

&lt;h1&gt;
  
  
  Container Network Security
&lt;/h1&gt;

&lt;p&gt;A first step to prevent access to the unsecured Redis instance would be to restrict traffic between the applications in our Kubernetes cluster. This can be done by using &lt;a href="https://about.gitlab.com/stages-devops-lifecycle/defend/" rel="noopener noreferrer"&gt;Container Network Security&lt;/a&gt;. Container Network Security is again fully managed by &lt;a href="https://docs.gitlab.com/ee/user/clusters/applications.html#install-cilium-using-gitlab-cicd" rel="noopener noreferrer"&gt;GitLab Managed Apps&lt;/a&gt; and can also be configured within the GitLab project user interface.&lt;/p&gt;

&lt;p&gt;Container Network Security is powered by &lt;a href="https://cilium.io/" rel="noopener noreferrer"&gt;Cilium&lt;/a&gt;, an open-source networking plugin for Kubernetes that can be used to implement support for NetworkPolicy resources. &lt;a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noopener noreferrer"&gt;Network Policies&lt;/a&gt; can be used to detect and block unauthorized network traffic between pods and to/from the Internet.&lt;/p&gt;
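Under the hood, such a policy is an ordinary Kubernetes NetworkPolicy resource. A minimal default-deny ingress policy could look like the following sketch; the file and resource names are illustrative, and GitLab manages its own policy resources:

```yaml
# deny-all-ingress.yaml — minimal default-deny ingress policy (illustrative)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules listed, so all ingress is denied
```

Applied to the application's Namespace, a policy like this denies pod-to-pod traffic such as the injected redis-cli connection unless another policy explicitly allows it.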

&lt;p&gt;Implementing Network Policies for our application will block the underlying network traffic generated by the attack. The policies can be enabled within the GitLab project UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqqviwt3jjxwwtdy55yr3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqqviwt3jjxwwtdy55yr3.png" alt="Network Policies"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Web Application Firewall
&lt;/h1&gt;

&lt;p&gt;With Container Network Security in place, our attack isn’t able to talk to the Redis instance anymore, but it is still possible to execute other, network-unrelated attacks using the command injection. The &lt;a href="https://about.gitlab.com/stages-devops-lifecycle/defend/" rel="noopener noreferrer"&gt;Web Application Firewall (WAF)&lt;/a&gt; can now help us increase security and detect and block the attack at the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;Kubernetes Ingress&lt;/a&gt; level.&lt;/p&gt;

&lt;p&gt;The Web Application Firewall is also powered by open source. It is based on the &lt;a href="https://kubernetes.github.io/ingress-nginx/user-guide/third-party-addons/modsecurity/" rel="noopener noreferrer"&gt;ModSecurity&lt;/a&gt; module, a toolkit for real-time web application monitoring, logging, and access control. It is preconfigured to use &lt;a href="https://www.modsecurity.org/CRS/Documentation/" rel="noopener noreferrer"&gt;OWASP’s Core Rule Set&lt;/a&gt;, which provides generic attack detection capabilities. Like the other integrations, the Web Application Firewall is also fully managed by GitLab using &lt;a href="https://docs.gitlab.com/ee/user/clusters/applications.html#web-application-firewall-modsecurity" rel="noopener noreferrer"&gt;GitLab Managed Apps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In our example, the Web Application Firewall detects the attack and is also able to block it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmsz9sjdwshjq2sespskb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmsz9sjdwshjq2sespskb.png" alt="Web Application Firewall logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Blocking the attack at the Ingress level will help us to deny the traffic before it hits our application. To do so, we can enable the Web Application Firewall blocking mode directly from the GitLab UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwvnjum0j7998znu56dyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwvnjum0j7998znu56dyv.png" alt="WAF settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition to Container Host Security, we could have used the Web Application Firewall to detect the attack using the Threat Monitoring dashboard within our GitLab project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0rmvfruecewwcfexh7tz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0rmvfruecewwcfexh7tz.png" alt="Thread Monitoring"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Threat Monitoring dashboard also provides us with useful insights and metrics on our enforced Container Network Policies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Static Application Security Testing
&lt;/h1&gt;

&lt;p&gt;We have now successfully protected our application runtime and ensured that no additional attacks can be executed. But we should also find and fix the root cause to ensure that such incidents are not recurring in the future. This is where &lt;a href="https://about.gitlab.com/stages-devops-lifecycle/secure/" rel="noopener noreferrer"&gt;Static Application Security Testing (SAST)&lt;/a&gt; can help us. Static Application Security Testing can be easily integrated into our project using &lt;a href="https://docs.gitlab.com/ee/ci/README.html" rel="noopener noreferrer"&gt;GitLab CI/CD&lt;/a&gt; and then allows us to analyze our source code for known vulnerabilities.&lt;/p&gt;
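Wiring SAST into a pipeline boils down to including GitLab's managed template in the project's `.gitlab-ci.yml`. The template path below matches the documented include; verify it against your GitLab version:

```yaml
# .gitlab-ci.yml — pulls in GitLab's managed SAST jobs, which
# auto-detect the project language (gosec for a Go codebase)
include:
  - template: Security/SAST.gitlab-ci.yml
```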

&lt;p&gt;In our case (a Golang application) the code scanning is executed using the open-source project &lt;a href="https://github.com/securego/gosec" rel="noopener noreferrer"&gt;Golang Security Checker&lt;/a&gt;. The results are displayed in the Security dashboard of our GitLab project for easy access:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnicsjq4buci2ev6exstx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnicsjq4buci2ev6exstx.png" alt="Security Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our example, the code scan has identified the root cause and provides us with detailed information about the vulnerability, the line of code that needs to be fixed, and the ability to easily create an issue to fix it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkd40xs0x637nyf4qq6np.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkd40xs0x637nyf4qq6np.png" alt="SAST"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, of course, we should also talk to the team running the other application to make sure that their Redis instance gets secured too. We should also verify how the other &lt;a href="https://about.gitlab.com/stages-devops-lifecycle/secure/" rel="noopener noreferrer"&gt;GitLab Secure&lt;/a&gt; features can help to further improve the overall security of the application.&lt;/p&gt;

&lt;h1&gt;
  
  
  GitLab Defend and Secure in action
&lt;/h1&gt;

&lt;p&gt;If you would like to get more insights on GitLab Secure and Defend and want to see them in action, check out &lt;a href="https://gitlab.com/whaber" rel="noopener noreferrer"&gt;Wayne&lt;/a&gt;, &lt;a href="https://gitlab.com/plafoucriere" rel="noopener noreferrer"&gt;Philippe&lt;/a&gt;, and me in our session &lt;a href="https://gitlabcommitvirtual2020.sched.com/event/dUWw/your-attackers-wont-be-happy-how-gitlab-can-help-you-secure-your-cloud-native-applications" rel="noopener noreferrer"&gt;“Your Attackers Won't Be Happy! How GitLab Can Help You Secure Your Cloud-Native Applications!”&lt;/a&gt; at GitLab Commit, where you can gain further insights on Container Host Security, Container Network Security, Web Application Firewall (WAF), and Static Application Security Testing (SAST).&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/M19qgvkmZo4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>kubernetes</category>
      <category>gitlab</category>
      <category>security</category>
    </item>
    <item>
      <title>Azure Service Operator - manage your Azure resources with Kubernetes</title>
      <dc:creator>Nico Meisenzahl</dc:creator>
      <pubDate>Mon, 29 Jun 2020 07:01:43 +0000</pubDate>
      <link>https://dev.to/nmeisenzahl/azure-service-operator-manage-your-azure-resources-with-kubernetes-41a7</link>
      <guid>https://dev.to/nmeisenzahl/azure-service-operator-manage-your-azure-resources-with-kubernetes-41a7</guid>
      <description>&lt;p&gt;Before I introduce you to &lt;a href="https://github.com/Azure/azure-service-operator"&gt;Azure Service Operator&lt;/a&gt; and how it helps you to manage your Azure resources with Kubernetes let me briefly start with why you should use it and where it can help. Let me give you two examples:&lt;/p&gt;

&lt;p&gt;Think of a common cloud-native application: some microservices running on Kubernetes, using Redis for caching and a database to persist state. In such a scenario, a common practice is to store and manage the application and its dependencies together. Until now, you might have packed your microservices into a Helm chart for easier deployment and also created some Terraform code to deploy and manage the Redis and database. But those are still not linked together and are also deployed via two different continuous delivery pipelines. You could now argue that, for example, the Terraform Helm provider could fix this issue by combining your application and infrastructure. But do you want your developers to learn and use another tool, one mainly used for infrastructure management? Wouldn't it be better to just manage the application dependencies together with the application itself in a Helm chart? This is where Azure Service Operator can help!&lt;/p&gt;

&lt;p&gt;Another example would be &lt;a href="https://www.weave.works/technologies/gitops/"&gt;GitOps&lt;/a&gt;. With GitOps, Git is the single source of truth. Applications and infrastructure are defined in a declarative manner, stored in Git, and automatically updated and managed by Kubernetes (to be more precise by &lt;a href="https://kubernetes.io/docs/concepts/architecture/controller/"&gt;Kubernetes Controllers&lt;/a&gt;). With Azure Service Operator in combination with, for example, &lt;a href="https://fluxcd.io/"&gt;FluxCD&lt;/a&gt; Kubernetes can also manage Azure resources using the GitOps approach.&lt;/p&gt;

&lt;h1&gt;
  
  
  What Azure Service Operator is and how it works
&lt;/h1&gt;

&lt;p&gt;So now that we know how the Azure Service Operator project can help us, we should talk about what it is exactly and how it works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Azure/azure-service-operator"&gt;Azure Service Operator&lt;/a&gt; is an open-source project by the Microsoft Azure. It is a pretty new project that got &lt;a href="https://cloudblogs.microsoft.com/opensource/2020/06/25/announcing-azure-service-operator-kubernetes/"&gt;announced&lt;/a&gt; last week. The whole project, as well as the roadmap, is available on &lt;a href="https://github.com/Azure/azure-service-operator"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Azure Service Operator consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Custom Resource Definitions (CRDs) for each of the Azure services. &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;CRDs&lt;/a&gt; are Kubernetes API extensions. This enables us to create Kubernetes resources of kind &lt;em&gt;RedisCache&lt;/em&gt; or &lt;em&gt;SQLServer&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;A Kubernetes controller that watches for changes to these custom resources and acts on them by creating, updating, or deleting the corresponding Azure resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Besides these components, Azure Service Operator also depends on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/jetstack/cert-manager"&gt;Cert-manager&lt;/a&gt; to manage internal certificates. Cert-manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources. Cert-manager is not part of the Azure Service Operator and needs to be installed upfront.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.toAzure%20AD%20(AAD)%20Pod%20Identity"&gt;Azure AD (AAD) Pod Identity&lt;/a&gt; is used to manage authentication against Azure when managed identities are used. AAD Pod Identity is part of the Azure Service Operator and is provided as a Helm subchart.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this abstraction, Azure Service Operator is not limited to Azure Kubernetes Service; it can be used with any Kubernetes cluster, regardless of whether it runs in a public or private cloud.&lt;/p&gt;

&lt;p&gt;Further technical details on how Azure Service Operator works can be found &lt;a href="https://github.com/Azure/azure-service-operator/blob/master/docs/design/controlflow.md"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So far Azure Service Operator supports the following Azure resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource Group&lt;/li&gt;
&lt;li&gt;Event Hubs&lt;/li&gt;
&lt;li&gt;Azure SQL&lt;/li&gt;
&lt;li&gt;Azure Database for PostgreSQL&lt;/li&gt;
&lt;li&gt;Azure Database for MySQL&lt;/li&gt;
&lt;li&gt;Azure Key Vault&lt;/li&gt;
&lt;li&gt;Azure Cache for Redis&lt;/li&gt;
&lt;li&gt;Storage Account&lt;/li&gt;
&lt;li&gt;Blob Storage&lt;/li&gt;
&lt;li&gt;Virtual Network&lt;/li&gt;
&lt;li&gt;Application Insights&lt;/li&gt;
&lt;li&gt;API Management&lt;/li&gt;
&lt;li&gt;Cosmos DB&lt;/li&gt;
&lt;li&gt;Virtual Machine&lt;/li&gt;
&lt;li&gt;Virtual Machine Scale Set&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Get started with Azure Service Operator
&lt;/h1&gt;

&lt;p&gt;Below you will find all the steps necessary to get started with Azure Service Operator. A more detailed guide is available &lt;a href="https://github.com/Azure/azure-service-operator"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First of all, we need to install Cert-Manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace cert-manager
kubectl label namespace cert-manager cert-manager.io/disable-validation&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true
&lt;/span&gt;helm repo add jetstack https://charts.jetstack.io
helm repo update
helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  cert-manager jetstack/cert-manager &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; cert-manager &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;installCRDs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we can install the Azure Service Operator, we need to create a service principal that is used for authentication against Azure. We will then assign it contributor access on a subscription level (as mentioned above, it is also possible to use managed identities, which requires AAD Pod identity and AKS):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aso-sp
&lt;span class="nv"&gt;AZURE_TENANT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az account show &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'[tenantId]'&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;AZURE_SUBSCRIPTION_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az account show &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'[id]'&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;AZURE_CLIENT_SECRET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az ad sp create-for-rbac &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nv"&gt;$NAME&lt;/span&gt; &lt;span class="nt"&gt;--role&lt;/span&gt; contributor &lt;span class="nt"&gt;--year&lt;/span&gt; 99 &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'[password]'&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;AZURE_CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az ad sp list &lt;span class="nt"&gt;--display-name&lt;/span&gt; &lt;span class="nv"&gt;$NAME&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'[].appId'&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we are ready to install Azure Service Operator using Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HELM_EXPERIMENTAL_OCI&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
helm chart pull mcr.microsoft.com/k8s/asohelmchart:latest
helm chart &lt;span class="nb"&gt;export &lt;/span&gt;mcr.microsoft.com/k8s/asohelmchart:latest &lt;span class="nt"&gt;--destination&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
helm &lt;span class="nb"&gt;install &lt;/span&gt;aso &lt;span class="nt"&gt;-n&lt;/span&gt; azureoperator-system &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;azureSubscriptionID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$AZURE_SUBSCRIPTION_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;azureTenantID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$AZURE_TENANT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;azureClientID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$AZURE_CLIENT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;azureClientSecret&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$AZURE_CLIENT_SECRET&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;createNamespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; image.repository&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mcr.microsoft.com/k8s/azureserviceoperator:latest"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  ./azure-service-operator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point we are ready to test our installation by creating a first resource group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
apiVersion: azure.microsoft.com/v1alpha1
kind: ResourceGroup
metadata:
  name: aso-test-rg
spec:
  location: "westeurope"
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of &lt;code&gt;kubectl get resourcegroups aso-test-rg&lt;/code&gt; will give us details about the status of resource creation. Once we see &lt;em&gt;successfully provisioned&lt;/em&gt;, the resource is available.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get resourcegroups
NAME           PROVISIONED   MESSAGE
aso-test-rg    &lt;span class="nb"&gt;true          &lt;/span&gt;successfully provisioned
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
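&lt;p&gt;In scripts it can be handy to wait for that state instead of checking by hand. A minimal sketch of such a polling loop (the kubectl query is stubbed out here so the snippet runs standalone; in a real script, replace the stub with something like &lt;code&gt;kubectl get resourcegroups aso-test-rg -o jsonpath='{.status.provisioned}'&lt;/code&gt;, assuming the resource exposes a &lt;code&gt;status.provisioned&lt;/code&gt; field as shown in the output above):&lt;/p&gt;

```shell
# Poll until the resource reports provisioned=true.
# provisioned_state is a stub standing in for the real kubectl query:
#   kubectl get resourcegroups aso-test-rg -o jsonpath='{.status.provisioned}'
provisioned_state() { echo "true"; }

until [ "$(provisioned_state)" = "true" ]; do
  sleep 5
done
msg="aso-test-rg provisioned"
echo "$msg"
```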



&lt;h1&gt;
  
  
  Bundle your application with its infrastructure dependencies
&lt;/h1&gt;

&lt;p&gt;As we now know how Azure Service Operator works, we are going to look at how we can use it to bundle our application together with our infrastructure. We will use the &lt;a href="https://github.com/Azure-Samples/azure-voting-app-redis"&gt;Azure voting app&lt;/a&gt; as an example. It consists of a Python microservice provided as a deployment, a LoadBalancer service to publish it externally, and Azure Cache for Redis to persist the state. Let’s take a look at the manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure.microsoft.com/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ResourceGroup&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-vote-rg&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;location&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;westeurope&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure.microsoft.com/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RedisCache&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-vote-redis&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;location&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;westeurope&lt;/span&gt;
  &lt;span class="na"&gt;resourceGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-vote&lt;/span&gt;
  &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;sku&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Basic&lt;/span&gt;
      &lt;span class="na"&gt;family&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;C&lt;/span&gt;
      &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;enableNonSslPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-vote-front&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-vote-front&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-vote-front&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;beta.kubernetes.io/os"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;linux&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-vote-front&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;microsoft/azure-vote-front:v1&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;128Mi&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;250m&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;256Mi&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;REDIS_NAME&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-redis&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redisCacheName&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;REDIS&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(REDIS_NAME).redis.cache.windows.net&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;REDIS_PWD&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-redis&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;primaryKey&lt;/span&gt;
&lt;span class="s"&gt;--------&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-vote-front&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-vote-front&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will now take a closer look at the above definitions. In the first step, we declare a resource group as before. Next, we declare an Azure Cache for Redis by specifying a name, a location, a resource group, and several Redis-related parameters. If we take a closer look at the deployment itself, we may notice the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;REDIS_NAME&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-redis&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redisCacheName&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;REDIS&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(REDIS_NAME).redis.cache.windows.net&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;REDIS_PWD&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-redis&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;primaryKey&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Azure Service Operator will not only provision the resources for us but will also make them discoverable by storing the Redis name and access key in a Kubernetes secret (you can also use Azure Key Vault for this). This way, we can easily inject them into our containers.&lt;/p&gt;
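&lt;p&gt;The values the operator writes are plain Kubernetes secrets, so they are base64-encoded at rest. A small sketch of how the pieces fit together (the encoded string below is a placeholder; on a real cluster you would read the actual value via &lt;code&gt;kubectl get secret azure-redis -o jsonpath='{.data.primaryKey}'&lt;/code&gt;):&lt;/p&gt;

```shell
# Placeholder standing in for the base64 value kubectl would return:
encoded='c3VwZXItc2VjcmV0LXJlZGlzLWtleQ=='
primary_key=$(printf '%s' "$encoded" | base64 -d)

# Kubernetes assembles the REDIS hostname from the secret's redisCacheName
# via the $(REDIS_NAME) dependent environment variable seen in the manifest:
REDIS_NAME="azure-vote-redis"
REDIS="${REDIS_NAME}.redis.cache.windows.net"

echo "$primary_key"
echo "$REDIS"
```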

&lt;h1&gt;
  
  
  Manage your Azure resources with a GitOps approach
&lt;/h1&gt;

&lt;p&gt;Now that everything above is in place, we are ready to talk about using Azure Service Operator in a GitOps approach. In the following example, we will use &lt;a href="https://fluxcd.io/"&gt;FluxCD&lt;/a&gt; to achieve this.&lt;/p&gt;

&lt;p&gt;First of all, you need to install the &lt;a href="https://docs.fluxcd.io/en/1.19.0/references/fluxctl/"&gt;fluxctl CLI&lt;/a&gt;. Afterward, we install FluxCD and provide our Git repository containing our manifests (you can reuse &lt;a href="https://github.com/nmeisenzahl/aso-fluxcd-sample"&gt;my repository&lt;/a&gt; by forking it):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add fluxcd https://charts.fluxcd.io
helm repo update
helm upgrade &lt;span class="nt"&gt;-i&lt;/span&gt; flux fluxcd/flux &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; git.url&lt;span class="o"&gt;=&lt;/span&gt;git@github.com:nmeisenzahl/aso-fluxcd-sample.git &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; git-path&lt;span class="o"&gt;=&lt;/span&gt;workloads &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; flux
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the installation is complete, we have to give FluxCD access to our repository. This is done with an SSH key, which you can get with &lt;code&gt;fluxctl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;fluxctl identity &lt;span class="nt"&gt;--k8s-fwd-ns&lt;/span&gt; flux
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you add the SSH key to your repository (for example, as a GitHub deploy key with write access), FluxCD will immediately start to create your resources based on the manifests. FluxCD will then continuously ensure that all changes are applied.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>kubernetes</category>
      <category>gitops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Scaffold a production-ready Terraform project on Azure</title>
      <dc:creator>Nico Meisenzahl</dc:creator>
      <pubDate>Wed, 17 Jun 2020 06:45:59 +0000</pubDate>
      <link>https://dev.to/nmeisenzahl/scaffold-a-production-ready-terraform-project-on-azure-53df</link>
      <guid>https://dev.to/nmeisenzahl/scaffold-a-production-ready-terraform-project-on-azure-53df</guid>
      <description>&lt;p&gt;This post is an updated version of my previous post &lt;a href="https://medium.com/01001101/using-terraform-with-azure-the-right-way-35af3b51a6b0"&gt;“Using Terraform with Azure”&lt;/a&gt; that I published some time ago. Now, nearly one year later, I have learned a lot and also optimized and extended the examples and code snippets here and there. As a result, we decided to publish all code in this public &lt;a href="https://github.com/whiteducksoftware/terraform-scaffold-for-azure"&gt;GitHub repository&lt;/a&gt;. This post should provide you with some further details on the project and any details around it. All below code snippets are related to this project.&lt;/p&gt;

&lt;h1&gt;
  
  
  The project
&lt;/h1&gt;

&lt;p&gt;As mentioned above, we decided to &lt;a href="https://github.com/whiteducksoftware/terraform-scaffold-for-azure"&gt;publish everything&lt;/a&gt; needed to scaffold a new production-ready Terraform project on Azure. The project is called “Terraform scaffold for Azure” and is available &lt;a href="https://github.com/whiteducksoftware/terraform-scaffold-for-azure"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To get started you need to make sure to meet the following requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a Unix shell (we might support PowerShell Core in the future; until then, you can use &lt;a href="http://shell.azure.com/"&gt;Azure Cloud Shell&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Azure CLI&lt;/li&gt;
&lt;li&gt;Owner access on the target Subscription (to allow Terraform to add future Service Principals as contributors)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If those requirements are met, you only need to execute a single script called &lt;code&gt;up.sh&lt;/code&gt;. This will then provision the needed resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a Service Principal on whose behalf Terraform will run&lt;/li&gt;
&lt;li&gt;a Storage Container used to store the Terraform state file&lt;/li&gt;
&lt;li&gt;a Key Vault containing all secrets to allow easy and secure access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After executing the script you are ready to create your first Terraform project on Azure. But before that, I would like to give you some details about the project itself.&lt;/p&gt;

&lt;h1&gt;
  
  
  Where to store the state file
&lt;/h1&gt;

&lt;p&gt;It is best practice to store the Terraform state file in a secure and central location. Secure, because it contains sensitive data like secrets. Central, because your entire team may need to work on the same project.&lt;/p&gt;

&lt;p&gt;On Azure, the best location to store it is as a Blob in a Storage Container, which is supported by &lt;a href="https://www.terraform.io/docs/backends/types/azurerm.html"&gt;Terraform by default&lt;/a&gt;. The needed Storage Container will be created by the &lt;code&gt;up.sh&lt;/code&gt; script.&lt;/p&gt;

&lt;p&gt;You then need to reference the Storage Container in your Terraform project by defining a few settings in your &lt;code&gt;main.tf&lt;/code&gt; configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"azurerm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 2.1"&lt;/span&gt;
  &lt;span class="nx"&gt;features&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="s2"&gt;"azurerm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"azure.tfstate"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
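&lt;p&gt;Note that the backend block above only pins the name of the state file; the storage account, container, and access key are deliberately not committed to the repository. One way to supply them (a sketch with placeholder values, not part of the scaffold itself) is to pass them at init time via &lt;code&gt;-backend-config&lt;/code&gt; flags:&lt;/p&gt;

```shell
# Placeholder values; in practice these come from the Key Vault lookups shown later.
saName="mystorageaccount"
scName="tfstate"
saKey="dummy-access-key"

# Assemble the init call; storage_account_name, container_name, and access_key
# are documented settings of Terraform's azurerm backend.
init_cmd="terraform init -backend-config=storage_account_name=$saName -backend-config=container_name=$scName -backend-config=access_key=$saKey"
echo "$init_cmd"
```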



&lt;h1&gt;
  
  
  Why you should use a Service Principal
&lt;/h1&gt;

&lt;p&gt;It is best practice to run Terraform on behalf of a Service Principal. This allows you to restrict access rights to exactly those needed for your particular deployment. Alternatively, you can give the Service Principal extended privileges that your personal account might not be allowed to have.&lt;/p&gt;

&lt;p&gt;The Service Principal will also be created by the &lt;code&gt;up.sh&lt;/code&gt; script. By default, it is granted owner access rights so that it can create further Service Principals within your Terraform project. This can, of course, be adjusted to your personal needs.&lt;/p&gt;

&lt;p&gt;To use a particular Service Principal, we need to make the Terraform CLI aware of it by providing the following environment variables in advance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ARM_SUBSCRIPTION_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your-subscription-id&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ARM_TENANT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your-tenant-id&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ARM_CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your-service-principal-id&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ARM_CLIENT_SECRET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your-service-principal-secret&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
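&lt;p&gt;Since Terraform fails with fairly cryptic authentication errors when one of these variables is missing, a small guard before running it can save time. A minimal sketch, assuming bash (the &lt;code&gt;unset&lt;/code&gt; at the top is only there to make the demo deterministic; drop it in a real script):&lt;/p&gt;

```shell
# Demo only: start from a clean slate so the guard's output is deterministic.
unset ARM_SUBSCRIPTION_ID ARM_TENANT_ID ARM_CLIENT_ID ARM_CLIENT_SECRET

# Report every ARM_* variable Terraform expects that is unset or empty.
missing=""
for var in ARM_SUBSCRIPTION_ID ARM_TENANT_ID ARM_CLIENT_ID ARM_CLIENT_SECRET; do
  if [ -z "${!var}" ]; then
    missing="$missing $var"
  fi
done
echo "missing:$missing"
```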



&lt;h1&gt;
  
  
  Secret management
&lt;/h1&gt;

&lt;p&gt;As you may have noticed, the above setup depends heavily on secrets for various authentication steps. I already mentioned above that you should never store secrets within your code. To simplify the use of this setup while still ensuring a high level of security, it is recommended to store the secrets in, and retrieve them from, a vault.&lt;/p&gt;

&lt;p&gt;Since we are working with Azure, we use Azure Key Vault for this. The &lt;code&gt;up.sh&lt;/code&gt; script also takes care of it: it creates the Key Vault and directly stores all needed secrets in it. With this in place, you can easily retrieve the secrets whenever you need them, regardless of whether you run the CLI manually, in a script, or in a CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;This is a sample code snippet that you can use to retrieve the secrets locally or as a task within your CI/CD pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;subscriptionId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"00000000-0000-0000-0000-000000000000"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;rg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"my-rg"&lt;/span&gt;

&lt;span class="c"&gt;# sets subscription&lt;/span&gt;
az account &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;--subscription&lt;/span&gt; &lt;span class="nv"&gt;$subscriptionId&lt;/span&gt;

&lt;span class="c"&gt;# get vault&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;vaultName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az keyvault list &lt;span class="nt"&gt;--subscription&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$subscriptionId&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="nv"&gt;$rg&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'[0].{name:name}'&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;## extracts and exports secrets&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;saKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az keyvault secret show &lt;span class="nt"&gt;--subscription&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$subscriptionId&lt;/span&gt; &lt;span class="nt"&gt;--vault-name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vaultName&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; sa-key &lt;span class="nt"&gt;--query&lt;/span&gt; value &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;saName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az keyvault secret show &lt;span class="nt"&gt;--subscription&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$subscriptionId&lt;/span&gt; &lt;span class="nt"&gt;--vault-name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vaultName&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; sa-name &lt;span class="nt"&gt;--query&lt;/span&gt; value &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;scName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az keyvault secret show &lt;span class="nt"&gt;--subscription&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$subscriptionId&lt;/span&gt; &lt;span class="nt"&gt;--vault-name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vaultName&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; sc-name &lt;span class="nt"&gt;--query&lt;/span&gt; value &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;spSecret&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az keyvault secret show &lt;span class="nt"&gt;--subscription&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$subscriptionId&lt;/span&gt; &lt;span class="nt"&gt;--vault-name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vaultName&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; sp-secret &lt;span class="nt"&gt;--query&lt;/span&gt; value &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;spId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az keyvault secret show &lt;span class="nt"&gt;--subscription&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$subscriptionId&lt;/span&gt; &lt;span class="nt"&gt;--vault-name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vaultName&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; sp-id &lt;span class="nt"&gt;--query&lt;/span&gt; value &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# exports secrets&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ARM_SUBSCRIPTION_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$subscriptionId&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ARM_TENANT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$tenantId&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ARM_CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$spId&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ARM_CLIENT_SECRET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$spSecret&lt;/span&gt;

&lt;span class="c"&gt;# runs Terraform init&lt;/span&gt;
terraform init &lt;span class="nt"&gt;-input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"access_key=&lt;/span&gt;&lt;span class="nv"&gt;$saKey&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"storage_account_name=&lt;/span&gt;&lt;span class="nv"&gt;$saName&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"container_name=&lt;/span&gt;&lt;span class="nv"&gt;$scName&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;With this, you are now ready to build your first production-ready and secure Infrastructure-as-code deployment with Terraform on Azure. As mentioned above, all scripts and further details are available &lt;a href="https://github.com/whiteducksoftware/terraform-scaffold-for-azure"&gt;here&lt;/a&gt;.&lt;/p&gt;
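&lt;p&gt;The exports above are easy to get wrong silently. A small, hypothetical guard (the function name and variable list are illustrative, not part of the scaffold) can fail fast before Terraform ever runs:&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical guard; fails fast when one of the exports above is missing,
# so 'terraform init' does not die later with a confusing authentication error.
require_env() {
  for name in "$@"; do
    # indirect expansion via eval keeps this POSIX-sh compatible
    eval "value=\${$name}"
    if [ -z "$value" ]; then
      echo "missing required environment variable: $name"
      return 1
    fi
  done
}

# example usage with placeholder values
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="11111111-1111-1111-1111-111111111111"
if require_env ARM_SUBSCRIPTION_ID ARM_TENANT_ID; then
  echo "env ok"
fi
```

&lt;p&gt;Calling the guard with the full ARM_* list right before &lt;code&gt;terraform init&lt;/code&gt; turns a missing Key Vault secret into an explicit error message instead of a failed backend login.&lt;/p&gt;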

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>infrastructureascode</category>
      <category>devops</category>
    </item>
    <item>
      <title>Containerize your .NET Core app – the right way</title>
      <dc:creator>Nico Meisenzahl</dc:creator>
      <pubDate>Tue, 16 Jun 2020 06:04:13 +0000</pubDate>
      <link>https://dev.to/nmeisenzahl/containerize-your-net-core-app-the-right-way-2ceh</link>
      <guid>https://dev.to/nmeisenzahl/containerize-your-net-core-app-the-right-way-2ceh</guid>
      <description>&lt;p&gt;There are already many articles out there that provide you with details on how to containerize your .NET Core application. Nevertheless, I still saw the need to write a bit more detailed post which helps you to build a production-ready container image based on container and .NET Core best practices. &lt;/p&gt;

&lt;p&gt;For better understanding, I will explain everything in detail based on a small sample ASP.NET Core web application. You will find more details on the application itself &lt;a href="https://github.com/whiteducksoftware/sample-mvc"&gt;here&lt;/a&gt;. Of course, the shared best practices are not limited to .NET Core. You can adjust them and use them with any of your projects.&lt;/p&gt;

&lt;p&gt;There are millions of use cases out there, and that's why there isn't a one-size-fits-all solution. I would like to introduce you to the two options I use most. You will get all the details you need to decide which of them works best for you. Let us start with the basics first.&lt;/p&gt;

&lt;h1&gt;
  
  
  The common way
&lt;/h1&gt;

&lt;p&gt;This is a common example Dockerfile that I come across quite often:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/aspnet:3.1&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; /app/output .&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["dotnet", "sample-mvc.dll"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is nothing particularly wrong with it. It will work, but it is neither tuned for performance nor hardened against security issues, and is therefore not optimal for a production environment. Some examples are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;containers executing their process as root&lt;/li&gt;
&lt;li&gt;an inefficient instruction order that results in slower build times due to invalidated image layer caches&lt;/li&gt;
&lt;li&gt;an improper image layer management that affects the final image size&lt;/li&gt;
&lt;li&gt;slower builds due to missing &lt;code&gt;.dockerignore&lt;/code&gt; file&lt;/li&gt;
&lt;/ul&gt;
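&lt;p&gt;One of the points above, the missing &lt;code&gt;.dockerignore&lt;/code&gt; file, is cheap to fix. A minimal example for a .NET Core project could look like this (the entries are typical suggestions, adjust them to your repository layout):&lt;/p&gt;

```
# exclude build output and local artifacts from the build context
bin/
obj/
out/
# exclude VCS and editor metadata
.git/
.vscode/
# exclude the Dockerfile itself and documentation
Dockerfile
*.md
```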

&lt;p&gt;Let’s have a closer look at another Dockerfile example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/sdk:3.1&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; /src .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;dotnet publish &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-c&lt;/span&gt; Release &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-o&lt;/span&gt; ./output
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["dotnet", "sample-mvc.dll"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example uses a containerized build. This means that the application build itself is moved into the Docker build process. This is a good pattern that allows you to build in an immutable and isolated build environment with all dependencies built in. As a downside, you need to base your image on the bigger SDK image. The SDK image provides the dependencies needed to build the application, but those aren't needed to execute it afterward. Luckily, there is a solution that addresses this particular issue.&lt;/p&gt;

&lt;h1&gt;
  
  
  Multi-stage builds
&lt;/h1&gt;

&lt;p&gt;If you are using a similar version of the above Dockerfiles, you might not have heard about a feature called multi-stage builds. Multi-stage builds allow us to split our image build process into multiple stages.&lt;br&gt;
The first stage builds our application and therefore needs all build-time dependencies. In the second stage, we copy the application artifacts into a smaller runtime image, which becomes our final image. The corresponding Dockerfile could look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; /src .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;dotnet publish &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-c&lt;/span&gt; Release &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-o&lt;/span&gt; ./output
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/aspnet:3.1&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build-env /app/output .&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["dotnet", "sample-mvc.dll"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s take a closer look at the individual steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env&lt;/span&gt;
...
&lt;span class="k"&gt;RUN &lt;/span&gt;dotnet publish &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-c&lt;/span&gt; Release &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-o&lt;/span&gt; ./output
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our first stage is based on the SDK image which provides all dependencies to build our app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;...
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/aspnet:3.1&lt;/span&gt;
...
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build-env /app/output .&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the second stage, we define a new base image that this time only contains our runtime dependencies. We then copy the application artifacts from the first stage into the second one.&lt;/p&gt;

&lt;p&gt;With this in place, we are now able to build a smaller and more secure container image because it only contains the dependencies needed to execute the application. But we still have room for further improvement, which we will talk about in the next paragraph.&lt;/p&gt;

&lt;p&gt;If you would like to learn more about Dockerfile best practices, I recommend checking out &lt;a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/"&gt;this page&lt;/a&gt; of the official Docker documentation.&lt;/p&gt;

&lt;h1&gt;
  
  
  You have the choice
&lt;/h1&gt;

&lt;p&gt;As already mentioned above, there isn't a single best practice; it varies from use case to use case. The examples below give you two blueprints, along with their pros and cons, which you can adapt to your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  A good starting point
&lt;/h2&gt;

&lt;p&gt;The below Dockerfile is an optimized version of the above multi-stage example and should be a good fit for most scenarios.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; VERSION=3.1-alpine3.10&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/sdk:$VERSION AS build-env&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; /src/*.csproj .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;dotnet restore
&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; /src .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;dotnet publish &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-c&lt;/span&gt; Release &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-o&lt;/span&gt; ./output
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/aspnet:$VERSION&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;adduser &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--disabled-password&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--home&lt;/span&gt; /app &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--gecos&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; app &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; app /app
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; app&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build-env /app/output .&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DOTNET_RUNNING_IN_CONTAINER=true \&lt;/span&gt;
  ASPNETCORE_URLS=http://+:8080
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["dotnet", "sample-mvc.dll"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, we take a closer look at the individual steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; VERSION=3.1-alpine3.10&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/sdk:$VERSION AS build-env&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We first define our base image tag using an ARG instruction. This helps us to easily update the tag in one place instead of changing several lines. As you may have noticed, we use a different tag: the tag &lt;em&gt;3.1-alpine3.10&lt;/em&gt; states that this image contains ASP.NET Core 3.1 and is based on Alpine 3.10.&lt;/p&gt;

&lt;p&gt;Alpine Linux is a Linux distribution designed for security, simplicity, and resource efficiency. Already in this stage, Alpine Linux helps us reduce the footprint of our build environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;...
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/aspnet:$VERSION&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because we are using a multi-stage build, we also need to define the image used in our final stage. Once again, we use the Alpine-based ASPNET runtime as our base image. As already said, building our image on Alpine allows us to build a smaller and more secure container image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; /src/*.csproj .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;dotnet restore
&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; /src .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;dotnet publish &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-c&lt;/span&gt; Release &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-o&lt;/span&gt; ./output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unlike in the above example, this time we split the build process into multiple pieces. The &lt;code&gt;dotnet restore&lt;/code&gt; command uses NuGet to restore dependencies as well as project-specific tools that are specified in the project file. The dependency restore is also part of the &lt;code&gt;dotnet publish&lt;/code&gt; command, but separating it allows us to build the dependencies into a separate image layer. This shortens the time needed to build the image and reduces the download size, since the dependency layer is only rebuilt when the dependencies change.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;...
&lt;span class="k"&gt;RUN &lt;/span&gt;adduser &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--disabled-password&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--home&lt;/span&gt; /app &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--gecos&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; app &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; app /app
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; app&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To secure the runtime of our application, we need to execute it without root privileges. Because of this, we create a new user and change the user context using the USER instruction.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;...
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DOTNET_RUNNING_IN_CONTAINER=true \&lt;/span&gt;
  ASPNETCORE_URLS=http://+:8080
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because we run our app without root privileges, we need to expose it on a port above 1024. In this example, 8080 was chosen. With the ENV instruction, we expose further environment variables to our application process. &lt;em&gt;DOTNET_RUNNING_IN_CONTAINER=true&lt;/em&gt; is only an informational environment variable that lets a developer/application know that the process is running within a container. &lt;em&gt;ASPNETCORE_URLS=http://+:8080&lt;/em&gt; tells the runtime to expose the process on port 8080.&lt;/p&gt;

&lt;h2&gt;
  
  
  Smaller, smaller, smaller
&lt;/h2&gt;

&lt;p&gt;As already mentioned, the above example should fit most scenarios. The following example describes a way to build the smallest possible container image. Possible use cases are &lt;a href="https://azure.microsoft.com/de-de/services/iot-edge/"&gt;IoT Edge&lt;/a&gt; scenarios or environments that need optimized start times. Unfortunately, we also get some disadvantages, which I will discuss in detail below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; VERSION=3.1-alpine3.10&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/sdk:$VERSION AS build-env&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; /src .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;dotnet publish &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--runtime&lt;/span&gt; alpine-x64 &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--self-contained&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  /p:PublishTrimmed&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  /p:PublishSingleFile&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-c&lt;/span&gt; Release &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-o&lt;/span&gt; ./output
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/runtime-deps:$VERSION&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;adduser &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--disabled-password&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--home&lt;/span&gt; /app &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--gecos&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; app &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; app /app
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; app&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build-env /app/output .&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1 \&lt;/span&gt;
  DOTNET_RUNNING_IN_CONTAINER=true \
  ASPNETCORE_URLS=http://+:8080
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["./sample-mvc", "--urls", "http://0.0.0.0:8080"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once again, we take a closer look at the individual steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;...
&lt;span class="k"&gt;RUN &lt;/span&gt;dotnet publish &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--runtime&lt;/span&gt; alpine-x64 &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--self-contained&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  /p:PublishTrimmed&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  /p:PublishSingleFile&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-c&lt;/span&gt; Release &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;-o&lt;/span&gt; ./output
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The big difference from the previous example is that we build a self-contained application. Providing the parameter &lt;code&gt;--self-contained true&lt;/code&gt; forces the build to include all dependencies in the application artifact, which includes the .NET Core runtime. Because of this, we also need to define the runtime we would like to execute the binary in. This is done with the &lt;code&gt;--runtime alpine-x64&lt;/code&gt; parameter.&lt;/p&gt;

&lt;p&gt;Since the final image should be optimized for size, we define the &lt;code&gt;/p:PublishTrimmed=true&lt;/code&gt; flag, which advises the build process not to include any unused libraries. The &lt;code&gt;/p:PublishSingleFile=true&lt;/code&gt; flag packs the application and all its dependencies into a single executable. As a downside of trimming, you will have to declare dynamically loaded assemblies upfront to make sure that required libraries aren't trimmed away and therefore missing from the image. More details on this are available &lt;a href="https://docs.microsoft.com/en-us/dotnet/core/whats-new/dotnet-core-3-0"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A second disadvantage of this smaller image is that every code change results in a bigger image delta. This is because the code and the runtime are packed together in a single image layer. Every time the code changes, the whole layer needs to be rebuilt and redistributed to the systems running the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;...
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; mcr.microsoft.com/dotnet/core/runtime-deps:$VERSION&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because the application artifact is self-contained, we do not need to provide a runtime with the image. In this example, I have chosen the runtime-deps image based on Alpine Linux, which is stripped down to the minimum native dependencies needed to execute the application artifact.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;...
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1 \&lt;/span&gt;
  DOTNET_RUNNING_IN_CONTAINER=true \
  ASPNETCORE_URLS=http://+:8080
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another image size improvement is to use the globalization invariant mode. This mode is useful for applications that are not globalization-aware and can use the formatting conventions, casing conventions, and string comparison and sort order of the invariant culture. The globalization invariant mode is enabled via the &lt;em&gt;DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1&lt;/em&gt; environment variable. If your application requires globalization, you will need to install the &lt;a href="https://pkgs.alpinelinux.org/package/edge/main/x86/icu"&gt;ICU library&lt;/a&gt; and remove the above environment variable, which will increase your container image size by about 28 MB. You will find more details on the globalization invariant mode &lt;a href="https://github.com/dotnet/runtime/blob/master/docs/design/features/globalization-invariant-mode.md"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
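&lt;p&gt;If your application does require globalization on the Alpine-based image, the ICU installation mentioned above could look like this sketch (note that &lt;code&gt;apk&lt;/code&gt; needs root, so the instruction has to come before the USER definition):&lt;/p&gt;

```dockerfile
# install the ICU libraries from the Alpine package repository;
# this must run before the USER instruction because apk requires root
RUN apk add --no-cache icu-libs
# switch the globalization invariant mode off again
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false
```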

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;...
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["./sample-mvc", "--urls", "http://0.0.0.0:8080"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For self-contained applications, we need to change the ENTRYPOINT definition to run the binary itself.&lt;br&gt;
The size of this image is around 73 MB (including my sample application). Let's compare this to the other images:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an image based on a common multi-stage Dockerfile: 250 MB&lt;/li&gt;
&lt;li&gt;an image based on the above multi-stage Dockerfile: 124 MB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As already mentioned above: Which Dockerfile is most suitable for you depends on your use case. Smaller is not necessarily better.&lt;/p&gt;




&lt;p&gt;If you are planning to deploy your application to Azure Kubernetes Service or Azure Container Instances, you might also consider storing your images in Azure Container Registry. Azure Container Registry also supports building container images: you can include it in your build pipeline or invoke it manually using the &lt;code&gt;az acr build&lt;/code&gt; command.&lt;/p&gt;
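&lt;p&gt;As a sketch, the &lt;code&gt;az acr build&lt;/code&gt; call can be wrapped in a tiny helper; the registry and image names below are placeholders, and the helper only prints the command so it doubles as a dry run:&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical wrapper around 'az acr build'; all names are placeholders.
acr_build_cmd() {
  registry="$1"          # ACR name, e.g. myregistry
  image="$2"             # repository:tag, e.g. sample-mvc:1.0.0
  context="${3:-.}"      # build context, defaults to the current directory
  # print the command instead of executing it, so this doubles as a dry run
  echo "az acr build --registry $registry --image $image $context"
}

acr_build_cmd myregistry sample-mvc:1.0.0
# prints: az acr build --registry myregistry --image sample-mvc:1.0.0 .
```

&lt;p&gt;Piping the printed command to &lt;code&gt;sh&lt;/code&gt; (or dropping the echo) would execute the build remotely in your registry.&lt;/p&gt;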

&lt;p&gt;I hope these details help you to containerize your .NET Core application. As already mentioned, the above examples are based on best practices and might need to be customized to fit your needs. Check out &lt;a href="https://medium.com/@michaeldimoudis/hardening-asp-net-core-3-1-docker-images-f0c2ede1667f"&gt;Michael Dimoudis' post&lt;/a&gt; for details on how to harden your .NET Core container images.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>dotnet</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Policy and Governance for Kubernetes</title>
      <dc:creator>Nico Meisenzahl</dc:creator>
      <pubDate>Mon, 15 Jun 2020 17:27:11 +0000</pubDate>
      <link>https://dev.to/nmeisenzahl/policy-and-governance-for-kubernetes-3n4p</link>
      <guid>https://dev.to/nmeisenzahl/policy-and-governance-for-kubernetes-3n4p</guid>
      <description>&lt;p&gt;Before I talk about Policy and Governance for Kubernetes let’s briefly talk about policy and governance in general. In short, it means to provide a set of rules which define a guideline that either can be enforced or audited. So why do we need this? It is important because in a Cloud ecosystem decisions are made decentralized and also taken at a rapid pace. A governance model or policy becomes crucial to keep the entire organization on track. Those definitions can include but are not limited to, security baselines or consistency of resources and deployments.&lt;/p&gt;

&lt;p&gt;So, why do we need Governance and Policy for Kubernetes? Kubernetes provides Role-based Access Control (RBAC), which allows operators to define in a very granular manner which identity is allowed to create or manage which resource. But RBAC does not allow us to control the specification of those resources, which, as already mentioned, is necessary to define policy boundaries. Some examples are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;whitelist of trusted container registries and images&lt;/li&gt;
&lt;li&gt;required container security specifications&lt;/li&gt;
&lt;li&gt;required labels to group resources&lt;/li&gt;
&lt;li&gt;deny conflicting Ingress host resources&lt;/li&gt;
&lt;li&gt;deny publicly exposed LoadBalancer services&lt;/li&gt;
&lt;li&gt;…&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where Policy and Governance for Kubernetes comes in. But let me first introduce you to Open Policy Agent, the foundation for policy management on Kubernetes and even the whole cloud-native ecosystem.&lt;/p&gt;

&lt;h1&gt;
  
  
  Open Policy Agent
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.openpolicyagent.org/"&gt;Open Policy Agent (OPA)&lt;/a&gt; is an open-source project by &lt;a href="https://www.styra.com/"&gt;styra&lt;/a&gt;. It provides policy-based control for cloud-native environments using a unified toolset and framework and a declarative approach. Open Policy Agents allows decoupling policy declaration and management from the application code by either integrating the &lt;a href="https://www.openpolicyagent.org/docs/latest/integration/#integrating-with-the-go-api"&gt;OPA Golang library&lt;/a&gt; or calling the REST API of a collocated &lt;a href="https://www.openpolicyagent.org/docs/latest/integration/#integrating-with-the-rest-api"&gt;OPA daemon&lt;/a&gt; instance.&lt;/p&gt;

&lt;p&gt;With this in place, OPA can evaluate any JSON-based input against user-defined policies and mark the input as passing or failing. This allows Open Policy Agent to be seamlessly integrated with a variety of &lt;a href="https://www.openpolicyagent.org/docs/latest/ecosystem/"&gt;tools and projects&lt;/a&gt;. Some examples are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API and service authorization with &lt;a href="https://www.envoyproxy.io/"&gt;Envoy&lt;/a&gt;, &lt;a href="https://konghq.com/kong/"&gt;Kong&lt;/a&gt; or &lt;a href="https://docs.traefik.io/"&gt;Traefik&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Authorization policies for SQL, &lt;a href="https://kafka.apache.org/"&gt;Kafka&lt;/a&gt; and others&lt;/li&gt;
&lt;li&gt;Container Network authorization with &lt;a href="https://istio.io/"&gt;Istio&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Test policies for &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; infrastructure changes&lt;/li&gt;
&lt;li&gt;Policies for SSH and sudo&lt;/li&gt;
&lt;li&gt;Policy and Governance for &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gwvhbr-O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nh10wwfhbn6x4831vnd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gwvhbr-O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nh10wwfhbn6x4831vnd2.png" alt="Open Policy Agent"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open Policy Agent policies are written in a declarative policy language called Rego. Rego queries are claims about data stored in OPA. These queries can be used to define policies that enumerate data instances that violate the expected state of the system.&lt;/p&gt;

&lt;p&gt;You can use the &lt;a href="https://play.openpolicyagent.org/"&gt;Rego playground&lt;/a&gt; to get familiar with the policy language. The playground also contains a variety of examples that help you to get started writing your own queries.&lt;/p&gt;

&lt;h1&gt;
  
  
  OPA Gatekeeper — the Kubernetes implementation
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/open-policy-agent/gatekeeper"&gt;Open Policy Agent Gatekeeper&lt;/a&gt; got introduced by Google, Microsoft, Red Hat, and styra. It is a &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/"&gt;Kubernetes Admission Controller&lt;/a&gt; built around OPA to integrate it with the Kubernetes API server and enforcing policies defined by &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;Custom Resource Definitions (CRDs)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Gatekeeper webhook is invoked whenever a Kubernetes resource is created, updated, or deleted, which allows Gatekeeper to validate the request and permit or deny it. In addition, Gatekeeper can also audit existing resources. Policies as well as data can be replicated into the included OPA instance, enabling advanced queries that, for example, need access to objects in the cluster other than the object under current evaluation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ewHeMn1i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wzbg9mru935mg42uib3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ewHeMn1i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wzbg9mru935mg42uib3n.png" alt="Open Policy Agent Gatekeeper"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Get started with Gatekeeper
&lt;/h1&gt;

&lt;p&gt;This post will not cover how to install Gatekeeper. All steps as well as detailed information are available &lt;a href="https://github.com/open-policy-agent/gatekeeper"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After installing Gatekeeper you are ready to define your first policies. But let’s first review what the installation provides:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fxGKDZs5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d3qsiuqn30u4siuh8tko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fxGKDZs5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d3qsiuqn30u4siuh8tko.png" alt="Gatekeeper Deployments"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gatekeeper consists of two deployments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;gatekeeper-controller-manager&lt;/em&gt; is the Admission Controller that monitors the Kubernetes API server for resource changes, verifies them against the policy definitions, and permits or denies the requests.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;gatekeeper-audit&lt;/em&gt; is used to verify and audit existing resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Besides these two deployments, two Custom Resource Definitions (CRDs) are also created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;ConstraintTemplate&lt;/em&gt; is used to define a policy based on metadata and the actual Rego query.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Config&lt;/em&gt; is used to define which resources should be replicated to the OPA instance to allow advanced policy queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A first example
&lt;/h2&gt;

&lt;p&gt;In this example, we want to make sure that all labels required by the policy are present in the Kubernetes resource manifest.&lt;/p&gt;

&lt;p&gt;To do this, we first have to build our Rego query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) &amp;gt; 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It consists of a package containing a violation definition. The violation defines the input data, the condition to be matched, and a message that is returned in case of a violation.&lt;/p&gt;
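&lt;p&gt;To make the set arithmetic concrete, the same logic can be mirrored in a few lines of Python (a sketch for illustration only; Gatekeeper itself evaluates the Rego above, not this code):&lt;/p&gt;

```python
# Illustrative mirror of the Rego rule: a violation fires when the
# set of required labels minus the set of provided labels is non-empty.
def missing_labels(provided_labels, required_labels):
    """Return the violation message, or None if the policy passes."""
    provided = set(provided_labels)   # labels present on the object
    required = set(required_labels)   # labels demanded by the policy
    missing = required - provided     # Rego: missing := required - provided
    if missing:                       # Rego: count(missing) is non-zero
        return f"you must provide labels: {sorted(missing)}"
    return None                       # all required labels are present

# A namespace carrying only an "app" label violates a policy that
# requires a "gatekeeper" label:
print(missing_labels({"app": "demo"}, ["gatekeeper"]))
# you must provide labels: ['gatekeeper']
```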

&lt;p&gt;We now need to make Gatekeeper aware of our definition. This is done by wrapping our Rego query into a &lt;em&gt;ConstraintTemplate&lt;/em&gt; resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;templates.gatekeeper.sh/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConstraintTemplate&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8srequiredlabels&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;crd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;names&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;K8sRequiredLabels&lt;/span&gt;
        &lt;span class="na"&gt;listKind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;K8sRequiredLabelsList&lt;/span&gt;
        &lt;span class="na"&gt;plural&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8srequiredlabels&lt;/span&gt;
        &lt;span class="na"&gt;singular&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8srequiredlabels&lt;/span&gt;
      &lt;span class="na"&gt;validation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;openAPIV3Schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;array&lt;/span&gt;
              &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
  &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admission.k8s.gatekeeper.sh&lt;/span&gt;
      &lt;span class="na"&gt;rego&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;package k8srequiredlabels&lt;/span&gt;

        &lt;span class="s"&gt;violation[{"msg": msg, "details": {"missing_labels": missing}}] {&lt;/span&gt;
          &lt;span class="s"&gt;provided := {label | input.review.object.metadata.labels[label]}&lt;/span&gt;
          &lt;span class="s"&gt;required := {label | label := input.parameters.labels[_]}&lt;/span&gt;
          &lt;span class="s"&gt;missing := required - provided&lt;/span&gt;
          &lt;span class="s"&gt;count(missing) &amp;gt; 0&lt;/span&gt;
          &lt;span class="s"&gt;msg := sprintf("you must provide labels: %v", [missing])&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;ConstraintTemplate&lt;/em&gt; resource consists of the Rego query as well as some metadata. As soon as we apply the &lt;em&gt;ConstraintTemplate&lt;/em&gt; resource, a new CRD is created, in our case called &lt;em&gt;K8sRequiredLabels&lt;/em&gt;. This allows us to easily interact with our policy later on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TfMNfZD---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/l76abznpwaiv5mqtxmax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TfMNfZD---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/l76abznpwaiv5mqtxmax.png" alt="Gatekeeper CRDs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have now defined our policy, but you may have noticed that we have not yet defined which labels we require our Kubernetes resources to have, nor on which Kubernetes resources we would like this policy to be applied. To do so, we create another manifest, a &lt;em&gt;Constraint&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;constraints.gatekeeper.sh/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;K8sRequiredLabels&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ns-must-have-gk&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kinds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;kinds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Namespace"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gatekeeper"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;Constraint&lt;/em&gt; manifest allows us to enforce the policy defined above by providing our parameters as well as the Kubernetes resources it should act on. In our example, we enforce this policy for all Namespace resources and make sure that they all contain a gatekeeper label.&lt;/p&gt;

&lt;p&gt;If we now apply the above manifest, we can no longer create a namespace without providing a gatekeeper label; instead, our configured message is returned. If we provide the label, the namespace is created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HI1VfAPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bm8eaufpqbgzpu42dqda.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HI1VfAPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bm8eaufpqbgzpu42dqda.png" alt="Gatekeeper example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/open-policy-agent/gatekeeper/tree/master/library/general"&gt;Gatekeeper repository&lt;/a&gt; provides some more examples which can be used to get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Policy auditing
&lt;/h2&gt;

&lt;p&gt;Let’s talk about auditing. Why do we need it? Well, we may have created resources before we created policies, and we would like to review them to make sure they meet our policies. Or maybe we just want to audit our cluster without enforcing the policies.&lt;/p&gt;

&lt;p&gt;For this we can use the &lt;em&gt;K8sRequiredLabels&lt;/em&gt; custom resource we created above and retrieve the auditing information from the Kubernetes API server using &lt;code&gt;kubectl&lt;/code&gt; or any other tool of our choice. In the example below I used &lt;code&gt;kubectl describe K8sRequiredLabels ns-must-have-gk&lt;/code&gt; to get the following output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D0LHI_ER--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ydaykyon9y56n78ronnt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D0LHI_ER--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ydaykyon9y56n78ronnt.png" alt="Gatekeeper auditing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Furthermore, Gatekeeper provides Prometheus endpoints that can be used to collect metrics and build dashboards and alerts on top of them. Both &lt;em&gt;gatekeeper-controller-manager&lt;/em&gt; and &lt;em&gt;gatekeeper-audit&lt;/em&gt; expose such an endpoint, which serves the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;gatekeeper_constraints: Current number of constraints&lt;/li&gt;
&lt;li&gt;gatekeeper_constraint_templates: Current number of constraint templates&lt;/li&gt;
&lt;li&gt;gatekeeper_constraint_template_ingestion_count: The number of constraint template ingestion actions&lt;/li&gt;
&lt;li&gt;gatekeeper_constraint_template_ingestion_duration_seconds: Constraint Template ingestion duration distribution&lt;/li&gt;
&lt;li&gt;gatekeeper_request_count: The number of requests that are routed to admission webhook from the API server&lt;/li&gt;
&lt;li&gt;gatekeeper_request_duration_seconds: Admission request duration distribution&lt;/li&gt;
&lt;li&gt;gatekeeper_violations: The number of audit violations per constraint detected in the last audit cycle&lt;/li&gt;
&lt;li&gt;gatekeeper_audit_last_run_time: The epoch timestamp since the last audit runtime&lt;/li&gt;
&lt;li&gt;gatekeeper_audit_duration_seconds: Audit cycle duration distribution&lt;/li&gt;
&lt;/ul&gt;
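&lt;p&gt;As a sketch of what such a scrape returns, the following snippet parses a few lines of Prometheus exposition-format text (the metric names match the list above; the sample values and label sets are made up):&lt;/p&gt;

```python
def parse_metrics(text):
    """Parse simple Prometheus exposition-format lines into
    {metric_name: [(labels_string, value), ...]}."""
    metrics = {}
    for line in text.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comments
            continue
        name_part, value = line.rsplit(" ", 1)
        if "{" in name_part:
            name, labels = name_part.split("{", 1)
            labels = labels.rstrip("}")
        else:
            name, labels = name_part, ""
        metrics.setdefault(name, []).append((labels, float(value)))
    return metrics

# Illustrative scrape output; values and labels are assumptions.
sample = """
# HELP gatekeeper_constraints Current number of constraints
gatekeeper_constraints{enforcement_action="deny",status="active"} 1
gatekeeper_violations{enforcement_action="deny"} 3
"""
m = parse_metrics(sample)
print(m["gatekeeper_violations"][0][1])  # 3.0
```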

&lt;p&gt;All you have to do is make your Prometheus instance aware of them. The easiest way is to customize both deployments and add the well-known annotations to the pod specification to enable Prometheus scraping:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;prometheus.io/port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8888"&lt;/span&gt;
  &lt;span class="na"&gt;prometheus.io/scrape&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this in place, you can collect the above Gatekeeper metrics and use them to build auditing dashboards as well as alerting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6gEvQR2g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y3rtylzpv4y3drkazvum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6gEvQR2g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y3rtylzpv4y3drkazvum.png" alt="Gatekeeper with Grafana"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Happy auditing! 😃&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>cloudnative</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
