<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: coulof</title>
    <description>The latest articles on DEV Community by coulof (@coulof).</description>
    <link>https://dev.to/coulof</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F425083%2F5d991f81-1eba-472c-bae1-48630edea12f.png</url>
      <title>DEV Community: coulof</title>
      <link>https://dev.to/coulof</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/coulof"/>
    <language>en</language>
    <item>
      <title>Implement Tanzu on private networks with VyOS</title>
      <dc:creator>coulof</dc:creator>
      <pubDate>Mon, 04 Jan 2021 08:00:00 +0000</pubDate>
      <link>https://dev.to/coulof/implement-tanzu-on-private-networks-with-vyos-4lb8</link>
      <guid>https://dev.to/coulof/implement-tanzu-on-private-networks-with-vyos-4lb8</guid>
      <description>&lt;h1&gt;
  
  
  TL; DR
&lt;/h1&gt;

&lt;p&gt;With a virtual router (here &lt;a href="https://vyos.io/"&gt;VyOS&lt;/a&gt;) and a proper routing table, it is possible to implement &lt;a href="https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-tanzu-k8s-clusters-create.html"&gt;VMware Tanzu&lt;/a&gt; on private networks without NSX-T.&lt;/p&gt;

&lt;h1&gt;
  
  
  The premise
&lt;/h1&gt;

&lt;p&gt;I needed to evaluate VMware Tanzu's basic capabilities with the &lt;a href="https://github.com/dell/csi-powerscale"&gt;CSI driver for PowerScale&lt;/a&gt;. To install Tanzu, also named Workload Management in vCenter, I followed this &lt;a href="https://www.youtube.com/watch?v=XjCbIHlaMR4"&gt;video&lt;/a&gt; and these posts: &lt;a href="https://cormachogan.com/2020/09/25/deploy-ha-proxy-for-vsphere-with-tanzu/"&gt;1&lt;/a&gt;, &lt;a href="https://cormachogan.com/2020/09/28/enabling-vsphere-with-tanzu-using-ha-proxy/"&gt;2&lt;/a&gt;, and &lt;a href="https://cormachogan.com/2020/09/29/deploying-tanzu-kubernetes-guest-cluster-in-vsphere-with-tanzu/"&gt;3&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As a prerequisite, Tanzu needs at least two networks: one for the management cluster (also named the Supervisor cluster) and one for the workloads (i.e. the on-demand clusters that run the workloads).&lt;/p&gt;

&lt;p&gt;In my lab, I only have one routable VLAN, with a limited number of IPs. For other activities, like &lt;a href="///anthos-baremetal-1.0.html"&gt;Anthos validation&lt;/a&gt;, I use a single private network behind a NAT; unfortunately, that configuration is not enough here.&lt;/p&gt;

&lt;p&gt;For Tanzu, I decided to use three networks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The frontend network (&lt;code&gt;10.247.247.0/24&lt;/code&gt;), which is routable to the external world and will have the Load-Balancer VIP&lt;/li&gt;
&lt;li&gt;The management network (&lt;code&gt;10.0.0.0/24&lt;/code&gt;), for the vSphere management of the supervisor cluster&lt;/li&gt;
&lt;li&gt;The workload network (&lt;code&gt;10.0.1.0/24&lt;/code&gt;), to host all Tanzu clusters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_iy7LRsp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://0.0.0.0:4321/assets/img/tanzu/vmware-dswitch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_iy7LRsp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://0.0.0.0:4321/assets/img/tanzu/vmware-dswitch.png" alt="Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem is, I do not have an NSX-T license to manage these private networks. Luckily for us, once again Linux is here to save the day.&lt;/p&gt;

&lt;h1&gt;
  
  
  The implementation
&lt;/h1&gt;

&lt;p&gt;The trick here is to use a virtual machine that acts as a router for the different networks. My choice went to &lt;a href="https://vyos.io/products/#vyos-platform"&gt;VyOS&lt;/a&gt;, a Debian-based distribution designed for routing. It can act as a firewall or a VPN endpoint, do QoS, and more. The configuration is done with commands similar to what you have with Cisco IOS or other proprietary networking devices.&lt;/p&gt;

&lt;p&gt;In this case, I use only the routing and NAT features. Out of the box, VyOS routes between its connected interfaces, so after the NIC configuration I just had to configure the NAT for my two private networks.&lt;/p&gt;

&lt;p&gt;The final configuration is pretty much the following sketch (the interface names, addresses, and rule numbers are illustrative and will differ in your environment):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set interfaces ethernet eth0 address '10.247.247.1/24'
set interfaces ethernet eth0 description 'frontend'
set interfaces ethernet eth1 address '10.0.0.1/24'
set interfaces ethernet eth1 description 'management'
set interfaces ethernet eth2 address '10.0.1.1/24'
set interfaces ethernet eth2 description 'workload'
set nat source rule 10 outbound-interface 'eth0'
set nat source rule 10 source address '10.0.0.0/24'
set nat source rule 10 translation address 'masquerade'
set nat source rule 20 outbound-interface 'eth0'
set nat source rule 20 source address '10.0.1.0/24'
set nat source rule 20 translation address 'masquerade'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With that configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;any VM deployed with Tanzu can connect to the external world through the NAT via the Dell LAN segment&lt;/li&gt;
&lt;li&gt;any VM with a single NIC can talk to the same segment via the DSwitch or to the other network via the VyOS VM&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Ping issue
&lt;/h2&gt;

&lt;p&gt;Tanzu explicitly states that Supervisor and Workload clusters must be able to connect to the HA Proxy data plane.&lt;/p&gt;

&lt;p&gt;On my setup I used &lt;a href="https://github.com/haproxytech/vmware-haproxy"&gt;VMware HA Proxy&lt;/a&gt; and deployed the image with three networks. The Data Plane API listens on the Management network, on the default port 5556.&lt;/p&gt;

&lt;p&gt;Supervisor cluster deployment went well but during the workload cluster creation I got the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unexpected error while reconciling control plane endpoint for my-cluster: failed to reconcile loadbalanced endpoint for WCPCluster csi/my-cluster: failed to get control plane endpoint for Cluster csi/my-cluster: Virtual Machine Service LB does not yet have VIP assigned: VirtualMachine Service LoadBalancer does not have any Ingresses

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a matter of fact, I could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ping any IP from the Load-Balancer&lt;/li&gt;
&lt;li&gt;ping the Load-Balancer IP from a node on the same network&lt;/li&gt;
&lt;li&gt;ping any Supervisor IP from a workload node, and the other way around&lt;/li&gt;
&lt;li&gt;but I could not ping the proxy's Management IP from a workload node&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem is that the ICMP echo request goes through the router, but the response is issued directly on the DSwitch: &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9UKYLoBJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://0.0.0.0:4321/assets/img/tanzu/tanzu_ping_error.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9UKYLoBJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://0.0.0.0:4321/assets/img/tanzu/tanzu_ping_error.png" alt="Ping error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To solve that, the trick is to make sure all the requests to HAProxy are answered through the router. To do so, there are three things to do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set the workload IP with a &lt;code&gt;/32&lt;/code&gt; mask, to force this route to apply to this device only, in &lt;code&gt;/etc/systemd/network/10-workload.network&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Update the gateway of the Management NIC for the two networks (management &amp;amp; workload) in &lt;code&gt;/etc/systemd/network/10-management.network&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add a default route to the external world with a higher metric in &lt;code&gt;/etc/systemd/network/10-frontend.network&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below are the NIC configurations, sketched with illustrative addresses and interface names (adapt them to your deployment); you can edit them and then apply with a &lt;code&gt;systemctl restart systemd-networkd&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/systemd/network/10-workload.network
[Match]
Name=workload

[Network]
# /32 mask: this address applies to this device only
Address=10.0.1.5/32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/systemd/network/10-management.network
[Match]
Name=management

[Network]
Address=10.0.0.5/24

[Route]
# route both private networks through the VyOS router
Gateway=10.0.0.1
Destination=10.0.0.0/24

[Route]
Gateway=10.0.0.1
Destination=10.0.1.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/systemd/network/10-frontend.network
[Match]
Name=frontend

[Network]
Address=10.247.247.5/24

[Route]
# default route to the external world, with a higher metric
Gateway=10.247.247.1
Metric=200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now the ICMP echo reply goes through the same route and the ping succeeds: &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ClCyqhn4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://0.0.0.0:4321/assets/img/tanzu/tanzu_ping.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ClCyqhn4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://0.0.0.0:4321/assets/img/tanzu/tanzu_ping.png" alt="Ping success"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;For basic features (like NAT, static or dynamic routing, DHCP, firewall, etc.), VyOS is an excellent virtual router that can get you a long way if, like me, you lack an NSX-T license.&lt;/p&gt;

</description>
      <category>vmware</category>
      <category>linux</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Use cert-manager with Karavi Observability</title>
      <dc:creator>coulof</dc:creator>
      <pubDate>Tue, 29 Dec 2020 08:00:00 +0000</pubDate>
      <link>https://dev.to/coulof/use-cert-manager-with-for-karavi-observability-32f9</link>
      <guid>https://dev.to/coulof/use-cert-manager-with-for-karavi-observability-32f9</guid>
      <description>&lt;h1&gt;
  
  
  TL; DR
&lt;/h1&gt;

&lt;p&gt;Check out how the Prometheus metrics &amp;amp; Grafana dashboards for CSI PowerFlex work in that &lt;a href="https://youtu.be/Fmmv-nP06QU"&gt;karavi observability video&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Learn more about &lt;a href="https://github.com/dell/karavi"&gt;karavi&lt;/a&gt; and &lt;a href="https://github.com/dell/karavi-observability"&gt;karavi-observability&lt;/a&gt; from their respective repositories. And use these &lt;a href="https://gist.github.com/coulof/e036bafcf619f0d0d382e1327b804016"&gt;configurations&lt;/a&gt; to use &lt;a href="http://cert-manager.io/"&gt;cert-manager&lt;/a&gt; as a certificate source for the karavi components.&lt;/p&gt;

&lt;h1&gt;
  
  
  The premise
&lt;/h1&gt;

&lt;p&gt;Dell Technologies launched the Karavi project in December 2020 with the objective of complementing functionalities not covered by the &lt;a href="https://kubernetes-csi.github.io/docs/"&gt;CSI specification&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Karavi focuses on three domains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Observability&lt;/li&gt;
&lt;li&gt;Data mobility&lt;/li&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first project brought to life as a tech-preview is the metrics &amp;amp; topology collection for PowerFlex Kubernetes volumes.&lt;/p&gt;

&lt;p&gt;The video below will give you an introduction to the architecture, the use-cases, and what is coming in the future.&lt;/p&gt;

&lt;p&gt;At the time of publication of this post, the communication between the OpenTelemetry component and the Prometheus server, as well as the communication between the Karavi topology component and Grafana, are secured with TLS by default: &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lz9gSfvC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage-chaos.io/assets/img/karavi-obs-vxflexos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lz9gSfvC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage-chaos.io/assets/img/karavi-obs-vxflexos.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As indicated &lt;a href="https://github.com/dell/karavi-observability/blob/main/docs/GETTING_STARTED_GUIDE.md"&gt;in the documentation&lt;/a&gt;, you have to supply certificates to the helm installer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install karavi-observability dell/karavi-observability -n karavi --create-namespace \
    --set-file karaviTopology.certificateFile=&amp;lt;location-of-karavi-topology-certificate-file&amp;gt; \
    --set-file karaviTopology.privateKeyFile=&amp;lt;location-of-karavi-topology-private-key-file&amp;gt; \
    --set-file otelCollector.certificateFile=&amp;lt;location-of-otel-collector-certificate-file&amp;gt; \
    --set-file otelCollector.privateKeyFile=&amp;lt;location-of-otel-collector-private-key-file&amp;gt; \
    --set karaviMetricsPowerflex.powerflexPassword=&amp;lt;base64-encoded-password&amp;gt; \
    --set karaviMetricsPowerflex.powerflexUser=&amp;lt;base64-encoded-username&amp;gt; \
    --set karaviMetricsPowerflex.powerflexEndpoint=https://&amp;lt;powerflex-endpoint&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command assumes you generated the different certificates with correct DNS names. For example with &lt;code&gt;openssl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl req -new -newkey rsa:2048 -nodes -keyout karavi-topology.key -out karavi-topology.csr -subj '/CN=karavi-topology'
openssl x509 -req -in karavi-topology.csr -signkey karavi-topology.key -out karavi-topology.crt
openssl req -new -newkey rsa:2048 -nodes -keyout otel-collector.key -out otel-collector.csr -subj '/CN=otel-collector'
openssl x509 -req -in otel-collector.csr -signkey otel-collector.key -out otel-collector.crt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Obviously, anytime a certificate expires or you move the components to a different namespace, you have to manually generate new certificates.&lt;/p&gt;

&lt;p&gt;So let us see how to take advantage of &lt;a href="http://cert-manager.io/"&gt;cert-manager&lt;/a&gt; to automate that process.&lt;/p&gt;

&lt;h1&gt;
  
  
  The implementation
&lt;/h1&gt;

&lt;p&gt;&lt;a href="http://cert-manager.io/"&gt;cert-manager&lt;/a&gt; is the goto Kubernetes certificate management tool. It bundled with some distribution like GKE. If you need to install it few &lt;a href="https://cert-manager.io/docs/installation/kubernetes/#installing-with-regular-manifests"&gt;&lt;code&gt;kubectl apply&lt;/code&gt;&lt;/a&gt; will do the job.&lt;/p&gt;

&lt;p&gt;The first step to use certificates delivered by cert-manager is to configure an Issuer. The easiest is to use a &lt;code&gt;SelfSigned&lt;/code&gt; issuer, which does not require a dedicated certificate authority; a minimal definition (the issuer name here is illustrative) looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: karavi
spec:
  selfSigned: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The next step is to define the parameters for your certificate (size, rotation, DNS names, etc.); here is, for example, the kind of definition we have for the &lt;code&gt;otel-collector&lt;/code&gt; (a sketch; the secret and issuer names must match your setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: otel-collector
  namespace: karavi
spec:
  secretName: otel-collector-tls
  duration: 2160h   # 90 days
  renewBefore: 360h # 15 days
  privateKey:
    algorithm: RSA
    size: 2048
  dnsNames:
    - otel-collector
  issuerRef:
    name: selfsigned-issuer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The same for karavi-topology can be downloaded &lt;a href="https://gist.github.com/coulof/e036bafcf619f0d0d382e1327b804016#file-topology-cert-manager-yaml"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The next step is to patch the deployments (&lt;code&gt;otel-collector&lt;/code&gt; and &lt;code&gt;karavi-topology&lt;/code&gt;) to use the new certificate stored as a secret.&lt;/p&gt;

&lt;p&gt;You can choose to do it directly in the &lt;a href="https://github.com/dell/helm-charts/tree/main/charts/karavi-metrics-powerflex"&gt;karavi-metrics-powerflex&lt;/a&gt; &amp;amp; &lt;a href="https://github.com/dell/helm-charts/tree/main/charts/karavi-topology"&gt;karavi-topology&lt;/a&gt; Helm charts, or patch them in Kubernetes if they are already deployed.&lt;/p&gt;

&lt;p&gt;For example, with the &lt;code&gt;otel-collector&lt;/code&gt;, the patch swaps the certificate volume for the secret issued by cert-manager; its shape is roughly the following (the volume name must match the one expected by the chart, and the secret name the one from your Certificate):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  template:
    spec:
      volumes:
        - name: tls-secret
          secret:
            secretName: otel-collector-tls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For &lt;code&gt;karavi-topology&lt;/code&gt;, the patch has the same shape (again, the volume and secret names must match your chart and Certificate):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  template:
    spec:
      volumes:
        - name: karavi-topology-secret
          secret:
            secretName: karavi-topology-tls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, apply the patches to the deployments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl patch deployments.apps -n karavi otel-collector --patch "$(cat otel-cert_manager.patch)"
kubectl patch deployments.apps -n karavi karavi-topology --patch "$(cat karavi-topology-cert_manager.patch)"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Go further
&lt;/h1&gt;

&lt;p&gt;If you want to learn more about Karavi observability, you can check that &lt;a href="https://volumes.blog/2020/12/17/introducing-project-karavi-extending-the-kubernetes-csi-capabilities/"&gt;other blog post&lt;/a&gt; or ask for help on the &lt;a href="https://www.dell.com/community/Containers/"&gt;Dell container community forum&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>prometheus</category>
      <category>dell</category>
      <category>csi</category>
    </item>
    <item>
      <title>PersistentVolume static provisioning</title>
      <dc:creator>coulof</dc:creator>
      <pubDate>Sun, 01 Nov 2020 07:00:00 +0000</pubDate>
      <link>https://dev.to/coulof/persistentvolume-static-provisioning-40ag</link>
      <guid>https://dev.to/coulof/persistentvolume-static-provisioning-40ag</guid>
      <description>&lt;h1&gt;
  
  
  TL; DR
&lt;/h1&gt;

&lt;p&gt;In this article, we will discuss and present a script (&lt;a href="https://github.com/coulof/dell-csi-static-pv"&gt;ingest-static-pv.sh&lt;/a&gt;) for Persistent Volume &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#static"&gt;static provisioning&lt;/a&gt; of Dell CSI Drivers for &lt;a href="https://github.com/dell/csi-powermax"&gt;PowerMax&lt;/a&gt;, &lt;a href="https://github.com/dell/csi-powerstore"&gt;PowerStore&lt;/a&gt;, &lt;a href="https://github.com/dell/csi-powerscale"&gt;PowerScale&lt;/a&gt;, &lt;a href="https://github.com/dell/csi-vxflexos"&gt;PowerFlex&lt;/a&gt;, and &lt;a href="https://github.com/dell/csi-unity"&gt;Unity&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  The premise
&lt;/h1&gt;

&lt;p&gt;As part of an OpenShift migration project from one cluster to a new one, we wanted to ease the transition by loading the existing persistent storage in the new cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TL; DR&lt;/li&gt;
&lt;li&gt;The premise&lt;/li&gt;
&lt;li&gt;
Concepts for static provisioning

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PersistentVolume&lt;/code&gt; static provisioning&lt;/li&gt;
&lt;li&gt;&lt;code&gt;reclaimPolicy&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Provisioner&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;volumeHandle&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;PowerMax&lt;/li&gt;
&lt;li&gt;PowerStore&lt;/li&gt;
&lt;li&gt;PowerScale&lt;/li&gt;
&lt;li&gt;PowerFlex&lt;/li&gt;
&lt;li&gt;Unity&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ingest-static-pv.sh&lt;/code&gt; ; step by step with PowerMax

&lt;ul&gt;
&lt;li&gt;Get the volume details&lt;/li&gt;
&lt;li&gt;Confirm the Provisioner and StorageClass&lt;/li&gt;
&lt;li&gt;Prepare the &lt;code&gt;volumeHandle&lt;/code&gt; value&lt;/li&gt;
&lt;li&gt;Dry-run&lt;/li&gt;
&lt;li&gt;And run it for real !&lt;/li&gt;
&lt;li&gt;Map the PVC to a Pod&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Concepts for static provisioning
&lt;/h1&gt;

&lt;p&gt;Before we dive into the implementation, let us review some Kubernetes concepts that back the static provisioning.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;PersistentVolume&lt;/code&gt; static provisioning
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#static"&gt;static provisioning&lt;/a&gt;, as opposite to &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#dynamic"&gt;dynamic provisioning&lt;/a&gt;, is the action of creating &lt;code&gt;PersistentVolume&lt;/code&gt; upfront to they are ready to be consumed later by a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;reclaimPolicy&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Each &lt;code&gt;StorageClass&lt;/code&gt; has a &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming"&gt;&lt;code&gt;reclaimPolicy&lt;/code&gt;&lt;/a&gt; that tells Kubernetes what to do with a volume once it is released. All the Dell drivers support the &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain"&gt;&lt;code&gt;Retain&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete"&gt;&lt;code&gt;Delete&lt;/code&gt;&lt;/a&gt; policies.&lt;/p&gt;

&lt;p&gt;If set to &lt;code&gt;Delete&lt;/code&gt;, the Dell drivers will remove the volume and the other objects (like the exports, quotas, etc.) on the backend array. If set to &lt;code&gt;Retain&lt;/code&gt;, only the Kubernetes objects (&lt;code&gt;PersistentVolume&lt;/code&gt;, &lt;code&gt;VolumeAttachment&lt;/code&gt;) will be deleted. With &lt;code&gt;Retain&lt;/code&gt;, it is up to the storage administrator to clean up or not.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;reclaimPolicy&lt;/code&gt; is inherited by the &lt;code&gt;PersistentVolume&lt;/code&gt; under the attribute &lt;code&gt;persistentVolumeReclaimPolicy&lt;/code&gt;. It is possible to change a &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/"&gt;&lt;code&gt;persistentVolumeReclaimPolicy&lt;/code&gt;&lt;/a&gt; at any point in time with &lt;code&gt;kubectl edit pv [my_pv]&lt;/code&gt; or with a command like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl patch pv [my_pv] -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;code&gt;Provisioner&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner"&gt;provisioner&lt;/a&gt; gives the driver to be used for the volume provisioning. The Dell drivers defaults are:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
isilon (default) csi-isilon.dellemc.com Delete Immediate true 15d
powermax (default) csi-powermax.dellemc.com Delete Immediate true 46d
powermax-xfs csi-powermax.dellemc.com Delete Immediate true 46d
powerstore (default) csi-powerstore.dellemc.com Delete WaitForFirstConsumer true 22h
powerstore-nfs csi-powerstore.dellemc.com Delete WaitForFirstConsumer true 22h
powerstore-xfs csi-powerstore.dellemc.com Delete WaitForFirstConsumer true 22h
unity (default) csi-unity.dellemc.com Delete Immediate true 20d
unity-iscsi csi-unity.dellemc.com Delete Immediate true 20d
unity-nfs csi-unity.dellemc.com Delete Immediate true 20d
vxflexos (default) csi-vxflexos.dellemc.com Delete WaitForFirstConsumer true 29d
vxflexos-xfs csi-vxflexos.dellemc.com Delete WaitForFirstConsumer true 29d

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/dell/csi-powerstore"&gt;csi-powerstore&lt;/a&gt; and &lt;a href="https://github.com/dell/csi-powermax"&gt;csi-powermax&lt;/a&gt; drivers allow you to tweak the provisioner. It enables you to spin multiple instances of the driver in the same cluster and, therefore, connect to different array or create more granular &lt;code&gt;StorageClass&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To do so you can, during the installation, edit the variable &lt;a href="https://github.com/dell/csi-powerstore/blob/09afdf03a3cea66ed0c3795bb8d8714474cf57b9/helm/csi-powerstore/values.yaml#L2"&gt;&lt;code&gt;driverName&lt;/code&gt;&lt;/a&gt; for PowerStore and the variable &lt;a href="https://github.com/dell/csi-powermax/blob/f16597ff8f2c725e552096fff8ab3b5c6c4b03a5/helm/csi-powermax/values.yaml#L4"&gt;&lt;code&gt;customDriverName&lt;/code&gt;&lt;/a&gt; for PowerMax.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;volumeHandle&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;volumeHandle&lt;/code&gt; is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. Any call by the CSI drivers on a volume will reference that unique id.&lt;/p&gt;

&lt;p&gt;What follows is probably the most important piece of this article. We will list, for each Dell driver, how the &lt;code&gt;volumeHandle&lt;/code&gt; is constructed and how to load an existing volume as a PV.&lt;/p&gt;

&lt;h3&gt;
  
  
  PowerMax
&lt;/h3&gt;

&lt;p&gt;The PowerMax &lt;code&gt;volumeHandle&lt;/code&gt; consists of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    csi-&amp;lt;Cluster Prefix&amp;gt;-&amp;lt;Volume Prefix&amp;gt;-&amp;lt;Volume Name&amp;gt;-&amp;lt;Symmetrix ID&amp;gt;-&amp;lt;Symmetrix Vol ID&amp;gt;
    1 2 3 4 5 6

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;csi-&lt;/strong&gt; is a hardcoded value&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Prefix&lt;/strong&gt; is the value given during the installation of the driver in the variable named &lt;a href="https://github.com/dell/csi-powermax/blob/f16597ff8f2c725e552096fff8ab3b5c6c4b03a5/helm/csi-powermax/values.yaml#L40"&gt;&lt;code&gt;clusterPrefix&lt;/code&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volume Prefix&lt;/strong&gt; is the value given during the installation of the driver in the variable named &lt;a href="https://github.com/dell/csi-powermax/blob/f16597ff8f2c725e552096fff8ab3b5c6c4b03a5/helm/csi-powermax/values.yaml#L44"&gt;&lt;code&gt;volumeNamePrefix&lt;/code&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volume Name&lt;/strong&gt; is a random UUID given by the controller sidecar.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Symmetrix ID&lt;/strong&gt; is the twelve-character-long PowerMax identifier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Symmetrix Vol ID&lt;/strong&gt; is the LUN identifier.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;code&gt;volumeHandle&lt;/code&gt; can be found in Unisphere for PowerMax under &lt;em&gt;Storage &amp;gt; Volumes&lt;/em&gt;. Highlighting the volume will populate the window on the left showing this information.&lt;/p&gt;

&lt;p&gt;The construction of the &lt;code&gt;volumeHandle&lt;/code&gt; is given &lt;a href="https://github.com/dell/csi-powermax/blob/v1.4.0/service/controller.go#L720"&gt;here&lt;/a&gt;, and you can test that your &lt;code&gt;volumeHandle&lt;/code&gt; is valid on &lt;a href="https://rubular.com/r/TSQDPIDss2K9Hb"&gt;rubular&lt;/a&gt; or directly with &lt;code&gt;STORAGECLASS=powermax VOLUMEHANDLE=help ./ingest-static-pv.sh&lt;/code&gt;.&lt;/p&gt;
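&lt;p&gt;The assembly of those six components can be sketched in a few lines of Python (the function name is mine, and the sample values are the ones used in the walkthrough later in this post):&lt;/p&gt;

```python
# Sketch: assemble a PowerMax volumeHandle from its six components.
# "csi-" is hardcoded; the remaining parts are joined with dashes.
def powermax_volume_handle(cluster_prefix, volume_prefix, volume_name,
                           symmetrix_id, symmetrix_vol_id):
    return "csi-{}-{}-{}-{}-{}".format(cluster_prefix, volume_prefix,
                                       volume_name, symmetrix_id,
                                       symmetrix_vol_id)

# Values from the PowerMax walkthrough below
print(powermax_volume_handle("fdg", "pmax", "9e954fcdfa",
                             "000197900704", "0017C"))
# csi-fdg-pmax-9e954fcdfa-000197900704-0017C
```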

&lt;h3&gt;
  
  
  PowerStore
&lt;/h3&gt;

&lt;p&gt;The PowerStore &lt;code&gt;volumeHandle&lt;/code&gt; is simply the volume's or the NFS share's ID. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    volumeHandle: 880fb26c-9a94-4565-9e6e-c0bf2b029ecc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The easiest way to get the identifier is to connect to the WebUI and copy the UUID from the URL, like here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jBwOz_ZA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage-chaos.io/assets/img/static-pv/03_powerstore_uid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jBwOz_ZA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage-chaos.io/assets/img/static-pv/03_powerstore_uid.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can test that your &lt;code&gt;volumeHandle&lt;/code&gt; is a valid UUID on &lt;a href="https://rubular.com/r/7HgR3Zzn6VPMA1"&gt;rubular&lt;/a&gt; or directly with &lt;code&gt;STORAGECLASS=powerstore VOLUMEHANDLE=help ./ingest-static-pv.sh&lt;/code&gt;.&lt;/p&gt;
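&lt;p&gt;Since the handle is a plain UUID, a quick offline sanity check is to parse it, for example with Python's standard &lt;code&gt;uuid&lt;/code&gt; module (the sample ID is the one shown above):&lt;/p&gt;

```python
import uuid

# Parsing raises ValueError if the handle is not a well-formed UUID.
handle = "880fb26c-9a94-4565-9e6e-c0bf2b029ecc"
parsed = uuid.UUID(handle)
print(str(parsed) == handle)
# True
```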

&lt;h3&gt;
  
  
  PowerScale
&lt;/h3&gt;

&lt;p&gt;The PowerScale/Isilon &lt;code&gt;volumeHandle&lt;/code&gt; is the volume name, the Export ID and the Access Zone separated by &lt;code&gt;=_=_=&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    volumeHandle: PowerScaleStaticVolTest=_=_=176=_=_=System

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This information can be found in the PowerScale OneFS GUI under &lt;em&gt;Protocols &amp;gt; Unix sharing (NFS) &amp;gt; NFS Exports&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The construction of the &lt;code&gt;volumeHandle&lt;/code&gt; is given &lt;a href="https://github.com/dell/csi-powerscale/blob/3c6bb929adbcfe3118387be566576cf812b638c5/common/utils/utils.go#L341"&gt;here&lt;/a&gt;, and you can test that your &lt;code&gt;volumeHandle&lt;/code&gt; is valid on &lt;a href="https://rubular.com/r/4iQnXaX26sipCb"&gt;rubular&lt;/a&gt; or directly with &lt;code&gt;STORAGECLASS=isilon VOLUMEHANDLE=help ./ingest-static-pv.sh&lt;/code&gt;.&lt;/p&gt;
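&lt;p&gt;Because the separator is unambiguous, the three fields are easy to extract programmatically; here is a minimal Python sketch with the sample handle above:&lt;/p&gt;

```python
# Split a PowerScale volumeHandle into its three fields.
handle = "PowerScaleStaticVolTest=_=_=176=_=_=System"
volume_name, export_id, access_zone = handle.split("=_=_=")
print(volume_name, export_id, access_zone)
# PowerScaleStaticVolTest 176 System
```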

&lt;h3&gt;
  
  
  PowerFlex
&lt;/h3&gt;

&lt;p&gt;The PowerFlex/VxFlexOS &lt;code&gt;volumeHandle&lt;/code&gt; is simply the volume's ID. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    volumeHandle: ecdbd5bd0000000a

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The volume's ID can be found in the VxFlex OS GUI under &lt;em&gt;Frontend &amp;gt; Volumes&lt;/em&gt;. Click on the volume, then on the &lt;em&gt;Show property sheet&lt;/em&gt; icon; the ID will be listed under &lt;em&gt;Identity &amp;gt; ID&lt;/em&gt;. Alternatively, you can check it with &lt;code&gt;scli --query_all_volumes&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The volume ID is missing from the 3.5 Web UI.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Unity
&lt;/h3&gt;

&lt;p&gt;The Unity &lt;code&gt;volumeHandle&lt;/code&gt; is the volume name or filesystem name, the protocol (iSCSI, FC, or NFS), and the &lt;em&gt;CLI ID&lt;/em&gt;, separated by dashes (&lt;code&gt;-&lt;/code&gt;). For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    volumeHandle: csiunity-fde5df688a-iSCSI-fnm00000000000-sv_16
    volumeHandle: csiunity-46d0385efd-FC-fnm00000000000-sv_938
    volumeHandle: csiunity-a7bb9ee130-NFS-fnm00000000000-fs_5

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This information can be found in Unisphere for Unity, under Storage, then Block or File. The CLI ID may not be visible by default; to view it, click &lt;em&gt;Customize your view &amp;gt; Columns&lt;/em&gt; and check the &lt;em&gt;CLI ID&lt;/em&gt; check box.&lt;/p&gt;

&lt;p&gt;The construction of the &lt;code&gt;volumeHandle&lt;/code&gt; is given &lt;a href="https://github.com/dell/csi-unity/blob/067f9044455399124d199f017d66ecb49519fc27/service/utils/emcutils.go#L57"&gt;here&lt;/a&gt;, and you can test that your &lt;code&gt;volumeHandle&lt;/code&gt; is valid on &lt;a href="https://rubular.com/r/Cuf9BnPq7ZCEgp"&gt;rubular&lt;/a&gt; or directly from:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;STORAGECLASS=unity VOLUMEHANDLE=help ./ingest-static-pv.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  &lt;code&gt;ingest-static-pv.sh&lt;/code&gt; ; step by step with PowerMax
&lt;/h1&gt;

&lt;p&gt;The &lt;code&gt;ingest-static-pv.sh&lt;/code&gt; script (available on &lt;a href="https://github.com/coulof/dell-csi-static-pv"&gt;https://github.com/coulof/dell-csi-static-pv&lt;/a&gt;) eases the loading of an existing volume as a PV, or as both a PV and a PVC. It is designed to take the minimal set of fields required by the provisioner to work.&lt;/p&gt;

&lt;p&gt;The example below will go through every step to statically load a PowerMax LUN in Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get the volume details
&lt;/h2&gt;

&lt;p&gt;For the first step you can connect to Unisphere and get the &lt;em&gt;Volume Identifier&lt;/em&gt; (here &lt;code&gt;csi-fdg-pmax-9e954fcdfa&lt;/code&gt;), the &lt;em&gt;Symmetrix ID&lt;/em&gt; (&lt;code&gt;000197900704&lt;/code&gt;), the &lt;em&gt;Symmetrix Vol ID&lt;/em&gt; (&lt;code&gt;0017C&lt;/code&gt;) and the &lt;em&gt;Capacity&lt;/em&gt; (&lt;code&gt;8GB&lt;/code&gt;) which are mandatory to provision the PV. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tgQyMers--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://storage-chaos.io/assets/img/static-pv/01_pmax_volume_details.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tgQyMers--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://storage-chaos.io/assets/img/static-pv/01_pmax_volume_details.gif" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Confirm the Provisioner and StorageClass
&lt;/h2&gt;

&lt;p&gt;You can validate the provisioner for your driver with &lt;code&gt;kubectl get sc -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME PROVISIONER
powermax csi-powermax.dellemc.com
powermax-xfs csi-powermax.dellemc.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, we will use the provisioner matching the &lt;code&gt;powermax&lt;/code&gt; StorageClass name.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prepare the &lt;code&gt;volumeHandle&lt;/code&gt; value
&lt;/h2&gt;

&lt;p&gt;As mentioned in the chapter above, the construction of &lt;code&gt;volumeHandle&lt;/code&gt; is the key piece to map a Kubernetes PersistentVolume to its actual volume in the backend storage array.&lt;/p&gt;

&lt;p&gt;To get help on the &lt;code&gt;volumeHandle&lt;/code&gt; format for the storage system you need to load, you can use &lt;code&gt;STORAGECLASS=powermax VOLUMEHANDLE=help ./ingest-static-pv.sh&lt;/code&gt;, or refer to the PowerMax chapter of this blog.&lt;/p&gt;

&lt;p&gt;For help on all the parameters, run &lt;code&gt;./ingest-static-pv.sh&lt;/code&gt; without any environment variables.&lt;/p&gt;

&lt;p&gt;With the Unisphere values found above, we define the value of &lt;code&gt;VOLUMEHANDLE&lt;/code&gt; as &lt;code&gt;csi-fdg-pmax-9e954fcdfa-000197900704-0017C&lt;/code&gt;.&lt;/p&gt;
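&lt;p&gt;In shell, building the handle is a simple concatenation of the three Unisphere values (variable names below are illustrative):&lt;/p&gt;

```shell
# PowerMax volumeHandle = <Volume Identifier>-<Symmetrix ID>-<Symmetrix Vol ID>
VOL_IDENTIFIER="csi-fdg-pmax-9e954fcdfa"   # Volume Identifier from Unisphere
SYMMETRIX_ID="000197900704"                # Symmetrix ID
SYMMETRIX_VOL_ID="0017C"                   # Symmetrix Vol ID
VOLUMEHANDLE="${VOL_IDENTIFIER}-${SYMMETRIX_ID}-${SYMMETRIX_VOL_ID}"
echo "${VOLUMEHANDLE}"   # prints: csi-fdg-pmax-9e954fcdfa-000197900704-0017C
```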

&lt;blockquote&gt;
&lt;p&gt;Tip: in case you create a volume from scratch and need to generate a PV name with a UUID, you can use the following command: &lt;code&gt;UUID=$(uuidgen) &amp;amp;&amp;amp; echo ${UUID: -10}&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
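&lt;p&gt;Putting the tip into practice, a fresh PV name can be generated as follows (the &lt;code&gt;pmax-&lt;/code&gt; prefix is just the naming convention used in this example, and the fallback covers systems without &lt;code&gt;uuidgen&lt;/code&gt;):&lt;/p&gt;

```shell
# Take the last 10 characters of a fresh UUID as a unique PV name suffix
UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
PVNAME="pmax-${UUID: -10}"
echo "${PVNAME}"   # e.g. pmax-9e954fcdfa
```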

&lt;h2&gt;
  
  
  Dry-run
&lt;/h2&gt;

&lt;p&gt;By default, the script only prints the definitions to stdout.&lt;/p&gt;

&lt;p&gt;In the example below, we decided to create both the PV and the PVC (cf. &lt;code&gt;PVCNAME&lt;/code&gt; variable) in the default namespace.&lt;/p&gt;

&lt;p&gt;To do so we execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;STORAGECLASS=powermax VOLUMEHANDLE=csi-fdg-pmax-9e954fcdfa-000197900704-0017C PVNAME=pmax-9e954fcdfa SIZE=8 PVCNAME=testpvc ./ingest-static-pv.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which results in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pmax-9e954fcdfa
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: powermax
  volumeMode: Filesystem
  csi:
    driver: csi-powermax.dellemc.com
    volumeHandle: csi-fdg-pmax-9e954fcdfa-000197900704-0017C
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
  namespace: default
spec:
  volumeName: pmax-9e954fcdfa
  storageClassName: powermax
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run the script without &lt;code&gt;PVCNAME&lt;/code&gt;, only the PersistentVolume definition will be displayed.&lt;/p&gt;

&lt;h2&gt;
  
  
  And run it for real!
&lt;/h2&gt;

&lt;p&gt;To load the definition in your Kubernetes instance you must specify &lt;code&gt;DRYRUN=false&lt;/code&gt;.&lt;/p&gt;


  


&lt;h2&gt;
  
  
  Map the PVC to a Pod
&lt;/h2&gt;

&lt;p&gt;Now that the PVC is Bound to the PV, we can use it in a Pod.&lt;/p&gt;
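&lt;p&gt;As an illustration, a minimal Pod consuming the &lt;code&gt;testpvc&lt;/code&gt; claim could look like the following (the image and mount path are arbitrary choices, not something mandated by the driver):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpod-script
  namespace: default
spec:
  containers:
    - name: app
      image: busybox              # any image will do for the test
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data        # arbitrary mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: testpvc        # the PVC ingested above
```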

&lt;p&gt;If the CSI driver succeeds in attaching the volume to the Pod, a &lt;code&gt;VolumeAttachment&lt;/code&gt; object will be displayed by &lt;code&gt;kubectl get volumeattachments&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME ATTACHER PV NODE ATTACHED AGE
csi-39d511b28dc4490d47ede7f573d7616ae05addda6db9f2a8b6aecf7649a00722 csi-powermax.dellemc.com pmax-9e954fcdfa lsisfty94.lss.emc.com true 82s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the events attached to the Pod will show a &lt;em&gt;SuccessfulAttachVolume&lt;/em&gt; message like the one below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 28s default-scheduler Successfully assigned default/testpod-script to lsisfty94.lss.emc.com
Normal SuccessfulAttachVolume 18s attachdetach-controller AttachVolume.Attach succeeded for volume "pmax-9e954fcdfa"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;The static provisioning proved to be very useful for migration projects.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/coulof/dell-csi-static-pv"&gt;ingest-static-pv.sh&lt;/a&gt; script is planned to be used in a couple of projects: one for a migration from an existing OpenShift cluster to a new one, and another for a migration of an OpenShift storage backend from Pure Storage to PowerMax.&lt;/p&gt;

&lt;p&gt;Feel free to try it and open tickets on the repo.&lt;/p&gt;

</description>
      <category>k8s</category>
      <category>dell</category>
      <category>csi</category>
    </item>
    <item>
      <title>Home dir automation with Ansible PowerScale / Isilon</title>
      <dc:creator>coulof</dc:creator>
      <pubDate>Sat, 29 Aug 2020 07:00:00 +0000</pubDate>
      <link>https://dev.to/coulof/home-dir-automation-with-ansible-powerscale-isilon-3m9j</link>
      <guid>https://dev.to/coulof/home-dir-automation-with-ansible-powerscale-isilon-3m9j</guid>
      <description>&lt;h1&gt;
  
  
  TL;DR
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/dell/ansible-isilon"&gt;ansible-isilon&lt;/a&gt; eases the admin tasks on Isilon / PowerScale ; watch how cool it can be on &lt;a href="https://www.youtube.com/watch?v=RF5WoeRry1k&amp;amp;list=PLbssOJyyvHuVXyKi0c9Z7NLqBiDiwF1eA&amp;amp;index=3"&gt;Youtube&lt;/a&gt; and how to use it below.OverlayFS is great but has some limitations for some use-cases ; &lt;a href="https://en%0A.wikipedia.org/wiki/UnionFS"&gt;UnionFS&lt;/a&gt; is not dead !&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;The premise&lt;/li&gt;
&lt;li&gt;
The implementation

&lt;ul&gt;
&lt;li&gt;Install Ansible modules for PowerScale/Isilon&lt;/li&gt;
&lt;li&gt;The files&lt;/li&gt;
&lt;li&gt;List usage in Ansible&lt;/li&gt;
&lt;li&gt;UnionFS&lt;/li&gt;
&lt;li&gt;File system removal&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Video&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  The premise
&lt;/h1&gt;

&lt;p&gt;In my old days at the university, I used to work on a &lt;a href="https://en.wikipedia.org/wiki/Sun_Ray"&gt;Sun Ray&lt;/a&gt; thin client (imagine the evolution from &lt;a href="https://en.wikipedia.org/wiki/VT100"&gt;VT100&lt;/a&gt; to &lt;a href="https://www.delltechnologies.com/en-us/solutions/vdi/index.htm"&gt;modern VDI&lt;/a&gt;). Students and teachers were all connected to the same SPARC server to work. Each of us had our own home directory accessible from the NFS server.&lt;/p&gt;

&lt;p&gt;More than 15 years later, enterprises of any size still use home directories on NFS for their users!&lt;/p&gt;

&lt;p&gt;In the following article, we will show how to use &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt; to manage home directories hosted on a PowerScale array in a university. The predicate is that &lt;a href="https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/manage/ad-ds-simplified-administration"&gt;Active Directory&lt;/a&gt; &lt;strong&gt;is the reference&lt;/strong&gt; for the userbase. Each LDAP user is either in the &lt;em&gt;student&lt;/em&gt; group or the &lt;em&gt;teacher&lt;/em&gt; group. Any student or teacher in AD must have a homedir on PowerScale, accessible via NFS exports. Any student who is no longer enrolled, and hence not in AD, will have their homedir removed.&lt;/p&gt;

&lt;p&gt;The Ansible playbook will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;get the list of students and teachers from AD&lt;/li&gt;
&lt;li&gt;create a unix home directory in PowerScale/Isilon for each user&lt;/li&gt;
&lt;li&gt;set different quotas if the user is a student or a teacher&lt;/li&gt;
&lt;li&gt;take daily snapshots of the home directories, with different retention policies for students and teachers&lt;/li&gt;
&lt;li&gt;mount the home directories on a list of Unix servers&lt;/li&gt;
&lt;li&gt;cleanup the home directories of students that are not in the AD anymore&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  The implementation
&lt;/h1&gt;

&lt;p&gt;In this chapter I will not detail all the tasks, as most of them are self-explanatory, but rather describe a few tips &amp;amp; tricks that can be reused in other playbooks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Ansible modules for PowerScale/Isilon
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/dell/ansible-isilon/blob/master/dellemc_ansible/docs/Ansible%20for%20Dell%20EMC%20Isilon%20v1.1%20Product%20Guide.pdf"&gt;Product Guide&lt;/a&gt; documents the module installation and usage (equivalent to &lt;code&gt;ansible-doc dellemc_isilon_[module]&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;This example comes with a &lt;a href="https://github.com/dell/ansible-storage-automation/blob/powerscale/Dockerfile"&gt;Dockerfile&lt;/a&gt; that has the required dependencies to run the playbook.&lt;/p&gt;

&lt;p&gt;As the &lt;a href="https://github.com/dell/ansible-isilon"&gt;ansible-isilon&lt;/a&gt; module is very specific about the Isilon SDK version, the most important line is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN pip3 install isi-sdk-8-1-1 pywinrm &amp;amp;&amp;amp; \
    git clone https://github.com/dell/ansible-isilon.git

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once the image is &lt;code&gt;docker build&lt;/code&gt;-ed, you can execute the playbook with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman run --security-opt label=disable -e ANSIBLE_HOST_KEY_CHECKING=False \
           -v ~/.ssh/id_rsa.emc.pub:/root/.ssh/id_rsa.pub -v ~/.ssh/id_rsa.emc:/root/.ssh/id_rsa \
           -v "$(pwd)"/homedir/:/ansible-isilon \
           -ti docker.io/coulof/ansible-isilon:1.1.0 ansible-playbook \
           -i /ansible-isilon/hosts.ini /ansible-isilon/create_homedir_for_ad_users_in_isilon.yml

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that on my Fedora 32 machine, the &lt;code&gt;--security-opt label=disable&lt;/code&gt; is mandatory to be able to mount the volumes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The files
&lt;/h2&gt;

&lt;p&gt;To use the playbook, you will have to update a couple of files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/dell/ansible-storage-automation/blob/homedir/powerscale/homedir/hosts.ini"&gt;hosts.ini&lt;/a&gt;, which holds the inventory of the Unix servers and the Domain Controller&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/dell/ansible-storage-automation/blob/homedir/powerscale/homedir/credentials-isi.yml"&gt;credentials-isi.yml&lt;/a&gt;, which holds the details of the PowerScale&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/dell/ansible-storage-automation/blob/homedir/powerscale/homedir/create_homedir_for_ad_users_in_isilon.yml"&gt;create_homedir_for_ad_users_in_isilon.yml&lt;/a&gt;, which is the playbook with all the tasks; you have to update the variables &lt;code&gt;base_path&lt;/code&gt; and &lt;code&gt;nfs_server_ip&lt;/code&gt; in several sections to point to the PowerScale path and IP.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  List usage in Ansible
&lt;/h2&gt;

&lt;p&gt;The first tip is in the task &lt;a href="https://github.com/dell/ansible-storage-automation/blob/powerscale/homedir/create_homedir_for_ad_users_in_isilon.yml#L2"&gt;Get userbase from Active Directory&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - set_fact:
        students_list: "{{members_students_group.members | list}}"
        teachers_list: "{{members_teachers_group.members | list}}"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;set_fact&lt;/code&gt; creates two lists of users that will be reused across the playbook. With such a list, we can loop through it and execute the same task for each user, as done in the &lt;a href="https://github.com/dell/ansible-storage-automation/blob/powerscale/homedir/create_homedir_for_ad_users_in_isilon.yml#L49"&gt;FS creation&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      dellemc_isilon_filesystem:
        &amp;lt;&amp;lt;: *isi_connection_vars
        path: "{{base_path}}/students/{{item}}"
        ...
        state: 'present'
      loop: "{{ hostvars['devconad.com']['students_list'] }}"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Or make it easy to find orphan homedirs by playing with list operations when &lt;a href="https://github.com/dell/ansible-storage-automation/blob/homedir/powerscale/homedir/create_homedir_for_ad_users_in_isilon.yml#L167"&gt;listing unix mounted dirs&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: Capture files in path and register
      shell: &amp;gt;
        ls -1 /mnt/nfs_students
      register: students_home_dir
      run_once: True
    - set_fact:
        orphan_home_dirs: "{{students_home_dir.stdout_lines | list | difference(hostvars['devconad.com']['students_list'])}}"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
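&lt;p&gt;The same &lt;code&gt;difference()&lt;/code&gt; logic can be reproduced in plain shell for a quick sanity check (sample data; &lt;code&gt;comm&lt;/code&gt; expects its inputs sorted):&lt;/p&gt;

```shell
# Homedirs currently present vs. students still in AD (illustrative sample data)
printf '%s\n' alice bob carol > /tmp/homedirs.txt
printf '%s\n' alice carol > /tmp/ad_students.txt
# Orphans = homedirs with no matching AD user
comm -23 /tmp/homedirs.txt /tmp/ad_students.txt   # prints: bob
```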



&lt;h2&gt;
  
  
  UnionFS
&lt;/h2&gt;

&lt;p&gt;To stick with the usual &lt;code&gt;/home/&amp;lt;username&amp;gt;&lt;/code&gt; file system hierarchy, I wanted to mount the students and teachers sub-dirs under the same &lt;code&gt;/home&lt;/code&gt; while keeping writes going to the lower dirs, as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/mnt/nfs_teachers/ /mnt/nfs_students/ /home
├── alice ├── carol ├── alice
└── bob └── dan ├── bob
                                                ├── carol
                                                └── dan

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The capability of writing in lowerdirs live is available in &lt;a href="https://en.wikipedia.org/wiki/Aufs"&gt;AuFS&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/UnionFS"&gt;UnionFS&lt;/a&gt; but not in the very popular &lt;a href="https://en.wikipedia.org/wiki/OverlayFS"&gt;OverlayFS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As stated by the &lt;a href="https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt"&gt;Kernel documentation&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Changes to the underlying filesystems while part of a mounted overlay filesystem are not allowed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are plenty of discussions about that topic on &lt;a href="https://unix.stackexchange.com/questions/393930/merge-changes-to-upper-filesystem-to-lower-filesystem-in-linux-overlay-overlayf"&gt;Stackoverflow&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To achieve this, I used &lt;code&gt;unionfs-fuse&lt;/code&gt;, which is available from the Ubuntu repo or a &lt;a href="https://centos.pkgs.org/7/repoforge-x86_64/fuse-unionfs-0.26-1.el7.rf.x86_64.rpm.html"&gt;CentOS third-party repo&lt;/a&gt;. The obvious advantage of Filesystem in Userspace is that I won’t need to recompile the Linux kernel to use it. In &lt;a href="https://github.com/dell/ansible-storage-automation/blob/homedir/powerscale/homedir/create_homedir_for_ad_users_in_isilon.yml#L154"&gt;/etc/fstab&lt;/a&gt; we can use the &lt;code&gt;unionfs#&lt;/code&gt; prefix to mount a FUSE filesystem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        line: "unionfs#/mnt/nfs_students=RW:/mnt/nfs_teachers=RW /home/ fuse cow 0 0"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  File system removal
&lt;/h2&gt;

&lt;p&gt;It is possible to &lt;a href="https://github.com/dell/ansible-storage-automation/blob/homedir/powerscale/homedir/create_homedir_for_ad_users_in_isilon.yml#L182"&gt;remove a PowerScale/Isilon file system&lt;/a&gt; with the following Ansible task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: Remove Filesystem and Quota for missing students from AD
      dellemc_isilon_filesystem:
        path: "{{base_path}}/students/{{item}}"
        quota:
          quota_state: absent
        state: absent

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that by design, the Ansible module will only remove the directory &lt;strong&gt;if it is empty&lt;/strong&gt;. If you need to remove a non-empty directory, you have to issue a REST call directly.&lt;/p&gt;

&lt;h1&gt;
  
  
  Video
&lt;/h1&gt;

&lt;p&gt;For a live demo, check the video here: &lt;a href="https://www.youtube.com/watch?v=RF5WoeRry1k&amp;amp;list=PLbssOJyyvHuVXyKi0c9Z7NLqBiDiwF1eA&amp;amp;index=2"&gt;https://www.youtube.com/watch?v=RF5WoeRry1k&amp;amp;list=PLbssOJyyvHuVXyKi0c9Z7NLqBiDiwF1eA&amp;amp;index=2&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>dell</category>
      <category>isilon</category>
      <category>devops</category>
    </item>
    <item>
      <title>K8s mount PV with SELinux</title>
      <dc:creator>coulof</dc:creator>
      <pubDate>Tue, 14 Jul 2020 07:00:00 +0000</pubDate>
      <link>https://dev.to/coulof/k8s-mount-pv-with-selinux-4nb9</link>
      <guid>https://dev.to/coulof/k8s-mount-pv-with-selinux-4nb9</guid>
      <description>&lt;h1&gt;
  
  
  TL; DR
&lt;/h1&gt;

&lt;p&gt;If you want to use Kubernetes with &lt;a href="https://wiki.gentoo.org/wiki/SELinux"&gt;SELinux&lt;/a&gt; and mount &lt;code&gt;PersistentVolume&lt;/code&gt;, you have to make sure your mounted FS has labels. You can do it with the &lt;a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/storage/persistent-volumes/#mount-options"&gt;mountOptions&lt;/a&gt; &lt;code&gt;-o context="system_u:object_r:container_var_lib_t:s0"&lt;/code&gt; and if your driver doesn’t support it, you can write an SELinux policy like &lt;a href="https://gist.github.com/coulof/9df7c9f3178ecf6706b0c5316ab9de7e"&gt;this one&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  The premise
&lt;/h1&gt;

&lt;p&gt;The problem came from a customer who is using &lt;a href="https://www.mirantis.com/software/docker/docker-enterprise/"&gt;Docker Enterprise&lt;/a&gt; with &lt;a href="https://github.com/dell/csi-vxflexos/"&gt;CSI Driver for VxFlexOS&lt;/a&gt; (now &lt;a href="https://www.delltechnologies.com/en-us/storage/powerflex.htm"&gt;PowerFlex&lt;/a&gt;), and SELinux enforced on the nodes.&lt;/p&gt;

&lt;p&gt;Anytime a Pod tried to write data on the &lt;code&gt;PersistentVolume&lt;/code&gt;, we had &lt;code&gt;Permission denied&lt;/code&gt; error from the OS.&lt;/p&gt;

&lt;p&gt;SELinux relies on file labels (sometimes called contexts) stored in the file extended attributes to apply its policies. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls -lZd /root /home /etc
drwxr-xr-x. root root system_u:object_r:etc_t:s0 /etc
drwxr-xr-x. root root system_u:object_r:home_root_t:s0 /home
dr-xr-x---. root root system_u:object_r:admin_home_t:s0 /root

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;By default, on a newly formatted and mounted FS the files are unlabeled :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkfs.ext4 /dev/loop0
mount -o loop /dev/loop0 /media/xxx
ls -Z
drwxr-xr-x. root root system_u:object_r:unlabeled_t:s0 xxx

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;On the customer setup, it was forbidden to do any action on an unlabeled file.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to get the new FS labeled ?
&lt;/h1&gt;

&lt;p&gt;The typical way to relabel a directory is to use the &lt;code&gt;restorecon&lt;/code&gt; command against the mount point, which will restore the security context and labels. Because we are in a containerized environment with volumes dynamically mounted by Kubernetes, it is not realistic to expect the &lt;code&gt;restorecon&lt;/code&gt; command to be executed each time a volume is mounted.&lt;/p&gt;

&lt;p&gt;Another option, especially suitable for big file systems with many files, is to create a file named &lt;code&gt;.autorelabel&lt;/code&gt; at the root level, which forces a relabel of the whole mountpoint. Slightly better but, still, not feasible in a dynamic environment like ours.&lt;/p&gt;

&lt;p&gt;A better option is to mount the FS with the &lt;a href="https://www.man7.org/linux/man-pages/man8/mount.8.html#FILESYSTEM-INDEPENDENT_MOUNT_OPTIONS"&gt;context option&lt;/a&gt;. That option is my favorite ❣&lt;/p&gt;
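&lt;p&gt;For a driver that honors it, the option can be declared once in the StorageClass so that every volume is mounted with a label. A hypothetical example (the StorageClass name is illustrative; the provisioner name matches the VxFlexOS CSI driver):&lt;/p&gt;

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vxflexos-selinux          # illustrative name
provisioner: csi-vxflexos.dellemc.com
mountOptions:
  # label every file of the volume at mount time
  - context="system_u:object_r:container_var_lib_t:s0"
```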

&lt;p&gt;Unfortunately, the DellEMC CSI drivers don’t have the &lt;code&gt;mountOptions&lt;/code&gt; capability at the time of this post. That feature is on the roadmap, but in the meantime, I needed a plan B.&lt;/p&gt;

&lt;p&gt;The last possibility and the one we implemented is to write a specific policy to allow containers to manipulate unlabeled files and directories.&lt;/p&gt;

&lt;p&gt;Since SELinux follows a least-privilege model (i.e. you can’t do anything unless it is explicitly allowlisted), the challenge was to find all the syscalls a container needs to do its job.&lt;/p&gt;

&lt;p&gt;To get inspiration, I hijacked policies given in the &lt;a href="https://github.com/SELinuxProject/refpolicy/tree/master/policy/modules/apps"&gt;SELinux Project&lt;/a&gt; and came up with this list: &lt;code&gt;class file { create open getattr setattr read write append rename link unlink ioctl lock };&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The full policy is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="n"&gt;module&lt;/span&gt; &lt;span class="n"&gt;vxflexos&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;cni&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;require&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;type&lt;/span&gt; &lt;span class="n"&gt;unlabeled_t&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;type&lt;/span&gt; &lt;span class="n"&gt;container_t&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;class&lt;/span&gt; &lt;span class="n"&gt;file&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;create&lt;/span&gt; &lt;span class="n"&gt;open&lt;/span&gt; &lt;span class="n"&gt;getattr&lt;/span&gt; &lt;span class="n"&gt;setattr&lt;/span&gt; &lt;span class="n"&gt;read&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt; &lt;span class="n"&gt;append&lt;/span&gt; &lt;span class="n"&gt;rename&lt;/span&gt; &lt;span class="n"&gt;link&lt;/span&gt; &lt;span class="n"&gt;unlink&lt;/span&gt; &lt;span class="n"&gt;ioctl&lt;/span&gt; &lt;span class="n"&gt;lock&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="n"&gt;class&lt;/span&gt; &lt;span class="n"&gt;dir&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;add_name&lt;/span&gt; &lt;span class="n"&gt;create&lt;/span&gt; &lt;span class="n"&gt;getattr&lt;/span&gt; &lt;span class="n"&gt;ioctl&lt;/span&gt; &lt;span class="n"&gt;link&lt;/span&gt; &lt;span class="n"&gt;lock&lt;/span&gt; &lt;span class="n"&gt;open&lt;/span&gt; &lt;span class="n"&gt;read&lt;/span&gt; &lt;span class="n"&gt;remove_name&lt;/span&gt; &lt;span class="n"&gt;rename&lt;/span&gt; &lt;span class="n"&gt;reparent&lt;/span&gt; &lt;span class="n"&gt;rmdir&lt;/span&gt; &lt;span class="n"&gt;search&lt;/span&gt; &lt;span class="n"&gt;setattr&lt;/span&gt; &lt;span class="n"&gt;unlink&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cp"&gt;#!!!! WARNING: 'unlabeled_t' is a base type.
&lt;/span&gt;&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="n"&gt;container_t&lt;/span&gt; &lt;span class="n"&gt;unlabeled_t&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;dir&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;create&lt;/span&gt; &lt;span class="n"&gt;open&lt;/span&gt; &lt;span class="n"&gt;getattr&lt;/span&gt; &lt;span class="n"&gt;setattr&lt;/span&gt; &lt;span class="n"&gt;read&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt; &lt;span class="n"&gt;link&lt;/span&gt; &lt;span class="n"&gt;unlink&lt;/span&gt; &lt;span class="n"&gt;rename&lt;/span&gt; &lt;span class="n"&gt;search&lt;/span&gt; &lt;span class="n"&gt;add_name&lt;/span&gt; &lt;span class="n"&gt;remove_name&lt;/span&gt; &lt;span class="n"&gt;reparent&lt;/span&gt; &lt;span class="n"&gt;rmdir&lt;/span&gt; &lt;span class="n"&gt;lock&lt;/span&gt; &lt;span class="n"&gt;ioctl&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="n"&gt;container_t&lt;/span&gt; &lt;span class="n"&gt;unlabeled_t&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;create&lt;/span&gt; &lt;span class="n"&gt;open&lt;/span&gt; &lt;span class="n"&gt;getattr&lt;/span&gt; &lt;span class="n"&gt;setattr&lt;/span&gt; &lt;span class="n"&gt;read&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt; &lt;span class="n"&gt;append&lt;/span&gt; &lt;span class="n"&gt;rename&lt;/span&gt; &lt;span class="n"&gt;link&lt;/span&gt; &lt;span class="n"&gt;unlink&lt;/span&gt; &lt;span class="n"&gt;ioctl&lt;/span&gt; &lt;span class="n"&gt;lock&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To compile it, you will need the SELinux devel package (on Fedora, &lt;code&gt;dnf install selinux-policy-devel.noarch&lt;/code&gt; will do), then compile the policy with &lt;code&gt;make -f /usr/share/selinux/devel/Makefile&lt;/code&gt;. To install the newly compiled policy, run &lt;code&gt;semodule -i vxflexos-cni.pp&lt;/code&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrap-up
&lt;/h1&gt;

&lt;p&gt;The &lt;code&gt;mountOptions&lt;/code&gt; capability is coming to every Dell Technologies CSI driver in the coming months.&lt;/p&gt;

&lt;p&gt;The Gentoo website is a gold mine of information for SELinux! To better understand the issue, I mostly read the pages on &lt;a href="https://wiki.gentoo.org/wiki/SELinux/Labels"&gt;Labels&lt;/a&gt;, the &lt;a href="https://wiki.gentoo.org/wiki/SELinux/Tutorials/Creating_your_own_policy_module_file"&gt;Tutorial to create a policy&lt;/a&gt;, and the &lt;a href="https://github.com/SELinuxProject/refpolicy/tree/master/policy/modules/apps"&gt;SELinux Project Policies&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Pod uses dynamic environment variable</title>
      <dc:creator>coulof</dc:creator>
      <pubDate>Fri, 19 Jun 2020 07:00:00 +0000</pubDate>
      <link>https://dev.to/coulof/pod-uses-dynamic-environment-variable-1224</link>
      <guid>https://dev.to/coulof/pod-uses-dynamic-environment-variable-1224</guid>
      <description>&lt;h1&gt;
  
  
  TL; DR
&lt;/h1&gt;

&lt;p&gt;This post builds on the &lt;a href="///configmap-and-secret.html"&gt;Merge ConfigMap and Secrets&lt;/a&gt; post. It is another use of &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/"&gt;initContainers&lt;/a&gt;, &lt;a href="https://docs.ruby-lang.org/en/master/ERB.html"&gt;templating&lt;/a&gt; and &lt;a href="https://docs.docker.com/engine/reference/builder/#entrypoint"&gt;entrypoint&lt;/a&gt; to customize a container startup.&lt;/p&gt;

&lt;h1&gt;
  
  
  The premise
&lt;/h1&gt;

&lt;p&gt;I worked on a Kubernetes architecture where the hosts of the cluster had several NIC Cards connected to different networks (one to expose the services, one for management, one for storage, etc.).&lt;/p&gt;

&lt;p&gt;When creating and mounting an NFS volume, the &lt;a href="https://github.com/dell/csi-isilon/"&gt;CSI driver for PowerScale/Isilon&lt;/a&gt; passes a client IP that is used to create the export array-side. The driver picks the IP returned by the fieldRef&lt;sup id="fnref:1"&gt;1&lt;/sup&gt; &lt;code&gt;status.hostIP&lt;/code&gt;, as you can see &lt;a href="https://github.com/dell/csi-isilon/blob/master/helm/csi-isilon/templates/node.yaml#L113"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The problem is that this IP is also the one used to serve Kubernetes services (aka the Internal IP displayed by &lt;code&gt;kubectl get node -o wide&lt;/code&gt;). So how do we change that value to use the storage network-related IP?&lt;/p&gt;

&lt;h1&gt;
  
  
  The implementation
&lt;/h1&gt;

&lt;p&gt;In my setup, I know which NIC card connects to which network (in this case &lt;code&gt;ens33&lt;/code&gt;). The patch to the native &lt;a href="https://github.com/dell/csi-isilon/"&gt;csi-isilon&lt;/a&gt; deployment aims to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Have a simple way to get the IP address of a specific NIC card&lt;/li&gt;
&lt;li&gt;Pass that information on the driver startup&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first piece of configuration is to create a custom &lt;a href="https://docs.docker.com/engine/reference/builder/#entrypoint"&gt;entrypoint&lt;/a&gt; that will set the &lt;code&gt;X_NODE_IP&lt;/code&gt; variable with the proper IP address.&lt;/p&gt;

&lt;p&gt;Here I use an &lt;a href="https://docs.ruby-lang.org/en/master/ERB.html"&gt;ERB&lt;/a&gt; template in which I call the &lt;code&gt;ip addr&lt;/code&gt; &lt;a href="https://ruby-doc.org/core-2.7.1/Kernel.html#method-i-60"&gt;command in a subshell&lt;/a&gt; with the &lt;code&gt;%x@ @&lt;/code&gt; syntax, then extract the IP with the &lt;a href="https://ruby-doc.org/core-2.7.1/String.html#method-i-5B-5D"&gt;substring&lt;/a&gt; &lt;code&gt;[/inet\s+(\d+(\.\d+){3})/,1]&lt;/code&gt;. If you use IPv6 or another NIC card, you can easily adjust the expression.&lt;/p&gt;

&lt;p&gt;It is not displayed in the configuration above, but the &lt;code&gt;ip addr&lt;/code&gt; command works because the Isilon Node Pod has access to the host network thanks to &lt;code&gt;hostNetwork: true&lt;/code&gt; in its definition.&lt;/p&gt;


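&lt;p&gt;The template itself is not reproduced here, but the extraction it performs is easy to sketch in plain shell (the sample line mimics the &lt;code&gt;ip addr show ens33&lt;/code&gt; output, so the snippet runs anywhere):&lt;/p&gt;

```shell
# Extract the IPv4 address of a NIC from `ip addr`-style output
sample='inet 192.168.10.42/24 brd 192.168.10.255 scope global ens33'
X_NODE_IP=$(printf '%s\n' "${sample}" | grep -oE 'inet +[0-9]+(\.[0-9]+){3}' | awk '{print $2}')
echo "${X_NODE_IP}"   # prints: 192.168.10.42
```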

&lt;p&gt;The second step is to add an &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/"&gt;initContainers&lt;/a&gt; section to the &lt;a href="https://github.com/dell/csi-isilon/blob/master/helm/csi-isilon/templates/node.yaml"&gt;DaemonSet&lt;/a&gt; to generate the new entrypoint, and then force the driver Pod to use it:&lt;/p&gt;


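&lt;p&gt;As a sketch of what the &lt;code&gt;isilon-ds.patch&lt;/code&gt; used below could contain (container, volume and ConfigMap names are assumptions for illustration, not the actual patch):&lt;/p&gt;

```yaml
spec:
  template:
    spec:
      initContainers:
        - name: generate-entrypoint            # renders the new entrypoint at Pod start
          image: docker.io/library/ruby:alpine
          command: ["sh", "-c", "erb /config/entrypoint.erb > /shared/entrypoint.sh && chmod +x /shared/entrypoint.sh"]
          volumeMounts:
            - name: entrypoint-template        # ConfigMap holding the ERB template
              mountPath: /config
            - name: shared
              mountPath: /shared
      containers:
        - name: driver                         # the csi-isilon node container
          command: ["/shared/entrypoint.sh"]   # force the Pod to use the generated entrypoint
          volumeMounts:
            - name: shared
              mountPath: /shared
      volumes:
        - name: entrypoint-template
          configMap:
            name: nodeip-configmap             # assumed name, created from nodeip-configmap.yaml
        - name: shared
          emptyDir: {}
```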

&lt;p&gt;To apply the patch you can create the config map with :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f nodeip-configmap.yaml

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And patch the Isilon daemon set with :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl patch daemonset isilon-node -n isilon --patch "$(cat isilon-ds.patch)"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Wrap-up
&lt;/h1&gt;

&lt;p&gt;The same tools (ERB, ConfigMap, initContainer, Entrypoint) can be used to tune pretty much any Kubernetes Pod deployment, to customize or add extra capabilities to your Pod startup (integration with &lt;a href="https://www.vaultproject.io/"&gt;Vault&lt;/a&gt;, tweaking program startup, etc.).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The list of fieldRef possible values is documented &lt;a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api"&gt;here&lt;/a&gt;. ↩︎&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>csi</category>
      <category>dell</category>
      <category>isilon</category>
    </item>
    <item>
      <title>Gitlab CI/CD with CSI PowerMax</title>
      <dc:creator>coulof</dc:creator>
      <pubDate>Fri, 29 May 2020 07:00:00 +0000</pubDate>
      <link>https://dev.to/coulof/gitlab-ci-cd-with-csi-powermax-4cil</link>
      <guid>https://dev.to/coulof/gitlab-ci-cd-with-csi-powermax-4cil</guid>
      <description>&lt;h1&gt;
  
  
  TL; DR
&lt;/h1&gt;

&lt;p&gt;Watch the &lt;a href="https://youtu.be/dfKPWqKMuGk"&gt;basic deployment&lt;/a&gt; &amp;amp; &lt;a href="https://youtu.be/6sClYeToXRg"&gt;snapshot-based deployment&lt;/a&gt; videos on Youtube and check the &lt;a href="https://gitlab.com/coulof/todos"&gt;.gitlab-ci-cd.yaml&lt;/a&gt; on Gitlab.&lt;/p&gt;

&lt;h1&gt;
  
  
  The premise
&lt;/h1&gt;

&lt;p&gt;For the first release of the &lt;a href="https://github.com/dell/csi-powermax"&gt;CSI Driver for PowerMax&lt;/a&gt; we wanted to showcase its PV dynamic provisioning and snapshot capabilities.&lt;/p&gt;

&lt;p&gt;To present a realistic scenario, we used GitLab CI/CD, its &lt;a href="https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html#adding-and-removing-kubernetes-clusters"&gt;Kubernetes runner&lt;/a&gt;, and the CSI Driver of course.&lt;/p&gt;

&lt;p&gt;The application itself is a fork of the &lt;a href="https://vuejs.org/v2/examples/todomvc.html"&gt;VueJS example app TODO&lt;/a&gt;, which we modified to use &lt;a href="https://gitlab.com/coulof/todos/-/blob/master/server.rb"&gt;Sinatra as an API provider&lt;/a&gt; and SQLite to store the TODOs.&lt;/p&gt;

&lt;h1&gt;
  
  
  The implementation
&lt;/h1&gt;

&lt;p&gt;The concept is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the master branch corresponds to the latest image and is the production environment&lt;/li&gt;
&lt;li&gt;anytime we push a new branch to GitLab we:

&lt;ul&gt;
&lt;li&gt;build the image&lt;/li&gt;
&lt;li&gt;take a snapshot of PV from production&lt;/li&gt;
&lt;li&gt;create an &lt;a href="https://docs.gitlab.com/ee/ci/environments/"&gt;environment&lt;/a&gt; to access the new app&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;new commits on the branch will keep using their own environment with an independent PV&lt;/li&gt;
&lt;li&gt;on branch merge:

&lt;ul&gt;
&lt;li&gt;the dedicated environment and related PV are deleted&lt;/li&gt;
&lt;li&gt;the production is redeployed with the latest image&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most of the magic on the storage layer happens in the &lt;a href="https://gitlab.com/coulof/todos/-/blob/master/deploy/todos/templates/pvc.yaml"&gt;PVC&lt;/a&gt;, &lt;a href="https://gitlab.com/coulof/todos/-/blob/master/deploy/todos/templates/snap.yaml"&gt;Snap&lt;/a&gt; definitions, and with the Helm variables.&lt;/p&gt;

&lt;p&gt;We can see that, if the branch is the latest, we deploy a dedicated volume (i.e. only the first time). For every other branch we start from a snapshot taken at the time of the first branch creation.&lt;/p&gt;

&lt;p&gt;Under the hood, we will have two independent volumes in PowerMax. For a deeper dive into PowerMax SnapVX (i.e. PowerMax local replicas) you can check this &lt;a href="https://www.dellemc.com/resources/en-us/asset/white-papers/products/storage/h13697-dell-emc-powermax-vmax-all-flash-timefinder-snapvx-local-replication.pdf"&gt;white paper&lt;/a&gt;.&lt;/p&gt;


&lt;pre&gt;400: Invalid request&lt;/pre&gt; 
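&lt;p&gt;As a generic sketch (names and values below are assumptions, not the project's exact template), a PVC restored from a snapshot relies on the &lt;code&gt;dataSource&lt;/code&gt; field:&lt;/p&gt;

```yaml
# Generic sketch: a PVC created from a VolumeSnapshot; every name here is
# an assumption for illustration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: todos-{{ .Values.branch }}   # one claim per branch via a Helm value
spec:
  storageClassName: powermax
  dataSource:
    name: todos-master-snap          # snapshot taken from the production PV
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
```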

&lt;p&gt;To prevent the storage array from being bloated by the project, we also defined a ResourceQuota on the namespace.&lt;/p&gt;


&lt;pre&gt;400: Invalid request&lt;/pre&gt; 
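&lt;p&gt;A minimal sketch of such a quota (the limits are assumptions, not the project's actual values) caps both the number of claims and the total requested storage:&lt;/p&gt;

```yaml
# Sketch of a namespace storage quota; the figures are placeholders.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: todos
spec:
  hard:
    persistentvolumeclaims: "10"
    requests.storage: 100Gi
```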



&lt;p&gt;&lt;strong&gt;Limitations&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In the current version of the CSI driver (v1.2), the snapshot API is v1alpha1 and is not compatible with Kubernetes v1.17 and beyond.&lt;/p&gt;

&lt;p&gt;A snapshot is only accessible from the &lt;a href="https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-cis-volume-snapshot-beta/#importing-an-existing-volume-snapshot-with-kubernetes"&gt;same namespace&lt;/a&gt; and cannot restore a volume on a different namespace.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Other tips
&lt;/h2&gt;

&lt;p&gt;One of the &lt;a href="https://gitlab.com/coulof/todos/-/blob/master/.gitlab-ci.yml#L41"&gt;tricks&lt;/a&gt; is to put the GitLab variable &lt;code&gt;CI_COMMIT_SHORT_SHA&lt;/code&gt; in the Helm template; that way, we make sure it is re-processed and therefore redeployed with the latest build by Helm.&lt;/p&gt;
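&lt;p&gt;For instance, stamping the commit SHA into the Pod template metadata changes the rendered manifest on every build, which forces Kubernetes to roll the Pods. This is a sketch; the annotation key and value name are assumptions.&lt;/p&gt;

```yaml
# Sketch: any change under spec.template triggers a rollout, so injecting
# the commit SHA guarantees a redeploy per build. Names are assumptions.
spec:
  template:
    metadata:
      annotations:
        app.gitlab.com/commit-sha: "{{ .Values.ciCommitShortSha }}"
```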

&lt;p&gt;Finally, to save some time in building the images, I used &lt;a href="https://gitlab.com/coulof/todos/-/blob/master/Dockerfile#L11"&gt;local gems&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Videos
&lt;/h1&gt;

&lt;p&gt;For a live demo, check the videos here:&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>csi</category>
      <category>devops</category>
      <category>powermax</category>
    </item>
    <item>
      <title>Merge ConfigMap and Secrets</title>
      <dc:creator>coulof</dc:creator>
      <pubDate>Thu, 28 May 2020 07:00:00 +0000</pubDate>
      <link>https://dev.to/coulof/merge-configmap-and-secrets-4cje</link>
      <guid>https://dev.to/coulof/merge-configmap-and-secrets-4cje</guid>
      <description>&lt;h1&gt;
  
  
  TL; DR
&lt;/h1&gt;

&lt;p&gt;To use a &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/"&gt;Secret&lt;/a&gt; value within a &lt;a href="https://kubernetes.io/docs/concepts/configuration/configmap/"&gt;ConfigMap&lt;/a&gt; you can use an &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/"&gt;initContainer&lt;/a&gt; to call a &lt;a href="https://en.wikipedia.org/wiki/ERuby"&gt;template engine&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  The premise
&lt;/h1&gt;

&lt;p&gt;In the &lt;a href="//kubernetes-event-monitoring.html"&gt;previous post&lt;/a&gt;, I presented how to use kubernetes-event-exporter with Elasticsearch.&lt;/p&gt;

&lt;p&gt;One of the problems I faced is that the tool doesn’t follow the configuration guidelines from the &lt;a href="https://12factor.net/config"&gt;12-factor app&lt;/a&gt; methodology.&lt;/p&gt;

&lt;p&gt;That is to say, we have to put the credentials in the YAML configuration rather than in an environment variable :-(&lt;/p&gt;

&lt;p&gt;As for Kubernetes, it doesn’t allow us to mix Secret values within a ConfigMap.&lt;/p&gt;

&lt;h1&gt;
  
  
  The solution
&lt;/h1&gt;

&lt;p&gt;To solve that issue, we have 3 components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the &lt;code&gt;Secret&lt;/code&gt; as-is&lt;/li&gt;
&lt;li&gt;the &lt;code&gt;ConfigMap&lt;/code&gt; which will have the configuration as a &lt;strong&gt;template&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;the &lt;code&gt;initContainer&lt;/code&gt; that will merge the two&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  SecretMap
&lt;/h2&gt;

&lt;p&gt;The secret comes from the &lt;a href="https://www.elastic.co/elastic-cloud-kubernetes"&gt;ECK Operator&lt;/a&gt;; we can get it with &lt;code&gt;kubectl get secrets quickstart-es-elastic-user -o yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
data:
  elastic: YU80bnc4NzZWMXBWMThOZThqOFlnOE1r

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  ConfigMap
&lt;/h2&gt;

&lt;p&gt;In the case of &lt;a href="https://github.com/coulof/k8s-events-reporting"&gt;k8s-events-reporting&lt;/a&gt; the ConfigMap looks like this:&lt;/p&gt;


&lt;pre&gt;400: Invalid request&lt;/pre&gt; 

&lt;p&gt;The important piece is the last line: &lt;code&gt;&amp;lt;%= %&amp;gt;&lt;/code&gt; is the ERB syntax to call Ruby code, and &lt;a href="https://ruby-doc.org/core-2.7.0/ENV.html"&gt;ENV&lt;/a&gt; is a hash to access the environment variables.&lt;/p&gt;
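&lt;p&gt;For example, rendering such a template can be reproduced in a few lines of Ruby. The variable name below is illustrative, not the one used by the actual configuration.&lt;/p&gt;

```ruby
require "erb"

# The initContainer receives the Secret as an environment variable...
ENV["ELASTIC_PASSWORD"] = "s3cret"

# ...and the ConfigMap provides the template; <%= %> evaluates Ruby code.
template = "password: <%= ENV['ELASTIC_PASSWORD'] %>"
rendered = ERB.new(template).result
puts rendered
```

The `erb` CLI used in the initContainer does the same thing: it reads the template file and writes the rendered result to stdout.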

&lt;h2&gt;
  
  
  Why &lt;a href="https://docs.ruby-lang.org/en/master/ERB.html"&gt;ERB&lt;/a&gt;?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Because I ♥ Ruby !&lt;/li&gt;
&lt;li&gt;Because the erb command line comes with the &lt;a href="https://hub.docker.com/_/ruby"&gt;ruby docker official image&lt;/a&gt; (there is no need for a custom Dockerfile and therefore no maintenance)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  initContainer
&lt;/h2&gt;

&lt;p&gt;Last but not least, here is the &lt;code&gt;Deployment&lt;/code&gt; with the initContainer &lt;code&gt;config&lt;/code&gt; that will craft the config file from both the &lt;code&gt;Secret&lt;/code&gt; passed as an environment variable and the &lt;code&gt;ConfigMap&lt;/code&gt; template. The &lt;code&gt;event-exporter&lt;/code&gt; container can later use that file.&lt;/p&gt;



&lt;pre&gt;400: Invalid request&lt;/pre&gt; 
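&lt;p&gt;As a sketch of the pattern (container names, image, and mount paths are assumptions, not the exact manifest), the initContainer renders the template into a shared &lt;code&gt;emptyDir&lt;/code&gt; that the main container reads. The Secret name and key match the &lt;code&gt;quickstart-es-elastic-user&lt;/code&gt; secret shown earlier.&lt;/p&gt;

```yaml
# Sketch: initContainer merges Secret (env var) + ConfigMap (template)
# into a config file consumed by the main container. Names are assumptions.
spec:
  template:
    spec:
      initContainers:
        - name: config
          image: ruby:2.7   # provides the `erb` CLI out of the box
          command: ["sh", "-c", "erb /template/config.yaml.erb > /config/config.yaml"]
          env:
            - name: ELASTIC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: quickstart-es-elastic-user
                  key: elastic
          volumeMounts:
            - name: template
              mountPath: /template
            - name: config
              mountPath: /config
      containers:
        - name: event-exporter
          image: opsgenie/kubernetes-event-exporter
          args: ["-conf=/config/config.yaml"]
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: template
          configMap:
            name: event-exporter-template
        - name: config
          emptyDir: {}
```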

</description>
      <category>kubernetes</category>
      <category>secret</category>
      <category>config</category>
      <category>ruby</category>
    </item>
    <item>
      <title>K8s events monitoring</title>
      <dc:creator>coulof</dc:creator>
      <pubDate>Wed, 27 May 2020 07:00:00 +0000</pubDate>
      <link>https://dev.to/coulof/k8s-events-monitoring-317d</link>
      <guid>https://dev.to/coulof/k8s-events-monitoring-317d</guid>
      <description>&lt;h1&gt;
  
  
  TL; DR
&lt;/h1&gt;

&lt;p&gt;For event monitoring, you can use this ready-to-go &lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html"&gt;elastic&lt;/a&gt; + &lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html"&gt;kibana&lt;/a&gt; + &lt;a href="https://github.com/opsgenie/kubernetes-event-exporter"&gt;k8s-event-exporter&lt;/a&gt; stack from &lt;a href="https://github.com/coulof/k8s-events-reporting"&gt;k8s-events-reporting&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  The premise
&lt;/h1&gt;

&lt;p&gt;As part of an internal project at Dell, I needed to measure the time between the different PVC and PV states (Pending, Bound, Mounted, etc.) through their lifecycles.&lt;/p&gt;

&lt;p&gt;For example, how long does it take for a PVC to be bound to its volume? The same question applies between a volume request and the volume being mounted to a Pod.&lt;/p&gt;

&lt;p&gt;The idea is to measure the performance of our different drivers (&lt;a href="https://github.com/dell/csi-powermax"&gt;csi-powermax&lt;/a&gt;, &lt;a href="https://github.com/dell/csi-isilon"&gt;csi-isilon&lt;/a&gt;, &lt;a href="https://github.com/dell/csi-vxflexos"&gt;csi-vxflexos&lt;/a&gt;, &lt;a href="https://github.com/dell/csi-powerstore"&gt;csi-powerstore&lt;/a&gt;) in various scenarios (e.g. one pod needs one hundred volumes, one hundred pods writing to the same volume, evaluate the impact of the volume size, etc.).&lt;/p&gt;

&lt;p&gt;The PV/PVC/Pod status is available with &lt;code&gt;kubectl get pv,pvc,po&lt;/code&gt; and you can track the lifecycle through &lt;a href="https://kubernetes.io/docs/tasks/debug-application-cluster/"&gt;Kubernetes events&lt;/a&gt; with &lt;code&gt;kubectl get events&lt;/code&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to get the events?
&lt;/h1&gt;

&lt;p&gt;A simple search on &lt;em&gt;kubernetes&lt;/em&gt; and &lt;em&gt;monitoring&lt;/em&gt; will return tons of resources from open-source or proprietary projects to collect metrics and logs from Kubernetes. Unfortunately, they mostly focus on metrics collection and container logs.&lt;/p&gt;

&lt;p&gt;For my little project, I just needed to get the details of the events.&lt;/p&gt;

&lt;p&gt;My first idea was to dump the events with &lt;code&gt;kubectl get events -A -o json&lt;/code&gt; and load them somewhere like SQLite or Excel. Being a nerd, I think we can do better.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://eos2git.cec.lab.emc.com/coulof/k8s-events-reporting"&gt;Kubernetes events reporting&lt;/a&gt; stack is composed of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/opsgenie/kubernetes-event-exporter"&gt;kubernetes-event-exporter&lt;/a&gt; for the event collection&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html"&gt;elasticsearch&lt;/a&gt; for the database&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html"&gt;kibana&lt;/a&gt; for the reporting engine&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  kubernetes-event-exporter
&lt;/h2&gt;

&lt;p&gt;This utility developed by OpsGenie will basically dump the events and forward them to different destinations (sinks in their terminology). It has extra features like filtering the types of events or choosing the fields you want to forward.&lt;/p&gt;
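&lt;p&gt;Its configuration is a small YAML file routing events to receivers. The following is a rough sketch of an Elasticsearch sink; field names and values are assumptions based on the exporter's examples and should be verified against the project documentation.&lt;/p&gt;

```yaml
# Rough sketch of a kubernetes-event-exporter config with an Elasticsearch
# sink; treat field names and values as assumptions to verify upstream.
logLevel: error
route:
  routes:
    - match:
        - receiver: "es"
receivers:
  - name: "es"
    elasticsearch:
      hosts:
        - https://quickstart-es-http:9200
      index: kube-events
```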

&lt;p&gt;The &lt;a href="https://github.com/opsgenie/kubernetes-event-exporter/tree/master/deploy"&gt;deployment&lt;/a&gt; of that component has been tweaked and templatized in a helm chart like the rest of the stack.&lt;/p&gt;

&lt;p&gt;The magic to connect that component to elasticsearch will be discussed in the dedicated post for &lt;a href="///configmap-and-secret.html"&gt;ConfigMap and Secrets&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  ElasticSearch &amp;amp; Kibana
&lt;/h2&gt;

&lt;p&gt;The rest of the stack uses well-known components from Elastic. The deployment uses the &lt;a href="https://www.elastic.co/elastic-cloud-kubernetes"&gt;Elastic Cloud on Kubernetes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thanks to the operator framework, we can very easily &lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/1.1/k8s-quickstart.html"&gt;configure and deploy&lt;/a&gt; a secured version of ElasticSearch and Kibana. This is done by &lt;code&gt;install.sh&lt;/code&gt; with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://download.elastic.co/downloads/eck/1.1.1/all-in-one.yaml

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once &lt;code&gt;install.sh&lt;/code&gt; has run successfully and your stack is ready, you can load the Kibana dashboard with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST "localhost:5601/api/kibana/dashboards/import?exclude=index-pattern" -H 'Content-Type: application/json' -d @kibana-dashboard.json

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Of course, you have to adjust the URL &amp;amp; credentials; the result will look like this: &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jFfQEyug--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage-chaos.io/assets/img/kibana_dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jFfQEyug--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage-chaos.io/assets/img/kibana_dashboard.png" alt="Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From now on, you can visualize the Kubernetes events and keep them longer than the 1-hour default of your cluster. The provided dashboard is PV/PVC/Pod centric, but all events are collected so you can tweak and hack the Kibana dashboard ;-)&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrap-up
&lt;/h1&gt;

&lt;p&gt;That &lt;a href="https://github.com/coulof/k8s-events-reporting"&gt;Kubernetes events monitoring stack&lt;/a&gt; has been used and tested for one-shot statistics and analytics. Nevertheless, the components and approach can fit other use-cases like observability, alerting, and monitoring.&lt;/p&gt;

&lt;p&gt;There is more to say about the kubernetes-event-exporter configuration in the context of Kubernetes. It will be addressed in more detail in the &lt;a href="///configmap-and-secret.html"&gt;ConfigMap and Secrets&lt;/a&gt; post.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>monitoring</category>
      <category>kibana</category>
      <category>events</category>
    </item>
  </channel>
</rss>
