<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Camptocamp Infrastructure Solutions</title>
    <description>The latest articles on DEV Community by Camptocamp Infrastructure Solutions (@camptocamp-ops).</description>
    <link>https://dev.to/camptocamp-ops</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2283%2F3522f5a3-13f1-414d-9ece-79157cf56657.png</url>
      <title>DEV Community: Camptocamp Infrastructure Solutions</title>
      <link>https://dev.to/camptocamp-ops</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/camptocamp-ops"/>
    <language>en</language>
    <item>
      <title>Platform Engineering at CNS Munich</title>
      <dc:creator>Xavier Rakotomamonjy</dc:creator>
      <pubDate>Tue, 26 Aug 2025 14:33:58 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/platform-engineering-at-cns-munich-20f6</link>
      <guid>https://dev.to/camptocamp-ops/platform-engineering-at-cns-munich-20f6</guid>
      <description>&lt;p&gt;This year Cloud Native Summit (CNS) in Munich gathered adopters and technologists from open source and cloud native communities.&lt;/p&gt;

&lt;p&gt;Compared to last year's Kubernetes Community Days (KCD), the agenda dedicated many slots to Platform Engineering. This discipline can be seen as a way to fill the gaps between the different stakeholders of an Internal Developer Platform (IDP). Last year I wrote an article on this topic at KCD Munich. This year I would like to share fresh ideas and principles from some of the talks given during the conference.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identifying problems before solutions
&lt;/h2&gt;

&lt;p&gt;Day 1 opened with a presentation on the "Product thinking" approach by &lt;em&gt;Stéphane Di Cesare&lt;/em&gt; (Deutsche Kreditbank) and &lt;em&gt;Dominik Schmidle&lt;/em&gt; (Giant Swarm). They recalled core principles and techniques of Platform Engineering for managing the link between problem space and solution space.&lt;/p&gt;

&lt;p&gt;Day 1 also closed with a panel discussion on Platform Engineering, moderated by &lt;em&gt;Max Körbächer&lt;/em&gt; (Liquid Reply), which focused on the adoption of IDPs by end users and within projects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=g-sZwa52DNE&amp;amp;list=PL54A_DPe8WtDLSA_EA7ETfprpRWzd2yqV&amp;amp;index=1" rel="noopener noreferrer"&gt;Video 1 Platform Thinking&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=MpU-vo4K7BQ" rel="noopener noreferrer"&gt;Video 2 Panel Discussion&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AI workloads on K8s
&lt;/h2&gt;

&lt;p&gt;"Use platform engineering for AI workloads on K8s", that was the motto of the presentation given by &lt;em&gt;Mario-Leander Reimer&lt;/em&gt; (QAware GmbH). He  tackled emerging challenges for deploying Agentic AI workloads and introduced an example of what a platform should cover including quality plane and compliance plane. His talk is part of an open source initiative launched at QAware this year to mature the concepts of a dedicated control plane: the agentic layer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=pg2DKYc9n_o&amp;amp;list=PL54A_DPe8WtDLSA_EA7ETfprpRWzd2yqV&amp;amp;index=24" rel="noopener noreferrer"&gt;Video 3 Architecting and Building a K8s-based AI Platform&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Composable platforms on Kubernetes
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Hossein Salahi&lt;/em&gt; (Liquid Reply) presented a reference implementation to streamline the delivery of services for developers. Among the challenges were linking dev and ops, managing service configuration, orchestrating infrastructure deployment, and managing access. For that purpose, the open source tool Kratix codifies a contract between dev and ops. Combined with Backstage, this abstraction allows services to be packaged and delivered to developers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=aLdgVrnMxcs&amp;amp;list=PL54A_DPe8WtDLSA_EA7ETfprpRWzd2yqV&amp;amp;index=10" rel="noopener noreferrer"&gt;Video 4  Modular Platform Engineering with Kratix and Backstage&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Reduce cognitive load
&lt;/h2&gt;

&lt;p&gt;Interestingly, the problem of cognitive load caused by platform complexity was a recurring theme. &lt;em&gt;Michel Murabito&lt;/em&gt; (Mia Platform) positioned platform engineering as a way to reduce this load, and highlighted essential building blocks and features that can lighten the mental burden.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=iSMk7a62wUc&amp;amp;list=PL54A_DPe8WtDLSA_EA7ETfprpRWzd2yqV&amp;amp;index=39" rel="noopener noreferrer"&gt;Video 5 Creating a smooth Developer Experience&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Involving users as contributors
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Lian Li&lt;/em&gt; (lianmakesthings) told us a story, based on real-world examples, of what can go wrong when building a platform. She advocated recipes that can improve communication among teams in an organisation and explained the benefits of having contributing users in the loop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=4CcNPHT_-nA&amp;amp;t=1597s" rel="noopener noreferrer"&gt;Video 6 Many Cooks, One Platform: Balancing Ownership and Contribution for the Perfect Broth&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Convergence
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Evelyn Osman&lt;/em&gt; (enmacc) discussed the importance of convergence when building a platform, in order to avoid fragmentation across teams and solutions. She shared some important steps that foster innovation and better meet user expectations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=EztpUoi0hgU&amp;amp;list=PL54A_DPe8WtDLSA_EA7ETfprpRWzd2yqV&amp;amp;index=27" rel="noopener noreferrer"&gt;Video 7 Convergence on Platforms&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This is not an exhaustive list of what was discussed, but it made clear that a Platform Engineering approach can help pick the right solution in the open source landscape. Some enablers already provide strong integration with Kubernetes for developing and publishing new services on top of it. And depending on the organization, a shift in practices might also be required to follow the evolution of the platform at build time and runtime. The conference was a great place to exchange with the community on technical and organizational challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kcdmunich.de/schedule/" rel="noopener noreferrer"&gt;Agenda of the conference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/xavier_rakotomamonjy/platform-engineering-at-kcd-munich-25i1"&gt;Platform Engineering at KCD Munich&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://landscape.cncf.io/?group=cnai" rel="noopener noreferrer"&gt;CNCF landscape for AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.agentic-layer.ai/" rel="noopener noreferrer"&gt;Agentic Layer by QAware&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kratix.io" rel="noopener noreferrer"&gt;kratix.io&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cloudnative</category>
      <category>clusters</category>
      <category>kubernetes</category>
      <category>ia</category>
    </item>
    <item>
      <title>Talos Linux: a new standard for on-premises Kubernetes clusters?</title>
      <dc:creator>Gonçalo Heleno</dc:creator>
      <pubDate>Tue, 22 Apr 2025 17:44:21 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/talos-linux-a-new-standard-for-on-premises-kubernetes-clusters-283i</link>
      <guid>https://dev.to/camptocamp-ops/talos-linux-a-new-standard-for-on-premises-kubernetes-clusters-283i</guid>
      <description>&lt;p&gt;A few weeks ago, I was at KubeCon Europe 2025 in London and I had the opportunity to attend a presentation that tackled the monumental challenge of migrating 35 Kubernetes clusters in an air-gapped environment from nodes deployed with a mix of kubeadm/Ansible/Puppet to &lt;a href="https://www.talos.dev" rel="noopener noreferrer"&gt;Talos Linux&lt;/a&gt; nodes deployed using &lt;a href="https://cluster-api.sigs.k8s.io" rel="noopener noreferrer"&gt;Cluster API&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While the presentation was quite interesting (you can get the slides &lt;a href="https://kccnceu2025.sched.com/event/1tx78/day-2000-migration-from-kubeadmansible-to-clusterapitalos-a-swiss-banks-journey-clement-nussbaumer-postfinance" rel="noopener noreferrer"&gt;here&lt;/a&gt; and watch the session recording on &lt;a href="https://www.youtube.com/watch?v=uQ_WN1kuDo0&amp;amp;list=PLj6h78yzYM2MP0QhYFK8HOb8UqgbIkLMc&amp;amp;index=253" rel="noopener noreferrer"&gt;CNCF's YouTube Channel&lt;/a&gt;), I want to dive more into the Talos Linux project and its features.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Talos Linux?
&lt;/h2&gt;

&lt;p&gt;Talos Linux is a modern Linux distribution purpose-built for running Kubernetes clusters. Some noteworthy characteristics of Talos Linux are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Immutable&lt;/strong&gt;: Talos Linux is designed to be immutable and always runs from a SquashFS image. This means that the operating system is read-only and cannot be modified at runtime. This immutability provides a strong security posture and means that there is no need to worry about unintended changes to the operating system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimal&lt;/strong&gt;: Talos Linux is a minimal operating system that includes only the components necessary to run Kubernetes. The entire OS is built from the ground up, and no unnecessary components are included. This minimalism reduces the attack surface and improves performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ephemeral and Declarative&lt;/strong&gt;: Talos Linux nodes are ephemeral, and everything written to disk is reconstructible. It is also declarative, meaning that the desired state of the system is defined in a configuration file and applied through a gRPC API, which is perfect for someone who loves automation and reproducibility, like myself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure&lt;/strong&gt;: As a consequence of its design, Talos Linux provides enhanced security features, ensuring that the system remains robust against various threats.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agnostic&lt;/strong&gt;: Talos Linux is cloud-agnostic, allowing it to run on various cloud providers and on-premises environments without vendor lock-in.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, and I'm quoting the official documentation, &lt;em&gt;"Talos is meant to do one thing: maintain a Kubernetes cluster, and it does this very, very well."&lt;/em&gt;&lt;/p&gt;
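
&lt;p&gt;To give a flavour of this declarative workflow, here is a minimal sketch of bootstrapping a node with &lt;code&gt;talosctl&lt;/code&gt; (the cluster name, endpoint, and node IPs below are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Generate machine configuration files for a new cluster
# (cluster name and API endpoint are examples)
talosctl gen config my-cluster https://10.0.0.1:6443

# Apply the generated control plane configuration to a node
talosctl apply-config --insecure --nodes 10.0.0.2 --file controlplane.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The entire desired state of the node lives in that YAML file; there is no SSH session and no package to install by hand.&lt;/p&gt;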

&lt;h2&gt;
  
  
  My thoughts
&lt;/h2&gt;

&lt;p&gt;I have been following the Talos Linux project for a while, and I was pleasantly surprised to see a Swiss bank like PostFinance at the forefront of adopting modern solutions such as Talos Linux and Cluster API.&lt;/p&gt;

&lt;p&gt;I think Talos Linux will be a key player in the Kubernetes ecosystem, especially for organizations looking for an on-premises solution that's secure, efficient and easy to manage.&lt;/p&gt;

&lt;p&gt;The fact that Talos is declarative and immutable might seem like a drawback at first for someone used to the &lt;em&gt;old ways&lt;/em&gt; of managing infrastructure with Ansible or Puppet, but I believe that this is the future of managing Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;I want my nodes to behave like pods that I can easily create, destroy, and replace. Besides, I don't want to deal with the overhead of managing the operating system. I already have enough to deal with the on-premises infrastructure for the network and storage and the Kubernetes cluster itself, so why not offload the management of the operating system to a purpose-built distribution like Talos?&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.siderolabs.com/platform/saas-for-kubernetes/" rel="noopener noreferrer"&gt;Omni&lt;/a&gt;, Sidero Labs' SaaS platform for managing Talos Linux clusters, I believe Sidero Labs has a good revenue model to continue developing Talos Linux. As fans of open source, we are all aware of the challenges of maintaining a project like Talos Linux, and I believe that having a SaaS platform to manage Talos Linux clusters is a good way to ensure the project's sustainability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Talos Linux vs. other solutions
&lt;/h2&gt;

&lt;p&gt;Red Hat OpenShift is a well-known solution in large enterprises. However, it is more than a Kubernetes distribution: it is a complete platform that bundles many features and components, such as CI/CD tools and monitoring. It is also expensive.&lt;/p&gt;

&lt;p&gt;On the other hand, Talos Linux shines with its simplicity and minimalism, which brings more flexibility and allows teams to choose their solution to complete the platform as they see fit.&lt;/p&gt;

&lt;p&gt;RKE2 is another Kubernetes distribution that focuses on simplicity and security, making it a strong contender for organizations looking for a lightweight solution. However, it still requires an underlying operating system that you need to operate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus
&lt;/h2&gt;

&lt;p&gt;While at KubeCon, I had the opportunity to visit the Sidero Labs' booth and talk to the team behind Talos Linux. I thank the team for a warm welcome and great conversations about the project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8gy1k9411do5c5a3grc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8gy1k9411do5c5a3grc.jpg" alt="Sidero Booth" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Go further
&lt;/h2&gt;

&lt;p&gt;I wanted to keep this blog post short and not too technical, but if you want to learn more about Talos Linux, I recommend checking out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.talos.dev/v1.9/introduction/what-is-talos/" rel="noopener noreferrer"&gt;What is Talos?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/siderolabs/talos" rel="noopener noreferrer"&gt;Talos Linux GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.talos.dev/v1.9/introduction/quickstart/" rel="noopener noreferrer"&gt;Quickstart a Talos Linux cluster with Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.talos.dev/v1.9/learn-more/philosophy/" rel="noopener noreferrer"&gt;Philosophy of Talos Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reddit.com/r/kubernetes/comments/16v0j8x/talos_linux_a_modern_linux_distribution_purpose/" rel="noopener noreferrer"&gt;Interesting Reddit thread with some comments from Sidero Labs' employees&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>taloslinux</category>
      <category>containers</category>
      <category>siderolabs</category>
    </item>
    <item>
      <title>Unveiling the Simplicity of Cluster Mesh for Kubernetes Deployments</title>
      <dc:creator>Federico Sismondi</dc:creator>
      <pubDate>Tue, 16 Apr 2024 13:30:57 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/unveiling-the-simplicity-of-cluster-mesh-for-kubernetes-deployments-1bfc</link>
      <guid>https://dev.to/camptocamp-ops/unveiling-the-simplicity-of-cluster-mesh-for-kubernetes-deployments-1bfc</guid>
      <description>&lt;h2&gt;
  
  
  Unveiling the Simplicity of Cluster Mesh for Kubernetes Deployments
&lt;/h2&gt;

&lt;p&gt;During KubeCon EU 2024, among a crowd of tech enthusiasts and Kubernetes aficionados, Liz Rice, the queen bee, demoed multi-cluster networking: Cluster Mesh 101 with Cilium.&lt;/p&gt;

&lt;p&gt;Here are a few paragraphs summarizing the experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;Cluster Mesh extends the networking plane across multiple clusters. It enables connections among the endpoints of connected clusters. Two notable features are:&lt;br&gt;
i) Network Policy Enforcement as implemented by Cilium still applies under this network setup&lt;br&gt;
ii) Services can load-balance requests across clusters simply by using annotations&lt;/p&gt;
&lt;h3&gt;
  
  
  Networking Adventures Begin
&lt;/h3&gt;

&lt;p&gt;To the surprise of many, Liz announces she will be running the demo over the venue's Wi-Fi.&lt;/p&gt;

&lt;p&gt;The presentation starts with connectivity tests over VPN connections, route propagation validation, checks of the BGP peering, and visualization of routing tables. The setup is one running k8s cluster in GKE and another in EKS. Node-to-node network connectivity is the final objective here, and do not forget that the assigned IP ranges must not overlap. Cilium cannot create a bridge between two cloud providers on its own. No black magic.&lt;/p&gt;

&lt;p&gt;These preliminaries prepare the ground for the demo. The steps for reproducing this setup can be found in the official Cilium documentation &lt;a href="https://docs.cilium.io/en/latest/network/clustermesh/clustermesh/#setting-up-cluster-mesh" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Enter Cilium's Cluster Mesh
&lt;/h3&gt;

&lt;p&gt;With the foundational work laid, it's time to kickstart the demo. Liz demonstrates how to enable all necessary components with &lt;code&gt;cilium clustermesh enable&lt;/code&gt; in both clusters. This triggers the deployment of the clustermesh-apiserver into each cluster, along with the generation of all required certificates. This component goes the extra mile by attempting to auto-detect the optimal service type (such as LoadBalancer), ensuring the Cluster Mesh control plane is efficiently exposed to the other clusters.&lt;/p&gt;

&lt;p&gt;A simple &lt;code&gt;cilium clustermesh connect&lt;/code&gt; builds the bridge between clusters, just as if we had a single network plane between Pods across all clusters.&lt;/p&gt;
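
&lt;p&gt;For reference, the whole sequence fits in a handful of commands (a sketch only; the kubectl context names below are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Enable the Cluster Mesh control plane in both clusters
cilium clustermesh enable --context gke-cluster
cilium clustermesh enable --context eks-cluster

# Connect the two clusters (certificates and tunnels are set up automatically)
cilium clustermesh connect --context gke-cluster --destination-context eks-cluster

# Verify the mesh from either side
cilium clustermesh status --context gke-cluster
&lt;/code&gt;&lt;/pre&gt;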

&lt;p&gt;Now &lt;code&gt;cilium clustermesh status&lt;/code&gt; echoes:&lt;/p&gt;

&lt;p&gt;✅ All 2 nodes are connected to all clusters&lt;br&gt;
🔌 Cluster Connections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cilium-cli-ci-multicluster-2-168: 2/2 configured, 2/2 connected&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Traffic load balancing and failover in the hive:
&lt;/h3&gt;

&lt;p&gt;So far, the demo has shown and validated the piping between Pods: we can successfully communicate with a Pod on another cluster using its IP address, for example. But this wouldn't be very practical in a real-world scenario, and no DNS service can help us fetch this dynamic IP. This is, in fact, the raison d'être of the k8s Service object.&lt;/p&gt;

&lt;p&gt;What Cilium proposes is extending native Service resources into a cluster-mesh-aware Service using annotations.&lt;/p&gt;

&lt;p&gt;Cilium provides a pragmatic solution through global services with auto discovery and failover mechanisms. &lt;/p&gt;

&lt;p&gt;Adding &lt;code&gt;service.cilium.io/global=true&lt;/code&gt; makes a service global, which means it matches Pods across clusters. In other words, we extend the service's backends to include Pods in remote clusters, and the service's traffic is then balanced across clusters.&lt;/p&gt;

&lt;p&gt;Questions? This is probably better explained by the Cilium documentation &lt;a href="https://docs.cilium.io/en/latest/network/clustermesh/services/#load-balancing-with-global-services" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rebel-base&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;service.cilium.io/global&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rebel-base&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;service.cilium.io/affinity=local|remote&lt;/code&gt; we can fine-tune the global service to prefer local or remote Pods. With &lt;code&gt;local&lt;/code&gt;, we designate our local cluster as the primary destination, while the remote Pods serve as a backup.&lt;/p&gt;

&lt;p&gt;The following represents a service, which is global and prefers using endpoints found locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rebel-base&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;service.cilium.io/global&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
    &lt;span class="na"&gt;service.cilium.io/affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;local"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rebel-base&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Should the primary cluster encounter any issues or downtime, traffic seamlessly shifts to the backup cluster, ensuring continuity of service.&lt;br&gt;
In essence, Cilium offers a straightforward approach to traffic management, enhancing reliability by providing a failover mechanism that ensures service accessibility remains intact in the face of pod disruptions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;A few interesting use cases arise when using Cluster Mesh. The one we focused on in this article leverages Pods in remote clusters as a service's backend, providing a failover mechanism. We can also think of using a &lt;code&gt;global service&lt;/code&gt; to move workloads around, for example to lower compute costs by shifting to cheaper regions.&lt;/p&gt;

&lt;p&gt;So there you have it, folks. A whirlwind tour of multicluster networking traffic management with Cilium, served up with a dose of honey.&lt;br&gt;
Who knew cluster meshing would be that simple? &lt;/p&gt;

&lt;h3&gt;
  
  
  Contact us
&lt;/h3&gt;

&lt;p&gt;Need a demo, or want to dig into some specifics of Cilium?&lt;br&gt;
Ping us here: &lt;a href="https://camptocamp.com/consulting" rel="noopener noreferrer"&gt;https://camptocamp.com/consulting&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cilium</category>
      <category>clustermesh</category>
      <category>ebpf</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Beyond the Buzz: Embracing the Magic of eBPF in Kubernetes</title>
      <dc:creator>Julien Acroute</dc:creator>
      <pubDate>Wed, 10 Apr 2024 14:02:32 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/beyond-the-buzz-embracing-the-magic-of-ebpf-in-kubernetes-45md</link>
      <guid>https://dev.to/camptocamp-ops/beyond-the-buzz-embracing-the-magic-of-ebpf-in-kubernetes-45md</guid>
      <description>&lt;h2&gt;
  
  
  Beyond the Buzz: Embracing the Magic of eBPF in Kubernetes
&lt;/h2&gt;

&lt;p&gt;In a time where the buzz around Artificial Intelligence (AI) seems to overshadow everything else, this year's KubeCon Europe offered a refreshing perspective. While AI continues to be a hot topic, some in the Kubernetes community are starting to feel a bit tired of it. With all the hype and uncertainty surrounding AI, another hero has emerged: eBPF (Extended Berkeley Packet Filter).&lt;/p&gt;

&lt;h3&gt;
  
  
  AI: A Distant Shining Horizon
&lt;/h3&gt;

&lt;p&gt;AI has certainly added some excitement to discussions about cloud-native technologies, from automating cluster troubleshooting to hosting AI on Kubernetes. But not everyone is fully on board. While AI has proved to be of great assistance (e.g. suggesting, fixing, and reviewing code written by humans), some folks worry that too much focus on AI might distract from more practical, ready-to-implement advancements. The feeling is clear: while AI offers lots of possibilities, the horizon is a bit uncertain and overly hyped.&lt;/p&gt;

&lt;h3&gt;
  
  
  eBPF: Here and Now in Kubernetes Innovation
&lt;/h3&gt;

&lt;p&gt;In contrast, eBPF is all about practical innovation. It's a technology that delivers real results, right here, right now. We need solutions for network security, observability, and performance today, and that's where eBPF shines. Unlike the abstract promises of AI, eBPF offers concrete tools and methods to improve Kubernetes environments immediately. For example, when it comes to network security, eBPF-powered tools like Cilium can do the job without needing the complexity of AI.&lt;br&gt;
This shift towards valuing what's immediately useful over what's exciting but distant was noticeable at KubeCon. As we dive deeper into what eBPF can do, it becomes clear why this technology has captured the attention of the Kubernetes community.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Way Forward with eBPF
&lt;/h3&gt;

&lt;p&gt;By embracing eBPF, the Kubernetes community isn't just adopting new tools; it's championing a philosophy of practical, tangible progress. As we explore the latest eBPF innovations – from better security to revolutionary observability tools – we see the benefits eBPF brings to Kubernetes. It's a journey grounded in reality, offering not just a vision of the future, but a roadmap to get there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Redefining Efficiency: eBPF's Radical Return to Linux's Roots
&lt;/h3&gt;

&lt;p&gt;eBPF represents a refreshing departure from the conventional approach of stacking layer upon layer in pursuit of functionality, often resulting in bloated, resource-intensive systems, even for simple tasks like rendering a webpage or routing network packets between nodes or clusters. With eBPF, we're venturing back into the depths of the Linux system, where innovation meets efficiency. Here, we witness a paradigm shift—a departure from the status quo. The results speak for themselves: a &lt;a href="https://isovalent.com/blog/post/tetragon-release-10/#process-execution-tracking-at-less2percent-overhead"&gt;nearly negligible overhead&lt;/a&gt; and remarkable responsiveness. In fact, eBPF brings us so close to real-time processing that it's revolutionizing how we think about performance in cloud-native environments. We're not just optimizing; we're redefining what's possible, and eBPF is leading the charge.&lt;/p&gt;

&lt;h3&gt;
  
  
  eBPF talks from KubeCon 2024
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cilium: Connecting, Observing, and Securing Service Mesh and Beyond with eBPF&lt;br&gt;
Speakers: Liz Rice, Christine Kim, Nico Meisenzahl, Vlad Ungureanu&lt;br&gt;
&lt;a href="https://youtu.be/wq1TxZw1AaY?si=JTyhE333QfsGht0T"&gt;https://youtu.be/wq1TxZw1AaY?si=JTyhE333QfsGht0T&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dealing with eBPF’s Observability Data Deluge&lt;br&gt;
Speaker: Anna Kapuścińska&lt;br&gt;
&lt;a href="https://youtu.be/yWB8n_e4N14?si=OyMJEKzbxS5zxA5P"&gt;https://youtu.be/yWB8n_e4N14?si=OyMJEKzbxS5zxA5P&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unlock Energy Consumption in the Cloud with eBPF&lt;br&gt;
Speaker: Leonard Pahlke&lt;br&gt;
&lt;a href="https://youtu.be/lW9pZoKRJVs?si=rX5CQMaFuZBm8bBT"&gt;https://youtu.be/lW9pZoKRJVs?si=rX5CQMaFuZBm8bBT&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fast and Efficient Log Processing with Wasm and eBPF&lt;br&gt;
Speaker: Michael Yuan&lt;br&gt;
&lt;a href="https://youtu.be/4u7nUpZxr3g?si=pkcpoEwOeDaH5HcE"&gt;https://youtu.be/4u7nUpZxr3g?si=pkcpoEwOeDaH5HcE&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No 'Soup' for You! Enforcing Network Policies for Host Processes via eBPF&lt;br&gt;
Speaker: Vinay Kulkarni&lt;br&gt;
&lt;a href="https://youtu.be/AWAf3H4Qwq8?si=qVqQfWb3J_905BCJ"&gt;https://youtu.be/AWAf3H4Qwq8?si=qVqQfWb3J_905BCJ&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;eBPF’s Abilities and Limitations: The Truth&lt;br&gt;
Speakers: Liz Rice &amp;amp; John Fastabend&lt;br&gt;
&lt;a href="https://youtu.be/tClsqnZMN6I?si=TyMFTMk4Q45K6T2v"&gt;https://youtu.be/tClsqnZMN6I?si=TyMFTMk4Q45K6T2v&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ebpf</category>
      <category>ia</category>
    </item>
    <item>
      <title>Use Tetragon to Limit Network Usage for a set of Binary</title>
      <dc:creator>Julien Acroute</dc:creator>
      <pubDate>Thu, 03 Aug 2023 07:39:02 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/use-tetragon-to-limit-network-usage-for-a-set-of-binary-2l4e</link>
      <guid>https://dev.to/camptocamp-ops/use-tetragon-to-limit-network-usage-for-a-set-of-binary-2l4e</guid>
      <description>&lt;h2&gt;
  
  
  A matter of trust
&lt;/h2&gt;

&lt;p&gt;Much interesting software comes from the community, and much of it is distributed through the operating system's package manager. For the rest, you can download binaries from GitHub release pages, or use snap or Homebrew, to cite a few options. But these installation methods bypass the security team that works to harden your operating system. By using them, you implicitly trust that the author is not distributing malware or implementing backdoors. How many tools did you install by hand? Do you really trust all of them? Trust is very important, yet it would be nice to limit the capabilities of a set of binaries that you don't fully trust. In this blog post, we will use &lt;a href="https://github.com/cilium/tetragon"&gt;Tetragon&lt;/a&gt; to forbid network usage for tools that don't need it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Goal
&lt;/h2&gt;

&lt;p&gt;We will separate locally installed tools into two families: tools that need network access go into one directory, and tools that don't need internet access go into another. &lt;/p&gt;

&lt;p&gt;For example the following tools use network sockets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/derailed/k9s"&gt;k9s&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/fr/docs/tasks/tools/install-kubectl/"&gt;kubectl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://helm.sh/"&gt;helm&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;while these do not: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://jqlang.github.io/jq/"&gt;jq&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mikefarah/yq"&gt;yq&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jless.io/"&gt;jless&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will move the latter tools into a dedicated directory, &lt;code&gt;~/bin-no-network/&lt;/code&gt;, and use &lt;a href="https://github.com/cilium/tetragon"&gt;Tetragon&lt;/a&gt; to inject a policy into the kernel as an &lt;a href="https://ebpf.io/"&gt;eBPF&lt;/a&gt; program that kills any binary located in &lt;code&gt;~/bin-no-network/&lt;/code&gt; that tries to open a network socket.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tetragon installation
&lt;/h2&gt;

&lt;p&gt;Tetragon is "Kubernetes-aware" but it can also be used outside Kubernetes on a regular workstation. You can deploy Tetragon as a container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;--name&lt;/span&gt; tetragon &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;                 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--pid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;host &lt;span class="nt"&gt;--cgroupns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;host &lt;span class="nt"&gt;--privileged&lt;/span&gt;          &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf &lt;span class="se"&gt;\&lt;/span&gt;
    quay.io/cilium/tetragon:v0.10.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because Tetragon needs to inject code into the kernel, we have to bypass most of Docker's isolation mechanisms. We only use Docker as a packaging format, to avoid installing system libraries and the binary ourselves. So you will have to trust ;-) this Tetragon binary, because it runs like a root process on your workstation. Note the mount point &lt;code&gt;/sys/kernel/btf/vmlinux&lt;/code&gt;: it exposes the kernel's BTF type information, which eBPF programs use to adapt to the running kernel.&lt;/p&gt;

&lt;h2&gt;
  
  
  eBPF Principle
&lt;/h2&gt;

&lt;p&gt;When using eBPF, applications are always composed of two parts: one deployed in the kernel as an eBPF program, and another running as a regular program in "user space". The user space program (Tetragon) injects a small eBPF program into the kernel to hook some kernel functions or system calls. Every application that goes through one of these hooks then triggers the eBPF program, which can observe or modify the result of the call. A specific data structure, a ring buffer, is created in the kernel; the eBPF program uses it to store the data it finds interesting. Finally, the user space program (Tetragon), which has read-only access to this data structure, retrieves the information gathered by the eBPF program.&lt;br&gt;
One good point of this architecture is that rule evaluation happens within the kernel and does not require communication with the user space program. The only drawback is that information observed by the eBPF program can be overwritten by new incoming information if the user space part does not read it fast enough; this is how ring buffers are designed.&lt;/p&gt;
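The overwrite behavior can be pictured with a toy shell pipeline (this is just an analogy, not Tetragon code): a bounded buffer keeps only the newest entries when the reader lags behind the writer.

```shell
# Toy illustration (not Tetragon): a bounded buffer retains only the
# newest entries, like a kernel ring buffer with a slow reader.
printf '%s\n' event1 event2 event3 event4 event5 | tail -n 3
```

Here `tail -n 3` plays the role of a 3-slot ring buffer: the two oldest events are lost.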
&lt;h2&gt;
  
  
  Writing a Policy
&lt;/h2&gt;

&lt;p&gt;Our goal is to forbid network usage coming from binaries in a specific folder.&lt;br&gt;
For this we need a CLI to interact with Tetragon. We can use the &lt;code&gt;tetra&lt;/code&gt; binary shipped in the Docker container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;tetra&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"docker exec -ti tetragon tetra"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;tetra version
server version: v0.10.0
cli version: v0.10.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even though we are not using Kubernetes, we still need to write some YAML, as Tetragon only understands TracingPolicy objects. A TracingPolicy is a Kubernetes custom resource that installs hooks in the kernel and attaches actions to them.&lt;/p&gt;

&lt;p&gt;Let's start with a TracingPolicy from the &lt;a href="https://tetragon.cilium.io/docs/use-cases/network-observability/"&gt;Tetragon documentation&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TracingPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;connect"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kprobes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;call&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tcp_connect"&lt;/span&gt;
    &lt;span class="na"&gt;syscall&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sock"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;call&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tcp_close"&lt;/span&gt;
    &lt;span class="na"&gt;syscall&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sock"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;call&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tcp_sendmsg"&lt;/span&gt;
    &lt;span class="na"&gt;syscall&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sock"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;int&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This TracingPolicy will just observe network events:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;socket creation (tcp_connect)&lt;/li&gt;
&lt;li&gt;traffic in the socket (tcp_sendmsg)&lt;/li&gt;
&lt;li&gt;socket close (tcp_close)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we need to add: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an action when this kind of event is detected&lt;/li&gt;
&lt;li&gt;a filter so that the action only applies to events generated by a binary located in our specific directory &lt;code&gt;~/bin-no-network/&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We need to add a &lt;code&gt;selectors&lt;/code&gt; section in the TracingPolicy and use the &lt;code&gt;matchBinaries&lt;/code&gt; selector to apply this policy only to binaries in the &lt;code&gt;~/bin-no-network/&lt;/code&gt; folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;selectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchBinaries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;In"&lt;/span&gt;
        &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/home/jacroute/bin-no-network/curl"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/home/jacroute/bin-no-network/jq"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unfortunately, only the 'In' and 'NotIn' operators are &lt;a href="https://github.com/cilium/tetragon/pull/686/files#diff-0b6c66cf370068fe2e441466d6124582732e8b67dd0daddccf72c7eb2bef16d4R994-R1000"&gt;implemented&lt;/a&gt; for the &lt;code&gt;matchBinaries&lt;/code&gt; selector. &lt;a href="https://github.com/cilium/tetragon/issues/1278"&gt;We cannot use the 'Prefix' operator&lt;/a&gt; like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;selectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchBinaries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Prefix"&lt;/span&gt;
        &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/home/jacroute/bin-no-network/"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So the TracingPolicy should look like this (note the YAML anchor &lt;code&gt;&amp;amp;selector&lt;/code&gt;, reused as &lt;code&gt;*selector&lt;/code&gt; on the other kprobes to avoid repeating the whole selector):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TracingPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;connect"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kprobes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;call&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tcp_connect"&lt;/span&gt;
    &lt;span class="na"&gt;syscall&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sock"&lt;/span&gt;
    &lt;span class="na"&gt;selectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;selector&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchBinaries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;In"&lt;/span&gt;
        &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/home/jacroute/bin-no-network/curl"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/home/jacroute/bin-no-network/jq"&lt;/span&gt;
      &lt;span class="na"&gt;matchActions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Sigkill&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;call&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tcp_close"&lt;/span&gt;
    &lt;span class="na"&gt;syscall&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sock"&lt;/span&gt;
    &lt;span class="na"&gt;selectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*selector&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;call&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tcp_sendmsg"&lt;/span&gt;
    &lt;span class="na"&gt;syscall&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sock"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;int&lt;/span&gt;
    &lt;span class="na"&gt;selectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*selector&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy the TracingPolicy
&lt;/h2&gt;

&lt;p&gt;We first need to transfer the file to the container. Assuming you stored the TracingPolicy in a file named &lt;code&gt;bin-no-network.yaml&lt;/code&gt;, you can transfer the policy using the &lt;code&gt;docker cp&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker &lt;span class="nb"&gt;cp &lt;/span&gt;bin-no-network.yaml tetragon:/tmp/bin-no-network.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we can use the &lt;code&gt;tetra&lt;/code&gt; CLI to deploy this policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tetra tracingpolicy add /tmp/bin-no-network.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing the TracingPolicy
&lt;/h2&gt;

&lt;p&gt;Now we can test whether the policy blocks network access for the two binaries listed in the policy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; ~/bin-no-network/
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; /usr/bin/curl ~/bin-no-network/
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; /usr/bin/jq ~/bin-no-network/
&lt;span class="nv"&gt;$ &lt;/span&gt;~/bin-no-network/curl google.fr
&lt;span class="o"&gt;[&lt;/span&gt;1] 122677 killed ~/bin-no-network/curl google.fr
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"{}"&lt;/span&gt; | ~/bin-no-network/jq
&lt;span class="o"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;curl&lt;/code&gt; tried to open a socket, so the process was killed. &lt;code&gt;jq&lt;/code&gt; never makes this kind of call and was left untouched.&lt;/p&gt;
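As a side note, you can tell that the process was killed by a signal rather than failing on its own by looking at the exit status: a process terminated by SIGKILL exits with status 128 + 9 = 137. A minimal illustration, independent of Tetragon:

```shell
# A process terminated by SIGKILL reports exit status 128 + 9 = 137;
# this is what the shell would show after Tetragon kills curl.
sh -c 'kill -KILL $$'
echo $?
```

This prints `137`, the same status you would see after the killed `curl` invocation above.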

&lt;h2&gt;
  
  
  Automatically populating the Policy
&lt;/h2&gt;

&lt;p&gt;We can build a shell script to keep the list of binaries in the policy synchronized with the content of the &lt;code&gt;~/bin-no-network/&lt;/code&gt; directory. Using &lt;code&gt;find&lt;/code&gt;, we can list the binaries in this folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;find ~/bin-no-network/ &lt;span class="nt"&gt;-executable&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; f
/home/jacroute/bin-no-network/curl
/home/jacroute/bin-no-network/jq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then with &lt;code&gt;awk&lt;/code&gt; we can add the indentation and quotes needed to embed the list in the YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;find ~/bin-no-network/ &lt;span class="nt"&gt;-executable&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; f | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{ print "          - \""$0"\""}'&lt;/span&gt;
          - &lt;span class="s2"&gt;"/home/jacroute/bin-no-network/curl"&lt;/span&gt;
          - &lt;span class="s2"&gt;"/home/jacroute/bin-no-network/jq"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we integrate this into the YAML and inject the policy into the kernel with the following script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

docker &lt;span class="nb"&gt;exec &lt;/span&gt;tetragon tetra sensors &lt;span class="nb"&gt;rm &lt;/span&gt;connect

&lt;span class="nv"&gt;policy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;mktemp&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /dev/shm&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; &amp;gt; &lt;/span&gt;&lt;span class="nv"&gt;$policy&lt;/span&gt;&lt;span class="sh"&gt;
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "connect"
spec:
  kprobes:
  - call: "tcp_connect"
    syscall: false
    args:
    - index: 0
      type: "sock"
    selectors: &amp;amp;selector
    - matchBinaries:
      - operator: "In"
        values:
&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;find ~/bin-no-network/ &lt;span class="nt"&gt;-executable&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; f | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{ print "          - \""$0"\""}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;
      matchActions:
      - action: Sigkill
  - call: "tcp_close"
    syscall: false
    args:
    - index: 0
      type: "sock"
    selectors: *selector
  - call: "tcp_sendmsg"
    syscall: false
    args:
    - index: 0
      type: "sock"
    - index: 2
      type: int
    selectors: *selector
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;docker &lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nv"&gt;$policy&lt;/span&gt; tetragon:/tmp/policy.yaml
docker &lt;span class="nb"&gt;exec &lt;/span&gt;tetragon tetra tracingpolicy add /tmp/policy.yaml
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nv"&gt;$policy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Building trust in your relationship with your workstation XD
&lt;/h2&gt;

&lt;p&gt;After a first date with Tetragon at the &lt;a href="https://www.kcdfrance.fr/"&gt;Kubernetes Community Days&lt;/a&gt; in 2023, I was seduced by the way it is implemented. Most of the time, new features are built on top of existing ones, creating a complex multi-layered cathedral; cathedrals are perfect for a wedding, but I prefer simplicity, and Tetragon keeps things simple. &lt;/p&gt;

&lt;p&gt;Using eBPF to enforce "security" seems very efficient from a performance point of view, and bypassing Tetragon through denial-of-service attacks is unlikely. For example, the SIGKILL action triggered by Tetragon's eBPF program is performed synchronously, in the kernel rather than in user space, which makes it hard to bypass. In other words, the security mechanism is fully and autonomously implemented at the kernel level.&lt;/p&gt;

&lt;p&gt;This mechanism proves to be effective in blocking network activity and building trust in our beloved workstation even when running potentially malicious binaries.&lt;/p&gt;

&lt;p&gt;Using Tetragon in a Kubernetes context is the next step. The plan is to observe the behavior of an application in a development environment (files read/written, commands executed) and generate a profile. Then, promote this profile from development to production, and enforce an &lt;em&gt;allow-only&lt;/em&gt; behavior using Tetragon.&lt;/p&gt;

</description>
      <category>tetragon</category>
      <category>security</category>
      <category>ebpf</category>
      <category>workstations</category>
    </item>
    <item>
      <title>Using ArgoCD Pull Request Generator to review application modifications</title>
      <dc:creator>Julien Acroute</dc:creator>
      <pubDate>Tue, 11 Apr 2023 14:29:12 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/using-argocd-pull-request-generator-to-review-application-modifications-236e</link>
      <guid>https://dev.to/camptocamp-ops/using-argocd-pull-request-generator-to-review-application-modifications-236e</guid>
      <description>&lt;p&gt;As a developer, when modifications are pushed to a feature branch, you and your team want to test this new feature. If you have the chance to work with a stateless application, you can deploy another instance of the application with modifications from the feature branch.&lt;/p&gt;

&lt;p&gt;An interesting feature of &lt;a href="https://github.com/argoproj/argo-cd"&gt;ArgoCD&lt;/a&gt; is the &lt;a href="https://argocd-applicationset.readthedocs.io/en/stable/Generators-Pull-Request/"&gt;Pull Request Generator&lt;/a&gt;, a generator for &lt;a href="https://argocd-applicationset.readthedocs.io/en/stable/"&gt;ApplicationSets&lt;/a&gt;. An ApplicationSet is a template of an ArgoCD &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/application.yaml"&gt;Application&lt;/a&gt; associated with a generator. A generator can be, for example, the &lt;a href="https://argocd-applicationset.readthedocs.io/en/stable/Generators-Git/#git-generator-directories"&gt;directory generator&lt;/a&gt;: an Application is created for every sub-folder. There is also the &lt;a href="https://argocd-applicationset.readthedocs.io/en/stable/Generators-Cluster/"&gt;Cluster generator&lt;/a&gt;, which deploys the same Application to every cluster managed by ArgoCD.&lt;/p&gt;

&lt;p&gt;The syntax for the Pull Request generator is quite simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ApplicationSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapps&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;generators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;pullRequest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;github&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;camptocamp&lt;/span&gt;
        &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myrepository&lt;/span&gt;
        &lt;span class="na"&gt;tokenRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github-token&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;token&lt;/span&gt;
        &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before testing this feature, what do we expect?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A new ArgoCD Application is created when a pull request is created with a "deploy" label&lt;/li&gt;
&lt;li&gt;In the template of the Application we can use metadata of the pull request: ID, title, description, labels, source and target branch name, commit ref, …&lt;/li&gt;
&lt;li&gt;A comment with the available URLs is added to the PR when the Application is deployed and Synced&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we will see how to implement this ;-)&lt;/p&gt;

&lt;h1&gt;
  
  
  Workflow before Implementation
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Source Code Repository
&lt;/h2&gt;

&lt;p&gt;Let's start with a simple application repository: &lt;code&gt;git@github.com:camptocamp/frontend.git&lt;/code&gt;&lt;br&gt;
Let's consider that this repository has a GitHub Action that builds a container image when a pull request is opened. The image is tagged with the short commit hash, and with the concatenation of the branch name and the short commit hash. For example, if the feature branch is named &lt;code&gt;update_lib&lt;/code&gt; and its last commit is &lt;code&gt;4a9b29e&lt;/code&gt;, the following tags will be generated: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;4a9b29e&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;update_lib-4a9b29e&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
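As a sketch, such a tagging step boils down to a couple of lines of shell. The branch name and hash below are the example values above; in a real workflow the hash would come from something like `git rev-parse --short HEAD` (an assumption about the CI setup):

```shell
# Sketch of the tag computation done by the CI pipeline (example values)
branch=update_lib
sha=4a9b29e        # in CI: sha=$(git rev-parse --short HEAD)
echo "$sha"
echo "${branch}-${sha}"
```

This prints the two tags: `4a9b29e` and `update_lib-4a9b29e`.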
&lt;h2&gt;
  
  
  Deployment Repository
&lt;/h2&gt;

&lt;p&gt;There is another git repository, &lt;code&gt;git@github.com:camptocamp/argocd-project-foo-apps.git&lt;/code&gt;, that describes what needs to be deployed in each Kubernetes cluster (dev, int, …). In this repository, we have one folder per environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apps
├── dev
│   ├── backend
│   └── frontend
├── int
│   ├── backend
│   └── frontend
└── prod
    ├── backend
    └── frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are using an ApplicationSet with the directory generator to deploy every component defined in this git repository. For example, for the "dev" environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ApplicationSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo-dev&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;generators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;git&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;directories&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/dev/*&lt;/span&gt;
      &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git@github.com:camptocamp/argocd-project-foo-apps.git&lt;/span&gt;
      &lt;span class="na"&gt;revision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo-dev-{{path.basename}}&lt;/span&gt; &lt;span class="c1"&gt;# sub folder name&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{path}}'&lt;/span&gt; &lt;span class="c1"&gt;# full path&lt;/span&gt;
        &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git@github.com:camptocamp/argocd-project-foo-apps.git&lt;/span&gt;
        &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
      &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
      &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo-dev&lt;/span&gt;
        &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ApplicationSet will create one ArgoCD Application per folder in &lt;code&gt;apps/dev/&lt;/code&gt;. Imagine we have the following structure in Git:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apps
└── dev
    ├── backend
    │   ├── Chart.yaml
    │   └── values.yaml
    ├── database
    │   ├── Chart.yaml
    │   └── values.yaml
    └── frontend
        ├── Chart.yaml
        └── values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will generate three ArgoCD Applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;foo-dev-database&lt;/li&gt;
&lt;li&gt;foo-dev-backend&lt;/li&gt;
&lt;li&gt;foo-dev-frontend&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Review App Workflow
&lt;/h1&gt;

&lt;p&gt;When a pull request is opened for a specific application, for example the frontend, we want to deploy a new instance of that component. To do so, we will monitor pull requests on the frontend repository.&lt;br&gt;
We will create an additional ApplicationSet that deploys the frontend whenever a pull request carrying the &lt;code&gt;deploy&lt;/code&gt; label is opened in the frontend Git repository.&lt;/p&gt;
&lt;h1&gt;
  
  
  Implementation
&lt;/h1&gt;
&lt;h2&gt;
  
  
  Create a token to access GitHub API
&lt;/h2&gt;

&lt;p&gt;The first step is to &lt;a href="https://github.com/settings/tokens"&gt;create a token&lt;/a&gt; to access the frontend repository.&lt;/p&gt;

&lt;p&gt;Then we need to deploy this token as a Kubernetes secret in the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GITHUB_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
kubectl create secret generic github-token &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$GITHUB_TOKEN&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-o&lt;/span&gt; yaml &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; github-token-secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you follow GitOps principles to deploy manifests to the cluster, you can commit this file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a webhook
&lt;/h2&gt;

&lt;p&gt;For a better user experience, we can &lt;a href="https://github.com/Vampouille/demo-argocd-review-app/settings/hooks"&gt;set up a webhook&lt;/a&gt; that will notify ArgoCD when something changes on GitHub.&lt;br&gt;
This webhook needs to be defined on the source code repository, where pull requests are created.&lt;br&gt;
Go to the repository "Settings", then "Webhooks", and click the "Add webhook" button.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Payload URL is the URL to access ArgoCD with the path &lt;code&gt;/api/webhook&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The Content Type should be set to &lt;code&gt;application/json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;It's a good idea to set a "Secret": just use a random string; we will reuse it in a later step.&lt;/li&gt;
&lt;li&gt;Regarding events, select individual events and choose the "Pull requests" events.&lt;/li&gt;
&lt;/ul&gt;
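&lt;p&gt;As a side note, GitHub authenticates each delivery by signing the payload with an HMAC-SHA256 of the shared secret, sent in the &lt;code&gt;X-Hub-Signature-256&lt;/code&gt; header. A minimal sketch (the secret and payload below are hypothetical placeholders) to see what that digest looks like:&lt;/p&gt;

```shell
# Hypothetical values: use the random string you entered in the "Secret" field.
SECRET="my-random-webhook-secret"
PAYLOAD='{"action":"labeled"}'

# GitHub computes HMAC-SHA256 over the raw request body with the shared secret;
# the receiver recomputes it to authenticate the delivery.
printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" -hex | awk '{print $NF}'
```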
&lt;h2&gt;
  
  
  Deploy the new ApplicationSet
&lt;/h2&gt;

&lt;p&gt;This new ApplicationSet uses the "Pull request" generator, which monitors pull requests on the source code repository. Create and commit a file with the following ApplicationSet manifest, then deploy it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ApplicationSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo-dev-review-frontend&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;generators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;pullRequest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;github&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;camptocamp&lt;/span&gt;
        &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt; &lt;span class="c1"&gt;# This is the application source code repo&lt;/span&gt;
        &lt;span class="na"&gt;tokenRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github-token&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;token&lt;/span&gt;
        &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt; &lt;span class="c1"&gt;# label on PR that trigger review app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;foo-dev-frontend-{{branch}}-{{number}}'&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git@github.com:camptocamp/argocd-project-foo-apps.git&lt;/span&gt;
        &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/dev/frontend&lt;/span&gt;
        &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;image.tag"&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{branch}}-{{head_sha}}"&lt;/span&gt; &lt;span class="c1"&gt;# override of the image tag&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ingress.prefix"&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{branch}}"&lt;/span&gt; &lt;span class="c1"&gt;# add a prefix to the URL &lt;/span&gt;
      &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
      &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
        &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo-dev&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ApplicationSet will deploy a new instance of the frontend for each pull request with the &lt;code&gt;deploy&lt;/code&gt; label. The image tag is overridden with the branch name and the short commit hash, and a unique prefix is added to the URL to avoid conflicts with other instances. Finally, the release name is also unique, as it matches the Application name (&lt;code&gt;foo-dev-frontend-{{branch}}-{{number}}&lt;/code&gt;), which also avoids conflicts on object names.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure ArgoCD to use secret for webhook
&lt;/h2&gt;

&lt;p&gt;The last step is to protect the ArgoCD webhook endpoint with the shared secret. The ArgoCD Helm chart allows setting a secret for the GitHub webhook endpoint: &lt;code&gt;configs.secret.githubSecret&lt;/code&gt;.&lt;/p&gt;
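&lt;p&gt;For example, a minimal values override for the ArgoCD Helm chart could look like this (the secret value is a placeholder; use the same random string you entered in the GitHub webhook form):&lt;/p&gt;

```yaml
# values.yaml override for the ArgoCD Helm chart (placeholder secret value)
configs:
  secret:
    githubSecret: "my-random-webhook-secret"
```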

&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;Now it's time to test. For this step, just make a modification in the source repository and open a pull request for it. Wait until the container image is built; then, after you add the &lt;code&gt;deploy&lt;/code&gt; label, a new ArgoCD Application should be created :-)&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This new ArgoCD feature is very interesting. It could be improved with feedback posted on the pull request showing the status of the review app, or with a list of the available URLs. ArgoCD already shows this information in the web UI, so perhaps a simple link to the ArgoCD Application is enough.&lt;/p&gt;

&lt;p&gt;Camptocamp will participate in KubeCon in Amsterdam from 18 to 21 April. Maybe we can meet there and discuss development workflows that really help developers.&lt;/p&gt;

</description>
      <category>argocd</category>
      <category>devops</category>
      <category>pullrequest</category>
    </item>
    <item>
      <title>Deploy your Pulumi project using Docker and Dagger.io</title>
      <dc:creator>Hugo Bollon</dc:creator>
      <pubDate>Wed, 14 Dec 2022 10:30:00 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/deploy-your-pulumi-project-using-docker-and-daggerio-2dig</link>
      <guid>https://dev.to/camptocamp-ops/deploy-your-pulumi-project-using-docker-and-daggerio-2dig</guid>
      <description>&lt;h2&gt;
  
  
  🕰️ In the previous episode
&lt;/h2&gt;

&lt;p&gt;In the first part of this Dagger series, I showed you what Dagger.io is, its features, its benefits over other CI/CD solutions, and finally the very basics of Dagger.&lt;/p&gt;

&lt;p&gt;In this chapter, I will show you how to supercharge the CI/CD of any Pulumi project using Dagger.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧰 Pulumi - An amazing IaC tool
&lt;/h2&gt;

&lt;p&gt;First of all, some of you may not know what Pulumi is, or even the concept of &lt;strong&gt;&lt;em&gt;IaC&lt;/em&gt;&lt;/strong&gt; (Infrastructure as Code), so I will quickly present these two points.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure as Code
&lt;/h3&gt;

&lt;p&gt;Nowadays, IT extends to many areas, and new needs keep emerging, making it necessary to adapt infrastructures to support all of this.&lt;br&gt;
Infrastructures have also seen their requirements evolve, given their multiplication and their increasingly large sizes. As a result, companies started wanting to automate and simplify them.&lt;/p&gt;

&lt;p&gt;To address this problem, Amazon unveiled in 2006 the concept of Infrastructure as Code (or &lt;strong&gt;&lt;em&gt;IaC&lt;/em&gt;&lt;/strong&gt;), allowing, on Amazon Web Services, the configuration of instances using computer code.&lt;br&gt;
It was a revolution for infrastructure management and, although limited at the time, this method was quickly adopted by the market.&lt;br&gt;
This event also coincides with the emergence of the DevOps movement, of which IaC is a part.&lt;/p&gt;

&lt;p&gt;Most of the infrastructure as code tools are based on the use of descriptor files to organize the code which avoids duplication between environments. Some advanced tools support variability, the use of outputs and even deployment to several providers simultaneously.&lt;/p&gt;

&lt;p&gt;There are three types of infrastructure as code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Imperative&lt;/strong&gt;: resources (instances, networks, etc.) are declared via a list of instructions in a defined order to obtain an expected result.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Functional&lt;/strong&gt;: unlike the imperative mode, the order of the instructions does not matter. The resources are defined in such a way that their final configuration is as expected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Based on the environment&lt;/strong&gt;: the resources are declared in such a way that their state and their final configuration are consistent with the rest of the environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The advantages of IaC compared to traditional management are numerous, such as: cost reduction, the possibility of versioning the infrastructure, the speed of deployment and execution, the ability to collaborate, etc.&lt;br&gt;
It also allows complete automation, in fact, once the process is launched, there is no longer any need for human intervention. This advantage not only limits the risks due to human error and therefore increases reliability, but also allows teams to focus on projects and less on the deployment of applications.&lt;/p&gt;
&lt;h3&gt;
  
  
  Pulumi
&lt;/h3&gt;

&lt;p&gt;Pulumi is an open-source IaC tool. It can be used to create, manage and deploy infrastructure on many cloud providers like AWS, GCP, Scaleway, etc.&lt;br&gt;
It has several strengths compared to other IaC solutions such as &lt;em&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;First of all, Pulumi supports many programming languages, like Go, TypeScript, Python, Java, F# and a few others.&lt;br&gt;
This is a great advantage for Pulumi, and one of the reasons we began using it at &lt;em&gt;Camptocamp&lt;/em&gt;: a programming language like Go, rather than a configuration language like HCL (the language used by &lt;em&gt;Terraform&lt;/em&gt;), allows much more flexibility and adaptability.&lt;/p&gt;

&lt;p&gt;Furthermore, Pulumi has native providers for AWS, Google Cloud and others, so new provider versions are available the same day they are released.&lt;br&gt;
Additionally, Pulumi also supports Terraform providers to maintain compatibility with any infrastructure built using Terraform.&lt;/p&gt;

&lt;p&gt;Another very interesting advantage is the support of what they call “&lt;em&gt;Dynamic Providers&lt;/em&gt;”, which make it easy to extend an existing provider with new types of custom resources by directly programming their CRUD operations.&lt;br&gt;
This allows adding new resources while, for example, embedding complex migration or configuration logic.&lt;/p&gt;

&lt;p&gt;Finally, Pulumi has many other advantages, such as easy testing thanks to the native test frameworks of the supported programming languages, &lt;strong&gt;aliases&lt;/strong&gt; allowing a resource to be renamed while maintaining compatibility with the infrastructure state, better integration with mainstream IDEs like VSCode, etc.&lt;/p&gt;
&lt;h2&gt;
  
  
  ⚡ Supercharge your Pulumi project thanks to Dagger
&lt;/h2&gt;

&lt;p&gt;Now that you know &lt;strong&gt;IaC&lt;/strong&gt;, &lt;strong&gt;Pulumi&lt;/strong&gt; and of course &lt;strong&gt;Dagger&lt;/strong&gt; (if not, you can check the first part of this blog series), we will see how to create CI/CD pipelines for any Pulumi project using Dagger and CUE, and finally how to run them.&lt;br&gt;
For that, I will present the Dagger architecture we built at Camptocamp; it was designed to be complete and reusable. It may be too complex for small projects, but once you understand it you should be able to create your own!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important note&lt;/strong&gt;: Initially, Dagger.io was developed using &lt;a href="https://cuelang.org/" rel="noopener noreferrer"&gt;&lt;em&gt;Cuelang&lt;/em&gt;&lt;/a&gt;, an amazing and modern declarative language, and CUE was the only way to write pipelines with it. However, Dagger's team has recently made Dagger language agnostic.&lt;br&gt;
We now have different SDKs: Go (the main one), CUE, Node.js and Python.&lt;br&gt;
As of today, I advise you to choose the Go SDK if you don't really know which one to take. I only recommend the CUE one if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You like declarative languages like YAML&lt;/li&gt;
&lt;li&gt;You know CUE or want to learn it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the following chapters, I will present our implementation, which is written in CUE. However, it should be easy to transpose to other SDKs once you understand it. &lt;/p&gt;
&lt;h3&gt;
  
  
  📦 The Pulumi package
&lt;/h3&gt;

&lt;p&gt;In order to make a reusable and powerful Dagger project, we decided to create a "Pulumi" package which embeds our object definitions, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the Docker image&lt;/li&gt;
&lt;li&gt;the container definition&lt;/li&gt;
&lt;li&gt;the command object&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As an example, here is the command object definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// pulumi/command.cue
package pulumi

import (
    "dagger.io/dagger"
    "universe.dagger.io/bash"
    "universe.dagger.io/docker"
)

#Command: self = {
    image?: docker.#Image
    name:   string
    args: [...string]
    env: [string]: string
    source: dagger.#FS
    input?: dagger.#FS

    _forceRun: bash.#RunSimple &amp;amp; {
        script: contents: "true"
        always: true
    }

    _container: #Container &amp;amp; {
        if self.image != _|_ {
            image: self.image
        }

        source: self.source

        command: {
            name: self.name
            args: self.args
        }

        env: self.env &amp;amp; {
            FORCE_RUN_HACK: "\(_forceRun.success)"
        }

        if self.input != _|_ {
            mounts: input: {
                type:     "fs"
                dest:     "/input"
                contents: self.input
            }
        }

        export: directories: "/output": _
    }

    output: _container.export.directories."/output"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the top of this file, you can see the package definition and all our imports. &lt;code&gt;universe.dagger.io&lt;/code&gt; is a community repository where we can find pre-made packages for Docker, Bash, Alpine, etc.&lt;br&gt;
Just after that, we start defining our &lt;code&gt;#Command&lt;/code&gt; object by adding public fields like the command's name, its args, a custom Docker image (optional, due to the &lt;code&gt;?&lt;/code&gt; character), etc.&lt;br&gt;
We also define our unexported fields (which aren't accessible from outside the package), like the container definition which will run our command (the container object is defined in the &lt;code&gt;container.cue&lt;/code&gt; file in the same repository).&lt;/p&gt;

&lt;p&gt;This is one of several definitions in this package; I won't show all of them, since they are really specific to our implementation and not very relevant to explaining how it works.&lt;br&gt;
However, you can find all of our source code here: &lt;a href="https://github.com/camptocamp/dagger-pulumi" rel="noopener noreferrer"&gt;https://github.com/camptocamp/dagger-pulumi&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last file from this pulumi package that we will look at is &lt;code&gt;main.cue&lt;/code&gt;. This is where we define all the Pulumi commands that we will use (using the &lt;code&gt;#Command&lt;/code&gt; object seen just before).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// pulumi/main.cue
[...]
#Preview: self = {
    stack: string
    diff:  bool | *false

    #Command &amp;amp; {
        name: "preview"

        args: [
            "--stack",
            stack,
            "--save-plan",
            "/output/plan.json",

            if diff {
                "--diff"
            },
        ]
    }

    _file: core.#ReadFile &amp;amp; {
        input: self.output
        path:  "plan.json"
    }

    plan: _file.contents
}

#Update: {
    stack: string
    diff:  bool | *false
    plan:  string

    _file: core.#WriteFile &amp;amp; {
        input:    dagger.#Scratch
        path:     "plan.json"
        contents: plan
    }

    #Command &amp;amp; {
        name: "update"

        args: [
            "--stack",
            stack,
                        [...]
            if diff {
                "--diff"
            },

            "--skip-preview",
        ]

        input: _file.output
    }
}
[...]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I voluntarily removed some parts of the file, leaving only the definitions for the Pulumi preview and update commands, in order to keep it simple.&lt;br&gt;
In the Pulumi context, the preview command compares the codebase, which defines the desired infrastructure state, with the actual state, and previews all the changes that applying the configuration would make. &lt;br&gt;
The update command, as its name says, updates the deployed infrastructure to match the desired state.&lt;/p&gt;

&lt;p&gt;Using these definitions, we finally define a &lt;code&gt;#Pulumi&lt;/code&gt; object in the &lt;code&gt;main.cue&lt;/code&gt; of the parent &lt;code&gt;ci&lt;/code&gt; package. &lt;br&gt;
It is composed of a few attributes used to set up our project environment, such as the Pulumi stack (workspace), the environment variables, the authorization to run destructive actions through this CI, etc.&lt;br&gt;
We also have a list of all available commands, constructed using the &lt;code&gt;#Command&lt;/code&gt; objects defined earlier.&lt;/p&gt;

&lt;p&gt;Finally, the last step before we can run our pipelines is to create the Dagger plan.&lt;br&gt;
This is the most important part, where all jobs are defined, but it is really specific to the project for which it is built.&lt;br&gt;
I will present a plan from one of our projects, which uses the pulumi package shown just before.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import (
    "github.com/camptocamp/pulumi-aws-schweizmobil/ci"

    "dagger.io/dagger"
)

dagger.#Plan &amp;amp; {
    client: {
        env: {
            PULUMI_GOMAXPROCS?: string
            PULUMI_GOMEMLIMIT?: string
        }

        filesystem: {
            sources: read: {
                path:     "."
                contents: dagger.#FS

                exclude: [
                    ".*",
                ]
            }

            // TODO
            // Simplify once https://github.com/dagger/dagger/issues/2909 is fixed
            plan: read: {
                path:     "plan.json"
                contents: string
            }

            planWrite: write: {
                path:     "plan.json"
                contents: actions.preview.plan
            }
        }
    }

    #Pulumi: ci.#Pulumi &amp;amp; {
        env: {
            if client.env.PULUMI_GOMAXPROCS != _|_ {
                GOMAXPROCS: client.env.PULUMI_GOMAXPROCS
            }

            if client.env.PULUMI_GOMEMLIMIT != _|_ {
                GOMEMLIMIT: client.env.PULUMI_GOMEMLIMIT
            }
        }

        source: client.filesystem.sources.read.contents
        stack:  string
        diff:   bool
        update: plan: client.filesystem.plan.read.contents
        enableDestructiveActions: bool
    }

    actions: {
        #Pulumi.commands
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, at the beginning of this file, you can see the import of the homemade &lt;code&gt;ci&lt;/code&gt; package (as a reminder, it is fully available &lt;a href="https://github.com/camptocamp/dagger-pulumi" rel="noopener noreferrer"&gt;here&lt;/a&gt;).&lt;br&gt;
Then, we define the &lt;code&gt;dagger.#Plan&lt;/code&gt; object, which is composed of a few parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the client config: this is where we can set some runtime elements: environment variables, filesystems with read/write permissions, the sources location...&lt;/li&gt;
&lt;li&gt;the actions: these are all the jobs that you will be able to run using the Dagger CLI or in your CI/CD environment; in our case we just use the local variable &lt;code&gt;#Pulumi.commands&lt;/code&gt;, which is built using the &lt;code&gt;#Pulumi&lt;/code&gt; object defined in the &lt;code&gt;ci&lt;/code&gt; package. &lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  🚀 Launch Dagger's jobs
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Locally using CLI
&lt;/h4&gt;

&lt;p&gt;As I explained in the previous part of this blog series, Dagger can run any job locally thanks to its Dockerized design.&lt;br&gt;
You must have installed the Dagger CLI for the CUE SDK (check the first part of this series or &lt;a href="https://docs.dagger.io/sdk/cue/526369/install" rel="noopener noreferrer"&gt;the official documentation&lt;/a&gt; if you haven't already).&lt;/p&gt;

&lt;p&gt;Once ready, open your favorite terminal in your project directory (containing your Pulumi project and your Dagger plan) and run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;dagger --plan=ci.cue do --with '#Pulumi: stack: "&amp;lt;your_stack&amp;gt;"' preview&lt;/code&gt; -&amp;gt; it will run a &lt;strong&gt;Pulumi preview&lt;/strong&gt; on the desired stack&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;dagger --plan=ci.cue do --with '#Pulumi: stack: "&amp;lt;your_stack&amp;gt;", #Pulumi: diff: true, #Pulumi: enableDestructiveActions: true' update&lt;/code&gt; -&amp;gt; it will run a &lt;strong&gt;Pulumi update&lt;/strong&gt; on the desired stack&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Remotely using a CI/CD environment
&lt;/h4&gt;

&lt;p&gt;With Dagger, you can also run your pipelines in any CI/CD environment! &lt;br&gt;
As a simple example, you can easily run these jobs on GitHub Actions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pulumi-project&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dagger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Clone repository&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Dagger Engine&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;cd /usr/local&lt;/span&gt;
            &lt;span class="s"&gt;wget -O - https://dl.dagger.io/dagger/install.sh | sudo sh&lt;/span&gt;
            &lt;span class="s"&gt;cd -&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Pulumi update using Dagger&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;dagger-cue --plan=ci.cue do update --log-format plain        &lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the &lt;code&gt;--log-format plain&lt;/code&gt; flag is useful in order to have well-formatted logs in the GitHub Actions output.&lt;br&gt;
With this action, a &lt;code&gt;pulumi update&lt;/code&gt; will be launched by Dagger at every push on the main branch.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔚 Conclusion
&lt;/h2&gt;

&lt;p&gt;If you have made it this far, thank you very much. I hope I have succeeded in giving you a good overview of what Dagger is and how you can exploit it to improve your CI/CD practices.&lt;/p&gt;

&lt;p&gt;From my point of view, Dagger is a very promising solution, even though we can still feel its young age; indeed, it is not perfect yet.&lt;br&gt;
In my opinion, each SDK should continue to mature and harmonize with the others. I also think Dagger's execution performance and cache management should be improved (I'm basing this on version 0.2 of the engine, since 0.3 is not yet available with the Cuelang SDK, so maybe it's already better!).&lt;br&gt;
Nevertheless, some of Dagger's advantages are already essential for me, such as the possibility of running pipelines locally, or running them on a remote VM using local sources.&lt;/p&gt;

&lt;p&gt;To be followed very closely! 😉&lt;/p&gt;

</description>
      <category>watercooler</category>
    </item>
    <item>
      <title>Use Docker to build better CI/CD pipelines with Dagger</title>
      <dc:creator>Hugo Bollon</dc:creator>
      <pubDate>Sun, 11 Dec 2022 05:00:00 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/use-docker-to-build-better-cicd-pipelines-with-dagger-4l4j</link>
      <guid>https://dev.to/camptocamp-ops/use-docker-to-build-better-cicd-pipelines-with-dagger-4l4j</guid>
      <description>&lt;p&gt;With the raises of DevOps practices, CI/CD (continuous integration &amp;amp; continuous deployment) takes a major place in every delivery workload.&lt;br&gt;
CI/CD allow organizations to build, test and finally ship their applications more quickly and efficiently. It's a modern set of practices which allows to automatically trigger build, test or others types of jobs when the changes to the codebase are done.&lt;/p&gt;

&lt;p&gt;In this quest for automation, we can use CI/CD ecosystems like GitHub Actions, GitLab CI and many more. &lt;br&gt;
However, a very promising new open-source solution has emerged, called Dagger.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0j9iony7pwmmyb9o7cb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0j9iony7pwmmyb9o7cb.png" alt="Dagger's logo" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  🤔 Dagger? What is it?
&lt;/h2&gt;

&lt;p&gt;Dagger.io is a brand-new open-source, programmable CI/CD engine. It was created by Solomon Hykes, the founder of Docker.&lt;br&gt;
It's designed to use Docker's BuildKit in order to run our pipelines inside containers. &lt;br&gt;
For those who don't know, BuildKit is an improved backend used by Docker to build images. It's far more efficient than Docker's legacy builder thanks to its improved caching system, the parallelization of build tasks and its support for new Dockerfile features. &lt;/p&gt;

&lt;p&gt;Dagger is also programmable, so we create our pipelines as code; Dagger itself is written in CUE (Cuelang), a language created at Google. &lt;/p&gt;

&lt;p&gt;CUE is an acronym for "Configure Unify Execute" and, as its name suggests, it's not another general-purpose language but a declarative one, mainly used for data templating and validation, configuration, and even code generation.&lt;/p&gt;

&lt;p&gt;It's basically an evolution of plainer formats like YAML or JSON, and one of the things that makes it more modern is the presence of a package manager.&lt;/p&gt;

&lt;p&gt;So, going back to Dagger: you can use CUE to build your pipelines, but you can also use one of the SDKs available for multiple languages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go&lt;/li&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;NodeJS&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  ✨ Features and advantages
&lt;/h2&gt;

&lt;p&gt;Thanks to its innovative containerized design and architecture, Dagger has many advantages over more conventional CI/CD methods.&lt;br&gt;
As seen previously, Dagger works with Docker containers, which makes it cross-compatible with every CI/CD runtime environment like GitHub Actions, GitLab CI, Travis CI, etc.&lt;/p&gt;

&lt;p&gt;One of Dagger's other main strengths is the ability to test our pipelines locally using the Dagger CLI. This is made possible by its Dockerized design, which makes the development and testing processes much easier compared to conventional CI/CD solutions.&lt;br&gt;
Conversely, Dagger is also able to perform remote runs (directly on a self-hosted GitHub runner, for example) with local sources.&lt;br&gt;
To do this, you can use the &lt;code&gt;DOCKER_HOST&lt;/code&gt; environment variable.&lt;/p&gt;
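&lt;p&gt;For example (a sketch: the SSH user and host name are placeholders), you can point the CLI at a remote Docker daemon like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Execute the pipeline on the remote VM's Docker daemon,
# while still using the local sources as input
DOCKER_HOST=ssh://user@remote-vm dagger-cue do build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;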

&lt;p&gt;The Dockerized design also allows pipelines made with the Dagger devkit to run in every CI/CD runtime environment, for example GitHub Actions (using the &lt;a href="https://github.com/dagger/dagger-for-github" rel="noopener noreferrer"&gt;official Dagger GitHub&lt;/a&gt; Action from the marketplace).&lt;br&gt;
Furthermore, pipelines can run independently of the platform's architecture; the only requirement is support for the Docker ecosystem. So they can run on a managed runner (e.g. GitHub runners), a self-hosted runner, a local machine, a serverless compute instance, etc. &lt;/p&gt;

&lt;p&gt;Furthermore, Dagger has a solid caching system which caches every operation by default, and it's customizable.&lt;br&gt;
It reduces the execution time of CI/CD jobs after the first run by caching unchanged required files such as downloaded dependencies, built binaries or internal CI/CD engine files.&lt;/p&gt;

&lt;p&gt;Finally, Dagger is designed to be reusable thanks to its internal package manager (provided by the CUE language), which is indeed similar to Go's.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package app

import (
    "dagger.io/dagger"
    "dagger.io/dagger/core"
    "universe.dagger.io/bash"
    "universe.dagger.io/docker"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here you can see the import of some packages from the "universe". The Universe is a community package repository; all the sources of these packages can be found &lt;a href="https://github.com/dagger/dagger/tree/v0.2.x/pkg/universe.dagger.io" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and you can also contribute to them. &lt;/p&gt;

&lt;h2&gt;
  
  
  ❓ How does it work?
&lt;/h2&gt;

&lt;p&gt;Dagger is language-agnostic: the wish of the team behind it is to let developers write their pipelines in the language of their choice.&lt;br&gt;
For that, Dagger uses a specific architecture: the SDKs (Go, CUE, Node and Python) don't actually run your pipelines themselves. Instead, they send pipeline definitions to the Dagger GraphQL API, which then triggers the Dagger Engine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv947b89r1fksfpwyy94x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv947b89r1fksfpwyy94x.png" alt="Dagger's architecture" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dagger's architecture scheme. Ref: &lt;a href="https://dagger.io/blog/graphql" rel="noopener noreferrer"&gt;https://dagger.io/blog/graphql&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  🎮 Demo application using Cue SDK
&lt;/h2&gt;

&lt;p&gt;Dagger's team made a demo app with pipelines designed to execute build and test jobs. It's useful for discovering and trying Dagger. &lt;br&gt;
To use it you must, of course, have Docker installed, since Dagger will use BuildKit, but you must also install the CLI (the CUE one is available here: &lt;a href="https://docs.dagger.io/sdk/cue/526369/install" rel="noopener noreferrer"&gt;https://docs.dagger.io/sdk/cue/526369/install&lt;/a&gt;).&lt;br&gt;
Once done, clone this repository: &lt;a href="https://github.com/dagger/todoapp/blob/main/dagger.cue" rel="noopener noreferrer"&gt;https://github.com/dagger/todoapp/blob/main/dagger.cue&lt;/a&gt;&lt;br&gt;
Finally, open a terminal inside the freshly cloned project and run &lt;code&gt;dagger-cue project update&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now you're ready to get your hands on Dagger. If you look at the &lt;code&gt;dagger.cue&lt;/code&gt; file, you will see the &lt;em&gt;plan&lt;/em&gt;. &lt;br&gt;
A plan in Dagger orchestrates the Actions; it's the base component of your configuration.&lt;/p&gt;

&lt;p&gt;Within this plan we can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;interact with the client filesystem

&lt;ul&gt;
&lt;li&gt;read files, usually the current directory as &lt;code&gt;.&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;write files, usually the build output as &lt;code&gt;_build&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;read and define env variables&lt;/li&gt;
&lt;li&gt;declare jobs like dependencies update, build &amp;amp; test&lt;/li&gt;
&lt;/ul&gt;
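
&lt;p&gt;In CUE, these client interactions are declared in a &lt;code&gt;client&lt;/code&gt; section at the top of the plan. Here is a minimal sketch (the secret name and paths are illustrative, not taken from the demo app):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dagger.#Plan &amp;amp; {
    client: {
        filesystem: {
            // read the project sources from the current directory
            ".": read: contents: dagger.#FS
            // write the build output back to the client
            "_build": write: contents: actions.build.output
        }
        // read a secret from a client environment variable
        env: NETLIFY_TOKEN: dagger.#Secret
    }
    actions: {
        // jobs (build, test, deploy...) are declared here
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;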

&lt;p&gt;Here is the one from this demo app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package todoapp

import (
    "dagger.io/dagger"

    "dagger.io/dagger/core"
    "universe.dagger.io/netlify"
    "universe.dagger.io/yarn"
)

dagger.#Plan &amp;amp; {
    actions: {
        // Load the todoapp source code 
        source: core.#Source &amp;amp; {
            path: "."
            exclude: [
                "node_modules",
                "build",
                "*.cue",
                "*.md",
                ".git",
            ]
        }

        // Build todoapp
        build: yarn.#Script &amp;amp; {
            name:   "build"
            source: actions.source.output
        }

        // Test todoapp
        test: yarn.#Script &amp;amp; {
            name:   "test"
            source: actions.source.output

            // This environment variable disables watch mode
            // in "react-scripts test".
            // We don't set it for all commands, because it causes warnings
            // to be treated as fatal errors.
            // See https://create-react-app.dev/docs/advanced-configuration
            container: env: CI: "true"
        }

        // Deploy todoapp
        deploy: netlify.#Deploy &amp;amp; {
            contents: actions.build.output
            site:     string | *"dagger-todoapp"
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see that three different jobs are defined:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a build job, which compiles the React app using Yarn&lt;/li&gt;
&lt;li&gt;a test job&lt;/li&gt;
&lt;li&gt;a deploy job&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To run one of these locally, you can execute &lt;code&gt;dagger-cue do &amp;lt;job_name&amp;gt;&lt;/code&gt;.&lt;br&gt;
So, to build the project, you can run &lt;code&gt;dagger-cue do build&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⛏️ To go deeper
&lt;/h2&gt;

&lt;p&gt;I hope to have given you a good overview of Dagger with this first chapter, or at least to have made you want to go further with it.&lt;br&gt;
Getting started with Dagger and CUE is not necessarily easy.&lt;br&gt;
It's still an extremely young product (we are currently at version &lt;em&gt;v0.2.36&lt;/em&gt;), so there is still a lot of optimization work to be done, as well as missing features.&lt;br&gt;
Keep in mind that Dagger is an open-source project available &lt;a href="https://github.com/dagger/dagger" rel="noopener noreferrer"&gt;on GitHub&lt;/a&gt;, so don't hesitate to open issues or pull requests if needed. &lt;/p&gt;

&lt;p&gt;In the next (coming soon) chapter we will see how to use Dagger to build a deployment workflow for a Pulumi project and how to manage secrets.&lt;/p&gt;

</description>
      <category>watercooler</category>
    </item>
    <item>
      <title>How I deployed a serverless and high availability Blackbox Exporter on AWS Fargate</title>
      <dc:creator>Hugo Bollon</dc:creator>
      <pubDate>Mon, 04 Jul 2022 14:19:14 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/how-i-deployed-a-serverless-and-high-availability-blackbox-exporter-on-aws-fargate-37hh</link>
      <guid>https://dev.to/camptocamp-ops/how-i-deployed-a-serverless-and-high-availability-blackbox-exporter-on-aws-fargate-37hh</guid>
      <description>&lt;p&gt;At &lt;a href="https://www.camptocamp.com" rel="noopener noreferrer"&gt;&lt;em&gt;Camptocamp&lt;/em&gt;&lt;/a&gt;, we're using multiple Blackbox Exporters hosted in a few different cloud providers and world regions. We're using them to monitor availability and ssl certificate validity and expiration of many websites.&lt;br&gt;
They were all deployed inside Linux VMs provisioned by Terraform and configured by our Puppet infrastructure. However, in order to achieve more simplicity and high availability, we wanted to deploy containers instead of these VMs.&lt;/p&gt;
&lt;h2&gt;
  
  
  🧐 Why a serverless approach with AWS Fargate
&lt;/h2&gt;

&lt;p&gt;AWS ECS (Elastic Container Service) is a fully managed, highly scalable and Docker-compatible container orchestration service.&lt;br&gt;
It is widely used to host microservice applications like webservers, APIs or machine learning applications.&lt;/p&gt;

&lt;p&gt;With ECS, you're free to choose between EC2 and Fargate instances to run your apps. &lt;br&gt;
Fargate is a serverless compute engine that lets you focus on building and deploying your apps by taking away all infrastructure deployment and maintenance. No need to worry about security or operating systems; AWS will handle that.&lt;br&gt;
On the other hand, EC2 is more flexible than Fargate and less expensive. It can also be interesting for some customers to manage security themselves.&lt;/p&gt;

&lt;p&gt;In our case, we opted for a serverless approach using Fargate in order to take advantage of the simplicity of a managed infrastructure, since for blackboxes we have no specific security constraints on the infrastructure.&lt;/p&gt;
&lt;h2&gt;
  
  
  🧳 What I use
&lt;/h2&gt;

&lt;p&gt;To deploy an application on ECS using Fargate you will need three different components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;ECS Cluster&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;ECS Task Definition&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;One or more &lt;strong&gt;ECS Services&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;strong&gt;Task Definition&lt;/strong&gt; is a template where you define your application (Docker image, resource requests, networking mode, etc.).&lt;br&gt;
The &lt;strong&gt;Service&lt;/strong&gt; is the component that will deploy our Fargate instance(s), based on our task definition(s), in the newly created &lt;strong&gt;Cluster&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;At &lt;em&gt;Camptocamp&lt;/em&gt;, we do IaC (infrastructure as code) mostly with Terraform. In order to simplify the deployment of all the resources necessary to implement these components, I created two distinct Terraform modules: one to create an &lt;strong&gt;ECS Cluster&lt;/strong&gt; and one to create &lt;strong&gt;Services&lt;/strong&gt; within an existing cluster.&lt;br&gt;
They have been designed to be flexible and reusable, and we will take a closer look at them to find out what they do and how they work.&lt;/p&gt;
&lt;h3&gt;
  
  
  ⚙️ Module: ECS Cluster
&lt;/h3&gt;

&lt;p&gt;Firstly, I created a module aiming to deploy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an &lt;strong&gt;ECS cluster&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;an associated &lt;strong&gt;VPC&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;the necessary &lt;strong&gt;IAM roles&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;Cloudwatch Log Group&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;network components (internet gateway, subnets, routes)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To use this module, we must provide some input variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A project name&lt;/li&gt;
&lt;li&gt;A project environment (optional)&lt;/li&gt;
&lt;li&gt;A list of public subnets&lt;/li&gt;
&lt;li&gt;A list of private subnets&lt;/li&gt;
&lt;li&gt;A list of availability zones &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Link: &lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/camptocamp" rel="noopener noreferrer"&gt;
        camptocamp
      &lt;/a&gt; / &lt;a href="https://github.com/camptocamp/terraform-aws-ecs-cluster" rel="noopener noreferrer"&gt;
        terraform-aws-ecs-cluster
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Terraform module used to create a new AWS ECS cluster with VPC, IAM roles and networking components
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;terraform-aws-ecs-cluster&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;Terraform module used to create a new AWS ECS cluster with VPC, IAM roles and networking components&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/camptocamp/terraform-aws-ecs-cluster" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;h3&gt;
  
  
  ⚙️ Module: ECS Service Fargate
&lt;/h3&gt;

&lt;p&gt;Then, this second module aims to deploy a Fargate &lt;strong&gt;Service&lt;/strong&gt; in an existing &lt;strong&gt;ECS Cluster&lt;/strong&gt; (in this case deployed with the previous module).&lt;br&gt;
It will also create everything necessary to be able to access our service. Here is the full list of resources that will be created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;ECS Fargate Service&lt;/strong&gt; with its needed &lt;strong&gt;Security Group&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;ALB&lt;/strong&gt; (Application Load Balancer) also with a &lt;strong&gt;Security Group&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;Target Group&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;HTTP&lt;/strong&gt; and &lt;strong&gt;HTTPS Listeners&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;DNS Zone&lt;/strong&gt; and &lt;strong&gt;Record&lt;/strong&gt; to the &lt;strong&gt;ALB&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;ACM Certificate&lt;/strong&gt; with validation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once again, this module requires some variables, but this time the list is a little longer, so here are just the most important ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An application name&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;ECS Cluster ID&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;ECS Task Definition&lt;/strong&gt; resource (to define what will be deployed on this instance)&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;DNS Zone&lt;/strong&gt; and &lt;strong&gt;Host&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;VPC ID&lt;/strong&gt; and &lt;strong&gt;CIDR Blocks&lt;/strong&gt; &lt;/li&gt;
&lt;li&gt;An application port&lt;/li&gt;
&lt;li&gt;Public and private subnet IDs &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Link:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/camptocamp" rel="noopener noreferrer"&gt;
        camptocamp
      &lt;/a&gt; / &lt;a href="https://github.com/camptocamp/terraform-aws-ecs-service-fargate" rel="noopener noreferrer"&gt;
        terraform-aws-ecs-service-fargate
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Terraform module used to create a new Fargate Service in an existing ECS cluster with networking components (ALB, Target Group, Listener)
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;terraform-aws-ecs-service-fargate&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;Terraform module used to create a new Fargate Service in an existing ECS cluster with networking components (ALB, Target Group, Listener)&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/camptocamp/terraform-aws-ecs-service-fargate" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;h2&gt;
  
  
  🎓 How to
&lt;/h2&gt;

&lt;p&gt;So, our use case is to have a serverless &lt;strong&gt;Blackbox Exporter&lt;/strong&gt; deployed on &lt;strong&gt;AWS ECS&lt;/strong&gt; using a &lt;strong&gt;Fargate&lt;/strong&gt; instance in the &lt;strong&gt;eu-west-1&lt;/strong&gt; region.&lt;br&gt;
Furthermore, it must be accessible only over HTTPS, with a valid SSL certificate and basic authentication.&lt;/p&gt;

&lt;p&gt;In order to achieve that, we must add an Nginx sidecar container which will handle basic auth and proxy the traffic to the Blackbox container for authenticated clients.&lt;/p&gt;

&lt;p&gt;Here is a simple architecture diagram of what we will achieve:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04gh6atr753v3tb0ukl6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04gh6atr753v3tb0ukl6.png" alt="ECS Blackbox exporter architecture diagram" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, we will begin by creating the ECS Cluster, with all its nested resources (VPC, subnets, etc.), using the &lt;a href="https://github.com/camptocamp/terraform-aws-ecs-cluster" rel="noopener noreferrer"&gt;terraform-aws-ecs-cluster module&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# versions.tf&lt;/span&gt;

&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 4.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eu-west-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# main.tf&lt;/span&gt;

&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"ecs-cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git@github.com:camptocamp/terraform-aws-ecs-cluster.git"&lt;/span&gt;

  &lt;span class="nx"&gt;project_name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ecs-cluster-blackbox-exporters"&lt;/span&gt;
  &lt;span class="nx"&gt;project_environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"prod"&lt;/span&gt;
  &lt;span class="nx"&gt;availability_zones&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"eu-west-1a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"eu-west-1b"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;public_subnets&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"10.0.0.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"10.0.10.0/24"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;private_subnets&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"10.0.20.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"10.0.30.0/24"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, a minimum of two availability zones is required in order to create the &lt;strong&gt;VPC&lt;/strong&gt; subnets. You also need to provide at least two public and two private CIDR blocks.&lt;/p&gt;

&lt;p&gt;Now that we have declared the module which will create a fresh &lt;strong&gt;ECS cluster&lt;/strong&gt; with all its associated networking, we can create the &lt;strong&gt;Task Definition&lt;/strong&gt; of our Blackbox application task, which we will then need to define the &lt;strong&gt;ECS Service&lt;/strong&gt;.&lt;br&gt;
A &lt;strong&gt;Task Definition&lt;/strong&gt; is a template where we define the containers that will be executed on our &lt;strong&gt;ECS service&lt;/strong&gt; (Docker image to run, port mapping, environment values, log configuration, etc.), the resources required (CPU / memory), the network mode of the task (with Fargate we must use &lt;em&gt;awsvpc&lt;/em&gt; mode), and much more!&lt;/p&gt;

&lt;p&gt;So, as we saw earlier, we will need two containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;Blackbox-Exporter container&lt;/strong&gt; which will have port 9115 exposed but inaccessible from the outside of the cluster.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;Nginx container&lt;/strong&gt; which will be exposed to the internet on port 80 with a basic authentication. It will forward authenticated users to the Blackbox container. We will use &lt;a href="https://hub.docker.com/r/beevelop/nginx-basic-auth/" rel="noopener noreferrer"&gt;this docker image&lt;/a&gt; which allows an easy configuration of basic auth using env vars.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will use the &lt;strong&gt;Cloudwatch Log Group&lt;/strong&gt; created by the ecs-cluster module for the logs of these two containers.&lt;br&gt;
Furthermore, we will also use the IAM roles created by the module as the execution and task role ARNs of our Task Definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# main.tf&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecs_task_definition"&lt;/span&gt; &lt;span class="s2"&gt;"blackbox_fargate_task"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;family&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"blackbox-exporter-task"&lt;/span&gt;

  &lt;span class="nx"&gt;container_definitions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;DEFINITION&lt;/span&gt;&lt;span class="sh"&gt;
  [
    {
      "name": "ecs-service-blackbox-prod-container",
      "image": "prom/blackbox-exporter:latest",
      "essential": true,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "${module.ecs-cluster.cloudwatch_log_group_id}",
          "awslogs-region": "eu-west-1",
          "awslogs-stream-prefix": "ecs-service-blackbox-exporter-prod"
        }
      },
      "portMappings": [
        {
          "containerPort": 9115
        }
      ],
      "cpu": 256,
      "memory": 512
    },
    {
      "name": "ecs-service-nginx-prod-container",
      "image": "beevelop/nginx-basic-auth:v2021.04.1",
      "essential": true,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "${module.ecs-cluster.cloudwatch_log_group_id}",
          "awslogs-region": "eu-west-1",
          "awslogs-stream-prefix": "ecs-service-nginx-exporter-prod"
        }
      },
      "environment": [
        {
          "name": "HTPASSWD",
          "value": "${var.blackbox_htpasswd}"
        },
        {
          "name": "FORWARD_HOST",
          "value": "localhost"
        },
        {
          "name": "FORWARD_PORT",
          "value": "9115"
        }
      ],
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp"
        }
      ],
      "networkMode": "awsvpc"
    }
  ]
&lt;/span&gt;&lt;span class="no"&gt;  DEFINITION

&lt;/span&gt;  &lt;span class="nx"&gt;requires_compatibilities&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"FARGATE"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;network_mode&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"awsvpc"&lt;/span&gt;
  &lt;span class="nx"&gt;memory&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"512"&lt;/span&gt;
  &lt;span class="nx"&gt;cpu&lt;/span&gt;                      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"256"&lt;/span&gt;
  &lt;span class="nx"&gt;execution_role_arn&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs-cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_execution_role_arn&lt;/span&gt;
  &lt;span class="nx"&gt;task_role_arn&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs-cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_execution_role_arn&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ecs-service-blackbox-exporter-td"&lt;/span&gt;
    &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"prod"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, I get the htpasswd string from a Terraform variable, &lt;code&gt;var.blackbox_htpasswd&lt;/code&gt;. You can define it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# variables.tf&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"blackbox_htpasswd"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;sensitive&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
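
&lt;p&gt;If you need to generate the htpasswd string itself, one possible way (the user name and password below are placeholders) is the &lt;code&gt;htpasswd&lt;/code&gt; tool from apache2-utils:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -n prints the result instead of writing a file,
# -b takes the password from the command line,
# -B uses bcrypt hashing
htpasswd -nbB monitoring 'my-secret-password'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;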



&lt;p&gt;Next, we will need a &lt;strong&gt;DNS Zone&lt;/strong&gt; where the &lt;strong&gt;ECS Service&lt;/strong&gt; module will create the record.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# dns.tf&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53_zone"&lt;/span&gt; &lt;span class="s2"&gt;"alb_dns_zone"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example.com"&lt;/span&gt;
  &lt;span class="nx"&gt;delegation_set_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;Delegation_set_id&amp;gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Optionally, you can create a delegation set in your AWS account, if you don't already have one, and add a &lt;strong&gt;delegation set id&lt;/strong&gt; on your &lt;strong&gt;Route53 zone&lt;/strong&gt; resource in order to always have the same DNS servers.&lt;/p&gt;
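
&lt;p&gt;As a sketch (the reference name is arbitrary), such a delegation set can be declared like this, and its &lt;code&gt;id&lt;/code&gt; can then be passed to the zone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# dns.tf

# A reusable delegation set keeps the same four name servers
# even if the hosted zone is destroyed and re-created.
resource "aws_route53_delegation_set" "main" {
  reference_name = "blackbox-exporters"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;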

&lt;p&gt;Finally, we can now create our ECS Service :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"ecs-cluster-service-blackbox"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git@github.com:camptocamp/terraform-aws-ecs-service-fargate.git"&lt;/span&gt;

  &lt;span class="nx"&gt;app_name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ecs-service-blackbox"&lt;/span&gt;
  &lt;span class="nx"&gt;app_environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"prod"&lt;/span&gt;
  &lt;span class="nx"&gt;dns_zone&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example.com"&lt;/span&gt;
  &lt;span class="nx"&gt;dns_host&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"blackbox.example.com"&lt;/span&gt;

  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs-cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_id&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs-cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_cidr_blocks&lt;/span&gt;

  &lt;span class="nx"&gt;ecs_cluster_id&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs-cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_cluster_id&lt;/span&gt;
  &lt;span class="nx"&gt;task_definition&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_ecs_task_definition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;blackbox_fargate_task&lt;/span&gt;
  &lt;span class="nx"&gt;task_lb_container_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ecs-service-nginx-prod-container"&lt;/span&gt;
  &lt;span class="nx"&gt;task_lb_container_port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;

  &lt;span class="nx"&gt;subnet_private_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs-cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private_subnets&lt;/span&gt;&lt;span class="p"&gt;.*.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_public_ids&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs-cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_subnets&lt;/span&gt;&lt;span class="p"&gt;.*.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;generate_public_ip&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;depends_on&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;aws_route53_zone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;alb_dns_zone&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, you must pass the module several of the previously created resources: the VPC id and CIDR blocks, the ECS cluster id, the DNS zone, the task definition resource and the subnets.&lt;br&gt;
You must also specify which container the load balancer will forward requests to, and on which port.&lt;/p&gt;

&lt;p&gt;Once all your resources are properly configured, you can run a &lt;code&gt;terraform apply&lt;/code&gt; to create them.&lt;/p&gt;

&lt;p&gt;That's it 🥳! You now have a nice serverless Blackbox accessible on &lt;code&gt;blackbox.example.com&lt;/code&gt; with basic auth! 🎉&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>monitoring</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Towards a Modular DevOps Stack</title>
      <dc:creator>Raphaël Pinson</dc:creator>
      <pubDate>Wed, 23 Feb 2022 17:37:45 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/towards-a-modular-devops-stack-257c</link>
      <guid>https://dev.to/camptocamp-ops/towards-a-modular-devops-stack-257c</guid>
      <description>&lt;p&gt;A year and a half ago, our infrastructure team at Camptocamp was faced with an increasingly problematic situation. We were provisioning more and more Kubernetes clusters, on different cloud providers. We used Terraform to deploy the infrastructure itself, and we had started to adopt &lt;a href="https://argoproj.github.io/cd/" rel="noopener noreferrer"&gt;Argo CD&lt;/a&gt; to deploy applications on top of the cluster.&lt;/p&gt;

&lt;p&gt;We quickly ended up with many projects using similar logic, often borrowed from older projects, and the code of most of these clusters was starting to diverge.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvl7hboahfscgsbvfyit4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvl7hboahfscgsbvfyit4.png" alt="Diverging projects" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We thought it was time to put together a standard core in order to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;provision Kubernetes clusters;&lt;/li&gt;
&lt;li&gt;deploy standard applications sets (monitoring, ingress controller, certificate management, etc.) on them;&lt;/li&gt;
&lt;li&gt;provide an interface for developers to deploy their applications in a GitOps manner;&lt;/li&gt;
&lt;li&gt;ensure all teams used similar approaches.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl1ncdyg1o7bolr9vy5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl1ncdyg1o7bolr9vy5z.png" alt="DevOps Stack" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://devops-stack.io/" rel="noopener noreferrer"&gt;DevOps Stack&lt;/a&gt; was born!&lt;/p&gt;

&lt;h1&gt;
  
  
  🌟 The Original Design
&lt;/h1&gt;

&lt;p&gt;The original DevOps Stack design was monolithic, both for practical and technical reasons.&lt;/p&gt;

&lt;p&gt;After all, we were trying to centralize best practices from many projects into a common core, so it made sense to put everything together behind a unified interface!&lt;/p&gt;

&lt;p&gt;In addition to that, all the Kubernetes applications were created using an &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern" rel="noopener noreferrer"&gt;App of Apps pattern&lt;/a&gt;, because we had no ApplicationSets and no way to control Argo CD directly from Terraform.&lt;/p&gt;

&lt;p&gt;As a result, the basic interface was very simple, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/camptocamp/devops-stack.git//modules/eks/aws?ref=v0.54.0"&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_id&lt;/span&gt;

  &lt;span class="nx"&gt;worker_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;instance_type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"m5a.large"&lt;/span&gt;
      &lt;span class="nx"&gt;asg_desired_capacity&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
      &lt;span class="nx"&gt;asg_max_size&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;base_domain&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example.com"&lt;/span&gt;

  &lt;span class="nx"&gt;cognito_user_pool_id&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_cognito_user_pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;cognito_user_pool_domain&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_cognito_user_pool_domain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pool_domain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;domain&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, it could get very complex when application settings needed to be tuned!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12b9atbhi9iwf0fgoslm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12b9atbhi9iwf0fgoslm.png" alt="DevOps Stack v0 Architecture" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With time, this architecture started being problematic for various reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it was hard to deactivate or replace components (monitoring, ingress controller, etc.), making it hard to extend the core features;&lt;/li&gt;
&lt;li&gt;default YAML values got quite mixed up and hard to understand;&lt;/li&gt;
&lt;li&gt;as a result, overriding default values was unnecessarily complex;&lt;/li&gt;
&lt;li&gt;adding new applications was done using &lt;code&gt;extra_apps&lt;/code&gt; and &lt;code&gt;extra_applicationsets&lt;/code&gt; parameters, which were monolithic and complex.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was time to rethink the design.&lt;/p&gt;

&lt;h1&gt;
  
  
  🐙 Argo CD Provider
&lt;/h1&gt;

&lt;p&gt;In order to escape the App of Apps pattern, we started looking into using a &lt;a href="https://registry.terraform.io/providers/oboukili/argocd/" rel="noopener noreferrer"&gt;Terraform provider to control Argo CD resources&lt;/a&gt;. After various contributions, the provider was ready for us to start using in the DevOps Stack.&lt;/p&gt;

&lt;h1&gt;
  
  
  🗺 The Plan for Modularization
&lt;/h1&gt;

&lt;p&gt;Using the Argo CD provider allowed us to split each component into a separate module. In a similar way to DevOps Stack v0, each of these modules would provide Terraform code to set up the component, optionally with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cloud resources required to set up the component;&lt;/li&gt;
&lt;li&gt;Helm charts to deploy the application using Argo CD.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💡 A Full Example
&lt;/h2&gt;

&lt;p&gt;As a result, the user interface is much more verbose. The previous example would thus become:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/camptocamp/devops-stack.git//modules/eks/aws?ref=v1.0.0"&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;base_domain&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"demo.camptocamp.com"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_id&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_endpoint_public_access_cidrs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;flatten&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="nx"&gt;formatlist&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"%s/32"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nat_public_ips&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;])&lt;/span&gt;

  &lt;span class="nx"&gt;worker_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;instance_type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"m5a.large"&lt;/span&gt;
      &lt;span class="nx"&gt;asg_desired_capacity&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
      &lt;span class="nx"&gt;asg_max_size&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
      &lt;span class="nx"&gt;root_volume_type&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"gp2"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"argocd"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;server_addr&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"127.0.0.1:8080"&lt;/span&gt;
  &lt;span class="nx"&gt;auth_token&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_auth_token&lt;/span&gt;
  &lt;span class="nx"&gt;insecure&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;plain_text&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;port_forward&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;port_forward_with_namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_namespace&lt;/span&gt;

  &lt;span class="nx"&gt;kubernetes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;host&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kubernetes_host&lt;/span&gt;
    &lt;span class="nx"&gt;cluster_ca_certificate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kubernetes_cluster_ca_certificate&lt;/span&gt;
    &lt;span class="nx"&gt;token&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kubernetes_token&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/camptocamp/devops-stack-module-traefik.git//modules/eks"&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;argocd_namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_namespace&lt;/span&gt;
  &lt;span class="nx"&gt;base_domain&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;base_domain&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"oidc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/camptocamp/devops-stack-module-oidc-aws-cognito.git//modules"&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;argocd_namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_namespace&lt;/span&gt;
  &lt;span class="nx"&gt;base_domain&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;base_domain&lt;/span&gt;

  &lt;span class="nx"&gt;cognito_user_pool_id&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_cognito_user_pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;cognito_user_pool_domain&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_cognito_user_pool_domain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pool_domain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;domain&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"monitoring"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/camptocamp/devops-stack-module-kube-prometheus-stack.git//modules"&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;oidc&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;oidc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;oidc&lt;/span&gt;
  &lt;span class="nx"&gt;argocd_namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_namespace&lt;/span&gt;
  &lt;span class="nx"&gt;base_domain&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;base_domain&lt;/span&gt;
  &lt;span class="nx"&gt;cluster_issuer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"letsencrypt-prod"&lt;/span&gt;
  &lt;span class="nx"&gt;metrics_archives&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

  &lt;span class="nx"&gt;depends_on&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;oidc&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"loki-stack"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/camptocamp/devops-stack-module-loki-stack.git//modules/eks"&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;argocd_namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_namespace&lt;/span&gt;
  &lt;span class="nx"&gt;base_domain&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;base_domain&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_oidc_issuer_url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_oidc_issuer_url&lt;/span&gt;

  &lt;span class="nx"&gt;depends_on&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;monitoring&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"cert-manager"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/camptocamp/devops-stack-module-cert-manager.git//modules/eks"&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;argocd_namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_namespace&lt;/span&gt;
  &lt;span class="nx"&gt;base_domain&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;base_domain&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_oidc_issuer_url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_oidc_issuer_url&lt;/span&gt;

  &lt;span class="nx"&gt;depends_on&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;monitoring&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"argocd"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/camptocamp/devops-stack-module-argocd.git//modules"&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;oidc&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;oidc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;oidc&lt;/span&gt;
  &lt;span class="nx"&gt;argocd&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_namespace&lt;/span&gt;
    &lt;span class="nx"&gt;server_secrhttps&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//kubernetes.slack.com/archives/C01SQ1TMBSTetkey = module.cluster.argocd_server_secretkey&lt;/span&gt;
    &lt;span class="nx"&gt;accounts_pipeline_tokens&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_accounts_pipeline_tokens&lt;/span&gt;
    &lt;span class="nx"&gt;server_admin_password&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_server_admin_password&lt;/span&gt;
    &lt;span class="nx"&gt;domain&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_domain&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;base_domain&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;base_domain&lt;/span&gt;
  &lt;span class="nx"&gt;cluster_issuer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"letsencrypt-prod"&lt;/span&gt;

  &lt;span class="nx"&gt;depends_on&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cert-manager&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;monitoring&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ☸ The Cluster Module
&lt;/h3&gt;

&lt;p&gt;As you can see, the main &lt;code&gt;cluster&lt;/code&gt; module, which used to do all the work, is now only responsible for deploying the Kubernetes cluster and a basic Argo CD setup (in bootstrap mode).&lt;/p&gt;

&lt;h3&gt;
  
  
  🐙 The Argo CD Provider
&lt;/h3&gt;

&lt;p&gt;We then set up the Argo CD provider using outputs from the &lt;code&gt;cluster&lt;/code&gt; module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"argocd"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;server_addr&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"127.0.0.1:8080"&lt;/span&gt;
  &lt;span class="nx"&gt;auth_token&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_auth_token&lt;/span&gt;
  &lt;span class="nx"&gt;insecure&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;plain_text&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;port_forward&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;port_forward_with_namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argocd_namespace&lt;/span&gt;

  &lt;span class="nx"&gt;kubernetes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;host&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kubernetes_host&lt;/span&gt;
    &lt;span class="nx"&gt;cluster_ca_certificate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kubernetes_cluster_ca_certificate&lt;/span&gt;
    &lt;span class="nx"&gt;token&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kubernetes_token&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This provider is used in all (or at least most) other modules to deploy Argo CD applications, without going through a monolithic, centralized App of Apps.&lt;/p&gt;

&lt;h3&gt;
  
  
  🧩 Component Modules
&lt;/h3&gt;

&lt;p&gt;Each module provides a single component, using Terraform and, optionally, the Argo CD provider.&lt;/p&gt;

&lt;p&gt;There are common interfaces passed as variables between modules. For example, the &lt;code&gt;oidc&lt;/code&gt; variable can be provided as an output from various modules: &lt;a href="https://github.com/camptocamp/devops-stack-module-keycloak" rel="noopener noreferrer"&gt;Keycloak&lt;/a&gt;, &lt;a href="https://github.com/camptocamp/devops-stack-module-oidc-aws-cognito" rel="noopener noreferrer"&gt;AWS Cognito&lt;/a&gt;, etc. This &lt;code&gt;oidc&lt;/code&gt; variable can then be passed to other component modules to configure the component's authentication layer.&lt;/p&gt;
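&lt;p&gt;As a purely illustrative sketch (the consuming module and its variables are hypothetical, not taken from the actual repositories), passing the &lt;code&gt;oidc&lt;/code&gt; output between modules could look like this:&lt;/p&gt;

```hcl
# Illustrative only: module names and variables below are hypothetical.
module "keycloak" {
  source = "git::https://github.com/camptocamp/devops-stack-module-keycloak.git"
}

module "monitoring" {
  source = "git::https://github.com/camptocamp/devops-stack-module-kube-prometheus-stack.git"

  # The OIDC configuration produced by the identity provider module
  # configures the authentication layer of the consuming component.
  oidc = module.keycloak.oidc
}
```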

&lt;p&gt;In a similar fashion, the &lt;code&gt;ingress&lt;/code&gt; module, which so far relied solely on Traefik, can now be replaced by another Ingress Controller implementation.&lt;/p&gt;

&lt;h3&gt;
  
  
  🐙 The Argo CD Module
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/camptocamp/devops-stack-module-argocd" rel="noopener noreferrer"&gt;Argo CD module&lt;/a&gt; is a special one. A special entrypoint (called &lt;code&gt;boostrap&lt;/code&gt;) is used in the &lt;code&gt;cluster&lt;/code&gt; module to set up a basic Argo CD using Helm.&lt;/p&gt;

&lt;p&gt;The user is then responsible for instantiating the Argo CD module a second time at the end of the stack, in order for Argo CD to manage itself and configure itself properly, including monitoring and authentication, which cannot be configured before the corresponding components are deployed.&lt;/p&gt;

&lt;h2&gt;
  
  
  🥾 The Bootstrap Pattern
&lt;/h2&gt;

&lt;p&gt;Eventually, other modules might adopt the bootstrap pattern in order to solve chicken-and-egg dependencies.&lt;/p&gt;

&lt;p&gt;For example, &lt;a href="https://github.com/camptocamp/devops-stack-module-cert-manager" rel="noopener noreferrer"&gt;cert-manager&lt;/a&gt; requires Prometheus Operator CRDs in order to be monitored. However, the &lt;a href="https://github.com/camptocamp/devops-stack-module-kube-prometheus-stack" rel="noopener noreferrer"&gt;Kube Prometheus Stack&lt;/a&gt; might require valid certificates, thus depending on cert-manager being deployed. This could be solved by deploying a basic cert-manager instance (in a &lt;code&gt;bootstrap&lt;/code&gt; endpoint), and finalizing the deploying at the end of the stack.&lt;/p&gt;

&lt;h1&gt;
  
  
  🚀 To Infinity
&lt;/h1&gt;

&lt;p&gt;The modular DevOps Stack design is the current target for release 1.0.0.&lt;/p&gt;

&lt;p&gt;While this refactoring is still in beta state, it can be tested by using the &lt;a href="https://github.com/camptocamp/devops-stack/tree/v1" rel="noopener noreferrer"&gt;&lt;code&gt;v1&lt;/code&gt; branch&lt;/a&gt; of the project. You can also find examples in the &lt;a href="https://github.com/camptocamp/devops-stack/tree/v1/tests" rel="noopener noreferrer"&gt;&lt;code&gt;tests&lt;/code&gt; directory&lt;/a&gt; (though not all distributions are ported yet).&lt;/p&gt;

&lt;p&gt;Feedback is welcome, and you can contact us on the &lt;a href="https://kubernetes.slack.com/archives/C01SQ1TMBST" rel="noopener noreferrer"&gt;&lt;code&gt;#camptocamp-devops-stack&lt;/code&gt; channel&lt;/a&gt; of the Kubernetes Slack.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>argocd</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Integrate an Application with Prometheus Operator and Package with a Helm Chart</title>
      <dc:creator>Julien Acroute</dc:creator>
      <pubDate>Mon, 19 Jul 2021 08:59:28 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/integrate-an-application-with-prometheus-operator-and-package-with-a-helm-chart-1159</link>
      <guid>https://dev.to/camptocamp-ops/integrate-an-application-with-prometheus-operator-and-package-with-a-helm-chart-1159</guid>
      <description>&lt;p&gt;In the previous posts, we saw:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to implement metrics in applications&lt;/li&gt;
&lt;li&gt;how to run the monitoring stack locally&lt;/li&gt;
&lt;li&gt;how to test and debug metrics generated by a simple Python Flask application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post we will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;use Kubernetes Custom Resources to integrate our application with the Prometheus Operator&lt;/li&gt;
&lt;li&gt;define some alerts based on the metrics generated by the application&lt;/li&gt;
&lt;li&gt;deploy a custom dashboard in Grafana&lt;/li&gt;
&lt;li&gt;package everything in a Helm chart, including a Grafana dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Connect Prometheus to your Application
&lt;/h2&gt;

&lt;p&gt;Prometheus will retrieve metrics from &lt;em&gt;Pods&lt;/em&gt; with a &lt;code&gt;/metrics&lt;/code&gt; HTTP endpoint. If the Prometheus Operator is deployed in your Kubernetes Cluster, the discovery of the &lt;em&gt;Pods&lt;/em&gt; is done by deploying one of the following custom Kubernetes objects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#servicemonitor" rel="noopener noreferrer"&gt;&lt;em&gt;ServiceMonitor&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmonitor" rel="noopener noreferrer"&gt;&lt;em&gt;PodMonitor&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using &lt;em&gt;ServiceMonitor&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;When a &lt;em&gt;ServiceMonitor&lt;/em&gt; object is deployed, Prometheus will create one target per address defined in the &lt;em&gt;Endpoints&lt;/em&gt; object linked to the &lt;em&gt;Service&lt;/em&gt;, i.e. one target for every &lt;em&gt;Pod&lt;/em&gt; in a ready status that backs the &lt;em&gt;Service&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;For example, if you have the following &lt;em&gt;Service&lt;/em&gt; and &lt;em&gt;Deployment&lt;/em&gt; in your cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
      &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
        &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/camptocamp/course_docker_backend:python&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As defined in the &lt;em&gt;Deployment&lt;/em&gt;’s &lt;code&gt;spec.template.metadata.labels&lt;/code&gt; field, &lt;em&gt;Pods&lt;/em&gt; will have the following labels: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;component: backend&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;instance: app&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;name: containers-my-app&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;em&gt;Service&lt;/em&gt; has a selector that matches labels of the &lt;em&gt;Pod&lt;/em&gt;. Therefore the &lt;em&gt;Service&lt;/em&gt; will load balance traffic to &lt;em&gt;Pods&lt;/em&gt; deployed by the &lt;em&gt;Deployment&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ServiceMonitor&lt;/em&gt; objects also use a selector to discover which &lt;em&gt;Services&lt;/em&gt; need to be monitored. Prometheus will scrape metrics from every &lt;em&gt;Pod&lt;/em&gt; behind the selected &lt;em&gt;Services&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;For example, to retrieve metrics from our &lt;em&gt;Pods&lt;/em&gt;, we can deploy the following &lt;em&gt;ServiceMonitor&lt;/em&gt; object:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring.coreos.com/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceMonitor&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;
    &lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prom&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespaceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchNames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
      &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;
  &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Prometheus Operator will search for &lt;em&gt;Services&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;in the &lt;code&gt;my-app&lt;/code&gt; namespace,&lt;/li&gt;
&lt;li&gt;with the following labels:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;component: backend&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;instance: app&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;name: containers-my-app&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;with a port named: &lt;code&gt;http&lt;/code&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;It will then use the &lt;em&gt;Service&lt;/em&gt; selector to find &lt;em&gt;Pods&lt;/em&gt;. As a result, one target per &lt;em&gt;Pod&lt;/em&gt; will be created in the Prometheus configuration.&lt;/p&gt;
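&lt;p&gt;The selector chain above boils down to label-subset matching. A minimal Python sketch of the rule (illustrative, not Kubernetes code):&lt;/p&gt;

```python
def matches(selector: dict, labels: dict) -> bool:
    """A matchLabels selector matches an object when every selector
    key/value pair is present in the object's labels; extra labels
    on the object are ignored."""
    return all(labels.get(key) == value for key, value in selector.items())

selector = {"component": "backend", "instance": "app", "name": "containers-my-app"}
pod_labels = {"component": "backend", "instance": "app",
              "name": "containers-my-app", "pod-template-hash": "abc123"}

assert matches(selector, pod_labels)                     # extra labels do not matter
assert not matches(selector, {"component": "backend"})   # missing keys fail
```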

&lt;h3&gt;
  
  
  Using &lt;em&gt;PodMonitor&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;PodMonitor&lt;/em&gt; objects use a selector to find &lt;em&gt;Pods&lt;/em&gt; directly. No &lt;em&gt;Service&lt;/em&gt; needs to be deployed.&lt;/p&gt;

&lt;p&gt;For our &lt;em&gt;Pods&lt;/em&gt;, we can use the following &lt;em&gt;PodMonitor&lt;/em&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring.coreos.com/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PodMonitor&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;acomponent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;
    &lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prom&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespaceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchNames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
      &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;
  &lt;span class="na"&gt;podMetricsEndpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Prometheus operator will search for &lt;em&gt;Pods&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;in the &lt;code&gt;my-app&lt;/code&gt; namespace,&lt;/li&gt;
&lt;li&gt;with the following labels:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;component: backend&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;instance: app&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;name: containers-my-app&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;with a port named: &lt;code&gt;webapp&lt;/code&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;For each &lt;em&gt;Pod&lt;/em&gt;, a new target will be added to the Prometheus configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Alerts
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why use Alerts?
&lt;/h3&gt;

&lt;p&gt;Gathering and storing metrics is very useful to investigate when something goes wrong. But often one metric, or a combination of metrics, changes noticeably before a service becomes completely unusable.&lt;/p&gt;

&lt;p&gt;A common example of this is remaining free disk space. Fixing hard thresholds with arbitrary values on disk space is usually inefficient (you might actually end up with 95%, 100% and 101% thresholds). What needs to be monitored is actually the estimated time left until the disk is full, which can be obtained by running &lt;a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#predict_linear" rel="noopener noreferrer"&gt;a time regression on the metric&lt;/a&gt;.&lt;/p&gt;
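&lt;p&gt;The idea behind &lt;code&gt;predict_linear&lt;/code&gt; is a least-squares regression over the samples in the range, extrapolated into the future. A rough Python sketch of that computation (illustrative, not Prometheus internals):&lt;/p&gt;

```python
def predict_linear(samples: list[tuple[float, float]], horizon: float) -> float:
    """Fit value = slope * t + intercept by least squares over
    (timestamp, value) samples, then extrapolate `horizon` seconds
    past the last sample. Mirrors the idea of PromQL's predict_linear()."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in samples)
    var = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = cov / var
    intercept = mean_v - slope * mean_t
    last_t = samples[-1][0]
    return slope * (last_t + horizon) + intercept

# Free disk space (GiB) shrinking by 1 GiB per hour:
samples = [(0, 100.0), (3600, 99.0), (7200, 98.0)]
# Predicted free space 4 hours after the last sample: approximately 94.0
print(predict_linear(samples, 4 * 3600))
```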

&lt;p&gt;Some other examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For an online store, having no purchases during the afternoon is unusual; maybe an issue blocks users in the payment process.&lt;/li&gt;
&lt;li&gt;If the ratio of HTTP responses with code 500 suddenly increases, investigation is needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the purpose of alerts: when such events are detected, the system notifies the right person, allowing you to keep your eyes off dashboards.&lt;/p&gt;

&lt;p&gt;After investigating and finding the root cause, you should always ask yourself whether you can build an alert to detect such a case. If some metrics are missing, that is also an opportunity to improve observability by adding them.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to define Alerts?
&lt;/h3&gt;

&lt;p&gt;The Prometheus Operator allows the definition of alerts with a custom Kubernetes object: &lt;a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#prometheusrule" rel="noopener noreferrer"&gt;&lt;em&gt;PrometheusRule&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this custom resource, you can define multiple alerts: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring.coreos.com/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PrometheusRule&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-prometheus-stack&lt;/span&gt;
    &lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prom&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoProductViewSince1h&lt;/span&gt;
          &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(view_total - view_total offset 1h) &amp;lt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
          &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Critical&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the example above, only one alert is defined. &lt;/p&gt;

&lt;p&gt;Here is what usually needs to be defined for each alert:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;alert&lt;/code&gt;: the alert name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;expr&lt;/code&gt;: a PromQL expression that triggers an alert whenever it returns samples. This is why a threshold is used most of the time. With the &lt;code&gt;offset&lt;/code&gt; modifier we compute the number of views over the past hour; if the result is below the threshold &lt;code&gt;1&lt;/code&gt;, an alert is created and pushed to Alertmanager.&lt;/li&gt;
&lt;li&gt;Optional &lt;code&gt;labels&lt;/code&gt;: a set of labels, usually used for alert severity&lt;/li&gt;
&lt;li&gt;Optional &lt;code&gt;for&lt;/code&gt;: delays triggering the alert. The PromQL expression must keep returning samples for the duration given in &lt;code&gt;for&lt;/code&gt; before the alert fires.&lt;/li&gt;
&lt;/ul&gt;
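&lt;p&gt;The expression above compares the counter with its own value one hour earlier. A Python sketch of that evaluation (illustrative, not Prometheus internals):&lt;/p&gt;

```python
def no_views_last_hour(series: dict[int, float], now: int) -> bool:
    """Evaluate (view_total - view_total offset 1h) < 1: the alert
    condition holds when fewer than one view was counted over the
    last hour. `series` maps timestamps (seconds) to counter values."""
    return series[now] - series[now - 3600] < 1

# view_total is a monotonically increasing counter, sampled each hour:
view_total = {0: 10.0, 3600: 25.0, 7200: 25.0}
assert not no_views_last_hour(view_total, 3600)  # 15 views: no alert
assert no_views_last_hour(view_total, 7200)      # 0 views: alert condition holds
```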

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;There are many selectors involved in this process: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;selector on &lt;em&gt;ServiceMonitor&lt;/em&gt; to find &lt;em&gt;Services&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;selector on &lt;em&gt;Service&lt;/em&gt; to find &lt;em&gt;Pods&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;selector on &lt;em&gt;PodMonitor&lt;/em&gt; to find &lt;em&gt;Pods&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are also selectors to discover &lt;em&gt;ServiceMonitor&lt;/em&gt;, &lt;em&gt;PodMonitor&lt;/em&gt;, and &lt;em&gt;PrometheusRule&lt;/em&gt; objects. Those selectors are defined in the &lt;em&gt;Prometheus&lt;/em&gt; object using the following fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discovery of &lt;em&gt;ServiceMonitor&lt;/em&gt;:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;spec.serviceMonitorSelector&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spec.serviceMonitorNamespaceSelector&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Discovery of &lt;em&gt;PodMonitor&lt;/em&gt;:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;spec.podMonitorSelector&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spec.podMonitorNamespaceSelector&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Discovery of &lt;em&gt;PrometheusRule&lt;/em&gt;:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;spec.ruleSelector&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spec.ruleNamespaceSelector&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fprometheus-operator%2Fprometheus-operator%2Fraw%2Fmaster%2FDocumentation%2Fcustom-metrics-elements.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fprometheus-operator%2Fprometheus-operator%2Fraw%2Fmaster%2FDocumentation%2Fcustom-metrics-elements.png" alt="Selectors used by Prometheus Operator"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Check Selectors
&lt;/h3&gt;

&lt;p&gt;If the target is not discovered by Prometheus:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check that your &lt;em&gt;ServiceMonitor&lt;/em&gt; or &lt;em&gt;PodMonitor&lt;/em&gt; is deployed in a &lt;em&gt;Namespace&lt;/em&gt; that matches the namespace selector in the &lt;em&gt;Prometheus&lt;/em&gt; object.&lt;/li&gt;
&lt;li&gt;Check that labels on your &lt;em&gt;ServiceMonitor&lt;/em&gt; or &lt;em&gt;PodMonitor&lt;/em&gt; match the selector in the &lt;em&gt;Prometheus&lt;/em&gt; object.&lt;/li&gt;
&lt;li&gt;Check that the selector on your &lt;em&gt;ServiceMonitor&lt;/em&gt; or &lt;em&gt;PodMonitor&lt;/em&gt; matches labels defined in the &lt;em&gt;Service&lt;/em&gt; or &lt;em&gt;Pod&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Check that the &lt;em&gt;Service&lt;/em&gt; or &lt;em&gt;Pod&lt;/em&gt; is deployed in the &lt;em&gt;Namespace&lt;/em&gt; selected by the namespace selector defined in the &lt;em&gt;ServiceMonitor&lt;/em&gt; or &lt;em&gt;PodMonitor&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In order to check a label selector, you can use the &lt;code&gt;-l&lt;/code&gt; option of &lt;code&gt;kubectl&lt;/code&gt;. For example, to check the following selector in a &lt;em&gt;ServiceMonitor&lt;/em&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
      &lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containers-my-app&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;backend,instance&lt;span class="o"&gt;=&lt;/span&gt;app,name&lt;span class="o"&gt;=&lt;/span&gt;containers-my-app get service


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Check Port Name or Number
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Check that the port number or name matches a port defined in a &lt;em&gt;Service&lt;/em&gt; or a &lt;em&gt;Pod&lt;/em&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;ServiceMonitor&lt;/em&gt; references either an incoming port defined in the &lt;em&gt;Service&lt;/em&gt; or a &lt;em&gt;Pod&lt;/em&gt; port:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ServiceMonitor.spec.endpoints.port&lt;/code&gt; references the name of a &lt;em&gt;Service&lt;/em&gt; port: &lt;code&gt;Service.spec.ports.name&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ServiceMonitor.spec.endpoints.targetPort&lt;/code&gt; references a &lt;em&gt;Pod&lt;/em&gt; port: &lt;code&gt;Pod.spec.containers.ports.containerPort&lt;/code&gt; or &lt;code&gt;Pod.spec.containers.ports.name&lt;/code&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;PodMonitor&lt;/em&gt; references a port defined on the &lt;em&gt;Pod&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PodMonitor.spec.podMetricsEndpoints.port&lt;/code&gt; references the name of a &lt;em&gt;Pod&lt;/em&gt; port: &lt;code&gt;Pod.spec.containers.ports.name&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that Prometheus will only use &lt;em&gt;Pods&lt;/em&gt; with a Ready state.&lt;/p&gt;
&lt;h2&gt;
  
  
  Create Grafana Dashboards as ConfigMaps
&lt;/h2&gt;

&lt;p&gt;Grafana includes an auto discovery mechanism for dashboards. Any &lt;em&gt;ConfigMap&lt;/em&gt; with a label &lt;code&gt;grafana_dashboard=1&lt;/code&gt; is loaded into Grafana.&lt;/p&gt;

&lt;p&gt;The following &lt;em&gt;ConfigMap&lt;/em&gt; will create a minimal dashboard in Grafana. Note that this &lt;em&gt;ConfigMap&lt;/em&gt; needs to be deployed in the same &lt;em&gt;Namespace&lt;/em&gt; as Grafana. &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-dashboard&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;grafana_dashboard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dashboard.json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;{ "title": "Product Views",&lt;/span&gt;
      &lt;span class="s"&gt;"time": { "from": "now-6h", "to": "now" },&lt;/span&gt;
      &lt;span class="s"&gt;"editable": false,&lt;/span&gt;
      &lt;span class="s"&gt;"panels": [ {&lt;/span&gt;
          &lt;span class="s"&gt;"gridPos": { "h": 9, "w": 12, "x": 0, "y": 0 },&lt;/span&gt;
          &lt;span class="s"&gt;"id": 2,&lt;/span&gt;
          &lt;span class="s"&gt;"targets": [ {&lt;/span&gt;
              &lt;span class="s"&gt;"exemplar": true,&lt;/span&gt;
              &lt;span class="s"&gt;"expr": "rate(view_total[2m])",&lt;/span&gt;
              &lt;span class="s"&gt;"interval": "",&lt;/span&gt;
              &lt;span class="s"&gt;"legendFormat": "{{product}}",&lt;/span&gt;
              &lt;span class="s"&gt;"refId": "A"&lt;/span&gt;
            &lt;span class="s"&gt;} ],&lt;/span&gt;
          &lt;span class="s"&gt;"title": "Product View",&lt;/span&gt;
          &lt;span class="s"&gt;"type": "timeseries"&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;
      &lt;span class="s"&gt;]&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Package Monitoring Objects with Applications using Helm Charts
&lt;/h2&gt;

&lt;p&gt;Including monitoring objects in Application Helm Charts is a good way to maintain the observability layer of an application. The monitoring components can be versioned with application packaging. Also the deployment of monitoring can follow the same workflow as the application.&lt;/p&gt;

&lt;p&gt;I will not explain how to package an application, but I’ll demonstrate how to include the following elements: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dashboard: &lt;em&gt;ConfigMap&lt;/em&gt; with dashboard code&lt;/li&gt;
&lt;li&gt;Alerts: &lt;em&gt;PrometheusRule&lt;/em&gt; with alerts definition&lt;/li&gt;
&lt;li&gt;Metrics endpoint: &lt;em&gt;PodMonitor&lt;/em&gt; or &lt;em&gt;ServiceMonitor&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
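&lt;p&gt;As a sketch, a &lt;em&gt;PrometheusRule&lt;/em&gt; carrying an alert definition could look like this (the alert name, expression and threshold are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-app
spec:
  groups:
    - name: my-app.rules
      rules:
        - alert: NoProductViews        # illustrative alert
          expr: rate(view_total[2m]) == 0
          for: 10m
          labels:
            severity: warning
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;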

&lt;p&gt;In the chart’s &lt;code&gt;values.yaml&lt;/code&gt;, add a new section for monitoring:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;monitoring&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;alerts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;dashboard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;monitoring.alerts&lt;/code&gt; value controls the deployment of the &lt;em&gt;PrometheusRule&lt;/em&gt; object, and &lt;code&gt;monitoring.dashboard&lt;/code&gt; controls the deployment of the &lt;em&gt;ConfigMap&lt;/em&gt; for the dashboard.&lt;/p&gt;

&lt;p&gt;For the &lt;em&gt;PodMonitor&lt;/em&gt; or &lt;em&gt;ServiceMonitor&lt;/em&gt; objects, we can check if the Prometheus Operator is installed using the &lt;a href="https://helm.sh/docs/chart_template_guide/builtin_objects/" rel="noopener noreferrer"&gt;.Capabilities.APIVersions.Has&lt;/a&gt; function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- if .Capabilities.APIVersions.Has "servicemonitor.monitoring.coreos.com/v1"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring.coreos.com/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceMonitor&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Additionally, for alerts and dashboards, we can check the values set on the Helm release:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- if .Capabilities.APIVersions.Has "prometheusrule.monitoring.coreos.com/v1"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- if .Values.monitoring.alerts&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring.coreos.com/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PrometheusRule&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A common workflow to maintain a dashboard is to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;edit dashboards in the Grafana Web UI&lt;/li&gt;
&lt;li&gt;copy the JSON model from Web UI&lt;/li&gt;
&lt;li&gt;paste the JSON to a file in the Helm Chart&lt;/li&gt;
&lt;li&gt;commit and push modifications&lt;/li&gt;
&lt;/ul&gt;
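&lt;p&gt;As an alternative to copying from the Web UI, the JSON model can also be fetched from the Grafana HTTP API (the URL, token and dashboard uid below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# Fetch the JSON model of a dashboard by uid (adapt the placeholders)
curl -s -H "Authorization: Bearer $GRAFANA_TOKEN" \
  https://grafana.example.com/api/dashboards/uid/my-dashboard-uid \
  | jq '.dashboard' &gt; dashboard.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;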

&lt;p&gt;⚠ If the JSON representation of the dashboard is stored in the &lt;em&gt;ConfigMap&lt;/em&gt; code, you will have to indent the content properly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-dashboard&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;grafana_dashboard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dashboard.json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;{ "title": "Product Views",&lt;/span&gt;
      &lt;span class="s"&gt;"time": { "from": "now-6h", "to": "now" },&lt;/span&gt;
      &lt;span class="s"&gt;"editable": false,&lt;/span&gt;
      &lt;span class="s"&gt;"panels": [ {&lt;/span&gt;
          &lt;span class="s"&gt;"gridPos": { "h": 9, "w": 12, "x": 0, "y": 0 },&lt;/span&gt;
          &lt;span class="s"&gt;"id": 2,&lt;/span&gt;
          &lt;span class="s"&gt;"targets": [ {&lt;/span&gt;
              &lt;span class="s"&gt;"exemplar": true,&lt;/span&gt;
              &lt;span class="s"&gt;"expr": "rate(view_total[2m])",&lt;/span&gt;
              &lt;span class="s"&gt;"interval": "",&lt;/span&gt;
              &lt;span class="s"&gt;"legendFormat": "{{product}}",&lt;/span&gt;
              &lt;span class="s"&gt;"refId": "A"&lt;/span&gt;
            &lt;span class="s"&gt;} ],&lt;/span&gt;
          &lt;span class="s"&gt;"title": "Product View",&lt;/span&gt;
          &lt;span class="s"&gt;"type": "timeseries"&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;
      &lt;span class="s"&gt;]&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It is generally easier to store the dashboard code in a dedicated file and then load the contents in the &lt;em&gt;ConfigMap&lt;/em&gt; with some Helm templating functions:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PrometheusRule"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- if .Values.monitoring.dashboard&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dashboard.json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;{{ .Files.Get "dashboard.json" | trim | nindent 4 }}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Deploying monitoring stacks (Prometheus, Grafana, Alertmanager, Elasticsearch, Loki, …) brings much more observability to a project, but at the cost of extra resources. If developers do not use these features, those resources are wasted and the project remains poorly observable, since only system and perhaps HTTP metrics are collected. Adding application-specific and even business metrics lets you build beautiful dashboards, full of colors and graphs, that are a lot of fun to present during otherwise boring review meetings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ffu21rl2xyylx5778gb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ffu21rl2xyylx5778gb.png" alt="grafana"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>helm</category>
      <category>kubernetes</category>
      <category>alerting</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Enable OpenShift login on ArgoCD from GitOps Operator</title>
      <dc:creator>Philippe Bürgisser</dc:creator>
      <pubDate>Thu, 20 May 2021 12:48:47 +0000</pubDate>
      <link>https://dev.to/camptocamp-ops/enable-openshift-login-on-argocd-from-gitops-2h9a</link>
      <guid>https://dev.to/camptocamp-ops/enable-openshift-login-on-argocd-from-gitops-2h9a</guid>
      <description>&lt;p&gt;Since few weeks now, the operator Red Hat OpenShift GitOps became GA and embbed tools like Tekton and ArgoCD.&lt;/p&gt;

&lt;p&gt;When the operator is deployed, it provisions a vanilla ArgoCD that lacks the integrated OpenShift login. In this post, we review the steps to enable it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Deploy and fine tune the Red Hat OpenShift GitOps
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;Follow the &lt;a href="https://docs.openshift.com/container-platform/4.7/cicd/gitops/installing-openshift-gitops.html"&gt;official documentation&lt;/a&gt; on the installation of the operator&lt;/li&gt;
&lt;li&gt;Once the operator is deployed, go to the menu &lt;strong&gt;Operators&lt;/strong&gt;&amp;gt;&lt;strong&gt;Installed Operators&lt;/strong&gt; and click on the freshly deployed &lt;strong&gt;Red Hat OpenShift GitOps&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Using the dropdown &lt;strong&gt;Actions&lt;/strong&gt; on top right of the page, choose &lt;strong&gt;Edit Subscription&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;In the YAML code, under the &lt;strong&gt;spec&lt;/strong&gt; level, enable the Dex feature to allow external authentication and click &lt;strong&gt;Save&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DISABLE_DEX&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;false'&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc patch subscription openshift-gitops-operator &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-operators &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;merge &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{"spec":{"config":{"env":[{"name":"DISABLE_DEX","Value":"false"}]}}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Configure ArgoCD to allow OpenShift authentication
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;Change the project to &lt;strong&gt;openshift-gitops&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Go to the menu &lt;strong&gt;Operators&lt;/strong&gt;&amp;gt;&lt;strong&gt;Installed Operators&lt;/strong&gt; and click on &lt;strong&gt;Red Hat OpenShift GitOps&lt;/strong&gt;, select tab &lt;strong&gt;Argo CD&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;On the ArgoCD instance list, click on the three dots at the very left of the &lt;strong&gt;openshift-gitops&lt;/strong&gt; and select &lt;strong&gt;Edit ArgoCD&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;On the YAML code, under the &lt;strong&gt;spec&lt;/strong&gt; level, update the DEX and RBAC section to match the following
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;openShiftOAuth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;rbac&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;defaultPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;role:readonly'&lt;/span&gt;
    &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;g, system:cluster-admins, role:admin&lt;/span&gt;
    &lt;span class="na"&gt;scopes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[groups]'&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc patch argocd openshift-gitops &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-gitops &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;merge &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{"spec":{"dex":{"openShiftOAuth":true},"rbac":{"defaultPolicy":"role:readonly","policy":"g, system:cluster-admins, role:admin","scopes":"[groups]"}}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Monitor the pods being restarted to apply the configuration and test your login&lt;/li&gt;
&lt;/ol&gt;
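&lt;p&gt;The restart can be followed with, for instance:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# Watch the pods of the ArgoCD instance until they are Running again
oc get pods -n openshift-gitops -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;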

</description>
      <category>gitops</category>
      <category>openshift</category>
      <category>argocd</category>
      <category>authentication</category>
    </item>
  </channel>
</rss>
