<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Caio Campos Borges Rosa</title>
    <description>The latest articles on DEV Community by Caio Campos Borges Rosa (@caiocampoos).</description>
    <link>https://dev.to/caiocampoos</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1073434%2F4030bb2c-d4e7-4e81-bd8d-c285b4ad7283.png</url>
      <title>DEV Community: Caio Campos Borges Rosa</title>
      <link>https://dev.to/caiocampoos</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/caiocampoos"/>
    <language>en</language>
    <item>
      <title>How to automate provisioning in Proxmox Using Cloud images</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Sun, 16 Jun 2024 23:24:56 +0000</pubDate>
      <link>https://dev.to/caiocampoos/how-to-automate-provisioning-in-proxmox-using-cloud-images-7do</link>
      <guid>https://dev.to/caiocampoos/how-to-automate-provisioning-in-proxmox-using-cloud-images-7do</guid>
      <description>&lt;p&gt;There are many ways to create virtual machines. In this article, we aim to leverage cloud images as our primary method for automation while provisioning on Proxmox.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Images
&lt;/h2&gt;

&lt;p&gt;Cloud images are pre-configured disk images designed for virtualized environments, such as cloud infrastructure or virtual machine hosts. These images contain a minimal operating system installation and are optimized for quick deployment and scalability. They also support quick configuration tools; we will use cloud-init in our example.&lt;/p&gt;
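&lt;p&gt;For illustration, a minimal cloud-init user-data file looks like the sketch below. All values here are placeholders; in this article we won't write this file by hand, since Proxmox generates the cloud-init configuration for us via &lt;code&gt;qm set&lt;/code&gt;:&lt;/p&gt;

```yaml
#cloud-config
# hypothetical example values; Proxmox will generate the real config
user: cap
password: changeme
ssh_authorized_keys:
  - ssh-ed25519 AAAA-placeholder-key user@host
package_update: true
packages:
  - qemu-guest-agent
```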

&lt;h2&gt;
  
  
  Script
&lt;/h2&gt;

&lt;p&gt;You'll need a Proxmox Virtual Environment node with SSH access. In our script, we will perform some actions to download and prepare a cloud image.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Variables and packages
First, we set some variables and install a few packages that the script will use.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;### variables&lt;/span&gt;
&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;999
&lt;span class="nv"&gt;TEMPLATE_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'ubuntu-2204-template'&lt;/span&gt;
&lt;span class="nv"&gt;UBUNTU_IMAGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'ubuntu-22.04-server-cloudimg-amd64-disk-kvm.img'&lt;/span&gt;
&lt;span class="nv"&gt;UBUNTU_IMAGE_QCOW2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'ubuntu-22.04.qcow2'&lt;/span&gt;
&lt;span class="nv"&gt;USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'cap'&lt;/span&gt;
&lt;span class="nv"&gt;PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'12345'&lt;/span&gt;
&lt;span class="nv"&gt;MEMORY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'4096'&lt;/span&gt;
&lt;span class="nv"&gt;CPUS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'2'&lt;/span&gt;

apt update &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt &lt;span class="nb"&gt;install &lt;/span&gt;nano wget curl libguestfs-tools &lt;span class="nt"&gt;-y&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Idempotency
A couple of commands make the script idempotent. We want to iterate quickly, so we delete the existing template whenever something changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# remove old image
rm -rfv ${UBUNTU_IMAGE}

# remove old template container - WILL DESTROY COMPLETELY
qm destroy ${VM_TEMPLATE_ID} --destroy-unreferenced-disks 1 --purge 1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Download the image: &lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# download new image
wget http://cloud-images.ubuntu.com/releases/22.04/release/${UBUNTU_IMAGE}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Customize the image
Add the QEMU guest agent to the image so we don't need to install it later. You can also use virt-customize here to pre-install other tools on the image.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

virt-customize &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;UBUNTU_IMAGE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--install&lt;/span&gt; qemu-guest-agent


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The QEMU guest agent is a daemon that runs inside the guest and lets Proxmox gather information about the VM. See the &lt;a href="https://pve.proxmox.com/wiki/Qemu-guest-agent" rel="noopener noreferrer"&gt;PVE docs&lt;/a&gt; for more details.&lt;/p&gt;
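&lt;p&gt;As a quick sketch, once a VM built from this image is up, the agent can be queried straight from the Proxmox node (the VM ID 100 below is a placeholder):&lt;/p&gt;

```shell
# ping the guest agent to confirm it is running inside the VM
qm agent 100 ping

# ask the agent for the guest's network interfaces and IP addresses
qm agent 100 network-get-interfaces
```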

&lt;ul&gt;
&lt;li&gt;Change the image extension
Rename the image file from .img to .qcow2. If you don't rename it, QEMU will refuse the image on some versions of Proxmox.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;mv&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;UBUNTU_IMAGE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;UBUNTU_IMAGE_QCOW2&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Resize&lt;br&gt;
Now we resize the image to a generic size. The exact value doesn't matter much here; each clone will be reconfigured from the template later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the VM&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
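&lt;p&gt;The resize step is a single command; the 32G below is an arbitrary placeholder, since each clone's disk gets resized again later:&lt;/p&gt;

```shell
# grow the qcow2 image to a generic size (32G is a placeholder)
qemu-img resize ${UBUNTU_IMAGE_QCOW2} 32G
```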

&lt;p&gt;Again, the resource values here are not critical; set generic values for memory and CPU. We also attach a network interface. Another key configuration is the storage controller: we use virtio-scsi, as it covers the most use cases. To learn more about paravirtualized devices and emulated storage controllers in the virtio family, see &lt;a href="https://docs.oasis-open.org/virtio/virtio/v1.1/virtio-v1.1.html" rel="noopener noreferrer"&gt;Virtio&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

qm create &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--memory&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MEMORY&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--cores&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CPUS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--net0&lt;/span&gt; virtio,bridge&lt;span class="o"&gt;=&lt;/span&gt;vmbr0 &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TEMPLATE_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--scsihw&lt;/span&gt; virtio-scsi-pci


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After creating the VM, we need to configure it and convert it to a template so it can be cloned.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure the VM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will perform the following configuration steps, in order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Import the image to a disk and attach it to the VM&lt;/li&gt;
&lt;li&gt;Attach a cloud-init drive to the VM via the IDE interface&lt;/li&gt;
&lt;li&gt;Configure VGA output for the console on serial0&lt;/li&gt;
&lt;li&gt;Set up DHCP on the network interface so the VM gets a connection&lt;/li&gt;
&lt;li&gt;Enable the QEMU guest agent&lt;/li&gt;
&lt;li&gt;Set the default user and password&lt;/li&gt;
&lt;li&gt;Add SSH keys for SSH access; the script uses the SSH keys of the host on which you run it&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--scsi0&lt;/span&gt; local-lvm:0,import-from&lt;span class="o"&gt;=&lt;/span&gt;/root/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;UBUNTU_IMAGE_QCOW2&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--ide2&lt;/span&gt; local-lvm:cloudinit
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--boot&lt;/span&gt; &lt;span class="nv"&gt;order&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;scsi0
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--serial0&lt;/span&gt; socket &lt;span class="nt"&gt;--vga&lt;/span&gt; serial0
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--ipconfig0&lt;/span&gt; &lt;span class="nv"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dhcp
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--agent&lt;/span&gt; &lt;span class="nv"&gt;enabled&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-ciuser&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;USERNAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-cipassword&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PASSWORD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--sshkeys&lt;/span&gt; ~/.ssh/authorized_keys


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Converting to template&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the last step, we convert this VM to a template. A template is a VM with a frozen state that serves as a base for many machines. It provides a stable starting point and makes provisioning automation easy.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

qm template &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The complete script can be found in my homelab repository on &lt;a href="https://github.com/caiocampoos/homelab-k8s" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To run the script remotely, pipe it to the node over SSH:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat &lt;/span&gt;ubuntu-22.04.sh | ssh root@your-PVE-ip /bin/bash


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The result should be a template on your Proxmox node:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d4ul5s13832l02af2hh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d4ul5s13832l02af2hh.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can create any number of copies. We can use &lt;code&gt;qm set&lt;/code&gt; to configure any setting on the clone that should differ from the template.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# clone&lt;/span&gt;

qm clone &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TEMPLATE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# configure the vm  &lt;/span&gt;
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--scsi1&lt;/span&gt; local-lvm:40
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--memory&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MEMORY&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--cores&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CPUS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--boot&lt;/span&gt; &lt;span class="nv"&gt;order&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;scsi0
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--serial0&lt;/span&gt; socket &lt;span class="nt"&gt;--vga&lt;/span&gt; serial0
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--ipconfig0&lt;/span&gt; &lt;span class="nv"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dhcp
qm &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--agent&lt;/span&gt; &lt;span class="nv"&gt;enabled&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1

&lt;span class="c"&gt;# start&lt;/span&gt;
qm start &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running it over SSH again should create a new VM based on the template:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat &lt;/span&gt;clone-templates.sh | ssh root@your-PVE-ip /bin/bash


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It's as fast as it gets; in less than 30 seconds you have a VM running:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5c3igfvkwcfvp353nxmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5c3igfvkwcfvp353nxmr.png" alt="VM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we know how to create VMs from templates, we can explore more options for remote code execution during provisioning.&lt;/p&gt;

&lt;p&gt;Socials: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/caiocbrr" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/caio-campos-borges-rosa-392588143/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/caiocampoos" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@arnosenoner?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Arno Senoner&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/blue-and-brown-metal-bridge-yqu6tJkSQ_k?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>proxmox</category>
      <category>cloud</category>
      <category>virtualization</category>
      <category>cloudinit</category>
    </item>
    <item>
      <title>How to automate tests with Tekton</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Mon, 20 May 2024 11:44:07 +0000</pubDate>
      <link>https://dev.to/woovi/how-to-automate-tests-with-tekton-3caj</link>
      <guid>https://dev.to/woovi/how-to-automate-tests-with-tekton-3caj</guid>
      <description>&lt;p&gt;Following our series on CI/CD cloud native, we will go on setting up a simple Tekton pipeline to automate testing, using kubernetes. We should cover the simple flow of updating code and testing code. We will be using GitHub webhook events to trigger our pipeline.&lt;/p&gt;

&lt;p&gt;First we need to install the Tekton Operator, so we can keep the configuration in one file and enable all the features we need in a declarative way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Tekton Operator
&lt;/h3&gt;

&lt;p&gt;First, install the Operator Lifecycle Manager, a tool to manage operators running in your cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

curl &lt;span class="nt"&gt;-sL&lt;/span&gt; https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.27.0/install.sh | bash &lt;span class="nt"&gt;-s&lt;/span&gt; v0.27.0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install the operator:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;


kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; https://operatorhub.io/install/tektoncd-operator.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Wait until the operator is up and running. You can check the status by running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get csv &lt;span class="nt"&gt;-n&lt;/span&gt; operators



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Configure Tekton
&lt;/h3&gt;

&lt;p&gt;We will need to configure Tekton. The operator is configured through a TektonConfig custom resource:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;operator.tekton.dev/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TektonConfig&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton-pipelines&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;targetNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton-pipelines&lt;/span&gt;
  &lt;span class="na"&gt;profile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
  &lt;span class="na"&gt;chain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;pipeline&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;await-sidecar-readiness&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;disable-affinity-assistant&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;disable-creds-init&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;enable-api-fields&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alpha&lt;/span&gt;
    &lt;span class="na"&gt;enable-bundles-resolver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;enable-cluster-resolver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;enable-custom-tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;enable-git-resolver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;performance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;disable-ha&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;buckets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;threads-per-controller&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;32&lt;/span&gt;
      &lt;span class="na"&gt;kube-api-qps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100.0&lt;/span&gt;
      &lt;span class="na"&gt;kube-api-burst&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
  &lt;span class="na"&gt;pruner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;taskrun&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pipelinerun&lt;/span&gt;
    &lt;span class="na"&gt;keep&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
    &lt;span class="c1"&gt;# keep-since: 1440&lt;/span&gt;
    &lt;span class="c1"&gt;# NOTE: you can use either keep or keep-since, not both&lt;/span&gt;
    &lt;span class="na"&gt;prune-per-resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;hub&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;enable-devconsole-integration&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
    &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;dashboard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;readonly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We will walk through some items in the config file, but you can find the full reference in the &lt;a href="https://tekton.dev/docs/operator/tektonconfig/" rel="noopener noreferrer"&gt;TektonConfig&lt;/a&gt; documentation.&lt;/p&gt;

&lt;p&gt;We set the operator's profile to all, which gives us access to every Tekton component. If you need less and want a slimmer setup, the available profiles are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;all&lt;/strong&gt;: This profile will install all components&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;basic&lt;/strong&gt;: This profile will install only TektonPipeline, TektonTrigger, and TektonChain components&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;lite&lt;/strong&gt;: This profile will install only TektonPipeline components&lt;/p&gt;

&lt;p&gt;We are disabling the affinity assistant, a feature that coschedules the pods of a PipelineRun that share the same persistent volume onto the same node; it is being deprecated in favor of the coschedule feature flag. We are also disabling await-sidecar-readiness, as we won't be using any sidecars.&lt;/p&gt;

&lt;p&gt;The pruner is configured to run at the beginning of each hour. It deletes old TaskRuns and PipelineRuns, freeing up resources.&lt;/p&gt;

&lt;p&gt;Setting the dashboard's readonly option to false allows actions on PipelineRuns and TaskRuns to be taken directly from the dashboard.&lt;/p&gt;
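&lt;p&gt;To reach the dashboard from your machine, one option is a simple port-forward; the service and namespace names below assume the default operator install and may differ in your cluster:&lt;/p&gt;

```shell
# forward the Tekton dashboard to localhost:9097
kubectl port-forward -n tekton-pipelines service/tekton-dashboard 9097:9097
```

&lt;p&gt;Then open http://localhost:9097 in your browser.&lt;/p&gt;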

&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;Let's visualize what we want to perform in our CI/CD flow.&lt;br&gt;
First, we will make changes to our codebase. Those changes will be pushed to a GitHub repository. The push event on the GitHub webhook will trigger our pipeline to run tests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8idd2v7gv8h02aocohqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8idd2v7gv8h02aocohqc.png" alt="Overview Tekton"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Handling GitHub Events
&lt;/h3&gt;

&lt;p&gt;For external events, we will set up an event listener. Tekton has a custom resource for this, EventListener, which creates a Kubernetes Service that receives GitHub &lt;a href="https://docs.github.com/en/webhooks/about-webhooks" rel="noopener noreferrer"&gt;webhook&lt;/a&gt; events. We can then write a filter on the event listener to match events and fire our triggers.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;triggers.tekton.dev/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EventListener&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton-github-pr-{{ .Values.projectName }}&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service-account-{{ .Values.projectName }}&lt;/span&gt;
  &lt;span class="na"&gt;triggers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pr-trigger&lt;/span&gt;
      &lt;span class="na"&gt;interceptors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cel"&lt;/span&gt;
            &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterInterceptor&lt;/span&gt;
            &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;triggers.tekton.dev&lt;/span&gt;
          &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;filter"&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
                &lt;span class="s"&gt;header.match('x-github-event', 'merge')&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;overlays"&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;author&lt;/span&gt;
                  &lt;span class="na"&gt;expression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;body.pusher.name.lowerAscii().replace('/','-').replace('.', '-').replace('_', '-')&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pr-ref&lt;/span&gt;
                  &lt;span class="na"&gt;expression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;body.ref.lowerAscii().replace("/", '-')&lt;/span&gt;
      &lt;span class="na"&gt;bindings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tb-github-pr-trigger-binding-{{ .Values.projectName }}&lt;/span&gt;
      &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tt-github-pr-trigger-template-{{ .Values.projectName }}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice this resource requires a &lt;code&gt;serviceAccountName&lt;/code&gt;. The EventListener creates resources in our cluster, and the service account ensures it has the correct roles and permissions to do so.&lt;/p&gt;

&lt;p&gt;We use a CEL ClusterInterceptor, another custom resource, so we can write filter expressions using &lt;a href="https://github.com/google/cel-spec" rel="noopener noreferrer"&gt;CEL&lt;/a&gt;. This is how we evaluate the webhook request and filter triggers for many kinds of pipelines.&lt;/p&gt;
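&lt;p&gt;For reference, a CEL filter can combine header and body checks in a single expression. The sketch below is purely illustrative and assumes GitHub's &lt;code&gt;pull_request&lt;/code&gt; event name and payload fields, which are not part of the config above:&lt;/p&gt;

```yaml
# Hypothetical filter: react only to pull requests that were opened or
# updated and that target the main branch.
- name: "filter"
  value: >
    header.match('x-github-event', 'pull_request') &amp;&amp;
    body.action in ['opened', 'synchronize'] &amp;&amp;
    body.pull_request.base.ref == 'main'
```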

&lt;p&gt;Here we also use overlays to create variables based on an expression that can be passed down to our pipelines. In this case, we want the author and the ref so we can customize the pipeline display name.&lt;/p&gt;

&lt;p&gt;Bindings and templates are two other resources we will reference in the EventListener.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bindings
&lt;/h3&gt;

&lt;p&gt;TriggerBindings are another way to bind objects from the webhook request to variables we can use to control pipeline flow.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;triggers.tekton.dev/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TriggerBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tb-github-pr-trigger-binding-{{ .Values.projectName }}&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;revision&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(body.after)&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repo-url&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(body.repository.ssh_url)&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;author&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(extensions.author)&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pr-ref&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(extensions.pr-ref)&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repo-full-name&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(body.repository.full_name)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the TriggerBinding, we pick the information we want out of the request captured by the EventListener and assign it to variables we can pass down to the pipeline. Variables created with overlays in the EventListener must be referenced through the &lt;code&gt;extensions&lt;/code&gt; object; the request body and headers can be referenced directly.&lt;/p&gt;
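&lt;p&gt;For context, the bindings above map one-to-one to fields of a GitHub push webhook payload. A trimmed, purely illustrative excerpt (not a real delivery; names are placeholders):&lt;/p&gt;

```json
{
  "ref": "refs/heads/feature-branch",
  "after": "d6fde92930d4715a2b49857d24b940956b26d2d3",
  "pusher": { "name": "octocat" },
  "repository": {
    "full_name": "octocat/example-repo",
    "ssh_url": "git@github.com:octocat/example-repo.git"
  }
}
```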

&lt;h3&gt;
  
  
  Triggering the Pipeline
&lt;/h3&gt;

&lt;p&gt;TriggerTemplate is the resource that pieces together events with the variables we set up on the TriggerBinding. Here we will associate variables as params to the pipelines, creating a PipelineRun, which is the actual automation being executed as a pod in Kubernetes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;triggers.tekton.dev/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TriggerTemplate&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tt-github-pr-trigger-template-{{ .Values.projectName }}&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;revision&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repo-url&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;author&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repo-full-name&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pr-ref&lt;/span&gt;
  &lt;span class="na"&gt;resourcetemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton.dev/v1beta1&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PipelineRun&lt;/span&gt;
      &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;generateName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pr-$(tt.params.pr-ref)-$(tt.params.author)-&lt;/span&gt;
      &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service-account-{{ .Values.projectName }}&lt;/span&gt;
        &lt;span class="na"&gt;pipelineRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.projectName&lt;/span&gt;&lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;-pipeline&lt;/span&gt;
        &lt;span class="na"&gt;workspaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
            &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pvc-cache-{{ .Values.projectName }}&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-data&lt;/span&gt;
            &lt;span class="na"&gt;volumeClaimTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
                &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2Gi&lt;/span&gt;
        &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repo-url&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(tt.params.repo-url)&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;revision&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(tt.params.revision)&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repo-full-name&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(tt.params.repo-full-name)&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ref&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(tt.params.ref)&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-staging&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(tt.params.deploy-staging)&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-all&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(tt.params.test-all)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the TriggerTemplate, we receive the params from the TriggerBinding and pass them to the PipelineRun.&lt;br&gt;
We also reference the Pipeline we want to run via &lt;code&gt;pipelineRef&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We also define our workspaces. In our example, we pass two. The &lt;code&gt;shared-data&lt;/code&gt; workspace uses a volumeClaimTemplate to dynamically provision storage with the requested resources; that storage is discarded at the end of the PipelineRun. The &lt;code&gt;cache&lt;/code&gt; workspace uses a persistentVolumeClaim, which references a persistent volume claim you need to define beforehand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pipeline
&lt;/h3&gt;

&lt;p&gt;The pipeline is the orchestrated flow of tasks we want to run. It will have the parameters we defined in the TriggerTemplate and the logic we want to execute applied to tasks. For this example, we will look at a simple test pipeline that clones a repository and runs the test script.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton.dev/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pipeline&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.projectName&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;-pipeline-tests&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;workspaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-data&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
  &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repo-url&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;revision&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fetch-source&lt;/span&gt;
      &lt;span class="na"&gt;taskRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;resolver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster&lt;/span&gt;
        &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;task&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;name&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;task-git-clone&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;namespace&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton-pipelines&lt;/span&gt;
      &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;url&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(params.repo-url)&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;revision&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(params.revision)&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;depth&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="na"&gt;workspaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;output&lt;/span&gt;
          &lt;span class="na"&gt;workspace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-data&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;install-deps&lt;/span&gt;
      &lt;span class="na"&gt;runAfter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;fetch-source"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;taskRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;resolver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster&lt;/span&gt;
        &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;task&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;name&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;task-install-deps&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;namespace&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton-pipelines&lt;/span&gt;
      &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;install-script&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yarn install --prefer-offline --ignore-engines&lt;/span&gt;
      &lt;span class="na"&gt;workspaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;source&lt;/span&gt;
          &lt;span class="na"&gt;workspace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-data&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
          &lt;span class="na"&gt;workspace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-task&lt;/span&gt;
      &lt;span class="s"&gt;runAfter&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;install-deps"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;taskRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;resolver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster&lt;/span&gt;
        &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;task&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;name&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;task-test&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;namespace&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton-pipelines&lt;/span&gt;
      &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;diff&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(tasks.fetch-source.results.diff)&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;install-deps&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yarn install&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;run-test&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yarn test&lt;/span&gt;
      &lt;span class="na"&gt;workspaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;source&lt;/span&gt;
          &lt;span class="na"&gt;workspace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-data&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
          &lt;span class="na"&gt;workspace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here we organize the logic of our pipeline.&lt;/p&gt;

&lt;p&gt;To help with organization and reuse of tasks, which are the most atomic resources of a Tekton pipeline, we use a cluster resolver. This way, one task can be shared across all namespaces, eliminating the need to duplicate tasks common to multiple pipelines. The cluster resolver takes the namespace the task lives in and the name of the task.&lt;/p&gt;

&lt;p&gt;The parameters we define in the TriggerTemplate and pass to the pipeline run are defined in the pipeline and passed to the tasks.&lt;/p&gt;

&lt;p&gt;Another great feature of the Tekton pipeline is the TaskResult. Notice we use a parameter in the test-task that is inherited from a task result. This result is defined in the task fetch-source, which is the task we will create to clone a remote repository. The parameter diff is a list of files that were modified in the PR that triggered this pipeline.&lt;/p&gt;

&lt;p&gt;The workspaces we define in the TriggerTemplate are also assigned to the tasks. This ensures all pods created for all tasks in the pipeline execute our automations in the same storage space. That way, we can clone the remote repository at the beginning of the pipeline and perform many tasks with the same files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating our tasks
&lt;/h3&gt;

&lt;p&gt;Now we define the tasks, which are the actual work to be done in our pipeline. In Tekton, each task is a Pod in Kubernetes. It is composed of several steps, each step being a container inside this task pod.&lt;/p&gt;

&lt;h4&gt;
  
  
  Fetch Source Task
&lt;/h4&gt;

&lt;p&gt;This task fetches a remote repository. We use a depth of 2, meaning we fetch only the last two commits and avoid downloading unnecessary history.&lt;/p&gt;
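&lt;p&gt;In plain git terms, the depth parameter amounts to a shallow clone. A small, self-contained sketch (it builds a throwaway repository just to show the effect; paths and names are placeholders):&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d)
# Build a toy repository with three commits
git init -q "$tmp/origin-repo"
cd "$tmp/origin-repo"
git config user.email ci@example.com
git config user.name ci
for i in 1 2 3; do echo "$i" > f.txt; git add f.txt; git commit -qm "commit $i"; done
cd "$tmp"
# Shallow-clone only the two most recent commits, as depth: 2 does in the task
git clone -q --depth 2 "file://$tmp/origin-repo" shallow
cd shallow
git log --oneline | wc -l   # counts 2 commits
```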

&lt;p&gt;We will also generate some task results we can use in our pipelines. Results are declared in the Task custom resource, and Tekton provides an &lt;a href="https://tekton.dev/docs/results/api/" rel="noopener noreferrer"&gt;API&lt;/a&gt; to interact with them. To use a result from another task, we reference it as &lt;code&gt;$(tasks.&amp;lt;task-name&amp;gt;.results.&amp;lt;result-name&amp;gt;)&lt;/code&gt;.&lt;/p&gt;
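&lt;p&gt;As a sketch of how a result is produced: the task declares it under &lt;code&gt;results&lt;/code&gt; and a step writes it to the path Tekton provides. The result name below matches the &lt;code&gt;diff&lt;/code&gt; parameter used earlier in the pipeline; the script itself is illustrative, not our exact implementation:&lt;/p&gt;

```yaml
spec:
  results:
    - name: diff
      description: files changed by the commits that triggered the pipeline
  steps:
    - name: compute-diff
      image: alpine/git
      script: |
        #!/bin/sh
        # Writing to $(results.diff.path) is what makes
        # $(tasks.fetch-source.results.diff) available to later tasks
        git diff --name-only HEAD~1 HEAD | tr '\n' ' ' > $(results.diff.path)
```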

&lt;p&gt;Here is an example of a &lt;a href="https://github.com/woovibr/WooviOps/blob/main/deployments/tekton/templates/task-git-clone.yaml" rel="noopener noreferrer"&gt;fetch-source task&lt;/a&gt; in our public repository.&lt;/p&gt;

&lt;h4&gt;
  
  
  Test Task
&lt;/h4&gt;

&lt;p&gt;This is a simple task that executes a script command you provide. As with the previous task, we define the workspace where we clone the repository and define one step to install dependencies and another to run the tests. There are many ways to organize this same scenario; this is just an example of tasks and how steps are defined.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton.dev/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Task&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;task-test&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.projectName&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;-&lt;/span&gt;
    &lt;span class="s"&gt;A generic task to run any bash command in any given image&lt;/span&gt;
  &lt;span class="na"&gt;workspaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;source&lt;/span&gt;
      &lt;span class="na"&gt;optional&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
      &lt;span class="na"&gt;optional&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;run-test&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;install-deps&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;diff&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;diff of the pull request&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
      &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;node:latest"&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;install&lt;/span&gt;
      &lt;span class="s"&gt;image&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(params.image)&lt;/span&gt;
      &lt;span class="s"&gt;workingDir&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(workspaces.source.path)&lt;/span&gt;
      &lt;span class="s"&gt;script&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;#!/usr/bin/env bash&lt;/span&gt;
        &lt;span class="s"&gt;set -xe&lt;/span&gt;
        &lt;span class="s"&gt;$(params.install-deps)&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(params.image)&lt;/span&gt;
      &lt;span class="na"&gt;workingDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(workspaces.source.path)&lt;/span&gt;
      &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;#!/usr/bin/env bash&lt;/span&gt;
        &lt;span class="s"&gt;set -xe&lt;/span&gt;
        &lt;span class="s"&gt;$(params.run-test)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Dashboard
&lt;/h3&gt;

&lt;p&gt;To visualize your Tekton resources, the Tekton operator creates a service to host the Tekton Dashboard. To find the IP assigned to the dashboard, run:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

 kubectl get services &lt;span class="nt"&gt;-n&lt;/span&gt; tekton-pipelines


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
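&lt;p&gt;If the service has no external IP, a port-forward also works for local access. This assumes the default service name and port of a standard Tekton Dashboard install; adjust them to your setup:&lt;/p&gt;

```shell
kubectl port-forward -n tekton-pipelines svc/tekton-dashboard 9097:9097
# then browse to http://localhost:9097
```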

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fst6rnb3y7w756innjfib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fst6rnb3y7w756innjfib.png" alt="Tekton Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Considerations
&lt;/h3&gt;

&lt;p&gt;In this article, we covered the steps to automate a simple testing pipeline using the Tekton Operator. There are many ways to achieve this same result, and Tekton is a powerful tool that offers a lot of resources you can use depending on your needs. Additionally, the community is active and supportive. You can open an issue on the &lt;a href="https://github.com/tektoncd" rel="noopener noreferrer"&gt;Tekton Github&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Woovi is also improving our own CI/CD internal platform. In the &lt;a href="https://github.com/woovibr/WooviOps" rel="noopener noreferrer"&gt;WooviOps&lt;/a&gt; repo, you can find our basic implementation of the same case we covered in this article and more. If you want to help us improve, we welcome your PRs, comments, or you can just reach out to us on Twitter.&lt;/p&gt;

&lt;p&gt;Also, Woovi is &lt;a href="https://woovi.com/jobs/" rel="noopener noreferrer"&gt;hiring!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@rocknrollmonkey?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Rock'n Roll Monkey&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/gray-and-orange-plastic-robot-toy-LEPhZkQbUrk?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tekton</category>
      <category>cloudnative</category>
      <category>cloud</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Owning your infrastructure: A Journey to Bare Metal and out of the Cloud</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Wed, 10 Apr 2024 13:10:53 +0000</pubDate>
      <link>https://dev.to/woovi/owning-your-infrastructure-a-journey-to-bare-metal-and-out-of-the-cloud-55g8</link>
      <guid>https://dev.to/woovi/owning-your-infrastructure-a-journey-to-bare-metal-and-out-of-the-cloud-55g8</guid>
      <description>&lt;p&gt;In today's evolving technological landscape, companies continually assess their infrastructure choices to adapt to changing needs. For Woovi, having the ability to rapidly test-run ideas, maintain full control of our infrastructure, and yet not overspend was the driving force behind this decision. In this article, we'll delve into Woovi's transition and explore the motivations, challenges, and outcomes of transitioning out of the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Motivations and Aspirations
&lt;/h2&gt;

&lt;p&gt;Our motivations for transitioning our infrastructure stemmed from several key factors. Firstly, we aimed to reduce costs significantly while simultaneously enhancing the quality of our technological environment, which seemed almost too good to be true. At the same time, we also wanted a higher standard of infrastructure performance: faster database operations and application responsiveness, and better, faster logging and debugging. We wanted it all.&lt;/p&gt;




&lt;h3&gt;
  
  
  AWS Cloud stack
&lt;/h3&gt;

&lt;p&gt;We had a very straightforward setup on AWS; however, its performance was not particularly fast. As a result, any heavy data processing job could significantly slow us down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EKS&lt;/strong&gt;&lt;br&gt;
  It was our Kubernetes service on AWS, with 5 t3.2xlarge nodes; performance was OK but not great. We used EBS for block storage, as some of our data services ran on k8s.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3&lt;/strong&gt;&lt;br&gt;
  Object storage for assets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ECR&lt;/strong&gt;&lt;br&gt;
  ECR was our container registry; it is great in general. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MongoDB&lt;/strong&gt;&lt;br&gt;
  A replica set on k8s, with 1 hidden replica for analytics and backups. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redis&lt;/strong&gt;&lt;br&gt;
  1 node on k8s. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Elasticsearch&lt;/strong&gt;&lt;br&gt;
  1 Node running on k8s, we also ran kibana and APM integrations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD&lt;/strong&gt; &lt;br&gt;
  Github and Circleci, this part of the migration was so important and complex that we will have a separate series just on CI/CD. &lt;/p&gt;




&lt;h3&gt;
  
  
  Woovi Bare Metal Stack
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Data center&lt;/strong&gt;&lt;br&gt;
We chose a datacenter in São Paulo with colocation services: you bring the servers and they take care of them, offering internet redundancy, generators, cooling, and 24/7 support. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We run two MikroTik routers because we wanted redundancy at the network layer; we will talk more about how we achieve that later. We have internal DNS services for service discovery, plus other network services we will soon cover. We also have a VRRP (Virtual Router Redundancy Protocol) setup on our MikroTik routers for failover. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Two internet direct link providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Servers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Proxmox was our hypervisor solution due to its comprehensive feature set, providing us with the necessary flexibility. As a small team responsible for managing our infrastructure, Proxmox aligns perfectly with our requirements. Moreover, it's a familiar tool for our team.&lt;br&gt;
Our staging environment is hosted on a Dell T550, where we run a small version of each service we have in production.&lt;br&gt;
In our production environment, we run 5 Dell R750 servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt;&lt;br&gt;
We chose MicroK8s for its lightweight nature and comprehensive feature set. We sought simplicity: all Kubernetes applications running in our cluster are stateless, prioritizing performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MongoDB&lt;/strong&gt; &lt;br&gt;
A 3-member replica set running on separate servers, in LXC containers managed by Proxmox. For disk, we use 1.5TB of SSD space, with 15 CPU cores and 48GB of RAM. MongoDB is quite heavy on memory; our focus is performance, but we also put a lot of effort into backup strategies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redis&lt;/strong&gt;&lt;br&gt;
One node running in a managed container; we have plans to expand this service, but for now it works great for us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We migrated our ELK stack from version 7 to 8 and integrated it with Elastic Agents; now we plan to expand observability using Logstash and Beats to improve application monitoring.&lt;/p&gt;

&lt;p&gt;We also added Prometheus/Grafana to our stack, as we want to evolve our Kubernetes observability as we scale.&lt;/p&gt;

&lt;p&gt;For network and hardware monitoring, we have a Zabbix Server running on an LXC container. &lt;/p&gt;

&lt;p&gt;There is a lot of overlap between these solutions; we wanted to test most of them and see which is best suited to our needs. In the next months, we will have a better understanding of these tools, and this might change. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We created a full CI/CD platform with Tekton and ArgoCD, which we discuss in detail in its own series, &lt;a href="https://dev.to/woovi/how-woovi-is-building-a-self-hosted-cloud-native-cicd-platform-with-tekton-and-argocd-22cd"&gt;Cloud Native CI/CD&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Registry&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We have implemented a fully managed registry using Harbor, which serves as a repository for Docker images and Helm Charts. It is seamlessly integrated into our CI/CD pipelines and Kubernetes environment. Additionally, Harbor offers the ability to integrate with Static Analysis and vulnerability tools such as &lt;a href="https://github.com/quay/clair" rel="noopener noreferrer"&gt;Clair&lt;/a&gt; and &lt;a href="https://github.com/aquasecurity/trivy" rel="noopener noreferrer"&gt;Trivy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For provisioning we use Ansible. This is part of an effort to follow the IaC (Infrastructure as Code) paradigm and keep a good level of visibility as we scale.&lt;/p&gt;
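&lt;p&gt;As an illustration, a minimal playbook for this kind of provisioning could look like the sketch below; the inventory group and package names are hypothetical, not our actual setup.&lt;/p&gt;

```yaml
# Hypothetical sketch: install base packages on hypervisor nodes.
# "proxmox_nodes" is an assumed inventory group name.
- name: Provision base tooling
  hosts: proxmox_nodes
  become: true
  tasks:
    - name: Ensure common packages are present
      ansible.builtin.apt:
        name:
          - qemu-guest-agent
          - chrony
        state: present
```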

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvnkf6g567fgvgka1npk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvnkf6g567fgvgka1npk.png" alt="Data Center Overview" width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Migration
&lt;/h3&gt;

&lt;p&gt;The migration was simple: &lt;/p&gt;

&lt;p&gt;1 - Cut access to the platform using Cloudflare.&lt;/p&gt;

&lt;p&gt;2 - Scale down workers on EKS&lt;/p&gt;

&lt;p&gt;3 - Scale down servers on EKS&lt;/p&gt;

&lt;p&gt;4 - Backup disks &lt;/p&gt;

&lt;p&gt;5 - Wait for replication on Mongodb Cluster&lt;/p&gt;

&lt;p&gt;6 - Scale up Services on new infrastructure&lt;/p&gt;

&lt;p&gt;7 - Clear access on Cloudflare to the new infrastructure&lt;/p&gt;

&lt;p&gt;8 - Test absolutely everything!&lt;/p&gt;

&lt;p&gt;Done!&lt;/p&gt;
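&lt;p&gt;The cutover order can be sketched as a script; each step here is a placeholder, not the real Cloudflare/EKS/Proxmox commands.&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch of the cutover order only; step bodies are placeholders.
set -e
log=""
step() { log="${log}${1};"; echo "step: $1"; }

step "cut-access-cloudflare"
step "scale-down-eks-workers"
step "scale-down-eks-servers"
step "backup-disks"
step "wait-for-mongodb-replication"
step "scale-up-new-infrastructure"
step "clear-access-cloudflare"
step "test-everything"
```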

&lt;p&gt;The most challenging aspect of the migration was the database transition. To address this, we created proxy containers for our new replicas and incorporated them as hidden replicas within our EKS cluster. Over the course of a week-long testing period, we achieved nearly perfect replication with minimal lag. This success gave us confidence that downtime would be kept to a minimum. We did encounter some hiccups during the migration process, particularly with outdated images on our self-hosted registry, causing a few services to be down for approximately 10 minutes. Despite these challenges, we successfully migrated the entire infrastructure in just 17 minutes.&lt;/p&gt;

&lt;h3&gt;
  
  
  First Week Impressions
&lt;/h3&gt;

&lt;p&gt;Everything seems almost too fast. In this first week, we fixed bugs that only occurred because our database is so much faster than before. Responsiveness on our platform, queries, observability, tooling: everything we had before became much faster. &lt;/p&gt;

&lt;p&gt;Our CI/CD is currently broken, with deploys being performed manually, but we will have it fully integrated in no time. &lt;/p&gt;

&lt;p&gt;Cost was, as expected, a big win; our AWS bill ranged from 50k to 100k BRL before.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8e91zw237mus820an72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8e91zw237mus820an72.png" alt="AWS Cost" width="686" height="858"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will share more in-depth articles about aspects of the migration as we improve our infrastructure. If you want to help...&lt;/p&gt;




&lt;h2&gt;
  
  
  We are &lt;a href="https://woovi.com/jobs/" rel="noopener noreferrer"&gt;hiring&lt;/a&gt;!
&lt;/h2&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@dos?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Filipe Dos Santos Mendes&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/man-jumping-in-sky-during-daytime-2s5spoiwX88?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How Woovi is building a self hosted cloud native CI/CD platform with Tekton and Argocd</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Tue, 09 Apr 2024 13:12:29 +0000</pubDate>
      <link>https://dev.to/woovi/how-woovi-is-building-a-self-hosted-cloud-native-cicd-platform-with-tekton-and-argocd-22cd</link>
      <guid>https://dev.to/woovi/how-woovi-is-building-a-self-hosted-cloud-native-cicd-platform-with-tekton-and-argocd-22cd</guid>
      <description>&lt;p&gt;At Woovi, we prioritize both quality and speed in our software development processes, as we've previously discussed in our article on &lt;a href="https://dev.to/woovi/quality-of-software-at-woovi-pch"&gt;software quality&lt;/a&gt;. This commitment entails running over 100,000 tests daily and deploying multiple times to production without disruption. Achieving this level of efficiency requires substantial investment in CI/CD infrastructure. In the past, we relied on CircleCI as our preferred platform, which served us well but came with significant costs.&lt;/p&gt;

&lt;p&gt;In this article, we'll provide an overview of our journey toward building our own CI/CD platform. This platform not only enhances our development experience but also resulted in a remarkable 87% reduction in CI/CD costs within the first 30 days of implementation, compared to our previous setup with CircleCI. We're excited to share our progress and insights with you.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;The requirements for our CI/CD pipelines are quite simple, but not easy to achieve:&lt;/p&gt;

&lt;h4&gt;
  
  
  GitOps
&lt;/h4&gt;

&lt;p&gt;It must follow GitOps culture: all infrastructure must be described in a Git repository, so that we have auditable changes and updates not only for our software but for our infrastructure as well. &lt;/p&gt;

&lt;h4&gt;
  
  
  Automation
&lt;/h4&gt;

&lt;p&gt;Changes in software must trigger CI/CD pipeline automatically, without human intervention. &lt;/p&gt;

&lt;h4&gt;
  
  
  Conditional Testing
&lt;/h4&gt;

&lt;p&gt;Efficient resource allocation is crucial. Therefore, only tests relevant to the modified portion of the software should be executed. This minimizes unnecessary resource consumption and time spent.&lt;/p&gt;
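&lt;p&gt;For example, the affected packages can be derived from the list of changed files. In this sketch the file list is hard-coded for illustration; a real pipeline would read it from something like &lt;code&gt;git diff --name-only&lt;/code&gt;.&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Hypothetical sketch: map changed files to the packages whose tests must run.
# The file list is hard-coded here; a real pipeline would read it from git.
set -e
changed_files="packages/api/src/server.js
packages/api/src/router.js
packages/web/src/index.js"

# The package name is the second path segment (packages/<name>/...).
affected=$(printf '%s\n' "$changed_files" | cut -d/ -f2 | sort -u)
echo "Packages to test: $affected"
```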

&lt;h4&gt;
  
  
  Full Suite Testing
&lt;/h4&gt;

&lt;p&gt;It must be able to run full test suites; our monorepos have 7-10k tests each. We use Jest as our test framework, which means we have to optimize for it.&lt;/p&gt;

&lt;h4&gt;
  
  
  All self hosted
&lt;/h4&gt;

&lt;p&gt;No cloud services, no paid platforms. &lt;/p&gt;

&lt;h4&gt;
  
  
  Cloud Native
&lt;/h4&gt;

&lt;p&gt;Taking advantage of the k8s API and operators, we can use the control loop to manage our CI/CD applications. Tekton is a very powerful cloud-native tool we will be using for this project. &lt;/p&gt;

&lt;h2&gt;
  
  
  Design
&lt;/h2&gt;

&lt;p&gt;The CI/CD flow components are:&lt;/p&gt;

&lt;h4&gt;
  
  
  Events
&lt;/h4&gt;

&lt;p&gt;The events are generated from GitHub: each push of new code a developer performs has to trigger a test pipeline. Deploy requests are created from a pull request with the release tags.&lt;/p&gt;
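&lt;p&gt;As a rough sketch, a GitHub push event can be wired to a pipeline with a Tekton EventListener like the one below; all resource names are placeholders, not our actual setup.&lt;/p&gt;

```yaml
# Hypothetical sketch of an EventListener for GitHub push events.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-push-listener
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: on-push
      interceptors:
        - ref:
            name: github
          params:
            - name: eventTypes
              value: ["push"]
      bindings:
        - ref: github-push-binding
      template:
        ref: test-pipeline-template
```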

&lt;h4&gt;
  
  
  Pipelines
&lt;/h4&gt;

&lt;p&gt;One pipeline for each repository. The pipeline will run tests, build applications, update the registry and bucket with the new version of the software if the tests pass, and deploy to the correct environment based on the event that triggered the pipeline.&lt;/p&gt;
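&lt;p&gt;That shape can be sketched as a Tekton Pipeline; the task names here are placeholders for illustration.&lt;/p&gt;

```yaml
# Hypothetical sketch of the per-repository pipeline shape.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: repo-ci-pipeline
spec:
  workspaces:
    - name: source
  tasks:
    - name: run-tests
      taskRef:
        name: run-tests
      workspaces:
        - name: source
          workspace: source
    - name: build-and-push
      runAfter: ["run-tests"]
      taskRef:
        name: build-and-push
      workspaces:
        - name: source
          workspace: source
    - name: deploy
      runAfter: ["build-and-push"]
      taskRef:
        name: deploy
```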

&lt;h4&gt;
  
  
  Applications
&lt;/h4&gt;

&lt;p&gt;Pipelines will interact with applications directly, deploying new versions, performing rollbacks if something goes wrong and sending notifications about overall status. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ffs1yig8x0lpevqb6sm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ffs1yig8x0lpevqb6sm.png" alt="Image description" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tekton and ArgoCD
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjobbjidzb4ub4cngnq7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjobbjidzb4ub4cngnq7l.png" alt="Tekton" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tekton is a very powerful cloud-native CI/CD framework; it is open source and has very strong community support. We chose it mainly because it has all we need in terms of features, with some still in development, and the community is very active and helpful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6b3ukwsmwyfl876b81h6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6b3ukwsmwyfl876b81h6.png" alt="Argocd" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ArgoCD is a declarative GitOps tool; it allows us to track our Helm charts from repository changes. We also implement the deploy stage of our production environment using ArgoCD APIs, so that a task in Tekton can sync applications in Kubernetes after we update our registry. It is also very complete, with many security features out of the box.&lt;/p&gt;
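&lt;p&gt;For illustration, a Tekton task that asks ArgoCD to sync an application after a registry update might look like this sketch; the application parameter and server address are placeholders, not our actual resources.&lt;/p&gt;

```yaml
# Hypothetical sketch: a Tekton task step that triggers an ArgoCD sync.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: argocd-sync
spec:
  params:
    - name: application
  steps:
    - name: sync
      image: quay.io/argoproj/argocd:latest
      script: |
        argocd app sync $(params.application) --server argocd-server.argocd.svc.cluster.local
        argocd app wait $(params.application) --health
```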

&lt;p&gt;We have a dedicated server for our staging environment that runs the Proxmox hypervisor; there, we run our Kubernetes cluster in a VM. We will do a deep dive in upcoming articles on each building block and hope to show a little of how we set up the platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initial Impact
&lt;/h2&gt;

&lt;p&gt;We saw a whopping reduction of R$40,000.00 in monthly CI/CD costs!&lt;/p&gt;

&lt;p&gt;The primary expense in our CI/CD setup lies in the development pipelines. In our initial implementation, we opted to retain the production pipelines while migrating only the development ones. The outcome? An impressive 87% reduction in costs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes51lxxw8qnl2i5s6tnv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes51lxxw8qnl2i5s6tnv.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not all of the impact was positive: our pipelines are taking 2x the time to run as of the last implementation, and we have a lot to develop in order to match CircleCI's features. Performance also took a big hit; it turns out that running so many tests using Jest costs a lot of resources. When running full test pipelines, our staging server is always at 95% capacity.&lt;/p&gt;

&lt;p&gt;In the next article, we will cover the steps involved in setting up Tekton and ArgoCD within Woovi's Kubernetes cluster, or any k8s cluster.&lt;/p&gt;

&lt;p&gt;In the meantime, we've just published our tools and manifests on GitHub &lt;a href="https://github.com/woovibr/WooviOps" rel="noopener noreferrer"&gt;WooviOps&lt;/a&gt;. Feel free to contribute, and let us know if we can improve anything. &lt;/p&gt;




&lt;p&gt;Woovi&lt;br&gt;
Woovi is a Startup that enables shoppers to pay as they like. To make this possible, Woovi provides instant payment solutions for merchants to accept orders.&lt;/p&gt;

&lt;p&gt;If you want to work with us, we are &lt;a href="https://woovi.com/jobs/" rel="noopener noreferrer"&gt;hiring&lt;/a&gt;!&lt;/p&gt;




&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@er1end?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Erlend Ekseth&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/body-of-water-can-be-seen-through-the-tunnel-0a5VbkqqFFE?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>cloud</category>
      <category>k8s</category>
    </item>
    <item>
      <title>How to setup local development with ELK observability tools using docker-compose</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Mon, 05 Feb 2024 12:26:17 +0000</pubDate>
      <link>https://dev.to/woovi/how-to-setup-local-development-with-elk-observability-tools-using-docker-compose-4hj5</link>
      <guid>https://dev.to/woovi/how-to-setup-local-development-with-elk-observability-tools-using-docker-compose-4hj5</guid>
      <description>&lt;p&gt;A lot of observability is done using cloud provider tools that are one subscription away, yet replicating them for local development can be challenging. At Woovi, we prioritize aligning our development environment with production. That's why we run locally all services available in production using Docker Compose. Observability is not an exception. In this article, we'll delve into the detailed setup of ELK observability tools in our local development environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  ELK Tools
&lt;/h2&gt;

&lt;p&gt;These are the tools we use for observability in our stack.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;APM&lt;br&gt;
Application performance monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Elasticsearch&lt;br&gt;
Search Engine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kibana&lt;br&gt;
Visualization Platform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logstash&lt;br&gt;
Log ingestion&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;MetricBeat&lt;br&gt;
Server data shipper&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Filebeat&lt;br&gt;
Log harvester and aggregator&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fleet&lt;br&gt;
Agents integration&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Elastic Integrations offers a unified way to integrate features of the ELK stack. Managed by Fleet servers, all available integrations can be set up over Kibana using Elastic Agents and managed with policies through a unified interface. To use these features, we need security enabled in Kibana and Elasticsearch, and we need to provide certificate authentication for each service. To simplify, we will be using the same certificate for all services, but this is only suited for local development and should not be used in production.&lt;/p&gt;
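&lt;p&gt;The setup container in the compose file later in this article generates the CA and per-service certificates with elasticsearch-certutil; conceptually, that is equivalent to this rough openssl sketch (for illustration only):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Illustration only: create a throwaway local CA and sign one service
# certificate with it, roughly what elasticsearch-certutil automates.
set -e
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=local-dev-ca" -keyout ca.key -out ca.crt
openssl req -newkey rsa:2048 -nodes -subj "/CN=es01" \
  -keyout es01.key -out es01.csr
openssl x509 -req -in es01.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 30 -out es01.crt
```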

&lt;p&gt;First, we will need configuration files for some services. The file structure will be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;├── .env

├── docker-compose.yml

├──|observability |──  filebeat.yml

├── |observability |── logstash.conf

└── |observability |── metricbeat.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
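&lt;p&gt;To scaffold this layout with empty placeholder files (a convenience sketch; the actual file contents come later in the article):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Create the directory layout shown above with empty placeholder files.
set -e
mkdir -p observability
touch .env docker-compose.yml
touch observability/filebeat.yml observability/logstash.conf observability/metricbeat.yml
```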



&lt;h2&gt;
  
  
  Docker Compose
&lt;/h2&gt;

&lt;p&gt;The Docker Compose file will be responsible for setting up certificates and copying them to the volume of each service; that way, we can set up SSL between services. You can find the .env file in the GitHub &lt;a href="https://github.com/caiocampoos/elk-local-dev/blob/main/.env" rel="noopener noreferrer"&gt;repo&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.8"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;setup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;certs:/usr/share/elasticsearch/config/certs&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0"&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="s"&gt;bash -c '&lt;/span&gt;
        &lt;span class="s"&gt;if [ x${ELASTIC_PASSWORD} == x ]; then&lt;/span&gt;
          &lt;span class="s"&gt;echo "Set the ELASTIC_PASSWORD environment variable in the .env file";&lt;/span&gt;
          &lt;span class="s"&gt;exit 1;&lt;/span&gt;
        &lt;span class="s"&gt;elif [ x${KIBANA_PASSWORD} == x ]; then&lt;/span&gt;
          &lt;span class="s"&gt;echo "Set the KIBANA_PASSWORD environment variable in the .env file";&lt;/span&gt;
          &lt;span class="s"&gt;exit 1;&lt;/span&gt;
        &lt;span class="s"&gt;fi;&lt;/span&gt;
        &lt;span class="s"&gt;if [ ! -f config/certs/ca.zip ]; then&lt;/span&gt;
          &lt;span class="s"&gt;echo "Creating CA";&lt;/span&gt;
          &lt;span class="s"&gt;bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;&lt;/span&gt;
          &lt;span class="s"&gt;unzip config/certs/ca.zip -d config/certs;&lt;/span&gt;
        &lt;span class="s"&gt;fi;&lt;/span&gt;
        &lt;span class="s"&gt;if [ ! -f config/certs/certs.zip ]; then&lt;/span&gt;
          &lt;span class="s"&gt;echo "Creating certs";&lt;/span&gt;
          &lt;span class="s"&gt;echo -ne \&lt;/span&gt;
          &lt;span class="s"&gt;"instances:\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"  - name: es01\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"    dns:\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"      - es01\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"      - localhost\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"    ip:\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"      - 127.0.0.1\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"  - name: kibana\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"    dns:\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"      - kibana\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"      - localhost\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"    ip:\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"      - 127.0.0.1\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"  - name: fleet-server\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"    dns:\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"      - fleet-server\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"      - localhost\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"    ip:\n"\&lt;/span&gt;
          &lt;span class="s"&gt;"      - 127.0.0.1\n"\&lt;/span&gt;
          &lt;span class="s"&gt;&amp;gt; config/certs/instances.yml;&lt;/span&gt;
          &lt;span class="s"&gt;bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;&lt;/span&gt;
          &lt;span class="s"&gt;unzip config/certs/certs.zip -d config/certs;&lt;/span&gt;
        &lt;span class="s"&gt;fi;&lt;/span&gt;
        &lt;span class="s"&gt;echo "Setting file permissions"&lt;/span&gt;
        &lt;span class="s"&gt;chown -R root:root config/certs;&lt;/span&gt;
        &lt;span class="s"&gt;find . -type d -exec chmod 750 \{\} \;;&lt;/span&gt;
        &lt;span class="s"&gt;find . -type f -exec chmod 640 \{\} \;;&lt;/span&gt;
        &lt;span class="s"&gt;echo "Waiting for Elasticsearch availability";&lt;/span&gt;
        &lt;span class="s"&gt;until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;&lt;/span&gt;
        &lt;span class="s"&gt;echo "Setting kibana_system password";&lt;/span&gt;
        &lt;span class="s"&gt;until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;&lt;/span&gt;
        &lt;span class="s"&gt;echo "All done!";&lt;/span&gt;
      &lt;span class="s"&gt;'&lt;/span&gt;
    &lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD-SHELL"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-f&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config/certs/es01/es01.crt&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;]"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;120&lt;/span&gt;

  &lt;span class="na"&gt;es01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;setup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;co.elastic.logs/module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elasticsearch&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;certs:/usr/share/elasticsearch/config/certs&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;esdata01:/usr/share/elasticsearch/data&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${ES_PORT}:9200&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node.name=es01&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cluster.name=${CLUSTER_NAME}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;discovery.type=single-node&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_PASSWORD=${ELASTIC_PASSWORD}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;bootstrap.memory_lock=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.security.enabled=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.security.http.ssl.enabled=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.security.http.ssl.key=certs/es01/es01.key&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.security.http.ssl.certificate=certs/es01/es01.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.security.transport.ssl.enabled=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.security.transport.ssl.key=certs/es01/es01.key&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.security.transport.ssl.certificate=certs/es01/es01.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.security.transport.ssl.verification_mode=certificate&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.license.self_generated.type=${LICENSE}&lt;/span&gt;
    &lt;span class="na"&gt;mem_limit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${ES_MEM_LIMIT}&lt;/span&gt;
    &lt;span class="na"&gt;ulimits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memlock&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;soft&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-1&lt;/span&gt;
        &lt;span class="na"&gt;hard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-1&lt;/span&gt;
    &lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;[&lt;/span&gt;
          &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD-SHELL"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
          &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;curl&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-s&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--cacert&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config/certs/ca/ca.crt&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;https://localhost:9200&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;grep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-q&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'missing&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;authentication&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;credentials'"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;120&lt;/span&gt;

  &lt;span class="na"&gt;kibana&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;es01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.elastic.co/kibana/kibana:${STACK_VERSION}&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;co.elastic.logs/module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kibana&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;certs:/usr/share/kibana/config/certs&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kibanadata:/usr/share/kibana/data&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./observability/kibana.yml:/usr/share/kibana/config/kibana.yml:ro&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${KIBANA_PORT}:5601&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SERVERNAME=kibana&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTICSEARCH_HOSTS=https://es01:9200&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTICSEARCH_USERNAME=kibana_system&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTION_KEY}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTION_KEY}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTION_KEY}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;XPACK_REPORTING_KIBANASERVER_HOSTNAME=localhost&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SERVER_SSL_ENABLED=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SERVER_SSL_CERTIFICATE=config/certs/kibana/kibana.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SERVER_SSL_KEY=config/certs/kibana/kibana.key&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SERVER_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_APM_SECRET_TOKEN=${ELASTIC_APM_SECRET_TOKEN}&lt;/span&gt;
    &lt;span class="na"&gt;mem_limit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${KB_MEM_LIMIT}&lt;/span&gt;
    &lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;[&lt;/span&gt;
          &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD-SHELL"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
          &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;curl&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-I&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-s&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--cacert&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config/certs/ca/ca.crt&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;https://localhost:5601&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;grep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-q&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'HTTP/1.1&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;302&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Found'"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;120&lt;/span&gt;

  &lt;span class="na"&gt;metricbeat01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;es01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
      &lt;span class="na"&gt;kibana&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.elastic.co/beats/metricbeat:${STACK_VERSION}&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;certs:/usr/share/metricbeat/certs&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;metricbeatdata01:/usr/share/metricbeat/data&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./observability/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock:ro"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/proc:/hostfs/proc:ro"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/:/hostfs:ro"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_USER=elastic&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_PASSWORD=${ELASTIC_PASSWORD}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_HOSTS=https://es01:9200&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KIBANA_HOSTS=https://kibana:5601&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;LOGSTASH_HOSTS=http://logstash01:9600&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CA_CERT=certs/ca/ca.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ES_CERT=certs/es01/es01.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ES_KEY=certs/es01/es01.key&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KB_CERT=certs/kibana/kibana.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KB_KEY=certs/kibana/kibana.key&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-strict.perms=false&lt;/span&gt;

  &lt;span class="na"&gt;filebeat01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;es01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.elastic.co/beats/filebeat:${STACK_VERSION}&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;certs:/usr/share/filebeat/certs&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;filebeatdata01:/usr/share/filebeat/data&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./filebeat_ingest_data/:/usr/share/filebeat/ingest_data/"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./observability/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/var/lib/docker/containers:/var/lib/docker/containers:ro"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock:ro"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_USER=elastic&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_PASSWORD=${ELASTIC_PASSWORD}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_HOSTS=https://es01:9200&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KIBANA_HOSTS=https://kibana:5601&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;LOGSTASH_HOSTS=http://logstash01:9600&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CA_CERT=certs/ca/ca.crt&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-strict.perms=false&lt;/span&gt;

  &lt;span class="na"&gt;logstash01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;es01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
      &lt;span class="na"&gt;kibana&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.elastic.co/logstash/logstash:${STACK_VERSION}&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;co.elastic.logs/module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logstash&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;certs:/usr/share/logstash/certs&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;logstashdata01:/usr/share/logstash/data&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./logstash_ingest_data/:/usr/share/logstash/ingest_data/"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./observability/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;xpack.monitoring.enabled=false&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_USER=elastic&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_PASSWORD=${ELASTIC_PASSWORD}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ELASTIC_HOSTS=https://es01:9200&lt;/span&gt;

  &lt;span class="na"&gt;fleet-server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;kibana&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
      &lt;span class="na"&gt;es01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.elastic.co/beats/elastic-agent:${STACK_VERSION}&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;certs:/certs&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;fleetserverdata:/usr/share/elastic-agent&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/var/lib/docker/containers:/var/lib/docker/containers:ro"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock:ro"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/proc:/hostfs/proc:ro"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/:/hostfs:ro"&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${FLEET_PORT}:8220&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${APMSERVER_PORT}:8200&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SSL_CERTIFICATE_AUTHORITIES=/certs/ca/ca.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CERTIFICATE_AUTHORITIES=/certs/ca/ca.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_CA=/certs/ca/ca.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_ENROLL=1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_INSECURE=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_SERVER_ELASTICSEARCH_CA=/certs/ca/ca.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_SERVER_ELASTICSEARCH_HOST=https://es01:9200&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_SERVER_ELASTICSEARCH_INSECURE=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_SERVER_ENABLE=1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_SERVER_CERT=/certs/fleet-server/fleet-server.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_SERVER_CERT_KEY=/certs/fleet-server/fleet-server.key&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_SERVER_INSECURE_HTTP=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_SERVER_POLICY_ID=fleet-server-policy&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FLEET_URL=https://fleet-server:8220&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KIBANA_FLEET_CA=/certs/ca/ca.crt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KIBANA_FLEET_SETUP=1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KIBANA_FLEET_USERNAME=elastic&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KIBANA_FLEET_PASSWORD=${ELASTIC_PASSWORD}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KIBANA_HOST=https://kibana:5601&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;elastic_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="na"&gt;certs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
  &lt;span class="na"&gt;esdata01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
  &lt;span class="na"&gt;kibanadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
  &lt;span class="na"&gt;metricbeatdata01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
  &lt;span class="na"&gt;filebeatdata01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
  &lt;span class="na"&gt;logstashdata01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
  &lt;span class="na"&gt;fleetserverdata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
  &lt;span class="na"&gt;cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elastic&lt;/span&gt;
    &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's not as complicated as it looks. Here is what the compose file does, service by service:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Creates a temporary Elasticsearch container used to generate the certificates. A bash script runs &lt;code&gt;elasticsearch-certutil&lt;/code&gt;, where we specify our cluster nodes and server instances, creating the CA certificate and the node certificates, setting the correct file permissions, and distributing the certificates and keys. The script then verifies that authentication works against the Elasticsearch service, and the setup container stops.&lt;/p&gt;
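The node list handed to `elasticsearch-certutil cert` is typically declared in an instances file. A minimal sketch for a stack like this one (the file name, DNS entries, and IPs here are assumptions, not the article's exact file) could be:

```yaml
# instances.yml - hypothetical input for: elasticsearch-certutil cert --in instances.yml
instances:
  - name: es01
    dns:
      - es01
      - localhost
    ip:
      - 127.0.0.1
  - name: kibana
    dns:
      - kibana
      - localhost
  - name: fleet-server
    dns:
      - fleet-server
      - localhost
```

Each entry produces a certificate/key pair (e.g. `certs/es01/es01.crt` and `certs/es01/es01.key`) matching the paths referenced in the compose file.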

&lt;ul&gt;
&lt;li&gt;ES01&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The main Elasticsearch service node. The &lt;code&gt;xpack.security&lt;/code&gt; options must be enabled and point to the folder containing the node certificate and the certificate authority. A basic health check uses authentication validation as its main test.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kibana&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Kibana service. Its security configuration mirrors Elasticsearch: xpack must be enabled, and extra configuration is passed in Kibana's own YAML file, mounted as a read-only bind volume:&lt;br&gt;
  &lt;code&gt;./observability/kibana.yml:/usr/share/kibana/config/kibana.yml:ro&lt;/code&gt;&lt;br&gt;
  We also enable internal APM traces, so once the initial setup is up we can monitor Kibana itself.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;filebeat01&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For Filebeat we bind-mount &lt;code&gt;/var/lib/docker/containers&lt;/code&gt; and &lt;code&gt;/var/run/docker.sock&lt;/code&gt;, allowing Filebeat to collect the internal logs of every container. The configuration in &lt;code&gt;observability/filebeat.yml&lt;/code&gt; sets up autodiscover for that purpose, using Docker as the provider, and also configures the Kibana and Elasticsearch connections, the ingest data input, and the Elasticsearch output.&lt;/p&gt;
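A `filebeat.yml` along those lines might look like the sketch below (this is an assumed shape, not the article's exact file; the environment variables are the ones the compose file exports to the container):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker          # discover containers via the mounted docker.sock
      hints.enabled: true   # honor co.elastic.logs/* labels set in the compose file

filebeat.inputs:
  - type: log               # also read files dropped into the ingest folder
    paths:
      - /usr/share/filebeat/ingest_data/*.log

output.elasticsearch:
  hosts: ["${ELASTIC_HOSTS}"]
  username: "${ELASTIC_USER}"
  password: "${ELASTIC_PASSWORD}"
  ssl.certificate_authorities: ["certs/ca/ca.crt"]

setup.kibana:
  host: "${KIBANA_HOSTS}"
```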

&lt;ul&gt;
&lt;li&gt;metricbeat01&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For Metricbeat, the bind mounts expose host system metrics and information so they can be collected and sent to Elasticsearch.&lt;/p&gt;
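A sketch of a matching `metricbeat.yml` (assumed, not the article's exact file) shows why the `/hostfs` mounts matter — the system module is pointed at the host's filesystems rather than the container's:

```yaml
metricbeat.modules:
  - module: system
    metricsets: [cpu, memory, network, filesystem]
    period: 10s
    hostfs: /hostfs         # read host metrics via the /hostfs bind mounts
  - module: docker
    metricsets: [container, cpu, memory]
    hosts: ["unix:///var/run/docker.sock"]
    period: 10s

output.elasticsearch:
  hosts: ["${ELASTIC_HOSTS}"]
  username: "${ELASTIC_USER}"
  password: "${ELASTIC_PASSWORD}"
  ssl.certificate_authorities: ["certs/ca/ca.crt"]
```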

&lt;ul&gt;
&lt;li&gt;logstash01&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For Logstash, the bind-mounted config file sets up the ingest plugins and the output to Elasticsearch. We use a basic starter configuration, but the config file is where you set up formatting. You can drop log files into the ingest data path to have them read using the strategy described in the config file.&lt;/p&gt;
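A starter `logstash.conf` of that shape could look like this sketch (the input path mirrors the `logstash_ingest_data` bind mount above; the index name is an assumption):

```conf
input {
  file {
    path => "/usr/share/logstash/ingest_data/*.log"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["${ELASTIC_HOSTS}"]
    user => "${ELASTIC_USER}"
    password => "${ELASTIC_PASSWORD}"
    cacert => "certs/ca/ca.crt"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```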

&lt;ul&gt;
&lt;li&gt;fleet-server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Fleet server manages agents and their integrations.&lt;/p&gt;
&lt;h3&gt;
  
  
  Running the Stack
&lt;/h3&gt;

&lt;p&gt;1 - Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait until it finishes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyi5xf470seqrok8obiy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyi5xf470seqrok8obiy.png" alt="Output" width="791" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2 - Open &lt;a href="https://localhost:5601/app/home#/" rel="noopener noreferrer"&gt;Kibana&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the username and password set up in the &lt;code&gt;.env&lt;/code&gt; file.&lt;/p&gt;
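The compose file references several variables that must come from a `.env` file next to it. A sketch with placeholder values (every value here is an example, not a default; pick your own passwords and keys):

```env
STACK_VERSION=8.11.2
CLUSTER_NAME=docker-cluster
LICENSE=basic
ELASTIC_PASSWORD=changeme
KIBANA_PASSWORD=changeme
ENCRYPTION_KEY=c34d38b3a14956121ff2170e5030b471551370178f43e5626eec58b04a30fae2
ELASTIC_APM_SECRET_TOKEN=supersecrettoken
ES_PORT=9200
KIBANA_PORT=5601
FLEET_PORT=8220
APMSERVER_PORT=8200
ES_MEM_LIMIT=1073741824
KB_MEM_LIMIT=1073741824
```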

&lt;p&gt;3 - Go to Fleet/Management&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm1j0h7rn0apqsr9nqxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm1j0h7rn0apqsr9nqxh.png" alt="Fleet" width="472" height="1370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4 - Go to Settings&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobh2i4rrnvvyuuhemk6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobh2i4rrnvvyuuhemk6f.png" alt="Settings" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5 - Edit the agent output&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymnbu4gl8riewcjdbfp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymnbu4gl8riewcjdbfp5.png" alt="Edit Output" width="800" height="1035"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will need 3 items to configure the certificate.&lt;/p&gt;

&lt;p&gt;Url: &lt;code&gt;https://es01:9200&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Elastic CA trusted Fingerprint:&lt;/p&gt;

&lt;p&gt;To get this value, compute the fingerprint of the CA certificate from the Elasticsearch container.&lt;/p&gt;

&lt;p&gt;First, copy the certificate to a temp folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;cp &lt;/span&gt;woovi-es01-1:/usr/share/elasticsearch/config/certs/ca/ca.crt /tmp/.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then compute the certificate's SHA-256 fingerprint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl x509 -fingerprint -sha256 -noout -in /tmp/ca.crt | awk -F"=" {' print $2 '} | sed s/://g
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will be something like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;5A7464CEABC54FA60CAD3BDF16395E69243B827898F5CCC93E5A38B8F78D5E7&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Use this value in the Elastic CA trusted fingerprint field.&lt;/p&gt;

&lt;p&gt;Now, in the Advanced YAML configuration, you will need the certificate itself.&lt;/p&gt;

&lt;p&gt;Print the certificate you copied from the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /tmp/ca.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then edit it to match the format in this example &lt;a href="https://github.com/caiocampoos/elk-local-dev/blob/main/observability/exampl-cat-file.yml" rel="noopener noreferrer"&gt;file&lt;/a&gt;.&lt;br&gt;
Be careful with the indentation; YAML is unforgiving about it, and this step is usually to blame when something goes wrong.&lt;/p&gt;
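The Advanced YAML configuration ends up holding the CA certificate inline. The general shape is the sketch below (with the certificate body elided — paste the actual contents of `/tmp/ca.crt` there, keeping every line at the same indentation level):

```yaml
ssl:
  certificate_authorities:
    - |
      -----BEGIN CERTIFICATE-----
      ...contents of /tmp/ca.crt...
      -----END CERTIFICATE-----
```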

&lt;p&gt;Save and apply the settings. After a few seconds you should see system metrics such as memory and CPU usage on the agent page &lt;a href="https://localhost:5601/app/fleet/agents" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That indicates your Fleet server is configured correctly.&lt;/p&gt;

&lt;p&gt;Now, in your local development, you only need the APM configuration. Go over the APM agent configuration for your desired language; most look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  serviceName: 'my-service-name',
  secretToken: 'supersecrettoken',
  serverUrl: 'http://localhost:8200',
  environment: 'my-environment'
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See this agent integration &lt;a href="https://localhost:5601/app/integrations/detail/apm-8.11.2/overview" rel="noopener noreferrer"&gt;page&lt;/a&gt; for additional information.&lt;/p&gt;

&lt;p&gt;We're constantly seeking ways to automate and streamline processes. If you have any ideas on how to enhance this workflow, please don't hesitate to share them with us!&lt;/p&gt;




&lt;p&gt;Woovi&lt;br&gt;
Woovi is a startup that enables shoppers to pay as they like. To make this possible, Woovi provides instant payment solutions for merchants to accept orders.&lt;/p&gt;

&lt;p&gt;If you want to work with us, we are &lt;a href="https://woovi.com/jobs/" rel="noopener noreferrer"&gt;hiring&lt;/a&gt;!&lt;/p&gt;




</description>
      <category>devops</category>
      <category>development</category>
      <category>kibana</category>
      <category>programming</category>
    </item>
    <item>
      <title>Automate homelab microK8s cluster provisioning with Vagrant and Ansible</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Mon, 05 Feb 2024 11:47:54 +0000</pubDate>
      <link>https://dev.to/caiocampoos/automate-homelab-microk8s-cluster-provisioning-with-vagrant-and-ansible-4idc</link>
      <guid>https://dev.to/caiocampoos/automate-homelab-microk8s-cluster-provisioning-with-vagrant-and-ansible-4idc</guid>
      <description>&lt;p&gt;Getting quick feedback is essential while developing. When it comes to setting up infrastructure, it often involves tons of scripting and is good to have a simple workflow you can iterate on. This tutorial is all about giving you a solid head start if you're itching to dive into DevOps and build your own homelab. We'll be installing microk8s, a fully compliant and up-to-date Kubernetes distribution that boasts minimal machine requirements.&lt;/p&gt;

&lt;p&gt;To manage our VMs we will use Vagrant, a complete CLI tool for managing virtual machines that provides a simple but complete workflow. Vagrant is very good at giving our project context: instead of configuring each VM individually and running provisioning scripts against them, we have one declarative file that describes the project. &lt;/p&gt;

&lt;h3&gt;
  
  
  Vagrant installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;virtualbox

&lt;span class="nv"&gt;$ &lt;/span&gt;brew cask &lt;span class="nb"&gt;install &lt;/span&gt;vagrant
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Vagrantfile
&lt;/h3&gt;

&lt;p&gt;Vagrant uses a Vagrantfile to describe the machines the project is going to need. Here we have our provisioning script, which updates the package repositories, installs Ansible, and sets up host records. For each machine we describe, the provisioning script runs when we call "vagrant up".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -*- mode: ruby -*-
# vi: set ft=ruby :
$script = &amp;lt;&amp;lt;-SCRIPT
apt-get update
apt-get install -y ansible sshpass
echo "192.168.56.11 controller" &amp;gt;&amp;gt; /etc/hosts
echo "192.168.56.12 node-1" &amp;gt;&amp;gt; /etc/hosts
SCRIPT
Vagrant.configure("2") do |config|
  config.vm.define "controller" do |controller|
  controller.vm.box = "ubuntu/focal64"
  controller.vm.network "private_network", ip: "192.168.56.11"
  controller.vm.hostname = "controller"
  controller.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end
 controller.vm.provision "shell", inline: $script
end
config.vm.define "node1" do |node1|
  node1.vm.box = "ubuntu/focal64"
  node1.vm.network "private_network", ip: "192.168.56.12"
  node1.vm.hostname = "node1"
  node1.vm.provider "virtualbox" do |vb|
   vb.memory = "1024"
  end
  node1.vm.provision "shell", inline: $script
end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  if you need an IP range outside Vagrant's default '192.168.56.0/21', create a file '/etc/vbox/networks.conf' and add a line with the range you need, e.g. * 192.168.32.0/24.&lt;/p&gt;
&lt;/blockquote&gt;
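&lt;p&gt;For example (a sketch; adjust the range to whatever your Vagrantfile actually uses), '/etc/vbox/networks.conf' would contain:&lt;/p&gt;

```
* 192.168.32.0/24
```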

&lt;p&gt;Next we call vagrant up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vagrant up controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have already run vagrant up and need to make changes to any VM described in the Vagrantfile, run the command again with the --provision flag, which re-runs the provisioning script for the targeted VM.&lt;/p&gt;
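&lt;p&gt;For example (command sketch; substitute the name of the VM you are targeting):&lt;/p&gt;

```shell
# re-run provisioning on an already-created VM
$ vagrant up controller --provision

# or, if the VM is already running
$ vagrant provision controller
```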

&lt;h3&gt;
  
  
  Ansible playbook
&lt;/h3&gt;

&lt;p&gt;For k8s provisioning we will be using Ansible. Ansible is a powerful automation tool for provisioning and managing resources; we can automate rolling updates and manage the state of our deployments in a declarative file. Here we use it in its simplest form, with a playbook that installs microk8s and adds an alias and completion for kubectl.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;---&lt;/span&gt;
- name: Install Microk8s
  hosts: localhost
  gather_facts: &lt;span class="nb"&gt;false
  &lt;/span&gt;become: &lt;span class="nb"&gt;true
  &lt;/span&gt;tasks:
    - name: Install microk8s
      snap:
        name: microk8s
        state: present
        classic: &lt;span class="nb"&gt;yes&lt;/span&gt;
    - name: Add &lt;span class="nb"&gt;alias &lt;/span&gt;to kubectl
      become: &lt;span class="nb"&gt;false
      &lt;/span&gt;lineinfile:
        path: &lt;span class="s1"&gt;'{{ lookup("env", "HOME") }}/.bashrc'&lt;/span&gt;
        regexp: &lt;span class="s1"&gt;'^alias kubectl='&lt;/span&gt;
        line: &lt;span class="s1"&gt;'alias kubectl="microk8s kubectl"'&lt;/span&gt;
        state: present
    - name: Add bash completion &lt;span class="k"&gt;for &lt;/span&gt;kubectl
      become: &lt;span class="nb"&gt;false
      &lt;/span&gt;lineinfile:
        path: &lt;span class="s1"&gt;'{{ lookup("env", "HOME") }}/.bashrc'&lt;/span&gt;
        regexp: &lt;span class="s1"&gt;'^source \&amp;lt;\(kubectl'&lt;/span&gt;
        line: &lt;span class="s1"&gt;'source &amp;lt;(kubectl completion bash)'&lt;/span&gt;
        state: present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we just need to access the VM using SSH. Vagrant already sets up a user with SSH access to the machines, so we only need to run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant ssh controller

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ansible is already installed (we did that in the provisioning script), so now we only need to run the playbook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible-playbook /vagrant/homelab-microk8s.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After Ansible finishes running the playbook, we need to source .bashrc; you can simply log out and log back in with vagrant ssh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;exit&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;vagrant ssh controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can use kubectl with completion and get info of the services running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
controller   Ready    &amp;lt;none&amp;gt;   60m   v1.28.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -A

NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   calico-node-sfngb                        1/1     Running   0          61m
kube-system   coredns-864597b5fd-drm79                 1/1     Running   0          60m
kube-system   calico-kube-controllers-77bd7c5b-vnx5j   1/1     Running   0          60m


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;References:&lt;br&gt;
&lt;a href="https://developer.hashicorp.com/vagrant/docs/vagrantfile" rel="noopener noreferrer"&gt;https://developer.hashicorp.com/vagrant/docs/vagrantfile&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html" rel="noopener noreferrer"&gt;https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/caiocampoos/homelab-k8s/tree/main" rel="noopener noreferrer"&gt;Here&lt;/a&gt; you will find a repo with the files used in this guide.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@jordanharrison?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Jordan Harrison&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/blue-utp-cord-40XgDxBfYXM?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to effectively use retry policies with BullJs/BullMQ</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Wed, 18 Oct 2023 01:44:51 +0000</pubDate>
      <link>https://dev.to/woovi/how-to-effectively-use-retry-policies-with-bulljsbullmq-45h9</link>
      <guid>https://dev.to/woovi/how-to-effectively-use-retry-policies-with-bulljsbullmq-45h9</guid>
      <description>&lt;p&gt;Jobs play a pivotal role in the majority of distributed systems these days. They allow us to achieve scalability at the cost of time. While it may take more time to process all the tasks in our queue, rest assured that we will process each one eventually. Except, if they fail.&lt;/p&gt;

&lt;p&gt;In distributed systems, it's generally a good practice to treat events as disposable and immutable. Unless you have a specific use case that necessitates storing events, this should be your default approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to deal with failed jobs
&lt;/h2&gt;

&lt;p&gt;When a job fails, there are several ways to address the situation. Reprocessing is often the first option that comes to mind, but it typically involves recreating the event, requiring user input, or asking the user to repeat an action. This can be detrimental to the system and user experience since unnecessary repetition is generally undesirable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a retry policy?
&lt;/h2&gt;

&lt;p&gt;Another approach to handling failing jobs that require reprocessing is through the use of retry policies. A retry policy comprises a set of rules that automate job reprocessing. Typically, these rules define the time intervals between each retry and the maximum number of retry attempts.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to configure in BullMQ/JS?
&lt;/h2&gt;

&lt;p&gt;In BullMQ/BullJS, you configure this directly when adding a job to a queue using the job options object during job creation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;test-retry&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;bar&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;attempts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;backoff&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;exponential&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The 'attempts' parameter determines how many times a job will be retried, starting from the first failure. The 'backoff' option defines the relationship between 'attempts' and the time intervals, described using 'type' and 'delay'.&lt;/p&gt;

&lt;p&gt;The 'type' specifies the strategy for handling the retry attempts, while 'delay' represents the time intervals between these attempts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fixed
&lt;/h3&gt;

&lt;p&gt;When the backoff type is set to 'fixed', the retry attempts will be evenly spaced by the specified delay time. For instance, in the example below, Bull will make 7 attempts, waiting 10 seconds between each one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;attempts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;backoff&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fixed&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Exponential
&lt;/h3&gt;

&lt;p&gt;When the backoff type is set to 'exponential', the delay between retry attempts doubles each time, starting from the specified delay. In the example below, Bull will make 7 attempts, waiting 10 seconds after the first failure, 20 seconds after the second, 40 seconds after the third, and so on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;attempts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;backoff&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;exponential&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
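&lt;p&gt;To make the schedule concrete, here is a small sketch (my own illustration of BullMQ's built-in exponential formula, delay * 2^(attempts made - 1); not code from this article) that computes the wait before each retry:&lt;/p&gt;

```javascript
// Sketch: BullMQ's built-in exponential backoff waits
// delay * 2^(attemptsMade - 1) milliseconds before each retry.
const exponentialDelay = (delay, attemptsMade) =>
  delay * Math.pow(2, attemptsMade - 1);

// With delay: 10000, the first four retries wait:
const waits = [1, 2, 3, 4].map((n) => exponentialDelay(10000, n));
console.log(waits); // [ 10000, 20000, 40000, 80000 ]
```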



&lt;p&gt;If the only issue you are trying to solve is handling errors and retries automatically that would be enough, but what if you want to control when a job fails?&lt;/p&gt;

&lt;h2&gt;
  
  
  How to explicitly fail a job?
&lt;/h2&gt;

&lt;p&gt;The retry policy will only apply to jobs that explicitly fail. Jobs are considered failed when they've reached their maximum number of stalls (refer to the documentation) or when an error occurs while a worker is processing the job.&lt;/p&gt;

&lt;p&gt;You can create custom error classes and explicitly instantiate them based on logic your code controls. &lt;/p&gt;

&lt;p&gt;For example, if I make a request and the status code is anything other than 200:&lt;/p&gt;

&lt;p&gt;My error class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;WorkerRetryError&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;WorkerRetryError&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isWorkerRetryError&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
  &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nx"&gt;WorkerRetryError&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using an error class here is important so you can use narrowing with type guards at runtime to handle the custom errors you will be throwing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getHasError&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;hasError&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getHasError&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;hasError&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;WorkerRetryError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="s2"&gt;`Webhook failed with status code 
        &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
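&lt;p&gt;A minimal sketch (hypothetical names, not code from this article) of how the type guard can narrow errors in a worker's error handling, letting only the errors meant to trigger the retry policy be treated as retryable:&lt;/p&gt;

```javascript
class WorkerRetryError extends Error {
  constructor(message) {
    super(message);
    this.name = 'WorkerRetryError';
  }
}

const isWorkerRetryError = (err) => err instanceof WorkerRetryError;

// Decide what to do with an error thrown while processing a job:
// retryable errors get requeued by the retry policy, anything else
// is treated as a permanent failure.
const classifyError = (err) => {
  if (isWorkerRetryError(err)) {
    return 'retry';
  }
  return 'fail';
};

console.log(classifyError(new WorkerRetryError('status 500'))); // retry
console.log(classifyError(new TypeError('bad input')));         // fail
```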



&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@markkoenig?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Mark König&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/a-yellow-sign-that-says-try-3-on-it-OBPLW16Lp_4?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The right decisions can simplify and make it easier to scale</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Mon, 07 Aug 2023 01:07:53 +0000</pubDate>
      <link>https://dev.to/woovi/the-right-decisions-can-simplify-and-make-it-easier-to-scale-1mg9</link>
      <guid>https://dev.to/woovi/the-right-decisions-can-simplify-and-make-it-easier-to-scale-1mg9</guid>
      <description>&lt;p&gt;A couple of weeks ago, I talked about how Woovi uses MongoDB Change Streams to emit events in our event-driven architecture. Now, I'm going to talk about the repercussions of this kind of architectural decision and how much value lies solely in decision-making and good software development principles.&lt;/p&gt;

&lt;h2&gt;
  
  
  Principle
&lt;/h2&gt;

&lt;p&gt;Principles help narrow down the options when deciding how to implement a new feature or adopt a new technology.&lt;br&gt;
At Woovi, we adopt an event-driven architecture to scale our platform. This means that in every design decision, we consider whether it aligns with this perspective.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Is there an organic event for that implementation, meaning the user produces some behavior that can be captured as an event? Or do we need to produce an event based on what we want to happen?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Does it have to process fast, or is it costly?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Does it need to happen as fast as possible, or can it be delayed?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Does it need to be atomic?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Does it need to be idempotent?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can take a different approach for each of those questions, but it is common for the process to be very similar for every implementation due to the event-driven principle.&lt;/p&gt;
&lt;h2&gt;
  
  
  The problem and decision
&lt;/h2&gt;

&lt;p&gt;Our platform is very data-heavy, and we have always wanted to offer the best search feature for our users; being able to find anything from a single search bar is our goal with this feature. For that, we decided to use Elasticsearch to power our search solution. &lt;br&gt;
The first problem we encountered was: how do we index all our data?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, we need to be event-driven, so no cron job polling all of our data from the database and indexing it.&lt;/li&gt;
&lt;li&gt;Second, we need to be fast: as soon as data is created on our platform, it needs to be searchable by the user. We deal with financial transactions, so speed is essential.
These two points complement each other: if I need to be event-driven, I have to find a way to index data as it is created on our platform.&lt;/li&gt;
&lt;li&gt;Third, we can't create a point of stress on our system, so it needs to be lean. We can't tax our database with heavy queries in order to create events.&lt;/li&gt;
&lt;li&gt;And last, we need to be resilient: we must deliver all the data the user wants access to.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The solution was to create events from the database itself.&lt;/p&gt;
&lt;h2&gt;
  
  
  The publisher
&lt;/h2&gt;

&lt;p&gt;A couple of weeks ago, we talked about this package, and it has matured a lot since then. Now, it consists of a changeStreamListen method that takes a mongoose Model for the change stream and a function that will receive the data from the stream. This approach allows us to scale by adding multiple methods inside this function for the same model and different types of streams.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ChangeStreamDeleteDocument&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;ChangeStreamUpdateDocument&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;ChangeStreamInsertDocument&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongodb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Model&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongoose&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;changeStreamMiddleware&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./changeStreamMiddleware&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;AllowedChangeStreams&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;ChangeStreamInsertDocument&lt;/span&gt;
  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;ChangeStreamUpdateDocument&lt;/span&gt;
  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;ChangeStreamDeleteDocument&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;OperationType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;AllowedChangeStreams&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;operationType&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;ChangeStreamData&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;unknown&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;operationType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;OperationType&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;wallTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;ns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;unknown&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;changeStreamListen&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Model&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;fn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChangeStreamData&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;watch&lt;/span&gt;&lt;span class="p"&gt;([],&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;updateLookup&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;change&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;changeStreamMiddleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;fn&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the changeStreamMiddleware method encapsulating the model and the function? This is a Higher-Order Function (HoF) designed to attach APM (Application Performance Monitoring) and index transactions in our observability tools, reducing the risk of intrusion or failure for the processes being watched. More on that in a future article. &lt;/p&gt;
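&lt;p&gt;While the real changeStreamMiddleware is left for a future article, a minimal sketch of the idea (a hypothetical implementation; logging stands in for the APM instrumentation) looks like this:&lt;/p&gt;

```javascript
// Higher-order function: wraps a change handler so cross-cutting
// concerns (observability, error isolation) stay out of the handler,
// and a failing handler never kills the stream listener.
const changeStreamMiddleware = (collectionName, fn) => (change) => {
  try {
    console.log(`change on ${collectionName}: ${change.operationType}`);
    fn(change);
  } catch (err) {
    // isolate handler failures from the watched process
    console.error(`handler for ${collectionName} failed:`, err.message);
  }
};

// Usage: the wrapped handler is what would be passed to stream.on('change', ...)
const seen = [];
const handler = changeStreamMiddleware('charges', (c) => seen.push(c.operationType));
handler({ operationType: 'insert' });
console.log(seen); // [ 'insert' ]
```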

&lt;p&gt;After that, we have setupSubscribers, which instantiates one subscriber for each model we want to watch and passes a function to handle the data from the events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Charge&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@woovi/charge&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Company&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@woovi/company&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Customer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@woovi/customer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PixTransaction&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@woovi/transaction&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;User&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@woovi/user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;changeStreamListen&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./changeStreamListen&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;chargeChangeStreamHandler&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../charge/chargeChangeStreamHandler&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;handleCompanySubscriberEvent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../company/handleCompanySubscriberEvent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;handleCustomerSubscriberEvent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../customer/handleCustomerSubscriberEvent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;handleTransactionSubscriberEvent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../transaction/handleTransactionSubscriberEvent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;handleUserSubscriberEvent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../user/handleUserSubscriberEvent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;setupSubscribers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;changeStreamListen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Charge&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;chargeChangeStreamHandler&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;changeStreamListen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;PixTransaction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;handleTransactionSubscriberEvent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;changeStreamListen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Customer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;handleCustomerSubscriberEvent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;changeStreamListen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;handleUserSubscriberEvent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;changeStreamListen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Company&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;handleCompanySubscriberEvent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This implementation performs really well and delivers events very quickly, as soon as the change reaches consensus among the replica set members.&lt;/p&gt;

&lt;p&gt;This allows us to achieve one of the goals for the search implementation: fast indexing times. It takes less than one second for a transaction or charge to be available in Elasticsearch from a stream event.&lt;/p&gt;

&lt;p&gt;It is very resilient when compared with other event-driven implementations.&lt;/p&gt;

&lt;p&gt;We use BullJs as our event bus and worker implementation. One concept we always consider while working with jobs is that a job can always fail. For that reason, each queue has its own retry and rate policies. The publisher is intended to be used for processes that require both speed and resilience in execution.&lt;/p&gt;
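
&lt;p&gt;The "a job can always fail" mindset can be sketched with a tiny retry helper. This is illustrative only: BullJs expresses the same idea declaratively through per-queue job options such as &lt;code&gt;attempts&lt;/code&gt; and &lt;code&gt;backoff&lt;/code&gt;, not a hand-rolled loop like this:&lt;/p&gt;

```javascript
// Illustrative retry helper: run a job, retrying up to `attempts` times
// with a growing delay between tries before giving up.
const runWithRetry = async (job, { attempts = 3, delayMs = 100 } = {}) => {
  let attempt = 0;
  // Retry until the job succeeds or the attempt budget is spent.
  for (;;) {
    attempt += 1;
    try {
      return await job();
    } catch (error) {
      if (attempt === attempts) throw error;
      // Linear backoff here; real queues often use exponential backoff.
      await new Promise((resolve) => setTimeout(resolve, delayMs * attempt));
    }
  }
};
```

&lt;p&gt;Each queue owning its own policy means a flaky downstream service can be retried aggressively without affecting queues that must fail fast.&lt;/p&gt;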

&lt;p&gt;While using jobs, you always have to implement two sides of the process: producing the event and consuming it. By using the publisher, we can reduce the complexity of our processes and become even more resilient against unintended effects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits and Concerns of using Change Data Capture
&lt;/h2&gt;

&lt;p&gt;Almost every database has some kind of event capture implementation. There are many ways to achieve CDC (Change Data Capture), but log-based approaches are generally considered the better choice; examples include write-ahead logs in PostgreSQL, MySQL binary logs, and the MongoDB oplog. These methods offer more benefits than database triggers or, even worse, a query-based approach. They are usually more reliable, since they utilize the database driver to perform the necessary security checks before emitting an event. Additionally, log-based approaches have a low impact on database performance; in the case of MongoDB it is close to zero. Moreover, they typically have a low cost of implementation and add minimal complexity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foewtphzrfw2i7cyonoba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foewtphzrfw2i7cyonoba.png" alt="CDC on MongoDB" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the other hand, the downsides of this approach include the lack of information about how a change takes place and its context. Much of the logic is hidden inside the database layer, making it inaccessible from the publisher's side and limiting the implementation from that perspective. Another important point is that change streams are infinite: as long as you are watching a model for changes, they will be streamed continuously. Hence, it becomes crucial to take software development principles and good practices even more seriously.&lt;/p&gt;

&lt;p&gt;Since every single event on your database is being streamed, every query you run as a result is even more important and has to be optimized with that in mind. It's quite easy to create a loop on an update event, but those are easy to avoid. On the other hand, heavy and unnecessary queries are easy to overlook. So keep that in mind: simple and lean code is best for this kind of implementation.&lt;/p&gt;
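
&lt;p&gt;As a hypothetical illustration of breaking such loops, suppose the indexer stamps each document with an &lt;code&gt;indexedAt&lt;/code&gt; field after pushing it to Elasticsearch (an assumed field, not our actual schema). The handler can then skip stamp-only updates so it never re-triggers itself:&lt;/p&gt;

```javascript
// Hypothetical guard: `indexedAt` is an assumed bookkeeping field written
// by the indexer itself. Re-processing that stamp-only update would loop
// forever, so the handler skips it. `operationType` and
// `updateDescription.updatedFields` are real MongoDB change event fields.
const shouldProcess = (change) => {
  if (change.operationType !== 'update') return true;
  const fields = Object.keys(
    (change.updateDescription && change.updateDescription.updatedFields) || {}
  );
  // Skip updates whose only change is our own bookkeeping field.
  return !(fields.length === 1 && fields[0] === 'indexedAt');
};
```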

&lt;h2&gt;
  
  
  Enabling more Products
&lt;/h2&gt;

&lt;p&gt;While jobs are a great way to scale asynchronous processes, it is possible to end up with too many of them very fast. This is a common problem in event-driven systems. Woovi has more than 100 jobs, and we manage well with a very mature codebase. The problem, however, is that this does not scale forever. To move forward, we need more ways to scale events so we can build more products for our users.&lt;br&gt;
This implementation has enabled not just our search project (which is soon moving out of beta) but much more to come.&lt;/p&gt;

&lt;p&gt;Now, we have a strong base to implement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;MQTT integration&lt;br&gt;
We are implementing MQTT to integrate our Maquininha and solve an old problem with Windows machines. This will allow our customers to print QR codes with ease using any commercial thermal printer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;PubSub&lt;br&gt;
We are implementing real-time updates to charges and transactions in our platform. This way, customers can be notified live and track information in real time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Analytics&lt;br&gt;
We are creating a complete analytics pipeline to offer data-driven insights to ourselves and our customers. This implementation uses the publisher and jobs to push data to multiple pipelines.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And much more to come.&lt;/p&gt;

&lt;p&gt;We are shipping a lot every day. If you are interested in joining a fast-growing startup, we are  &lt;a href="https://woovi.com/jobs/" rel="noopener noreferrer"&gt;hiring!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@possessedphotography?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Possessed Photography&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/M7V9rglHaFE?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Stack shopping and repo setup</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Wed, 02 Aug 2023 22:02:41 +0000</pubDate>
      <link>https://dev.to/caiocampoos/stack-shopping-and-repo-setup-51ac</link>
      <guid>https://dev.to/caiocampoos/stack-shopping-and-repo-setup-51ac</guid>
      <description>&lt;p&gt;In the last &lt;a href="https://dev.to/caiocampoos/building-a-business-out-of-maps-and-statistics-what-will-go-wrong-3pn7"&gt;article&lt;/a&gt;, we discussed the vision for the product and outlined its basic functionalities to guide development. In this article, I will delve into the first steps and fundamental tasks required, as well as showcase our basic development environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chores
&lt;/h2&gt;

&lt;p&gt;A couple of things to consider when building a company are tax-related matters, legislation, IP registration, and domain purchase. I have already taken care of most of these aspects, but I won't focus on that here since they can vary significantly depending on your location. The truth is, you need a lot less than you might imagine to start, so the key is to focus on getting started and figure out the more tedious parts later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stack Shopping
&lt;/h2&gt;

&lt;p&gt;You are a new company, you have no friends, and you need to move fast and alone (not me, because I have you guys). For the MVP, prioritize the stack that lets you move fastest, as long as it meets the minimum functionality requirements and the available tooling is sufficient. Speed should be the determining factor that breaks any ties in your decision-making process.&lt;/p&gt;

&lt;p&gt;The main part of our product will consist of one simple API and one simple web app. Given the constraint of limited staff, our priority is to keep the development process concise and to facilitate Developer Experience (DX), testing, and iteration. For these reasons, I have decided to use TypeScript for both the API and the app, since it is the stack I work with daily and allows seamless context switching without compromising the development experience. Additionally, we can use a monorepo to keep our platform organized, a decision that gives us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code Cohesion&lt;/li&gt;
&lt;li&gt;Simplified DX&lt;/li&gt;
&lt;li&gt;Simplified Testing Environment&lt;/li&gt;
&lt;li&gt;Type sharing&lt;/li&gt;
&lt;li&gt;Resource Sharing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feucy8mnr9tqqyom6xn93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feucy8mnr9tqqyom6xn93.png" alt="Monorepo" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The primary objective is simplicity, so we've decided to use KoaJS as our API framework for the back-end. KoaJS is a lightweight and straightforward choice that simply works. This API will handle fundamental user and business logic. If we encounter scenarios that require more specialized functionalities, we can create new packages within the monorepo and import them here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Koa&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Request&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;koa&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Router&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;koa-router&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;bodyParser&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;koa-bodyparser&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;cors&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@koa/cors&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;


&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Koa&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;router&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="mi"&gt;1&lt;/span&gt;
  &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;cors&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;credentials&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
  &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;bodyParser&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
  &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;router&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;routes&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
  &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;router&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;allowedMethods&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

  &lt;span class="c1"&gt;//// healthcheck endpoint&lt;/span&gt;
  &lt;span class="nx"&gt;router&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;any&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;lets map!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Regarding the web app, I've selected Next.js because of its simplicity and ease of deployment. It's straightforward and provides a smooth development experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  DX
&lt;/h2&gt;

&lt;p&gt;Currently, our development experience is kept simple. To run the project seamlessly in our local development environment, we will utilize TurboRepo and Docker. With just one command, we can have the project up and running smoothly.&lt;/p&gt;

&lt;p&gt;To run with Turborepo, just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
pnpm run dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the project with Docker, have Docker and docker-compose installed and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Docker setup in a monorepo
&lt;/h2&gt;

&lt;p&gt;In a monorepo, there are several ways to set up deployment depending on your specific requirements. For now, we'll opt for a simple approach: using one Docker container for each package. We will create a docker-compose file to coordinate these packages and ensure they communicate through a Docker network.&lt;/p&gt;

&lt;p&gt;This setup allows us to manage and deploy each package independently while maintaining clear communication between them through the designated Docker network. As we progress, we can explore more complex deployment strategies, but for the initial stages, this straightforward method should suffice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;
&lt;span class="nx"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;3&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="nx"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;web&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;web&lt;/span&gt;
    &lt;span class="nx"&gt;build&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;
      &lt;span class="nx"&gt;dockerfile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;apps&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;web&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="nx"&gt;restart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;always&lt;/span&gt;
    &lt;span class="nx"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;
    &lt;span class="nx"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;app_network&lt;/span&gt;

  &lt;span class="nx"&gt;api&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;api&lt;/span&gt;
    &lt;span class="nx"&gt;build&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;
      &lt;span class="nx"&gt;dockerfile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;apps&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;api&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="nx"&gt;restart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;always&lt;/span&gt;
    &lt;span class="nx"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;3001&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;3001&lt;/span&gt;
    &lt;span class="nx"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;app_network&lt;/span&gt;


&lt;span class="nx"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;app_network&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we move forward, we'll have the opportunity to implement some exciting building blocks that will enhance our development process and productivity. With the right tools and approaches, we can accelerate our progress, making the development experience more enjoyable and efficient for everyone involved.&lt;/p&gt;

&lt;p&gt;Next steps are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up the database&lt;/li&gt;
&lt;li&gt;Basic API endpoints&lt;/li&gt;
&lt;li&gt;Testing Environment&lt;/li&gt;
&lt;li&gt;Basic CI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The repository for this project is on &lt;a href="https://github.com/caiocampoos/mapstat" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The repository is public, so you can check all the issues and pull requests to track the project's progress. Feel free to suggest or even make a contribution.&lt;/p&gt;

&lt;p&gt;Find me on &lt;a href="https://twitter.com/caiocbrr" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; for a good time, and let's build something.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@mikepetrucci?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Mike Petrucci&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/c9FQyqIECds?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a business out of Maps and Statistics. What 'WILL' go wrong?</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Mon, 17 Jul 2023 11:59:03 +0000</pubDate>
      <link>https://dev.to/caiocampoos/building-a-business-out-of-maps-and-statistics-what-will-go-wrong-3pn7</link>
      <guid>https://dev.to/caiocampoos/building-a-business-out-of-maps-and-statistics-what-will-go-wrong-3pn7</guid>
      <description>&lt;p&gt;Since I remember understanding the concept of software, built by someone to somebody who need something i am deeply fascinated by it.&lt;/p&gt;

&lt;p&gt;I am not new to the game of building businesses; having been part of both failures and successes, I am somewhat aware of what a business needs to achieve minimal success. What I am new to is the concept of building and learning in public: using public attention both as leverage to grow an organic audience for what can become a successful business, and as an early case of skin in the game, since public failures are quite hurtful and carry a significant moral cost.&lt;/p&gt;

&lt;p&gt;So, with this series of articles, I want to achieve the following goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build a simple viable business (make some money, maybe?)&lt;/li&gt;
&lt;li&gt;Learn as much as I can&lt;/li&gt;
&lt;li&gt;Share experiences so others can learn as well&lt;/li&gt;
&lt;li&gt;Make some money?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The idea and motivation
&lt;/h3&gt;

&lt;p&gt;Most ideas I have for projects are, in fact, my wife's ideas. She is a Professor of Environmental Psychology at the Federal University of Roraima (UFRR), and she loves to tell me what I should build with software. &lt;/p&gt;

&lt;p&gt;This time, she needs to gather a huge list of statistical data from schools in major cities of Brazil to study the impact of green areas in school life. &lt;/p&gt;

&lt;p&gt;I believe good products are those that solve real problems we have in our lives so my motivation aligns with hers for this one, and I really think that would be a great product to use not just in academic life but in many other areas. &lt;/p&gt;

&lt;h3&gt;
  
  
  Product
&lt;/h3&gt;

&lt;p&gt;The product I am going to develop is a simple webpage that delivers statistical data based on an area the user draws in a map widget. That's it: map to statistics will be the basis of this product.&lt;/p&gt;

&lt;p&gt;The website will consist of two sections after users log in. First, they will be greeted by a map widget, featuring a search bar to look up specific locations and a toolbox to draw polygons.&lt;/p&gt;

&lt;p&gt;The user will draw areas on the map from which they want to obtain statistical analysis and data. They can save these areas with a name attached to them. After saving, a card displaying the basic data of the drawn polygon will be shown in a sidebar, which will list all polygons created by the user.&lt;/p&gt;
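
&lt;p&gt;One plausible shape for a saved area (names and coordinates below are made up for illustration) is a named GeoJSON Polygon, and the shoelace formula can supply a quick planar area for the card:&lt;/p&gt;

```javascript
// Illustrative only: the name and coordinates are made up. A GeoJSON
// Polygon ring must repeat its first point at the end to close the ring.
const savedArea = {
  name: 'Campus test area',
  geometry: {
    type: 'Polygon',
    coordinates: [[
      [-60.67, 2.82],
      [-60.66, 2.82],
      [-60.66, 2.83],
      [-60.67, 2.83],
      [-60.67, 2.82], // closing point equals the first
    ]],
  },
};

// Shoelace formula: planar area of a closed ring, usable for the quick
// numbers on the card (a real product would project to meters first).
const ringArea = (ring) => {
  let sum = 0;
  for (let i = 0; i + 1 !== ring.length; i += 1) {
    const [x1, y1] = ring[i];
    const [x2, y2] = ring[i + 1];
    sum += x1 * y2 - x2 * y1;
  }
  return Math.abs(sum / 2);
};
```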

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sd4ivxd2imargnhrxuu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sd4ivxd2imargnhrxuu.png" alt="Map Section" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The cards will display a 'Statistics' button upon saving, which will initially be disabled. This button will be enabled when the reports for the selected types of statistical analysis and data sources, as specified by the user, are ready. This functionality will be handled by an asynchronous worker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi49bawkbefhl0nqy9fo1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi49bawkbefhl0nqy9fo1.png" alt="Disabled Button" width="221" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the report is ready, the button will be shown as enabled, and the user will have access to the full report on that area.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87ddmabg5duddnzqpa0l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87ddmabg5duddnzqpa0l.png" alt="Enabled Button" width="221" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the report sections, the user will have access to one or multiple reports, depending on the data sources they have subscribed to. We will discuss this in more detail at a later stage, as most of the product development will focus on this area, where we can be the most creative. As we launch and grow, we will have the opportunity to add multiple data sources and types of analysis based on market feedback and research. For our Minimum Viable Product (MVP), we will initially include two data sources with basic analysis.&lt;/p&gt;

&lt;p&gt;Geospatial data consists of spectral analysis of the soil and weather data, as well as DVI (Difference Vegetation Index) and NDVI (Normalized Difference Vegetation Index) imaging and statistics. This type of data is particularly useful in agriculture and engineering research.&lt;/p&gt;
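&lt;p&gt;For reference, both indices are simple per-pixel combinations of near-infrared (NIR) and red reflectance. A minimal sketch in JavaScript (the reflectance values below are illustrative):&lt;/p&gt;

```javascript
// Difference Vegetation Index: the raw gap between NIR and red reflectance.
const dvi = (nir, red) => nir - red;

// Normalized Difference Vegetation Index: the same gap, normalized to [-1, 1].
const ndvi = (nir, red) => (nir - red) / (nir + red);

// Healthy vegetation reflects strongly in NIR and absorbs red light,
// so it produces NDVI values close to 1.
console.log(ndvi(0.5, 0.1)); // ≈ 0.667
```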

&lt;p&gt;Geographical Statistics consists of the analysis and statistics of population and socioeconomic databases. It is commonly used by researchers, students, journalists, and pretty much anyone who needs basic information about a particular area.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf9s2uq4gvd0gggug5pc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf9s2uq4gvd0gggug5pc.png" alt="Statistics Section" width="800" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One important aspect of this product is that the data needs to come from reputable sources. We will also need to implement an audit system to ensure our users can rely on the data we provide. This aspect of the project, coding the trust and reliability of the data, will be a lot of fun, as I have no idea how to do it yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic system design
&lt;/h3&gt;

&lt;p&gt;I am not going to dive too much into system design as I believe it is currently pointless. Most of the crucial decisions regarding system design will be made along the way. For now, I will opt for a simple implementation, including a server to handle all requests and events, a worker to process asynchronous jobs that require more processing power, a storage solution, and a front-end application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbodf9m11gwsolvypippz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbodf9m11gwsolvypippz.png" alt="System Design Section" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will delve deep into each topic as development progresses. All repositories will be public, and I will document each step, showcasing all interactions with anyone who chooses to participate in this project in any form. A lot of things can and will go wrong, and I am expecting to get the most out of it. Until the next update!&lt;/p&gt;

&lt;p&gt;Catch me on &lt;a href="https://twitter.com/caiocbrr" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; any time.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/photos/no2blvVYoJw" rel="noopener noreferrer"&gt;Clay Banks&lt;/a&gt; on &lt;a href="https://unsplash.com" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>business</category>
      <category>startup</category>
      <category>analytics</category>
    </item>
    <item>
      <title>How to use MongoDB change streams as a powerful event-driven engine</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Wed, 12 Jul 2023 20:29:35 +0000</pubDate>
      <link>https://dev.to/woovi/how-to-use-mongodb-change-streams-as-a-powerful-event-driven-engine-4d9c</link>
      <guid>https://dev.to/woovi/how-to-use-mongodb-change-streams-as-a-powerful-event-driven-engine-4d9c</guid>
      <description>&lt;p&gt;Before change streams became a feature of MongoDB, developers who wanted to track real-time changes in the database use to monitor oplog entries and track the changes in a specific collection based on timestamps. This process was often complex, and the mechanisms required for resuming and recovering reading were not particularly secure.&lt;/p&gt;

&lt;p&gt;Change streams give applications a direct, real-time interface to changes in database collections, with powerful features for building custom event-driven architectures.&lt;/p&gt;

&lt;p&gt;Change streams are available in MongoDB when a replica set environment is configured. An event is only emitted once a majority of the replica set members have acknowledged the underlying change. This ensures the safety of the data in a specific collection, especially in scenarios where failures may occur.&lt;/p&gt;

&lt;p&gt;To utilize change streams in a local development environment, we configure a replica set using Docker Compose. For detailed instructions on this setup, you can refer to the &lt;a href="https://dev.to/woovi/best-dx-for-mongodb-replica-set-43lc"&gt;guide&lt;/a&gt; written by &lt;a class="mentioned-user" href="https://dev.to/sibelius"&gt;@sibelius&lt;/a&gt;. We have invested considerable effort to ensure that all our developers can easily set up a local environment and work with the full range of features that MongoDB offers. &lt;/p&gt;
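&lt;p&gt;If you want a self-contained starting point, a single-node replica set can be sketched in a &lt;code&gt;docker-compose.yml&lt;/code&gt; like the one below. This is a generic minimal sketch, not the exact setup from the linked guide; the service name, image tag, and replica set name are illustrative:&lt;/p&gt;

```yaml
# Minimal single-node replica set, for local development only.
services:
  mongo:
    image: mongo:6
    command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
    ports:
      - "27017:27017"
    healthcheck:
      # Initiates the replica set once mongod starts answering.
      test: 'mongosh --eval "try { rs.status() } catch (e) { rs.initiate() }"'
      interval: 5s
      timeout: 30s
      retries: 30
```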

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;At &lt;a href="https://woovi.com/" rel="noopener noreferrer"&gt;Woovi&lt;/a&gt; we use change streams in a specific service called Publisher. This service is app that instantiates a subscriber for each collection in our Database from which we want to derive events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;companySubscriber&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./CompanySubscriber&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;customerSubscriber&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./CustomerSubscriber&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;userSubscriber&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./UserSubscriber&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;setupSubscribers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;companySubscriber&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nf"&gt;userSubscriber&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nf"&gt;customerSubscriber&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Events
&lt;/h3&gt;

&lt;p&gt;The role of the subscriber is to monitor a collection and generate a data event. This event is then passed to a handler, which processes the data according to our specific needs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userSubscriber&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;watch&lt;/span&gt;&lt;span class="p"&gt;([],&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;updateLookup&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;change&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;handleUserSubscriberEvent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The stream output changes based on the event that occurs on a particular collection. MongoDB has many change events, as seen in the &lt;a href="https://www.mongodb.com/docs/manual/reference/change-events/#std-label-change-stream-output" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
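&lt;p&gt;Each change document carries an &lt;code&gt;operationType&lt;/code&gt; field identifying which event fired. A minimal dispatcher could branch on it like this (the &lt;code&gt;onInsert&lt;/code&gt;, &lt;code&gt;onUpdate&lt;/code&gt;, and &lt;code&gt;onDelete&lt;/code&gt; handlers are hypothetical):&lt;/p&gt;

```javascript
// Routes a change stream event to a handler based on its operationType.
const dispatchChangeEvent = (change, { onInsert, onUpdate, onDelete }) => {
  switch (change.operationType) {
    case 'insert':
      return onInsert(change.fullDocument);
    case 'update':
      // With fullDocument: 'updateLookup', the full document is present here too.
      return onUpdate(change.fullDocument, change.updateDescription);
    case 'delete':
      // Delete events only carry the document key, never the full document.
      return onDelete(change.documentKey);
    default:
      return null; // ignore drop, rename, invalidate, ...
  }
};
```

&lt;p&gt;The handlers object can then be wired into the same &lt;code&gt;stream.on('change', ...)&lt;/code&gt; callback as above.&lt;/p&gt;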

&lt;p&gt;The stream pipeline accepts multiple options. &lt;code&gt;fullDocument: 'updateLookup'&lt;/code&gt; makes update events include the full document, not just the updated fields as is the default. Each event type has its own stream payload, and there are many configurations you can use to tailor the stream to your application.&lt;/p&gt;

&lt;p&gt;The first parameter is an array of aggregation pipeline stages you can pass to modify the stream output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;$match&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fullDocument.username&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;alice&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;$addFields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;newField&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;this is an added field!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;watch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;updateLookup&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;change&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;handleUserSubscriberEvent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Use cases
&lt;/h3&gt;

&lt;p&gt;Finally, in our &lt;code&gt;handleUserSubscriberEvent&lt;/code&gt; function, we utilize the data object to drive any event-driven service within our application environment. In our specific case, we use it to create and update indices in our Elasticsearch service, which serves as the core technology behind our internal search tools.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handleUserSubscriberEvent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dataPicked&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;cellphone&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cellphone&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;taxID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullDocument&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;taxID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;obj&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;dataPicked&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="nf"&gt;handleDocumentIndexing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ELASTICSEARCH_INDEXES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;USER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use change streams as a powerful engine for event-driven applications such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analytics Processing&lt;/li&gt;
&lt;li&gt;Notifications &lt;/li&gt;
&lt;li&gt;IoT integration with MQTT&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The performance of MongoDB change streams allows Woovi to scale even more event-driven products as quickly and securely as possible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://woovi.com/" rel="noopener noreferrer"&gt;Woovi&lt;/a&gt; is a Startup that enables shoppers to pay as they like. To make this possible, Woovi provides instant payment solutions for merchants to accept orders.&lt;/p&gt;

&lt;p&gt;If you want to work with us, we are &lt;a href="https://woovi.com/jobs/" rel="noopener noreferrer"&gt;hiring&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>eventdriven</category>
      <category>mongodb</category>
      <category>architecture</category>
      <category>development</category>
    </item>
    <item>
      <title>Observability with Elasticsearch Kibana and APM</title>
      <dc:creator>Caio Campos Borges Rosa</dc:creator>
      <pubDate>Mon, 12 Jun 2023 12:14:32 +0000</pubDate>
      <link>https://dev.to/woovi/observability-with-elasticsearch-kibana-and-apm-3dhb</link>
      <guid>https://dev.to/woovi/observability-with-elasticsearch-kibana-and-apm-3dhb</guid>
      <description>&lt;h4&gt;
  
  
  Why do we invest in observability at Woovi?
&lt;/h4&gt;

&lt;p&gt;At &lt;a href="https://www.woovi.com" rel="noopener noreferrer"&gt;Woovi&lt;/a&gt; we focus on speed and innovation, in highly dynamic distributed systems, troubleshooting can be a challenging aspect, causing valuable time to be wasted on debugging. However, the concept of observability addresses this pain point by providing insights into data and processes, ultimately minimizing debugging costs and enabling more accurate future planning. Observability empowers users to gain visibility into the inner workings of their systems, helping them identify issues promptly and efficiently while freeing up time for the creation of new and innovative solutions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Elasticsearch
&lt;/h4&gt;

&lt;p&gt;Elasticsearch is a versatile and scalable search and analytics engine designed to handle large volumes of data. It provides near-real-time search capabilities and supports complex querying, making it an ideal solution for a wide range of use cases. Built on top of the Apache Lucene library, Elasticsearch uses a distributed architecture to ensure high availability, fault tolerance, and scalability. It stores data in the form of JSON documents, allowing for efficient indexing and retrieval. It also offers features such as automatic sharding, replication, and distributed document storage, making it suitable for both small-scale applications and enterprise-level deployments. &lt;/p&gt;

&lt;h4&gt;
  
  
  APM
&lt;/h4&gt;

&lt;p&gt;In Elasticsearch, the APM (Application Performance Monitoring) module provides insights into the performance of your applications and services that interact with Elasticsearch. APM in Elasticsearch works by instrumenting your application code and capturing detailed information about transactions, spans, and errors.&lt;/p&gt;

&lt;p&gt;To use APM in Elasticsearch, you need to integrate the APM agent into your application. The agent is available in various programming languages and frameworks. Once integrated, the agent automatically collects performance data from your application and sends it to the Elasticsearch cluster.&lt;/p&gt;
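&lt;p&gt;As a sketch, the bootstrap for the Node.js agent (&lt;code&gt;elastic-apm-node&lt;/code&gt;) can look like the snippet below. The service name and server URL are placeholders, not our real settings; the key detail is that the agent must be started before any other module is loaded, so it can instrument them:&lt;/p&gt;

```javascript
// Builds the agent configuration from environment variables.
// All defaults here are placeholders for illustration.
const buildApmConfig = (env) => ({
  serviceName: env.APM_SERVICE_NAME || 'my-server',
  serverUrl: env.APM_SERVER_URL || 'http://localhost:8200',
  environment: env.NODE_ENV || 'development',
  // Only record data when explicitly enabled.
  active: env.APM_ACTIVE === 'true',
});

// In a real app this require must come first, before everything else:
// const apm = require('elastic-apm-node').start(buildApmConfig(process.env));
```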

&lt;p&gt;Here we use an event-driven architecture, so our resources are divided into servers and workers. One example of an APM setup on a server would be indexing the body of each request.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nx"&gt;apm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setLabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;requestBody&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rawBody&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is the simplest form of instrumenting processes with APM: you set a label and a value for something you want to monitor, and that way you have a well-structured stream of data to search. In our case we label the request bodies for all our API endpoints, so we can take a data-driven approach to live debugging. Out of the box we get request, latency, throughput, and error statistics for each endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9c3chwec7zjqm72ukt1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9c3chwec7zjqm72ukt1w.png" alt="Request Statistics"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The APM module in Elasticsearch also supports distributed tracing, which allows you to follow a request's journey across multiple services and systems, providing insights into the end-to-end performance and metadata of your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnhxcr2u0nv3dof0ug7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnhxcr2u0nv3dof0ug7c.png" alt="Request trace"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Kibana
&lt;/h4&gt;

&lt;p&gt;Kibana is a powerful data visualization and exploration tool that works in conjunction with Elasticsearch. It provides a user-friendly web interface for searching, analyzing, and visualizing data stored in Elasticsearch, making it easier to understand and derive insights from large datasets. Alongside APM, it is a powerful tool for visualization and live monitoring.&lt;/p&gt;

&lt;p&gt;A great approach to application data visualization is to understand the team's needs and the data profile before developing labels. This way we not only ensure we have all the data we need, but also keep a handle on storage, as our indexes take up space and add infrastructure costs.&lt;/p&gt;
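&lt;p&gt;One way to put that into practice is to whitelist the fields that end up in a label and cap the serialized size, so the index stays useful without growing unbounded. A hypothetical helper (the field names and size limit are illustrative):&lt;/p&gt;

```javascript
// Keeps only the agreed-upon fields and caps the serialized size
// before the value is attached as an APM label.
const buildLabelValue = (payload, allowedFields, maxLength = 1024) => {
  const picked = {};
  for (const field of allowedFields) {
    if (payload[field] !== undefined) {
      picked[field] = payload[field];
    }
  }
  const serialized = JSON.stringify(picked);
  return serialized.length > maxLength
    ? serialized.slice(0, maxLength)
    : serialized;
};

// Usage sketch: apm.setLabel('requestBody', buildLabelValue(body, ['userId', 'amount']));
```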

&lt;p&gt;One of the problems Kibana and APM solve here at Woovi is on the worker side of our application: how do we debug events across multiple queues, with multiple data profiles, as fast as possible and with confidence in the data? We introduced a label called &lt;code&gt;JobOriginator&lt;/code&gt;. That label is set on each job created by our event emitters, so we can see where an event was created in the first place and, with that, inspect the data for that event at specific moments of its lifetime.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;apm&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;elastic-apm-node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createJobBull&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;JobOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;originator&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Queue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;queues&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DEFAULT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Job&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;span&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;apm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startSpan&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;createJobBull&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setLabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;jobName&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setLabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;jobData&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;originator&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setLabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;JobOriginator&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;originator&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setLabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;jobOptions&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we can search live, using the data provided by the labels we set, to query specific cases based on origin, data, resources, and time.&lt;/p&gt;
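&lt;p&gt;In Kibana's query bar this is a one-line KQL filter; for example, to pull the documents for a single originator (the originator value here is hypothetical):&lt;/p&gt;

```
labels.JobOriginator : "TransactionCreated"
```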

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrj3ocfh1udg9gaj60kt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrj3ocfh1udg9gaj60kt.png" alt="Query by job Originator"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to work with us, we are &lt;a href="https://woovi.com/jobs/" rel="noopener noreferrer"&gt;hiring&lt;/a&gt;!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
