<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Unni P</title>
    <description>The latest articles on DEV Community by Unni P (@iamunnip).</description>
    <link>https://dev.to/iamunnip</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F218570%2Fc002b3e9-63df-4684-ab35-b23a2a98a59c.jpg</url>
      <title>DEV Community: Unni P</title>
      <link>https://dev.to/iamunnip</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iamunnip"/>
    <language>en</language>
    <item>
      <title>Prometheus - HTTPS &amp; Authentication - Part 4</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Tue, 22 Aug 2023 15:31:52 +0000</pubDate>
      <link>https://dev.to/iamunnip/prometheus-https-authentication-part-4-5cgn</link>
      <guid>https://dev.to/iamunnip/prometheus-https-authentication-part-4-5cgn</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this article, we will look at how to configure HTTPS and authentication on both Prometheus and Node Exporter&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;In my previous article, we looked at how to set up Prometheus and Node Exporter as systemd services on an Ubuntu instance.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://iamunnip.hashnode.dev/prometheus-installation-on-amazon-ec2-ubuntu-part-3" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--BQjxsxSv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hashnode.com/utility/r%3Furl%3Dhttps%253A%252F%252Fcdn.hashnode.com%252Fres%252Fhashnode%252Fimage%252Fupload%252Fv1692635299789%252F5e5fbbe9-3851-4095-a088-5013442dd9d3.png%253Fw%253D1200%2526h%253D630%2526fit%253Dcrop%2526crop%253Dentropy%2526auto%253Dcompress%252Cformat%2526format%253Dwebp%2526fm%253Dpng" height="" class="m-0" width=""&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://iamunnip.hashnode.dev/prometheus-installation-on-amazon-ec2-ubuntu-part-3" rel="noopener noreferrer" class="c-link"&gt;
          Prometheus - Installation on Amazon EC2 (Ubuntu) - Part 3
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          In this article, we will look how to install and configure Prometheus and Node Exporter on Amazon EC2 Ubuntu instance
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://res.cloudinary.com/practicaldev/image/fetch/s--eOZpeFpx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1611242173172/AOX1gE2jc.png" width="32" height="32"&gt;
        iamunnip.hashnode.dev
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;But in the above setup, we accessed the Prometheus expression browser and Node Exporter metrics endpoints over plain HTTP, with no authentication enabled.&lt;br&gt;&lt;br&gt;
We will address both of these issues in this article.&lt;/p&gt;
&lt;h2&gt;
  
  
  HTTPS
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Node Exporter
&lt;/h3&gt;

&lt;p&gt;Create a new directory for storing the Node Exporter configuration file and change its ownership to the "node_exporter" user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/node_exporter

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;node_exporter:node_exporter /etc/node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a configuration file for Node Exporter and change its ownership to the "node_exporter" user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo touch&lt;/span&gt; /etc/node_exporter/node_exporter.yml

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;node_exporter:node_exporter /etc/node_exporter/node_exporter.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate a self-signed certificate and key using OpenSSL&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;openssl req &lt;span class="nt"&gt;-new&lt;/span&gt; &lt;span class="nt"&gt;-newkey&lt;/span&gt; rsa:2048 &lt;span class="nt"&gt;-days&lt;/span&gt; 365 &lt;span class="nt"&gt;-nodes&lt;/span&gt; &lt;span class="nt"&gt;-x509&lt;/span&gt; &lt;span class="nt"&gt;-keyout&lt;/span&gt; prom.key &lt;span class="nt"&gt;-out&lt;/span&gt; prom.crt &lt;span class="nt"&gt;-subj&lt;/span&gt; &lt;span class="s2"&gt;"/C=US/ST=California/L=Oakland/O=MyOrg/CN=localhost"&lt;/span&gt; &lt;span class="nt"&gt;-addext&lt;/span&gt; &lt;span class="s2"&gt;"subjectAltName = DNS:localhost"&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;prom.crt  prom.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
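Before copying the files into place, it can be worth confirming what the certificate actually contains. A throwaway sketch, assuming only that the openssl CLI is installed; it runs in a temp directory so the files generated above are untouched:

```shell
# Generate a throwaway self-signed cert in a temp directory (stand-in for the one above)
dir=$(mktemp -d)
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -keyout "$dir/prom.key" -out "$dir/prom.crt" \
  -subj "/CN=localhost" -addext "subjectAltName = DNS:localhost" 2>/dev/null

# Inspect the subject and the Subject Alternative Name extension before deploying
openssl x509 -in "$dir/prom.crt" -noout -subject
openssl x509 -in "$dir/prom.crt" -noout -ext subjectAltName
```

The subject should show CN = localhost and the SAN block should list DNS:localhost, matching the -subj and -addext values used above.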



&lt;p&gt;Copy the certificate and key files to the Node Exporter configuration directory and change their ownership to the "node_exporter" user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo cp &lt;/span&gt;prom.&lt;span class="k"&gt;*&lt;/span&gt; /etc/node_exporter

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;node_exporter:node_exporter /etc/node_exporter/prom.crt

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;node_exporter:node_exporter /etc/node_exporter/prom.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the tls_server_config details to the configuration file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/node_exporter/node_exporter.yml

tls_server_config:
  cert_file: prom.crt
  key_file: prom.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the systemd unit file of the node_exporter service to include the above configuration file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/systemd/system/node_exporter.service

&lt;span class="o"&gt;[&lt;/span&gt;Unit]
&lt;span class="nv"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Node Exporter
&lt;span class="nv"&gt;Wants&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target
&lt;span class="nv"&gt;After&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target

&lt;span class="o"&gt;[&lt;/span&gt;Service]
&lt;span class="nv"&gt;User&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter
&lt;span class="nv"&gt;Group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter
&lt;span class="nv"&gt;Type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;simple
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/node_exporter &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--web&lt;/span&gt;.config.file /etc/node_exporter/node_exporter.yml

&lt;span class="o"&gt;[&lt;/span&gt;Install]
&lt;span class="nv"&gt;WantedBy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reload systemd to pick up the unit file change, then restart the node_exporter service and verify its status&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart node_exporter

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The metrics endpoint is now accessible via HTTPS&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8f0BkXCr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zqba4xwmnbklzxywauv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8f0BkXCr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zqba4xwmnbklzxywauv9.png" alt="node-2" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the Prometheus expression browser and navigate to &lt;strong&gt;Status&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Targets&lt;/strong&gt;, where we can see the node_exporter target showing as down, since Prometheus is still trying to scrape it over plain HTTP&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V8ZbSOuW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m1x43xx5icsomhxb4ecx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V8ZbSOuW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m1x43xx5icsomhxb4ecx.png" alt="prom-7" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prometheus
&lt;/h3&gt;

&lt;p&gt;Copy the certificate and key files to the Prometheus configuration directory and change their ownership to the "prometheus" user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo cp &lt;/span&gt;prom.&lt;span class="k"&gt;*&lt;/span&gt; /etc/prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;prometheus:prometheus /etc/prometheus/prom.crt

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;prometheus:prometheus /etc/prometheus/prom.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the Prometheus configuration file to include scheme and tls_config for the "node_exporter" job&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/prometheus/prometheus.yml

global:
  scrape_interval: 15s
  scrape_timeout: 10s

scrape_configs:
  - job_name: &lt;span class="s2"&gt;"node_exporter"&lt;/span&gt;
    scheme: https
    tls_config:
      ca_file: prom.crt
      insecure_skip_verify: &lt;span class="nb"&gt;true
    &lt;/span&gt;static_configs:
      - targets: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"172.31.81.113:9100"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Validate the configuration file using the promtool&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;promtool check config /etc/prometheus/prometheus.yml
Checking /etc/prometheus/prometheus.yml
 SUCCESS: /etc/prometheus/prometheus.yml is valid prometheus config file syntax
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the prometheus service to apply the new configuration changes&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the Prometheus expression browser and navigate to &lt;strong&gt;Status&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Targets&lt;/strong&gt;, where we can see the node_exporter target showing as up&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s8-sdura--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fr6dy6hble38e9eb0kck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s8-sdura--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fr6dy6hble38e9eb0kck.png" alt="prom-8" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we have enabled secure communication between the Prometheus server and Node Exporter, but the Prometheus expression browser itself is still served over HTTP&lt;/p&gt;

&lt;p&gt;Create a new web configuration file for enabling HTTPS and change its ownership to the "prometheus" user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo touch&lt;/span&gt; /etc/prometheus/webconfig.yml

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;prometheus:prometheus /etc/prometheus/webconfig.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the tls_server_config details to the newly created configuration file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/prometheus/webconfig.yml

tls_server_config:
  cert_file: prom.crt
  key_file: prom.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the systemd unit file of the prometheus service to include the above configuration file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/systemd/system/prometheus.service

&lt;span class="o"&gt;[&lt;/span&gt;Unit]
&lt;span class="nv"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Prometheus
&lt;span class="nv"&gt;Wants&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target
&lt;span class="nv"&gt;After&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target

&lt;span class="o"&gt;[&lt;/span&gt;Service]
&lt;span class="nv"&gt;User&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prometheus
&lt;span class="nv"&gt;Group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prometheus
&lt;span class="nv"&gt;Type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;simple
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/prometheus &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config&lt;/span&gt;.file /etc/prometheus/prometheus.yml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--storage&lt;/span&gt;.tsdb.path /var/lib/prometheus &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--web&lt;/span&gt;.console.templates /etc/prometheus/consoles &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--web&lt;/span&gt;.console.libraries /etc/prometheus/console_libraries &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--web&lt;/span&gt;.config.file /etc/prometheus/webconfig.yml

&lt;span class="o"&gt;[&lt;/span&gt;Install]
&lt;span class="nv"&gt;WantedBy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
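Because the unit file itself changed, systemd must reload its unit definitions before a restart will pick up the new --web.config.file flag; these commands mirror the earlier restart steps and assume the same systemd setup:

```shell
# Reload unit definitions, then restart and check the service
sudo systemctl daemon-reload
sudo systemctl restart prometheus
sudo systemctl status prometheus
```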



&lt;p&gt;Now we can access the Prometheus expression browser using HTTPS&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bi5L4eCB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2s5eqtm2a4r8nl1tqef5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bi5L4eCB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2s5eqtm2a4r8nl1tqef5.png" alt="prom-11" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication
&lt;/h2&gt;

&lt;p&gt;Install the apache2-utils package, which provides the htpasswd tool for generating a password hash&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;apache2-utils
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate a bcrypt password hash using the htpasswd tool&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;htpasswd &lt;span class="nt"&gt;-nBC&lt;/span&gt; 16 admin | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;':\n'&lt;/span&gt;
New password:
Re-type new password:
admin&lt;span class="nv"&gt;$2y$16$fMgRIvex1Rn67dHErc&lt;/span&gt;.Ft.CSI3ng5b457FOe9JIZkMB7k7p3PfS8O
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Node Exporter
&lt;/h3&gt;

&lt;p&gt;Update the Node Exporter configuration file to include basic authentication&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/node_exporter/node_exporter.yml

tls_server_config:
  cert_file: prom.crt
  key_file: prom.key
basic_auth_users:
  admin: &lt;span class="nv"&gt;$2y$16$fMgRIvex1Rn67dHErc&lt;/span&gt;.Ft.CSI3ng5b457FOe9JIZkMB7k7p3PfS8O
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the node_exporter service and verify its status&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart node_exporter

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, accessing the Node Exporter metrics endpoint shows a login prompt&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4UgO77qV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nlfokvo386li7muquuxj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4UgO77qV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nlfokvo386li7muquuxj.png" alt="node-3" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;
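The same check can be scripted with curl; a sketch assuming the example credentials used in this article and a Node Exporter reachable on localhost (-k skips verification of the self-signed certificate):

```shell
# Without credentials the endpoint now returns 401 Unauthorized
curl -k https://localhost:9100/metrics

# With the username and password configured above, the metrics are returned
curl -k -u admin:'Password!' https://localhost:9100/metrics
```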

&lt;p&gt;Open the Prometheus expression browser and navigate to &lt;strong&gt;Status&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Targets&lt;/strong&gt;, where we can see the node_exporter target showing as down again, since Prometheus is not yet sending the credentials&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7i0vjB9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8tmlz7072lueitk44od3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7i0vjB9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8tmlz7072lueitk44od3.png" alt="prom-9" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prometheus
&lt;/h3&gt;

&lt;p&gt;Update the Prometheus configuration file to include basic authentication for the "node_exporter" job&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/prometheus/prometheus.yml

global:
  scrape_interval: 15s
  scrape_timeout: 10s

scrape_configs:
  - job_name: &lt;span class="s2"&gt;"node_exporter"&lt;/span&gt;
    scheme: https
    tls_config:
      ca_file: prom.crt
      insecure_skip_verify: &lt;span class="nb"&gt;true
    &lt;/span&gt;basic_auth:
      username: admin
      password: Password!
    static_configs:
      - targets: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"172.31.81.113:9100"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Validate the configuration file using the promtool&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;promtool check config /etc/prometheus/prometheus.yml
Checking /etc/prometheus/prometheus.yml
 SUCCESS: /etc/prometheus/prometheus.yml is valid prometheus config file syntax
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the prometheus service to apply the new configuration changes&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the Prometheus expression browser and navigate to &lt;strong&gt;Status&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Targets&lt;/strong&gt;, where we can see the node_exporter target showing as up again&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sRafQ8UB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1j3ndzkjz2rbdbapbz4d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sRafQ8UB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1j3ndzkjz2rbdbapbz4d.png" alt="prom-10" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we have enabled basic authentication between the Prometheus server and Node Exporter, but the Prometheus server itself still needs authentication&lt;/p&gt;

&lt;p&gt;Update the web configuration file to enable basic authentication on the Prometheus server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/prometheus/webconfig.yml

tls_server_config:
  cert_file: prom.crt
  key_file: prom.key

basic_auth_users:
  admin: &lt;span class="nv"&gt;$2y$16$fMgRIvex1Rn67dHErc&lt;/span&gt;.Ft.CSI3ng5b457FOe9JIZkMB7k7p3PfS8O
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the prometheus service to apply the new changes and check its status&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, accessing the Prometheus expression browser shows a login prompt&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jppp7lmp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzrut58n40cju1bq7qmz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jppp7lmp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzrut58n40cju1bq7qmz.png" alt="prom-12" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all for now&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/https/"&gt;https://prometheus.io/docs/prometheus/latest/configuration/https/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kodekloud.com/courses/prometheus-certified-associate-pca/"&gt;https://kodekloud.com/courses/prometheus-certified-associate-pca/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>observability</category>
      <category>cloudnative</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Prometheus - Installation on Amazon EC2 (Ubuntu) - Part 3</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Mon, 21 Aug 2023 16:48:56 +0000</pubDate>
      <link>https://dev.to/iamunnip/prometheus-installation-on-amazon-ec2-ubuntu-part-3-5484</link>
      <guid>https://dev.to/iamunnip/prometheus-installation-on-amazon-ec2-ubuntu-part-3-5484</guid>
      <description>&lt;p&gt;In my previous article, we looked at how we can quickly set up Prometheus and Node Exporter.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://iamunnip.hashnode.dev/prometheus-quick-setup-on-amazon-ec2-ubuntu-part-2" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--h6PzVv28--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hashnode.com/utility/r%3Furl%3Dhttps%253A%252F%252Fcdn.hashnode.com%252Fres%252Fhashnode%252Fimage%252Fupload%252Fv1683120162905%252Fc50ae9fa-1ce3-48c8-a618-67ff3f466aa3.png%253Fw%253D1200%2526h%253D630%2526fit%253Dcrop%2526crop%253Dentropy%2526auto%253Dcompress%252Cformat%2526format%253Dwebp%2526fm%253Dpng" height="" class="m-0" width=""&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://iamunnip.hashnode.dev/prometheus-quick-setup-on-amazon-ec2-ubuntu-part-2" rel="noopener noreferrer" class="c-link"&gt;
          Prometheus - Quick Setup on Amazon EC2 (Ubuntu) - Part 2
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          In this article, we will look at how we can quickly setup Prometheus and Node Exporter on a Ubuntu instance on Amazon EC2
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://res.cloudinary.com/practicaldev/image/fetch/s--eOZpeFpx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1611242173172/AOX1gE2jc.png" width="32" height="32"&gt;
        iamunnip.hashnode.dev
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;But in that setup, the Prometheus and Node Exporter processes run in the foreground and will not start again after a reboot of the VM.&lt;br&gt;&lt;br&gt;
We are going to fix that issue in this article.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Set up an EC2 instance of type t2.small&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ubuntu 22.04 LTS as AMI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;30 GB of hard disk space&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open ports 22 for SSH, 9090 for Prometheus and 9100 for Node Exporter&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Prometheus
&lt;/h3&gt;

&lt;p&gt;Log in to your EC2 instance&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; prometheus.pem ubuntu@3.80.133.119
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check the Ubuntu release running on the instance&lt;br&gt;
&lt;/p&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;lsb_release &lt;span class="nt"&gt;-a&lt;/span&gt;
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.2 LTS
Release:        22.04
Codename:       jammy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a new user for managing the Prometheus process&lt;br&gt;&lt;br&gt;
This command creates a user called prometheus with no home directory and /bin/false as its login shell&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;--shell&lt;/span&gt; /bin/false &lt;span class="nt"&gt;--no-create-home&lt;/span&gt; prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a new directory for storing our Prometheus configuration file and update the ownership of the directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;prometheus:prometheus /etc/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create another directory for storing Prometheus time series data and update the ownership of the directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/lib/prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;prometheus:prometheus /var/lib/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download and extract the latest Prometheus release from its GitHub releases page&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget https://github.com/prometheus/prometheus/releases/download/v2.46.0/prometheus-2.46.0.linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
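The release page also publishes a sha256sums.txt file alongside each tarball; checking the download against it catches corrupted transfers. A self-contained sketch of the sha256sum -c workflow using a stand-in file (for the real check, the published sums file and prometheus-2.46.0.linux-amd64.tar.gz would be used):

```shell
# Demonstrate checksum verification in a temp directory with a stand-in file
dir=$(mktemp -d)
printf 'pretend tarball contents\n' > "$dir/prometheus.tar.gz"   # stand-in for the real download
cd "$dir"
sha256sum prometheus.tar.gz > sums.txt   # the release ships this list as sha256sums.txt
sha256sum -c sums.txt                    # prints "prometheus.tar.gz: OK" on success
```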





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzvf&lt;/span&gt; prometheus-2.46.0.linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change into the extracted directory and install the prometheus and promtool binaries&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;prometheus-2.46.0.linux-amd64

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo install &lt;/span&gt;prometheus /usr/local/bin

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo install &lt;/span&gt;promtool /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
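install is used here instead of cp because it copies the file and sets the mode in one step. A small self-contained illustration in a temp directory; the file names are hypothetical:

```shell
# Show that install copies a file and marks it executable in one step
dir=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$dir/mytool"         # hypothetical stand-in binary
install -m 0755 "$dir/mytool" "$dir/bin-mytool"       # copy and set mode 0755
ls -l "$dir/bin-mytool"
```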



&lt;p&gt;Change the ownership of these binaries to the prometheus user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;prometheus:prometheus /usr/local/bin/prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;prometheus:prometheus /usr/local/bin/promtool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the versions of the prometheus and promtool binaries&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;prometheus &lt;span class="nt"&gt;--version&lt;/span&gt;
prometheus, version 2.46.0 &lt;span class="o"&gt;(&lt;/span&gt;branch: HEAD, revision: cbb69e51423565ec40f46e74f4ff2dbb3b7fb4f0&lt;span class="o"&gt;)&lt;/span&gt;
  build user:       root@42454fc0f41e
  build &lt;span class="nb"&gt;date&lt;/span&gt;:       20230725-12:31:24
  go version:       go1.20.6
  platform:         linux/amd64
  tags:             netgo,builtinassets,stringlabels
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;promtool &lt;span class="nt"&gt;--version&lt;/span&gt;
promtool, version 2.46.0 &lt;span class="o"&gt;(&lt;/span&gt;branch: HEAD, revision: cbb69e51423565ec40f46e74f4ff2dbb3b7fb4f0&lt;span class="o"&gt;)&lt;/span&gt;
  build user:       root@42454fc0f41e
  build &lt;span class="nb"&gt;date&lt;/span&gt;:       20230725-12:31:24
  go version:       go1.20.6
  platform:         linux/amd64
  tags:             netgo,builtinassets,stringlabels
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the consoles and console_libraries directories, which are used for dashboarding and visualization&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-rv&lt;/span&gt; consoles /etc/prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-rv&lt;/span&gt; console_libraries /etc/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the ownership of these directories recursively to the prometheus user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; prometheus:prometheus /etc/prometheus/consoles

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; prometheus:prometheus /etc/prometheus/console_libraries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the configuration file to the /etc/prometheus directory created earlier and change its ownership to the prometheus user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo cp &lt;/span&gt;prometheus.yml /etc/prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;prometheus:prometheus /etc/prometheus/prometheus.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a systemd unit file for the Prometheus service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/systemd/system/prometheus.service

&lt;span class="o"&gt;[&lt;/span&gt;Unit]
&lt;span class="nv"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Prometheus
&lt;span class="nv"&gt;Wants&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target
&lt;span class="nv"&gt;After&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target

&lt;span class="o"&gt;[&lt;/span&gt;Service]
&lt;span class="nv"&gt;User&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prometheus
&lt;span class="nv"&gt;Group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prometheus
&lt;span class="nv"&gt;Type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;simple
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/prometheus &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config&lt;/span&gt;.file /etc/prometheus/prometheus.yml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--storage&lt;/span&gt;.tsdb.path /var/lib/prometheus &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--web&lt;/span&gt;.console.templates /etc/prometheus/consoles &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--web&lt;/span&gt;.console.libraries /etc/prometheus/console_libraries

&lt;span class="o"&gt;[&lt;/span&gt;Install]
&lt;span class="nv"&gt;WantedBy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
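&lt;p&gt;Optionally, the [Service] section of this unit can be hardened with a few standard systemd directives. The fragment below is a sketch of common choices, not part of the original unit file; verify each directive against your systemd version before adopting it&lt;/p&gt;

```ini
# Optional additions to the [Service] section (illustrative, not required)
Restart=on-failure
RestartSec=5
NoNewPrivileges=true
ProtectSystem=full
ProtectHome=true
```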



&lt;p&gt;Start the prometheus service and enable it at boot&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the prometheus service status&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl is-active prometheus
active

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl is-enabled prometheus
enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;a href="http://3.80.133.119:9090"&gt;http://3.80.133.119:9090&lt;/a&gt; in the browser and you can see the Prometheus expression browser&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CDpsn2pU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zjotngacehe1bqetkyrz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CDpsn2pU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zjotngacehe1bqetkyrz.png" alt="prom-1" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Node Exporter
&lt;/h3&gt;

&lt;p&gt;Create a new user for managing the node exporter process&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;--shell&lt;/span&gt; /bin/false &lt;span class="nt"&gt;--no-create-home&lt;/span&gt; node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download and extract the latest Node Exporter release from their GitHub page&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzvf&lt;/span&gt; node_exporter-1.6.1.linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change to the extracted directory and install the node_exporter binary&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;node_exporter-1.6.1.linux-amd64

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo install &lt;/span&gt;node_exporter /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the ownership of the binary to the node_exporter user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;node_exporter:node_exporter /usr/local/bin/node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the version of the node_exporter binary&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;node_exporter &lt;span class="nt"&gt;--version&lt;/span&gt;
node_exporter, version 1.6.1 &lt;span class="o"&gt;(&lt;/span&gt;branch: HEAD, revision: 4a1b77600c1873a8233f3ffb55afcedbb63b8d84&lt;span class="o"&gt;)&lt;/span&gt;
  build user:       root@586879db11e5
  build &lt;span class="nb"&gt;date&lt;/span&gt;:       20230717-12:10:52
  go version:       go1.20.6
  platform:         linux/amd64
  tags:             netgo osusergo static_build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a systemd unit file for the Node Exporter service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/systemd/system/node_exporter.service

&lt;span class="o"&gt;[&lt;/span&gt;Unit]
&lt;span class="nv"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Node Exporter
&lt;span class="nv"&gt;Wants&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target
&lt;span class="nv"&gt;After&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target

&lt;span class="o"&gt;[&lt;/span&gt;Service]
&lt;span class="nv"&gt;User&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter
&lt;span class="nv"&gt;Group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter
&lt;span class="nv"&gt;Type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;simple
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/node_exporter

&lt;span class="o"&gt;[&lt;/span&gt;Install]
&lt;span class="nv"&gt;WantedBy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
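&lt;p&gt;Node Exporter enables a default set of collectors; individual collectors can be toggled with per-collector flags (for example --no-collector.wifi). A sketch of a modified ExecStart line; the collector choices here are examples only, not something required by this setup&lt;/p&gt;

```ini
# Example: disable the wifi collector and enable the systemd collector
# (collector choices are illustrative; both flags exist in node_exporter 1.6.x)
ExecStart=/usr/local/bin/node_exporter \
    --no-collector.wifi \
    --collector.systemd
```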



&lt;p&gt;Start the node_exporter service and enable it at boot&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start node_exporter

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the status of the node_exporter process&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl is-active node_exporter
active

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl is-enabled node_exporter
enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;a href="http://3.80.133.119:9100/metrics"&gt;http://3.80.133.119:9100/metrics&lt;/a&gt; in your browser to view all the metrics&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hdSWvHCV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1d5bhojhxunyyzkd09zo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hdSWvHCV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1d5bhojhxunyyzkd09zo.png" alt="node-1" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prometheus
&lt;/h3&gt;

&lt;p&gt;Now that we have installed Prometheus and Node Exporter on the server, we need to modify the Prometheus configuration file to add a new target to scrape metrics from&lt;/p&gt;

&lt;p&gt;In the installation step, we already copied a default Prometheus configuration file to the /etc/prometheus location. You can edit this file or replace it with the below contents.&lt;/p&gt;

&lt;p&gt;Here I have added a new job called "node_exporter" to scrape metrics from the same VM; instead of localhost, I have used the private IP of the instance&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/prometheus/prometheus.yml

global:
  scrape_interval: 15s
  scrape_timeout: 10s

scrape_configs:
  - job_name: &lt;span class="s2"&gt;"prometheus"&lt;/span&gt;
    static_configs:
      - targets: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"localhost:9090"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
  - job_name: &lt;span class="s2"&gt;"node_exporter"&lt;/span&gt;
    static_configs:
      - targets: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"172.31.93.117:9100"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
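&lt;p&gt;As more machines are added, the same node_exporter job can simply list multiple targets, optionally with attached labels. A sketch of that shape; the second IP and the label value are placeholders&lt;/p&gt;

```yaml
scrape_configs:
  - job_name: "node_exporter"
    static_configs:
      - targets: ["172.31.93.117:9100", "172.31.93.118:9100"]   # second IP is hypothetical
        labels:
          env: "demo"   # placeholder label attached to every target in this group
```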



&lt;p&gt;Validate the configuration file using promtool&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;promtool check config /etc/prometheus/prometheus.yml

Checking /etc/prometheus/prometheus.yml
 SUCCESS: /etc/prometheus/prometheus.yml is valid prometheus config file syntax
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the Prometheus service to apply the new configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart prometheus

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the expression browser and navigate to &lt;strong&gt;Status&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Targets&lt;/strong&gt; to view the newly added "node_exporter" target&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A9MMP3c7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bjvowuazmp554bg92pri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9MMP3c7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bjvowuazmp554bg92pri.png" alt="prom-2" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can view an example metric called "node_memory_Active_bytes" using the expression browser&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7h0mxrdU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i4tgt8kmbn516d316pp6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7h0mxrdU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i4tgt8kmbn516d316pp6.png" alt="prom-3" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;
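&lt;p&gt;Beyond querying a raw gauge like node_memory_Active_bytes, a couple of typical PromQL expressions over node_exporter data are sketched below; the metric names come from node_exporter 1.6 and the exact set on your host may differ&lt;/p&gt;

```promql
# Percentage of memory in active use
100 * node_memory_Active_bytes / node_memory_MemTotal_bytes

# Per-CPU idle rate over the last 5 minutes
rate(node_cpu_seconds_total{mode="idle"}[5m])
```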

&lt;p&gt;Select the Graph option to view it as a graph&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BNOnh-yO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4pn81lnqvgj4svh2mq7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BNOnh-yO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4pn81lnqvgj4svh2mq7.png" alt="prom-4" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all for now&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://prometheus.io/docs/prometheus/latest/getting_started/"&gt;https://prometheus.io/docs/prometheus/latest/getting_started/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kodekloud.com/courses/prometheus-certified-associate-pca/"&gt;https://kodekloud.com/courses/prometheus-certified-associate-pca/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>observability</category>
      <category>cloudnative</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Infisical - Open Source SecretOps - Kubernetes Setup</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Thu, 17 Aug 2023 17:21:51 +0000</pubDate>
      <link>https://dev.to/iamunnip/infisical-open-source-secretops-kubernetes-setup-25ja</link>
      <guid>https://dev.to/iamunnip/infisical-open-source-secretops-kubernetes-setup-25ja</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this article, we will talk about Infisical, an open-source secret management tool, and how we can set it up on a local Kubernetes cluster&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An open-source, end-to-end secret management platform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enables teams to easily manage and sync their environment variables, API keys, secrets and other configurations&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Intuitive dashboard&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Client SDKs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Infisical CLI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Native platform integrations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automatic Kubernetes deployment secret reloads&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Complete control of data when self-hosted&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secret versioning&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Point-in-Time recovery&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Role-based access control&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secret scanning and leak prevention&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Effortless on-premise deployment&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;kubectl&lt;br&gt;&lt;br&gt;
Check my article to install &lt;a href="https://iamunnip.hashnode.dev/kubectl-installation-ubuntu-windows"&gt;kubectl&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl version &lt;span class="nt"&gt;--client&lt;/span&gt; &lt;span class="nt"&gt;--short&lt;/span&gt;
Client Version: v1.27.4
Kustomize Version: v5.0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Helm&lt;br&gt;&lt;br&gt;
Check my article to install &lt;a href="https://iamunnip.hashnode.dev/helm-installation-ubuntu-windows"&gt;Helm&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm version
version.BuildInfo&lt;span class="o"&gt;{&lt;/span&gt;Version:&lt;span class="s2"&gt;"v3.12.1"&lt;/span&gt;, GitCommit:&lt;span class="s2"&gt;"f32a527a060157990e2aa86bf45010dfb3cc8b8d"&lt;/span&gt;, GitTreeState:&lt;span class="s2"&gt;"clean"&lt;/span&gt;, GoVersion:&lt;span class="s2"&gt;"go1.20.4"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes cluster&lt;br&gt;&lt;br&gt;
For this demo, I'm using &lt;a href="https://docs.rancherdesktop.io/getting-started/installation/"&gt;Rancher Desktop&lt;/a&gt; on Windows&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes

NAME              STATUS   ROLES                  AGE   VERSION
rancher-desktop   Ready    control-plane,master   27s   v1.27.4+k3s1
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Add the Infisical Helm repository&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm repo add infisical-helm-charts &lt;span class="s1"&gt;'https://dl.cloudsmith.io/public/infisical/helm-charts/helm/charts/'&lt;/span&gt; 

&lt;span class="nv"&gt;$ &lt;/span&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Infisical using the below command.&lt;br&gt;&lt;br&gt;
It installs all the necessary components in the infisical namespace and also deploys the Nginx ingress controller&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; infisical infisical infisical-helm-charts/infisical &lt;span class="nt"&gt;--set&lt;/span&gt; ingress.nginx.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;

NAME: infisical
LAST DEPLOYED: Thu Aug 17 19:43:06 2023
NAMESPACE: infisical
STATUS: deployed
REVISION: 1
TEST SUITE: None
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the status of the components and the load balancer service created by the Nginx ingress controller&lt;br&gt;&lt;br&gt;
Copy the LoadBalancer IP; we will need it in the next step&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; infisical get pods

NAME                                                  READY   STATUS    RESTARTS   AGE
infisical-frontend-8588c9f65-sxz8d                    1/1     Running   0          3m19s
infisical-frontend-8588c9f65-nsnjr                    1/1     Running   0          3m19s
mongodb-0                                             1/1     Running   0          3m19s
infisical-backend-6777b66f58-79d84                    1/1     Running   0          3m19s
infisical-backend-6777b66f58-g2fwb                    1/1     Running   0          3m19s
infisical-ingress-nginx-controller-7785998dc6-lqxgk   1/1     Running   0          3m19s
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```bash
$ kubectl -n infisical get svc

NAME                                           TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
infisical-backend                              ClusterIP      10.43.203.3     &amp;lt;none&amp;gt;           4000/TCP                     9m47s
infisical-frontend                             ClusterIP      10.43.41.186    &amp;lt;none&amp;gt;           3000/TCP                     9m47s
infisical-ingress-nginx-controller-admission   ClusterIP      10.43.42.222    &amp;lt;none&amp;gt;           443/TCP                      9m47s
mongodb                                        ClusterIP      10.43.112.139   &amp;lt;none&amp;gt;           27017/TCP                    9m47s
infisical-ingress-nginx-controller             LoadBalancer   10.43.63.38     172.31.134.241   80:30035/TCP,443:31538/TCP   9m47s
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Initial Setup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In our local setup, we are skipping SMTP configuration and accessing the dashboard via the LoadBalancer IP address &lt;a href="http://172.31.134.241/signup"&gt;http://172.31.134.241/signup&lt;/a&gt;&lt;br&gt;
You can also set up the hostname using &lt;code&gt;ingress.hostName=&amp;lt;your-hostname&amp;gt;&lt;/code&gt; option during the installation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an account for the administrator by clicking the "Continue with Email" option&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
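&lt;p&gt;If you prefer a hostname over the raw LoadBalancer IP, the ingress host can be set at install time. A sketch of the equivalent values override follows; the hostname is a placeholder, and the key names mirror the ingress.nginx.enabled and ingress.hostName options mentioned above&lt;/p&gt;

```yaml
# values.yaml sketch, usable as: helm install ... -f values.yaml
ingress:
  hostName: infisical.example.com   # placeholder hostname
  nginx:
    enabled: true
```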

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--45Dy2a6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5r0r7tfs0lzpzqg4y5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--45Dy2a6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5r0r7tfs0lzpzqg4y5a.png" alt="infisical-1" width="800" height="955"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter your email id and click the "Get Started" option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q59uprn9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xpl3e9wjxom5jxlvb8j5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q59uprn9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xpl3e9wjxom5jxlvb8j5.png" alt="infisical-2" width="800" height="955"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the details accordingly and click "Sign Up" option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--78wsTtJA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x4b51m4pthu3277h9s2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--78wsTtJA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x4b51m4pthu3277h9s2o.png" alt="infisical-3" width="800" height="955"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once you sign up, you will need to download the "Emergency Kit" and save it somewhere safe. If you ever get locked out of your account, you can use this emergency kit to regain access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XccY7bDG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u51rkcoukpsilgiz156q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XccY7bDG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u51rkcoukpsilgiz156q.png" alt="infisical-4" width="800" height="955"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now we are redirected to the homepage of Infisical&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vpFfHxPN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aa0v2rlu7dr47n522tp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vpFfHxPN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aa0v2rlu7dr47n522tp2.png" alt="infisical-5" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a new project by clicking the "Add New Project" button from the dashboard and name your project "MyApp"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mFgo3Mj2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avtdlr9ebl3t925n6yvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mFgo3Mj2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avtdlr9ebl3t925n6yvq.png" alt="infisical-6" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once our project is created, we will get an interface like the one below, where we can see different environments such as Development, Staging and Production.
We are going to add the secrets in the Development environment by clicking the "Go to Development" option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RD8Q3o0o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71ckjq96w3z5ifziyhbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RD8Q3o0o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71ckjq96w3z5ifziyhbx.png" alt="infisical-7" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can copy secrets from other environments, upload env files etc. We can create a new secret by clicking the "Add a new secret" option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xK-r6LP9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tpggde35rcpjvwvcghom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xK-r6LP9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tpggde35rcpjvwvcghom.png" alt="infisical-8" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the required secrets and save the changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5kQifLvU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3hu9njkrrhgtuqri1ysc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5kQifLvU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3hu9njkrrhgtuqri1ysc.png" alt="infisical-9" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets Operator Setup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;First, we need to generate a Service Token from our project settings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2p0nyc-g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ps4xnmgt8s0dzmpxkp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2p0nyc-g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ps4xnmgt8s0dzmpxkp1.png" alt="infisical-10" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the "Create token" option and enter a name for the service token.
Select the environment, secrets path, expiration and permissions according to your use case and click the "Create" option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gb2c5idV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/db61e69tn4gt9apxod3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gb2c5idV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/db61e69tn4gt9apxod3d.png" alt="infisical-11" width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once the service token is created, copy and save it somewhere safe. We need this token to configure the secrets operator&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--scaWeIgo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dfuhpwn75fp1jtad8zjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--scaWeIgo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dfuhpwn75fp1jtad8zjd.png" alt="infisical-12" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Back in our Kubernetes cluster, we need to install and configure the Infisical secrets operator to sync our secrets to the cluster&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; infisical infisical-secrets-operator infisical-helm-charts/secrets-operator

NAME: infisical-secrets-operator
LAST DEPLOYED: Thu Aug 17 21:05:56 2023
NAMESPACE: infisical
STATUS: deployed
REVISION: 1
TEST SUITE: None
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the infisical namespace; we can see a secrets controller manager pod has been created and is up and running&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; infisical get pods

NAME                                                  READY   STATUS    RESTARTS   AGE
infisical-frontend-8588c9f65-sxz8d                    1/1     Running   0          84m
infisical-frontend-8588c9f65-nsnjr                    1/1     Running   0          84m
mongodb-0                                             1/1     Running   0          84m
infisical-backend-6777b66f58-79d84                    1/1     Running   0          84m
infisical-backend-6777b66f58-g2fwb                    1/1     Running   0          84m
infisical-ingress-nginx-controller-7785998dc6-lqxgk   1/1     Running   0          84m
infisical-secre-controller-manager-56c6f9b6d-lqk2v    2/2     Running   0          100s
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new namespace for our application&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create ns myapp

namespace/myapp created
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a Kubernetes secret containing the Service Token&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; myapp create secret generic myapp-service-token &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;infisicalToken&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;st.64de3f83a96c27c805827382.a7acb98a535125353af9135009f9b974.3d41003562f52a730823347dd4a01f96

secret/myapp-service-token created
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
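&lt;p&gt;The same token secret can also be created declaratively. A minimal sketch of an equivalent manifest (using &lt;code&gt;stringData&lt;/code&gt; so the token does not need manual base64 encoding; the token value is the example from the command above):&lt;/p&gt;

```yaml
# Equivalent to the `kubectl create secret generic` command above.
# stringData lets us supply the value in plain text; Kubernetes
# base64-encodes it into .data on admission.
apiVersion: v1
kind: Secret
metadata:
  name: myapp-service-token
  namespace: myapp
type: Opaque
stringData:
  infisicalToken: st.64de3f83a96c27c805827382.a7acb98a535125353af9135009f9b974.3d41003562f52a730823347dd4a01f96
```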



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```bash
$ kubectl -n myapp get secrets

NAME                  TYPE     DATA   AGE
myapp-service-token   Opaque   1      25s
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Now we can sync our secrets to our cluster by creating an InfisicalSecret custom resource&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: secrets.infisical.com/v1alpha1
kind: InfisicalSecret
metadata:
  name: myapp-infisical-secret
  namespace: myapp
spec:
  hostAPI: http://infisical-backend.infisical.svc.cluster.local:4000/api
  resyncInterval: 10
  authentication:
    serviceToken:
      serviceTokenSecretReference:
        secretName: myapp-service-token
        secretNamespace: myapp
      secretsScope:
        envSlug: dev
        secretsPath: &lt;span class="s2"&gt;"/"&lt;/span&gt;
  managedSecretReference:
    secretName: myapp-managed-secret
    secretNamespace: myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```bash
$ kubectl apply -f myapp-infisical-secret.yml
infisicalsecret.secrets.infisical.com/myapp-infisical-secret created
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Once it's created, check the status of the custom resource&lt;/p&gt;

&lt;p&gt;From the status, we can see our secrets are synced to our cluster&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; myapp get infisicalsecrets

NAME                     AGE
myapp-infisical-secret   42s
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```bash
$ kubectl -n myapp describe infisicalsecrets myapp-infisical-secret

Name:         myapp-infisical-secret
Namespace:    myapp
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;
API Version:  secrets.infisical.com/v1alpha1
Kind:         InfisicalSecret
Metadata:
  Creation Timestamp:  2023-08-17T16:07:04Z
  Generation:          1
  Resource Version:    4927
  UID:                 bb156757-1528-4827-afe9-09916fcd4372
Spec:
  Authentication:
    Service Token:
      Secrets Scope:
        Env Slug:      dev
        Secrets Path:  /
      Service Token Secret Reference:
        Secret Name:       myapp-service-token
        Secret Namespace:  myapp
  Host API:                http://infisical-backend.infisical.svc.cluster.local:4000/api
  Managed Secret Reference:
    Secret Name:       myapp-managed-secret
    Secret Namespace:  myapp
  Resync Interval:     10
Status:
  Conditions:
    Last Transition Time:  2023-08-17T16:07:04Z
    Message:               Infisical controller has located the Infisical token in provided Kubernetes secret
    Reason:                OK
    Status:                True
    Type:                  secrets.infisical.com/LoadedInfisicalToken
    Last Transition Time:  2023-08-17T16:07:05Z
    Message:               Infisical controller has started syncing your secrets
    Reason:                OK
    Status:                True
    Type:                  secrets.infisical.com/ReadyToSyncSecrets
    Last Transition Time:  2023-08-17T16:07:05Z
    Message:               Infisical has found 0 deployments which are ready to be auto redeployed when secrets change
    Reason:                OK
    Status:                True
    Type:                  secrets.infisical.com/AutoRedeployReady
Events:                    &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;We can verify the secrets have been synced to our cluster by checking the myapp-managed-secret&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; myapp get secrets myapp-managed-secret

NAME                   TYPE     DATA   AGE
myapp-managed-secret   Opaque   4      5m16s
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```bash
$ kubectl -n myapp describe secrets myapp-managed-secret

Name:         myapp-managed-secret
Namespace:    myapp
Labels:       &amp;lt;none&amp;gt;
Annotations:  secrets.infisical.com/version: W/"a5a-hBx83Z5L+nuphYXlN17htm8jAjo"

Type:  Opaque

Data
====
MYSQL_HOST:      9 bytes
MYSQL_PASSWORD:  13 bytes
MYSQL_PORT:      4 bytes
MYSQL_DATABASE:  13 bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
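&lt;p&gt;The describe output reports only byte counts because Kubernetes stores secret values base64-encoded. A minimal sketch of reading one value back (the kubectl line assumes the cluster and secret created above; the local round trip below it is runnable anywhere):&lt;/p&gt;

```shell
# Assumes the myapp-managed-secret created above:
# kubectl -n myapp get secret myapp-managed-secret \
#   -o jsonpath='{.data.MYSQL_HOST}' | base64 -d

# The decode step itself, demonstrated with a local round trip:
encoded=$(printf 'mysqlHost' | base64)        # what kubectl would return in .data
decoded=$(printf '%s' "$encoded" | base64 -d) # decode back to the plain value
echo "$decoded"                               # prints: mysqlHost
```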



&lt;h2&gt;
  
  
  Application Deployment
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Deploy a sample Nginx application using the below manifest file&lt;br&gt;&lt;br&gt;
The annotation &lt;code&gt;secrets.infisical.com/auto-reload: "true"&lt;/code&gt; ensures that the deployment is automatically redeployed when the managed secret changes&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp
  labels:
    app: myapp
  annotations: 
    secrets.infisical.com/auto-reload: &lt;span class="s2"&gt;"true"&lt;/span&gt;
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:1.25.2
        envFrom:
        - secretRef:
            name: myapp-managed-secret
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
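&lt;p&gt;If only a subset of keys is needed, individual values can be mapped instead of pulling in the whole secret with &lt;code&gt;envFrom&lt;/code&gt;. A sketch of the container spec fragment (same secret name as above; the variable names match the keys synced earlier):&lt;/p&gt;

```yaml
# Alternative to envFrom: map selected keys from the managed secret
# into named environment variables.
env:
- name: MYSQL_HOST
  valueFrom:
    secretKeyRef:
      name: myapp-managed-secret
      key: MYSQL_HOST
- name: MYSQL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: myapp-managed-secret
      key: MYSQL_PASSWORD
```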



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```bash
$ kubectl apply -f .\myapp-deployment.yml
deployment.apps/myapp created
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Once our application is up and running, exec into the pod and list the environment variables to view our secrets&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; myapp get pods

NAME                     READY   STATUS    RESTARTS   AGE
myapp-77c586c9ff-5xsxv   1/1     Running   0          48s
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```bash
$ kubectl -n myapp exec -it myapp-77c586c9ff-5xsxv -- bash

root@myapp-77c586c9ff-5xsxv:/# env | grep -i MYSQL
MYSQL_PORT=3306
MYSQL_PASSWORD=mysqlPassword
MYSQL_HOST=mysqlHost
MYSQL_DATABASE=mysqlDatabase
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;That's all for now&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://infisical.com/docs/integrations/platforms/kubernetes"&gt;https://infisical.com/docs/integrations/platforms/kubernetes&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.rancherdesktop.io/getting-started/installation/"&gt;https://docs.rancherdesktop.io/getting-started/installation/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>infisical</category>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>secretops</category>
    </item>
    <item>
      <title>Uptime Kuma on Amazon EC2 (Ubuntu) - An Open-source Monitoring Tool</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Wed, 03 May 2023 15:01:07 +0000</pubDate>
      <link>https://dev.to/iamunnip/uptime-kuma-on-amazon-ec2-ubuntu-an-open-source-monitoring-tool-59k4</link>
      <guid>https://dev.to/iamunnip/uptime-kuma-on-amazon-ec2-ubuntu-an-open-source-monitoring-tool-59k4</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this article, we will look at how we can install and configure Uptime Kuma, an open-source monitoring tool, on an Amazon EC2 Ubuntu instance&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An open-source self-hosted monitoring tool&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitor uptime for HTTP, HTTPS, DNS etc&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SSL certificate information&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Notifications via Email, Slack, Discord etc&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;20-second check interval&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Support for proxy and multi-factor authentication&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Setup an EC2 instance of type t2.micro&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ubuntu 22.04 LTS as AMI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;10 GB of hard disk space&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open port 22 for SSH and 3001 for Uptime Kuma&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install and configure Docker&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Need a temporary webpage with SSL enabled&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Log in to your EC2 instance&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; uptime-kuma.pem ubuntu@54.234.8.87
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a Docker volume for Uptime Kuma&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker volume create uptime-kuma

&lt;span class="nv"&gt;$ &lt;/span&gt;docker volume &lt;span class="nb"&gt;ls
&lt;/span&gt;DRIVER    VOLUME NAME
&lt;span class="nb"&gt;local     &lt;/span&gt;uptime-kuma
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start the container and verify the status&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker container run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always &lt;span class="nt"&gt;-p&lt;/span&gt; 3001:3001 &lt;span class="nt"&gt;-v&lt;/span&gt; uptime-kuma:/app/data &lt;span class="nt"&gt;--name&lt;/span&gt; uptime-kuma louislam/uptime-kuma:1
Unable to find image &lt;span class="s1"&gt;'louislam/uptime-kuma:1'&lt;/span&gt; locally
1: Pulling from louislam/uptime-kuma
3689b8de819b: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;4178a276654a: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;b46162c13de5: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;4d3ac03f17d8: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;b935255dae7e: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;792f129a81f3: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;4110002867ba: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;390f8662c74f: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;9dd174cf6e30: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;4f4fb700ef54: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;703bad70ccf2: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;Digest: sha256:cf61d3262b29e1c48cc2ac284c9264227bbc46168f408e5f4c4d6301f0629e41
Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;louislam/uptime-kuma:1
fcdfcc5a5d1470ca3b1dd1e23d29a4975d909e2a5ab2f53e1e2bef0fa4a58665
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker container &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt;
CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                        PORTS                                       NAMES
fcdfcc5a5d14   louislam/uptime-kuma:1   &lt;span class="s2"&gt;"/usr/bin/dumb-init …"&lt;/span&gt;   About a minute ago   Up About a minute &lt;span class="o"&gt;(&lt;/span&gt;healthy&lt;span class="o"&gt;)&lt;/span&gt;   0.0.0.0:3001-&amp;gt;3001/tcp, :::3001-&amp;gt;3001/tcp   uptime-kuma
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
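&lt;p&gt;The same container can also be described with Docker Compose. A minimal sketch equivalent to the run command above (the file name &lt;code&gt;docker-compose.yml&lt;/code&gt; and the use of Compose at all are assumptions; the article itself uses plain &lt;code&gt;docker container run&lt;/code&gt;):&lt;/p&gt;

```yaml
# Equivalent to: docker container run -d --restart=always \
#   -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: always
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma:/app/data
volumes:
  uptime-kuma:
```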



&lt;p&gt;Open &lt;a href="http://54.234.8.87:3001"&gt;http://54.234.8.87:3001&lt;/a&gt; in your browser&lt;br&gt;&lt;br&gt;
Initially, we need to set up a username and password for accessing the dashboard&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sOCAF0Iu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bza50a9q2qyaiwx3n4t1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sOCAF0Iu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bza50a9q2qyaiwx3n4t1.png" alt="uptimekuma-01" width="493" height="805"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once done, we will be redirected to the dashboard&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3e6-dJcG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkfbowrf1lbval1syhk8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3e6-dJcG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkfbowrf1lbval1syhk8.png" alt="uptimekuma-02" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;I have already configured an Nginx website and enabled SSL using Let’s Encrypt&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vZKUOZyd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/idy66jr9iy30765rjuav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vZKUOZyd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/idy66jr9iy30765rjuav.png" alt="uptimekuma-03" width="651" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to set up &lt;strong&gt;Notifications&lt;/strong&gt; to get notified whenever a host is down or a certificate is going to expire&lt;/p&gt;

&lt;p&gt;If we are using Email (SMTP) as the notification type, then we need to do some configuration in our Gmail account&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enable multi-factor authentication&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create app password&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From the dashboard, navigate to &lt;strong&gt;Settings&lt;/strong&gt; → &lt;strong&gt;Notifications&lt;/strong&gt; → &lt;strong&gt;Setup Notification&lt;/strong&gt; and configure as below to receive email notifications&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Notification Type&lt;/strong&gt;: Email (SMTP)&lt;br&gt;
&lt;strong&gt;Friendly Name&lt;/strong&gt;: Alerts&lt;br&gt;
&lt;strong&gt;Hostname&lt;/strong&gt;: &lt;a href="http://smtp.gmail.com"&gt;smtp.gmail.com&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Port&lt;/strong&gt;: 587&lt;br&gt;
&lt;strong&gt;Security:&lt;/strong&gt; None/STARTTLS (25, 587)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c5Q4KNpK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rk59oh70zga83fcmo2cr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c5Q4KNpK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rk59oh70zga83fcmo2cr.png" alt="uptimekuma-04" width="748" height="750"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Username&lt;/strong&gt;: &amp;lt;your-email-id&amp;gt;&lt;br&gt;
&lt;strong&gt;Password&lt;/strong&gt;: &amp;lt;app-password&amp;gt;&lt;br&gt;
&lt;strong&gt;From Email&lt;/strong&gt;: &amp;lt;your-email-id&amp;gt;&lt;br&gt;
&lt;strong&gt;To Email&lt;/strong&gt;: &amp;lt;your-email-id&amp;gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KnVyGqYV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8v2pr5omx5verbxfgwcl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KnVyGqYV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8v2pr5omx5verbxfgwcl.png" alt="uptimekuma-05" width="724" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom Subject&lt;/strong&gt;: &lt;code&gt;{{NAME}} {{STATUS}}&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Default enabled&lt;/strong&gt;: Enable&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KHh2KUBj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m6xz87fsbfvj1y2in3vr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KHh2KUBj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m6xz87fsbfvj1y2in3vr.png" alt="uptimekuma-06" width="731" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the required information is entered, click the &lt;strong&gt;Test&lt;/strong&gt; and &lt;strong&gt;Save&lt;/strong&gt; buttons and you will receive a test mail in your inbox&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jZrkRnnb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qt0w4mmz90xow2137fj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jZrkRnnb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qt0w4mmz90xow2137fj2.png" alt="uptimekuma-07" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's add our URL to monitor by clicking the &lt;strong&gt;Add New Monitor&lt;/strong&gt; button, filling in the details and clicking the &lt;strong&gt;Save&lt;/strong&gt; button&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Friendly Name&lt;/strong&gt;: &lt;a href="http://158b5452c43c.mylabserver.com"&gt;158b5452c43c.mylabserver.com&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;URL&lt;/strong&gt;: &lt;a href="https://158b5452c43c.mylabserver.com"&gt;https://158b5452c43c.mylabserver.com&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Heartbeat Interval&lt;/strong&gt;: 20&lt;br&gt;
&lt;strong&gt;Certificate Expiry Notification&lt;/strong&gt;: Enable&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Yf2__Rg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e0v8gw89pamxfnqoydmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Yf2__Rg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e0v8gw89pamxfnqoydmd.png" alt="uptimekuma-08" width="800" height="745"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the dashboard, we can see the uptime, response time of our URL and certificate expiry date&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--177kmDME--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vpkyogg6ksutej4pj4bx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--177kmDME--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vpkyogg6ksutej4pj4bx.png" alt="uptimekuma-09" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now simulate downtime by stopping the Nginx service; the URL status changes to DOWN and you will receive an email in your inbox&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tDYZpfRV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2la3izfn9nm8wwf6cfbw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tDYZpfRV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2la3izfn9nm8wwf6cfbw.png" alt="uptimekuma-10" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TdaRi0mP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/40yr2f4vyknn9n1cxnnv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TdaRi0mP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/40yr2f4vyknn9n1cxnnv.png" alt="uptimekuma-11" width="800" height="102"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's start the Nginx service again; the URL status changes back to UP and you will receive another email in your inbox&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--efBTwwgg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrrjp5eg0bgc14tdqizt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--efBTwwgg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrrjp5eg0bgc14tdqizt.png" alt="uptimekuma-12" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Hw9Xa6N1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwi7tvfoec7bc74ab0rn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Hw9Xa6N1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwi7tvfoec7bc74ab0rn.png" alt="uptimekuma-13" width="800" height="101"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from the status above, the Let’s Encrypt certificate will expire in 89 days.&lt;br&gt;&lt;br&gt;
Let's configure Uptime Kuma to receive an alert for certificate expiry&lt;/p&gt;

&lt;p&gt;Navigate to &lt;strong&gt;Settings&lt;/strong&gt; → &lt;strong&gt;Notifications&lt;/strong&gt; → &lt;strong&gt;TLS Certificate Expiry&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
By default, we will get certificate expiry notifications at 21, 14 and 7 days, but we are going to configure it as 89 days because my certificate will expire in 89 days. Click the &lt;strong&gt;Save&lt;/strong&gt; button and wait to receive an email in your inbox&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--87zZpqNo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/unptihfn8afgd31igee4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--87zZpqNo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/unptihfn8afgd31igee4.png" alt="uptimekuma-14" width="747" height="702"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fFlzBGbw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/du2cbj2ntz2k08ana4f6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFlzBGbw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/du2cbj2ntz2k08ana4f6.png" alt="uptimekuma-15" width="800" height="90"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/louislam/uptime-kuma"&gt;https://github.com/louislam/uptime-kuma&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mariushosting.com/synology-activate-gmail-smtp-for-docker-containers/"&gt;https://mariushosting.com/synology-activate-gmail-smtp-for-docker-containers/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>aws</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Setting up IBM Db2 Community Edition on Amazon EC2 (Ubuntu)</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Wed, 03 May 2023 13:51:39 +0000</pubDate>
      <link>https://dev.to/iamunnip/setting-up-ibm-db2-community-edition-on-amazon-ec2-ubuntu-3fk3</link>
      <guid>https://dev.to/iamunnip/setting-up-ibm-db2-community-edition-on-amazon-ec2-ubuntu-3fk3</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this article, we will look at how we can install IBM Db2 Community Edition on an Amazon EC2 Ubuntu instance&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Db2 is a cloud-native database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Low latency transactions and highly resilient&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports structured and unstructured data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Entry-level edition of the Db2 data server for the developer and partner community&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Available for Linux, Windows, and AIX and also available as a Docker image&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports all core Db2 features&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Up to 4 cores and 16 GB RAM&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Always-on security&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Db2 Community support&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Setup an EC2 instance of type t2.medium&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ubuntu 20.04 LTS as AMI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;30 GB of hard disk space&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open port 22 for SSH and 25000 for Db2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an IBM account for downloading the &lt;a href="https://www.ibm.com/account/reg/signup?formid=urx-33669"&gt;Db2 community edition&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your EC2 instance and verify the distribution version
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; ibm-db2.pem ubuntu@54.234.180.34
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;lsb_release &lt;span class="nt"&gt;-a&lt;/span&gt;
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.6 LTS
Release:        20.04
Codename:       focal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Download and extract the Db2 community edition tarball on the EC2 instance. You can download the community edition from the link in the prerequisites section.&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Download the latest IBM Db2 Linux (x64) version on your local machine&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the download is completed, copy the downloaded file to your EC2 instance&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;scp &lt;span class="nt"&gt;-i&lt;/span&gt; ibm-db2.pem v11.5.8_linuxx64_server_dec.tar.gz ubuntu@54.234.180.34:/home/ubuntu

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;
/home/ubuntu

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;v11.5.8_linuxx64_server_dec.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Extract the tarball in your home directory
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzvf&lt;/span&gt; v11.5.8_linuxx64_server_dec.tar.gz

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;server_dec  v11.5.8_linuxx64_server_dec.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Move to the extracted directory and execute the &lt;code&gt;db2prereqcheck&lt;/code&gt; command.
This will check the prerequisites for installing Db2
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;server_dec

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;db2  db2_deinstall  db2_install  db2checkCOL.tar.gz  db2checkCOL_readme.txt  db2ckupgrade  db2ls  db2prereqcheck  db2setup  installFixPack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./db2prereqcheck

&lt;span class="o"&gt;==========================================================================&lt;/span&gt;

Sun Apr 30 04:51:27 2023
Checking prerequisites &lt;span class="k"&gt;for &lt;/span&gt;DB2 installation. Version &lt;span class="s2"&gt;"11.5.8.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Operating system &lt;span class="s2"&gt;"Linux"&lt;/span&gt;

Validating &lt;span class="s2"&gt;"kernel level "&lt;/span&gt; ...
   Required minimum operating system kernel level: &lt;span class="s2"&gt;"3.10.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
   Actual operating system kernel level: &lt;span class="s2"&gt;"5.15.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
   Requirement matched.

Validating &lt;span class="s2"&gt;"Linux distribution "&lt;/span&gt; ...
   Required minimum &lt;span class="s2"&gt;"UBUNTU"&lt;/span&gt; version: &lt;span class="s2"&gt;"16.04"&lt;/span&gt;
   Actual version: &lt;span class="s2"&gt;"20.04"&lt;/span&gt;
   Requirement matched.

Validating &lt;span class="s2"&gt;"ksh symbolic link"&lt;/span&gt; ...
   WARNING : Requirement not matched.
ERROR:
   The &lt;span class="s1"&gt;'strings'&lt;/span&gt; utility that is used to detect prerequisite libraries
   is not present on this system.  Please use your package or software
   manager to &lt;span class="nb"&gt;install &lt;/span&gt;the GNU Binary Utilities.

Validating &lt;span class="s2"&gt;"C++ Library version "&lt;/span&gt; ...
   Required minimum C++ library: &lt;span class="s2"&gt;"libstdc++.so.6"&lt;/span&gt;
   Standard C++ library is located &lt;span class="k"&gt;in &lt;/span&gt;the following directory: &lt;span class="s2"&gt;"/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
DBT3512W  The db2prereqcheck utility failed to determine the currently-installed version of the C++ standard library, libstdc++.
   Requirement matched.

Validating &lt;span class="s2"&gt;"libaio.so version "&lt;/span&gt; ...
DBT3553I  The db2prereqcheck utility successfully loaded the libaio.so.1 file.
   Requirement matched.

Validating &lt;span class="s2"&gt;"libnuma.so version "&lt;/span&gt; ...
DBT3610I  The db2prereqcheck utility successfully loaded the libnuma.so.1 file.
   Requirement matched.

Validating &lt;span class="s2"&gt;"/lib/i386-linux-gnu/libpam.so*"&lt;/span&gt; ...
   DBT3514W  The db2prereqcheck utility failed to find the following 32-bit library file: &lt;span class="s2"&gt;"/lib/i386-linux-gnu/libpam.so*"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
   WARNING : Requirement not matched.
Requirement not matched &lt;span class="k"&gt;for &lt;/span&gt;DB2 database &lt;span class="s2"&gt;"Server"&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; Version: &lt;span class="s2"&gt;"11.5.8.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
Summary of prerequisites that are not met on the current system:
   DBT3514W  The db2prereqcheck utility failed to find the following 32-bit library file: &lt;span class="s2"&gt;"/lib/i386-linux-gnu/libpam.so*"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;


DBT3619W  The db2prereqcheck utility detected that ksh is not linked to ksh or ksh93. This is required &lt;span class="k"&gt;for &lt;/span&gt;Db2 High Availability Feature with Tivoli SA MP.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;From the above output, we can see that some requirement checks failed, and we need to fix them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable 32-bit architecture on your instance and install the required packages&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;--add-architecture&lt;/span&gt; i386

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; ksh ksh93 lib32stdc++6 libpam0g:i386 binutils
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
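As a quick sanity check after enabling multiarch, you can confirm that i386 now appears among the foreign architectures. The sketch below keeps the check as a pure function over text; in practice you would feed it the output of `dpkg --print-foreign-architectures`.

```shell
# Sketch: succeed only if "i386" is present in the given architecture
# list (one architecture per line).
has_i386() {
  printf '%s\n' "$1" | grep -qx 'i386'
}

# In practice: has_i386 "$(dpkg --print-foreign-architectures)"
```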



&lt;ul&gt;
&lt;li&gt;Once the required packages are installed, run the &lt;code&gt;db2prereqcheck&lt;/code&gt; command again and verify that all the requirements are matched
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./db2prereqcheck

&lt;span class="o"&gt;==========================================================================&lt;/span&gt;

Sun Apr 30 04:55:30 2023
Checking prerequisites &lt;span class="k"&gt;for &lt;/span&gt;DB2 installation. Version &lt;span class="s2"&gt;"11.5.8.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Operating system &lt;span class="s2"&gt;"Linux"&lt;/span&gt;

Validating &lt;span class="s2"&gt;"kernel level "&lt;/span&gt; ...
   Required minimum operating system kernel level: &lt;span class="s2"&gt;"3.10.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
   Actual operating system kernel level: &lt;span class="s2"&gt;"5.15.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
   Requirement matched.

Validating &lt;span class="s2"&gt;"Linux distribution "&lt;/span&gt; ...
   Required minimum &lt;span class="s2"&gt;"UBUNTU"&lt;/span&gt; version: &lt;span class="s2"&gt;"16.04"&lt;/span&gt;
   Actual version: &lt;span class="s2"&gt;"20.04"&lt;/span&gt;
   Requirement matched.

Validating &lt;span class="s2"&gt;"ksh symbolic link"&lt;/span&gt; ...
   Requirement matched.

Validating &lt;span class="s2"&gt;"C++ Library version "&lt;/span&gt; ...
   Required minimum C++ library: &lt;span class="s2"&gt;"libstdc++.so.6"&lt;/span&gt;
   Standard C++ library is located &lt;span class="k"&gt;in &lt;/span&gt;the following directory: &lt;span class="s2"&gt;"/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
   Actual C++ library: &lt;span class="s2"&gt;"CXXABI_1.3.1"&lt;/span&gt;
   Requirement matched.


Validating &lt;span class="s2"&gt;"32 bit version of "&lt;/span&gt;libstdc++.so.6&lt;span class="s2"&gt;" "&lt;/span&gt; ...
   Found the 64 bit &lt;span class="s2"&gt;"/lib/x86_64-linux-gnu/libstdc++.so.6"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;the following directory &lt;span class="s2"&gt;"/lib/x86_64-linux-gnu"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
   Found the 32 bit &lt;span class="s2"&gt;"/lib32/libstdc++.so.6"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;the following directory &lt;span class="s2"&gt;"/lib32"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
   Requirement matched.

Validating &lt;span class="s2"&gt;"libaio.so version "&lt;/span&gt; ...
DBT3553I  The db2prereqcheck utility successfully loaded the libaio.so.1 file.
   Requirement matched.

Validating &lt;span class="s2"&gt;"libnuma.so version "&lt;/span&gt; ...
DBT3610I  The db2prereqcheck utility successfully loaded the libnuma.so.1 file.
   Requirement matched.

Validating &lt;span class="s2"&gt;"/lib/i386-linux-gnu/libpam.so*"&lt;/span&gt; ...
   Requirement matched.
DBT3533I  The db2prereqcheck utility has confirmed that all installation prerequisites were met.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Multiple installation methods are available for Db2, each suited to specific use cases&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We are going to install Db2 using the &lt;code&gt;db2_install&lt;/code&gt; command as the root user and wait for the installation to complete&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./db2_install
Read the license agreement file &lt;span class="k"&gt;in &lt;/span&gt;the db2/license directory.

&lt;span class="k"&gt;***********************************************************&lt;/span&gt;
To accept those terms, enter &lt;span class="s2"&gt;"yes"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Otherwise, enter &lt;span class="s2"&gt;"no"&lt;/span&gt; to cancel the &lt;span class="nb"&gt;install &lt;/span&gt;process. &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no]
&lt;span class="nb"&gt;yes


&lt;/span&gt;Default directory &lt;span class="k"&gt;for &lt;/span&gt;installation of products - /opt/ibm/db2/V11.5

&lt;span class="k"&gt;***********************************************************&lt;/span&gt;
Install into default directory &lt;span class="o"&gt;(&lt;/span&gt;/opt/ibm/db2/V11.5&lt;span class="o"&gt;)&lt;/span&gt; ? &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no]
&lt;span class="nb"&gt;yes


&lt;/span&gt;Specify one of the following keywords to &lt;span class="nb"&gt;install &lt;/span&gt;DB2 products.

  SERVER
  CONSV
  CLIENT
  RTCL

Enter &lt;span class="s2"&gt;"help"&lt;/span&gt; to redisplay product names.

Enter &lt;span class="s2"&gt;"quit"&lt;/span&gt; to exit.

&lt;span class="k"&gt;***********************************************************&lt;/span&gt;
SERVER
&lt;span class="k"&gt;***********************************************************&lt;/span&gt;
Do you want to &lt;span class="nb"&gt;install &lt;/span&gt;the DB2 pureScale Feature? &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no]
no
DB2 installation is being initialized.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once the installation is completed, you will see the message below, and we can check the installation log file for post-installation steps
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;The execution completed successfully.

For more information see the DB2 installation log at
&lt;span class="s2"&gt;"/tmp/db2_install.log.5295"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
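When scripting the install, the log path in that message can be scraped rather than copied by hand. A small sketch; the `sed` pattern assumes the message format shown above.

```shell
# Sketch: extract the /tmp/db2_install.log.N path from the completion
# message read on stdin.
install_log_path() {
  sed -n 's/.*"\(\/tmp\/db2_install\.log\.[0-9]*\)".*/\1/p'
}
```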





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /tmp/db2_install.log.5295
Post-installation instructions
&lt;span class="nt"&gt;-------------------------------&lt;/span&gt;

Required steps:
Set up a DB2 instance to work with DB2.

Optional steps:
Notification SMTP server has not been specified. Notifications cannot be sent to contacts &lt;span class="k"&gt;in &lt;/span&gt;your contact list &lt;span class="k"&gt;until &lt;/span&gt;this is specified. For more information see the DB2 administration documentation.

To validate your installation files, instance, and database functionality, run the Validation Tool, /opt/ibm/db2/V11.5/bin/db2val. For more information, see &lt;span class="s2"&gt;"db2val"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;the DB2 Information Center.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
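If you only care about the mandatory follow-ups, the "Required steps" block can be pulled out of the log on its own. A minimal sketch, assuming the log layout shown above (section heading followed by lines, terminated by a blank line):

```shell
# Sketch: print the "Required steps:" block from an install log on stdin,
# stopping at the first blank line after it.
required_steps() {
  awk '/^Required steps:/{grab=1} grab==1{if($0==""){exit} print}'
}
```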



&lt;ul&gt;
&lt;li&gt;Let’s validate our installation by executing the &lt;code&gt;db2val&lt;/code&gt; command and verifying the log file.
We can see that everything is OK
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; /opt/ibm/db2/V11.5/bin/db2val
DBI1379I  The db2val &lt;span class="nb"&gt;command &lt;/span&gt;is running. This can take several minutes.

DBI1335I  Installation file validation &lt;span class="k"&gt;for &lt;/span&gt;the DB2 copy installed at
      /opt/ibm/db2/V11.5 was successful.

DBI1343I  The db2val &lt;span class="nb"&gt;command &lt;/span&gt;completed successfully. For details, see
      the log file /tmp/db2val-230430_051357.log.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /tmp/db2val-230430_051357.log
Installation file validation &lt;span class="k"&gt;for &lt;/span&gt;the DB2 copy installed at &lt;span class="s2"&gt;"/opt/ibm/db2/V11.5"&lt;/span&gt; starts.

Task 1: Validating Installation file sets.
Status 1 : Success

Task 2: Validating embedded runtime path &lt;span class="k"&gt;for &lt;/span&gt;DB2 executables and libraries.
Status 2 : Success

Task 3: Validating the accessibility to the installation path.
Status 3 : Success

Task 4: Validating the accessibility to the /etc/services file.
Status 4 : Success

DBI1335I  Installation file validation &lt;span class="k"&gt;for &lt;/span&gt;the DB2 copy installed at
      /opt/ibm/db2/V11.5 was successful.

Installation file validation &lt;span class="k"&gt;for &lt;/span&gt;the DB2 copy installed at &lt;span class="s2"&gt;"/opt/ibm/db2/V11.5"&lt;/span&gt; ends.

DBI1343I  The db2val &lt;span class="nb"&gt;command &lt;/span&gt;completed successfully. For details, see
      the log file /tmp/db2val-230430_051357.log.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Post Installation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create required groups for Db2
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;groupadd &lt;span class="nt"&gt;-g&lt;/span&gt; 998 db2iadm1

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;groupadd &lt;span class="nt"&gt;-g&lt;/span&gt; 997 db2fsdm1

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;groupadd &lt;span class="nt"&gt;-g&lt;/span&gt; 996 dasadm1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
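On a machine that has been partially set up before, the `groupadd` commands will fail for groups that already exist. The sketch below shows one way to find which of the three Db2 groups still need creating; it takes the existing group names as text so the logic itself stays testable, and in practice you would feed it `cut -d: -f1 /etc/group`.

```shell
# Sketch: given existing group names (one per line), print the Db2
# groups from the article that are still missing.
missing_groups() {
  existing="$1"
  for g in db2iadm1 db2fsdm1 dasadm1; do
    if printf '%s\n' "$existing" | grep -qx "$g"; then
      :
    else
      echo "$g"
    fi
  done
}

# In practice: missing_groups "$(cut -d: -f1 /etc/group)"
```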



&lt;ul&gt;
&lt;li&gt;Create required users for Db2
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;-u&lt;/span&gt; 1004 &lt;span class="nt"&gt;-g&lt;/span&gt; db2iadm1 &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; /home/db2inst1 db2inst1

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;-u&lt;/span&gt; 1003 &lt;span class="nt"&gt;-g&lt;/span&gt; db2fsdm1 &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; /home/db2fenc1 db2fenc1

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;-u&lt;/span&gt; 1002 &lt;span class="nt"&gt;-g&lt;/span&gt; dasadm1 &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; /home/dasusr1 dasusr1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Set passwords for users
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;passwd db2inst1
New password:
Retype new password:
passwd: password updated successfully

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;passwd db2fenc1
New password:
Retype new password:
passwd: password updated successfully

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;passwd dasusr1
New password:
Retype new password:
passwd: password updated successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create an instance for Db2 using the &lt;code&gt;db2icrt&lt;/code&gt; command and check the log file for information about connecting to the database
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; /opt/ibm/db2/V11.5/instance/db2icrt &lt;span class="nt"&gt;-a&lt;/span&gt; server &lt;span class="nt"&gt;-u&lt;/span&gt; db2fenc1 db2inst1
DBI1446I  The db2icrt &lt;span class="nb"&gt;command &lt;/span&gt;is running.


DB2 installation is being initialized.

 Total number of tasks to be performed: 4
Total estimated &lt;span class="nb"&gt;time &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;all tasks to be performed: 309 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt;

Task &lt;span class="c"&gt;#1 start&lt;/span&gt;
Description: Setting default global profile registry variables
Estimated &lt;span class="nb"&gt;time &lt;/span&gt;1 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt;
Task &lt;span class="c"&gt;#1 end&lt;/span&gt;

Task &lt;span class="c"&gt;#2 start&lt;/span&gt;
Description: Initializing instance list
Estimated &lt;span class="nb"&gt;time &lt;/span&gt;5 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt;
Task &lt;span class="c"&gt;#2 end&lt;/span&gt;

Task &lt;span class="c"&gt;#3 start&lt;/span&gt;
Description: Configuring DB2 instances
Estimated &lt;span class="nb"&gt;time &lt;/span&gt;300 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt;
Task &lt;span class="c"&gt;#3 end&lt;/span&gt;

Task &lt;span class="c"&gt;#4 start&lt;/span&gt;
Description: Updating global profile registry
Estimated &lt;span class="nb"&gt;time &lt;/span&gt;3 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt;
Task &lt;span class="c"&gt;#4 end&lt;/span&gt;

The execution completed successfully.

For more information see the DB2 installation log at &lt;span class="s2"&gt;"/tmp/db2icrt.log.85927"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
DBI1070I  Program db2icrt completed successfully.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /tmp/db2icrt.log.85927

Required steps:
You can connect to the DB2 instance &lt;span class="s2"&gt;"db2inst1"&lt;/span&gt; using the port number &lt;span class="s2"&gt;"25000"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Record it &lt;span class="k"&gt;for &lt;/span&gt;future reference.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
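The port number recorded in that log can also be scraped programmatically, which is handy when a later script needs it. A minimal sketch; the `sed` expression keys off the exact wording of the message shown above.

```shell
# Sketch: print the instance port from a db2icrt log message on stdin.
instance_port() {
  sed -n 's/.*using the port number "\([0-9]*\)".*/\1/p'
}
```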



&lt;ul&gt;
&lt;li&gt;Let’s switch to the &lt;code&gt;db2inst1&lt;/code&gt; user and start the database instance using the below commands
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;su - db2inst1

&lt;span class="nv"&gt;$ &lt;/span&gt;db2ls

Install Path                       Level   Fix Pack   Special Install Number   Install Date                  Installer UID
&lt;span class="nt"&gt;---------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
/opt/ibm/db2/V11.5               11.5.8.0        0                            Sun Apr 30 05:08:04 2023 UTC             0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; sqllib/userprofile

&lt;span class="nv"&gt;$ &lt;/span&gt;db2ilist
db2inst1

&lt;span class="nv"&gt;$ &lt;/span&gt;db2start
04/30/2023 05:36:30     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now that the database instance is started, we can connect to it
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;db2
&lt;span class="o"&gt;(&lt;/span&gt;c&lt;span class="o"&gt;)&lt;/span&gt; Copyright IBM Corporation 1993,2007
Command Line Processor &lt;span class="k"&gt;for &lt;/span&gt;DB2 Client 11.5.8.0

You can issue database manager commands and SQL statements from the &lt;span class="nb"&gt;command
&lt;/span&gt;prompt. For example:
    db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; connect to sample
    db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bind &lt;/span&gt;sample.bnd

For general &lt;span class="nb"&gt;help&lt;/span&gt;, &lt;span class="nb"&gt;type&lt;/span&gt;: ?.
For &lt;span class="nb"&gt;command help&lt;/span&gt;, &lt;span class="nb"&gt;type&lt;/span&gt;: ? &lt;span class="nb"&gt;command&lt;/span&gt;, where &lt;span class="nb"&gt;command &lt;/span&gt;can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help &lt;/span&gt;on the CATALOG DATABASE &lt;span class="nb"&gt;command&lt;/span&gt;
 ? CATALOG          &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help &lt;/span&gt;on all of the CATALOG commands.

To &lt;span class="nb"&gt;exit &lt;/span&gt;db2 interactive mode, &lt;span class="nb"&gt;type &lt;/span&gt;QUIT at the &lt;span class="nb"&gt;command &lt;/span&gt;prompt. Outside
interactive mode, all commands must be prefixed with &lt;span class="s1"&gt;'db2'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
To list the current &lt;span class="nb"&gt;command &lt;/span&gt;option settings, &lt;span class="nb"&gt;type &lt;/span&gt;LIST COMMAND OPTIONS.

For more detailed &lt;span class="nb"&gt;help&lt;/span&gt;, refer to the Online Reference Manual.

db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a test database and connect to the database using the below commands
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; create database ibm
DB20000I  The CREATE DATABASE &lt;span class="nb"&gt;command &lt;/span&gt;completed successfully.

db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; connect to ibm

   Database Connection Information

 Database server        &lt;span class="o"&gt;=&lt;/span&gt; DB2/LINUXX8664 11.5.8.0
 SQL authorization ID   &lt;span class="o"&gt;=&lt;/span&gt; DB2INST1
 Local database &lt;span class="nb"&gt;alias&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; IBM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Enable automatic start of the database instance after reboot.
Execute the below commands as the &lt;code&gt;db2inst1&lt;/code&gt; user
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;db2greg &lt;span class="nt"&gt;-getinstrec&lt;/span&gt; &lt;span class="nv"&gt;instancename&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'db2inst1'&lt;/span&gt;
Retrieved record:
   Service      &lt;span class="o"&gt;=&lt;/span&gt; |DB2|
   Version      &lt;span class="o"&gt;=&lt;/span&gt; |11.5.8.0|
   InstanceName &lt;span class="o"&gt;=&lt;/span&gt; |db2inst1|
   InstancePath &lt;span class="o"&gt;=&lt;/span&gt; |/home/db2inst1/sqllib|
   Usage        &lt;span class="o"&gt;=&lt;/span&gt; |N/A|
   StartAtBoot  &lt;span class="o"&gt;=&lt;/span&gt; 1
   Maintenance  &lt;span class="o"&gt;=&lt;/span&gt; 0
   InstallPath  &lt;span class="o"&gt;=&lt;/span&gt; |/opt/ibm/db2/V11.5|
   RemoteProf   &lt;span class="o"&gt;=&lt;/span&gt; |N/A|
   Comment      &lt;span class="o"&gt;=&lt;/span&gt; |N/A|

&lt;span class="nv"&gt;$ &lt;/span&gt;db2iauto &lt;span class="nt"&gt;-on&lt;/span&gt; db2inst1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
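To confirm the flag without reading the whole record, the `StartAtBoot` value can be picked out of the `db2greg` output. A small sketch; `1` means the instance will auto-start at boot.

```shell
# Sketch: print the StartAtBoot flag from db2greg-style output on stdin.
start_at_boot() {
  awk '/StartAtBoot/{print $NF}'
}

# In practice: db2greg -getinstrec instancename='db2inst1' | start_at_boot
```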



&lt;ul&gt;
&lt;li&gt;Reboot the EC2 instance and verify the Db2 processes
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;su - db2inst1

&lt;span class="nv"&gt;$ &lt;/span&gt;ps &lt;span class="nt"&gt;-ef&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; db2
root         469       1  0 05:48 ?        00:00:00 /opt/ibm/db2/V11.5/bin/db2fmcd
root        1127    1096  0 05:48 pts/0    00:00:00 &lt;span class="nb"&gt;sudo &lt;/span&gt;su - db2inst1
root        1128    1127  0 05:48 pts/0    00:00:00 su - db2inst1
db2inst1    1129    1128  0 05:48 pts/0    00:00:00 &lt;span class="nt"&gt;-sh&lt;/span&gt;
root        1626       1  0 05:49 ?        00:00:00 db2wdog 0 &lt;span class="o"&gt;[&lt;/span&gt;db2inst1]
db2inst1    1628    1626  1 05:49 ?        00:00:00 db2sysc 0
root        1635    1626  0 05:49 ?        00:00:00 db2ckpwd 0
root        1636    1626  0 05:49 ?        00:00:00 db2ckpwd 0
root        1637    1626  0 05:49 ?        00:00:00 db2ckpwd 0
db2inst1    1639    1626  0 05:49 ?        00:00:00 db2vend &lt;span class="o"&gt;(&lt;/span&gt;PD Vendor Process - 1&lt;span class="o"&gt;)&lt;/span&gt; 0
db2inst1    1647    1626  1 05:49 ?        00:00:00 db2acd 0 ,0,0,0,1,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000000,00000000,00000000,00000000,00000000,0000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000c3834000,0000000000000000,0000000000000000,1,0,0,,,,,a89f30,14,1e014,2,0,1,0000000000041fc0,0x240000000,0x240000000,1600000,2,2,13
db2inst1    1673    1129  0 05:49 pts/0    00:00:00 ps &lt;span class="nt"&gt;-ef&lt;/span&gt;
db2inst1    1674    1129  0 05:49 pts/0    00:00:00 &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; db2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;systemctl status db2fmcd
● db2fmcd.service - DB2 v11.5.8.0
     Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/etc/systemd/system/db2fmcd.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; vendor preset: enabled&lt;span class="o"&gt;)&lt;/span&gt;
     Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Sun 2023-04-30 05:48:42 UTC&lt;span class="p"&gt;;&lt;/span&gt; 11min ago
   Main PID: 469 &lt;span class="o"&gt;(&lt;/span&gt;db2fmcd&lt;span class="o"&gt;)&lt;/span&gt;
      Tasks: 58 &lt;span class="o"&gt;(&lt;/span&gt;limit: 4686&lt;span class="o"&gt;)&lt;/span&gt;
     Memory: 1.0G
     CGroup: /system.slice/db2fmcd.service
             ├─ 469 /opt/ibm/db2/V11.5/bin/db2fmcd
             ├─1626 db2wdog 0 &lt;span class="o"&gt;[&lt;/span&gt;db2inst1]
             ├─1628 db2sysc 0
             ├─1635 db2ckpwd 0
             ├─1636 db2ckpwd 0
             ├─1637 db2ckpwd 0
             ├─1639 db2vend &lt;span class="o"&gt;(&lt;/span&gt;PD Vendor Process - 1&lt;span class="o"&gt;)&lt;/span&gt; 0
             ├─1647 db2acd 0 ,0,0,0,1,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000000,00000000,00000000,00000000,00000000,0000,00000000,00000000,00000000,00000000,00000000&amp;gt;
             └─2450 db2fmp &lt;span class="o"&gt;(&lt;/span&gt; ,1,0,0,0,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000000,00000000,00000000,00000000,00000000,0000,00000000,00000000,00000000,00000000,00000000&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Connect to our test database and verify that everything is OK
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;db2
&lt;span class="o"&gt;(&lt;/span&gt;c&lt;span class="o"&gt;)&lt;/span&gt; Copyright IBM Corporation 1993,2007
Command Line Processor &lt;span class="k"&gt;for &lt;/span&gt;DB2 Client 11.5.8.0

You can issue database manager commands and SQL statements from the &lt;span class="nb"&gt;command
&lt;/span&gt;prompt. For example:
    db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; connect to sample
    db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bind &lt;/span&gt;sample.bnd

For general &lt;span class="nb"&gt;help&lt;/span&gt;, &lt;span class="nb"&gt;type&lt;/span&gt;: ?.
For &lt;span class="nb"&gt;command help&lt;/span&gt;, &lt;span class="nb"&gt;type&lt;/span&gt;: ? &lt;span class="nb"&gt;command&lt;/span&gt;, where &lt;span class="nb"&gt;command &lt;/span&gt;can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help &lt;/span&gt;on the CATALOG DATABASE &lt;span class="nb"&gt;command&lt;/span&gt;
 ? CATALOG          &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help &lt;/span&gt;on all of the CATALOG commands.

To &lt;span class="nb"&gt;exit &lt;/span&gt;db2 interactive mode, &lt;span class="nb"&gt;type &lt;/span&gt;QUIT at the &lt;span class="nb"&gt;command &lt;/span&gt;prompt. Outside
interactive mode, all commands must be prefixed with &lt;span class="s1"&gt;'db2'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
To list the current &lt;span class="nb"&gt;command &lt;/span&gt;option settings, &lt;span class="nb"&gt;type &lt;/span&gt;LIST COMMAND OPTIONS.

For more detailed &lt;span class="nb"&gt;help&lt;/span&gt;, refer to the Online Reference Manual.

db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; connect to ibm

   Database Connection Information

 Database server        &lt;span class="o"&gt;=&lt;/span&gt; DB2/LINUXX8664 11.5.8.0
 SQL authorization ID   &lt;span class="o"&gt;=&lt;/span&gt; DB2INST1
 Local database &lt;span class="nb"&gt;alias&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; IBM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.dbi-services.com/blog/setting-up-ibm-db2-on-linux-root-installation/"&gt;https://www.dbi-services.com/blog/setting-up-ibm-db2-on-linux-root-installation/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ibm.com/docs/en/db2/11.5"&gt;https://www.ibm.com/docs/en/db2/11.5&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ibm</category>
      <category>cloudnative</category>
      <category>database</category>
      <category>aws</category>
    </item>
    <item>
      <title>Prometheus - Quick Setup on Amazon EC2 (Ubuntu) - Part 2</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Wed, 03 May 2023 13:32:39 +0000</pubDate>
      <link>https://dev.to/iamunnip/prometheus-quick-setup-on-amazon-ec2-ubuntu-part-2-10kk</link>
      <guid>https://dev.to/iamunnip/prometheus-quick-setup-on-amazon-ec2-ubuntu-part-2-10kk</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this article, we will look at how we can quickly set up Prometheus and Node Exporter on an Ubuntu instance on Amazon EC2&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Set up an EC2 instance of type t2.micro&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ubuntu 22.04 LTS as AMI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;10 GB of hard disk space&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open ports 22 for SSH, 9090 for Prometheus and 9100 for Node Exporter&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
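&lt;p&gt;As a sketch, the three ports above could be opened with the AWS CLI (assuming it is configured; the security group ID here is hypothetical and should be replaced with your own):&lt;/p&gt;

```shell
# Hypothetical security group ID; replace with the one attached to your instance
SG_ID="sg-0123456789abcdef0"

# Open SSH (22), Prometheus (9090) and Node Exporter (9100) to all sources
for PORT in 22 9090 9100; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp \
    --port "$PORT" \
    --cidr 0.0.0.0/0
done
```

&lt;p&gt;In production you would normally restrict the CIDR range instead of using 0.0.0.0/0.&lt;/p&gt;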

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prometheus
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your EC2 instance
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &amp;lt;key_name&amp;gt;.pem ubuntu@&amp;lt;ip_address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;lsb_release &lt;span class="nt"&gt;-a&lt;/span&gt;
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.2 LTS
Release:        22.04
Codename:       jammy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Download and extract the latest release from the Prometheus GitHub releases page
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget https://github.com/prometheus/prometheus/releases/download/v2.43.0/prometheus-2.43.0.linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
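&lt;p&gt;Optionally, verify the archive before extracting it. Recent Prometheus releases publish a &lt;code&gt;sha256sums.txt&lt;/code&gt; file alongside the binaries (assumed here to exist for v2.43.0):&lt;/p&gt;

```shell
# Fetch the checksum list published with the same release
wget https://github.com/prometheus/prometheus/releases/download/v2.43.0/sha256sums.txt

# Check only the archive we downloaded; prints "OK" on success
grep prometheus-2.43.0.linux-amd64.tar.gz sha256sums.txt | sha256sum -c -
```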





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzvf&lt;/span&gt; prometheus-2.43.0.linux-amd64.tar.gz
prometheus-2.43.0.linux-amd64/
prometheus-2.43.0.linux-amd64/LICENSE
prometheus-2.43.0.linux-amd64/consoles/
prometheus-2.43.0.linux-amd64/consoles/prometheus.html
prometheus-2.43.0.linux-amd64/consoles/node-disk.html
prometheus-2.43.0.linux-amd64/consoles/node-overview.html
prometheus-2.43.0.linux-amd64/consoles/prometheus-overview.html
prometheus-2.43.0.linux-amd64/consoles/index.html.example
prometheus-2.43.0.linux-amd64/consoles/node-cpu.html
prometheus-2.43.0.linux-amd64/consoles/node.html
prometheus-2.43.0.linux-amd64/prometheus
prometheus-2.43.0.linux-amd64/promtool
prometheus-2.43.0.linux-amd64/NOTICE
prometheus-2.43.0.linux-amd64/console_libraries/
prometheus-2.43.0.linux-amd64/console_libraries/prom.lib
prometheus-2.43.0.linux-amd64/console_libraries/menu.lib
prometheus-2.43.0.linux-amd64/prometheus.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Change into the extracted directory and run the Prometheus binary
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;prometheus-2.43.0.linux-amd64/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;./prometheus
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.254Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:520 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"No time or size retention was set so using the default time retention"&lt;/span&gt; &lt;span class="nv"&gt;duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;15d
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.255Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:564 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Starting Prometheus Server"&lt;/span&gt; &lt;span class="nv"&gt;mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;server &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"(version=2.43.0, branch=HEAD, revision=edfc3bcd025dd6fe296c167a14a216cab1e552ee)"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.255Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:569 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;build_context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"(go=go1.19.7, platform=linux/amd64, user=root@8a0ee342e522, date=20230321-12:56:07, tags=netgo,builtinassets)"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.255Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:570 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;host_details&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"(Linux 5.15.0-1031-aws #35-Ubuntu SMP Fri Feb 10 02:07:18 UTC 2023 x86_64 ip-172-31-84-196 (none))"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.255Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:571 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;fd_limits&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"(soft=1048576, hard=1048576)"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.255Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:572 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;vm_limits&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"(soft=unlimited, hard=unlimited)"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.257Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;web.go:561 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;web &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Start listening for connections"&lt;/span&gt; &lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.0.0.0:9090
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.258Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:1005 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Starting TSDB ..."&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.261Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;head.go:587 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tsdb &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Replaying on-disk memory mappable chunks if any"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.261Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;head.go:658 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tsdb &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"On-disk memory mappable chunks replay completed"&lt;/span&gt; &lt;span class="nv"&gt;duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3.434µs
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.261Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;head.go:664 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tsdb &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Replaying WAL, this may take a while"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.264Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tls_config.go:232 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;web &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Listening on"&lt;/span&gt; &lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="o"&gt;=[&lt;/span&gt;::]:9090
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.264Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tls_config.go:235 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;web &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"TLS is disabled."&lt;/span&gt; &lt;span class="nv"&gt;http2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false &lt;/span&gt;&lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="o"&gt;=[&lt;/span&gt;::]:9090
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.265Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;head.go:735 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tsdb &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"WAL segment loaded"&lt;/span&gt; &lt;span class="nv"&gt;segment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;maxSegment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.265Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;head.go:772 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tsdb &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"WAL replay completed"&lt;/span&gt; &lt;span class="nv"&gt;checkpoint_replay_duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;36.679µs &lt;span class="nv"&gt;wal_replay_duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3.153164ms &lt;span class="nv"&gt;wbl_replay_duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;721ns &lt;span class="nv"&gt;total_replay_duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3.306296ms
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.267Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:1026 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;fs_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;EXT4_SUPER_MAGIC
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.267Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:1029 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"TSDB started"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.267Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:1209 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Loading configuration file"&lt;/span&gt; &lt;span class="nv"&gt;filename&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prometheus.yml
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.274Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:1246 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Completed loading of configuration file"&lt;/span&gt; &lt;span class="nv"&gt;filename&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prometheus.yml &lt;span class="nv"&gt;totalDuration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;6.593992ms &lt;span class="nv"&gt;db_storage&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.674µs &lt;span class="nv"&gt;remote_storage&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2.431µs &lt;span class="nv"&gt;web_handler&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;939ns &lt;span class="nv"&gt;query_engine&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.285µs &lt;span class="nv"&gt;scrape&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;6.143405ms &lt;span class="nv"&gt;scrape_sd&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;30.548µs &lt;span class="nv"&gt;notify&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;32.577µs &lt;span class="nv"&gt;notify_sd&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;12.129µs &lt;span class="nv"&gt;rules&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.905µs &lt;span class="nv"&gt;tracing&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;7.357µs
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.274Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.go:990 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Server is ready to receive web requests."&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T14:52:07.274Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;manager.go:974 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"rule manager"&lt;/span&gt; &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Starting rule manager..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Open &lt;a href="http://public_ip:9090"&gt;http://public_ip:9090&lt;/a&gt; in the browser to see the Prometheus expression browser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O1KuZNee--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2k6gi20l9sh2zo09jviv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O1KuZNee--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2k6gi20l9sh2zo09jviv.png" alt="prometheus-01" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;
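&lt;p&gt;Besides the web UI, Prometheus exposes plain-text health endpoints that are handy for scripting. A quick check from the instance itself (assuming the server is still running on port 9090) might look like:&lt;/p&gt;

```shell
# Liveness: returns HTTP 200 while the process is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9090/-/healthy

# Readiness: returns HTTP 200 once the server is ready to serve traffic
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9090/-/ready
```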

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Status&lt;/strong&gt; → &lt;strong&gt;Targets&lt;/strong&gt; to view all targets configured in the configuration file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VJPfg17x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iftzww35ruvnz2a0hkeb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VJPfg17x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iftzww35ruvnz2a0hkeb.png" alt="prometheus-02" width="800" height="148"&gt;&lt;/a&gt;    &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Status&lt;/strong&gt; → &lt;strong&gt;Configuration&lt;/strong&gt; to view the configuration file contents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LmDp0SwS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k0jf8xpv9tggnzmvpddc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LmDp0SwS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k0jf8xpv9tggnzmvpddc.png" alt="prometheus-03" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Status&lt;/strong&gt; → &lt;strong&gt;TSDB Status&lt;/strong&gt; to view time-series database details&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q7ZXHiyq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5x350frxtswwuiwvwkgr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q7ZXHiyq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5x350frxtswwuiwvwkgr.png" alt="prometheus-04" width="800" height="370"&gt;&lt;/a&gt;    &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open &lt;a href="http://public_ip:9090/metrics"&gt;http://public_ip:9090/metrics&lt;/a&gt; in the browser to view metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x8L_18wp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nl3fxj0v4u2aolva0gd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x8L_18wp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nl3fxj0v4u2aolva0gd3.png" alt="prometheus-05" width="800" height="388"&gt;&lt;/a&gt;    &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Press &lt;strong&gt;Ctrl+C&lt;/strong&gt; to stop the Prometheus server&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Node Exporter
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A popular exporter that collects system-level metrics from Linux and Unix-based systems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provides a wide range of metrics that can be used to monitor system health&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exposed metrics include CPU usage, memory usage, disk usage, network statistics, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download and extract the latest release from the Node Exporter GitHub releases page&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget https://github.com/prometheus/node_exporter/releases/download/v1.5.0/node_exporter-1.5.0.linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzvf&lt;/span&gt; node_exporter-1.5.0.linux-amd64.tar.gz
node_exporter-1.5.0.linux-amd64/
node_exporter-1.5.0.linux-amd64/LICENSE
node_exporter-1.5.0.linux-amd64/NOTICE
node_exporter-1.5.0.linux-amd64/node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Change into the extracted directory and run the Node Exporter binary
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;node_exporter-1.5.0.linux-amd64/

&lt;span class="nv"&gt;$ &lt;/span&gt;./node_exporter
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.226Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:180 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Starting node_exporter"&lt;/span&gt; &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.226Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:181 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Build context"&lt;/span&gt; &lt;span class="nv"&gt;build_context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.227Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;filesystem_common.go:111 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;filesystem &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Parsed flag --collector.filesystem.mount-points-exclude"&lt;/span&gt; &lt;span class="nv"&gt;flag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;^/&lt;span class="o"&gt;(&lt;/span&gt;dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+&lt;span class="o"&gt;)(&lt;/span&gt;&lt;span class="nv"&gt;$|&lt;/span&gt;/&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.227Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;filesystem_common.go:113 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;filesystem &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Parsed flag --collector.filesystem.fs-types-exclude"&lt;/span&gt; &lt;span class="nv"&gt;flag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;^&lt;span class="o"&gt;(&lt;/span&gt;autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.227Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;diskstats_common.go:111 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;diskstats &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Parsed flag --collector.diskstats.device-exclude"&lt;/span&gt; &lt;span class="nv"&gt;flag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;^&lt;span class="o"&gt;(&lt;/span&gt;ram|loop|fd|&lt;span class="o"&gt;(&lt;/span&gt;h|s|v|xv&lt;span class="o"&gt;)&lt;/span&gt;d[a-z]|nvme&lt;span class="se"&gt;\d&lt;/span&gt;+n&lt;span class="se"&gt;\d&lt;/span&gt;+p&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="se"&gt;\d&lt;/span&gt;+&lt;span class="err"&gt;$&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.228Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:110 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Enabled collectors"&lt;/span&gt;
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.228Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arp
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.228Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bcache
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.228Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bonding
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;btrfs
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;conntrack
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cpu
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cpufreq
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;diskstats
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dmi
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;edac
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;entropy
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;fibrechannel
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;filefd
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;filesystem
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;hwmon
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;infiniband
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ipvs
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;loadavg
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.229Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mdadm
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;meminfo
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;netclass
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;netdev
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;netstat
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nfs
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nfsd
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nvme
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;os
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;powersupplyclass
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;pressure
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rapl
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;schedstat
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;selinux
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sockstat
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;softnet
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.230Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;stat
&lt;/span&gt;&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tapestats
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;textfile
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;thermal_zone
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;time
&lt;/span&gt;&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;timex
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;udp_queues
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;uname
&lt;/span&gt;&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vmstat
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;xfs
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter.go:117 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;zfs
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.231Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tls_config.go:232 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Listening on"&lt;/span&gt; &lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="o"&gt;=[&lt;/span&gt;::]:9100
&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2023-04-28T15:27:11.232Z &lt;span class="nb"&gt;caller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tls_config.go:235 &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"TLS is disabled."&lt;/span&gt; &lt;span class="nv"&gt;http2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false &lt;/span&gt;&lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="o"&gt;=[&lt;/span&gt;::]:9100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Open &lt;a href="http://public_ip:9100/metrics"&gt;http://public_ip:9100/metrics&lt;/a&gt; to view all the metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T-rdm2W4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b48ryzu75drj863xa94a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T-rdm2W4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b48ryzu75drj863xa94a.png" alt="prometheus-06" width="800" height="388"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Press &lt;strong&gt;Ctrl+C&lt;/strong&gt; to stop the running Node Exporter&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://prometheus.io/docs/introduction/first_steps/"&gt;https://prometheus.io/docs/introduction/first_steps/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://prometheus.io/docs/guides/node-exporter/"&gt;https://prometheus.io/docs/guides/node-exporter/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>cloudnative</category>
      <category>observability</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Prometheus - Architecture - Part 1</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Wed, 03 May 2023 12:48:47 +0000</pubDate>
      <link>https://dev.to/iamunnip/prometheus-architecture-part-1-4i2</link>
      <guid>https://dev.to/iamunnip/prometheus-architecture-part-1-4i2</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In the first part of this series, we will look at an introduction to Prometheus, its features, and its components&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Prometheus is an open-source monitoring and alerting tool&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Originally built at SoundCloud in 2012&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Donated to the Cloud Native Computing Foundation (CNCF) in 2016&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second project hosted by the CNCF, after Kubernetes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now a CNCF graduated project&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Collects and stores its metrics as time-series data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each metric is stored with the timestamp at which it was recorded&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optional key-value pairs called labels are also stored&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Multi-dimensional data model — allows you to define and store time-series data with multiple dimensions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;PromQL — a query language to perform complex queries and calculations on metrics data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Time-series collection — can collect metrics data from a wide range of sources, including servers, applications, and databases&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data retention — can set retention policies on the metrics data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service discovery — can automatically discover and monitor new services in a dynamic infrastructure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exporters — different types of agents available for collecting metrics from different types of sources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grafana integration — can easily be integrated with Grafana for visualization and custom dashboards&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Alerting — can alert stakeholders when an issue occurs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
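&lt;p&gt;As an illustration of PromQL, the following query computes the average non-idle CPU fraction per instance over five minutes (it assumes Node Exporter's &lt;code&gt;node_cpu_seconds_total&lt;/code&gt; metric is being scraped):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;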

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O0kR8XCi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1mqr9pz6xrdis78nknd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O0kR8XCi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1mqr9pz6xrdis78nknd.png" alt="prometheus architecture" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prometheus Server
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The main component of Prometheus&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Collects, stores and serves metrics data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follows a pull-based model for collecting metric data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Periodically queries the targets configured in the scrape configuration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uses HTTP or HTTPS protocol&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stores the data in a time-series database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Time-series database allows Prometheus to do operations like querying, aggregation and visualization&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
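&lt;p&gt;As a sketch of this pull model, a minimal scrape configuration could look like the following (the job name and target address are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# prometheus.yml (fragment) - hypothetical target
scrape_configs:
  - job_name: "node"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:9100"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;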

&lt;h2&gt;
  
  
  Exporters
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Agents that collect and expose metrics data from applications or third-party systems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Written in a wide variety of programming languages&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Designed to be lightweight and easy to deploy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scrape metrics data from a target and expose it on an HTTP endpoint&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Metrics data is returned in the Prometheus text-based exposition format&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Official exporters, such as the Node Exporter and the MySQL Exporter, are maintained by the Prometheus GitHub organization&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unofficial exporters, such as the Apache Exporter and the PostgreSQL Exporter, are contributed and maintained externally&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Alertmanager
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Manages alerts generated by Prometheus&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Receives alerts from Prometheus and groups them based on criteria like severity&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Based on routing rules, it sends a notification to various receivers like Email, Slack, Pagerduty etc&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Applies a set of deduplication rules to ensure that alerts are not sent multiple times for the same issue&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provides a web interface that allows users to view and manage alerts&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
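&lt;p&gt;As a sketch of alert routing, a minimal Alertmanager configuration might look like this (the receiver name and email address are placeholders, and a real setup needs SMTP details):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# alertmanager.yml (fragment) - placeholder receiver
route:
  receiver: "team-email"
  group_by: ["alertname", "severity"]
receivers:
  - name: "team-email"
    email_configs:
      - to: "oncall@example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;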

&lt;h2&gt;
  
  
  Pushgateway
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Prometheus follows a pull-based model for collecting metrics data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sometimes we need to get metrics into Prometheus from jobs that cannot be scraped&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Pushgateway lets such jobs push their metrics to it, and Prometheus then scrapes the gateway&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Best suited for short-lived jobs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provides an HTTP API for pushing metrics from short-lived jobs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
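&lt;p&gt;For example, a short-lived batch job could push a metric like this (the metric name, value, and address are illustrative, and a Pushgateway must be running at that address):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Push a single metric for job "backup" to an assumed local Pushgateway
echo "backup_duration_seconds 42" | curl --data-binary @- http://localhost:9091/metrics/job/backup || echo "push failed (is a Pushgateway running?)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;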

&lt;h2&gt;
  
  
  Service Discovery
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Process of identifying and monitoring systems automatically&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helps Prometheus to keep track of different services which are running&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports a variety of service discovery options for discovering scrape targets like Kubernetes, Consul, Docker etc&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the service discovery system you need is currently not supported, file-based service discovery can be used&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enables you to list scrape targets in a JSON file along with metadata&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Metadata is a piece of useful information about targets, like the name of the service, description etc&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HTTP service discovery helps to discover targets over an HTTP endpoint, as an alternative to file-based service discovery&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
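&lt;p&gt;For example, a file-based service discovery target file could look like this (addresses and labels are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;[
  {
    "targets": ["10.0.0.5:9100", "10.0.0.6:9100"],
    "labels": {
      "env": "prod",
      "job": "node"
    }
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Prometheus would reference this file from a &lt;code&gt;file_sd_configs&lt;/code&gt; section of its scrape configuration and pick up changes to it automatically.&lt;/p&gt;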

&lt;h2&gt;
  
  
  Client Libraries
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Set of libraries that allows developers to instrument their applications to expose metrics to Prometheus&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The instrumented application exposes an HTTP endpoint, and Prometheus can scrape metrics from it and store them in its time-series database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the metrics are stored, Prometheus can generate graphs, reports and alerts based on the data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helps developers to get insights into their application performance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Official client libraries are available in many programming languages like Go, Python, Java etc&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unofficial client libraries are also available for C, C++, Node.js etc&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://prometheus.io/docs/introduction/overview/"&gt;https://prometheus.io/docs/introduction/overview/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>cloudnative</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Upgrading a Kubernetes Cluster using kubeadm</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Wed, 03 May 2023 12:21:23 +0000</pubDate>
      <link>https://dev.to/iamunnip/upgrading-a-kubernetes-cluster-using-kubeadm-5ep7</link>
      <guid>https://dev.to/iamunnip/upgrading-a-kubernetes-cluster-using-kubeadm-5ep7</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this article, we will look at how we can upgrade a Kubernetes cluster using kubeadm&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We need a working Kubernetes cluster created using kubeadm&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you haven’t created the cluster, please check my previous article below&lt;br&gt;&lt;br&gt;
It will guide you through creating a Kubernetes v1.27.0 cluster&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/iamunnip" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dJLm-Y-h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://res.cloudinary.com/practicaldev/image/fetch/s--oibdHEqo--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/218570/c002b3e9-63df-4684-ab35-b23a2a98a59c.jpg" alt="iamunnip"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/iamunnip/building-a-kubernetes-cluster-using-kubeadm-goh" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Building a Kubernetes Cluster using kubeadm&lt;/h2&gt;
      &lt;h3&gt;Unni P ・ May 3 ・ 4 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cloudnative&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#opensource&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Control Plane
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Verify the version of our cluster by executing the below command on the &lt;strong&gt;[control-plane]&lt;/strong&gt; instance
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME            STATUS   ROLES           AGE    VERSION
control-plane   Ready    control-plane   7m5s   v1.27.0
node-1          Ready    &amp;lt;none&amp;gt;          91s    v1.27.0
node-2          Ready    &amp;lt;none&amp;gt;          17s    v1.27.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Update the package index and find the latest kubeadm patch available in the repository
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-cache madison kubeadm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Unhold the kubeadm package for upgrading
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-cache unhold kubeadm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Upgrade the kubeadm package to version v1.27.1 and hold the package from automatic upgrading
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nv"&gt;kubeadm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.27.1-00

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubeadm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check the upgraded kubeadm version and verify the upgrade plan
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm version

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm upgrade plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once our plan is verified, we can upgrade the &lt;strong&gt;[control-plane]&lt;/strong&gt; by executing the below command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm upgrade apply v1.27.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Prepare the &lt;strong&gt;[control-plane]&lt;/strong&gt; node for maintenance by marking it unschedulable and evicting the workloads
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl drain control-plane &lt;span class="nt"&gt;--ignore-daemonsets&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Unhold the kubelet and kubectl packages for an upgrade
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark unhold kubelet kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Upgrade kubelet and kubectl packages to version v1.27.1 and hold the packages from automatic upgrading
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;&lt;span class="nv"&gt;kubelet&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.27.1-00 &lt;span class="nv"&gt;kubectl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.27.1-00

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubelet kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Restart the kubelet on &lt;strong&gt;[control-plane]&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Uncordon the &lt;strong&gt;[control-plane]&lt;/strong&gt; to mark it as schedulable
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl uncordon control-plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the nodes; we can now see that &lt;strong&gt;[control-plane]&lt;/strong&gt; is upgraded to v1.27.1
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME            STATUS   ROLES           AGE    VERSION
control-plane   Ready    control-plane   118m   v1.27.1
node-1          Ready    &amp;lt;none&amp;gt;          112m   v1.27.0
node-2          Ready    &amp;lt;none&amp;gt;          111m   v1.27.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Nodes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Update the package index on &lt;strong&gt;[node-1]&lt;/strong&gt; and unhold the kubeadm package for upgrading
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark unhold kubeadm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Upgrade the kubeadm package to version v1.27.1 on &lt;strong&gt;[node-1]&lt;/strong&gt; and hold the package from automatic upgrading
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;&lt;span class="nv"&gt;kubeadm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.27.1-00

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubeadm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Upgrade the cluster configuration on &lt;strong&gt;[node-1]&lt;/strong&gt; using the below command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm upgrade node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Prepare &lt;strong&gt;[node-1]&lt;/strong&gt; for maintenance by marking it unschedulable and evicting the workloads&lt;br&gt;
Execute the below drain command on the &lt;strong&gt;[control-plane]&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl drain node-1 &lt;span class="nt"&gt;--ignore-daemonsets&lt;/span&gt; &lt;span class="nt"&gt;--force&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Unhold the kubelet and kubectl packages for upgrade
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark unhold kubelet kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Upgrade kubelet and kubectl packages to version v1.27.1 and hold the packages from automatic upgrading
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;&lt;span class="nv"&gt;kubelet&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.27.1-00 &lt;span class="nv"&gt;kubectl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.27.1-00

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubelet kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Restart the kubelet on &lt;strong&gt;[node-1]&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Uncordon &lt;strong&gt;[node-1]&lt;/strong&gt; to mark it as schedulable&lt;br&gt;
Execute the below uncordon command on the &lt;strong&gt;[control-plane]&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl uncordon node-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Repeat the above steps for &lt;strong&gt;[node-2]&lt;/strong&gt; as well&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Our cluster is now fully upgraded to v1.27.1&lt;br&gt;&lt;br&gt;
Verify this by executing the command below on &lt;strong&gt;[control-plane]&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME            STATUS   ROLES           AGE    VERSION
control-plane   Ready    control-plane   136m   v1.27.1
node-1          Ready    &amp;lt;none&amp;gt;          131m   v1.27.1
node-2          Ready    &amp;lt;none&amp;gt;          129m   v1.27.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
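&lt;p&gt;As a final sanity check, the version pins can be confirmed on each node. This is a sketch, not part of the original walkthrough; it assumes apt-based hosts.&lt;/p&gt;

```shell
# Sketch of a post-upgrade check on each node (apt-based hosts assumed):
# the held packages should include kubelet and kubectl
apt-mark showhold
# the running kubelet should now report the upgraded version:
#   kubelet --version
```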



&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/"&gt;https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/"&gt;https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Building a Kubernetes Cluster using kubeadm</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Wed, 03 May 2023 12:09:45 +0000</pubDate>
      <link>https://dev.to/iamunnip/building-a-kubernetes-cluster-using-kubeadm-goh</link>
      <guid>https://dev.to/iamunnip/building-a-kubernetes-cluster-using-kubeadm-goh</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this article, we will look at how to create a three-node Kubernetes cluster using kubeadm&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;kubeadm is a tool used to create Kubernetes clusters&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It automates the creation of a cluster by bootstrapping the control plane, joining the nodes, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It follows the Kubernetes release cycle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is an open-source tool maintained by the Kubernetes community&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create three Ubuntu 20.04 LTS instances &lt;strong&gt;[control-plane, node-1, node-2]&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each instance must have at least 2 CPUs and 2 GB of RAM&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Networking must be enabled between instances&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Required ports must be allowed between instances&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Swap must be disabled on instances&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
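&lt;p&gt;Before proceeding, these prerequisites can be verified on each instance with a few standard commands. A quick sketch:&lt;/p&gt;

```shell
# Verify the instance meets the minimum specification
nproc            # CPU count; should print 2 or more
free -h          # total memory; should show at least 2 GB
# Verify swap is disabled; no output means no active swap
swapon --show
```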

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Set up unique hostnames on all instances &lt;strong&gt;[control-plane, node-1, node-2]&lt;/strong&gt;
Once the hostnames are set, log out of the current session and log back in for the changes to take effect
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;hostnamectl set-hostname control-plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;hostnamectl set-hostname node-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;hostnamectl set-hostname node-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Update the hosts file on all instances &lt;strong&gt;[control-plane, node-1, node-2]&lt;/strong&gt; to enable communication via hostnames
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/hosts

172.31.12.122 control-plane
172.31.7.52  node-1
172.31.10.184 node-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Disable swap on all instances &lt;strong&gt;[control-plane, node-1, node-2]&lt;/strong&gt;; if a swap entry is present in /etc/fstab, comment it out
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/fstab
  &lt;span class="c"&gt;# comment out swap entry&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Set up containerd as our container runtime on all instances &lt;strong&gt;[control-plane, node-1, node-2]&lt;/strong&gt;. To do that, we first need to load some kernel modules and modify system settings
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe overlay

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe br_netfilter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;--system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
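&lt;p&gt;The applied values can be double-checked directly from /proc. A minimal sketch:&lt;/p&gt;

```shell
# After 'sysctl --system', IP forwarding should be enabled (prints 1)
cat /proc/sys/net/ipv4/ip_forward
# The bridge settings can be read the same way once br_netfilter is loaded:
#   cat /proc/sys/net/bridge/bridge-nf-call-iptables
```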



&lt;ul&gt;
&lt;li&gt;Once the kernel modules are loaded and the system settings are applied, we can install containerd on all instances &lt;strong&gt;[control-plane, node-1, node-2]&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once installed, generate a default configuration file for containerd on all instances &lt;strong&gt;[control-plane, node-1, node-2]&lt;/strong&gt; and restart the service
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/containerd

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;containerd config default | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/containerd/config.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install some prerequisite packages on all instances &lt;strong&gt;[control-plane, node-1, node-2]&lt;/strong&gt; for configuring the Kubernetes apt repository
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-transport-https ca-certificates curl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Download the Google Cloud public signing key and configure Kubernetes apt repository on all instances &lt;strong&gt;[control-plane, node-1, node-2]&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; /etc/apt/keyrings

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;curl &lt;span class="nt"&gt;-fsSLo&lt;/span&gt; /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/kubernetes.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
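&lt;p&gt;Note that the apt.kubernetes.io repository used above has since been deprecated and frozen. On newer setups, the community-hosted pkgs.k8s.io repositories are used instead; a sketch for v1.27:&lt;/p&gt;

```shell
# Download the signing key for the community-owned package repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.27/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the v1.27 repository; note that each minor version has its own repo
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.27/deb/ /" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
```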



&lt;ul&gt;
&lt;li&gt;Install kubeadm, kubelet and kubectl tools and hold their version on all instances &lt;strong&gt;[control-plane, node-1, node-2]&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nv"&gt;kubeadm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.27.0-00 &lt;span class="nv"&gt;kubelet&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.27.0-00 &lt;span class="nv"&gt;kubectl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.27.0-00

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubeadm kubelet kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Initialize the cluster by executing the below command on &lt;strong&gt;[control-plane]&lt;/strong&gt; instance
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt; 192.168.0.0/16 &lt;span class="nt"&gt;--kubernetes-version&lt;/span&gt; 1.27.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once the installation is completed, set up our access to the cluster on &lt;strong&gt;[control-plane]&lt;/strong&gt; instance
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify our cluster by listing the nodes on the &lt;strong&gt;[control-plane]&lt;/strong&gt; instance
The nodes are in the NotReady state because we haven’t set up networking yet
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME            STATUS     ROLES           AGE   VERSION
control-plane   NotReady   control-plane   36s   v1.27.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install the Calico network addon on the &lt;strong&gt;[control-plane]&lt;/strong&gt; instance and verify the status
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
control-plane   Ready    control-plane   2m51s   v1.27.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once networking is enabled, join our worker nodes to the cluster
Get the join command from the &lt;strong&gt;[control-plane]&lt;/strong&gt; instance
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubeadm token create &lt;span class="nt"&gt;--print-join-command&lt;/span&gt;
kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;172.31.12.122:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; 6t3r2c.5zzsgtekwwltofwt &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:2f1c4115125ec62af8ae6ab2648277ace3a625ebc0281e85ca8145e0e9077ee4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once the join command is copied from the &lt;strong&gt;[control-plane]&lt;/strong&gt; instance, execute it on the &lt;strong&gt;[node-1, node-2]&lt;/strong&gt; instances
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;172.31.12.122:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; 6t3r2c.5zzsgtekwwltofwt &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:2f1c4115125ec62af8ae6ab2648277ace3a625ebc0281e85ca8145e0e9077ee4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify our cluster from the &lt;strong&gt;[control-plane]&lt;/strong&gt; instance; all nodes should be in the Ready state
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME            STATUS   ROLES           AGE    VERSION
control-plane   Ready    control-plane   7m5s   v1.27.0
node-1          Ready    &amp;lt;none&amp;gt;          91s    v1.27.0
node-2          Ready    &amp;lt;none&amp;gt;          17s    v1.27.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Deploy an Nginx pod, expose it as a ClusterIP service from the &lt;strong&gt;[control-plane]&lt;/strong&gt; instance and verify its status
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run nginx &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--expose&lt;/span&gt;
service/nginx created
pod/nginx created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pod nginx &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME    READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          30s   192.168.84.129   node-1   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc nginx
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
nginx   ClusterIP   10.111.228.20   &amp;lt;none&amp;gt;        80/TCP    39s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/"&gt;https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://acloudguru.com/course/introduction-to-kubernetes"&gt;https://acloudguru.com/course/introduction-to-kubernetes&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Kubernetes - Merging kubeconfig Files</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Wed, 03 May 2023 09:13:21 +0000</pubDate>
      <link>https://dev.to/iamunnip/kubernetes-merging-kubeconfig-files-25dp</link>
      <guid>https://dev.to/iamunnip/kubernetes-merging-kubeconfig-files-25dp</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this short article, we will look at how to merge a new kubeconfig file into an existing config file&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kubernetes is an open-source container orchestration tool to manage containerized applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An important component of Kubernetes is kubeconfig, a configuration file used to connect to our clusters&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A kubeconfig file contains the cluster name and endpoint, user credentials and the context&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
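&lt;p&gt;For reference, the overall shape of a kubeconfig file looks like the sketch below; all names and values are hypothetical, and you can inspect your own with &lt;code&gt;kubectl config view&lt;/code&gt;:&lt;/p&gt;

```shell
# Skeleton of a kubeconfig file (hypothetical values):
#
# apiVersion: v1
# kind: Config
# clusters:                 # cluster name + API server endpoint
# - name: dev
#   cluster:
#     server: https://172.31.12.122:6443
# users:                    # user credentials (certificates or tokens)
# - name: dev-admin
# contexts:                 # a context binds a user to a cluster
# - name: dev-admin@dev
#   context:
#     cluster: dev
#     user: dev-admin
# current-context: dev-admin@dev
```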

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Make a copy of your existing kubeconfig file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; ~/.kube/config ~/.kube/config.bak
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Merge the old and new kubeconfig files using the below command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;~/.kube/config:~/new-kubeconfig.yml &lt;span class="se"&gt;\&lt;/span&gt;
             kubectl config view &lt;span class="nt"&gt;--flatten&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ~/merged-kubeconfig.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Replace your old config with the new merged config file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; ~/merged-kubeconfig.yml ~/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;List the contexts; you should now see the new cluster
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl config get-contexts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://jacobtomlinson.dev/posts/2019/how-to-merge-kubernetes-kubectl-config-files/"&gt;https://jacobtomlinson.dev/posts/2019/how-to-merge-kubernetes-kubectl-config-files/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>kind - Setting up CNI using Calico - Part 7</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Wed, 03 May 2023 07:39:01 +0000</pubDate>
      <link>https://dev.to/iamunnip/kind-setting-up-cni-using-calico-part-7-31p2</link>
      <guid>https://dev.to/iamunnip/kind-setting-up-cni-using-calico-part-7-31p2</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this article, we will look at how to configure our cluster to use Calico as its CNI&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;kind ships with a simple networking implementation called kindnetd&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is based on standard CNI plugins (ptp, host-local) and simple netlink routes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It also handles IP masquerade&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We can disable the default CNI in kind and use Calico instead&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a cluster using the below configuration file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;kind.yml 
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
name: dev
networking:
  disableDefaultCNI: &lt;span class="nb"&gt;true
&lt;/span&gt;nodes:
- role: control-plane
- role: worker
- role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; kind.yml 
Creating cluster &lt;span class="s2"&gt;"dev"&lt;/span&gt; ...
 ✓ Ensuring node image &lt;span class="o"&gt;(&lt;/span&gt;kindest/node:v1.26.3&lt;span class="o"&gt;)&lt;/span&gt; 🖼
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to &lt;span class="s2"&gt;"kind-dev"&lt;/span&gt;
You can now use your cluster with:

kubectl cluster-info &lt;span class="nt"&gt;--context&lt;/span&gt; kind-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Since no CNI is installed in the cluster yet, the nodes are in the NotReady state and the CoreDNS pods are in the Pending state
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME                STATUS     ROLES           AGE   VERSION
dev-control-plane   NotReady   control-plane   65s   v1.26.3
dev-worker          NotReady   &amp;lt;none&amp;gt;          48s   v1.26.3
dev-worker2         NotReady   &amp;lt;none&amp;gt;          35s   v1.26.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system get pods
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-787d4945fb-c5xfr                    0/1     Pending   0          115s
coredns-787d4945fb-mlphp                    0/1     Pending   0          115s
etcd-dev-control-plane                      1/1     Running   0          2m8s
kube-apiserver-dev-control-plane            1/1     Running   0          2m8s
kube-controller-manager-dev-control-plane   1/1     Running   0          2m7s
kube-proxy-74mrj                            1/1     Running   0          114s
kube-proxy-txj8v                            1/1     Running   0          101s
kube-proxy-xrqnn                            1/1     Running   0          115s
kube-scheduler-dev-control-plane            1/1     Running   0          2m8s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install the Calico CNI using the manifest file from its documentation
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the status of the nodes and pods
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME                STATUS   ROLES           AGE     VERSION
dev-control-plane   Ready    control-plane   6m12s   v1.26.3
dev-worker          Ready    &amp;lt;none&amp;gt;          5m55s   v1.26.3
dev-worker2         Ready    &amp;lt;none&amp;gt;          5m42s   v1.26.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system get pods
NAME                                        READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5857bf8d58-4kcbd    1/1     Running   0          4m14s
calico-node-8qrmb                           1/1     Running   0          4m14s
calico-node-v9mrj                           1/1     Running   0          4m14s
calico-node-vml2c                           1/1     Running   0          4m14s
coredns-787d4945fb-c5xfr                    1/1     Running   0          7m58s
coredns-787d4945fb-mlphp                    1/1     Running   0          7m58s
etcd-dev-control-plane                      1/1     Running   0          8m11s
kube-apiserver-dev-control-plane            1/1     Running   0          8m11s
kube-controller-manager-dev-control-plane   1/1     Running   0          8m10s
kube-proxy-74mrj                            1/1     Running   0          7m57s
kube-proxy-txj8v                            1/1     Running   0          7m44s
kube-proxy-xrqnn                            1/1     Running   0          7m58s
kube-scheduler-dev-control-plane            1/1     Running   0          8m11s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Deploy our Nginx application by creating a pod and exposing it as a ClusterIP service
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run nginx &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--expose&lt;/span&gt;
service/nginx created
pod/nginx created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          32s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc nginx
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
nginx   ClusterIP   10.96.106.155   &amp;lt;none&amp;gt;        80/TCP    34s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify our Nginx application from a temporary busybox pod
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run busybox &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;busybox &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; wget &lt;span class="nt"&gt;-O-&lt;/span&gt; http://nginx
If you don&lt;span class="s1"&gt;'t see a command prompt, try pressing enter.
warning: couldn'&lt;/span&gt;t attach to pod/busybox, falling back to streaming logs: Internal error occurred: error attaching to container: failed to load task: no running task found: task 9268947ec3741ac1bad25fab9454c9c56e51131e7d65098993a87a96ed7ea7d7 not found: not found
Connecting to nginx &lt;span class="o"&gt;(&lt;/span&gt;10.96.106.155:80&lt;span class="o"&gt;)&lt;/span&gt;
writing to stdout
&amp;lt;&lt;span class="o"&gt;!&lt;/span&gt;DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&amp;lt;title&amp;gt;Welcome to nginx!&amp;lt;/title&amp;gt;
&amp;lt;style&amp;gt;
html &lt;span class="o"&gt;{&lt;/span&gt; color-scheme: light dark&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
body &lt;span class="o"&gt;{&lt;/span&gt; width: 35em&lt;span class="p"&gt;;&lt;/span&gt; margin: 0 auto&lt;span class="p"&gt;;&lt;/span&gt;
font-family: Tahoma, Verdana, Arial, sans-serif&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&amp;lt;/style&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;h1&amp;gt;Welcome to nginx!&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;For online documentation and support please refer to
&amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://nginx.org/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;nginx.org&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
Commercial support is available at
&amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://nginx.com/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;nginx.com&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Thank you &lt;span class="k"&gt;for &lt;/span&gt;using nginx.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
-                    100% |&lt;span class="k"&gt;********************************&lt;/span&gt;|   615  0:00:00 ETA
written to stdout
pod &lt;span class="s2"&gt;"busybox"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Delete our cluster after use
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind delete cluster &lt;span class="nt"&gt;--name&lt;/span&gt; dev
Deleting cluster &lt;span class="s2"&gt;"dev"&lt;/span&gt; ...
Deleted nodes: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"dev-control-plane"&lt;/span&gt; &lt;span class="s2"&gt;"dev-worker"&lt;/span&gt; &lt;span class="s2"&gt;"dev-worker2"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kind.sigs.k8s.io/%EF%BF%BChttps://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises"&gt;https://kind.sigs.k8s.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kind.sigs.k8s.io/%EF%BF%BChttps://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises"&gt;https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>docker</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>kind - Setting up Load Balancer using MetalLB - Part 6</title>
      <dc:creator>Unni P</dc:creator>
      <pubDate>Wed, 03 May 2023 07:37:23 +0000</pubDate>
      <link>https://dev.to/iamunnip/kind-setting-up-load-balancer-using-metallb-part-6-50m4</link>
      <guid>https://dev.to/iamunnip/kind-setting-up-load-balancer-using-metallb-part-6-50m4</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this article, we will look at how we can get services of type LoadBalancer in our cluster using MetalLB&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;MetalLB provides a network load balancer implementation for our cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It allows us to create Kubernetes services of type LoadBalancer in clusters that don’t run on a cloud provider&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We will set up MetalLB using the layer 2 protocol&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We can send traffic directly to the load balancer’s external IP because the address range we assign lies within the Docker network’s IP space&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a simple cluster using the below configuration file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;kind.yml 
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dev
nodes:
- role: control-plane
- role: worker
- role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
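
&lt;ul&gt;
&lt;li&gt;If we want the cluster to run a specific Kubernetes version, the v1alpha4 config also accepts an &lt;code&gt;image&lt;/code&gt; per node. A sketch (the tag shown matches the node image from the output below, but any tag published by the kind project would work):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dev
nodes:
- role: control-plane
  image: kindest/node:v1.26.3   # pin the node image instead of using the default
- role: worker
- role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;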





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; kind.yml 
Creating cluster &lt;span class="s2"&gt;"dev"&lt;/span&gt; ...
 ✓ Ensuring node image &lt;span class="o"&gt;(&lt;/span&gt;kindest/node:v1.26.3&lt;span class="o"&gt;)&lt;/span&gt; 🖼
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to &lt;span class="s2"&gt;"kind-dev"&lt;/span&gt;
You can now use your cluster with:

kubectl cluster-info &lt;span class="nt"&gt;--context&lt;/span&gt; kind-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME                STATUS   ROLES           AGE   VERSION
dev-control-plane   Ready    control-plane   68s   v1.26.3
dev-worker          Ready    &amp;lt;none&amp;gt;          37s   v1.26.3
dev-worker2         Ready    &amp;lt;none&amp;gt;          37s   v1.26.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Deploy MetalLB using the default manifests and verify the components are up and running
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system get pods
NAME                          READY   STATUS    RESTARTS   AGE
controller-577b5bdfcc-p7sb5   1/1     Running   0          76s
speaker-cgmm4                 1/1     Running   0          76s
speaker-gwfqr                 1/1     Running   0          76s
speaker-jk684                 1/1     Running   0          76s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;As mentioned in the introduction, we are using the layer 2 protocol of MetalLB.
To complete the layer 2 configuration, we need to give MetalLB a range of IP addresses that it controls. This range must lie within the subnet of the Docker kind network.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker network inspect &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s1"&gt;'{{.IPAM.Config}}'&lt;/span&gt; kind
&lt;span class="o"&gt;[{&lt;/span&gt;172.18.0.0/16  172.18.0.1 map[]&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;fc00:f853:ccd:e793::/64  fc00:f853:ccd:e793::1 map[]&lt;span class="o"&gt;}]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We want our load balancer IPs to come from this subnet, so we configure MetalLB to use 172.18.255.200 to 172.18.255.250 by creating IPAddressPool and L2Advertisement resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the necessary MetalLB resources using the below manifest file.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;metallb.yml 
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250

&lt;span class="nt"&gt;---&lt;/span&gt;
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind
  namespace: metallb-system
spec:
  ipAddressPools:
  - kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; metallb.yml 
ipaddresspool.metallb.io/kind unchanged
l2advertisement.metallb.io/kind created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system get ipaddresspools
NAME   AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
kind   &lt;span class="nb"&gt;true          false&lt;/span&gt;             &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"172.18.255.200-172.18.255.250"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system get l2advertisements
NAME   IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
kind   &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"kind"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
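
&lt;ul&gt;
&lt;li&gt;The AUTO ASSIGN and AVOID BUGGY IPS columns above correspond to optional fields on the IPAddressPool spec. As a sketch (not needed for this setup, and the pool name &lt;code&gt;reserved&lt;/code&gt; is just an example), a pool whose addresses are handed out only when a service requests them explicitly would look like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: reserved
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250
  autoAssign: false      # hand out these IPs only on explicit request
  avoidBuggyIPs: true    # skip .0 and .255 addresses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;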



&lt;h2&gt;
  
  
  Deploy our Application
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create an Nginx pod using the below manifest file and verify its status
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;nginx.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx.yml 
pod/nginx created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          23s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Expose the Nginx pod as a LoadBalancer service using the below manifest file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;nginx-loadbalancer.yml 
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  &lt;span class="nb"&gt;type&lt;/span&gt;: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-loadbalancer.yml 
service/nginx created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check the nginx service again; it now shows an address from the MetalLB pool in the EXTERNAL-IP column
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc nginx 
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP      PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;        AGE
nginx   LoadBalancer   10.96.43.161   172.18.255.200   80:30433/TCP   30s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
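
&lt;ul&gt;
&lt;li&gt;MetalLB assigned the first free address from the pool automatically. If we want a particular address instead, MetalLB supports the &lt;code&gt;metallb.universe.tf/loadBalancerIPs&lt;/code&gt; annotation on the Service. A sketch (172.18.255.201 is just an example; the requested IP must still come from a configured pool):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx
  annotations:
    metallb.universe.tf/loadBalancerIPs: 172.18.255.201   # request a specific IP from the pool
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;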



&lt;ul&gt;
&lt;li&gt;Access the application using the external IP and port
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl http://172.18.255.200:80
&amp;lt;&lt;span class="o"&gt;!&lt;/span&gt;DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&amp;lt;title&amp;gt;Welcome to nginx!&amp;lt;/title&amp;gt;
&amp;lt;style&amp;gt;
html &lt;span class="o"&gt;{&lt;/span&gt; color-scheme: light dark&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
body &lt;span class="o"&gt;{&lt;/span&gt; width: 35em&lt;span class="p"&gt;;&lt;/span&gt; margin: 0 auto&lt;span class="p"&gt;;&lt;/span&gt;
font-family: Tahoma, Verdana, Arial, sans-serif&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&amp;lt;/style&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;h1&amp;gt;Welcome to nginx!&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;For online documentation and support please refer to
&amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://nginx.org/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;nginx.org&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
Commercial support is available at
&amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://nginx.com/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;nginx.com&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Thank you &lt;span class="k"&gt;for &lt;/span&gt;using nginx.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Delete the cluster after use
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind delete cluster &lt;span class="nt"&gt;--name&lt;/span&gt; dev
Deleting cluster &lt;span class="s2"&gt;"dev"&lt;/span&gt; ...
Deleted nodes: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"dev-worker2"&lt;/span&gt; &lt;span class="s2"&gt;"dev-control-plane"&lt;/span&gt; &lt;span class="s2"&gt;"dev-worker"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kind.sigs.k8s.io/%EF%BF%BChttps://metallb.universe.tf/"&gt;https://kind.sigs.k8s.io/&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://kind.sigs.k8s.io/%EF%BF%BChttps://metallb.universe.tf/"&gt;&lt;br&gt;&lt;br&gt;
https://metallb.universe.tf/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>docker</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
