<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lorenzo Garuti</title>
    <description>The latest articles on DEV Community by Lorenzo Garuti (@garutilorenzo).</description>
    <link>https://dev.to/garutilorenzo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F705097%2F177424fc-fbe9-4f52-8b08-23d657571d39.png</url>
      <title>DEV Community: Lorenzo Garuti</title>
      <link>https://dev.to/garutilorenzo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/garutilorenzo"/>
    <language>en</language>
    <item>
      <title>Install and configure MySQL Server and MySQL InnoDB Cluster</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Wed, 31 Aug 2022 09:50:30 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/install-and-configure-mysql-server-and-mysql-innodb-cluster-311b</link>
      <guid>https://dev.to/garutilorenzo/install-and-configure-mysql-server-and-mysql-innodb-cluster-311b</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/garutilorenzo/ansible-role-linux-mysql"&gt;This&lt;/a&gt; role will install and configure MySQL server or MySQL in HA mode using &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-introduction.html"&gt;MySQL InnoDB Cluster&lt;/a&gt; or &lt;a href="https://dev.mysql.com/doc/mysql-replication-excerpt/5.6/en/replication-gtids.html"&gt;GTID replication&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Role Variables&lt;/li&gt;
&lt;li&gt;Vagrant up, build the test infrastructure&lt;/li&gt;
&lt;li&gt;Ansible setup and pre-flight check&lt;/li&gt;
&lt;li&gt;Deploy MySQL InnoDB Cluster &lt;/li&gt;
&lt;li&gt;Cluster high availability check&lt;/li&gt;
&lt;li&gt;Restore from complete outage&lt;/li&gt;
&lt;li&gt;Clean up&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Role Variables
&lt;/h3&gt;

&lt;p&gt;This role accepts the following variables:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Var&lt;/th&gt;
&lt;th&gt;Required&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Desc&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_subnet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;192.168.25.0/24&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Subnet where MySQL will listen. If the VM or bare-metal server has more than one interface, Ansible will filter the interfaces and MySQL will listen only on the matching one. This variable is also used to calculate the MySQL server ID.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_root_pw&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;MySQL root password.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_authentication&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;mysql_native_password&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;MySQL authentication method.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;disable_firewall&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;If set to yes Ansible will disable the firewall.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;disable_selinux&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Disable SELinux. Defaults to no; if you want to configure SELinux, use another role. Set this variable to yes to disable SELinux.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;resolv_mode&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;dns&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;How MySQL resolves host names; defaults to &lt;em&gt;dns&lt;/em&gt;. If set to &lt;em&gt;hosts&lt;/em&gt; the /etc/hosts file will be overwritten&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_listen_all_interfaces&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Set this variable to yes to make MySQL listen on all interfaces (0.0.0.0). Otherwise the listen IP address is derived from the &lt;em&gt;mysql_subnet&lt;/em&gt; variable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_user&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;mysql&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;MySQL system user&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_group&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;mysql&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Group of the MySQL system user&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_data_dir&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/var/lib/mysql&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;MySQL data dir&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_log_dir&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/var/log/mysql&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;MySQL log dir&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_conf_dir&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/etc/mysql&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;MySQL conf dir&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_pid_dir&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/var/run/mysqld&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;MySQL pid dir&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_operator_user&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;operator&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;MySQL operator user, used to bootstrap MySQL InnoDB Cluster.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_operator_password&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Op3r4torMyPw&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Password of operator user&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_replica_user&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;replica&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;MySQL replica user. Used for all the replica operations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_replica_password&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rEpL1c4p4Sw0,rd&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Password of replica user&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_replication_mode&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-introduction.html"&gt;InnoDB Cluster&lt;/a&gt;, &lt;a href="https://dev.mysql.com/doc/mysql-replication-excerpt/5.6/en/replication-gtids.html"&gt;GTID&lt;/a&gt;, Empty/None (default)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_gr_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Required if &lt;em&gt;mysql_replication_mode&lt;/em&gt; is set to &lt;em&gt;InnoDB Cluster&lt;/em&gt;. UUID of the Group Replication&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_gr_vcu&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Required if &lt;em&gt;mysql_replication_mode&lt;/em&gt; is set to &lt;em&gt;InnoDB Cluster&lt;/em&gt;. Group Replication &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/group-replication-options.html#sysvar_group_replication_view_change_uuid"&gt;view change uuid&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mysql_innodb_cluster_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Required if &lt;em&gt;mysql_replication_mode&lt;/em&gt; is set to &lt;em&gt;InnoDB Cluster&lt;/em&gt;. The name of MySQL InnoDB Cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
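&lt;p&gt;As a hypothetical illustration of the &lt;em&gt;mysql_subnet&lt;/em&gt; note above (the role's exact expression may differ), a unique server ID can be derived from the host's last IP octet inside that subnet:&lt;/p&gt;

```shell
# Hypothetical sketch: derive a unique MySQL server-id from the last
# octet of the host's address inside mysql_subnet (192.168.25.0/24).
ip=192.168.25.112
server_id=${ip##*.}   # strip everything up to the last dot
echo "$server_id"     # prints 112
```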

&lt;h3&gt;
  
  
  Vagrant up, build the test infrastructure
&lt;/h3&gt;

&lt;p&gt;To test this role we use &lt;a href="https://www.vagrantup.com/"&gt;Vagrant&lt;/a&gt; and &lt;a href="https://www.virtualbox.org/"&gt;VirtualBox&lt;/a&gt;, but if you prefer you can also use your own VMs or your bare-metal machines.&lt;/p&gt;

&lt;p&gt;The first step is to download &lt;a href="https://github.com/garutilorenzo/ansible-role-linux-mysql"&gt;this&lt;/a&gt; repo and bring up all the VMs. Before doing so, paste your public SSH key into the &lt;em&gt;CHANGE_ME&lt;/em&gt; variable in the Vagrantfile. You can also adjust the number of VMs deployed by changing the NNODES variable (in this example we use 5 nodes). Now we are ready to provision the machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/garutilorenzo/ansible-role-linux-mysql.git

cd ansible-role-linux-mysql/

vagrant up
Bringing machine 'my-ubuntu-0' up with 'virtualbox' provider...
Bringing machine 'my-ubuntu-1' up with 'virtualbox' provider...
Bringing machine 'my-ubuntu-2' up with 'virtualbox' provider...
Bringing machine 'my-ubuntu-3' up with 'virtualbox' provider...
Bringing machine 'my-ubuntu-4' up with 'virtualbox' provider...

[...]
[...]

    my-ubuntu-4: Inserting generated public key within guest...
==&amp;gt; my-ubuntu-4: Machine booted and ready!
==&amp;gt; my-ubuntu-4: Checking for guest additions in VM...
    my-ubuntu-4: The guest additions on this VM do not match the installed version of
    my-ubuntu-4: VirtualBox! In most cases this is fine, but in rare cases it can
    my-ubuntu-4: prevent things such as shared folders from working properly. If you see
    my-ubuntu-4: shared folder errors, please make sure the guest additions within the
    my-ubuntu-4: virtual machine match the version of VirtualBox you have installed on
    my-ubuntu-4: your host and reload your VM.
    my-ubuntu-4:
    my-ubuntu-4: Guest Additions Version: 6.0.0 r127566
    my-ubuntu-4: VirtualBox Version: 6.1
==&amp;gt; my-ubuntu-4: Setting hostname...
==&amp;gt; my-ubuntu-4: Configuring and enabling network interfaces...
==&amp;gt; my-ubuntu-4: Mounting shared folders...
    my-ubuntu-4: /vagrant =&amp;gt; C:/Users/Lorenzo Garuti/workspaces/simple-ubuntu
==&amp;gt; my-ubuntu-4: Running provisioner: shell...
    my-ubuntu-4: Running: inline script
==&amp;gt; my-ubuntu-4: Running provisioner: shell...
    my-ubuntu-4: Running: inline script
    my-ubuntu-4: hello from node 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Ansible setup and pre-flight check
&lt;/h3&gt;

&lt;p&gt;If you don't have Ansible installed, install Ansible and all the requirements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install python3 python3-pip uuidgen openssl
pip3 install pipenv

pipenv shell
pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now with Ansible installed we can download the role directly from GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-galaxy install git+https://github.com/garutilorenzo/ansible-role-linux-mysql.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Ansible and the role installed we can set up our inventory file (hosts.ini):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[mysql]
my-ubuntu-0 ansible_host=192.168.25.110
my-ubuntu-1 ansible_host=192.168.25.111
my-ubuntu-2 ansible_host=192.168.25.112
my-ubuntu-3 ansible_host=192.168.25.113
my-ubuntu-4 ansible_host=192.168.25.114
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and the vars.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="na"&gt;disable_firewall&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="na"&gt;disable_selinux&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="na"&gt;mysql_resolv_mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hosts&lt;/span&gt;
&lt;span class="na"&gt;mysql_subnet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.25.0/24&lt;/span&gt;
&lt;span class="na"&gt;mysql_listen_all_interfaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;

&lt;span class="na"&gt;mysql_root_pw&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;CHANGE_ME&amp;gt;'&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;- openssl rand -base64 32 | sed 's/=//'&lt;/span&gt;
&lt;span class="na"&gt;mysql_replication_mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;InnoDB&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Cluster'&lt;/span&gt;
&lt;span class="na"&gt;mysql_gr_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;CHANGE_ME&amp;gt;'&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;- uuidgen&lt;/span&gt;
&lt;span class="na"&gt;mysql_gr_vcu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;CHANGE_ME&amp;gt;'&lt;/span&gt; &lt;span class="c1"&gt;#  &amp;lt;- uuidgen&lt;/span&gt;
&lt;span class="na"&gt;mysql_innodb_cluster_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cluster_lab'&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; mysql_gr_name and mysql_gr_vcu are two different UUIDs, so run uuidgen twice.&lt;br&gt;
With these vars we are going to deploy MySQL in HA mode with MySQL InnoDB Cluster; the cluster will be created from an existing &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/group-replication.html"&gt;Group Replication&lt;/a&gt; configuration.&lt;/p&gt;

&lt;p&gt;The final step before proceeding with the installation is to create the site.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;remote_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vagrant&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-role-linux-mysql&lt;/span&gt;
  &lt;span class="na"&gt;vars_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;vars.yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy MySQL InnoDB Cluster
&lt;/h3&gt;

&lt;p&gt;We are finally ready to deploy MySQL InnoDB Cluster using Ansible:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export ANSIBLE_HOST_KEY_CHECKING=False # Ansible skip ssh-key validation

ansible-playbook -i hosts.ini site.yml -e mysql_bootstrap_host=my-ubuntu-0

TASK [ansible-role-linux-mysql : render mysql.conf.d/mysqld.cnf] *******************************************************************************************
ok: [my-ubuntu-0]
ok: [my-ubuntu-2]
ok: [my-ubuntu-1]
ok: [my-ubuntu-3]
ok: [my-ubuntu-4]

TASK [ansible-role-linux-mysql : render mysql.conf.d/gtid.cnf] *********************************************************************************************
ok: [my-ubuntu-1]
ok: [my-ubuntu-0]
ok: [my-ubuntu-3]
ok: [my-ubuntu-2]
ok: [my-ubuntu-4]

TASK [ansible-role-linux-mysql : ansible.builtin.fail] *****************************************************************************************************
skipping: [my-ubuntu-0]
skipping: [my-ubuntu-1]
skipping: [my-ubuntu-2]
skipping: [my-ubuntu-3]
skipping: [my-ubuntu-4]

TASK [ansible-role-linux-mysql : ansible.builtin.fail] *****************************************************************************************************
skipping: [my-ubuntu-0]
skipping: [my-ubuntu-1]
skipping: [my-ubuntu-2]
skipping: [my-ubuntu-3]
skipping: [my-ubuntu-4]

TASK [ansible-role-linux-mysql : render innodb_cluster.cnf] ************************************************************************************************
ok: [my-ubuntu-0]
ok: [my-ubuntu-1]
ok: [my-ubuntu-3]
ok: [my-ubuntu-2]
ok: [my-ubuntu-4]

RUNNING HANDLER [ansible-role-linux-mysql : reload systemd] ************************************************************************************************
ok: [my-ubuntu-3]
ok: [my-ubuntu-0]
ok: [my-ubuntu-2]
ok: [my-ubuntu-4]
ok: [my-ubuntu-1]

PLAY RECAP *************************************************************************************************************************************************
my-ubuntu-0               : ok=69   changed=27   unreachable=0    failed=0    skipped=12   rescued=0    ignored=0   
my-ubuntu-1               : ok=71   changed=28   unreachable=0    failed=0    skipped=10   rescued=0    ignored=0   
my-ubuntu-2               : ok=71   changed=28   unreachable=0    failed=0    skipped=10   rescued=0    ignored=0   
my-ubuntu-3               : ok=71   changed=28   unreachable=0    failed=0    skipped=10   rescued=0    ignored=0   
my-ubuntu-4               : ok=71   changed=28   unreachable=0    failed=0    skipped=10   rescued=0    ignored=0 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cluster is now installed, but we still have to persist some configuration. Since this is a new cluster, Ansible started the Group Replication in &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/group-replication-bootstrap.html"&gt;bootstrap&lt;/a&gt; mode. This means that on the first instance (in this case my-ubuntu-0) &lt;em&gt;group_replication_bootstrap_group&lt;/em&gt; is set to &lt;em&gt;ON&lt;/em&gt; and &lt;em&gt;group_replication_group_seeds&lt;/em&gt; is empty. A second run of Ansible sets these variables to the correct values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i hosts.ini site.yml

TASK [ansible-role-linux-mysql : ansible.builtin.fail] *****************************************************************************************************
skipping: [my-ubuntu-0]
skipping: [my-ubuntu-1]
skipping: [my-ubuntu-2]
skipping: [my-ubuntu-3]
skipping: [my-ubuntu-4]

TASK [ansible-role-linux-mysql : ansible.builtin.fail] *****************************************************************************************************
skipping: [my-ubuntu-0]
skipping: [my-ubuntu-1]
skipping: [my-ubuntu-2]
skipping: [my-ubuntu-3]
skipping: [my-ubuntu-4]

TASK [ansible-role-linux-mysql : render innodb_cluster.cnf] ************************************************************************************************
ok: [my-ubuntu-2]
ok: [my-ubuntu-4]
ok: [my-ubuntu-1]
ok: [my-ubuntu-3]
changed: [my-ubuntu-0]

PLAY RECAP *************************************************************************************************************************************************
my-ubuntu-0               : ok=30   changed=1    unreachable=0    failed=0    skipped=18   rescued=0    ignored=0   
my-ubuntu-1               : ok=30   changed=0    unreachable=0    failed=0    skipped=18   rescued=0    ignored=0   
my-ubuntu-2               : ok=30   changed=0    unreachable=0    failed=0    skipped=18   rescued=0    ignored=0   
my-ubuntu-3               : ok=30   changed=0    unreachable=0    failed=0    skipped=18   rescued=0    ignored=0   
my-ubuntu-4               : ok=30   changed=0    unreachable=0    failed=0    skipped=18   rescued=0    ignored=0   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
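&lt;p&gt;To make the effect of the second run concrete, the rendered innodb_cluster.cnf on the bootstrap host ends up with settings along these lines (a sketch, not the role's verbatim template; the 33061 group-communication port is an assumption based on common Group Replication setups):&lt;/p&gt;

```ini
# Sketch: bootstrap mode is turned off and the seed list is populated
group_replication_bootstrap_group = OFF
group_replication_group_seeds = "my-ubuntu-0:33061,my-ubuntu-1:33061,my-ubuntu-2:33061,my-ubuntu-3:33061,my-ubuntu-4:33061"
```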



&lt;p&gt;In this guide mysqlsh is used to make operations on MySQL InnoDB Cluster. &lt;a href="https://dev.mysql.com/doc/mysql-shell/8.0/en/"&gt;Here&lt;/a&gt; you can find more information about mysqlsh.&lt;/p&gt;

&lt;p&gt;Now we can finally check our cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@my-ubuntu-0:~# mysqlsh root@my-ubuntu-0
Please provide the password for 'root@my-ubuntu-0': ******************************************
MySQL  localhost:33060+ ssl  JS &amp;gt; clu = dba.getCluster()
MySQL  localhost:33060+ ssl  JS &amp;gt; clu.status()
{
    "clusterName": "cluster_lab", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "my-ubuntu-0:3306", 
        "ssl": "DISABLED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to 2 failures.", 
        "topology": {
            "my-ubuntu-0:3306": {
                "address": "my-ubuntu-0:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-1:3306": {
                "address": "my-ubuntu-1:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-2:3306": {
                "address": "my-ubuntu-2:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-3:3306": {
                "address": "my-ubuntu-3:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-4:3306": {
                "address": "my-ubuntu-4:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "my-ubuntu-0:3306"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
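&lt;p&gt;The "can tolerate up to 2 failures" line follows from Group Replication's majority-quorum rule: a group of N members stays available while a majority is online, so it tolerates floor((N-1)/2) member failures:&lt;/p&gt;

```shell
# Majority quorum: a group of N members tolerates floor((N-1)/2) failures
members=5
echo $(( (members - 1) / 2 ))   # prints 2 for our 5-node cluster
```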



&lt;h3&gt;
  
  
  Cluster high availability check
&lt;/h3&gt;

&lt;p&gt;To test the cluster we can use a sample Docker Compose stack. The example uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WordPress as frontend&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/garutilorenzo/mysqlrouter"&gt;mysqlrouter&lt;/a&gt; to connect WordPress to MySQL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To run this test you have to install &lt;a href="https://docs.docker.com/get-docker/"&gt;Docker&lt;/a&gt; and &lt;a href="https://docs.docker.com/compose/install/"&gt;Docker compose&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  User and Database creation
&lt;/h4&gt;

&lt;p&gt;We need to create a database and a user for WordPress. To do this, connect to the primary server (check the cluster status and find the node with "mode": "R/W"):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@my-ubuntu-0:~# mysqlsh root@localhost
Please provide the password for 'root@localhost': ******************************************

MySQL  localhost:33060+ ssl  JS &amp;gt; \sql # &amp;lt;- SWITCH TO SQL MODE
Switching to SQL mode... Commands end with ;

create database wordpress;
create user 'wordpress'@'%' identified by 'wordpress';
grant all on wordpress.* TO 'wordpress'@'%';
flush privileges;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Sample Docker Compose stack
&lt;/h4&gt;

&lt;p&gt;The sample stack can be found in the &lt;em&gt;examples&lt;/em&gt; folder of the repository; this is the compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.4'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;wordpress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wordpress:latest&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;80:80&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;WORDPRESS_DB_HOST=mysqlrouter:6446&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;WORDPRESS_DB_USER=wordpress&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;WORDPRESS_DB_PASSWORD=wordpress&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;WORDPRESS_DB_NAME=wordpress&lt;/span&gt;

  &lt;span class="na"&gt;mysqlrouter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;garutilorenzo/mysqlrouter:8.0.30&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;volume&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysqlrouter&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/app/mysqlrouter/&lt;/span&gt;
        &lt;span class="na"&gt;volume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;nocopy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MYSQL_HOST=my-ubuntu-0&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MYSQL_PORT=3306&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MYSQL_USER=root&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MYSQL_PASSWORD=&amp;lt;CHANGE_ME&amp;gt;&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;- the same password in the vars.yml file&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MYSQL_ROUTER_ACCOUNT=mysql_router_user&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MYSQL_ROUTER_PASSWORD=&amp;lt;CHANGE_ME&amp;gt;&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;- openssl rand -base64 32 | sed 's/=//'&lt;/span&gt;
    &lt;span class="na"&gt;extra_hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;my-ubuntu-0&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.25.110&lt;/span&gt;
      &lt;span class="na"&gt;my-ubuntu-1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.25.111&lt;/span&gt;
      &lt;span class="na"&gt;my-ubuntu-2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.25.112&lt;/span&gt;
      &lt;span class="na"&gt;my-ubuntu-3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.25.113&lt;/span&gt;
      &lt;span class="na"&gt;my-ubuntu-4&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.25.114&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;mysqlrouter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can start our stack and inspect the logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose -f mysql-router-compose.yml up -d
docker-compose -f mysql-router-compose.yml logs mysqlrouter

examples-mysqlrouter-1  | Succesfully contacted mysql server at my-ubuntu-0. Checking for cluster state.
examples-mysqlrouter-1  | Check if config exist
examples-mysqlrouter-1  | bootstrap mysqlrouter with account mysql_router_user
examples-mysqlrouter-1  | Succesfully contacted mysql server at my-ubuntu-0. Trying to bootstrap.
examples-mysqlrouter-1  | Please enter MySQL password for root: 
examples-mysqlrouter-1  | # Bootstrapping MySQL Router instance at '/app/mysqlrouter'...
examples-mysqlrouter-1  | 
examples-mysqlrouter-1  | Please enter MySQL password for mysql_router_user: 
examples-mysqlrouter-1  | - Creating account(s) (only those that are needed, if any)
examples-mysqlrouter-1  | - Verifying account (using it to run SQL queries that would be run by Router)
examples-mysqlrouter-1  | - Storing account in keyring
examples-mysqlrouter-1  | - Adjusting permissions of generated files
examples-mysqlrouter-1  | - Creating configuration /app/mysqlrouter/mysqlrouter.conf
examples-mysqlrouter-1  | 
examples-mysqlrouter-1  | # MySQL Router configured for the InnoDB Cluster 'cluster_lab'
examples-mysqlrouter-1  | 
examples-mysqlrouter-1  | After this MySQL Router has been started with the generated configuration
examples-mysqlrouter-1  | 
examples-mysqlrouter-1  |     $ mysqlrouter -c /app/mysqlrouter/mysqlrouter.conf
examples-mysqlrouter-1  | 
examples-mysqlrouter-1  | InnoDB Cluster 'cluster_lab' can be reached by connecting to:
examples-mysqlrouter-1  | 
examples-mysqlrouter-1  | ## MySQL Classic protocol
examples-mysqlrouter-1  | 
examples-mysqlrouter-1  | - Read/Write Connections: localhost:6446
examples-mysqlrouter-1  | - Read/Only Connections:  localhost:6447
examples-mysqlrouter-1  | 
examples-mysqlrouter-1  | ## MySQL X protocol
examples-mysqlrouter-1  | 
examples-mysqlrouter-1  | - Read/Write Connections: localhost:6448
examples-mysqlrouter-1  | - Read/Only Connections:  localhost:6449
examples-mysqlrouter-1  | 
examples-mysqlrouter-1  | Starting mysql-router.
examples-mysqlrouter-1  | 2022-08-30 12:03:57 io INFO [7f794f1e0bc0] starting 4 io-threads, using backend 'linux_epoll'
examples-mysqlrouter-1  | 2022-08-30 12:03:57 http_server INFO [7f794f1e0bc0] listening on 0.0.0.0:8443
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache_plugin INFO [7f794a606700] Starting Metadata Cache
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache INFO [7f794a606700] Connections using ssl_mode 'PREFERRED'
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache INFO [7f7948602700] Starting metadata cache refresh thread
examples-mysqlrouter-1  | 2022-08-30 12:03:57 routing INFO [7f790e7fc700] [routing:bootstrap_rw] started: routing strategy = first-available
examples-mysqlrouter-1  | 2022-08-30 12:03:57 routing INFO [7f790e7fc700] Start accepting connections for routing routing:bootstrap_rw listening on 6446
examples-mysqlrouter-1  | 2022-08-30 12:03:57 routing INFO [7f790dffb700] [routing:bootstrap_x_ro] started: routing strategy = round-robin-with-fallback
examples-mysqlrouter-1  | 2022-08-30 12:03:57 routing INFO [7f790dffb700] Start accepting connections for routing routing:bootstrap_x_ro listening on 6449
examples-mysqlrouter-1  | 2022-08-30 12:03:57 routing INFO [7f790effd700] [routing:bootstrap_ro] started: routing strategy = round-robin-with-fallback
examples-mysqlrouter-1  | 2022-08-30 12:03:57 routing INFO [7f790effd700] Start accepting connections for routing routing:bootstrap_ro listening on 6447
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache INFO [7f7948602700] Connected with metadata server running on my-ubuntu-2:3306
examples-mysqlrouter-1  | 2022-08-30 12:03:57 routing INFO [7f790d7fa700] [routing:bootstrap_x_rw] started: routing strategy = first-available
examples-mysqlrouter-1  | 2022-08-30 12:03:57 routing INFO [7f790d7fa700] Start accepting connections for routing routing:bootstrap_x_rw listening on 6448
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache INFO [7f7948602700] Potential changes detected in cluster 'cluster_lab' after metadata refresh
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache INFO [7f7948602700] Metadata for cluster 'cluster_lab' has 5 member(s), single-primary: (view_id=0)
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache INFO [7f7948602700]     my-ubuntu-2:3306 / 33060 - mode=RO 
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache INFO [7f7948602700]     my-ubuntu-1:3306 / 33060 - mode=RO 
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache INFO [7f7948602700]     my-ubuntu-0:3306 / 33060 - mode=RW 
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache INFO [7f7948602700]     my-ubuntu-3:3306 / 33060 - mode=RO 
examples-mysqlrouter-1  | 2022-08-30 12:03:57 metadata_cache INFO [7f7948602700]     my-ubuntu-4:3306 / 33060 - mode=RO 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Test the frontend
&lt;/h4&gt;

&lt;p&gt;Now if you browse to &lt;a href="http://localhost"&gt;localhost&lt;/a&gt; you will see the WordPress installation page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VZpQSMqt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/k3s-wp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VZpQSMqt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/k3s-wp.png" alt="wp-install" width="488" height="777"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Install and configure WordPress; now we are ready for some &lt;a href="https://netflix.github.io/chaosmonkey/"&gt;Chaos Monkey&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Simulate disaster
&lt;/h4&gt;

&lt;p&gt;To test WordPress reachability we can run this simple loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while true; do curl -s -o /dev/null -w "%{http_code}" http://localhost; echo; sleep 1; done
200
200

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
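&lt;p&gt;The one-liner above can also be written as a small script that tallies the status codes it sees, which makes it easier to quantify the downtime window during a failover. This is just a sketch in Python; the URL and attempt count are illustrative:&lt;/p&gt;

```python
import urllib.request
import urllib.error

def probe(url, attempts=5):
    """Probe a URL repeatedly and tally the HTTP status codes seen.

    Returns a dict mapping status code to occurrence count; 0 means
    the connection failed entirely (refused or timed out).
    """
    results = {}
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                code = resp.status
        except urllib.error.HTTPError as exc:
            code = exc.code   # e.g. 500 during the primary switchover
        except (urllib.error.URLError, OSError):
            code = 0          # node unreachable
        results[code] = results.get(code, 0) + 1
    return results
```

&lt;p&gt;Running something like &lt;em&gt;probe("http://localhost", attempts=60)&lt;/em&gt; across the failover would show mostly 200s with a handful of 500s, mirroring the curl loop below.&lt;/p&gt;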



&lt;p&gt;Now shut down the RW node (in this case &lt;em&gt;my-ubuntu-0&lt;/em&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@my-ubuntu-0:~# sudo halt -p
Connection to 192.168.25.110 closed by remote host.
Connection to 192.168.25.110 closed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and check the output of the test script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while true; do curl -s -o /dev/null -w "%{http_code}" http://localhost; echo; sleep 1; done
200
500 # &amp;lt;- my-ubuntu-0 shutdown and MySQL primary switch
200
200

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we check the cluster status from the second node, and we see that the cluster is still &lt;em&gt;ONLINE&lt;/em&gt; and can tolerate one more failure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@my-ubuntu-1:~# mysqlsh root@localhost
Please provide the password for 'root@localhost': ******************************************

MySQL  localhost:33060+ ssl  JS &amp;gt; clu = dba.getCluster()
MySQL  localhost:33060+ ssl  JS &amp;gt; clu.status()
{
    "clusterName": "cluster_lab", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "my-ubuntu-2:3306", 
        "ssl": "DISABLED", 
        "status": "OK_PARTIAL", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure. 1 member is not active.", 
        "topology": {
            "my-ubuntu-0:3306": {
                "address": "my-ubuntu-0:3306", 
                "memberRole": "SECONDARY", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2003: Could not open connection to 'my-ubuntu-0:3306': Can't connect to MySQL server on 'my-ubuntu-0:3306' (110)", 
                "status": "(MISSING)"
            }, 
            "my-ubuntu-1:3306": {
                "address": "my-ubuntu-1:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-2:3306": {
                "address": "my-ubuntu-2:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-3:3306": {
                "address": "my-ubuntu-3:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-4:3306": {
                "address": "my-ubuntu-4:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "my-ubuntu-2:3306"
}
 MySQL  localhost:33060+ ssl  JS &amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
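&lt;p&gt;The document returned by &lt;em&gt;clu.status()&lt;/em&gt; is JSON, so if you capture it (for example via mysqlsh's JSON output mode) you can summarize the topology programmatically. A minimal Python sketch; the sample payload below is trimmed to just the fields used, with names matching the output above:&lt;/p&gt;

```python
import json

def summarize(status_json):
    """Return (primary, online_count, missing_members) from clu.status() output."""
    rs = json.loads(status_json)["defaultReplicaSet"]
    topology = rs["topology"]
    online = [h for h, m in topology.items() if m["status"] == "ONLINE"]
    missing = [h for h, m in topology.items() if m["status"] != "ONLINE"]
    return rs["primary"], len(online), missing

# Trimmed sample matching the status shown above
sample = json.dumps({
    "defaultReplicaSet": {
        "primary": "my-ubuntu-2:3306",
        "topology": {
            "my-ubuntu-0:3306": {"status": "(MISSING)"},
            "my-ubuntu-1:3306": {"status": "ONLINE"},
            "my-ubuntu-2:3306": {"status": "ONLINE"},
        },
    }
})
```

&lt;p&gt;This kind of check is handy as a monitoring probe: alert when the online count drops below the quorum size.&lt;/p&gt;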



&lt;p&gt;If you check WP at &lt;a href="http://localhost"&gt;localhost&lt;/a&gt;, it is still available; the primary is now the &lt;em&gt;my-ubuntu-2&lt;/em&gt; node.&lt;br&gt;
If we now bring the &lt;em&gt;my-ubuntu-0&lt;/em&gt; node back up, it will rejoin the cluster and receive the missed updates from the other nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@my-ubuntu-1:~# mysqlsh root@localhost
Please provide the password for 'root@localhost': ******************************************

MySQL  localhost:33060+ ssl  JS &amp;gt; clu = dba.getCluster()
MySQL  localhost:33060+ ssl  JS &amp;gt; clu.status()
{
    "clusterName": "cluster_lab", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "my-ubuntu-2:3306", 
        "ssl": "DISABLED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to 2 failures.", 
        "topology": {
            "my-ubuntu-0:3306": {
                "address": "my-ubuntu-0:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-1:3306": {
                "address": "my-ubuntu-1:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-2:3306": {
                "address": "my-ubuntu-2:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-3:3306": {
                "address": "my-ubuntu-3:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-4:3306": {
                "address": "my-ubuntu-4:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "my-ubuntu-2:3306"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Restore from complete outage
&lt;/h3&gt;

&lt;p&gt;If for any reason all the servers went down, the cluster has to be restored from a &lt;a href="https://dev.mysql.com/doc/mysql-shell/8.0/en/troubleshooting-innodb-cluster.html"&gt;complete outage&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To do this, connect to one instance, edit /etc/mysql/mysql.conf.d/innodb_cluster.cnf, set &lt;em&gt;group_replication_bootstrap_group&lt;/em&gt; to &lt;em&gt;ON&lt;/em&gt;, and comment out &lt;em&gt;group_replication_group_seeds&lt;/em&gt;. Then restart MySQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vagrant@my-ubuntu-0:~$
vi /etc/mysql/mysql.conf.d/innodb_cluster.cnf

group_replication_bootstrap_group=on
#group_replication_group_seeds=my-ubuntu-0:33061,my-ubuntu-1:33061,my-ubuntu-2:33061,my-ubuntu-3:33061,my-ubuntu-4:33061

sudo systemctl restart mysqld
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
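&lt;p&gt;If you script this recovery step, the same edit can be done programmatically. A sketch using Python's configparser, assuming the options live under a &lt;em&gt;[mysqld]&lt;/em&gt; section; here the seeds option is removed rather than commented, which has the same effect:&lt;/p&gt;

```python
import configparser
import io

def enable_bootstrap(cnf_text):
    """Set group_replication_bootstrap_group=on and drop the seeds list.

    Takes the innodb_cluster.cnf content as a string and returns the
    edited content.
    """
    cfg = configparser.ConfigParser(allow_no_value=True, delimiters=("=",))
    cfg.read_string(cnf_text)
    cfg["mysqld"]["group_replication_bootstrap_group"] = "on"
    cfg.remove_option("mysqld", "group_replication_group_seeds")
    out = io.StringIO()
    cfg.write(out)
    return out.getvalue()
```

&lt;p&gt;Remember to revert &lt;em&gt;group_replication_bootstrap_group&lt;/em&gt; to &lt;em&gt;off&lt;/em&gt; once the cluster is back, otherwise a later restart of that node would bootstrap a new group.&lt;/p&gt;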



&lt;p&gt;On each of the other four members we have to start group replication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vagrant@my-ubuntu-4:~$ mysqlsh root@localhost
Please provide the password for 'root@localhost': ******************************************

MySQL  localhost:33060+ ssl  JS &amp;gt; \sql # &amp;lt;- SWITCH TO SQL MODE
Switching to SQL mode... Commands end with ;
MySQL  localhost:33060+ ssl  SQL &amp;gt; start group_replication;
Query OK, 0 rows affected (1.7095 sec)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If traffic on the cluster was low or absent, the cluster will come back ONLINE:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MySQL  localhost:33060+ ssl  SQL &amp;gt; \js
Switching to JavaScript mode...
MySQL  localhost:33060+ ssl  JS &amp;gt; clu = dba.getCluster()
MySQL  localhost:33060+ ssl  JS &amp;gt; clu.status()
{
    "clusterName": "cluster_lab", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "my-ubuntu-0:3306", 
        "ssl": "DISABLED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to 2 failures.", 
        "topology": {
            "my-ubuntu-0:3306": {
                "address": "my-ubuntu-0:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-1:3306": {
                "address": "my-ubuntu-1:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-2:3306": {
                "address": "my-ubuntu-2:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-3:3306": {
                "address": "my-ubuntu-3:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "my-ubuntu-4:3306": {
                "address": "my-ubuntu-4:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "my-ubuntu-0:3306"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the cluster was handling a high volume of traffic at the moment of the &lt;a href="https://dev.mysql.com/doc/mysql-shell/8.0/en/troubleshooting-innodb-cluster.html"&gt;complete outage&lt;/a&gt;, you will probably have to run, from mysqlsh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MySQL  localhost:33060+ ssl  JS &amp;gt; var clu = dba.rebootClusterFromCompleteOutage();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Clean up
&lt;/h3&gt;

&lt;p&gt;When you are done, you can destroy the cluster with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vagrant destroy

    my-ubuntu-4: Are you sure you want to destroy the 'my-ubuntu-4' VM? [y/N] y
==&amp;gt; my-ubuntu-4: Forcing shutdown of VM...
==&amp;gt; my-ubuntu-4: Destroying VM and associated drives...
    my-ubuntu-3: Are you sure you want to destroy the 'my-ubuntu-3' VM? [y/N] y
==&amp;gt; my-ubuntu-3: Forcing shutdown of VM...
==&amp;gt; my-ubuntu-3: Destroying VM and associated drives...
    my-ubuntu-2: Are you sure you want to destroy the 'my-ubuntu-2' VM? [y/N] y
==&amp;gt; my-ubuntu-2: Forcing shutdown of VM...
==&amp;gt; my-ubuntu-2: Destroying VM and associated drives...
    my-ubuntu-1: Are you sure you want to destroy the 'my-ubuntu-1' VM? [y/N] y
==&amp;gt; my-ubuntu-1: Forcing shutdown of VM...
==&amp;gt; my-ubuntu-1: Destroying VM and associated drives...
    my-ubuntu-0: Are you sure you want to destroy the 'my-ubuntu-0' VM? [y/N] y
==&amp;gt; my-ubuntu-0: Forcing shutdown of VM...
==&amp;gt; my-ubuntu-0: Destroying VM and associated drives...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ansible</category>
      <category>mysql</category>
      <category>devops</category>
    </item>
    <item>
      <title>Install and configure the ELK stack with Ansible</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Tue, 23 Aug 2022 07:48:26 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/install-and-configure-the-elk-stack-with-ansible-4hgo</link>
      <guid>https://dev.to/garutilorenzo/install-and-configure-the-elk-stack-with-ansible-4hgo</guid>
      <description>&lt;p&gt;This &lt;a href="https://github.com/garutilorenzo/ansible-collection-elk"&gt;ansible collection&lt;/a&gt; will install and configure a high available &lt;a href="https://www.elastic.co/elasticsearch/"&gt;Elasticsearch cluster&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With this collection you can also install and configure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/logstash/"&gt;Logstash&lt;/a&gt; is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash."&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/kibana/"&gt;Kibana&lt;/a&gt; is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And also the Elastic Beats:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/beats/filebeat"&gt;filebeat&lt;/a&gt; - Whether you’re collecting from security devices, cloud, containers, hosts, or OT, Filebeat helps you keep the simple things simple by offering a lightweight way to forward and centralize logs and files.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/beats/metricbeat"&gt;metricbeat&lt;/a&gt; Collect metrics from your systems and services. From CPU to memory, Redis to NGINX, and much more, Metricbeat is a lightweight way to send system and service statistics.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/beats/heartbeat"&gt;heartbeath&lt;/a&gt; Monitor services for their availability with active probing. Given a list of URLs, Heartbeat asks the simple question: Are you alive? Heartbeat ships this information and response time to the rest of the Elastic Stack for further analysis.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In most cases you may prefer &lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html"&gt;ECK&lt;/a&gt; or &lt;a href="https://www.elastic.co/cloud/"&gt;Elastic Cloud&lt;/a&gt;, but if Kubernetes is to you what kryptonite is to Superman, if you are jealous of your data, or if you simply don't trust any cloud provider, this is the right place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Vagrant up, build the test infrastructure&lt;/li&gt;
&lt;li&gt;Ansible setup and pre-flight check&lt;/li&gt;
&lt;li&gt;Deploy ELK with Ansible&lt;/li&gt;
&lt;li&gt;Clean up&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Vagrant up, build the test infrastructure
&lt;/h3&gt;

&lt;p&gt;To test this collection we use &lt;a href="https://www.vagrantup.com/"&gt;Vagrant&lt;/a&gt; and &lt;a href="https://www.virtualbox.org/"&gt;VirtualBox&lt;/a&gt;, but if you prefer you can also use your own VMs or your bare-metal machines.&lt;/p&gt;

&lt;p&gt;The first step is to download this &lt;a href="https://github.com/garutilorenzo/ansible-collection-elk"&gt;repo&lt;/a&gt; and bring up all the VMs. But first, paste your public SSH key into the &lt;em&gt;CHANGE_ME&lt;/em&gt; variable in the Vagrantfile. You can also adjust the number of VMs deployed by changing the NNODES variable (in this example we will use 6 nodes). Now we are ready to provision the machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/garutilorenzo/ansible-collection-elk.git

cd ansible-collection-elk/

vagrant up
Bringing machine 'elk-ubuntu-0' up with 'virtualbox' provider...
Bringing machine 'elk-ubuntu-1' up with 'virtualbox' provider...
Bringing machine 'elk-ubuntu-2' up with 'virtualbox' provider...
Bringing machine 'elk-ubuntu-3' up with 'virtualbox' provider...
Bringing machine 'elk-ubuntu-4' up with 'virtualbox' provider...
Bringing machine 'elk-ubuntu-5' up with 'virtualbox' provider...

[...]
[...]

    elk-ubuntu-5: Inserting generated public key within guest...
==&amp;gt; elk-ubuntu-5: Machine booted and ready!
==&amp;gt; elk-ubuntu-5: Checking for guest additions in VM...
    elk-ubuntu-5: The guest additions on this VM do not match the installed version of
    elk-ubuntu-5: VirtualBox! In most cases this is fine, but in rare cases it can
    elk-ubuntu-5: prevent things such as shared folders from working properly. If you see
    elk-ubuntu-5: shared folder errors, please make sure the guest additions within the
    elk-ubuntu-5: virtual machine match the version of VirtualBox you have installed on
    elk-ubuntu-5: your host and reload your VM.
    elk-ubuntu-5:
    elk-ubuntu-5: Guest Additions Version: 6.0.0 r127566
    elk-ubuntu-5: VirtualBox Version: 6.1
==&amp;gt; elk-ubuntu-5: Setting hostname...
==&amp;gt; elk-ubuntu-5: Configuring and enabling network interfaces...
==&amp;gt; elk-ubuntu-5: Mounting shared folders...
    elk-ubuntu-5: /vagrant =&amp;gt; C:/Users/Lorenzo Garuti/workspaces/simple-ubuntu
==&amp;gt; elk-ubuntu-5: Running provisioner: shell...
    elk-ubuntu-5: Running: inline script
==&amp;gt; elk-ubuntu-5: Running provisioner: shell...
    elk-ubuntu-5: Running: inline script
    elk-ubuntu-5: hello from node 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Ansible setup and pre-flight check
&lt;/h3&gt;

&lt;p&gt;If you don't have Ansible installed, install it along with all the requirements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install python3 python3-pip
pip3 install pipenv

pipenv shell
pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Ansible installed, we can install the collection directly from GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-galaxy collection install git+https://github.com/garutilorenzo/ansible-collection-elk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Ansible and the collection installed, we can set up our inventory file (hosts.ini):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[elasticsearch_master]
elk-ubuntu-0 ansible_host=192.168.25.110
elk-ubuntu-1 ansible_host=192.168.25.111
elk-ubuntu-2 ansible_host=192.168.25.112

[elasticsearch_data]
elk-ubuntu-3 ansible_host=192.168.25.113
elk-ubuntu-4 ansible_host=192.168.25.114
elk-ubuntu-5 ansible_host=192.168.25.115

[elasticsearch_ca]
elk-ubuntu-0 ansible_host=192.168.25.110

[kibana]
elk-ubuntu-1 ansible_host=192.168.25.111
elk-ubuntu-4 ansible_host=192.168.25.114

[logstash]
elk-ubuntu-2 ansible_host=192.168.25.112
elk-ubuntu-5 ansible_host=192.168.25.115

[elasticsearch:children]
elasticsearch_master
elasticsearch_data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
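&lt;p&gt;Since the inventory is plain INI, a quick sanity check of the group layout can be scripted before running the playbook. A small sketch with Python's configparser; real inventory parsing is better left to &lt;em&gt;ansible-inventory --list&lt;/em&gt;, this only counts the entries per group:&lt;/p&gt;

```python
import configparser

def group_sizes(inventory_text):
    """Count the host entries in each group of an INI-style inventory."""
    cfg = configparser.ConfigParser(allow_no_value=True, delimiters=("=",))
    cfg.read_string(inventory_text)
    return {name: len(cfg[name]) for name in cfg.sections()}
```

&lt;p&gt;With the inventory above, this would report 3 masters, 3 data nodes, 2 kibana and 2 logstash entries, matching the cluster layout described below.&lt;/p&gt;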



&lt;p&gt;and the vars.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="na"&gt;disable_firewall&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="na"&gt;disable_selinux&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;

&lt;span class="na"&gt;elasticsearch_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8.3.3&lt;/span&gt;
&lt;span class="na"&gt;kibana_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8.3.3&lt;/span&gt;
&lt;span class="na"&gt;logstash_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8.3.3&lt;/span&gt;
&lt;span class="na"&gt;beats_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8.3.3&lt;/span&gt;

&lt;span class="na"&gt;elasticsearch_resolv_mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hosts&lt;/span&gt;
&lt;span class="na"&gt;elasticsearch_install_mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;span class="na"&gt;elasticsearch_local_tar_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/elk_tar_path&lt;/span&gt;
&lt;span class="na"&gt;elasticsearch_monitoring_enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="na"&gt;elasticsearch_master_is_also_data_node&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;

&lt;span class="na"&gt;kibana_install_mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;span class="na"&gt;kibana_local_tar_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/elk_tar_path&lt;/span&gt;
&lt;span class="na"&gt;setup_kibana_dashboards&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="na"&gt;kibana_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://elk-ubuntu-1:5601&lt;/span&gt;

&lt;span class="na"&gt;heartbeat_number_of_shards&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;span class="na"&gt;heartbeat_number_of_replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;

&lt;span class="na"&gt;metricbeat_number_of_shards&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;span class="na"&gt;metricbeat_number_of_replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;

&lt;span class="na"&gt;filebeat_number_of_shards&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;span class="na"&gt;filebeat_number_of_replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;

&lt;span class="na"&gt;elasticsearch_hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;elk-ubuntu-0&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;elk-ubuntu-1&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;elk-ubuntu-2&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;elk-ubuntu-3&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;elk-ubuntu-5&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;elk-ubuntu-5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final cluster will be made of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;6 elasticsearch nodes (the master nodes will be also data nodes)&lt;/li&gt;
&lt;li&gt;2 kibana instances&lt;/li&gt;
&lt;li&gt;2 logstash instances&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;all VMs will be monitored by the Elastic Beats&lt;/li&gt;
&lt;li&gt;all the index templates will have 3 shards and 3 replicas&lt;/li&gt;
&lt;li&gt;Since we don't have any DNS available, Ansible will insert all the node names in the /etc/hosts file&lt;/li&gt;
&lt;li&gt;firewall and selinux will be disabled&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To save bandwidth, we download Elasticsearch and Kibana once on our Ansible machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p ~/elk_tar_path # &amp;lt;- you can customize this path by changing elasticsearch_local_tar_path variable
curl  -o ~/elk_tar_path/kibana-8.3.3-linux-x86_64.tar.gz https://artifacts.elastic.co/downloads/kibana/kibana-8.3.3-linux-x86_64.tar.gz
curl  -o ~/elk_tar_path/elasticsearch-8.3.3-linux-x86_64.tar.gz https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.3-linux-x86_64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
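&lt;p&gt;The download URLs all follow the same pattern, so when you bump &lt;em&gt;elasticsearch_version&lt;/em&gt; you only need to change the version string in one place. A small helper sketch (the pattern holds for elasticsearch, kibana and logstash; the Beats tarballs live under a different path):&lt;/p&gt;

```python
BASE = "https://artifacts.elastic.co/downloads"

def artifact_url(product, version, arch="linux-x86_64"):
    """Build the tarball download URL for an Elastic product."""
    return f"{BASE}/{product}/{product}-{version}-{arch}.tar.gz"

# Matches the two curl commands above
kibana_url = artifact_url("kibana", "8.3.3")
elasticsearch_url = artifact_url("elasticsearch", "8.3.3")
```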



&lt;p&gt;We also have to create the certificate directory, where the Elasticsearch certificates will be stored:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p ~/very_secure_dir # &amp;lt;- you can customize this path by changing elasticsearch_local_certs_dir variable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final step before proceeding with the installation is to create the site.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elasticsearch&lt;/span&gt; 
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;remote_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vagrant&lt;/span&gt;
  &lt;span class="na"&gt;collections&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;garutilorenzo.ansible_collection_elk&lt;/span&gt;
  &lt;span class="na"&gt;vars_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;vars.yml&lt;/span&gt;

  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;import_role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elasticsearch&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kibana&lt;/span&gt; 
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;remote_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vagrant&lt;/span&gt;
  &lt;span class="na"&gt;collections&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;garutilorenzo.ansible_collection_elk&lt;/span&gt;
  &lt;span class="na"&gt;vars_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;vars.yml&lt;/span&gt;

  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;import_role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kibana&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logstash&lt;/span&gt; 
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;remote_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vagrant&lt;/span&gt;
  &lt;span class="na"&gt;collections&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;garutilorenzo.ansible_collection_elk&lt;/span&gt;
  &lt;span class="na"&gt;vars_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;vars.yml&lt;/span&gt;

  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;import_role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logstash&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elasticsearch&lt;/span&gt; 
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;remote_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vagrant&lt;/span&gt;
  &lt;span class="na"&gt;collections&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;garutilorenzo.ansible_collection_elk&lt;/span&gt;
  &lt;span class="na"&gt;vars_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;vars.yml&lt;/span&gt;

  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;import_role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beats&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy ELK with Ansible
&lt;/h3&gt;

&lt;p&gt;We are finally ready to install the ELK stack with Ansible. Since we don't have a CA certificate yet, we pass an extra variable &lt;em&gt;generateca&lt;/em&gt; to our playbook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export ANSIBLE_HOST_KEY_CHECKING=False # Ansible skip ssh-key validation
ansible-playbook site.yml -i hosts.ini -e "generateca=yes"

ansible-playbook site.yml -i hosts.ini

PLAY [elasticsearch] ***************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************
ok: [elk-ubuntu-0]
ok: [elk-ubuntu-1]
ok: [elk-ubuntu-2]
ok: [elk-ubuntu-3]
ok: [elk-ubuntu-4]
ok: [elk-ubuntu-5]

TASK [garutilorenzo.ansible_collection_elk.elasticsearch : setup] ******************************************************************************************
ok: [elk-ubuntu-2]
ok: [elk-ubuntu-0]
ok: [elk-ubuntu-3]
ok: [elk-ubuntu-1]
ok: [elk-ubuntu-4]
ok: [elk-ubuntu-5]

TASK [garutilorenzo.ansible_collection_elk.elasticsearch : include_tasks] **********************************************************************************
included: /home/lorenzo/workspaces-local/ansible-test-collection/elk/collections/ansible_collections/garutilorenzo/ansible_collection_elk/roles/elasticsearch/tasks/preflight.yml for elk-ubuntu-0, elk-ubuntu-1, elk-ubuntu-2, elk-ubuntu-3, elk-ubuntu-4, elk-ubuntu-5

TASK [garutilorenzo.ansible_collection_elk.elasticsearch : Put SELinux in permissive mode, logging actions that would be blocked.] *************************
skipping: [elk-ubuntu-0]
skipping: [elk-ubuntu-1]
skipping: [elk-ubuntu-2]
skipping: [elk-ubuntu-3]
skipping: [elk-ubuntu-4]
skipping: [elk-ubuntu-5]

TASK [garutilorenzo.ansible_collection_elk.elasticsearch : Disable SELinux] ********************************************************************************
skipping: [elk-ubuntu-0]
skipping: [elk-ubuntu-1]
skipping: [elk-ubuntu-2]
skipping: [elk-ubuntu-3]
skipping: [elk-ubuntu-4]
skipping: [elk-ubuntu-5]

TASK [garutilorenzo.ansible_collection_elk.elasticsearch : disable firewalld] ******************************************************************************
skipping: [elk-ubuntu-0]
skipping: [elk-ubuntu-1]
skipping: [elk-ubuntu-2]
skipping: [elk-ubuntu-3]
skipping: [elk-ubuntu-4]
skipping: [elk-ubuntu-5]

TASK [garutilorenzo.ansible_collection_elk.elasticsearch : disable ufw] ************************************************************************************
changed: [elk-ubuntu-3]
changed: [elk-ubuntu-0]
changed: [elk-ubuntu-2]
changed: [elk-ubuntu-4]
changed: [elk-ubuntu-1]
changed: [elk-ubuntu-5]

[...]
[...]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This can take a while, but if everything goes well, the final Ansible output will be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[...]
[...]

TASK [garutilorenzo.ansible_collection_elk.beats : enable and start heartbeat] *****************************************************************************
changed: [elk-ubuntu-0]
changed: [elk-ubuntu-1]
changed: [elk-ubuntu-3]
changed: [elk-ubuntu-4]
changed: [elk-ubuntu-5]
changed: [elk-ubuntu-2]

RUNNING HANDLER [garutilorenzo.ansible_collection_elk.beats : reload systemd] ******************************************************************************
ok: [elk-ubuntu-4]
ok: [elk-ubuntu-0]
ok: [elk-ubuntu-3]
ok: [elk-ubuntu-1]
ok: [elk-ubuntu-2]
ok: [elk-ubuntu-5]

RUNNING HANDLER [garutilorenzo.ansible_collection_elk.beats : reload filebeat] *****************************************************************************
changed: [elk-ubuntu-0]
changed: [elk-ubuntu-4]
changed: [elk-ubuntu-1]
changed: [elk-ubuntu-2]
changed: [elk-ubuntu-5]
changed: [elk-ubuntu-3]

RUNNING HANDLER [garutilorenzo.ansible_collection_elk.beats : reload heartbeat] ****************************************************************************
changed: [elk-ubuntu-2]
changed: [elk-ubuntu-1]
changed: [elk-ubuntu-0]
changed: [elk-ubuntu-3]
changed: [elk-ubuntu-4]
changed: [elk-ubuntu-5]

RUNNING HANDLER [garutilorenzo.ansible_collection_elk.beats : reload metricbeat] ***************************************************************************
changed: [elk-ubuntu-1]
changed: [elk-ubuntu-4]
changed: [elk-ubuntu-0]
changed: [elk-ubuntu-3]
changed: [elk-ubuntu-5]
changed: [elk-ubuntu-2]

PLAY RECAP *************************************************************************************************************************************************
elk-ubuntu-0               : ok=119  changed=61   unreachable=0    failed=0    skipped=12   rescued=0    ignored=1   
elk-ubuntu-1               : ok=135  changed=68   unreachable=0    failed=0    skipped=15   rescued=0    ignored=1   
elk-ubuntu-2               : ok=136  changed=72   unreachable=0    failed=0    skipped=14   rescued=0    ignored=1   
elk-ubuntu-3               : ok=114  changed=58   unreachable=0    failed=0    skipped=12   rescued=0    ignored=1   
elk-ubuntu-4               : ok=135  changed=68   unreachable=0    failed=0    skipped=15   rescued=0    ignored=1   
elk-ubuntu-5               : ok=136  changed=72   unreachable=0    failed=0    skipped=14   rescued=0    ignored=1 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we use Kibana to analyze our Elasticsearch data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1Mo_Zu-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-login.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1Mo_Zu-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-login.png" alt="elk-login" width="556" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The password is the default bootstrap password &lt;em&gt;changeme&lt;/em&gt;. You can customize it in the elasticsearch role by modifying the &lt;em&gt;elasticsearch_bootstrap_password&lt;/em&gt; variable. &lt;strong&gt;Remember&lt;/strong&gt; to set the same value for both &lt;em&gt;elasticsearch_password&lt;/em&gt; and &lt;em&gt;elasticsearch_bootstrap_password&lt;/em&gt; in the vars.yml file.&lt;/p&gt;
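&lt;p&gt;As a sketch, a vars.yml fragment that keeps the two variables in sync could look like this (&lt;em&gt;changeme&lt;/em&gt; is only the insecure demo default, replace it with a value of your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# vars.yml - both values must match
elasticsearch_bootstrap_password: changeme
elasticsearch_password: changeme
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;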

&lt;p&gt;This is the default Kibana dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tm7GSdrk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-home.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tm7GSdrk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-home.png" alt="elk-home" width="781" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can &lt;em&gt;Discover&lt;/em&gt; our data in Analytics -&amp;gt; Discover:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2T5CAj7s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-discover.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2T5CAj7s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-discover.png" alt="elk-discover" width="800" height="856"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we can filter our data using the KQL syntax or use one of the many Kibana features.&lt;/p&gt;
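&lt;p&gt;For example, a KQL query limited to a single Beat and host (&lt;em&gt;agent.type&lt;/em&gt; and &lt;em&gt;host.name&lt;/em&gt; are the standard ECS fields shipped by the Beats) could be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;agent.type : "metricbeat" and host.name : "elk-ubuntu-0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;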

&lt;p&gt;We can also inspect our data using one of the many Dashboards provided by the various Beats (Analytics -&amp;gt; Dashboards). For example, we can inspect the metricbeat system dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--739Kw3gF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--739Kw3gF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-dashboard.png" alt="elk-dashboard" width="798" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Management -&amp;gt; Stack Management -&amp;gt; Index Management we can see all of our indexes. We have to enable &lt;em&gt;Include hidden indices&lt;/em&gt;, because in our brand new cluster all the indexes (filebeat, heartbeat, metricbeat) are hidden:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--riT2V5pN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-index-management.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--riT2V5pN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-index-management.png" alt="elk-index-management" width="880" height="707"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Management -&amp;gt; Stack Monitoring we can enable the &lt;em&gt;self monitoring&lt;/em&gt; feature:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IqjQQMdq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-self-monitoring.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IqjQQMdq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-self-monitoring.png" alt="elk-self-monitoring.png" width="880" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now we can inspect the cluster stats:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KibU1and--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-self-status.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KibU1and--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/elk-self-status.png" alt="elk-self-status.png" width="880" height="771"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is only a short introduction to the ELK Stack; you can read more in the &lt;a href="https://www.elastic.co/guide/index.html"&gt;Elastic Docs&lt;/a&gt;. To get the best from the ELK stack, read also &lt;a href="https://www.elastic.co/observability"&gt;Elastic Observability&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clean up
&lt;/h3&gt;

&lt;p&gt;When you are done, you can destroy the cluster with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vagrant destroy
    elk-ubuntu-5: Are you sure you want to destroy the 'elk-ubuntu-5' VM? [y/N] y
==&amp;gt; elk-ubuntu-5: Forcing shutdown of VM...
==&amp;gt; elk-ubuntu-5: Destroying VM and associated drives...
    elk-ubuntu-4: Are you sure you want to destroy the 'elk-ubuntu-4' VM? [y/N] y
==&amp;gt; elk-ubuntu-4: Forcing shutdown of VM...
==&amp;gt; elk-ubuntu-4: Destroying VM and associated drives...
    elk-ubuntu-3: Are you sure you want to destroy the 'elk-ubuntu-3' VM? [y/N] y
==&amp;gt; elk-ubuntu-3: Forcing shutdown of VM...
==&amp;gt; elk-ubuntu-3: Destroying VM and associated drives...
    elk-ubuntu-2: Are you sure you want to destroy the 'elk-ubuntu-2' VM? [y/N] y
==&amp;gt; elk-ubuntu-2: Forcing shutdown of VM...
==&amp;gt; elk-ubuntu-2: Destroying VM and associated drives...
    elk-ubuntu-1: Are you sure you want to destroy the 'elk-ubuntu-1' VM? [y/N] y
==&amp;gt; elk-ubuntu-1: Forcing shutdown of VM...
==&amp;gt; elk-ubuntu-1: Destroying VM and associated drives...
    elk-ubuntu-0: Are you sure you want to destroy the 'elk-ubuntu-0' VM? [y/N] y
==&amp;gt; elk-ubuntu-0: Forcing shutdown of VM...
==&amp;gt; elk-ubuntu-0: Destroying VM and associated drives...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ansible</category>
      <category>elasticsearch</category>
      <category>elk</category>
      <category>devops</category>
    </item>
    <item>
      <title>Install and configure a high available Kubernetes cluster with Ansible</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Wed, 17 Aug 2022 14:52:08 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/install-and-configure-a-high-available-kubernetes-cluster-with-ansible-46ai</link>
      <guid>https://dev.to/garutilorenzo/install-and-configure-a-high-available-kubernetes-cluster-with-ansible-46ai</guid>
      <description>&lt;p&gt;This &lt;a href="https://github.com/garutilorenzo/ansible-role-linux-kubernetes"&gt;ansible role&lt;/a&gt; will install and configure a high available Kubernetes cluster. This repo automate the installation process of Kubernetes using &lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/"&gt;kubeadm&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This repo is only an example of how to use Ansible automation to install and configure a Kubernetes cluster. For a production environment use &lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubespray/"&gt;Kubespray&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;Install ansible, ipaddr and netaddr:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
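&lt;p&gt;A minimal requirements.txt matching the packages named above would be (versions unpinned here; pin them as needed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible
ipaddr
netaddr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;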



&lt;p&gt;Download the role from GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-galaxy install git+https://github.com/garutilorenzo/ansible-role-linux-kubernetes.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Role Variables
&lt;/h2&gt;

&lt;p&gt;This role accepts the following variables:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Var&lt;/th&gt;
&lt;th&gt;Required&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Desc&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubernetes_subnet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;192.168.25.0/24&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Subnet where Kubernetes will be deployed. If the VM or bare metal server has more than one interface, Ansible will filter the interface used by Kubernetes based on the interface subnet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;disable_firewall&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;If set to yes Ansible will disable the firewall.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubernetes_version&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;1.24.3&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes version to install&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubernetes_cri&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;containerd&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/architecture/cri/"&gt;CRI&lt;/a&gt; to install.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubernetes_cni&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;flannel&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes &lt;a href="https://github.com/containernetworking/cni"&gt;CNI&lt;/a&gt; to install.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubernetes_dns_domain&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;cluster.local&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes default DNS domain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubernetes_pod_subnet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;10.244.0.0/16&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes pod subnet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubernetes_service_subnet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;10.96.0.0/12&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes service subnet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubernetes_api_port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;6443&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;kubeapi listen port&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;setup_vip&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Set up the Kubernetes VIP address using &lt;a href="https://kube-vip.io/"&gt;kube-vip&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubernetes_vip_ip&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;192.168.25.225&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Required&lt;/strong&gt; if setup_vip is set to &lt;em&gt;yes&lt;/em&gt;. VIP IP address for the control plane&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubevip_version&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;v0.4.3&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;kube-vip container version&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;install_longhorn&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Install Longhorn, Cloud native distributed block storage for Kubernetes.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;longhorn_version&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;v1.3.1&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Longhorn release.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;install_nginx_ingress&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Install nginx ingress controller
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nginx_ingress_controller_version&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;controller-v1.3.0&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;nginx ingress controller version&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nginx_ingress_controller_http_nodeport&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;30080&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;NodePort used by nginx ingress controller for the incoming http traffic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nginx_ingress_controller_https_nodeport&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;30443&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;NodePort used by nginx ingress controller for the incoming https traffic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;enable_nginx_ingress_proxy_protocol&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enable nginx ingress controller proxy protocol mode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;enable_nginx_real_ip&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enable nginx ingress controller real-ip module&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nginx_ingress_real_ip_cidr&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0.0.0.0/0&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Required&lt;/strong&gt; if enable_nginx_real_ip is set to &lt;em&gt;yes&lt;/em&gt;. Trusted subnet to use with the real-ip module&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nginx_ingress_proxy_body_size&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;20m&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;nginx ingress controller max proxy body size&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;sans_base&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;[list of values, see defaults/main.yml]&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;list of IP addresses or FQDNs used to sign the kube-api certificate&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
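&lt;p&gt;As a sketch, a vars file overriding a few of these defaults could look like this (illustrative values: setup_vip, install_longhorn and install_nginx_ingress are enabled here, while the role defaults leave them off):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubernetes_version: 1.24.3
kubernetes_subnet: 192.168.25.0/24
setup_vip: yes
kubernetes_vip_ip: 192.168.25.225
install_longhorn: yes
install_nginx_ingress: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;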

&lt;h2&gt;
  
  
  Extra Variables
&lt;/h2&gt;

&lt;p&gt;This role accepts an extra variable &lt;em&gt;kubernetes_init_host&lt;/em&gt;. This variable is used when the cluster is bootstrapped for the first time. Its value must be the hostname of one of the master nodes: when Ansible runs on the matched host, Kubernetes will be initialized.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cluster resource deployed
&lt;/h2&gt;

&lt;p&gt;With this role, the Nginx ingress controller and Longhorn will be installed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nginx ingress controller
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.github.io/ingress-nginx/"&gt;Nginx ingress controller&lt;/a&gt; is used as ingress controller.&lt;/p&gt;

&lt;p&gt;The installation follows the &lt;a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters"&gt;bare metal&lt;/a&gt; approach: the ingress controller is exposed via a NodePort Service.&lt;br&gt;
You can customize the ports exposed by the NodePort service; use the Role Variables to change these values.&lt;/p&gt;
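&lt;p&gt;For example, to pin the NodePort values explicitly you could set the following in your vars file (30080 and 30443 are already the role defaults):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;install_nginx_ingress: yes
nginx_ingress_controller_http_nodeport: 30080
nginx_ingress_controller_https_nodeport: 30443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;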
&lt;h3&gt;
  
  
  Longhorn
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://longhorn.io"&gt;Longhorn&lt;/a&gt; is a lightweight, reliable, and powerful distributed block storage system for Kubernetes.&lt;/p&gt;

&lt;p&gt;Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Vagrant
&lt;/h2&gt;

&lt;p&gt;To test this role you can use &lt;a href="https://www.vagrantup.com/"&gt;Vagrant&lt;/a&gt; and &lt;a href="https://www.virtualbox.org/"&gt;Virtualbox&lt;/a&gt; to bring up an example infrastructure. Once you have downloaded this repo, use Vagrant to start the virtual machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vagrant up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the Vagrantfile you can inject your public ssh key directly into the authorized_keys of the vagrant user: change the &lt;em&gt;CHANGE_ME&lt;/em&gt; placeholder in the Vagrantfile. You can also adjust the number of VMs deployed by changing the NNODES variable (default: 6).&lt;/p&gt;

&lt;h2&gt;
  
  
  Using this role
&lt;/h2&gt;

&lt;p&gt;To use this role, follow the example in the &lt;a href="https://github.com/garutilorenzo/ansible-role-linux-kubernetes/tree/master/examples"&gt;examples/&lt;/a&gt; dir. Adjust the hosts.ini file with your hosts and run the playbook:&lt;br&gt;
&lt;/p&gt;
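&lt;p&gt;As a sketch, a hosts.ini consistent with the run shown here could be the following (the &lt;em&gt;kubeworker&lt;/em&gt; group name is an assumption; check the examples/ dir for the real one):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[kubemaster]
k8s-ubuntu-0
k8s-ubuntu-1
k8s-ubuntu-2

[kubeworker]
k8s-ubuntu-3
k8s-ubuntu-4
k8s-ubuntu-5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;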

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lorenzo@mint-virtual:~$ ansible-playbook -i hosts-ubuntu.ini site.yml -e kubernetes_init_host=k8s-ubuntu-0

PLAY [kubemaster] ***************************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************
ok: [k8s-ubuntu-2]
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-0]

TASK [ansible-role-kubernetes : include_tasks] **********************************************************************************************************************
included: /home/lorenzo/workspaces-local/ansible-role-kubernetes/tasks/setup_repo_Debian.yml for k8s-ubuntu-0, k8s-ubuntu-1, k8s-ubuntu-2 =&amp;gt; (item=/home/lorenzo/workspaces-local/ansible-role-kubernetes/tasks/setup_repo_Debian.yml)

TASK [ansible-role-kubernetes : Install required system packages] ***************************************************************************************************
ok: [k8s-ubuntu-2]
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-0]

TASK [ansible-role-kubernetes : Add Google GPG apt Key] *************************************************************************************************************
ok: [k8s-ubuntu-0]
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : Add K8s Repository] *****************************************************************************************************************
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-2]
ok: [k8s-ubuntu-0]

TASK [ansible-role-kubernetes : Add Docker GPG apt Key] *************************************************************************************************************
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-0]
ok: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : shell] ******************************************************************************************************************************
changed: [k8s-ubuntu-1]
changed: [k8s-ubuntu-2]
changed: [k8s-ubuntu-0]

TASK [ansible-role-kubernetes : Add Docker Repository] **************************************************************************************************************
ok: [k8s-ubuntu-0]
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : setup] ******************************************************************************************************************************
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-0]
ok: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : include_tasks] **********************************************************************************************************************
included: /home/lorenzo/workspaces-local/ansible-role-kubernetes/tasks/preflight.yml for k8s-ubuntu-0, k8s-ubuntu-1, k8s-ubuntu-2

TASK [ansible-role-kubernetes : disable ufw] ************************************************************************************************************************
ok: [k8s-ubuntu-2]
ok: [k8s-ubuntu-0]
ok: [k8s-ubuntu-1]

TASK [ansible-role-kubernetes : Install iptables-legacy] ************************************************************************************************************
skipping: [k8s-ubuntu-0]
skipping: [k8s-ubuntu-1]
skipping: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : Remove zram-generator-defaults] *****************************************************************************************************
skipping: [k8s-ubuntu-0]
skipping: [k8s-ubuntu-1]
skipping: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : disable firewalld] ******************************************************************************************************************
skipping: [k8s-ubuntu-0]
skipping: [k8s-ubuntu-1]
skipping: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : Put SELinux in permissive mode, logging actions that would be blocked.] *************************************************************
skipping: [k8s-ubuntu-0]
skipping: [k8s-ubuntu-1]
skipping: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : Disable SELinux] ********************************************************************************************************************
skipping: [k8s-ubuntu-0]
skipping: [k8s-ubuntu-1]
skipping: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : Install openssl] ********************************************************************************************************************
ok: [k8s-ubuntu-2]
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-0]

TASK [ansible-role-kubernetes : load overlay kernel module] *********************************************************************************************************
changed: [k8s-ubuntu-1]
changed: [k8s-ubuntu-0]
changed: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : load br_netfilter kernel module] ****************************************************************************************************
changed: [k8s-ubuntu-1]
changed: [k8s-ubuntu-0]
changed: [k8s-ubuntu-2]

[...]
[...]
[...]

TASK [ansible-role-kubernetes : Add KUBELET_ROOT_DIR env var] *******************************************************************************************************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : Add KUBELET_ROOT_DIR env var, set value] ********************************************************************************************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : Install longhorn] *******************************************************************************************************************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : Install longhorn storageclass] ******************************************************************************************************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : include_tasks] **********************************************************************************************************************
included: /home/lorenzo/workspaces-local/ansible-role-kubernetes/tasks/install_nginx_ingress.yml for k8s-ubuntu-3, k8s-ubuntu-4, k8s-ubuntu-5

TASK [ansible-role-kubernetes : Check if ingress-nginx is installed] ************************************************************************************************
changed: [k8s-ubuntu-3 -&amp;gt; k8s-ubuntu-0(192.168.25.110)]

TASK [ansible-role-kubernetes : Install ingress-nginx] **************************************************************************************************************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : render nginx_ingress_config.yml] ****************************************************************************************************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : Apply nginx ingress config] *********************************************************************************************************
skipping: [k8s-ubuntu-3]

PLAY RECAP **********************************************************************************************************************************************************
k8s-ubuntu-0               : ok=78   changed=24   unreachable=0    failed=0    skipped=25   rescued=0    ignored=3   
k8s-ubuntu-1               : ok=52   changed=12   unreachable=0    failed=0    skipped=30   rescued=0    ignored=1   
k8s-ubuntu-2               : ok=52   changed=12   unreachable=0    failed=0    skipped=30   rescued=0    ignored=1
k8s-ubuntu-3               : ok=58   changed=30   unreachable=0    failed=0    skipped=35   rescued=0    ignored=1   
k8s-ubuntu-4               : ok=52   changed=28   unreachable=0    failed=0    skipped=27   rescued=0    ignored=1   
k8s-ubuntu-5               : ok=52   changed=28   unreachable=0    failed=0    skipped=27   rescued=0    ignored=1   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have a Kubernetes cluster deployed in highly available mode, we can check the status of the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@k8s-ubuntu-0:~# kubectl get nodes
NAME           STATUS   ROLES           AGE    VERSION
k8s-ubuntu-0   Ready    control-plane   139m   v1.24.3
k8s-ubuntu-1   Ready    control-plane   136m   v1.24.3
k8s-ubuntu-2   Ready    control-plane   136m   v1.24.3
k8s-ubuntu-3   Ready    &amp;lt;none&amp;gt;          117m   v1.24.3
k8s-ubuntu-4   Ready    &amp;lt;none&amp;gt;          117m   v1.24.3
k8s-ubuntu-5   Ready    &amp;lt;none&amp;gt;          117m   v1.24.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the pods status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@k8s-ubuntu-0:~# kubectl get pods --all-namespaces
NAMESPACE         NAME                                           READY   STATUS      RESTARTS       AGE
ingress-nginx     ingress-nginx-admission-create-tsc8p           0/1     Completed   0              135m
ingress-nginx     ingress-nginx-admission-patch-48tpn            0/1     Completed   0              135m
ingress-nginx     ingress-nginx-controller-6dc865cd86-kfq88      1/1     Running     0              135m
kube-flannel      kube-flannel-ds-fm4s6                          1/1     Running     0              117m
kube-flannel      kube-flannel-ds-hhvxx                          1/1     Running     0              117m
kube-flannel      kube-flannel-ds-ngdtc                          1/1     Running     0              117m
kube-flannel      kube-flannel-ds-q5ncb                          1/1     Running     0              136m
kube-flannel      kube-flannel-ds-vq4kk                          1/1     Running     0              139m
kube-flannel      kube-flannel-ds-zshpf                          1/1     Running     0              137m
kube-system       coredns-6d4b75cb6d-8dh9h                       1/1     Running     0              139m
kube-system       coredns-6d4b75cb6d-xq98k                       1/1     Running     0              139m
kube-system       etcd-k8s-ubuntu-0                              1/1     Running     0              139m
kube-system       etcd-k8s-ubuntu-1                              1/1     Running     0              136m
kube-system       etcd-k8s-ubuntu-2                              1/1     Running     0              136m
kube-system       kube-apiserver-k8s-ubuntu-0                    1/1     Running     0              139m
kube-system       kube-apiserver-k8s-ubuntu-1                    1/1     Running     0              135m
kube-system       kube-apiserver-k8s-ubuntu-2                    1/1     Running     0              136m
kube-system       kube-controller-manager-k8s-ubuntu-0           1/1     Running     0              139m
kube-system       kube-controller-manager-k8s-ubuntu-1           1/1     Running     0              136m
kube-system       kube-controller-manager-k8s-ubuntu-2           1/1     Running     0              135m
kube-system       kube-proxy-59jqx                               1/1     Running     0              136m
kube-system       kube-proxy-8mjwr                               1/1     Running     0              139m
kube-system       kube-proxy-8nhbw                               1/1     Running     0              117m
kube-system       kube-proxy-j2rrx                               1/1     Running     0              117m
kube-system       kube-proxy-qwd5r                               1/1     Running     0              117m
kube-system       kube-proxy-vcs7g                               1/1     Running     0              137m
kube-system       kube-scheduler-k8s-ubuntu-0                    1/1     Running     0              139m
kube-system       kube-scheduler-k8s-ubuntu-1                    1/1     Running     0              136m
kube-system       kube-scheduler-k8s-ubuntu-2                    1/1     Running     0              135m
kube-system       kube-vip-k8s-ubuntu-0                          1/1     Running     1 (136m ago)   139m
kube-system       kube-vip-k8s-ubuntu-1                          1/1     Running     0              136m
kube-system       kube-vip-k8s-ubuntu-2                          1/1     Running     0              136m
longhorn-system   csi-attacher-dcb85d774-jrggr                   1/1     Running     0              114m
longhorn-system   csi-attacher-dcb85d774-slhqt                   1/1     Running     0              114m
longhorn-system   csi-attacher-dcb85d774-xcbxn                   1/1     Running     0              114m
longhorn-system   csi-provisioner-5d8dd96b57-74x6h               1/1     Running     0              114m
longhorn-system   csi-provisioner-5d8dd96b57-kdzdf               1/1     Running     0              114m
longhorn-system   csi-provisioner-5d8dd96b57-xmpjf               1/1     Running     0              114m
longhorn-system   csi-resizer-7c5bb5fd65-4262v                   1/1     Running     0              114m
longhorn-system   csi-resizer-7c5bb5fd65-mfjgv                   1/1     Running     0              114m
longhorn-system   csi-resizer-7c5bb5fd65-qw944                   1/1     Running     0              114m
longhorn-system   csi-snapshotter-5586bc7c79-bs2xn               1/1     Running     0              114m
longhorn-system   csi-snapshotter-5586bc7c79-d927b               1/1     Running     0              114m
longhorn-system   csi-snapshotter-5586bc7c79-v99t6               1/1     Running     0              114m
longhorn-system   engine-image-ei-766a591b-hrs6g                 1/1     Running     0              114m
longhorn-system   engine-image-ei-766a591b-n9fsn                 1/1     Running     0              114m
longhorn-system   engine-image-ei-766a591b-vxhbb                 1/1     Running     0              114m
longhorn-system   instance-manager-e-3dba6914                    1/1     Running     0              114m
longhorn-system   instance-manager-e-7bd8b1ff                    1/1     Running     0              114m
longhorn-system   instance-manager-e-aca0fdc4                    1/1     Running     0              114m
longhorn-system   instance-manager-r-244c040c                    1/1     Running     0              114m
longhorn-system   instance-manager-r-39bd81b1                    1/1     Running     0              114m
longhorn-system   instance-manager-r-3b7f12b1                    1/1     Running     0              114m
longhorn-system   longhorn-admission-webhook-858d86b96b-j5rcv    1/1     Running     0              135m
longhorn-system   longhorn-admission-webhook-858d86b96b-lphkq    1/1     Running     0              135m
longhorn-system   longhorn-conversion-webhook-576b5c45c7-4p55x   1/1     Running     0              135m
longhorn-system   longhorn-conversion-webhook-576b5c45c7-lq686   1/1     Running     0              135m
longhorn-system   longhorn-csi-plugin-f7zmn                      2/2     Running     0              114m
longhorn-system   longhorn-csi-plugin-hs58p                      2/2     Running     0              114m
longhorn-system   longhorn-csi-plugin-wfpfs                      2/2     Running     0              114m
longhorn-system   longhorn-driver-deployer-96cf98c98-7hzft       1/1     Running     0              135m
longhorn-system   longhorn-manager-92xws                         1/1     Running     0              116m
longhorn-system   longhorn-manager-b6knm                         1/1     Running     0              116m
longhorn-system   longhorn-manager-tg2zc                         1/1     Running     0              116m
longhorn-system   longhorn-ui-86b56b95c8-ctbvf                   1/1     Running     0              135m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the Longhorn, Nginx ingress, and kube-system pods all running.&lt;/p&gt;

&lt;p&gt;We can also inspect the service of the nginx ingress controller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@k8s-ubuntu-0:~# kubectl get svc -n ingress-nginx
NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller                NodePort    10.111.203.177   &amp;lt;none&amp;gt;        80:30080/TCP,443:30443/TCP   136m
ingress-nginx-controller-admission      ClusterIP   10.105.11.11     &amp;lt;none&amp;gt;        443/TCP                      136m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we can see the ports the Nginx ingress controller listens on: in this case the HTTP port is 30080 and the HTTPS port is 30443. From an external machine we can test the ingress controller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lorenzo@mint-virtual:~$ curl -v http://192.168.25.110:30080
*   Trying 192.168.25.110:30080...
* TCP_NODELAY set
* Connected to 192.168.25.110 (192.168.25.110) port 30080 (#0)
&amp;gt; GET / HTTP/1.1
&amp;gt; Host: 192.168.25.110:30080
&amp;gt; User-Agent: curl/7.68.0
&amp;gt; Accept: */*
&amp;gt; 
* Mark bundle as not supporting multiuse
&amp;lt; HTTP/1.1 404 Not Found
&amp;lt; Date: Wed, 17 Aug 2022 12:26:17 GMT
&amp;lt; Content-Type: text/html
&amp;lt; Content-Length: 146
&amp;lt; Connection: keep-alive
&amp;lt; 
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;&amp;lt;title&amp;gt;404 Not Found&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;404 Not Found&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;nginx&amp;lt;/center&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
* Connection #0 to host 192.168.25.110 left intact
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
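The 404 response above is expected, since no Ingress resources have been created yet. As a sketch, a minimal Ingress for this controller might look like the following (the host and the `my-app` Service are hypothetical examples, not part of this setup):

```yaml
# Hypothetical Ingress routing testlb.domainexample.com to a Service
# named my-app on port 80 (the Service itself is not part of this post).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: testlb.domainexample.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```

Once an Ingress like this is applied, repeating the curl test with the matching `Host` header should hit the backend Service instead of returning 404.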



&lt;h2&gt;
  
  
  Congratulations
&lt;/h2&gt;

&lt;p&gt;You have successfully deployed a highly available Kubernetes cluster and are now ready to deploy your applications!&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploy Kubernetes (K8s) on Amazon AWS using mixed on-demand and spot instances</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Thu, 14 Apr 2022 09:00:52 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/deploy-kubernetes-k8s-on-amazon-aws-using-mixed-on-demand-and-spot-instances-5eia</link>
      <guid>https://dev.to/garutilorenzo/deploy-kubernetes-k8s-on-amazon-aws-using-mixed-on-demand-and-spot-instances-5eia</guid>
      <description>&lt;p&gt;Deploy in a few minutes an high available Kubernetes cluster on Amazon AWS using mixed on-demand and spot instances.&lt;/p&gt;

&lt;p&gt;Please &lt;strong&gt;note&lt;/strong&gt;: this is only an example of how to deploy a Kubernetes cluster. For a production environment you should use &lt;a href="https://aws.amazon.com/eks/"&gt;EKS&lt;/a&gt; or &lt;a href="https://aws.amazon.com/it/ecs/"&gt;ECS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The scope of this repo is to show all the AWS components needed to deploy a highly available K8s cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Requirements&lt;/li&gt;
&lt;li&gt;Infrastructure overview&lt;/li&gt;
&lt;li&gt;Before you start&lt;/li&gt;
&lt;li&gt;Project setup&lt;/li&gt;
&lt;li&gt;AWS provider setup&lt;/li&gt;
&lt;li&gt;Pre flight checklist&lt;/li&gt;
&lt;li&gt;Deploy&lt;/li&gt;
&lt;li&gt;Deploy a sample stack&lt;/li&gt;
&lt;li&gt;Clean up&lt;/li&gt;
&lt;li&gt;Todo&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; - Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/it/console/"&gt;Amazon AWS Account&lt;/a&gt; - Amazon AWS account with billing enabled&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/tasks/tools/"&gt;kubectl&lt;/a&gt; - The Kubernetes command-line tool (optional)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/cli/"&gt;aws cli&lt;/a&gt; optional&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You need also:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one VPC with private and public subnets&lt;/li&gt;
&lt;li&gt;one SSH key already uploaded to your AWS account&lt;/li&gt;
&lt;li&gt;one bastion host to reach all the private EC2 instances&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For VPC and bastion host you can refer to &lt;a href="https://github.com/garutilorenzo/aws-terraform-examples"&gt;this&lt;/a&gt; repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure overview
&lt;/h2&gt;

&lt;p&gt;The final infrastructure will consist of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;two autoscaling groups, one for the Kubernetes master nodes and one for the worker nodes&lt;/li&gt;
&lt;li&gt;two launch templates, used by the autoscaling groups&lt;/li&gt;
&lt;li&gt;one internal load balancer (L4) that will route traffic to the Kubernetes servers&lt;/li&gt;
&lt;li&gt;one external load balancer (L7) that will route traffic to the Kubernetes workers&lt;/li&gt;
&lt;li&gt;one security group that will allow traffic from the VPC subnet CIDR on all the K8s ports (kube API, Nginx ingress node ports, etc.)&lt;/li&gt;
&lt;li&gt;one security group that will allow traffic from the internet to the public load balancer (L7) on ports 80 and 443&lt;/li&gt;
&lt;li&gt;one S3 bucket, used to store the cluster join certificates&lt;/li&gt;
&lt;li&gt;one IAM role that allows all the EC2 instances in the cluster to write to the S3 bucket used to share the join certificates&lt;/li&gt;
&lt;li&gt;one self-signed certificate for the public LB, stored in AWS ACM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D0LeeN_M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/k8s-infra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D0LeeN_M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/k8s-infra.png" alt="k8s infra" width="880" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes setup
&lt;/h2&gt;

&lt;p&gt;The installation of K8s is done with &lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/"&gt;kubeadm&lt;/a&gt;. In this installation &lt;a href="https://containerd.io/"&gt;Containerd&lt;/a&gt; is used as the CRI and &lt;a href="https://github.com/flannel-io/flannel"&gt;flannel&lt;/a&gt; as the CNI.&lt;/p&gt;

&lt;p&gt;You can optionally install &lt;a href="https://kubernetes.github.io/ingress-nginx/"&gt;Nginx ingress controller&lt;/a&gt; and Longhorn.&lt;/p&gt;

&lt;p&gt;To install the Nginx ingress controller, set the variable &lt;em&gt;install_nginx_ingress&lt;/em&gt; to true (default: false). To install Longhorn, set the variable &lt;em&gt;install_longhorn&lt;/em&gt; to true (default: false). &lt;strong&gt;NOTE&lt;/strong&gt;: if you don't install the Nginx ingress, the public load balancer and the SSL certificate won't be deployed.&lt;/p&gt;

&lt;p&gt;This installation uses an S3 bucket to store the join certificates/token: at its first startup, each instance uses the bucket to share or retrieve the certificates/token needed to join the cluster.&lt;/p&gt;
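As a sketch of what the IAM role attached to the instances needs for this S3-based join flow, a minimal policy could look like the following (the bucket name is the module default; the exact set of actions is an assumption, not taken from the repo):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-very-secure-k8s-bucket",
        "arn:aws:s3:::my-very-secure-k8s-bucket/*"
      ]
    }
  ]
}
```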

&lt;h2&gt;
  
  
  Before you start
&lt;/h2&gt;

&lt;p&gt;Note that this tutorial uses AWS resources that are outside the AWS free tier, so be careful!&lt;/p&gt;

&lt;h2&gt;
  
  
  Project setup
&lt;/h2&gt;

&lt;p&gt;Clone &lt;a href="https://github.com/garutilorenzo/k8s-aws-terraform-cluster"&gt;this&lt;/a&gt; repo and go into the example/ directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/garutilorenzo/k8s-aws-terraform-cluster
cd k8s-aws-terraform-cluster/example/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now edit the main.tf file and create the terraform.tfvars file. For more details, see the AWS provider setup and Pre flight checklist sections.&lt;/p&gt;

&lt;p&gt;Or, if you prefer, you can create a new empty directory in your workspace and create these three files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;terraform.tfvars&lt;/li&gt;
&lt;li&gt;main.tf&lt;/li&gt;
&lt;li&gt;provider.tf&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The main.tf file will look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "AWS_ACCESS_KEY" {

}

variable "AWS_SECRET_KEY" {

}

variable "environment" {
  default = "staging"
}

variable "AWS_REGION" {
  default = "&amp;lt;YOUR_REGION&amp;gt;"
}

module "k8s-cluster" {
  ssk_key_pair_name      = "&amp;lt;SSH_KEY_NAME&amp;gt;"
  uuid                   = "&amp;lt;GENERATE_UUID&amp;gt;"
  environment            = var.environment
  vpc_id                 = "&amp;lt;VPC_ID&amp;gt;"
  vpc_private_subnets    = "&amp;lt;PRIVATE_SUBNET_LIST&amp;gt;"
  vpc_public_subnets     = "&amp;lt;PUBLIC_SUBNET_LIST&amp;gt;"
  vpc_subnet_cidr        = "&amp;lt;SUBNET_CIDR&amp;gt;"
  PATH_TO_PUBLIC_LB_CERT = "&amp;lt;PATH_TO_PUBLIC_LB_CERT&amp;gt;"
  PATH_TO_PUBLIC_LB_KEY  = "&amp;lt;PATH_TO_PUBLIC_LB_KEY&amp;gt;"
  install_nginx_ingress  = true
  source                 = "github.com/garutilorenzo/k8s-aws-terraform-cluster"
}

output "k8s_dns_name" {
  value = module.k8s-cluster.k8s_dns_name
}

output "k8s_server_private_ips" {
  value = module.k8s-cluster.k8s_server_private_ips
}

output "k8s_workers_private_ips" {
  value = module.k8s-cluster.k8s_workers_private_ips
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For all the available variables, see the Pre flight checklist section.&lt;/p&gt;

&lt;p&gt;The provider.tf will look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region     = var.AWS_REGION
  access_key = var.AWS_ACCESS_KEY
  secret_key = var.AWS_SECRET_KEY
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The terraform.tfvars will look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWS_ACCESS_KEY = "xxxxxxxxxxxxxxxxx"
AWS_SECRET_KEY = "xxxxxxxxxxxxxxxxx"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
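If you prefer to keep credentials out of files, Terraform also reads variables from the environment: any variable named FOO can be supplied as TF_VAR_FOO. For example (placeholder values shown):

```shell
# Equivalent to the terraform.tfvars above, but without writing
# secrets to disk: Terraform maps TF_VAR_FOO to variable "FOO".
export TF_VAR_AWS_ACCESS_KEY="xxxxxxxxxxxxxxxxx"
export TF_VAR_AWS_SECRET_KEY="xxxxxxxxxxxxxxxxx"
```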



&lt;p&gt;Now we can initialize Terraform with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

Initializing modules...
- k8s-cluster in ..

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/template...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing hashicorp/aws v4.9.0...
- Installed hashicorp/aws v4.9.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Generate a self-signed SSL certificate for the public LB (L7)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: if you already own a valid certificate, skip this step and set the correct values for the variables PATH_TO_PUBLIC_LB_CERT and PATH_TO_PUBLIC_LB_KEY.&lt;/p&gt;

&lt;p&gt;We need to generate a self-signed certificate for our public load balancer (Layer 7). To do this we need &lt;em&gt;openssl&lt;/em&gt;; open a terminal and follow these steps:&lt;/p&gt;

&lt;p&gt;Generate the key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa 2048 &amp;gt; privatekey.pem
Generating RSA private key, 2048 bit long modulus (2 primes)
.......+++++
...............+++++
e is 65537 (0x010001)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate a new certificate request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl req -new -key privatekey.pem -out csr.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:IT
State or Province Name (full name) [Some-State]:Italy
Locality Name (eg, city) []:Brescia
Organization Name (eg, company) [Internet Widgits Pty Ltd]:GL Ltd
Organizational Unit Name (eg, section) []:IT
Common Name (e.g. server FQDN or YOUR name) []:testlb.domainexample.com
Email Address []:email@you.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate the public CRT:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl x509 -req -days 365 -in csr.pem -signkey privatekey.pem -out public.crt
Signature ok
subject=C = IT, ST = Italy, L = Brescia, O = GL Ltd, OU = IT, CN = testlb.domainexample.com, emailAddress = email@you.com
Getting Private key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the final result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls

csr.pem  privatekey.pem  public.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now set the variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PATH_TO_PUBLIC_LB_CERT: ~/full_path/public.crt&lt;/li&gt;
&lt;li&gt;PATH_TO_PUBLIC_LB_KEY: ~/full_path/privatekey.pem&lt;/li&gt;
&lt;/ul&gt;
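Alternatively, the three openssl steps above can be collapsed into a single non-interactive command that generates the key and the self-signed certificate in one go (the subject values are the same example data used above; adjust them to your domain):

```shell
# Generate a 2048-bit RSA key (unencrypted, -nodes) and a self-signed
# certificate valid for 365 days, skipping the interactive prompts.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout privatekey.pem -out public.crt \
  -subj "/C=IT/ST=Italy/L=Brescia/O=GL Ltd/OU=IT/CN=testlb.domainexample.com"
```

The resulting privatekey.pem and public.crt can then be used for PATH_TO_PUBLIC_LB_KEY and PATH_TO_PUBLIC_LB_CERT.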

&lt;h2&gt;
  
  
  AWS provider setup
&lt;/h2&gt;

&lt;p&gt;Follow the prerequisite steps at &lt;a href="https://learn.hashicorp.com/tutorials/terraform/aws-build?in=terraform/aws-get-started"&gt;this&lt;/a&gt; link.&lt;br&gt;
In your workspace folder, or in the example/ directory of this repo, create a file named terraform.tfvars:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWS_ACCESS_KEY = "xxxxxxxxxxxxxxxxx"
AWS_SECRET_KEY = "xxxxxxxxxxxxxxxxx"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Pre flight checklist
&lt;/h2&gt;

&lt;p&gt;Once you have created the terraform.tfvars file, edit the main.tf file (again in the example/ directory) and set the following variables:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Var&lt;/th&gt;
&lt;th&gt;Required&lt;/th&gt;
&lt;th&gt;Desc&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;region&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;set the correct AWS region based on your needs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;environment&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Current work environment (Example: staging/dev/prod). This value is used to tag all the deployed resources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;uuid&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;UUID used to tag all resources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ssk_key_pair_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Name of the ssh key to use&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;vpc_id&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;ID of the VPC to use. You can find your vpc_id in your AWS console (Example: vpc-xxxxx)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;vpc_private_subnets&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List of private subnets to use. These subnets are used for the EC2 instances and the internal LB. You can find the list of your VPC subnets in your AWS console (Example: subnet-xxxxxx)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;vpc_public_subnets&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List of public subnets to use. These subnets are used for the public LB. You can find the list of your VPC subnets in your AWS console (Example: subnet-xxxxxx)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;vpc_subnet_cidr&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Your subnet CIDR. You can find the VPC subnet CIDR in your AWS console (Example: 172.31.0.0/16)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PATH_TO_PUBLIC_LB_CERT&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Path to the public LB certificate. See how to generate the certificate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PATH_TO_PUBLIC_LB_KEY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Path to the public LB key. See how to generate the key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ec2_associate_public_ip_address&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Whether to assign a public IP to the EC2 instances. Default: false&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;s3_bucket_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;S3 bucket name used to share the Kubernetes token used to join the cluster. Default: my-very-secure-k8s-bucket&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;instance_profile_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Instance profile name. Default: K8sInstanceProfile&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;iam_role_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;IAM role name. Default: K8sIamRole&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ami&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;AMI image ID. Default: ami-0a2616929f1e63d91 (Ubuntu 20.04)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;default_instance_type&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Default instance type used by the Launch template. Default: t3.large&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;instance_types&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Map of instance types used by the ASG. Default: { asg_instance_type_1 = "t3.large", asg_instance_type_3 = "m4.large", asg_instance_type_4 = "t3a.large" }&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_master_template_prefix&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Template prefix for the master instances. Default: k8s_master_tpl&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_worker_template_prefix&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Template prefix for the worker instances. Default: k8s_worker_tpl&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_version&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes version to install&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_pod_subnet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes pod subnet managed by the CNI (Flannel). Default: 10.244.0.0/16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_service_subnet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes service subnet. Default: 10.96.0.0/12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_dns_domain&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Internal kubernetes DNS domain. Default: cluster.local&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kube_api_port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes api port. Default: 6443&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_internal_lb_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Internal load balancer name. Default: k8s-server-tcp-lb&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_server_desired_capacity&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Desired number of k8s servers. Default: 3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_server_min_capacity&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Min number of k8s servers. Default: 4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_server_max_capacity&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Max number of k8s servers. Default: 3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_worker_desired_capacity&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Desired number of k8s workers. Default: 3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_worker_min_capacity&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Min number of k8s workers. Default: 4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_worker_max_capacity&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Max number of k8s workers. Default: 3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes cluster name. Default: k8s-cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;install_longhorn&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Whether to install Longhorn. Default: false&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;longhorn_release&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Longhorn release. Default: v1.2.3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;install_nginx_ingress&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Whether to install the NGINX ingress controller. Default: false&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k8s_ext_lb_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;External load balancer name. Default: k8s-ext-lb&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;extlb_listener_http_port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;HTTP nodeport where nginx ingress controller will listen. Default: 30080&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;extlb_listener_https_port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;HTTPS nodeport where nginx ingress controller will listen. Default 30443&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;extlb_http_port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;External LB HTTP listen port. Default: 80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;extlb_https_port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;External LB HTTPS listen port. Default 443&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Deploy
&lt;/h2&gt;

&lt;p&gt;We are now ready to deploy our infrastructure. First, we ask Terraform to plan the execution with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

...
...
      + name                   = "k8s-sg"
      + name_prefix            = (known after apply)
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = {
          + "Name"        = "sg-k8s-cluster-staging"
          + "environment" = "staging"
          + "provisioner" = "terraform"
          + "scope"       = "k8s-cluster"
          + "uuid"        = "xxxxx-xxxxx-xxxx-xxxxxx-xxxxxx"
        }
      + tags_all               = {
          + "Name"        = "sg-k8s-cluster-staging"
          + "environment" = "staging"
          + "provisioner" = "terraform"
          + "scope"       = "k8s-cluster"
          + "uuid"        = "xxxxx-xxxxx-xxxx-xxxxxx-xxxxxx"
        }
      + vpc_id                 = "vpc-xxxxxx"
    }

Plan: 25 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + k8s_dns_name            = (known after apply)
  + k8s_server_private_ips  = [
      + (known after apply),
    ]
  + k8s_workers_private_ips = [
      + (known after apply),
    ]

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
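&lt;p&gt;As the note at the end of the plan output suggests, you can save the plan to a file so that &lt;em&gt;apply&lt;/em&gt; executes exactly the actions you reviewed. A sketch of that workflow (the file name &lt;em&gt;tfplan&lt;/em&gt; is arbitrary):&lt;/p&gt;

```shell
# Save the reviewed plan to a local file ("tfplan" is an arbitrary name),
# then apply exactly that plan instead of re-planning at apply time.
terraform plan -out=tfplan
terraform apply tfplan
```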



&lt;p&gt;Now we can deploy our resources with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply

...

      + tags_all               = {
          + "Name"        = "sg-k8s-cluster-staging"
          + "environment" = "staging"
          + "provisioner" = "terraform"
          + "scope"       = "k8s-cluster"
          + "uuid"        = "xxxxx-xxxxx-xxxx-xxxxxx-xxxxxx"
        }
      + vpc_id                 = "vpc-xxxxxxxx"
    }

Plan: 25 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + k8s_dns_name            = (known after apply)
  + k8s_server_private_ips  = [
      + (known after apply),
    ]
  + k8s_workers_private_ips = [
      + (known after apply),
    ]

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

...
...

Apply complete! Resources: 25 added, 0 changed, 0 destroyed.

Outputs:

k8s_dns_name = "k8s-ext-&amp;lt;REDACTED&amp;gt;.elb.amazonaws.com"
k8s_server_private_ips = [
  tolist([
    "172.x.x.x",
    "172.x.x.x",
    "172.x.x.x",
  ]),
]
k8s_workers_private_ips = [
  tolist([
    "172.x.x.x",
    "172.x.x.x",
    "172.x.x.x",
  ]),
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now on one master node you can check the status of the cluster with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -j bastion@&amp;lt;BASTION_IP&amp;gt; ubuntu@172.x.x.x

Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.13.0-1021-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed Apr 13 12:41:52 UTC 2022

  System load:  0.52               Processes:             157
  Usage of /:   17.8% of 19.32GB   Users logged in:       0
  Memory usage: 11%                IPv4 address for cni0: 10.244.0.1
  Swap usage:   0%                 IPv4 address for ens3: 172.68.4.237


0 updates can be applied immediately.


Last login: Wed Apr 13 12:40:32 2022 from 172.68.0.6
ubuntu@i-04d089ed896cfafe1:~$ sudo su -

root@i-04d089ed896cfafe1:~# kubectl get nodes
NAME                  STATUS   ROLES                  AGE     VERSION
i-0033b408f7a1d55f3   Ready    control-plane,master   3m33s   v1.23.5
i-0121c2149821379cc   Ready    &amp;lt;none&amp;gt;                 4m16s   v1.23.5
i-04d089ed896cfafe1   Ready    control-plane,master   4m53s   v1.23.5
i-072bf7de2e94e6f2d   Ready    &amp;lt;none&amp;gt;                 4m15s   v1.23.5
i-09b23242f40eabcca   Ready    control-plane,master   3m56s   v1.23.5
i-0cb1e2e7784768b22   Ready    &amp;lt;none&amp;gt;                 3m57s   v1.23.5

root@i-04d089ed896cfafe1:~# kubectl get ns
NAME              STATUS   AGE
default           Active   5m18s
ingress-nginx     Active   111s # &amp;lt;- ingress controller ns
kube-node-lease   Active   5m19s
kube-public       Active   5m19s
kube-system       Active   5m19s
longhorn-system   Active   109s  # &amp;lt;- longhorn ns

root@i-04d089ed896cfafe1:~# kubectl get pods --all-namespaces
NAMESPACE         NAME                                          READY   STATUS      RESTARTS        AGE
ingress-nginx     ingress-nginx-admission-create-v2fpx          0/1     Completed   0               2m33s
ingress-nginx     ingress-nginx-admission-patch-54d9f           0/1     Completed   0               2m33s
ingress-nginx     ingress-nginx-controller-7fc8d55869-cxv87     1/1     Running     0               2m33s
kube-system       coredns-64897985d-8cg8g                       1/1     Running     0               5m46s
kube-system       coredns-64897985d-9v2r8                       1/1     Running     0               5m46s
kube-system       etcd-i-0033b408f7a1d55f3                      1/1     Running     0               4m33s
kube-system       etcd-i-04d089ed896cfafe1                      1/1     Running     0               5m42s
kube-system       etcd-i-09b23242f40eabcca                      1/1     Running     0               5m
kube-system       kube-apiserver-i-0033b408f7a1d55f3            1/1     Running     1 (4m30s ago)   4m30s
kube-system       kube-apiserver-i-04d089ed896cfafe1            1/1     Running     0               5m46s
kube-system       kube-apiserver-i-09b23242f40eabcca            1/1     Running     0               5m1s
kube-system       kube-controller-manager-i-0033b408f7a1d55f3   1/1     Running     0               4m36s
kube-system       kube-controller-manager-i-04d089ed896cfafe1   1/1     Running     1 (4m50s ago)   5m49s
kube-system       kube-controller-manager-i-09b23242f40eabcca   1/1     Running     0               5m1s
kube-system       kube-flannel-ds-7c65s                         1/1     Running     0               5m2s
kube-system       kube-flannel-ds-bb842                         1/1     Running     0               4m10s
kube-system       kube-flannel-ds-q27gs                         1/1     Running     0               5m21s
kube-system       kube-flannel-ds-sww7p                         1/1     Running     0               5m3s
kube-system       kube-flannel-ds-z8h5p                         1/1     Running     0               5m38s
kube-system       kube-flannel-ds-zrwdq                         1/1     Running     0               5m22s
kube-system       kube-proxy-6rbks                              1/1     Running     0               5m2s
kube-system       kube-proxy-9npgg                              1/1     Running     0               5m21s
kube-system       kube-proxy-px6br                              1/1     Running     0               5m3s
kube-system       kube-proxy-q9889                              1/1     Running     0               4m10s
kube-system       kube-proxy-s5qnv                              1/1     Running     0               5m22s
kube-system       kube-proxy-tng4x                              1/1     Running     0               5m46s
kube-system       kube-scheduler-i-0033b408f7a1d55f3            1/1     Running     0               4m27s
kube-system       kube-scheduler-i-04d089ed896cfafe1            1/1     Running     1 (4m50s ago)   5m58s
kube-system       kube-scheduler-i-09b23242f40eabcca            1/1     Running     0               5m1s
longhorn-system   csi-attacher-6454556647-767p2                 1/1     Running     0               115s
longhorn-system   csi-attacher-6454556647-hz8lj                 1/1     Running     0               115s
longhorn-system   csi-attacher-6454556647-z5ftg                 1/1     Running     0               115s
longhorn-system   csi-provisioner-869bdc4b79-2v4wx              1/1     Running     0               115s
longhorn-system   csi-provisioner-869bdc4b79-4xcv4              1/1     Running     0               114s
longhorn-system   csi-provisioner-869bdc4b79-9q95d              1/1     Running     0               114s
longhorn-system   csi-resizer-6d8cf5f99f-dwdrq                  1/1     Running     0               114s
longhorn-system   csi-resizer-6d8cf5f99f-klvcr                  1/1     Running     0               114s
longhorn-system   csi-resizer-6d8cf5f99f-ptpzb                  1/1     Running     0               114s
longhorn-system   csi-snapshotter-588457fcdf-dlkdq              1/1     Running     0               113s
longhorn-system   csi-snapshotter-588457fcdf-p2c7c              1/1     Running     0               113s
longhorn-system   csi-snapshotter-588457fcdf-p5smn              1/1     Running     0               113s
longhorn-system   engine-image-ei-fa2dfbf0-bkwhx                1/1     Running     0               2m7s
longhorn-system   engine-image-ei-fa2dfbf0-cqq9n                1/1     Running     0               2m8s
longhorn-system   engine-image-ei-fa2dfbf0-lhjjc                1/1     Running     0               2m7s
longhorn-system   instance-manager-e-542b1382                   1/1     Running     0               119s
longhorn-system   instance-manager-e-a5e124bb                   1/1     Running     0               2m4s
longhorn-system   instance-manager-e-acb2a517                   1/1     Running     0               2m7s
longhorn-system   instance-manager-r-11ab6af6                   1/1     Running     0               119s
longhorn-system   instance-manager-r-5b82fba2                   1/1     Running     0               2m4s
longhorn-system   instance-manager-r-c2561fa0                   1/1     Running     0               2m6s
longhorn-system   longhorn-csi-plugin-4br28                     2/2     Running     0               113s
longhorn-system   longhorn-csi-plugin-8gdxf                     2/2     Running     0               113s
longhorn-system   longhorn-csi-plugin-wc6tt                     2/2     Running     0               113s
longhorn-system   longhorn-driver-deployer-7dddcdd5bb-zjh4k     1/1     Running     0               2m31s
longhorn-system   longhorn-manager-cbsh7                        1/1     Running     0               2m31s
longhorn-system   longhorn-manager-d2t75                        1/1     Running     1 (2m9s ago)    2m31s
longhorn-system   longhorn-manager-xqlfv                        1/1     Running     1 (2m9s ago)    2m31s
longhorn-system   longhorn-ui-7648d6cd69-tc6b9                  1/1     Running     0               2m31s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
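&lt;p&gt;The &lt;em&gt;-J&lt;/em&gt; (ProxyJump) hop through the bastion can also be made persistent in your SSH client configuration. A sketch of &lt;em&gt;~/.ssh/config&lt;/em&gt; with hypothetical host aliases (BASTION_IP is a placeholder for the real bastion address):&lt;/p&gt;

```shell
# ~/.ssh/config -- hypothetical aliases; BASTION_IP is a placeholder
Host k8s-bastion
    HostName BASTION_IP
    User bastion

# Reach the private 172.x nodes through the bastion with ProxyJump
Host 172.*
    User ubuntu
    ProxyJump k8s-bastion
```

&lt;p&gt;With this in place, &lt;em&gt;ssh 172.x.x.x&lt;/em&gt; connects directly through the bastion.&lt;/p&gt;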



&lt;h3&gt;
  
  
  Public LB check
&lt;/h3&gt;

&lt;p&gt;We can now test the public load balancer, nginx ingress controller and the security group ingress rules. On your local PC run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -k -v https://k8s-ext-&amp;lt;REDACTED&amp;gt;.elb.amazonaws.com/
*   Trying 34.x.x.x:443...
* TCP_NODELAY set
* Connected to k8s-ext-&amp;lt;REDACTED&amp;gt;.elb.amazonaws.com (34.x.x.x) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=IT; ST=Italy; L=Brescia; O=GL Ltd; OU=IT; CN=testlb.domainexample.com; emailAddress=email@you.com
*  start date: Apr 11 08:20:12 2022 GMT
*  expire date: Apr 11 08:20:12 2023 GMT
*  issuer: C=IT; ST=Italy; L=Brescia; O=GL Ltd; OU=IT; CN=testlb.domainexample.com; emailAddress=email@you.com
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55c6560cde10)
&amp;gt; GET / HTTP/2
&amp;gt; Host: k8s-ext-&amp;lt;REDACTED&amp;gt;.elb.amazonaws.com
&amp;gt; user-agent: curl/7.68.0
&amp;gt; accept: */*
&amp;gt; 
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
&amp;lt; HTTP/2 404 
&amp;lt; date: Tue, 12 Apr 2022 10:08:18 GMT
&amp;lt; content-type: text/html
&amp;lt; content-length: 146
&amp;lt; strict-transport-security: max-age=15724800; includeSubDomains
&amp;lt; 
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;&amp;lt;title&amp;gt;404 Not Found&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;404 Not Found&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;nginx&amp;lt;/center&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
* Connection #0 to host k8s-ext-&amp;lt;REDACTED&amp;gt;.elb.amazonaws.com left intact
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;404&lt;/em&gt; is the expected response, since no Ingress resources are deployed yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy a sample stack
&lt;/h2&gt;

&lt;p&gt;We use the same stack used in &lt;a href="https://github.com/garutilorenzo/k3s-oci-cluster"&gt;this&lt;/a&gt; repository.&lt;br&gt;
This stack &lt;strong&gt;needs&lt;/strong&gt; Longhorn and the NGINX ingress controller.&lt;/p&gt;

&lt;p&gt;To test all the components of the cluster we can deploy a sample stack. The stack is composed of the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MariaDB&lt;/li&gt;
&lt;li&gt;Nginx&lt;/li&gt;
&lt;li&gt;WordPress&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each component consists of one Deployment and one Service.&lt;br&gt;
WordPress and nginx share the same persistent volume (ReadWriteMany, with the Longhorn storage class). The nginx configuration is stored in four ConfigMaps, and the nginx service is exposed by the nginx ingress controller.&lt;/p&gt;

&lt;p&gt;Deploy the resources with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/mariadb/all-resources.yml
kubectl apply -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/nginx/all-resources.yml
kubectl apply -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/wordpress/all-resources.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; to install WordPress and reach the &lt;em&gt;wp-admin&lt;/em&gt; path you have to edit the nginx deployment and change this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt; &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SECURE_SUBNET&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8.8.8.8/32&lt;/span&gt; &lt;span class="c1"&gt;# change-me&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and set your public IP address (in CIDR notation).&lt;/p&gt;
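&lt;p&gt;Instead of editing the manifest by hand, the environment variable can also be changed in place with &lt;em&gt;kubectl set env&lt;/em&gt;. A sketch, assuming the Deployment is named &lt;em&gt;nginx&lt;/em&gt; (as in the sample stack) and using a placeholder address:&lt;/p&gt;

```shell
# Patch the SECURE_SUBNET env var on the running Deployment.
# Replace 1.2.3.4/32 with your own public IP in CIDR notation.
kubectl set env deployment/nginx SECURE_SUBNET=1.2.3.4/32
```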

&lt;p&gt;To check the status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@i-04d089ed896cfafe1:~# kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP            NODE                  NOMINATED NODE   READINESS GATES
mariadb-6cbf998bd6-s98nh     1/1     Running   0          2m21s   10.244.2.13   i-072bf7de2e94e6f2d   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nginx-68b4dfbcb6-s6zfh       1/1     Running   0          19s     10.244.1.12   i-0121c2149821379cc   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
wordpress-558948b576-jgvm2   1/1     Running   0          71s     10.244.3.14   i-0cb1e2e7784768b22   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

root@i-04d089ed896cfafe1:~# kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
mariadb     1/1     1            1           2m32s
nginx       1/1     1            1           30s
wordpress   1/1     1            1           82s

root@i-04d089ed896cfafe1:~# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.96.0.1       &amp;lt;none&amp;gt;        443/TCP    14m
mariadb-svc     ClusterIP   10.108.78.60    &amp;lt;none&amp;gt;        3306/TCP   2m43s
nginx-svc       ClusterIP   10.103.145.57   &amp;lt;none&amp;gt;        80/TCP     41s
wordpress-svc   ClusterIP   10.103.49.246   &amp;lt;none&amp;gt;        9000/TCP   93s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you are ready to set up WordPress: open the LB public IP and follow the wizard. &lt;strong&gt;NOTE&lt;/strong&gt; nginx and the Kubernetes Ingress rule are configured without a virtual host/server name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lf9ZpoRu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/k8s-wp.png%3F" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lf9ZpoRu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/k8s-wp.png%3F" alt="k8s wp install" width="469" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To clean the deployed resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/mariadb/all-resources.yml
kubectl delete -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/nginx/all-resources.yml
kubectl delete -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/wordpress/all-resources.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Clean up
&lt;/h2&gt;

&lt;p&gt;Before destroying the infrastructure, &lt;strong&gt;DELETE&lt;/strong&gt; all the objects in the S3 bucket.&lt;br&gt;
&lt;/p&gt;
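&lt;p&gt;With the AWS CLI the bucket can be emptied in one command. A sketch, where YOUR_BUCKET_NAME is a placeholder for the bucket created for this deployment:&lt;/p&gt;

```shell
# Empty the bucket before terraform destroy.
# YOUR_BUCKET_NAME is a placeholder: substitute your actual bucket name.
aws s3 rm "s3://YOUR_BUCKET_NAME" --recursive
```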

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>aws</category>
    </item>
    <item>
      <title>Deploy a Kubernetes cluster for free, using K3s and Oracle always free resources</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Tue, 22 Feb 2022 10:54:42 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/deploy-a-kubernetes-cluster-for-free-using-k3s-and-oracle-always-free-resources-5fcm</link>
      <guid>https://dev.to/garutilorenzo/deploy-a-kubernetes-cluster-for-free-using-k3s-and-oracle-always-free-resources-5fcm</guid>
      <description>&lt;p&gt;Deploy a Kubernetes cluster for free, using K3s and Oracle &lt;a href="https://docs.oracle.com/en-us/iaas/Content/FreeTier/freetier_topic-Always_Free_Resources.htm" rel="noopener noreferrer"&gt;always free&lt;/a&gt; resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Important notes&lt;/li&gt;
&lt;li&gt;Requirements&lt;/li&gt;
&lt;li&gt;Example RSA key generation&lt;/li&gt;
&lt;li&gt;Project setup&lt;/li&gt;
&lt;li&gt;Oracle provider setup&lt;/li&gt;
&lt;li&gt;Pre flight checklist&lt;/li&gt;
&lt;li&gt;Notes about OCI always free resources&lt;/li&gt;
&lt;li&gt;Notes about K3s&lt;/li&gt;
&lt;li&gt;Infrastructure overview&lt;/li&gt;
&lt;li&gt;Cluster resource deployed&lt;/li&gt;
&lt;li&gt;Deploy&lt;/li&gt;
&lt;li&gt;Deploy a sample stack&lt;/li&gt;
&lt;li&gt;Clean up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; choose a region with enough ARM capacity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Important notes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;This tutorial only shows how to use Terraform with Oracle Cloud Infrastructure, and it uses only the &lt;strong&gt;always free&lt;/strong&gt; resources. These examples are &lt;strong&gt;not&lt;/strong&gt; for a production environment.&lt;/li&gt;
&lt;li&gt;At the end of your trial period (30 days), all the paid resources deployed will be stopped/terminated&lt;/li&gt;
&lt;li&gt;At the end of your trial period (30 days), if you have a running compute instance it will be stopped/hibernated&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;To use this tutorial you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an Oracle Cloud account. You can register &lt;a href="https://cloud.oracle.com" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you get the account, follow the &lt;em&gt;Before you begin&lt;/em&gt; and &lt;em&gt;1. Prepare&lt;/em&gt; step in &lt;a href="https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/tf-provider/01-summary.htm" rel="noopener noreferrer"&gt;this&lt;/a&gt; document.&lt;/p&gt;

&lt;p&gt;You also need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; - Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt; - The Kubernetes command-line tool (optional)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.oracle.com/en-us/iaas/Content/API/Concepts/cliconcepts.htm" rel="noopener noreferrer"&gt;oci cli&lt;/a&gt; - Oracle command line interface (optional)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example RSA key generation
&lt;/h4&gt;

&lt;p&gt;To use Terraform with Oracle Cloud Infrastructure you need to generate an RSA key. Generate the RSA key pair with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa -out ~/.oci/&amp;lt;your_name&amp;gt;-oracle-cloud.pem 4096
chmod 600 ~/.oci/&amp;lt;your_name&amp;gt;-oracle-cloud.pem
openssl rsa -pubout -in ~/.oci/&amp;lt;your_name&amp;gt;-oracle-cloud.pem -out ~/.oci/&amp;lt;your_name&amp;gt;-oracle-cloud_public.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;replace &lt;em&gt;&amp;lt;your_name&amp;gt;&lt;/em&gt; with your name or a string you prefer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; the string &lt;em&gt;~/.oci/&amp;lt;your_name&amp;gt;-oracle-cloud_public.pem&lt;/em&gt; will be used in the &lt;em&gt;terraform.tfvars&lt;/em&gt; file read by the Oracle provider plugin, so please take note of it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project setup
&lt;/h3&gt;

&lt;p&gt;You can clone &lt;a href="https://github.com/garutilorenzo/k3s-oci-cluster" rel="noopener noreferrer"&gt;this&lt;/a&gt; repository and work in the &lt;em&gt;example/&lt;/em&gt; directory. You have to edit the &lt;em&gt;main.tf&lt;/em&gt; file and create the &lt;em&gt;terraform.tfvars&lt;/em&gt; file. For more details see Oracle provider setup and Pre flight checklist.&lt;/p&gt;

&lt;p&gt;Or, if you prefer, you can create a new empty directory in your workspace and create these three files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;terraform.tfvars - More details in Oracle provider setup
&lt;/li&gt;
&lt;li&gt;main.tf&lt;/li&gt;
&lt;li&gt;provider.tf&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The main.tf file will look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "compartment_ocid" {

}

variable "tenancy_ocid" {

}

variable "user_ocid" {

}

variable "fingerprint" {

}

variable "private_key_path" {

}

variable "region" {
  default = "&amp;lt;change_me&amp;gt;"
}

module "k3s_cluster" {
  region              = var.region
  availability_domain = "&amp;lt;change_me&amp;gt;"
  compartment_ocid    = var.compartment_ocid
  my_public_ip_cidr   = "&amp;lt;change_me&amp;gt;"
  cluster_name        = "&amp;lt;change_me&amp;gt;"
  environment         = "staging"
  k3s_token           = "&amp;lt;change_me&amp;gt;"
  source              = "github.com/garutilorenzo/k3s-oci-cluster"
}

output "k3s_servers_ips" {
  value = module.k3s_cluster.k3s_servers_ips
}

output "k3s_workers_ips" {
  value = module.k3s_cluster.k3s_workers_ips
}

output "public_lb_ip" {
  value = module.k3s_cluster.public_lb_ip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For all the possible variables see the Pre flight checklist.&lt;/p&gt;

&lt;p&gt;The provider.tf will look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  private_key_path = var.private_key_path
  fingerprint      = var.fingerprint
  region           = var.region
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can initialize Terraform with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

Initializing modules...
Downloading git::https://github.com/garutilorenzo/k3s-oci-cluster.git for k3s_cluster...
- k3s_cluster in .terraform/modules/k3s_cluster

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/oci from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed hashicorp/oci v4.64.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Generate a self-signed SSL certificate for the public LB (L7)
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; If you already own a valid certificate, skip this step and set the correct values for the variables PATH_TO_PUBLIC_LB_CERT and PATH_TO_PUBLIC_LB_KEY.&lt;/p&gt;

&lt;p&gt;We need to generate the certificates (self-signed) for our public load balancer (Layer 7). To do this we need &lt;em&gt;openssl&lt;/em&gt;; open a terminal and follow these steps:&lt;/p&gt;

&lt;p&gt;Generate the key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa 2048 &amp;gt; privatekey.pem
Generating RSA private key, 2048 bit long modulus (2 primes)
.......+++++
...............+++++
e is 65537 (0x010001)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate a new certificate signing request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl req -new -key privatekey.pem -out csr.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:IT
State or Province Name (full name) [Some-State]:Italy
Locality Name (eg, city) []:Brescia
Organization Name (eg, company) [Internet Widgits Pty Ltd]:GL Ltd
Organizational Unit Name (eg, section) []:IT
Common Name (e.g. server FQDN or YOUR name) []:testlb.domainexample.com
Email Address []:email@you.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate the public CRT:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl x509 -req -days 365 -in csr.pem -signkey privatekey.pem -out public.crt
Signature ok
subject=C = IT, ST = Italy, L = Brescia, O = GL Ltd, OU = IT, CN = testlb.domainexample.com, emailAddress = email@you.com
Getting Private key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the final result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls

csr.pem  privatekey.pem  public.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now set the variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PATH_TO_PUBLIC_LB_CERT: ~/full_path/public.crt&lt;/li&gt;
&lt;li&gt;PATH_TO_PUBLIC_LB_KEY: ~/full_path/privatekey.pem&lt;/li&gt;
&lt;/ul&gt;
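&lt;p&gt;The three openssl commands above can also be collapsed into a single non-interactive step. This is an equivalent sketch using the same example subject values as above; substitute your own:&lt;/p&gt;

```shell
# One-step alternative: generate the private key and the self-signed
# certificate together. The -subj values are the example values used
# in the interactive walkthrough above — replace them with your own.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout privatekey.pem -out public.crt \
  -subj "/C=IT/ST=Italy/L=Brescia/O=GL Ltd/OU=IT/CN=testlb.domainexample.com"
```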

&lt;h3&gt;
  
  
  Oracle provider setup
&lt;/h3&gt;

&lt;p&gt;This is an example of the &lt;em&gt;terraform.tfvars&lt;/em&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fingerprint      = "&amp;lt;rsa_key_fingerprint&amp;gt;"
private_key_path = "~/.oci/&amp;lt;your_name&amp;gt;-oracle-cloud_public.pem"
user_ocid        = "&amp;lt;user_ocid&amp;gt;"
tenancy_ocid     = "&amp;lt;tenancy_ocid&amp;gt;"
compartment_ocid = "&amp;lt;compartment_ocid&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To find your tenancy_ocid in the Oracle Cloud console go to: Governance and Administration &amp;gt; Tenancy details, then copy the OCID.&lt;/p&gt;

&lt;p&gt;To find your user_ocid in the Oracle Cloud console go to User settings (click on the icon in the top right corner, then click on User settings), click your username and then copy the OCID.&lt;/p&gt;

&lt;p&gt;The compartment_ocid is the same as the tenancy_ocid.&lt;/p&gt;

&lt;p&gt;The fingerprint is the fingerprint of your RSA API key; you can find this value under User settings &amp;gt; API Keys.&lt;/p&gt;
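&lt;p&gt;You can also compute the fingerprint locally: for OCI it is the colon-separated MD5 digest of the DER-encoded public key. A sketch (the demo key is generated on the spot purely for illustration; point openssl at your real API signing key under ~/.oci/ instead):&lt;/p&gt;

```shell
# OCI API key fingerprint = colon-separated MD5 of the DER public key.
# demo_api_key.pem is created here only so the example is runnable;
# in practice use your real key from ~/.oci/.
openssl genrsa -out demo_api_key.pem 2048
openssl rsa -pubout -outform DER -in demo_api_key.pem | openssl md5 -c
```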

&lt;h3&gt;
  
  
  Pre-flight checklist
&lt;/h3&gt;

&lt;p&gt;Once you have created the terraform.tfvars file, edit the main.tf file (also in the &lt;em&gt;example/&lt;/em&gt; directory) and set the following variables:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Var&lt;/th&gt;
&lt;th&gt;Required&lt;/th&gt;
&lt;th&gt;Desc&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;region&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;set the correct OCI region based on your needs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;availability_domain&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Set the correct availability domain. See how to find the availability domain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;compartment_ocid&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Set the correct compartment ocid. See how to find the compartment ocid&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;the name of your K3s cluster. Default: k3s-cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k3s_token&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The token of your K3s cluster. How to generate a random token&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;my_public_ip_cidr&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;your public ip in cidr format (Example: 195.102.xxx.xxx/32)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;environment&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Current work environment (Example: staging/dev/prod). This value is used to tag all the deployed resources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PATH_TO_PUBLIC_LB_CERT&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Path to the public LB certificate. See how to generate the certificate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PATH_TO_PUBLIC_LB_KEY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Path to the public LB key. See how to generate the key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;compute_shape&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Compute shape to use. Default: VM.Standard.A1.Flex. &lt;strong&gt;NOTE&lt;/strong&gt; It is mandatory to use this compute shape to provision 4 always-free VMs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;os_image_id&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Image id to use. Default image: Canonical-Ubuntu-20.04-aarch64-2022.01.18-0. See how to list all available OS images&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_core_vcn_dns_label&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;VCN DNS label. Default: defaultvcn&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_core_subnet_dns_label10&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;First subnet DNS label. Default: defaultsubnet10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_core_subnet_dns_label11&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Second subnet DNS label. Default: defaultsubnet11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_core_vcn_cidr&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;VCN CIDR&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_core_subnet_cidr10&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;First subnet CIDR. Default: 10.0.0.0/24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_core_subnet_cidr11&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Second subnet CIDR. Default: 10.0.1.0/24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_identity_dynamic_group_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Dynamic group name. This dynamic group will contain all the instances of this specific compartment. Default: Compute_Dynamic_Group&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_identity_policy_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Policy name. This policy will allow the dynamic group 'oci_identity_dynamic_group_name' to read the OCI API without auth. Default: Compute_To_Oci_Api_Policy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k3s_load_balancer_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Internal LB name. Default: k3s internal load balancer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;public_load_balancer_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Public LB name. Default: K3s public LB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kube_api_port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Kube API port. Default: 6443&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;public_lb_shape&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;LB shape for the public LB. Default: flexible. &lt;strong&gt;NOTE&lt;/strong&gt; It is mandatory to use this shape to provision two always-free LBs (public and private)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;http_lb_port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;http port used by the public LB. Default: 80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;https_lb_port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;https port used by the public LB. Default: 443&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k3s_server_pool_size&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Number of k3s servers deployed. Default 2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k3s_worker_pool_size&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Number of k3s workers deployed. Default 2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;install_nginx_ingress&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Boolean value, install the Kubernetes Nginx ingress controller instead of Traefik. Default: true. For more information see the Nginx ingress controller section
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;install_longhorn&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Boolean value, install longhorn "Cloud native distributed block storage for Kubernetes". Default: true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;longhorn_release&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Longhorn release. Default: v1.2.3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;unique_tag_key&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Unique tag name used for tagging all the deployed resources. Default: k3s-provisioner&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;unique_tag_value&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Unique value used with  unique_tag_key. Default: &lt;a href="https://github.com/garutilorenzo/k3s-oci-cluster" rel="noopener noreferrer"&gt;https://github.com/garutilorenzo/k3s-oci-cluster&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PATH_TO_PUBLIC_KEY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Path to your public ssh key. Default: ~/.ssh/id_rsa.pub&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PATH_TO_PRIVATE_KEY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Path to your private ssh key. Default: ~/.ssh/id_rsa&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
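&lt;p&gt;As a sketch, the required values from the table above might look like this once set (all values are illustrative placeholders; adapt them to your tenancy):&lt;/p&gt;

```hcl
# Illustrative placeholders only - adapt to your tenancy and region.
region              = "eu-zurich-1"
availability_domain = "iAdc:EU-ZURICH-1-AD-1"
cluster_name        = "k3s-cluster"
k3s_token           = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
my_public_ip_cidr   = "195.102.xxx.xxx/32"
environment         = "staging"
```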

&lt;h4&gt;
  
  
  Generate random token
&lt;/h4&gt;

&lt;p&gt;Generate random k3s token with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 55 | head -n 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
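&lt;p&gt;An equivalent one-liner with openssl; any sufficiently long random alphanumeric string works as the token:&lt;/p&gt;

```shell
# Same idea with openssl: keep 55 random alphanumeric characters.
K3S_TOKEN=$(openssl rand -base64 64 | tr -dc 'a-zA-Z0-9' | head -c 55)
echo "$K3S_TOKEN"
```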



&lt;h4&gt;
  
  
  How to find the availability domain name
&lt;/h4&gt;

&lt;p&gt;To find the list of the availability domains run this command in the Cloud Shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oci iam availability-domain list
{
  "data": [
    {
      "compartment-id": "&amp;lt;compartment_ocid&amp;gt;",
      "id": "ocid1.availabilitydomain.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "name": "iAdc:EU-ZURICH-1-AD-1"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
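&lt;p&gt;If you only need the names, you can filter the JSON on the command line. The sample document is inlined below for illustration; in practice pipe the oci command output into the filter, or use the CLI's own --query option with a JMESPath expression such as 'data[].name':&lt;/p&gt;

```shell
# Extract just the "name" fields from the availability-domain JSON.
# The sample JSON is inlined here so the example is self-contained;
# normally you would pipe `oci iam availability-domain list` in instead.
cat <<'EOF' | grep '"name"' | cut -d'"' -f4
{
  "data": [
    {
      "compartment-id": "ocid1.compartment.oc1..example",
      "id": "ocid1.availabilitydomain.oc1..example",
      "name": "iAdc:EU-ZURICH-1-AD-1"
    }
  ]
}
EOF
```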



&lt;h4&gt;
  
  
  How to list all the OS images
&lt;/h4&gt;

&lt;p&gt;To filter the OS images by shape and OS run this command in the Cloud Shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oci compute image list --compartment-id &amp;lt;compartment_ocid&amp;gt; --operating-system "Canonical Ubuntu" --shape "VM.Standard.A1.Flex"
{
  "data": [
    {
      "agent-features": null,
      "base-image-id": null,
      "billable-size-in-gbs": 2,
      "compartment-id": null,
      "create-image-allowed": true,
      "defined-tags": {},
      "display-name": "Canonical-Ubuntu-20.04-aarch64-2022.01.18-0",
      "freeform-tags": {},
      "id": "ocid1.image.oc1.eu-zurich-1.aaaaaaaag2uyozo7266bmg26j5ixvi42jhaujso2pddpsigtib6vfnqy5f6q",
      "launch-mode": "NATIVE",
      "launch-options": {
        "boot-volume-type": "PARAVIRTUALIZED",
        "firmware": "UEFI_64",
        "is-consistent-volume-naming-enabled": true,
        "is-pv-encryption-in-transit-enabled": true,
        "network-type": "PARAVIRTUALIZED",
        "remote-data-volume-type": "PARAVIRTUALIZED"
      },
      "lifecycle-state": "AVAILABLE",
      "listing-type": null,
      "operating-system": "Canonical Ubuntu",
      "operating-system-version": "20.04",
      "size-in-mbs": 47694,
      "time-created": "2022-01-27T22:53:34.270000+00:00"
    },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; this setup was only tested with Ubuntu 20.04&lt;/p&gt;

&lt;h2&gt;
  
  
  Notes about OCI always free resources
&lt;/h2&gt;

&lt;p&gt;In order to get the maximum resources available within the Oracle always-free tier, the number of K3s servers and K3s workers must not exceed 2 each. So the max value for &lt;em&gt;k3s_server_pool_size&lt;/em&gt; and &lt;em&gt;k3s_worker_pool_size&lt;/em&gt; &lt;strong&gt;is&lt;/strong&gt; 2.&lt;/p&gt;

&lt;p&gt;In this setup we use two LBs: one internal LB and one public LB (Layer 7). In order to use two LBs within the always-free resources, one LB must be a &lt;a href="https://docs.oracle.com/en-us/iaas/Content/NetworkLoadBalancer/introducton.htm#Overview" rel="noopener noreferrer"&gt;network load balancer&lt;/a&gt; and the other must be a &lt;a href="https://docs.oracle.com/en-us/iaas/Content/Balance/Concepts/balanceoverview.htm" rel="noopener noreferrer"&gt;load balancer&lt;/a&gt;. The public LB &lt;strong&gt;must&lt;/strong&gt; use the &lt;em&gt;flexible&lt;/em&gt; shape (&lt;em&gt;public_lb_shape&lt;/em&gt; variable).&lt;/p&gt;

&lt;h2&gt;
  
  
  Notes about K3s
&lt;/h2&gt;

&lt;p&gt;In this environment the High Availability of the K3s cluster is provided using the Embedded DB. More details &lt;a href="https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The default installation of K3s installs &lt;a href="https://traefik.io/" rel="noopener noreferrer"&gt;Traefik&lt;/a&gt; as the ingress controller. In this environment Traefik is replaced by the &lt;a href="https://kubernetes.github.io/ingress-nginx/" rel="noopener noreferrer"&gt;Nginx ingress controller&lt;/a&gt;. To install Traefik as the ingress controller set the variable &lt;em&gt;install_nginx_ingress&lt;/em&gt; to &lt;em&gt;false&lt;/em&gt;.&lt;br&gt;
For more details on Nginx ingress controller see the Nginx ingress controller section.&lt;/p&gt;
&lt;h2&gt;
  
  
  Infrastructure overview
&lt;/h2&gt;

&lt;p&gt;The final infrastructure will be made of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;two instance pools:

&lt;ul&gt;
&lt;li&gt;one instance pool for the server nodes named "k3s-servers"&lt;/li&gt;
&lt;li&gt;one instance pool for the worker nodes named "k3s-workers"&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;one internal load balancer that will route traffic to K3s servers&lt;/li&gt;
&lt;li&gt;one external load balancer that will route traffic to K3s workers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The other resources created by terraform are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;two instance configurations (one for the servers and one for the workers) used by the instance pools&lt;/li&gt;
&lt;li&gt;one vcn&lt;/li&gt;
&lt;li&gt;two public subnets&lt;/li&gt;
&lt;li&gt;two security lists&lt;/li&gt;
&lt;li&gt;one dynamic group&lt;/li&gt;
&lt;li&gt;one identity policy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgarutilorenzo.github.io%2Fimages%2Fk3s-oci-always-free.drawio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgarutilorenzo.github.io%2Fimages%2Fk3s-oci-always-free.drawio.png" alt="k3s-infra"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Cluster resources deployed
&lt;/h2&gt;

&lt;p&gt;This setup will automatically install &lt;a href="https://longhorn.io/" rel="noopener noreferrer"&gt;Longhorn&lt;/a&gt;. Longhorn is a &lt;em&gt;cloud native distributed block storage for Kubernetes&lt;/em&gt;. To disable the Longhorn deployment set the &lt;em&gt;install_longhorn&lt;/em&gt; variable to &lt;em&gt;false&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Nginx ingress controller
&lt;/h3&gt;

&lt;p&gt;In this environment &lt;a href="https://kubernetes.github.io/ingress-nginx/" rel="noopener noreferrer"&gt;Nginx ingress controller&lt;/a&gt; is used instead of the standard &lt;a href="https://traefik.io/" rel="noopener noreferrer"&gt;Traefik&lt;/a&gt; ingress controller.&lt;/p&gt;

&lt;p&gt;The installation is the &lt;a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters" rel="noopener noreferrer"&gt;bare metal&lt;/a&gt; installation; the ingress controller is then exposed via a LoadBalancer service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx-controller-loadbalancer&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;controller&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To properly configure all the forwarded HTTP headers (L7 headers), these parameters are added to the ConfigMap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;allow-snippet-annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
  &lt;span class="na"&gt;use-forwarded-headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
  &lt;span class="na"&gt;compute-full-forwarded-for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
  &lt;span class="na"&gt;enable-real-ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
  &lt;span class="na"&gt;forwarded-for-header&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-Forwarded-For"&lt;/span&gt;
  &lt;span class="na"&gt;proxy-real-ip-cidr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0/0"&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;controller&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/managed-by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Helm&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/part-of&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.1.1&lt;/span&gt;
    &lt;span class="na"&gt;helm.sh/chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx-4.0.16&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx-controller&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy
&lt;/h2&gt;

&lt;p&gt;We are now ready to deploy our infrastructure. Run &lt;em&gt;terraform init&lt;/em&gt; in the &lt;em&gt;example/&lt;/em&gt; directory to initialize the providers, then ask Terraform to plan the execution with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

...
...
      + id                             = (known after apply)
      + ip_addresses                   = (known after apply)
      + is_preserve_source_destination = false
      + is_private                     = true
      + lifecycle_details              = (known after apply)
      + nlb_ip_version                 = (known after apply)
      + state                          = (known after apply)
      + subnet_id                      = (known after apply)
      + system_tags                    = (known after apply)
      + time_created                   = (known after apply)
      + time_updated                   = (known after apply)

      + reserved_ips {
          + id = (known after apply)
        }
    }

Plan: 27 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + k3s_servers_ips = [
      + (known after apply),
      + (known after apply),
    ]
  + k3s_workers_ips = [
      + (known after apply),
      + (known after apply),
    ]
  + public_lb_ip    = (known after apply)

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can deploy our resources with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply

...
...
      + is_preserve_source_destination = false
      + is_private                     = true
      + lifecycle_details              = (known after apply)
      + nlb_ip_version                 = (known after apply)
      + state                          = (known after apply)
      + subnet_id                      = (known after apply)
      + system_tags                    = (known after apply)
      + time_created                   = (known after apply)
      + time_updated                   = (known after apply)

      + reserved_ips {
          + id = (known after apply)
        }
    }

Plan: 27 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + k3s_servers_ips = [
      + (known after apply),
      + (known after apply),
    ]
  + k3s_workers_ips = [
      + (known after apply),
      + (known after apply),
    ]
  + public_lb_ip    = (known after apply)

  Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes

...
...

module.k3s_cluster.oci_network_load_balancer_backend.k3s_kube_api_backend[0]: Still creating... [50s elapsed]
module.k3s_cluster.oci_network_load_balancer_backend.k3s_kube_api_backend[0]: Still creating... [1m0s elapsed]
module.k3s_cluster.oci_network_load_balancer_backend.k3s_kube_api_backend[0]: Creation complete after 1m1s [...]

Apply complete! Resources: 27 added, 0 changed, 0 destroyed.

Outputs:

k3s_servers_ips = [
  "X.X.X.X",
  "X.X.X.X",
]
k3s_workers_ips = [
  "X.X.X.X",
  "X.X.X.X",
]
public_lb_ip = tolist([
  "X.X.X.X",
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, on one of the master nodes, you can check the status of the cluster with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh X.X.X.X -lubuntu

ubuntu@inst-iwlqz-k3s-servers:~$ sudo su -
root@inst-iwlqz-k3s-servers:~# kubectl get nodes

NAME                     STATUS   ROLES                       AGE     VERSION
inst-axdzf-k3s-workers   Ready    &amp;lt;none&amp;gt;                      4m34s   v1.22.6+k3s1
inst-hmgnl-k3s-servers   Ready    control-plane,etcd,master   4m14s   v1.22.6+k3s1
inst-iwlqz-k3s-servers   Ready    control-plane,etcd,master   6m4s    v1.22.6+k3s1
inst-lkvem-k3s-workers   Ready    &amp;lt;none&amp;gt;                      5m35s   v1.22.6+k3s1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Public LB check
&lt;/h3&gt;

&lt;p&gt;We can now test the public load balancer, the Nginx ingress controller and the security list ingress rules. On your local PC run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -v http://&amp;lt;PUBLIC_LB_IP&amp;gt;

*   Trying PUBLIC_LB_IP:80...
* TCP_NODELAY set
* Connected to PUBLIC_LB_IP (PUBLIC_LB_IP) port 80 (#0)
&amp;gt; GET / HTTP/1.1
&amp;gt; Host: PUBLIC_LB_IP
&amp;gt; User-Agent: curl/7.68.0
&amp;gt; Accept: */*
&amp;gt; 
* Mark bundle as not supporting multiuse
&amp;lt; HTTP/1.1 404 Not Found
&amp;lt; Date: Fri, 25 Feb 2022 14:03:09 GMT
&amp;lt; Content-Type: text/html
&amp;lt; Content-Length: 146
&amp;lt; Connection: keep-alive
&amp;lt; 
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;&amp;lt;title&amp;gt;404 Not Found&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;404 Not Found&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;nginx&amp;lt;/center&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
* Connection #0 to host PUBLIC_LB_IP left intact
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;404&lt;/em&gt; is a correct response since the cluster is empty. We can also test the https listener/backends:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -k -v https://&amp;lt;PUBLIC_LB_IP&amp;gt;

* Trying PUBLIC_LB_IP:443...
* TCP_NODELAY set
* Connected to PUBLIC_LB_IP (PUBLIC_LB_IP) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: C=IT; ST=Italy; L=Brescia; O=GL Ltd; OU=IT; CN=testlb.domainexample.com; emailAddress=email@you.com
*  start date: Feb 25 10:28:29 2022 GMT
*  expire date: Feb 25 10:28:29 2023 GMT
*  issuer: C=IT; ST=Italy; L=Brescia; O=GL Ltd; OU=IT; CN=testlb.domainexample.com; emailAddress=email@you.com
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
&amp;gt; GET / HTTP/1.1
&amp;gt; Host: PUBLIC_LB_IP
&amp;gt; User-Agent: curl/7.68.0
&amp;gt; Accept: */*
&amp;gt; 
* Mark bundle as not supporting multiuse
&amp;lt; HTTP/1.1 404 Not Found
&amp;lt; Date: Fri, 25 Feb 2022 13:48:19 GMT
&amp;lt; Content-Type: text/html
&amp;lt; Content-Length: 146
&amp;lt; Connection: keep-alive
&amp;lt; 
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;&amp;lt;title&amp;gt;404 Not Found&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;404 Not Found&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;nginx&amp;lt;/center&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
* Connection #0 to host PUBLIC_LB_IP left intact
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Longhorn check
&lt;/h3&gt;

&lt;p&gt;To check if Longhorn was successfully installed, run on one of the master nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ns
NAME              STATUS   AGE
default           Active   9m40s
kube-node-lease   Active   9m39s
kube-public       Active   9m39s
kube-system       Active   9m40s
longhorn-system   Active   8m52s   &amp;lt;- longhorn namespace 


root@inst-hmgnl-k3s-servers:~# kubectl get pods -n longhorn-system
NAME                                        READY   STATUS    RESTARTS        AGE
csi-attacher-5f46994f7-8w9sg                1/1     Running   0               7m52s
csi-attacher-5f46994f7-qz7d4                1/1     Running   0               7m52s
csi-attacher-5f46994f7-rjqlx                1/1     Running   0               7m52s
csi-provisioner-6ccbfbf86f-fw7q4            1/1     Running   0               7m52s
csi-provisioner-6ccbfbf86f-gwmrg            1/1     Running   0               7m52s
csi-provisioner-6ccbfbf86f-nsf84            1/1     Running   0               7m52s
csi-resizer-6dd8bd4c97-7l67f                1/1     Running   0               7m51s
csi-resizer-6dd8bd4c97-g66wj                1/1     Running   0               7m51s
csi-resizer-6dd8bd4c97-nksmd                1/1     Running   0               7m51s
csi-snapshotter-86f65d8bc-2gcwt             1/1     Running   0               7m50s
csi-snapshotter-86f65d8bc-kczrw             1/1     Running   0               7m50s
csi-snapshotter-86f65d8bc-sjmnv             1/1     Running   0               7m50s
engine-image-ei-fa2dfbf0-6rpz2              1/1     Running   0               8m30s
engine-image-ei-fa2dfbf0-7l5k8              1/1     Running   0               8m30s
engine-image-ei-fa2dfbf0-7nph9              1/1     Running   0               8m30s
engine-image-ei-fa2dfbf0-ndkck              1/1     Running   0               8m30s
instance-manager-e-31a0b3f5                 1/1     Running   0               8m26s
instance-manager-e-37aa4663                 1/1     Running   0               8m27s
instance-manager-e-9cc7cc9d                 1/1     Running   0               8m20s
instance-manager-e-f39d9f2c                 1/1     Running   0               8m29s
instance-manager-r-1364d994                 1/1     Running   0               8m26s
instance-manager-r-c1670269                 1/1     Running   0               8m20s
instance-manager-r-c20ebeb3                 1/1     Running   0               8m28s
instance-manager-r-c54bf9a5                 1/1     Running   0               8m27s
longhorn-csi-plugin-2qj94                   2/2     Running   0               7m50s
longhorn-csi-plugin-4t8jm                   2/2     Running   0               7m50s
longhorn-csi-plugin-ws82l                   2/2     Running   0               7m50s
longhorn-csi-plugin-zmc9q                   2/2     Running   0               7m50s
longhorn-driver-deployer-784546d78d-s6cd2   1/1     Running   0               8m58s
longhorn-manager-l8sd8                      1/1     Running   0               9m1s
longhorn-manager-r2q5c                      1/1     Running   1 (8m30s ago)   9m1s
longhorn-manager-s6wql                      1/1     Running   0               9m1s
longhorn-manager-zrrf2                      1/1     Running   0               9m
longhorn-ui-9fdb94f9-6shsr                  1/1     Running   0               8m59s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy a sample stack
&lt;/h2&gt;

&lt;p&gt;Finally, to test all the components of the cluster, we can deploy a sample stack. The stack is composed of the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MariaDB&lt;/li&gt;
&lt;li&gt;Nginx&lt;/li&gt;
&lt;li&gt;Wordpress&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each component consists of one Deployment and one Service.&lt;br&gt;
WordPress and Nginx share the same persistent volume (ReadWriteMany, with the Longhorn storage class). The Nginx configuration is stored in four ConfigMaps, and the Nginx service is exposed by the Nginx ingress controller.&lt;/p&gt;

&lt;p&gt;Deploy the resources with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/mariadb/all-resources.yml
kubectl apply -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/nginx/all-resources.yml
kubectl apply -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/wordpress/all-resources.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and check the status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
mariadb     1/1     1            1           92m
nginx       1/1     1            1           79m
wordpress   1/1     1            1           91m

kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.43.0.1       &amp;lt;none&amp;gt;        443/TCP    5h8m
mariadb-svc     ClusterIP   10.43.184.188   &amp;lt;none&amp;gt;        3306/TCP   92m
nginx-svc       ClusterIP   10.43.9.202     &amp;lt;none&amp;gt;        80/TCP     80m
wordpress-svc   ClusterIP   10.43.242.26    &amp;lt;none&amp;gt;        9000/TCP   91m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you are ready to set up WordPress: open the LB public IP and follow the wizard. &lt;strong&gt;NOTE&lt;/strong&gt;: Nginx and the Kubernetes Ingress rule are configured without a virtual host/server name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgarutilorenzo.github.io%2Fimages%2Fk3s-wp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgarutilorenzo.github.io%2Fimages%2Fk3s-wp.png" alt="k3s-wp-install"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To clean the deployed resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/mariadb/all-resources.yml
kubectl delete -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/nginx/all-resources.yml
kubectl delete -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/wordpress/all-resources.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Clean up
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>kubernetes</category>
      <category>oci</category>
      <category>terraform</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Build a simple web app using BottlePy</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Wed, 29 Dec 2021 16:51:10 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/build-a-simple-web-app-using-bottlepy-2mp</link>
      <guid>https://dev.to/garutilorenzo/build-a-simple-web-app-using-bottlepy-2mp</guid>
      <description>&lt;p&gt;Build a simple web application with SQLAlchemy and Redis support using BottlePy &lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Requirements&lt;/li&gt;
&lt;li&gt;The application stack&lt;/li&gt;
&lt;li&gt;Data used in this example site&lt;/li&gt;
&lt;li&gt;Setup the environment&lt;/li&gt;
&lt;li&gt;Download sample data&lt;/li&gt;
&lt;li&gt;Start the environment&lt;/li&gt;
&lt;li&gt;Application Overview&lt;/li&gt;
&lt;li&gt;App configuration&lt;/li&gt;
&lt;li&gt;DB configuration&lt;/li&gt;
&lt;li&gt;SQLAlchemy plugin&lt;/li&gt;
&lt;li&gt;Redis Cache&lt;/li&gt;
&lt;li&gt;Data export&lt;/li&gt;
&lt;li&gt;Data import&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;To use this environment you need &lt;a href="https://docs.docker.com/get-docker/"&gt;Docker&lt;/a&gt; and &lt;a href="https://docs.docker.com/compose/install/"&gt;Docker Compose&lt;/a&gt; installed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The application stack
&lt;/h2&gt;

&lt;p&gt;Backend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://bottlepy.org/docs/dev/"&gt;BottlePy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sqlalchemy.org/"&gt;SQLAlchemy&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Frontend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://getbootstrap.com/"&gt;Bootstrap 5&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jquery.com/"&gt;jQuery&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.postgresql.org/"&gt;PostgreSQL&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://redis.io/"&gt;Redis&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Webserver:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.nginx.com/"&gt;Nginx&lt;/a&gt; - Only Prod env.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Data used
&lt;/h2&gt;

&lt;p&gt;To use this example application we need to import some example data. This example application uses the "Stack Exchange Data Dump" available on &lt;a href="https://archive.org/details/stackexchange"&gt;archive.org&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;All the data used by this site is under the &lt;a href="https://creativecommons.org/licenses/by-sa/4.0/"&gt;cc-by-sa 4.0&lt;/a&gt; license.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup the environment
&lt;/h2&gt;

&lt;p&gt;First of all, clone &lt;a href="https://github.com/garutilorenzo/simple-bottlepy-application"&gt;this&lt;/a&gt; repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/garutilorenzo/simple-bottlepy-application.git

&lt;span class="nb"&gt;cd &lt;/span&gt;simple-bottlepy-application
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we are ready to set up our environment. For dev purposes, link docker-compose-dev.yml to docker-compose.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; docker-compose-dev.yml docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For prod environments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; docker-compose-dev.yml docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The differences between the dev and prod envs are:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Prod&lt;/th&gt;
&lt;th&gt;Dev&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Nginx is used to expose our example application&lt;/td&gt;
&lt;td&gt;Built-in HTTP development server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Http port 80&lt;/td&gt;
&lt;td&gt;Http port 8080&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debug mode is disabled&lt;/td&gt;
&lt;td&gt;Debug mode is enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reloader is disabled&lt;/td&gt;
&lt;td&gt;Reloader is enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Now we can download some sample data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Download sample data
&lt;/h2&gt;

&lt;p&gt;To download some example data run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./download_samples.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default the archives with the 'meta' attribute will be downloaded. If you want more data, remove 'meta' from the archive names in download_samples.sh.&lt;/p&gt;

&lt;p&gt;Small data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;sample &lt;span class="k"&gt;in &lt;/span&gt;workplace.meta.stackexchange.com.7z unix.meta.stackexchange.com.7z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Big data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;sample &lt;span class="k"&gt;in &lt;/span&gt;workplace.stackexchange.com.7z unix.stackexchange.com.7z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; not all the Stack Exchange sites were imported in this example. After choosing the archives to download, adjust the network.py schema under src/schema/:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Sites&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;enum&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Enum&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;vi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'vi.stackexchange.com'&lt;/span&gt;
    &lt;span class="n"&gt;workplace&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'workplace.stackexchange.com'&lt;/span&gt;
    &lt;span class="n"&gt;wordpress&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'wordpress.stackexchange.com'&lt;/span&gt;
    &lt;span class="n"&gt;unix&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'unix.stackexchange.com'&lt;/span&gt;
    &lt;span class="n"&gt;tex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'tex.stackexchange.com'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
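&lt;p&gt;As an illustration of how a downloaded archive name maps onto the Sites enum above, here is a minimal sketch (the site_for_archive helper is hypothetical, not part of the repository, and the enum is abridged to two members):&lt;br&gt;
&lt;/p&gt;

```python
import enum

# Abridged copy of src/schema/network.py: one member per imported site.
class Sites(enum.Enum):
    workplace = 'workplace.stackexchange.com'
    unix = 'unix.stackexchange.com'

def site_for_archive(archive_name):
    """Hypothetical helper: map an archive file name to a Sites member.

    A '.meta' dump belongs to the same site as the full dump, so the
    'meta' segment is stripped before the enum lookup.
    """
    host = archive_name
    if host.endswith('.7z'):
        host = host[:-3]
    host = host.replace('.meta.', '.')  # meta dumps map to the parent site
    return Sites(host)

print(site_for_archive('unix.meta.stackexchange.com.7z').name)  # unix
```

&lt;p&gt;Lookup by value is what Enum gives you for free here: Sites('unix.stackexchange.com') returns the member, and raises ValueError for a site you did not add to the schema.&lt;/p&gt;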



&lt;p&gt;Once the data is downloaded we can import the data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; bottle bash

web@4edf053b7e4f:~/src&lt;span class="nv"&gt;$ &lt;/span&gt; python init_db.py &lt;span class="c"&gt;# &amp;lt;- Initialize DB&lt;/span&gt;
web@4edf053b7e4f:~/src&lt;span class="nv"&gt;$ &lt;/span&gt; python import_data.py &lt;span class="c"&gt;# &amp;lt;- Import sample data&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, for each data sample you have downloaded (e.g. tex, unix, vi), a Python subprocess is started that will import, in order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;all the tags&lt;/li&gt;
&lt;li&gt;all the users&lt;/li&gt;
&lt;li&gt;all the posts&lt;/li&gt;
&lt;li&gt;all the post history events&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the import is finished we can start our environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start the environment
&lt;/h2&gt;

&lt;p&gt;With our DB populated we can now start our web application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

Creating network &lt;span class="s2"&gt;"bottle-exchange_default"&lt;/span&gt; with the default driver
Creating postgres ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating redis    ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating bottle   ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The application will be available at &lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt; for the dev environment and &lt;a href="http://localhost"&gt;http://localhost&lt;/a&gt; for the prod environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Index
&lt;/h3&gt;

&lt;p&gt;Home page with a search form. On every change of the "Network" select, the "Tags" select is populated by an AJAX call to /api/autocomplete/form/get_tags (POST) (for more details see src/bottle/static/asset/js/custom.js).&lt;/p&gt;

&lt;p&gt;The POST call is authenticated with a random hard-coded string (see src/bottle/app.py, api_get_tags).&lt;/p&gt;

&lt;h3&gt;
  
  
  Tags
&lt;/h3&gt;

&lt;p&gt;List of all available tags with a pagination nav.&lt;br&gt;
Clicking on the tag name, the application will search all questions matching the tag you have selected; clicking on the site name, the application will search all questions matching both the tag and the site you have selected.&lt;/p&gt;
&lt;h3&gt;
  
  
  Users
&lt;/h3&gt;

&lt;p&gt;Table view of all available users with a pagination nav.&lt;/p&gt;

&lt;p&gt;Clicking on the username, we enter the user's details page. On the user details page we see: Up Votes, Views, Down Votes. If the user has populated the "About me" field, we see a button that triggers a modal with the "About me" details.&lt;br&gt;
If the user has asked or answered some questions, we see a list of questions in the "Post" section.&lt;/p&gt;
&lt;h3&gt;
  
  
  Posts
&lt;/h3&gt;

&lt;p&gt;List of all posts with a pagination nav.&lt;/p&gt;
&lt;h3&gt;
  
  
  API/REST endpoint
&lt;/h3&gt;

&lt;p&gt;This application exposes one API/REST route: /api/get/tags. You can query this route with a POST call using a JSON payload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--request&lt;/span&gt; POST &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s1"&gt;'{"auth_key":"dd4d5ff1c13!28356236c402d7ada.aed8b797ebd299b942291bc66,f804492be2009f14"}'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  http://localhost:8080/api/get/tags | jq

 &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"data"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"clean_name"&lt;/span&gt;: &lt;span class="s2"&gt;"html5"&lt;/span&gt;,
      &lt;span class="s2"&gt;"created_time"&lt;/span&gt;: &lt;span class="s2"&gt;"2021-12-29 11:33:06.517152+00:00"&lt;/span&gt;,
      &lt;span class="s2"&gt;"id"&lt;/span&gt;: &lt;span class="s2"&gt;"1"&lt;/span&gt;,
      &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"html5"&lt;/span&gt;,
      &lt;span class="s2"&gt;"network_sites"&lt;/span&gt;: &lt;span class="s2"&gt;"Sites.wordpress"&lt;/span&gt;,
      &lt;span class="s2"&gt;"questions"&lt;/span&gt;: &lt;span class="s2"&gt;"91"&lt;/span&gt;,
      &lt;span class="s2"&gt;"tag_id"&lt;/span&gt;: &lt;span class="s2"&gt;"2"&lt;/span&gt;,
      &lt;span class="s2"&gt;"updated_time"&lt;/span&gt;: &lt;span class="s2"&gt;"None"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    ...
    &lt;span class="o"&gt;]&lt;/span&gt;,
  &lt;span class="s2"&gt;"errors"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;,
  &lt;span class="s2"&gt;"items"&lt;/span&gt;: 5431,
  &lt;span class="s2"&gt;"last_page"&lt;/span&gt;: 27
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The auth_key is hard-coded in src/bottle/app.py.&lt;/p&gt;

&lt;h2&gt;
  
  
  App configuration
&lt;/h2&gt;

&lt;p&gt;The application's configuration are loaded by the load_config module (src/load_config.py).&lt;/p&gt;

&lt;p&gt;This module will load a .yml file under:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/app/src/&amp;lt;BOTTLE_APP_NAME&amp;gt;/config/&amp;lt;BOTTLE_APP_ENVIRONMENT&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;BOTTLE_APP_NAME&lt;/em&gt; and &lt;em&gt;BOTTLE_APP_ENVIRONMENT&lt;/em&gt; are environment variables.&lt;/p&gt;

&lt;p&gt;BOTTLE_APP_NAME is the name of the path where our Bottle application lives, in this case &lt;em&gt;bottle&lt;/em&gt;. BOTTLE_APP_ENVIRONMENT is either prod or dev.&lt;/p&gt;
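&lt;p&gt;A minimal sketch of how that path could be assembled from the two environment variables (the config_path helper and its fallback defaults are illustrative assumptions; the real load_config module also reads and parses the YAML file):&lt;br&gt;
&lt;/p&gt;

```python
import os

def config_path(base='/app/src'):
    """Build the config directory path from the two environment variables.

    The defaults ('bottle' and 'dev') are purely illustrative; the real
    module's fallback behavior may differ.
    """
    app_name = os.environ.get('BOTTLE_APP_NAME', 'bottle')
    app_env = os.environ.get('BOTTLE_APP_ENVIRONMENT', 'dev')
    return os.path.join(base, app_name, 'config', app_env)

os.environ['BOTTLE_APP_NAME'] = 'bottle'
os.environ['BOTTLE_APP_ENVIRONMENT'] = 'prod'
print(config_path())  # /app/src/bottle/config/prod
```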

&lt;p&gt;An example configuration is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;enable_debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;True&lt;/span&gt;
&lt;span class="na"&gt;enable_reloader&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;True&lt;/span&gt;
&lt;span class="na"&gt;http_port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;span class="na"&gt;pgsql_username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bottle"&lt;/span&gt;
&lt;span class="na"&gt;pgsql_password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;b0tTl3_Be#"&lt;/span&gt;
&lt;span class="na"&gt;pgsql_db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bottle_exchange"&lt;/span&gt;
&lt;span class="na"&gt;pgsql_host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pgsql"&lt;/span&gt;
&lt;span class="na"&gt;pgsql_port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
&lt;span class="na"&gt;create_db_schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;True&lt;/span&gt;
&lt;span class="na"&gt;default_result_limit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;50&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  DB configuration
&lt;/h2&gt;

&lt;p&gt;The database configuration is defined under the src/schema module.&lt;/p&gt;

&lt;p&gt;The base.py file contains the engine configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;load_config&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;- See App configuration
&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;sqlalchemy&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;create_engine&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;sqlalchemy.ext.declarative&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;declarative_base&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;sqlalchemy.orm&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sessionmaker&lt;/span&gt;

&lt;span class="n"&gt;main_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;load_config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;load_config&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;conn_string&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'postgresql+psycopg2://{pgsql_username}:{pgsql_password}@{pgsql_host}:{pgsql_port}/{pgsql_db}'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;main_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;engine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;create_engine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn_string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pool_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pool_recycle&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;Session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sessionmaker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bind&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;Base&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;declarative_base&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each table is defined in its own file, also under the schema module:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;schema&lt;/th&gt;
&lt;th&gt;Tables&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;network.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Sites class is not a SQLAlchemy object, but is an Enum used by all the other tables&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;posts.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;posts,post_history&lt;/td&gt;
&lt;td&gt;Table definition for post and post_history tables. This module contains also three enum definitions: PostType, PostHistoryType, CloseReason&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;tags.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;tags&lt;/td&gt;
&lt;td&gt;Tags table&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;users.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;users&lt;/td&gt;
&lt;td&gt;Users table&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  SQLAlchemy plugin
&lt;/h2&gt;

&lt;p&gt;This example application installs and uses a SQLAlchemy plugin (src/bottle/bottle_sa.py). This plugin is used to handle the SQLAlchemy session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;schema.base&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Base&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;engine&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;- Base and engine are defined in the schema module, see "DB configuration"
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;bottle_sa&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SQLAlchemyPlugin&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;load_config&lt;/span&gt;

&lt;span class="n"&gt;main_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;load_config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;load_config&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Main Bottle app/application
&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;application&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Bottle&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# DB Plugin
&lt;/span&gt;&lt;span class="n"&gt;saPlugin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;SQLAlchemyPlugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Base&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;create&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;main_config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'create_db_schema'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; 
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;main_config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;application&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;saPlugin&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This plugin passes an extra parameter to each function defined in src/bottle/app.py. By default this parameter is 'db', but it can be changed by passing the extra parameter 'keyword' to the SQLAlchemyPlugin init.&lt;/p&gt;
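&lt;p&gt;The keyword-injection idea can be sketched with a simplified stand-in (SessionPlugin is hypothetical, not the repository's bottle_sa.py; a real Bottle plugin also hooks into route setup and commits, rolls back, and closes the session around each request):&lt;br&gt;
&lt;/p&gt;

```python
import inspect
from functools import wraps

class SessionPlugin:
    """Toy plugin: inject a session into callbacks that declare a keyword arg.

    'session_factory' stands in for sessionmaker(bind=engine); 'keyword'
    mirrors the SQLAlchemyPlugin option of the same name.
    """
    def __init__(self, session_factory, keyword='db'):
        self.session_factory = session_factory
        self.keyword = keyword

    def apply(self, callback):
        # Only wrap callbacks that actually ask for the session keyword.
        params = inspect.signature(callback).parameters
        if self.keyword not in params:
            return callback
        @wraps(callback)
        def wrapper(*args, **kwargs):
            kwargs[self.keyword] = self.session_factory()
            return callback(*args, **kwargs)
        return wrapper

plugin = SessionPlugin(session_factory=lambda: 'fake-session')

def index(db):
    return db

print(plugin.apply(index)())  # fake-session
```

&lt;p&gt;Inspecting the callback's signature is what lets routes that do not declare the keyword opt out of the session entirely.&lt;/p&gt;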

&lt;p&gt;An example function would then be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/docs'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;view&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'docs'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;- db is our SQLAlchemy session
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'docs'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Redis Cache
&lt;/h2&gt;

&lt;p&gt;In this example application we use Redis to cache some pages. This caching approach is very useful if you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a site with few updates&lt;/li&gt;
&lt;li&gt;a slow page/route&lt;/li&gt;
&lt;li&gt;a need to decrease the load on your DB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The RedisCache class is defined in src/bottle/bottle_cache.py and this is an example usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;bottle_cache&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RedisCache&lt;/span&gt;

&lt;span class="c1"&gt;# Cache
&lt;/span&gt;&lt;span class="n"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;RedisCache&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/tags'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/tags/&amp;lt;page_nr:int&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cached&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;view&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'tags'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_tags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;page_nr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;do_something&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;something&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
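&lt;p&gt;The decorator pattern behind cache.cached() can be sketched with an in-memory dict standing in for Redis (DictCache is an illustrative toy, not the repository's code; the real RedisCache serializes responses into Redis and honors the cache_expiry TTL):&lt;br&gt;
&lt;/p&gt;

```python
from functools import wraps

class DictCache:
    """Toy RedisCache stand-in: memoize a function's result keyed by its args."""
    def __init__(self):
        self.store = {}
        self.hits = 0  # counts how many calls were served from the cache

    def cached(self):
        def decorator(func):
            @wraps(func)
            def wrapper(*args):
                key = (func.__name__,) + args
                if key in self.store:
                    self.hits += 1
                    return self.store[key]
                result = func(*args)
                self.store[key] = result
                return result
            return wrapper
        return decorator

cache = DictCache()

@cache.cached()
def get_tags(page_nr):
    # Pretend this is the expensive DB query behind the /tags route.
    return ['tag-page-%d' % page_nr]

get_tags(1)
get_tags(1)
print(cache.hits)  # 1
```

&lt;p&gt;The second call with the same page number never touches the "DB": the result comes straight out of the store, which is exactly the load reduction the list above describes.&lt;/p&gt;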



&lt;p&gt;You can initialize the RedisCache class with an extra config parameter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;'redis_host'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'&amp;lt;redis_hostname'&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'redis_port'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;6379&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'redis_db'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'cache_expiry'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;RedisCache&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default configuration values are:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Param&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;redis_host&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;redis&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Redis FQDN or IP address&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;redis_port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;6379&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Redis listen port&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;redis_db&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Redis database&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;cache_expiry&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;3600&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Global cache expiry time in seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;@cached&lt;/code&gt; decorator accepts the following arguments:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Param&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;expiry&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;None&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Per-route cache expiry time. If not set, the global expiry time is used&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;key_prefix&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;bottle_cache_%s&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Redis key prefix&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;content_type&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;text/html; charset=UTF-8&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Default content type&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
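&lt;p&gt;For example, a route cached for five minutes under a custom key prefix might look like this (a sketch: the &lt;code&gt;/posts&lt;/code&gt; route is hypothetical, while &lt;code&gt;app&lt;/code&gt; and &lt;code&gt;cache&lt;/code&gt; are the objects created above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;@app.route('/posts')
@cache.cached(expiry=300, key_prefix='posts_cache_%s')  # 300s overrides the global cache_expiry
@view('posts')
def get_posts(db):
    do_something()
    return something
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
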

&lt;h3&gt;
  
  
  Caching json requests
&lt;/h3&gt;

&lt;p&gt;A JSON caching example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/api/get/tags'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'POST'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cached&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'application/json'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;api_get_tags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;do_something&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;something&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Invalidate cache
&lt;/h3&gt;

&lt;p&gt;To invalidate the cache, pass the &lt;strong&gt;invalidate_cache&lt;/strong&gt; key as a query parameter, or in the request body if you make a POST call&lt;/p&gt;
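&lt;p&gt;For example, to refresh the cached &lt;code&gt;/tags&lt;/code&gt; route shown earlier (host and port are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl 'http://localhost:8080/tags?invalidate_cache=1'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
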

&lt;h3&gt;
  
  
  Skip/bypass cache
&lt;/h3&gt;

&lt;p&gt;To skip or bypass the cache, pass the &lt;strong&gt;skip_cache&lt;/strong&gt; key as a query parameter, or in the request body if you make a POST call&lt;/p&gt;
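&lt;p&gt;For example, to get a fresh response for the &lt;code&gt;/tags&lt;/code&gt; route shown earlier without using the cache (host and port are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl 'http://localhost:8080/tags?skip_cache=1'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
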

&lt;h2&gt;
  
  
  Data export
&lt;/h2&gt;

&lt;p&gt;To back up the PostgreSQL data, run dump_db.sh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./dump_db.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The dump will be placed in the root directory of this repository and named dump.sql.gz (gzip format)&lt;/p&gt;
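&lt;p&gt;You can sanity-check the export by peeking at the first lines of the compressed dump:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zcat dump.sql.gz | head
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
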

&lt;h2&gt;
  
  
  Data import
&lt;/h2&gt;

&lt;p&gt;To import an existing DB uncomment the following line in the docker-compose.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;volume&lt;/span&gt;
      &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
      &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/postgresql/data&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./sql:/docker-entrypoint-initdb.d&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;- uncomment this line&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and place your dump, in gzip or plain-text format, under sql/ (create the directory first)&lt;/p&gt;
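&lt;p&gt;For example, to import the dump produced by dump_db.sh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mkdir -p sql
cp dump.sql.gz sql/
docker-compose up -d  # the postgres entrypoint runs everything under /docker-entrypoint-initdb.d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that the official postgres image only runs the init scripts when the data volume is empty, so remove any existing pgsql volume before importing.&lt;/p&gt;
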

&lt;h2&gt;
  
  
  Stop the environment
&lt;/h2&gt;

&lt;p&gt;To stop the environment, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose down

Stopping bottle   ... &lt;span class="k"&gt;done
&lt;/span&gt;Stopping postgres ... &lt;span class="k"&gt;done
&lt;/span&gt;Stopping redis    ... &lt;span class="k"&gt;done
&lt;/span&gt;Removing bottle   ... &lt;span class="k"&gt;done
&lt;/span&gt;Removing postgres ... &lt;span class="k"&gt;done
&lt;/span&gt;Removing redis    ... &lt;span class="k"&gt;done
&lt;/span&gt;Removing network bottle-exchange_default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To clean up all the data (the PostgreSQL data), pass the extra &lt;code&gt;-v&lt;/code&gt; argument to docker-compose down; with this flag the pgsql volume will be deleted.&lt;/p&gt;
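&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose down -v  # also removes the named volumes, including the pgsql data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
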

</description>
      <category>python</category>
      <category>sqlalchemy</category>
      <category>redis</category>
      <category>bootstrap</category>
    </item>
    <item>
      <title>Deploy a k3s cluster on Oracle Cloud using terraform</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Fri, 29 Oct 2021 15:56:20 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/deploy-a-k3s-cluster-on-oracle-cloud-using-terraform-3n10</link>
      <guid>https://dev.to/garutilorenzo/deploy-a-k3s-cluster-on-oracle-cloud-using-terraform-3n10</guid>
      <description>&lt;p&gt;Welcome  to the last chapter of the series dedicated to the Oracle cloud infrastructure and terraform, if you have missed the previous chapters here you can find the links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/garutilorenzo/setup-terraform-oracle-cloud-provider-3eck"&gt;Setup Terraform Oracle Cloud provider&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa"&gt;Deploy an Oracle Cloud compute instance using terraform&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/garutilorenzo/deploy-multiple-oracle-compute-instances-using-an-instance-pool-and-terraform-a5o"&gt;Deploy multiple Oracle Cloud compute instances using an instance pool using terraform&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we have all the knowledge for deploying a &lt;a href="https://k3s.io/"&gt;k3s&lt;/a&gt; cluster on Oracle Cloud infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Notes
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; This k3s setup is &lt;strong&gt;not&lt;/strong&gt; highly available: the k3s server is a single point of failure. This setup is for testing purposes &lt;strong&gt;only&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Environment setup
&lt;/h3&gt;

&lt;p&gt;In &lt;a href="https://github.com/garutilorenzo/oracle-cloud-terraform-examples"&gt;our&lt;/a&gt; repository, change into the k3s-cluster directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd oracle-cloud-terraform-examples/k3s-cluster/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modify vars.tf in the same way we modified it in the simple instance &lt;a href="https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa"&gt;example&lt;/a&gt; (to set up the vars.tf file from scratch, follow the Variables setup section) &lt;/p&gt;

&lt;h3&gt;
  
  
  Extra variables
&lt;/h3&gt;

&lt;p&gt;We have some extra variables in this example:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Variable&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;fault_domains&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List of fault domains where the instance pool will deploy our instances&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;instance_pool_size&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;3&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Number of instances to launch in the instance pool, i.e. the number of k3s agents to deploy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k3s_server_private_ip&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;10.0.0.50&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Private IP address that will be assigned to the k3s-server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;k3s_token&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;2aaf122eed3409ds2c6fagfad4073-92dcdgade664d8c1c7f49z&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Token used to install the k3s cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;install_longhorn&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;true&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Boolean; if true (default), &lt;a href="https://longhorn.io/"&gt;longhorn&lt;/a&gt; block storage will be installed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;longhorn_release&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;v1.2.2&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Longhorn release version&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
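&lt;p&gt;These defaults can be overridden in a terraform.tfvars file, for example (a sketch; replace the token placeholder with a long random string of your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;instance_pool_size    = 5
k3s_server_private_ip = "10.0.0.50"
k3s_token             = "&amp;lt;generate-a-long-random-token&amp;gt;"
install_longhorn      = true
longhorn_release      = "v1.2.2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
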

&lt;h3&gt;
  
  
  Infrastructure overview
&lt;/h3&gt;

&lt;p&gt;The infrastructure is the same as the instance-pool &lt;a href="https://dev.to/garutilorenzo/deploy-multiple-oracle-compute-instances-using-an-instance-pool-and-terraform-a5o"&gt;example&lt;/a&gt;, but the network load balancer has one extra listener (port 443, HTTPS).&lt;/p&gt;

&lt;h3&gt;
  
  
  Notes
&lt;/h3&gt;

&lt;p&gt;Some important notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By default the firewall on the compute instances is disabled; in some tests the firewall caused problems&lt;/li&gt;
&lt;li&gt;k3s will be installed on all the instances&lt;/li&gt;
&lt;li&gt;The operating system used is Ubuntu 20.04&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Deploy
&lt;/h3&gt;

&lt;p&gt;Now &lt;a href="https://dev.to/garutilorenzo/setup-terraform-oracle-cloud-provider-3eck"&gt;create&lt;/a&gt; the terraform.tfvars file (Terraform setup section), and initialize terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/oci...
- Installing hashicorp/oci v4.50.0...
- Installed hashicorp/oci v4.50.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are now ready to deploy our infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # oci_core_default_route_table.default_oci_core_default_route_table will be created
  + resource "oci_core_default_route_table" "default_oci_core_default_route_table" {
      + compartment_id             = (known after apply)
      + defined_tags               = (known after apply)
      + display_name               = (known after apply)
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + manage_default_resource_id = (known after apply)
      + state                      = (known after apply)
      + time_created               = (known after apply)

      + route_rules {
          + cidr_block        = (known after apply)
          + description       = (known after apply)
          + destination       = "0.0.0.0/0"
          + destination_type  = "CIDR_BLOCK"
          + network_entity_id = (known after apply)
        }
    }


&amp;lt;TRUNCATED OUTPUT&amp;gt;

Plan: 21 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + k3s_agents_ips = [
      + (known after apply),
      + (known after apply),
      + (known after apply),
    ]
  + k3s_server_ip  = (known after apply)
  + lb_ip          = (known after apply)

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there are no errors, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 &amp;lt;= read (data resources)

Terraform will perform the following actions:

  # data.oci_core_instance.k3s_agents_instances_ips[0] will be read during apply
  # (config refers to values not yet known)
 &amp;lt;= data "oci_core_instance" "k3s_agents_instances_ips"  {
      + agent_config                        = (known after apply)
      + async                               = (known after apply)
      + availability_config                 = (known after apply)
      + availability_domain                 = (known after apply)
      + boot_volume_id                      = (known after apply)
      + capacity_reservation_id             = (known after apply)
      + compartment_id                      = (known after apply)
      + create_vnic_details                 = (known after apply)
      + dedicated_vm_host_id                = (known after apply)
      + defined_tags                        = (known after apply)
      + display_name                        = (known after apply)
      + extended_metadata                   = (known after apply)
      + fault_domain                        = (known after apply)
      + freeform_tags                       = (known after apply)
      + hostname_label                      = (known after apply)
      + id                                  = (known after apply)
      + image                               = (known after apply)
      + instance_id                         = (known after apply)
      + instance_options                    = (known after apply)
      + ipxe_script                         = (known after apply)
      + is_pv_encryption_in_transit_enabled = (known after apply)
      + launch_mode                         = (known after apply)
      + launch_options                      = (known after apply)
      + metadata                            = (known after apply)
      + platform_config                     = (known after apply)
      + preemptible_instance_config         = (known after apply)
      + preserve_boot_volume                = (known after apply)
      + private_ip                          = (known after apply)
      + public_ip                           = (known after apply)
      + region                              = (known after apply)
      + shape                               = (known after apply)
      + shape_config                        = (known after apply)
      + source_details                      = (known after apply)
      + state                               = (known after apply)
      + subnet_id                           = (known after apply)
      + system_tags                         = (known after apply)
      + time_created                        = (known after apply)
      + time_maintenance_reboot_due         = (known after apply)
    }

&amp;lt;TRUNCATED OUTPUT&amp;gt;

oci_network_load_balancer_backend.k3s_http_backend[2]: Still creating... [1m40s elapsed]
oci_network_load_balancer_backend.k3s_https_backend[1]: Still creating... [1m51s elapsed]
oci_network_load_balancer_backend.k3s_https_backend[0]: Still creating... [1m51s elapsed]
oci_network_load_balancer_backend.k3s_http_backend[2]: Still creating... [1m50s elapsed]
oci_network_load_balancer_backend.k3s_http_backend[0]: Still creating... [1m50s elapsed]
oci_network_load_balancer_backend.k3s_https_backend[0]: Still creating... [2m1s elapsed]
oci_network_load_balancer_backend.k3s_https_backend[1]: Still creating... [2m1s elapsed]
oci_network_load_balancer_backend.k3s_http_backend[0]: Still creating... [2m0s elapsed]
oci_network_load_balancer_backend.k3s_http_backend[2]: Still creating... [2m0s elapsed]
oci_network_load_balancer_backend.k3s_http_backend[0]: Creation complete after 2m3s [id=networkLoadBalancers/ocid1.networkloadbalancer.oc1.eu-zurich-1.amaaaaaa5kjm7pyaglmtxbdrio5sp5vetj3b7hwhhxxhd7xtgytvqo4ckfsq/backendSets/k3s%20http%20backend/backends/ocid1.instance.oc1.eu-zurich-1.an5heljr5kjm7pycxldegwyajkogb3hdz6sup45qwpnqirxuck6l5y3jxwqq.80]
oci_network_load_balancer_backend.k3s_https_backend[1]: Creation complete after 2m6s [id=networkLoadBalancers/ocid1.networkloadbalancer.oc1.eu-zurich-1.amaaaaaa5kjm7pyaglmtxbdrio5sp5vetj3b7hwhhxxhd7xtgytvqo4ckfsq/backendSets/k3s%20https%20backend/backends/ocid1.instance.oc1.eu-zurich-1.an5heljr5kjm7pycs67mnhim6h46vimtx6akahatgssfprxkpti6ij5aqm5q.443]
oci_network_load_balancer_backend.k3s_https_backend[0]: Still creating... [2m11s elapsed]
oci_network_load_balancer_backend.k3s_http_backend[2]: Still creating... [2m10s elapsed]
oci_network_load_balancer_backend.k3s_https_backend[0]: Still creating... [2m21s elapsed]
oci_network_load_balancer_backend.k3s_http_backend[2]: Still creating... [2m20s elapsed]
oci_network_load_balancer_backend.k3s_http_backend[2]: Creation complete after 2m21s [id=networkLoadBalancers/ocid1.networkloadbalancer.oc1.eu-zurich-1.amaaaaaa5kjm7pyaglmtxbdrio5sp5vetj3b7hwhhxxhd7xtgytvqo4ckfsq/backendSets/k3s%20http%20backend/backends/ocid1.instance.oc1.eu-zurich-1.an5heljr5kjm7pyca6sjt44vji4axtni5ebyc2mq66hobokh2yfz5qsljnia.80]
oci_network_load_balancer_backend.k3s_https_backend[0]: Still creating... [2m31s elapsed]
oci_network_load_balancer_backend.k3s_https_backend[0]: Creation complete after 2m38s [id=networkLoadBalancers/ocid1.networkloadbalancer.oc1.eu-zurich-1.amaaaaaa5kjm7pyaglmtxbdrio5sp5vetj3b7hwhhxxhd7xtgytvqo4ckfsq/backendSets/k3s%20https%20backend/backends/ocid1.instance.oc1.eu-zurich-1.an5heljr5kjm7pycxldegwyajkogb3hdz6sup45qwpnqirxuck6l5y3jxwqq.443]

Apply complete! Resources: 21 added, 0 changed, 0 destroyed.

Outputs:

k3s_agents_ips = [
  "132.x.x.x",
  "152.x.x.x",
  "152.x.x.x",
]
k3s_server_ip = "132.x.x.x"
lb_ip = tolist([
  {
    "ip_address" = "152.x.x.x"
    "is_public" = true
    "reserved_ip" = tolist([])
  },
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can SSH into our k3s-server instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh ubuntu@132.x.x.x

...
35 updates can be applied immediately.
25 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo &amp;lt;command&amp;gt;".
See "man sudo_root" for details.

ubuntu@k3s-server:~$
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few minutes (at least one backend must be in a healthy state), the network load balancer will also respond to our requests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -v http://152.x.x.x/
*   Trying 152.x.x.x:80...
* TCP_NODELAY set
* Connected to 152.x.x.x (152.x.x.x) port 80 (#0)
&amp;gt; GET / HTTP/1.1
&amp;gt; Host: 152.x.x.x
&amp;gt; User-Agent: curl/7.68.0
&amp;gt; Accept: */*
&amp;gt; 
* Mark bundle as not supporting multiuse
&amp;lt; HTTP/1.1 404 Not Found
&amp;lt; Content-Type: text/plain; charset=utf-8
&amp;lt; X-Content-Type-Options: nosniff
&amp;lt; Date: Wed, 27 Oct 2021 13:20:05 GMT
&amp;lt; Content-Length: 19
&amp;lt; 
404 page not found
* Connection #0 to host 152.x.x.x left intact
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; 404 is the correct response, since there are no deployments yet&lt;/p&gt;

&lt;h3&gt;
  
  
  k3s cluster management
&lt;/h3&gt;

&lt;p&gt;To manage the cluster, open an SSH connection to the k3s-server. Here are some basic kubectl commands:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List the nodes&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@k3s-server:~# kubectl get nodes
NAME                    STATUS   ROLES                  AGE   VERSION
inst-vr4sv-k3s-agents   Ready    &amp;lt;none&amp;gt;                 23m   v1.21.5+k3s2
inst-zkcyl-k3s-agents   Ready    &amp;lt;none&amp;gt;                 23m   v1.21.5+k3s2
k3s-server              Ready    control-plane,master   23m   v1.21.5+k3s2
inst-fhayc-k3s-agents   Ready    &amp;lt;none&amp;gt;                 23m   v1.21.5+k3s2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Get the pods running on kube-system namespace&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n kube-system
NAME                                      READY   STATUS      RESTARTS   AGE
coredns-7448499f4d-jwgzt                  1/1     Running     0          34m
metrics-server-86cbb8457f-qjgr9           1/1     Running     0          34m
local-path-provisioner-5ff76fc89d-56c7n   1/1     Running     0          34m
helm-install-traefik-crd-9ftr8            0/1     Completed   0          34m
helm-install-traefik-2v48n                0/1     Completed   2          34m
svclb-traefik-2x9q9                       2/2     Running     0          33m
svclb-traefik-d72cf                       2/2     Running     0          33m
svclb-traefik-jq5wv                       2/2     Running     0          33m
svclb-traefik-xnhgs                       2/2     Running     0          33m
traefik-97b44b794-4dz2x                   1/1     Running     0          33m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Get the pods running on longhorn-system namespace (optional)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@k3s-server:~# kubectl get pods -n longhorn-system
NAME                                        READY   STATUS             RESTARTS   AGE
longhorn-ui-788fd8cf9d-76x84                1/1     Running            0          29m
longhorn-manager-97vzd                      1/1     Running            0          29m
longhorn-driver-deployer-5dff5c7554-c7wbk   1/1     Running            0          29m
longhorn-manager-sq2xn                      1/1     Running            1          29m
csi-attacher-75588bff58-xv9sn               1/1     Running            0          28m
csi-resizer-5c88bfd4cf-ngm2j                1/1     Running            0          28m
engine-image-ei-d4c780c6-ktvs7              1/1     Running            0          28m
csi-provisioner-669c8cc698-mqvjx            1/1     Running            0          28m
longhorn-csi-plugin-9x5wj                   2/2     Running            0          28m
engine-image-ei-d4c780c6-r7r2t              1/1     Running            0          28m
csi-provisioner-669c8cc698-tvs9r            1/1     Running            0          28m
csi-resizer-5c88bfd4cf-h8g6w                1/1     Running            0          28m
instance-manager-e-7aca498c                 1/1     Running            0          28m
instance-manager-r-98153684                 1/1     Running            0          28m
longhorn-csi-plugin-wf24d                   2/2     Running            0          28m
csi-snapshotter-69f8bc8dcf-n85hq            1/1     Running            0          28m
longhorn-csi-plugin-82hv5                   2/2     Running            0          28m
longhorn-csi-plugin-rlcw2                   2/2     Running            0          28m
longhorn-manager-rttww                      1/1     Running            1          29m
instance-manager-e-e43d97f9                 1/1     Running            0          28m
longhorn-manager-47zxl                      1/1     Running            1          29m
instance-manager-r-de0dc83b                 1/1     Running            0          28m
engine-image-ei-d4c780c6-hp4mb              1/1     Running            0          28m
engine-image-ei-d4c780c6-hcwpg              1/1     Running            0          28m
instance-manager-r-464299ad                 1/1     Running            0          28m
instance-manager-e-ccb8666b                 1/1     Running            0          28m
instance-manager-r-3b35070e                 1/1     Running            0          28m
instance-manager-e-9d117ead                 1/1     Running            0          28m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy an example application
&lt;/h3&gt;

&lt;p&gt;We will now deploy an example app (nginx) with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;persistent storage (longhorn NFS)&lt;/li&gt;
&lt;li&gt;an ingress route, exposed by the &lt;a href="https://doc.traefik.io/traefik/providers/kubernetes-ingress/"&gt;Traefik&lt;/a&gt; ingress controller&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create the deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;cat &amp;lt;&amp;lt;EOF &amp;gt; test-deployment.yml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn-volv-pvc&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteMany&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2Gi&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metrics&lt;/span&gt;
      &lt;span class="na"&gt;department&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oci&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metrics&lt;/span&gt;
        &lt;span class="na"&gt;department&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oci&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;volume-test&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:stable-alpine&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ls&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/data/&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;volv&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;volv&lt;/span&gt;
        &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn-volv-pvc&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-cip-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metrics&lt;/span&gt;
    &lt;span class="na"&gt;department&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oci&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;foo"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example.net&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
            &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;my-cip-service&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="s"&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
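&lt;p&gt;A note on the &lt;code&gt;livenessProbe&lt;/code&gt; above: it simply runs &lt;code&gt;ls /data/&lt;/code&gt; inside the container, so the pod is considered healthy only while the Longhorn volume is mounted at /data. The check can be simulated locally (paths are illustrative):&lt;br&gt;
&lt;/p&gt;

```shell
# Simulate the probe: "ls <path>" exits 0 only if the path exists,
# which is the signal kubelet uses to decide the container is alive.
probe_dir="$(mktemp -d)/data"   # illustrative path, not created yet

ls "$probe_dir" >/dev/null 2>&1 && echo alive || echo restart   # → restart

mkdir -p "$probe_dir"           # "volume mounted": the directory now exists
ls "$probe_dir" >/dev/null 2>&1 && echo alive || echo restart   # → alive
```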



&lt;p&gt;Apply the deployment with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f test-deployment.yml 
persistentvolumeclaim/longhorn-volv-pvc created
deployment.apps/my-deployment created
service/my-cip-service created
ingress.networking.k8s.io/foo created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Monitor the deployment status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-deployment   3/3     3            3           48s

kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
my-deployment-5d75fc6b89-xmxzt   1/1     Running   0          46s
my-deployment-5d75fc6b89-xbll2   1/1     Running   0          46s
my-deployment-5d75fc6b89-4g2w8   1/1     Running   0          46s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now test the reachability of the example.net domain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -H 'Host: example.net' http://152.x.x.x # &amp;lt;- This is the network load balancer IP

&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;
&amp;lt;title&amp;gt;Welcome to nginx!&amp;lt;/title&amp;gt;
&amp;lt;style&amp;gt;
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
&amp;lt;/style&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;h1&amp;gt;Welcome to nginx!&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;For online documentation and support please refer to
&amp;lt;a href="http://nginx.org/"&amp;gt;nginx.org&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
Commercial support is available at
&amp;lt;a href="http://nginx.com/"&amp;gt;nginx.com&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Thank you for using nginx.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;

&lt;p&gt;To clean up/destroy our infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>terraform</category>
      <category>oracle</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Deploy multiple Oracle compute instances using an instance pool and terraform</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Fri, 29 Oct 2021 15:49:44 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/deploy-multiple-oracle-compute-instances-using-an-instance-pool-and-terraform-a5o</link>
      <guid>https://dev.to/garutilorenzo/deploy-multiple-oracle-compute-instances-using-an-instance-pool-and-terraform-a5o</guid>
      <description>&lt;p&gt;Welcome to the third chapter of the series dedicated to Oracle Cloud Infrastructure and Terraform. If you missed the previous chapters, you can find the links here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/garutilorenzo/setup-terraform-oracle-cloud-provider-3eck"&gt;Setup Terraform Oracle Cloud provider&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa"&gt;Deploy an Oracle Cloud compute instance using terraform&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After successfully launching our &lt;a href="https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa"&gt;first instance&lt;/a&gt;, we are now ready for a more complex example.&lt;/p&gt;

&lt;h3&gt;
  
  
  Environment setup
&lt;/h3&gt;

&lt;p&gt;In &lt;a href="https://github.com/garutilorenzo/oracle-cloud-terraform-examples"&gt;our&lt;/a&gt; repository, change into the instance-pool directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd oracle-cloud-terraform-examples/instance-pool/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modify vars.tf in the same way we modified it in the simple instance &lt;a href="https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa"&gt;example&lt;/a&gt; (to set up vars.tf from scratch, follow the Variables setup section).&lt;/p&gt;

&lt;h3&gt;
  
  
  Extra variables
&lt;/h3&gt;

&lt;p&gt;We have some extra variables in this example:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Variable&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;fault_domains&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;This variable is a list of fault domains where our instance pool will deploy our instances&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;instance_pool_size&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;2&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Number of instances to launch in the instance pool&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
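&lt;p&gt;As a sketch, these extra variables can also be set without editing vars.tf, via an auto-loaded tfvars file (the filename and values here are illustrative):&lt;br&gt;
&lt;/p&gt;

```shell
# Write an auto-loaded override file; Terraform picks up any *.auto.tfvars
# in the working directory, so the defaults in vars.tf stay untouched.
cat > pool.auto.tfvars <<'EOF'
instance_pool_size = 3
fault_domains      = ["FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"]
EOF

grep -c '=' pool.auto.tfvars   # → 2
```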

&lt;h3&gt;
  
  
  Infrastructure overview
&lt;/h3&gt;

&lt;p&gt;The infrastructure is the same as the simple instance &lt;a href="https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa"&gt;example&lt;/a&gt; but we have also:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one network load balancer, that will route the traffic from the internet to our instance pool instances&lt;/li&gt;
&lt;li&gt;one instance configuration used by the instance pool&lt;/li&gt;
&lt;li&gt;one instance pool&lt;/li&gt;
&lt;li&gt;two Oracle compute instances launched by the instance pool&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The network load balancer consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one listener (port 80)&lt;/li&gt;
&lt;li&gt;one backend set&lt;/li&gt;
&lt;li&gt;one backend for each of the instances in the instance pool&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Notes
&lt;/h3&gt;

&lt;p&gt;Some important notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By default, the firewall on the compute instances is disabled; in some tests the firewall caused problems&lt;/li&gt;
&lt;li&gt;Nginx is installed by default (it is used to test the security list rules and the network load balancer setup)&lt;/li&gt;
&lt;li&gt;The operating system used is Ubuntu 20.04&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Deploy
&lt;/h3&gt;

&lt;p&gt;Now &lt;a href="https://dev.to/garutilorenzo/setup-terraform-oracle-cloud-provider-3eck"&gt;create&lt;/a&gt; the terraform.tfvars file (see the Terraform setup section) and initialize Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/oci...
- Installing hashicorp/oci v4.50.0...
- Installed hashicorp/oci v4.50.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are now ready to deploy our infrastructure. First, review the execution plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # oci_core_default_route_table.default_oci_core_default_route_table will be created
  + resource "oci_core_default_route_table" "default_oci_core_default_route_table" {
      + compartment_id             = (known after apply)
      + defined_tags               = (known after apply)
      + display_name               = (known after apply)
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + manage_default_resource_id = (known after apply)
      + state                      = (known after apply)
      + time_created               = (known after apply)

      + route_rules {
          + cidr_block        = (known after apply)
          + description       = (known after apply)
          + destination       = "0.0.0.0/0"
          + destination_type  = "CIDR_BLOCK"
          + network_entity_id = (known after apply)
        }
    }


&amp;lt;TRUNCATED OUTPUT&amp;gt;

Plan: 14 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + instances_ips = [
      + (known after apply),
      + (known after apply),
    ]
  + lb_ip         = (known after apply)

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there are no errors, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 &amp;lt;= read (data resources)


Terraform will perform the following actions:

  # data.oci_core_instance.ubuntu_instance_pool_instances_ips[0] will be read during apply
  # (config refers to values not yet known)
 &amp;lt;= data "oci_core_instance" "ubuntu_instance_pool_instances_ips"  {
      + agent_config                        = (known after apply)
      + async                               = (known after apply)
      + availability_config                 = (known after apply)
      + availability_domain                 = (known after apply)
      + boot_volume_id                      = (known after apply)
      + capacity_reservation_id             = (known after apply)
      + compartment_id                      = (known after apply)
      + create_vnic_details                 = (known after apply)
      + dedicated_vm_host_id                = (known after apply)
      + defined_tags                        = (known after apply)
      + display_name                        = (known after apply)
      + extended_metadata                   = (known after apply)
      + fault_domain                        = (known after apply)
      + freeform_tags                       = (known after apply)
      + hostname_label                      = (known after apply)
      + id                                  = (known after apply)
      + image                               = (known after apply)
      + instance_id                         = (known after apply)
      + instance_options                    = (known after apply)
      + ipxe_script                         = (known after apply)
      + is_pv_encryption_in_transit_enabled = (known after apply)
      + launch_mode                         = (known after apply)
      + launch_options                      = (known after apply)
      + metadata                            = (known after apply)
      + platform_config                     = (known after apply)
      + preemptible_instance_config         = (known after apply)
      + preserve_boot_volume                = (known after apply)
      + private_ip                          = (known after apply)
      + public_ip                           = (known after apply)
      + region                              = (known after apply)
      + shape                               = (known after apply)
      + shape_config                        = (known after apply)
      + source_details                      = (known after apply)
      + state                               = (known after apply)
      + subnet_id                           = (known after apply)
      + system_tags                         = (known after apply)
      + time_created                        = (known after apply)
      + time_maintenance_reboot_due         = (known after apply)
    }

&amp;lt;TRUNCATED OUTPUT&amp;gt;

oci_network_load_balancer_listener.test_listener: Creation complete after 25s [id=networkLoadBalancers/ocid1.networkloadbalancer.oc1.eu-zurich-1.amaaaaaa5kjm7pyarkfapfnqqxrwaowlnmj5mnd3etmig5nfcwd3m5yb7uha/listeners/LB%20test%20listener]
oci_network_load_balancer_backend.test_backend[1]: Still creating... [31s elapsed]
oci_network_load_balancer_backend.test_backend[0]: Still creating... [31s elapsed]
oci_network_load_balancer_backend.test_backend[0]: Still creating... [41s elapsed]
oci_network_load_balancer_backend.test_backend[1]: Still creating... [41s elapsed]
oci_network_load_balancer_backend.test_backend[0]: Creation complete after 42s [id=networkLoadBalancers/ocid1.networkloadbalancer.oc1.eu-zurich-1.amaaaaaa5kjm7pyarkfapfnqqxrwaowlnmj5mnd3etmig5nfcwd3m5yb7uha/backendSets/Backend%20set%20test/backends/ocid1.instance.oc1.eu-zurich-1.an5heljr5kjm7pycu5exolhnubsq5isqo6nveddlmlsblkz7geb6vbwsvbtq.80]
oci_network_load_balancer_backend.test_backend[1]: Still creating... [51s elapsed]
oci_network_load_balancer_backend.test_backend[1]: Still creating... [1m1s elapsed]
oci_network_load_balancer_backend.test_backend[1]: Still creating... [1m11s elapsed]
oci_network_load_balancer_backend.test_backend[1]: Creation complete after 1m14s [id=networkLoadBalancers/ocid1.networkloadbalancer.oc1.eu-zurich-1.amaaaaaa5kjm7pyarkfapfnqqxrwaowlnmj5mnd3etmig5nfcwd3m5yb7uha/backendSets/Backend%20set%20test/backends/ocid1.instance.oc1.eu-zurich-1.an5heljr5kjm7pycft5ixge6ssknpyb5s6q3eihuccogpqrvv2ntqdlww72a.80]

Apply complete! Resources: 14 added, 0 changed, 0 destroyed.

Outputs:

instances_ips = [
  "132.x.x.x",
  "152.x.x.x",
]
lb_ip = tolist([
  {
    "ip_address" = "140.x.x.x"
    "is_public" = true
    "reserved_ip" = tolist([])
  },
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can SSH into one of the deployed instances:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh ubuntu@132.x.x.x

...
35 updates can be applied immediately.
25 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo &amp;lt;command&amp;gt;".
See "man sudo_root" for details.

ubuntu@inst-ikudx-ubuntu-instance-pool:~$
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few minutes (at least one backend must be in a healthy state), the network load balancer will also respond to our requests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -v 140.x.x.x
*   Trying 140.x.x.x:80...
* TCP_NODELAY set
* Connected to 140.x.x.x (140.x.x.x) port 80 (#0)
&amp;gt; GET / HTTP/1.1
&amp;gt; Host: 140.x.x.x
&amp;gt; User-Agent: curl/7.68.0
&amp;gt; Accept: */*
&amp;gt; 
* Mark bundle as not supporting multiuse
&amp;lt; HTTP/1.1 200 OK
&amp;lt; Server: nginx/1.18.0 (Ubuntu)
&amp;lt; Date: Wed, 27 Oct 2021 15:39:51 GMT
&amp;lt; Content-Type: text/html
&amp;lt; Content-Length: 672
&amp;lt; Last-Modified: Wed, 27 Oct 2021 15:33:26 GMT
&amp;lt; Connection: keep-alive
&amp;lt; ETag: "61797146-2a0"
&amp;lt; Accept-Ranges: bytes
...
...
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
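&lt;p&gt;Rather than retrying curl by hand while the backends come up, a small poll loop can wait for the first successful response (a sketch; the load balancer IP, retry count, and interval are placeholders):&lt;br&gt;
&lt;/p&gt;

```shell
# wait_for <retries> <sleep-seconds> <command...>
# Re-runs the command until it exits 0 or the retries are exhausted.
wait_for() {
  retries="$1"; interval="$2"; shift 2
  i=0
  while [ "$i" -lt "$retries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$interval"
  done
  return 1
}

# Usage against the network load balancer (placeholder IP):
# wait_for 30 10 curl -fsS http://140.x.x.x/ >/dev/null && echo "backend up"
```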



&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;

&lt;p&gt;To clean up/destroy our infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>terraform</category>
      <category>oracle</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Deploy an Oracle Cloud compute instance using terraform</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Fri, 29 Oct 2021 15:48:38 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa</link>
      <guid>https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa</guid>
      <description>&lt;p&gt;Welcome to the second part of the series dedicated to Oracle Cloud Infrastructure and Terraform. If you missed it, you can read the first part &lt;a href="https://dev.to/garutilorenzo/setup-terraform-oracle-cloud-provider-3eck"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once we have &lt;a href="https://dev.to/garutilorenzo/setup-terraform-oracle-cloud-provider-3eck"&gt;set up&lt;/a&gt; our Terraform Oracle Cloud provider, we are ready to deploy our first instance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clone the repository
&lt;/h3&gt;

&lt;p&gt;If you haven't already done so, clone &lt;a href="https://github.com/garutilorenzo/oracle-cloud-terraform-examples"&gt;this&lt;/a&gt; repository and enter the simple-instance directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/garutilorenzo/oracle-cloud-terraform-examples.git
cd oracle-cloud-terraform-examples/simple-instance/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Infrastructure overview
&lt;/h3&gt;

&lt;p&gt;This example will deploy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one VCN network (CIDR 10.0.0.0/16); you can customize the CIDR by changing the oci_core_vcn_cidr variable (see Variables setup)&lt;/li&gt;
&lt;li&gt;two subnets (CIDR 10.0.0.0/24 and CIDR 10.0.1.0/24); you can customize the subnet CIDRs by changing the oci_core_subnet_cidr10 and oci_core_subnet_cidr11 variables (see Variables setup)&lt;/li&gt;
&lt;li&gt;one internet gateway associated with the VCN network&lt;/li&gt;
&lt;li&gt;one route table associated with the VCN network&lt;/li&gt;
&lt;li&gt;one security list (see notes about security list)&lt;/li&gt;
&lt;li&gt;one Oracle compute instance, VM.Standard.A1.Flex with 6GB of RAM and 1 CPU (ARM processor)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Notes about security:&lt;/strong&gt; the security list rules are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By default, only incoming ICMP, SSH, and HTTP traffic is allowed from your public IP. You can set your public IP in the my_public_ip_address variable.&lt;/li&gt;
&lt;li&gt;By default, all outgoing traffic is allowed&lt;/li&gt;
&lt;li&gt;A second security list (Custom security list) opens all incoming HTTP traffic&lt;/li&gt;
&lt;li&gt;Both the default security list and the custom security list are associated with both subnets&lt;/li&gt;
&lt;li&gt;Traffic from the private VCN subnet is allowed&lt;/li&gt;
&lt;/ul&gt;
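&lt;p&gt;Since the default rules only admit traffic from your own public IP, you need that address before filling in my_public_ip_address. A quick way to fetch and format it (the lookup service is illustrative, and the /32 suffix is an assumption for CIDR-style input):&lt;br&gt;
&lt;/p&gt;

```shell
# Hypothetical helper: turn a bare IPv4 address into /32 CIDR notation,
# in case the security list rule expects a CIDR block (an assumption).
to_cidr() { printf '%s/32\n' "$1"; }

# my_ip="$(curl -s https://ifconfig.me)"   # needs outbound network access
# to_cidr "$my_ip"

to_cidr 203.0.113.10   # → 203.0.113.10/32
```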

&lt;h3&gt;
  
  
  Notes
&lt;/h3&gt;

&lt;p&gt;Some important notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By default, the firewall on the compute instances is disabled; in some tests the firewall caused problems&lt;/li&gt;
&lt;li&gt;Nginx is installed by default (it is used to test the security list rules)&lt;/li&gt;
&lt;li&gt;The operating system used is Ubuntu 20.04&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Variables setup
&lt;/h3&gt;

&lt;p&gt;Before proceeding, we have to modify some variables in vars.tf. The variables to modify are:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Variable&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;region&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;N.D.&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Set the correct region based on your needs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;availability_domain&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;N.D.&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Set your availability domain. You can get the availability domain string in the "&lt;em&gt;Create instance&lt;/em&gt;" form: once you are in the create instance procedure, under the Placement section click "Edit" and copy the string that begins with &lt;em&gt;iAdc:&lt;/em&gt;. Example: iAdc:EU-ZURICH-1-AD-1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;default_fault_domain&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;FAULT-DOMAIN-1&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Set the default fault domain; choose one of: FAULT-DOMAIN-1, FAULT-DOMAIN-2, FAULT-DOMAIN-3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PATH_TO_PUBLIC_KEY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;~/.ssh/id_rsa.pub&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;This variable has to point to your SSH public key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_core_vcn_cidr&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;10.0.0.0/16&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Set the default VCN CIDR&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_core_subnet_cidr10&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;10.0.0.0/24&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;set the default subnet cidr&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;oci_core_subnet_cidr11&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;10.0.1.0/24&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;set the secondary subnet cidr&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;tutorial_tag_key&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;oracle-tutorial&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;set a key used to tag all the deployed resources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;tutorial_tag_value&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;terraform&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;set the value of the tutorial_tag_key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;my_public_ip_address&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;N.D.&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Set your public IP address&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
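&lt;p&gt;As a minimal sketch, the table above maps onto vars.tf entries like the following (values are illustrative examples, not the repository's defaults; the heredoc writes to a separate file so nothing real is overwritten):&lt;br&gt;
&lt;/p&gt;

```shell
# Illustrative variable blocks in Terraform syntax, written via a heredoc.
cat > vars.example.tf <<'EOF'
variable "region" {
  default = "eu-zurich-1"
}
variable "availability_domain" {
  default = "iAdc:EU-ZURICH-1-AD-1"
}
variable "my_public_ip_address" {
  default = "203.0.113.10"
}
EOF

grep -c '^variable' vars.example.tf   # → 3
```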

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; these variables also have to be set in &lt;a href="https://garutilorenzo.github.io/oracle-cloud-terraform-part3-instance-pool"&gt;part3&lt;/a&gt; and &lt;a href="https://garutilorenzo.github.io/oracle-cloud-terraform-part4-k3s-cluster"&gt;part4&lt;/a&gt;; remember to modify the vars.tf files in those directories as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy
&lt;/h3&gt;

&lt;p&gt;Now &lt;a href="https://garutilorenzo.github.io/oracle-cloud-terraform-part1-setup"&gt;create&lt;/a&gt; the terraform.tfvars file (see the Terraform setup section) and initialize Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/oci...
- Installing hashicorp/oci v4.50.0...
- Installed hashicorp/oci v4.50.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are now ready to launch our first instance. First, review the execution plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # oci_core_default_route_table.default_oci_core_default_route_table will be created
  + resource "oci_core_default_route_table" "default_oci_core_default_route_table" {
      + compartment_id             = (known after apply)
      + defined_tags               = (known after apply)
      + display_name               = (known after apply)
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + manage_default_resource_id = (known after apply)
      + state                      = (known after apply)
      + time_created               = (known after apply)

      + route_rules {
          + cidr_block        = (known after apply)
          + description       = (known after apply)
          + destination       = "0.0.0.0/0"
          + destination_type  = "CIDR_BLOCK"
          + network_entity_id = (known after apply)
        }
    }


&amp;lt;TRUNCATED OUTPUT&amp;gt;

Plan: 8 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + instance_ip = (known after apply)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan shows no errors, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # oci_core_default_route_table.default_oci_core_default_route_table will be created
  + resource "oci_core_default_route_table" "default_oci_core_default_route_table" {
      + compartment_id             = (known after apply)
      + defined_tags               = (known after apply)
      + display_name               = (known after apply)
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + manage_default_resource_id = (known after apply)
      + state                      = (known after apply)
      + time_created               = (known after apply)

      + route_rules {
          + cidr_block        = (known after apply)
          + description       = (known after apply)
          + destination       = "0.0.0.0/0"
          + destination_type  = "CIDR_BLOCK"
          + network_entity_id = (known after apply)
        }
    }

&amp;lt;TRUNCATED OUTPUT&amp;gt;

oci_core_default_route_table.default_oci_core_default_route_table: Creation complete after 1s [id=ocid1.routetable.oc1.eu-zurich-1.aaaaaaaa6jdgqpdgwsbwbnxcnkml2zhcbgtzpm4jynea6vid56p2ywrit3za]
oci_core_subnet.oci_core_subnet11: Creation complete after 1s [id=ocid1.subnet.oc1.eu-zurich-1.aaaaaaaam3443d65qq7aoc7kf4sftrwj3splrp7wiwzvboufoiu5tc5f7iaq]
oci_core_subnet.default_oci_core_subnet10: Creation complete after 4s [id=ocid1.subnet.oc1.eu-zurich-1.aaaaaaaapyx2u7vih7g6hflzvhe2qhs6utx5qqxezjf2lm5uyswrispjtnpq]
oci_core_instance.ubuntu_oci_instance: Creating...
oci_core_instance.ubuntu_oci_instance: Still creating... [10s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [20s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [30s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [40s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [50s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [1m0s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [1m10s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [1m20s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [1m30s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [1m40s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [1m50s elapsed]
oci_core_instance.ubuntu_oci_instance: Still creating... [2m0s elapsed]
oci_core_instance.ubuntu_oci_instance: Creation complete after 2m8s [id=ocid1.instance.oc1.eu-zurich-1.an5heljr5kjm7pycoborgwavcmh5xrjgd3ozciyvcsjsclxbvbtmrrbpzomq]

Apply complete! Resources: 8 added, 0 changed, 0 destroyed.

Outputs:

instance_ip = "152.x.x.x"

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
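&lt;p&gt;As the final note in the output suggests, you can also save the plan to a file and then apply exactly that plan. A minimal sketch of this workflow:&lt;br&gt;
&lt;/p&gt;

```
terraform plan -out=tfplan
terraform apply tfplan
```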



&lt;p&gt;Now we can SSH into our instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh ubuntu@152.x.x.x

...
35 updates can be applied immediately.
25 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo &amp;lt;command&amp;gt;".
See "man sudo_root" for details.

ubuntu@ubuntu-instance:~$ 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we have set up all the variables correctly, we are now inside our Oracle compute instance. We can now check whether nginx is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@ubuntu-instance:~$ systemctl status nginx.service 
● nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-10-29 09:11:21 UTC; 3h 58min ago
       Docs: man:nginx(8)
   Main PID: 8469 (nginx)
      Tasks: 2 (limit: 6861)
     Memory: 2.3M
     CGroup: /system.slice/nginx.service
             ├─8469 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
             └─8470 nginx: worker process

Oct 29 09:11:21 ubuntu-instance systemd[1]: Starting A high performance web server and a reverse proxy server...
Oct 29 09:11:21 ubuntu-instance systemd[1]: Started A high performance web server and a reverse proxy server.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From our workstation we can now try to reach our public IP address on port 80 (HTTP) via browser or curl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://152.x.x.x
&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;
&amp;lt;title&amp;gt;Welcome to nginx!&amp;lt;/title&amp;gt;
&amp;lt;style&amp;gt;
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
&amp;lt;/style&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;h1&amp;gt;Welcome to nginx!&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&amp;lt;/p&amp;gt;

...
...
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;

&lt;p&gt;To clean up and destroy our infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy

&amp;lt;TRUNCATED OUTPUT&amp;gt;

Plan: 0 to add, 0 to change, 8 to destroy.

Changes to Outputs:
  - instance_ip = "152.x.x.x" -&amp;gt; null

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: 

&amp;lt;TRUNCATED OUTPUT&amp;gt;


oci_core_instance.ubuntu_oci_instance: Destruction complete after 1m6s
oci_core_subnet.default_oci_core_subnet10: Destroying... [id=ocid1.subnet.oc1.eu-zurich-1.aaaaaaaapyx2u7vih7g6hflzvhe2qhs6utx5qqxezjf2lm5uyswrispjtnpq]
oci_core_subnet.default_oci_core_subnet10: Destruction complete after 0s
oci_core_security_list.custom_security_list: Destroying... [id=ocid1.securitylist.oc1.eu-zurich-1.aaaaaaaa5qfkat6fldch3jkkderuwruxjnxdgxjq5auennrv2krwqvnsqt3q]
oci_core_default_security_list.default_security_list: Destroying... [id=ocid1.securitylist.oc1.eu-zurich-1.aaaaaaaaklnlfa5y36tmdxyimbfiobgegmrikbt3mnaoaf3kot4i74yxtqga]
oci_core_security_list.custom_security_list: Destruction complete after 1s
oci_core_default_security_list.default_security_list: Destruction complete after 1s
oci_core_vcn.default_oci_core_vcn: Destroying... [id=ocid1.vcn.oc1.eu-zurich-1.amaaaaaa5kjm7pyalfrmvv5hlg5bo35v4pgnqrkdrjaepbaqjt3c32ai7qaq]
oci_core_vcn.default_oci_core_vcn: Destruction complete after 1s

Destroy complete! Resources: 8 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>terraform</category>
      <category>oracle</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Setup Terraform Oracle Cloud provider</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Fri, 29 Oct 2021 15:48:09 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/setup-terraform-oracle-cloud-provider-3eck</link>
      <guid>https://dev.to/garutilorenzo/setup-terraform-oracle-cloud-provider-3eck</guid>
      <description>&lt;p&gt;This is the first post of a series dedicated to Oracle Cloud infrastructure and Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Oracle Cloud series index
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Setup Terraform Oracle Cloud provider&lt;/li&gt;
&lt;li&gt;Deploy an Oracle Cloud compute instance using terraform. Go to &lt;a href="https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa"&gt;part 2&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Deploy multiple Oracle Cloud compute instances using an instance pool using terraform Go to &lt;a href="https://dev.to/garutilorenzo/deploy-multiple-oracle-compute-instances-using-an-instance-pool-and-terraform-4doa"&gt;part 3&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Deploy a k3s cluster on Oracle Cloud using terraform &lt;a href="https://dev.to/garutilorenzo/deploy-a-k3s-cluster-on-oracle-cloud-using-terraform-710"&gt;part 4&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Sign-up to Oracle Cloud
&lt;/h2&gt;

&lt;p&gt;Go to &lt;a href="https://cloud.oracle.com/"&gt;https://cloud.oracle.com/&lt;/a&gt; and create a new account:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pt3f6Zl---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-sign-up.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pt3f6Zl---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-sign-up.png" alt="Oracle Signup Page" width="536" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Account setup
&lt;/h2&gt;

&lt;p&gt;Once you are logged in, create a new user and a new group with limited grants. To do so, go to Identity &amp;amp; Security -&amp;gt; Identity:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FaBU-Hke--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-identity.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FaBU-Hke--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-identity.png" alt="Oracle Identity" width="609" height="667"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under Group, create a new group called terraform:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kbKoNRVC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-group.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kbKoNRVC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-group.png" alt="Oracle Group" width="425" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;click "Create"&lt;/p&gt;

&lt;p&gt;Under Policys, create a new policy named terraform-policy&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3BPjtBEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-policy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3BPjtBEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-policy.png" alt="Oracle Policy" width="617" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the description to "terraform-users-policy", click "Show manual editor", and paste these lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Allow group terraform to manage virtual-network-family in tenancy
Allow group terraform to manage instance-family in tenancy
Allow group terraform to manage compute-management-family in tenancy
Allow group terraform to manage volume-family in tenancy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy may be too permissive; take it only as an example for this tutorial. You can find more details &lt;a href="https://docs.oracle.com/en-us/iaas/Content/Identity/Concepts/policysyntax.htm#three"&gt;here&lt;/a&gt;&lt;/p&gt;
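&lt;p&gt;For example, a tighter (purely illustrative) variant would scope the grants to a single compartment instead of the whole tenancy. Here &lt;em&gt;terraform-compartment&lt;/em&gt; is a hypothetical compartment name:&lt;br&gt;
&lt;/p&gt;

```
Allow group terraform to manage virtual-network-family in compartment terraform-compartment
Allow group terraform to manage instance-family in compartment terraform-compartment
```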

&lt;p&gt;Now, under Users, create a new user called terraform:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LtXZqjYM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-user.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LtXZqjYM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-user.png" alt="Oracle User" width="794" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose IAM user, set the name to terraform and the description to "terraform user".&lt;/p&gt;

&lt;p&gt;Now in the user details (click on terraform in the Users table), click "Edit user capabilities" and uncheck:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local Password&lt;/li&gt;
&lt;li&gt;SMTP credentials&lt;/li&gt;
&lt;li&gt;Customer Secret Keys&lt;/li&gt;
&lt;li&gt;OAuth 2.0 Client Credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OgaiLpt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-user-capabilities.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OgaiLpt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-user-capabilities.png" alt="Oracle User" width="412" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now click "Add User to Group" and choose the terraform group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rwFk4HCG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-user-group.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rwFk4HCG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-user-group.png" alt="Oracle user detail" width="880" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  RSA key generation
&lt;/h3&gt;

&lt;p&gt;To use Terraform with Oracle Cloud Infrastructure you need to generate an RSA key pair. Generate it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa -out ~/.oci/terraform-oracle-cloud.pem 4096
chmod 600 ~/.oci/terraform-oracle-cloud.pem
openssl rsa -pubout -in ~/.oci/terraform-oracle-cloud.pem -out ~/.oci/terraform-oracle-cloud_public.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; the path of the private key (&lt;em&gt;~/.oci/terraform-oracle-cloud.pem&lt;/em&gt;) will be used in the &lt;em&gt;terraform.tfvars&lt;/em&gt; file read by the Oracle provider plugin, so please take note of it.&lt;/p&gt;

&lt;p&gt;Now copy the content of  ~/.oci/terraform-oracle-cloud_public.pem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat  ~/.oci/terraform-oracle-cloud_public.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the Oracle Cloud Console, under Users select the terraform user. Under API Keys click on "Add API Key" and paste the content of your public RSA key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c1yINPrl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-user-key.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c1yINPrl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/oracle-user-key.png" alt="Oracle user detail" width="880" height="502"&gt;&lt;/a&gt;&lt;/p&gt;
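&lt;p&gt;After the upload, the Console shows an MD5 fingerprint next to the API key. You can compute the same fingerprint locally to double-check that the right key was uploaded (a sketch, assuming the key paths generated above):&lt;br&gt;
&lt;/p&gt;

```shell
# Compute the OCI-style fingerprint: the colon-separated MD5 digest of the
# public key in DER form. The value should match the one shown in the Console.
openssl rsa -pubout -outform DER -in ~/.oci/terraform-oracle-cloud.pem 2&gt;/dev/null \
  | openssl md5 -c \
  | awk '{print $NF}'
```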

&lt;p&gt;Now you should see your configuration details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[DEFAULT]
user=&amp;lt;user ocid...&amp;gt;
fingerprint=&amp;lt;fingerprint..&amp;gt;
tenancy=&amp;lt;tenancy ocid...&amp;gt;
region=&amp;lt;region&amp;gt;
key_file=&amp;lt;path to your private key file&amp;gt; # ~/.oci/terraform-oracle-cloud.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Terraform setup
&lt;/h3&gt;

&lt;p&gt;The first step is to &lt;a href="https://learn.hashicorp.com/tutorials/terraform/install-cli"&gt;install&lt;/a&gt; terraform. Once terraform is installed download &lt;a href="https://github.com/garutilorenzo/oracle-cloud-terraform-examples"&gt;this&lt;/a&gt; repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/garutilorenzo/oracle-cloud-terraform-examples.git
cd oracle-cloud-terraform-examples/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the root directory of this repository you will find three subdirectories; move into the simple-instance directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd simple-instance/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This directory contains all the files necessary to deploy our first instance; we will look at them in more detail in the &lt;a href="https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa"&gt;next&lt;/a&gt; post.&lt;/p&gt;

&lt;p&gt;Now in this directory we have to create a file named "terraform.tfvars":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch terraform.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the file and paste your configuration details; the file will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fingerprint      = &amp;lt;fingerprint..&amp;gt;
private_key_path = "~/.oci/terraform-oracle-cloud.pem"
user_ocid        = "&amp;lt;user ocid...&amp;gt;"
tenancy_ocid     = &amp;lt;tenecny ocid...&amp;gt;
compartment_ocid = &amp;lt;compartment ocid...&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; In this example the compartment_ocid is the same as the tenancy_ocid (resources are created in the root compartment).&lt;/p&gt;
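&lt;p&gt;For reference, the repository wires these variables into the OCI provider roughly like this (a sketch; the exact variable names in the repository may differ):&lt;br&gt;
&lt;/p&gt;

```
provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  private_key_path = var.private_key_path
  fingerprint      = var.fingerprint
  region           = var.region
}
```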

&lt;p&gt;We have now set up the Terraform Oracle provider and are ready for our &lt;a href="https://dev.to/garutilorenzo/deploy-an-oracle-cloud-compute-instance-using-terraform-1hfa"&gt;first&lt;/a&gt; deployment.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>oracle</category>
      <category>cloud</category>
    </item>
    <item>
      <title>A bash solution for docker and iptables conflict</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Thu, 14 Oct 2021 15:14:30 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/a-bash-solution-for-docker-and-iptables-conflict-5gac</link>
      <guid>https://dev.to/garutilorenzo/a-bash-solution-for-docker-and-iptables-conflict-5gac</guid>
      <description>&lt;p&gt;If you’ve ever tried to setup firewall rules on the same machine where docker daemon is running you may have noticed that docker (by default) manipulate your iptables chains.&lt;br&gt;
If you want the full control of your iptables rules this might be a problem.&lt;/p&gt;
&lt;h3&gt;
  
  
  Docker and iptables
&lt;/h3&gt;

&lt;p&gt;Docker uses the iptables "nat" table to route packets from and to its containers, and the "filter" table for isolation. By default Docker creates several chains in your iptables setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo iptables -L

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy DROP)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-INGRESS (0 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now suppose, for example, that we need to expose an nginx container to the world:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --name some-nginx -d -p 8080:80 nginx:latest
47a12adff13aa7609020a1aa0863b0dff192fbcf29507788a594e8b098ffe47a

docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS                                   NAMES
47a12adff13a   nginx:latest   "/docker-entrypoint.…"   27 seconds ago   Up 24 seconds   0.0.0.0:8080-&amp;gt;80/tcp, :::8080-&amp;gt;80/tcp   some-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now reach the nginx default page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -v http://192.168.25.200:8080

*   Trying 192.168.25.200:8080...
* TCP_NODELAY set
* Connected to 192.168.25.200 (192.168.25.200) port 8080 (#0)
&amp;gt; GET / HTTP/1.1
&amp;gt; Host: 192.168.25.200:8080
&amp;gt; User-Agent: curl/7.68.0
&amp;gt; Accept: */*
&amp;gt; 
* Mark bundle as not supporting multiuse
&amp;lt; HTTP/1.1 200 OK
&amp;lt; Server: nginx/1.21.1
&amp;lt; Date: Thu, 14 Oct 2021 10:31:38 GMT
&amp;lt; Content-Type: text/html
&amp;lt; Content-Length: 612
&amp;lt; Last-Modified: Tue, 06 Jul 2021 14:59:17 GMT
&amp;lt; Connection: keep-alive
&amp;lt; ETag: "60e46fc5-264"
&amp;lt; Accept-Ranges: bytes
&amp;lt; 
&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;
&amp;lt;title&amp;gt;Welcome to nginx!&amp;lt;/title&amp;gt;
&amp;lt;style&amp;gt;
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
...
* Connection #0 to host 192.168.25.200 left intact
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; the connection test is made using an external machine, not the same machine where the docker container is running.&lt;/p&gt;

&lt;p&gt;The "magic" iptables rules added also allow our containers to reach the outside world:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm nginx curl ipinfo.io/ip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    15  100    15    0     0     94      0 --:--:-- --:--:-- --:--:--    94

1.2.3.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now check what happened to our iptables rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iptables -L

...
Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:http
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A new rule has appeared, and it is not the only rule added to our chains.&lt;/p&gt;

&lt;p&gt;To get a more detailed view of our iptables chains we can dump the full rule set with &lt;em&gt;iptables-save&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Generated by iptables-save v1.8.4 on Thu Oct 14 12:32:46 2021
*mangle
:PREROUTING ACCEPT [33102:3022248]
:INPUT ACCEPT [33102:3022248]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [32349:12119113]
:POSTROUTING ACCEPT [32357:12120329]
COMMIT
# Completed on Thu Oct 14 12:32:46 2021
# Generated by iptables-save v1.8.4 on Thu Oct 14 12:32:46 2021
*nat
:PREROUTING ACCEPT [1:78]
:INPUT ACCEPT [1:78]
:OUTPUT ACCEPT [13:1118]
:POSTROUTING ACCEPT [13:1118]
:DOCKER - [0:0]
:DOCKER-INGRESS - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
COMMIT
# Completed on Thu Oct 14 12:32:46 2021
# Generated by iptables-save v1.8.4 on Thu Oct 14 12:32:46 2021
*filter
:INPUT ACCEPT [4758:361293]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [4622:357552]
:DOCKER - [0:0]
:DOCKER-INGRESS - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Thu Oct 14 12:32:46 2021
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the dump we can see several other rules added by Docker:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;nat table&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;filter table&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To explore in detail how iptables and Docker work together, see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker &lt;a href="https://docs.docker.com/network/iptables/"&gt;docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docker forum &lt;a href="https://forums.docker.com/t/understanding-iptables-rules-added-by-docker/77210"&gt;question&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://gist.github.com/x-yuri/abf90a18895c62f8d4c9e4c0f7a5c188"&gt;gist&lt;/a&gt; from x-yuri &lt;/li&gt;
&lt;li&gt;argus-sec.com &lt;a href="https://argus-sec.com/docker-networking-behind-the-scenes/"&gt;post&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
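&lt;p&gt;As the Docker documentation above explains, rules that should survive Docker's manipulation belong in the &lt;em&gt;DOCKER-USER&lt;/em&gt; chain, which is evaluated before Docker's own forwarding rules. For example (illustrative only, assuming &lt;em&gt;eth0&lt;/em&gt; is the external interface and the command is run as root):&lt;br&gt;
&lt;/p&gt;

```
# Drop traffic to published container ports unless it comes from our LAN
iptables -I DOCKER-USER -i eth0 ! -s 192.168.25.0/24 -j DROP
```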

&lt;h3&gt;
  
  
  The problem
&lt;/h3&gt;

&lt;p&gt;But what happens if we stop and restart our firewall?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl stop ufw|firewalld # &amp;lt;- the service (ufw or firewalld) may change from distro to distro
systemctl stop ufw|firewalld


curl -v http://192.168.25.200:8080
*   Trying 192.168.25.200:8080...
* TCP_NODELAY set


docker run --rm nginx curl ipinfo.io/ip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;our container is not reachable from the outside world&lt;/li&gt;
&lt;li&gt;our container is not able to reach internet&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The solution
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/garutilorenzo/iptables-docker"&gt;solution&lt;/a&gt; to this problem is a simple bash script (combined with an awk script) to manage our iptables rules.&lt;br&gt;
In short, the script parses the output of the &lt;em&gt;iptables-save&lt;/em&gt; command and preserves a set of chains. The preserved chains are:&lt;/p&gt;

&lt;p&gt;For the nat table:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;POSTROUTING&lt;/li&gt;
&lt;li&gt;PREROUTING&lt;/li&gt;
&lt;li&gt;DOCKER&lt;/li&gt;
&lt;li&gt;DOCKER-INGRESS&lt;/li&gt;
&lt;li&gt;OUTPUT&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the filter table:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FORWARD&lt;/li&gt;
&lt;li&gt;DOCKER-ISOLATION-STAGE-1&lt;/li&gt;
&lt;li&gt;DOCKER-ISOLATION-STAGE-2&lt;/li&gt;
&lt;li&gt;DOCKER&lt;/li&gt;
&lt;li&gt;DOCKER-INGRESS&lt;/li&gt;
&lt;li&gt;DOCKER-USER&lt;/li&gt;
&lt;/ul&gt;
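&lt;p&gt;The filtering idea can be sketched in a few lines of awk (a simplified illustration, not the actual script from the repository): read an &lt;em&gt;iptables-save&lt;/em&gt; dump and keep only the table markers, the preserved chain declarations, and their rules:&lt;br&gt;
&lt;/p&gt;

```shell
# Build a tiny sample iptables-save dump to filter (avoids needing root).
printf '%s\n' \
  '*filter' \
  ':INPUT ACCEPT [0:0]' \
  ':FORWARD DROP [0:0]' \
  ':DOCKER - [0:0]' \
  ':DOCKER-USER - [0:0]' \
  '-A INPUT -p tcp --dport 22 -j ACCEPT' \
  '-A FORWARD -j DOCKER-USER' \
  '-A DOCKER -d 172.17.0.2/32 -p tcp --dport 80 -j ACCEPT' \
  'COMMIT' > /tmp/rules.dump

# Keep table markers (*filter, COMMIT), the preserved chain declarations,
# and the rules belonging to those chains; drop everything else.
awk '/^\*/ || /^COMMIT/ || /^:(FORWARD|DOCKER)/ || /^-A (FORWARD|DOCKER)/' /tmp/rules.dump
```

On a real system the input would come from &lt;em&gt;iptables-save&lt;/em&gt; and the filtered result would be fed back through &lt;em&gt;iptables-restore&lt;/em&gt;.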
&lt;h3&gt;
  
  
  Install iptables-docker
&lt;/h3&gt;

&lt;p&gt;The first step is to clone &lt;a href="https://github.com/garutilorenzo/iptables-docker"&gt;this&lt;/a&gt; repository.&lt;/p&gt;
&lt;h4&gt;
  
  
  Local install (sh)
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; this kind of install uses a static file (src/iptables-docker.sh). By default &lt;strong&gt;only&lt;/strong&gt; SSH access to the local machine is allowed. To allow other traffic you have to manually edit this file with your own rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    # Other firewall rules
    # insert here your firewall rules
    $IPT -A INPUT -p tcp --dport 1234 -m state --state NEW -s 0.0.0.0/0 -j ACCEPT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE2&lt;/strong&gt; if you use a swarm cluster, uncomment the lines under &lt;em&gt;Swarm mode - uncomment to enable swarm access (adjust source lan)&lt;/em&gt; and adjust your LAN subnet.&lt;/p&gt;
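&lt;p&gt;For reference, the swarm section opens the standard Docker Swarm ports. The fragment below is only a sketch of what the uncommented lines look like; the 192.168.1.0/24 subnet is an example value and &lt;em&gt;$IPT&lt;/em&gt; is the iptables variable already used by the script:&lt;/p&gt;

```shell
# Sketch of the swarm section of src/iptables-docker.sh (illustrative values):
# standard Docker Swarm ports, allowed only from the local LAN.
SWARM_SRC=192.168.1.0/24
$IPT -A INPUT -p tcp --dport 2377 -s $SWARM_SRC -j ACCEPT  # cluster management
$IPT -A INPUT -p tcp --dport 7946 -s $SWARM_SRC -j ACCEPT  # node communication (TCP)
$IPT -A INPUT -p udp --dport 7946 -s $SWARM_SRC -j ACCEPT  # node communication (UDP)
$IPT -A INPUT -p udp --dport 4789 -s $SWARM_SRC -j ACCEPT  # overlay network (VXLAN)
```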

&lt;p&gt;To install iptables-docker on a local machine, clone &lt;a href="https://github.com/garutilorenzo/iptables-docker"&gt;this&lt;/a&gt; repository and run &lt;em&gt;sudo sh install.sh&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sh install.sh 

Set iptables to iptables-legacy
Disable ufw,firewalld
Synchronizing state of ufw.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable ufw
Failed to stop firewalld.service: Unit firewalld.service not loaded.
Failed to disable unit: Unit file firewalld.service does not exist.
Install iptables-docker.sh
Create systemd unit
Enable iptables-docker.service
Created symlink /etc/systemd/system/multi-user.target.wants/iptables-docker.service → /etc/systemd/system/iptables-docker.service.
start iptables-docker.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Automated install (ansible)
&lt;/h4&gt;

&lt;p&gt;You can also use Ansible to deploy iptables-docker everywhere. To do this, adjust the settings under group_vars/all.yml.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Label&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;docker_preserve&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Preserve docker iptables rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;swarm_enabled&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tells Ansible to open the ports required by the swarm cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ebable_icmp_messages&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;yes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enable response to ping requests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;swarm_cidr&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;192.168.1.0/24&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Local docker swarm subnet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ssh_allow_cidr&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0.0.0.0/0&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;SSH allowed subnet (default: everywhere)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;iptables_allow_rules&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;[]&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List of dicts to dynamically open ports. Each dict has the following keys: desc, proto, from, port. See group_vars/all.yml for examples&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;iptables_docker_uninstall&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;no&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Uninstall iptables-docker&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
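&lt;p&gt;As an example, a group_vars/all.yml built from the variables above might look like this (the values are illustrative; the &lt;em&gt;iptables_allow_rules&lt;/em&gt; entry opens port 80 to everyone):&lt;/p&gt;

```yaml
# group_vars/all.yml - illustrative values, adjust to your environment
docker_preserve: yes
swarm_enabled: no
swarm_cidr: 192.168.1.0/24
ssh_allow_cidr: 0.0.0.0/0
iptables_allow_rules:
  - desc: "allow http"
    proto: tcp
    from: 0.0.0.0/0
    port: 80
```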

&lt;p&gt;Now create the inventory (hosts.ini file) or use an inline inventory and run the playbook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i hosts.ini site.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Usage
&lt;/h3&gt;

&lt;p&gt;To start the service use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start iptables-docker

or 

sudo iptables-docker.sh start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To stop the service use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl stop iptables-docker

or 

sudo iptables-docker.sh stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Test iptables-docker
&lt;/h3&gt;

&lt;p&gt;Now if you turn off the firewall with &lt;em&gt;sudo systemctl stop iptables-docker&lt;/em&gt; and check the &lt;em&gt;iptables-save&lt;/em&gt; output, you will see that the docker rules are still there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo iptables-save

# Generated by iptables-save v1.8.4 on Thu Oct 14 15:52:30 2021
*mangle
:PREROUTING ACCEPT [346:23349]
:INPUT ACCEPT [346:23349]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [340:24333]
:POSTROUTING ACCEPT [340:24333]
COMMIT
# Completed on Thu Oct 14 15:52:30 2021
# Generated by iptables-save v1.8.4 on Thu Oct 14 15:52:30 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-INGRESS - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
COMMIT
# Completed on Thu Oct 14 15:52:30 2021
# Generated by iptables-save v1.8.4 on Thu Oct 14 15:52:30 2021
*filter
:INPUT ACCEPT [357:24327]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [355:26075]
:DOCKER - [0:0]
:DOCKER-INGRESS - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Thu Oct 14 15:52:30 2021
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our container is still accessible from the outside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; curl -v http://192.168.25.200:8080
*   Trying 192.168.25.200:8080...
* TCP_NODELAY set
* Connected to 192.168.25.200 (192.168.25.200) port 8080 (#0)
&amp;gt; GET / HTTP/1.1
&amp;gt; Host: 192.168.25.200:8080
&amp;gt; User-Agent: curl/7.68.0
&amp;gt; Accept: */*
&amp;gt; 
* Mark bundle as not supporting multiuse
&amp;lt; HTTP/1.1 200 OK
&amp;lt; Server: nginx/1.21.1
&amp;lt; Date: Thu, 14 Oct 2021 13:53:33 GMT
&amp;lt; Content-Type: text/html
&amp;lt; Content-Length: 612
&amp;lt; Last-Modified: Tue, 06 Jul 2021 14:59:17 GMT
&amp;lt; Connection: keep-alive
&amp;lt; ETag: "60e46fc5-264"
&amp;lt; Accept-Ranges: bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and our container can reach the internet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm nginx curl ipinfo.io/ip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    15  100    15    0     0     94      0 --:--:-- --:--:-- --:--:--    94
my-public-ip-address
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Important notes
&lt;/h3&gt;

&lt;p&gt;Before installing iptables-docker, please read these notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;both the local install and the Ansible install configure your system to use &lt;strong&gt;iptables-legacy&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;by default &lt;strong&gt;only&lt;/strong&gt; port 22 is allowed&lt;/li&gt;
&lt;li&gt;ufw and firewalld will be permanently &lt;strong&gt;disabled&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;filtering on all docker interfaces is disabled&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Docker interfaces are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vethXXXXXX interfaces&lt;/li&gt;
&lt;li&gt;br-XXXXXXXXXXX interfaces&lt;/li&gt;
&lt;li&gt;docker0 interface&lt;/li&gt;
&lt;li&gt;docker_gwbridge interface &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Extending iptables-docker
&lt;/h3&gt;

&lt;p&gt;You can extend or modify iptables-docker by editing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;src/iptables-docker.sh for the local install (sh)&lt;/li&gt;
&lt;li&gt;roles/iptables-docker/templates/iptables-docker.sh.j2 template file for the automated install (ansible)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Uninstall
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Local install (sh)
&lt;/h4&gt;

&lt;p&gt;Run uninstall.sh&lt;/p&gt;

&lt;h4&gt;
  
  
  Automated install (ansible)
&lt;/h4&gt;

&lt;p&gt;Set the variable &lt;code&gt;iptables_docker_uninstall&lt;/code&gt; to &lt;code&gt;yes&lt;/code&gt; in group_vars/all.yml and run the playbook.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>firewall</category>
      <category>iptables</category>
      <category>linux</category>
    </item>
    <item>
      <title>Deploy a high available etcd cluster using docker</title>
      <dc:creator>Lorenzo Garuti</dc:creator>
      <pubDate>Fri, 08 Oct 2021 12:13:12 +0000</pubDate>
      <link>https://dev.to/garutilorenzo/deploy-a-high-available-etcd-cluster-using-docker-2n1f</link>
      <guid>https://dev.to/garutilorenzo/deploy-a-high-available-etcd-cluster-using-docker-2n1f</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sHG0cN4x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/etcd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sHG0cN4x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://garutilorenzo.github.io/images/etcd.png" alt="etcd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://etcd.io/"&gt;etcd&lt;/a&gt; is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failure, even in the leader node.&lt;/p&gt;

&lt;p&gt;In this post we will see how to deploy an etcd cluster using docker and docker compose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;docker&lt;/li&gt;
&lt;li&gt;docker-compose&lt;/li&gt;
&lt;li&gt;docker swarm cluster (optional)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additional requirements for testing purposes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;python3&lt;/li&gt;
&lt;li&gt;pipenv&lt;/li&gt;
&lt;li&gt;pip3&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Configuration overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Nginx
&lt;/h3&gt;

&lt;p&gt;The nginx configuration is very simple. We need to create one upstream section and declare our etcd server names.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    upstream etcd_servers {
        least_conn;
        server etcd-00:2379 max_fails=3 fail_timeout=5s;
        server etcd-01:2379 max_fails=3 fail_timeout=5s;
        server etcd-02:2379 max_fails=3 fail_timeout=5s;
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Since we are running on Docker, the server names are resolved by the internal DNS. The names are the names of our services declared in docker-compose.yml (etcd-00, etcd-01, etcd-02)&lt;/p&gt;

&lt;p&gt;Now we need to declare the server section, with the listen port (the etcd default port) and the proxy_pass rule, which will route the traffic to our etcd services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    server {
        listen     2379;
        proxy_pass etcd_servers;
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final configuration will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    log_format  basic   '$time_iso8601 $remote_addr '
                        '$protocol $status $bytes_sent $bytes_received '
                        '$session_time $upstream_addr '
                        '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log  /dev/stdout basic;

    upstream etcd_servers {
        least_conn;
        server etcd-00:2379 max_fails=3 fail_timeout=5s;
        server etcd-01:2379 max_fails=3 fail_timeout=5s;
        server etcd-02:2379 max_fails=3 fail_timeout=5s;
    }
    server {
        listen     2379;
        proxy_pass etcd_servers;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Etcd
&lt;/h3&gt;

&lt;p&gt;To configure etcd in &lt;a href="https://etcd.io/docs/v3.5/op-guide/clustering/"&gt;cluster&lt;/a&gt; mode, on each container we need to specify the following settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    command:
      - etcd
      - --name=etcd-02
      - --data-dir=data.etcd
      - --advertise-client-urls=http://etcd-02:2379
      - --listen-client-urls=http://0.0.0.0:2379
      - --initial-advertise-peer-urls=http://etcd-02:2380
      - --listen-peer-urls=http://0.0.0.0:2380
      - --initial-cluster=etcd-00=http://etcd-00:2380,etcd-01=http://etcd-01:2380,etcd-02=http://etcd-02:2380
      - --initial-cluster-state=new
      - --initial-cluster-token=etcd-cluster-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;--name: Human-readable name for this member.&lt;/li&gt;
&lt;li&gt;--data-dir: Path to the data directory.&lt;/li&gt;
&lt;li&gt;--advertise-client-urls: List of this member’s client URLs to advertise to the rest of the cluster. These URLs can contain domain names.&lt;/li&gt;
&lt;li&gt;--listen-client-urls: List of URLs to listen on for client traffic. This flag tells etcd to accept incoming requests from clients on the specified scheme://IP:port combinations. &lt;/li&gt;
&lt;li&gt;--initial-advertise-peer-urls: List of this member’s peer URLs to advertise to the rest of the cluster. These addresses are used for communicating etcd data around the cluster. At least one must be routable to all cluster members. These URLs can contain domain names.&lt;/li&gt;
&lt;li&gt;--listen-peer-urls: List of URLs to listen on for peer traffic. This flag tells etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. &lt;/li&gt;
&lt;li&gt;--initial-cluster: Initial cluster configuration for bootstrapping.&lt;/li&gt;
&lt;li&gt;--initial-cluster-state: Initial cluster state (“new” or “existing”). Set to new for all members present during initial static or DNS bootstrapping. If this option is set to existing, etcd will attempt to join the existing cluster. If the wrong value is set, etcd will attempt to start but fail safely.&lt;/li&gt;
&lt;li&gt;--initial-cluster-token: Initial cluster token for the etcd cluster during bootstrap.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All the configuration flags are available &lt;a href="https://etcd.io/docs/v3.5/op-guide/configuration/"&gt;here&lt;/a&gt;&lt;/p&gt;
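&lt;p&gt;Putting the flags together, a single etcd service in docker-compose.yml looks roughly like this. This is only a sketch: the image tag matches the one used in this post, but the exact service definition (networks, volumes) in the repository may differ:&lt;/p&gt;

```yaml
# Sketch of one etcd service in docker-compose.yml (illustrative)
services:
  etcd-00:
    image: quay.io/coreos/etcd:v3.5.0
    command:
      - etcd
      - --name=etcd-00
      - --data-dir=data.etcd
      - --advertise-client-urls=http://etcd-00:2379
      - --listen-client-urls=http://0.0.0.0:2379
      - --initial-advertise-peer-urls=http://etcd-00:2380
      - --listen-peer-urls=http://0.0.0.0:2380
      - --initial-cluster=etcd-00=http://etcd-00:2380,etcd-01=http://etcd-01:2380,etcd-02=http://etcd-02:2380
      - --initial-cluster-state=new
      - --initial-cluster-token=etcd-cluster-1
```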

&lt;h2&gt;
  
  
  Deploy etcd cluster with docker compose
&lt;/h2&gt;

&lt;p&gt;The first step is to clone &lt;a href="https://github.com/garutilorenzo/docker-etcd-cluster.git"&gt;this&lt;/a&gt; repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/garutilorenzo/docker-etcd-cluster.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then enter the repo directory and bring up the environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd docker-etcd-cluster 
docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now check the status of the environment and wait for the containers to be ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose ps


     Name                   Command               State                        Ports                      
----------------------------------------------------------------------------------------------------------
etcd_etcd-00_1   etcd --name=etcd-00 --data ...   Up      2379/tcp, 2380/tcp                              
etcd_etcd-01_1   etcd --name=etcd-01 --data ...   Up      2379/tcp, 2380/tcp                              
etcd_etcd-02_1   etcd --name=etcd-02 --data ...   Up      2379/tcp, 2380/tcp                              
etcd_nginx_1     /docker-entrypoint.sh ngin ...   Up      0.0.0.0:2379-&amp;gt;2379/tcp,:::2379-&amp;gt;2379/tcp, 80/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
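&lt;p&gt;With the stack up, you can talk to the cluster through the nginx load balancer on port 2379. A quick smoke test with etcdctl, run here from the etcd image itself so nothing has to be installed on the host (host networking is an assumption that works because nginx publishes 2379 on the host):&lt;/p&gt;

```shell
# Put and then get a key through the nginx load balancer on 127.0.0.1:2379
docker run --rm --network host quay.io/coreos/etcd:v3.5.0 \
  etcdctl --endpoints=http://127.0.0.1:2379 put greeting hello
docker run --rm --network host quay.io/coreos/etcd:v3.5.0 \
  etcdctl --endpoints=http://127.0.0.1:2379 get greeting
```

&lt;p&gt;These commands require the compose stack from the previous step to be running.&lt;/p&gt;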



&lt;h2&gt;
  
  
  Test the environment
&lt;/h2&gt;

&lt;p&gt;To test the environment you need &lt;a href="https://pipenv.pypa.io/en/latest/install/#installing-pipenv"&gt;pipenv&lt;/a&gt; installed.&lt;br&gt;
Once pipenv is installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipenv shell
pip install -r requirements.txt
python test/etcd-test.py 
hey key1
hey key2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the logs of the nginx service to see the traffic being routed to the etcd hosts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose logs -f nginx

Attaching to etcd_nginx_1
nginx_1    | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_1    | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx_1    | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx_1    | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx_1    | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx_1    | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx_1    | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx_1    | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx_1    | 2021-10-08T09:41:23+00:00 172.28.0.1 TCP 200 422 665 0.052 172.28.0.3:2379 "665" "422" "0.000"
nginx_1    | 2021-10-08T09:41:24+00:00 172.28.0.1 TCP 200 422 665 0.046 172.28.0.2:2379 "665" "422" "0.000"
nginx_1    | 2021-10-08T09:50:56+00:00 172.28.0.1 TCP 200 422 665 0.029 172.28.0.4:2379 "665" "422" "0.000"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Docker swarm stack
&lt;/h2&gt;

&lt;p&gt;To deploy the etcd cluster on a docker &lt;a href="https://docs.docker.com/engine/swarm/"&gt;swarm&lt;/a&gt; cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stack deploy -c etcd-stack.yml etcd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the status of the deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stack ps etcd

mx6fvfwye547   etcd_etcd-00.1       quay.io/coreos/etcd:v3.5.0   node-2    Running         Running 3 hours ago                                        
wybd7n4oitae   etcd_etcd-01.1       quay.io/coreos/etcd:v3.5.0   node-4    Running         Running 3 hours ago                                        
rmlycc3uvc8t   etcd_etcd-02.1       quay.io/coreos/etcd:v3.5.0   node-2    Running         Running 3 hours ago                                        
rexh1smoalpo   etcd_nginx.1         nginx:alpine                 node-2    Running         Running 21 hours ago    

docker service ls

ID             NAME                  MODE         REPLICAS   IMAGE                          PORTS
1u709kzgmo2b   etcd_etcd-00          replicated   1/1        quay.io/coreos/etcd:v3.5.0     
m7ze76xi58ww   etcd_etcd-01          replicated   1/1        quay.io/coreos/etcd:v3.5.0     
1535r562g3az   etcd_etcd-02          replicated   1/1        quay.io/coreos/etcd:v3.5.0     
v8n8qlo3dm30   etcd_nginx            replicated   1/1        nginx:alpine                   *:2379-&amp;gt;2379/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you would like to test the swarm setup, open test/etcd-test.py, replace &lt;em&gt;127.0.0.1&lt;/em&gt; with the IP of one server in your docker swarm cluster (or the IP of your LB), and run the test.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>etcd</category>
      <category>nginx</category>
      <category>swarm</category>
    </item>
  </channel>
</rss>
