<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bhargavi Chiluka</title>
    <description>The latest articles on DEV Community by Bhargavi Chiluka (@bhargavi_chilukaa).</description>
    <link>https://dev.to/bhargavi_chilukaa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1273000%2F50e40a01-5236-4fcb-9d04-6e4275a01f7a.png</url>
      <title>DEV Community: Bhargavi Chiluka</title>
      <link>https://dev.to/bhargavi_chilukaa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bhargavi_chilukaa"/>
    <language>en</language>
    <item>
      <title>Jenkins Upgrade from 2.1x to 2.4x</title>
      <dc:creator>Bhargavi Chiluka</dc:creator>
      <pubDate>Tue, 20 Feb 2024 06:35:14 +0000</pubDate>
      <link>https://dev.to/bhargavi_chilukaa/jenkins-upgrade-from-21x-to-24x-3fbi</link>
      <guid>https://dev.to/bhargavi_chilukaa/jenkins-upgrade-from-21x-to-24x-3fbi</guid>
<description>&lt;p&gt;This article walks through upgrading Jenkins from 2.1x to 2.4x, prompted by the vulnerabilities disclosed in Jenkins on 24 January 2024.&lt;/p&gt;

&lt;p&gt;References&lt;br&gt;
For more information on CVE-2024-23897, please refer to the following sources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.jenkins.io/security/advisory/2024-01-24/" rel="noopener noreferrer"&gt;https://www.jenkins.io/security/advisory/2024-01-24/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://thehackernews.com/2024/01/critical-jenkins-vulnerability-exposes.html" rel="noopener noreferrer"&gt;https://thehackernews.com/2024/01/critical-jenkins-vulnerability-exposes.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.bleepingcomputer.com/news/security/exploits-released-for-critical-jenkins-rce-flaw-patch-now/" rel="noopener noreferrer"&gt;https://www.bleepingcomputer.com/news/security/exploits-released-for-critical-jenkins-rce-flaw-patch-now/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/jenkinsci-cert/SECURITY-3314-3315" rel="noopener noreferrer"&gt;https://github.com/jenkinsci-cert/SECURITY-3314-3315&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h5&gt;
  
  
  Vulnerable versions
&lt;/h5&gt;

&lt;p&gt;Jenkins 2.441 and earlier, LTS 2.426.2 and earlier.&lt;/p&gt;
&lt;h3&gt;
  
  
  Temporary mitigation
&lt;/h3&gt;

&lt;p&gt;Access to the CLI needs to be disabled. Both of the following steps must be taken:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove the CLI HTTP endpoint.&lt;/li&gt;
&lt;li&gt;Disable the SSH port.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both steps can be performed by executing the script below in the Jenkins Script Console (Jenkins --&amp;gt; Manage Jenkins --&amp;gt; Script Console).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def removal = { lst -&amp;gt;
  lst.each { x -&amp;gt; if (x.getClass().getName()?.contains("CLIAction")) lst.remove(x) }
}
def j = jenkins.model.Jenkins.get();
removal(j.getExtensionList(hudson.cli.CLIAction.class))
removal(j.getExtensionList(hudson.ExtensionPoint.class))
removal(j.getExtensionList(hudson.model.Action.class))
removal(j.getExtensionList(hudson.model.ModelObject.class))
removal(j.getExtensionList(hudson.model.RootAction.class))
removal(j.getExtensionList(hudson.model.UnprotectedRootAction.class))
removal(j.getExtensionList(java.lang.Object.class))
removal(j.getExtensionList(org.kohsuke.stapler.StaplerProxy.class))
removal(j.actions)

println "Done!"

if (j.getPlugin('sshd')) {
  hudson.ExtensionList.lookupSingleton(org.jenkinsci.main.modules.sshd.SSHD.class).setPort(-1)
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Permanent solution/mitigation:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The permanent mitigation is to upgrade Jenkins to the latest version.&lt;/li&gt;
&lt;li&gt;With our current Jenkins setup an automatic upgrade/migration is not possible, so we have to replace the source file (i.e. jenkins.war).&lt;/li&gt;
&lt;li&gt;What is jenkins.war? The Jenkins Web application ARchive (WAR) file bundles Winstone, a Jetty servlet container wrapper, and can be started on any operating system or platform with a version of Java supported by Jenkins.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Please install openjdk-17 (the JDK, not the JRE) from the yum repositories before starting the upgrade; it is required for version 2.444 (the latest version with the vulnerability fixed at the time of writing this article).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The command to install openjdk-17 is&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;java-17-devel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If another JDK version is already present on the system, please do not create symbolic links; instead, configure the system to use Java 17 with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;alternative &lt;span class="nt"&gt;--config&lt;/span&gt; java
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command prompts with the Java versions available on the system; select Java 17.&lt;/p&gt;
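Where a scripted, non-interactive selection is preferred, a sketch like the following can be used (the JDK path shown is an assumption for a typical RHEL-like layout; confirm the real path with `alternatives --display java` first):

```shell
# Non-interactive selection; the path below is a typical openjdk-17 location
# on RHEL-like systems and may differ on yours (hypothetical path).
# alternatives --set java /usr/lib/jvm/java-17-openjdk/bin/java

# Verify which Java is now active:
java -version 2>&1 | head -n 1
```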

&lt;h3&gt;
  
  
  Upgrade implementation steps on Linux:
&lt;/h3&gt;

&lt;p&gt;Step 1: Stop the Jenkins service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;su -
service jenkins stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2: If Jenkins is still running in the background, kill its PID after finding the process on the Jenkins port (8080 in this setup).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ps &lt;span class="nt"&gt;-ef&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;8080
&lt;span class="nb"&gt;kill&lt;/span&gt; &lt;span class="nt"&gt;-9&lt;/span&gt; PID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 3: Back up the Jenkins home directory by archiving it and moving the archive to a temporary path. The paths below are from my setup and might differ from system to system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /var/lib
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-cvzf&lt;/span&gt; jenkins_date.tar.gz jenkins/
&lt;span class="nb"&gt;mv &lt;/span&gt;jenkins_date.tar.gz to /tmp path
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
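If backups are taken regularly, the archive name can carry the actual date to keep them distinguishable. A minimal sketch (run here against a stand-in directory instead of `/var/lib/jenkins` so it is safe to try anywhere):

```shell
# Date-stamped backup of a Jenkins home directory.
# demo_home stands in for /var/lib/jenkins so the sketch is runnable anywhere.
mkdir -p demo_home
echo "config" > demo_home/config.xml
backup="jenkins_$(date +%Y-%m-%d).tar.gz"
tar -czf "$backup" demo_home/
mv "$backup" /tmp/
ls -lh "/tmp/$backup"
```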



&lt;p&gt;Step 4: Back up the current Jenkins binary (jenkins.war) using the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /usr/lib/jenkins/
&lt;span class="nb"&gt;mv &lt;/span&gt;jenkins.war jenkins_old.war
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 5: The webroot folder, &lt;code&gt;/var/cache/jenkins&lt;/code&gt;, has to be empty when starting with the new jenkins.war file.&lt;br&gt;
So back up the war folder and recreate it empty, so that the new jenkins.war extracts a fresh configuration into the cache folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mv &lt;/span&gt;war war_old
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; war/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="nb"&gt;chown &lt;/span&gt;Jenkins:Jenkins war
&lt;span class="nb"&gt;chmod &lt;/span&gt;755 war/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 6: Download the new Jenkins version and verify the SHA-256 checksum of the downloaded WAR file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://updates.jenkins-ci.org/latest/jenkins.war
&lt;span class="nb"&gt;sha256sum &lt;/span&gt;jenkins.war
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
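To compare the computed checksum against the value published on the Jenkins download page, a small comparison can be scripted. The snippet below is an illustration using a stand-in file and the well-known SHA-256 of the string "test", not a real WAR:

```shell
# Illustration: compare a computed SHA-256 against an expected value.
# jenkins.war.sample and "expected" are stand-ins; for a real upgrade,
# use the checksum published alongside the jenkins.war download.
printf 'test' > jenkins.war.sample
expected="9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
actual=$(sha256sum jenkins.war.sample | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH"
fi
```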



&lt;p&gt;Note: If the server is behind a proxy, please export the HTTP and HTTPS proxy variables before downloading.&lt;/p&gt;
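For example, the proxy variables can be exported before running wget (the proxy host and port below are placeholders for your organisation's values):

```shell
# Placeholder proxy address; substitute your organisation's proxy host/port.
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
# The download from Step 6 would then go through the proxy, e.g.:
# wget https://updates.jenkins-ci.org/latest/jenkins.war
echo "proxy set to: $https_proxy"
```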

&lt;p&gt;Step 7: Start the Jenkins service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;service jenkins start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Several difficulties were faced during this process, since it is a major version upgrade:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Unable to start the Jenkins service.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;This is due to an incompatible &lt;code&gt;init.d/jenkins&lt;/code&gt; file, where the &lt;code&gt;--daemon&lt;/code&gt; option is no longer supported.&lt;/li&gt;
&lt;li&gt;Also comment out the &lt;code&gt;handlerCountMax&lt;/code&gt; and &lt;code&gt;handlerCountMaxIdle&lt;/code&gt; options (&lt;a href="https://www.jenkins.io/doc/upgrade-guide/2.387/#de-duplicate-logging-implementations" rel="noopener noreferrer"&gt;https://www.jenkins.io/doc/upgrade-guide/2.387/#de-duplicate-logging-implementations&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Jenkins starts in the background, but the service still shows as failed.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;To resolve this completely, take the command Jenkins uses to start and create a jenkins.service file as described below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;service Jenkins status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: the start command is shown while checking the status, so please copy that command and convert it into the service file below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a jenkins.service file in /etc/systemd/system with the following content (based on the copied command)&lt;/li&gt;
&lt;li&gt;The ExecStart command should match the copied command
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;Unit]
&lt;span class="nv"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Jenkins Service
&lt;span class="nv"&gt;After&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network.target

&lt;span class="o"&gt;[&lt;/span&gt;Service]
&lt;span class="nv"&gt;Type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;simple
&lt;span class="nv"&gt;User&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jenkins
&lt;span class="nv"&gt;Group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jenkins
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/alternatives/java &lt;span class="nt"&gt;-Djava&lt;/span&gt;.awt.headless&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;-DJENKINS_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/jenkins &lt;span class="nt"&gt;-jar&lt;/span&gt; /usr/lib/jenkins/jenkins.war &lt;span class="nt"&gt;--logfile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/log/jenkins/jenkins.log &lt;span class="nt"&gt;--webroot&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/cache/jenkins/war &lt;span class="nt"&gt;--httpPort&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8080
&lt;span class="nv"&gt;Restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always

&lt;span class="o"&gt;[&lt;/span&gt;Install]
&lt;span class="nv"&gt;WantedBy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Please enable the service after creating the jenkins.service file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;Jenkins.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;And start with the below command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl start Jenkins.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: please delete the init.d/jenkins file if it is present on the server, as it might cause multiple instances to start.&lt;br&gt;
(This could differ from server to server; please check your server configuration.)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;/var/lib/jenkins&lt;/code&gt;: JENKINS_HOME&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/usr/lib/jenkins&lt;/code&gt;: jenkins.war location&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/var/cache/jenkins/war&lt;/code&gt;: webroot (war extraction) location&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/etc/init.d/jenkins&lt;/code&gt;: manually written service file location&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/etc/systemd/system/&lt;/code&gt;: systemctl-managed service configuration location&lt;/li&gt;
&lt;/ul&gt;
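As a final sanity check after the upgrade, the locations listed above can be probed with a short loop (the paths are this article's defaults and may differ on your server):

```shell
# Report which of the standard Jenkins locations exist on this server.
for p in /var/lib/jenkins /usr/lib/jenkins/jenkins.war \
         /var/cache/jenkins/war /etc/systemd/system/jenkins.service; do
  if [ -e "$p" ]; then
    echo "found:   $p"
  else
    echo "missing: $p"
  fi
done
```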

</description>
      <category>jenkins</category>
      <category>upgrade</category>
      <category>security</category>
      <category>vulnerabilities</category>
    </item>
    <item>
      <title>Elastic Search 8.x : ELK Setup over TLS/SSL</title>
      <dc:creator>Bhargavi Chiluka</dc:creator>
      <pubDate>Tue, 06 Feb 2024 15:25:19 +0000</pubDate>
      <link>https://dev.to/bhargavi_chilukaa/elk-setup-over-tlsssl-1jm2</link>
      <guid>https://dev.to/bhargavi_chilukaa/elk-setup-over-tlsssl-1jm2</guid>
<description>&lt;p&gt;Starting from &lt;strong&gt;ElasticSearch v8.x&lt;/strong&gt;, the security configuration runs on self-signed certificates that are auto-generated by the installation package itself. Hence, unless there is a requirement for an organisation's private or publicly signed certificates, it is recommended as a best practice to use the self-signed certificates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table of Contents:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Elastic Search&lt;br&gt;
1.1. Cluster(Master Node) Installations&lt;br&gt;
1.1.1.Import the Elastic search GPG Key&lt;br&gt;
1.1.2.Installing from the RPM repository&lt;br&gt;
1.1.3.Elastic-Search Installation output with security enabled&lt;br&gt;
1.1.4.Configuration for Master Node.&lt;br&gt;
1.1.5.Starting the Elasticsearch&lt;br&gt;
1.1.6.Reset the password for required users.&lt;br&gt;
1.2.Other (data) Node installation&lt;br&gt;
1.2.1.Installation Process&lt;br&gt;
1.2.2.Configuration and Connection establishment with Cluster.&lt;br&gt;
1.2.3.Starting the data node and confirming the connection with cluster&lt;br&gt;
1.2.4.Common Errors in connection establishment between remote systems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kibana&lt;br&gt;
2.1.Import the Elasticsearch GPG Key&lt;br&gt;
2.2.Installing from the RPM repository&lt;br&gt;
2.3.Configuring Kibana with Cluster Node Connection&lt;br&gt;
2.4.Starting Kibana and confirming connection with Cluster Configure kibana as service using&lt;br&gt;
2.5 Kibana SSL Certificates for Browser Traffic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logstash&lt;br&gt;
3.1.Import the Elasticsearch GPG Key&lt;br&gt;
3.2.Installing from the RPM repository&lt;br&gt;
3.3.Configuring Logstash with Cluster Node Connection&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Filebeat Agent&lt;br&gt;
4.1 Download Filebeat&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;For Kibana Web Application, the public signed certificate can be used for browser/HTTPS traffic, while self-signed certificates can be still used to establish the connection between the kibana and elastic-search cluster.&lt;/li&gt;
&lt;li&gt;This document focuses on installation and configuration on RedHat Linux, which has the RPM package manager available.
Installation on Ubuntu/Debian systems may differ slightly; for those, follow the public documentation. The configuration and connection establishment remain the same.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;h1&gt;
  
  
  1. Elastic Search:
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/8.8/rpm.html#rpm" rel="noopener noreferrer"&gt;RPM Installation Ref Doc&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  1.1. Cluster(Master Node) Installations
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1.1.1.Import the Elastic search GPG Key
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rpm &lt;span class="nt"&gt;--import&lt;/span&gt; https://artifacts.elastic.co/GPG-KEY-elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  1.1.2.Installing from the RPM repository
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a file called &lt;code&gt;elasticsearch.repo&lt;/code&gt; in the &lt;code&gt;/etc/yum.repos.d/&lt;/code&gt; directory for RedHat based distributions.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;elasticsearch]
&lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Elasticsearch repository &lt;span class="k"&gt;for &lt;/span&gt;8.x packages
&lt;span class="nv"&gt;baseurl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://artifacts.elastic.co/packages/8.x/yum
&lt;span class="nv"&gt;gpgcheck&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nv"&gt;gpgkey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://artifacts.elastic.co/GPG-KEY-elasticsearch
&lt;span class="nv"&gt;enabled&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;span class="nv"&gt;autorefresh&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rpm-md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Please add the &lt;code&gt;proxy=example.com&lt;/code&gt; if you have any proxy setup&lt;/li&gt;
&lt;li&gt;Install package
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--enablerepo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;elasticsearch elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If there is any error in the above installation or trouble in downloading and installing automatically, please refer to this &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/8.8/rpm.html#install-rpm" rel="noopener noreferrer"&gt;manual installation guide from the docs&lt;/a&gt;. &lt;/p&gt;
&lt;h3&gt;
  
  
  1.1.3.Elastic-Search Installation output with security enabled
&lt;/h3&gt;

&lt;p&gt;When you install Elasticsearch, security features are enabled and configured by default; the following security configuration occurs automatically:&lt;/p&gt;

&lt;p&gt;Authentication and authorization are enabled, and a password is generated for the elastic built-in superuser.&lt;br&gt;
Certificates and keys for TLS are generated for the transport and HTTP layer, and TLS is enabled and configured with these keys and certificates.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please save the output generated here, as it includes passwords and important commands. &lt;br&gt;
Example:&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;-------Security&lt;/span&gt; autoconfiguration information-------

Authentication and authorization are enabled.
TLS &lt;span class="k"&gt;for &lt;/span&gt;the transport and HTTP layers is enabled and configured.

The generated password &lt;span class="k"&gt;for &lt;/span&gt;the elastic built-in superuser is : &amp;lt;password&amp;gt;

If this node should &lt;span class="nb"&gt;join &lt;/span&gt;an existing cluster, you can reconfigure this with
&lt;span class="s1"&gt;'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token &amp;lt;token-here&amp;gt;'&lt;/span&gt;
after creating an enrollment token on your existing cluster.

You can &lt;span class="nb"&gt;complete &lt;/span&gt;the following actions at any &lt;span class="nb"&gt;time&lt;/span&gt;:

Reset the password of the elastic built-in superuser with
&lt;span class="s1"&gt;'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

Generate an enrollment token &lt;span class="k"&gt;for &lt;/span&gt;Kibana instances with
 &lt;span class="s1"&gt;'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

Generate an enrollment token &lt;span class="k"&gt;for &lt;/span&gt;Elasticsearch nodes with
&lt;span class="s1"&gt;'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  1.1.4.Configuration for Master Node.
&lt;/h3&gt;

&lt;p&gt;Please configure the following on the master node, which also forms the initial cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open port &lt;code&gt;9200&lt;/code&gt; for HTTP communication from other data/master nodes.&lt;/li&gt;
&lt;li&gt;Open port &lt;code&gt;9300&lt;/code&gt; for transport communication from nodes. &lt;/li&gt;
&lt;li&gt;Update the following parameters in &lt;code&gt;/etc/elasticsearch/elasticsearch.yml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;cluster.name:&amp;lt;name-for-cluster&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;node.name:&amp;lt;unique-name-for-node-in-cluster&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;http.port:9200&lt;/span&gt;
&lt;span class="s"&gt;network.host:&amp;lt;DNS of system&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;http.host:&amp;lt;DNS of system&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;transport.host:&amp;lt;DNS of System&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;cluster.initial_master_nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;node&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;given&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;above&amp;gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;By default, &lt;code&gt;http.host&lt;/code&gt; and &lt;code&gt;transport.host&lt;/code&gt; are either commented out or set to &lt;code&gt;0.0.0.0&lt;/code&gt; or &lt;code&gt;127.0.0.1&lt;/code&gt;; please change them to the system DNS as explained above. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Above, &lt;code&gt;transport.host&lt;/code&gt; is optional, but set it explicitly if any issues arise while connecting to the cluster. &lt;/li&gt;
&lt;li&gt;DO NOT modify any other parameters including security/certs. All are configured as required by default. Any modification may lead to issues. &lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  1.1.5.Starting the Elasticsearch
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Running Elasticsearch as a service is the suggested approach; the RPM installation sets up most of this configuration by default. &lt;/li&gt;
&lt;li&gt;Running following will tie the service to &lt;code&gt;systemctl&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;elasticsearch.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Start / Stop the service
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start elasticsearch.service
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl stop elasticsearch.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Logs / &lt;code&gt;systemctl&lt;/code&gt; logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the service fails to start, the initial service logs can be viewed with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;journalctl &lt;span class="nt"&gt;--u&lt;/span&gt; elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;ElasticSearch Package Logs
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;less /var/logs/elasticsearch/elasticsearch.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Status Check with cURL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Elasticsearch runs with security enabled by default, and plaintext/non-secure calls are rejected. Requests must present the CA certificate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do not hit localhost with curl, as the server(s) may be behind a proxy and the result will not come through&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--cacert&lt;/span&gt; /etc/elasticsearch/certs/http_ca.crt &lt;span class="nt"&gt;-u&lt;/span&gt; elastic https://&amp;lt;DNS of System&amp;gt;:9200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The above command will ask for the “elastic” user's password. Either enter the password collected from the installation output above, or reset it using the next section of this document. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expected Sample Output from the cluster&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Cp8oag6"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"cluster_name"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;cluster&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;name&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"cluster_uuid"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AT69_T_DTp-1qgIJlatQqA"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"number"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"8.8.2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"build_type"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"tar"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"build_hash"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"f27399d"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"build_flavor"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"build_date"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DATE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"build_snapshot"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"lucene_version"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"9.6.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"minimum_wire_compatibility_version"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.2.3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"minimum_index_compatibility_version"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.2.3"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tagline"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"You Know, for Search"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  1.1.6.Reset the password for required users.
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;There are different users required for different purposes. But majorly,
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;“elastic”: the built-in superuser &lt;/li&gt;
&lt;li&gt;“kibana” : for kibana ui&lt;/li&gt;
&lt;li&gt;“logstash_system”: for logstash metrics etc. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The password for any of these users can be set or reset with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch-reset-password &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &amp;lt;username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-i&lt;/code&gt; means it is an interactive terminal session&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-u&lt;/code&gt; specifies the username. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1.2.Other (data) Node installation:
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1.2.1.Installation Process
&lt;/h3&gt;

&lt;p&gt;Please follow the steps from “Import the Elasticsearch GPG Key” through “Elasticsearch installation output with security enabled” above.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.2.2.Configuration and Connection establishment with Cluster.
&lt;/h3&gt;

&lt;p&gt;This process involves generating a token on the cluster and configuring that token on the current node (which has to connect to the cluster).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate the node token on the Elasticsearch cluster that we set up previously. &lt;/li&gt;
&lt;li&gt;Execute the following in the cluster system's terminal &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Make sure the Elasticsearch service is running on the cluster while executing this.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token &lt;span class="nt"&gt;-s&lt;/span&gt; node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Copy the token generated above and run the following on the new node.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: DO NOT start the Elasticsearch service on the new node until the script below has been executed. If the service is started first, the script will fail and a fresh install of Elasticsearch on the new node will be required.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node &lt;span class="nt"&gt;--enrollment-token&lt;/span&gt; &amp;lt;enrollment-token&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The above script prompts with warnings about deleting the certs directory and overriding the config. Approve both. &lt;/li&gt;
&lt;li&gt;If the script fails with any error, check “Common Errors in connection establishment between remote systems”.&lt;/li&gt;
&lt;li&gt;If the script ran successfully, proceed to configure the following parameters on the new node, in &lt;code&gt;/etc/elasticsearch/elasticsearch.yml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;cluster.name:&amp;lt;cluster name given in cluster elasticsearch.yml file&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;node.name:&amp;lt;unique-name-for-node-in-cluster&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;http.port:9200&lt;/span&gt;
&lt;span class="s"&gt;network.host:&amp;lt;DNS of system&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Confirm the following things
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;http.host:&amp;lt;this parameter must be uncommented, else uncomment. But don’t edit the value&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;transport.host:&amp;lt;this parameter must be uncommented, else uncomment. But don’t edit the value&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;discovery.seed_hosts:&amp;lt;this value must be array with cluster DNS/IP [“”] format&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
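&lt;p&gt;Putting the parameters above together, a minimal data-node &lt;code&gt;elasticsearch.yml&lt;/code&gt; might look like the sketch below. All names and hosts are illustrative placeholders, not values from this environment:&lt;/p&gt;

```yaml
# Sketch of /etc/elasticsearch/elasticsearch.yml on a data node after enrollment.
# All names/hosts are example placeholders.
cluster.name: my-elk-cluster            # must match the cluster's elasticsearch.yml
node.name: data-node-01                 # unique per node in the cluster
network.host: data-node-01.example.com
http.port: 9200

# http.host and transport.host are written by elasticsearch-reconfigure-node;
# keep them uncommented and do not edit the generated values.
discovery.seed_hosts: ["cluster-node.example.com"]
```

&lt;p&gt;Note the space after each colon: YAML requires it, so &lt;code&gt;http.port:9200&lt;/code&gt; (without the space) is not parsed as a setting.&lt;/p&gt;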



&lt;h3&gt;
  
  
  1.2.3.Starting the data node and confirming the connection with cluster
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Running Elasticsearch as a service is the suggested approach; the RPM installation leaves this only partially configured by default. &lt;/li&gt;
&lt;li&gt;Running the following will register the service with systemd
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;elasticsearch.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Start / Stop the service
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start elasticsearch.service
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl stop elasticsearch.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Logs/ &lt;code&gt;systemctl&lt;/code&gt; logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the service failed to start, the initial logs for the service can be found in&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;journalctl &lt;span class="nt"&gt;--u&lt;/span&gt; elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;ElasticSearch Package Logs
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;less /var/logs/elasticsearch/elasticsearch.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check the node's connection status with the cluster using cURL.&lt;/li&gt;
&lt;li&gt;This command must be executed on the cluster.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--cacert&lt;/span&gt; /etc/elasticsearch/certs/http_ca.crt &lt;span class="nt"&gt;-u&lt;/span&gt; elastic https://&amp;lt;DNS of Cluster System&amp;gt;:9200/_cluster/health?pretty
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The above command will ask for the “elastic” user's password. Either enter the password collected from the installation output above, or reset it using the earlier section (Reset the password for required users.)&lt;/li&gt;
&lt;li&gt;Expected sample output for the cluster&amp;lt;&amp;gt;node configuration is:&lt;/li&gt;
&lt;li&gt;Check the number_of_nodes param; it should show the count of connected nodes (the cluster itself is also a node).&lt;/li&gt;
&lt;li&gt;If anything is unexpected, check the new node's logs first and the cluster's logs next.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"cluster_name"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cluster-name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"yellow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timed_out"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"number_of_nodes"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;⇐⇐⇐⇐⇐⇐⇐⇐⇐&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Pay&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;attention&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;here&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;must&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;increase.&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"number_of_data_nodes"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"active_primary_shards"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"active_shards"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"relocating_shards"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"initializing_shards"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"unassigned_shards"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"delayed_unassigned_shards"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"number_of_pending_tasks"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"number_of_in_flight_fetch"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"task_max_waiting_in_queue_millis"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"active_shards_percent_as_number"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;50.0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
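&lt;p&gt;To script this verification (e.g. as a post-install check), the node count can be pulled out of the health response with standard tools. This is a sketch run against the sample JSON above rather than a live cluster; in practice, pipe the cURL output into the same filter (or use &lt;code&gt;jq .number_of_nodes&lt;/code&gt; if jq is available):&lt;/p&gt;

```shell
# Extract number_of_nodes from a _cluster/health response.
# The JSON here is the sample from this document, not live output.
health='{
  "cluster_name" : "cluster-name",
  "status" : "yellow",
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1
}'

nodes=$(printf '%s\n' "$health" | grep -o '"number_of_nodes" : [0-9]*' | grep -o '[0-9]*$')
echo "number_of_nodes=$nodes"   # should match the number of enrolled nodes
```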



&lt;h3&gt;
  
  
  1.2.4.Common Errors in connection establishment between remote systems
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The cluster's &lt;code&gt;elasticsearch.yml&lt;/code&gt; doesn't have &lt;code&gt;http.host&lt;/code&gt; set to the DNS, or the parameter is commented out.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;9200&lt;/code&gt; &amp;amp; &lt;code&gt;9300&lt;/code&gt; of the cluster are not open; please raise a firewall (FW) request.&lt;/li&gt;
&lt;li&gt;The cluster's &lt;code&gt;/etc/elasticsearch/certs&lt;/code&gt; folder was modified, or the security config in the yml was modified. &lt;/li&gt;
&lt;li&gt;The cluster is not running as a service. &lt;/li&gt;
&lt;/ul&gt;
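&lt;p&gt;The first two causes above can be ruled out quickly with a TCP probe before digging into certs or config. A minimal sketch using bash's &lt;code&gt;/dev/tcp&lt;/code&gt; (the cluster hostname is a placeholder; port 9 is probed at the end only to demonstrate the closed branch):&lt;/p&gt;

```shell
# Probe whether a TCP port is reachable, e.g. the cluster's 9200/9300.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"   # filtered/closed: raise the FW request
  fi
}

# check_port cluster-node.example.com 9200
# check_port cluster-node.example.com 9300
check_port 127.0.0.1 9   # discard port, normally closed
```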

&lt;h1&gt;
  
  
  2. Kibana
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.elastic.co/guide/en/kibana/current/rpm.html" rel="noopener noreferrer"&gt;Kibana RPM Docs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2.1.Import the Elasticsearch GPG Key
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rpm &lt;span class="nt"&gt;--import&lt;/span&gt; https://artifacts.elastic.co/GPG-KEY-elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2.2.Installing from the RPM repository
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a file called &lt;code&gt;kibana.repo&lt;/code&gt; in the &lt;code&gt;/etc/yum.repos.d/&lt;/code&gt; directory for RedHat-based distributions
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;kibana-8.x]
&lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Kibana repository &lt;span class="k"&gt;for &lt;/span&gt;8.x packages
&lt;span class="nv"&gt;baseurl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://artifacts.elastic.co/packages/8.x/yum
&lt;span class="nv"&gt;gpgcheck&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nv"&gt;gpgkey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://artifacts.elastic.co/GPG-KEY-elasticsearch
&lt;span class="nv"&gt;enabled&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nv"&gt;autorefresh&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rpm-md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Please add &lt;code&gt;proxy=example.com&lt;/code&gt; to the repo file if you have a proxy set up&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install package&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there is any error in the above installation or trouble in downloading and installing automatically, please refer to this &lt;a href="https://www.elastic.co/guide/en/kibana/current/rpm.html#rpm-key" rel="noopener noreferrer"&gt;manual installation guide&lt;/a&gt; from the docs. &lt;/p&gt;

&lt;h2&gt;
  
  
  2.3.Configuring Kibana with Cluster Node Connection
&lt;/h2&gt;

&lt;p&gt;As explained above, from 8.x every Elasticsearch installation comes with self-signed certificates and TLS/SSL configured by default. Hence, running the following scripts before starting Kibana will set up most of the settings automatically. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open port 5601 in the firewall for the Kibana UI to be reachable&lt;/li&gt;
&lt;li&gt;&amp;lt; FW Details to be updated.&amp;gt;&lt;/li&gt;
&lt;li&gt;Generate the Kibana token on the cluster system. Run the following in the cluster terminal
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token &lt;span class="nt"&gt;-s&lt;/span&gt; kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Copy the above token and execute the following in the kibana system.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Kibana must not be started before this command; otherwise a fresh install may be needed.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/share/kibana/bin/kibana-setup –enrollment-token &amp;lt;token&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The above command will configure and generate the security certs for kibana&amp;lt;&amp;gt;cluster communication with an access token (not username/password), and updates the following in &lt;code&gt;kibana.yml&lt;/code&gt;: &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Token&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Elasticsearch host&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Certs config&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;If any errors are raised in the above config, rectify the issue using “Common Errors in connection establishment between remote systems”; if not solved, go for a fresh install of Kibana. &lt;/li&gt;
&lt;li&gt;If the above script ran successfully, proceed to configure the following parameters in the new Kibana's &lt;code&gt;/etc/kibana/kibana.yml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;server.port:5601&lt;/span&gt;
&lt;span class="na"&gt;server.name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kibana-host-name-can-be-anything"&lt;/span&gt;
&lt;span class="s"&gt;server.publicBaseUrl:”DNS of kibana system with port sample.com:5601” &amp;lt;= This can be optional&lt;/span&gt;
&lt;span class="s"&gt;server.host:”&amp;lt;kibana system DNS&amp;gt;”&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Observe the following things, they must be auto-set with above script
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;elasticsearch.hosts:&amp;lt;this-must-be-auto-configured-with-above-script as [“cluster-ip/dns”]&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;elasticsearch.serviceAccountToken:&amp;lt;this-must-be-auto-set-with-above-script-token&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;If the Kibana web application has to serve over HTTPS and needs TLS certificates, store the &lt;code&gt;.crt&lt;/code&gt; and &lt;code&gt;.pem&lt;/code&gt; files in a directory near the Kibana install and update as
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;server.ssl.enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;server.ssl.certificate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/path/to/your/server.crt ⇐⇐⇐⇐⇐ CA cert comes here&lt;/span&gt;
&lt;span class="na"&gt;server.ssl.key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/path/to/your/server.key  ⇐⇐⇐⇐⇐ PEM file comes here&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
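&lt;p&gt;For a non-production test of the &lt;code&gt;server.ssl.*&lt;/code&gt; settings above, a throwaway self-signed pair can be generated with openssl (the CN and file names here are placeholders; production traffic should use a properly issued certificate, as covered in section 2.5):&lt;/p&gt;

```shell
# Generate a self-signed cert/key pair for testing Kibana's HTTPS settings.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout kibana-test.key -out kibana-test.crt \
  -days 30 -subj "/CN=kibana.example.com" 2>/dev/null

# Point server.ssl.certificate at kibana-test.crt and server.ssl.key
# at kibana-test.key, then inspect what was generated:
openssl x509 -in kibana-test.crt -noout -subject
```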



&lt;h3&gt;
  
  
  2.4.Starting Kibana and confirming the connection with the Cluster
&lt;/h3&gt;

&lt;p&gt;Configure Kibana as a service using:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;kibana.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Start/ Stop service using
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start kibana.service
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl stop kibana.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Logs/&lt;code&gt;systemctl&lt;/code&gt; logs.
If Kibana failed to start, the initial logs for the service can be found with
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;journalctl &lt;span class="nt"&gt;--u&lt;/span&gt; kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Kibana Package Logs
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;less /var/logs/kibana/kibana.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;If all FW connections are in proper order and kibana is running without any errors, open &lt;code&gt;https://&amp;lt;kibana-dns&amp;gt;:5601&lt;/code&gt;. Kibana UI should be presented. Else check the logs for connection issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We are establishing the connection with serviceToken between elastic-search cluster and kibana (this is done using Configuring Kibana with Cluster Node Connection). Hence avoid adding username and password in &lt;code&gt;kibana.yml&lt;/code&gt; again as it's not required.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  2.5 Kibana SSL Certificates for Browser Traffic
&lt;/h3&gt;

&lt;p&gt;Collect/purchase the SSL certificates for HTTPS browser traffic and update the following parameters in &lt;code&gt;kibana.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;server.ssl.enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;server.ssl.key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/kibana/public_certs_kibana/&amp;lt;cert-name&amp;gt;.key&lt;/span&gt; &lt;span class="c1"&gt;# key file of system from SSL &lt;/span&gt;
&lt;span class="na"&gt;server.ssl.certificate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/kibana/public_certs_kibana/&amp;lt;kibana-pvt-key&amp;gt;.pem&lt;/span&gt; &lt;span class="c1"&gt;#ca cert file&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  3.Logstash
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.elastic.co/guide/en/logstash/current/installing-logstash.html" rel="noopener noreferrer"&gt;Ref: Logstas RPM&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3.1.Import the Elasticsearch GPG Key
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rpm &lt;span class="nt"&gt;--import&lt;/span&gt; https://artifacts.elastic.co/GPG-KEY-elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3.2.Installing from the RPM repository
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a file called &lt;code&gt;logstash.repo&lt;/code&gt; in the &lt;code&gt;/etc/yum.repos.d/&lt;/code&gt; directory for RedHat based distributions
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;logstash-8.x]
&lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Elastic repository &lt;span class="k"&gt;for &lt;/span&gt;8.x packages
&lt;span class="nv"&gt;baseurl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://artifacts.elastic.co/packages/8.x/yum
&lt;span class="nv"&gt;gpgcheck&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nv"&gt;gpgkey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://artifacts.elastic.co/GPG-KEY-elasticsearch
&lt;span class="nv"&gt;enabled&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nv"&gt;autorefresh&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rpm-md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Please add &lt;code&gt;proxy=example.com&lt;/code&gt; to the repo file if you have a proxy set up&lt;/li&gt;
&lt;li&gt;Install package
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there is any error in the above installation or trouble in downloading and installing automatically, please refer to this &lt;a href="https://www.elastic.co/guide/en/logstash/current/installing-logstash.html" rel="noopener noreferrer"&gt;manual installation guide&lt;/a&gt; from the docs. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If the above RPM auto-installation steps don’t work, try downloading the required version from the website manually:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-o&lt;/span&gt; https://artifacts.elastic.co/downloads/logstash/logstash-8.8.2-x86_64.rpm
&lt;span class="nb"&gt;sudo &lt;/span&gt;rpm &lt;span class="nt"&gt;-i&lt;/span&gt; logstash-8.8.2-x86_64.rpm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3.3.Configuring Logstash with Cluster Node Connection
&lt;/h2&gt;

&lt;p&gt;For TLS/SSL secure connection between the logstash &amp;lt;&amp;gt; cluster, we need to copy the certs folder present in the cluster &lt;code&gt;/etc/elasticsearch&lt;/code&gt; to the logstash installation location. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure the following parameters
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;node.name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;logstash-node-name&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;path.data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/logstash&lt;/span&gt;
&lt;span class="na"&gt;path.config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/logstash/conf.d/*.conf&lt;/span&gt;
&lt;span class="na"&gt;path.logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/logs/logstash&lt;/span&gt;
&lt;span class="na"&gt;xpack.monitoring.enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;xpack.monitoring.elasticsearch.username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logstash_system&lt;/span&gt;
&lt;span class="na"&gt;xpack.monitoring.elasticsearch.password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;password-set-in-cluster-for-that-user&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;xpack.monitoring.elasticsearch.hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://&amp;lt;cluster&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;DNS&amp;gt;:9200"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;xpack.monitoring.elasticsearch.ssl.certificate_authority&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/etc/logstash/certs_cluster/http_ca.crt"&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;xpack.monitoring.elasticsearch.ssl.verification_mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;certificate&lt;/span&gt;
&lt;span class="na"&gt;xpack.monitoring.elasticsearch.sniffing&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;xpack.monitoring.collection.interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
&lt;span class="na"&gt;xpack.monitoring.collection.pipeline.details.enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Write one connection pipeline for the logstash, in &lt;code&gt;conf.d&lt;/code&gt; dir,  with &lt;code&gt;logstash.conf&lt;/code&gt; file name
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input {
 beats {
    client_inactivity_timeout =&amp;gt; 1200
    port =&amp;gt; 5044
  }
}

output {
        stdout { codec =&amp;gt; rubydebug }
        elasticsearch {
        cacert =&amp;gt; '/etc/logstash/certs_cluster/http-ca.crt'
        hosts =&amp;gt; ["https://0.0.0.0:9200", "https://1.1.1.1:9200", "
https://2.2.2.2:9200"]
        user =&amp;gt; user_name
        password =&amp;gt; 'password'
        ilm_enabled =&amp;gt; true
        ilm_rollover_alias =&amp;gt; logstash
        }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; cacert in the above &lt;code&gt;logstash.conf&lt;/code&gt; and &lt;code&gt;certificate_authority&lt;/code&gt; value in the &lt;code&gt;logstash.yml&lt;/code&gt; has to be the same. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;And for security reasons, it's recommended to add the CA cert to the Logstash JVM keystore so that self-signed certs are accepted. Caution: otherwise the system cannot connect to the cluster at all. &lt;/li&gt;
&lt;li&gt;Here we can use the CA cert file we copied from the Elasticsearch cluster, &lt;/li&gt;
&lt;li&gt;or we can generate one directly over the SSL connection.&lt;/li&gt;
&lt;li&gt;Generating one on the fly and passing it to the keystore looks like this: &lt;/li&gt;
&lt;li&gt;Go to the JDK bin location, &lt;code&gt;/usr/share/logstash/jdk/bin&lt;/code&gt;, as we need the keystore tool
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; | openssl s_client &lt;span class="nt"&gt;-connect&lt;/span&gt; &amp;lt;elastic-search-cluster&amp;gt;:9200 | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-ne&lt;/span&gt; &lt;span class="s1"&gt;'/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ./ca_logstash.cer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The above command generates a &lt;code&gt;ca_logstash.cer&lt;/code&gt; file containing only the certificate block between the BEGIN/END markers. &lt;/li&gt;
&lt;li&gt;Now run the following to import that certificate into the Java keystore directly.&lt;/li&gt;
&lt;/ul&gt;
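&lt;p&gt;The &lt;code&gt;sed&lt;/code&gt; filter used above simply keeps everything between the PEM BEGIN/END markers. It can be sanity-checked offline against any local certificate instead of a live cluster connection (the file names here are illustrative):&lt;/p&gt;

```shell
# Create a throwaway local certificate to stand in for the cluster's cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -out sample.crt -days 1 -subj "/CN=sample" 2>/dev/null

# Same extraction as used against the live cluster above.
sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' sample.crt > ca_sample.cer

# The extracted block is still a parseable certificate.
openssl x509 -in ca_sample.cer -noout -subject
```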

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The default password for keystore is &lt;code&gt;changeit&lt;/code&gt; unless we modify it after installation.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;keytool-path-as-per-system&amp;gt;/keytool &lt;span class="nt"&gt;-import&lt;/span&gt; &lt;span class="nt"&gt;-alias&lt;/span&gt; saelk &lt;span class="nt"&gt;-file&lt;/span&gt; ca_logstash.cer &lt;span class="nt"&gt;-keystore&lt;/span&gt; /usr/share/logstash/jdk/lib/security/cacerts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Testing the configuration&lt;/li&gt;
&lt;li&gt;Before running the service/process, it's important to confirm that &lt;code&gt;logstash.yml&lt;/code&gt; is configured properly; hence use
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; logstash /usr/share/logstash/bin/logstash &lt;span class="nt"&gt;--path&lt;/span&gt;.settings /etc/logstash &lt;span class="nt"&gt;-t&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This should result in &lt;code&gt;Configuration OK&lt;/code&gt;; otherwise, fix the config until you get this.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Configuration OK
&lt;span class="o"&gt;[&lt;/span&gt;INFO &lt;span class="o"&gt;][&lt;/span&gt;logstash.runner         &lt;span class="o"&gt;]&lt;/span&gt; Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Then test the &lt;code&gt;*.conf&lt;/code&gt; file with Logstash as a process, so that the server/service is not impacted &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: &lt;code&gt;conf.d/&lt;/code&gt; should be owned by the &lt;code&gt;logstash&lt;/code&gt; user and group; fix any other path ownership issues based on the errors/logs.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; logstash /usr/share/logstash/bin/logstash &lt;span class="nt"&gt;-f&lt;/span&gt; /etc/logstash/conf.d/&amp;lt;file-name&amp;gt;.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The above command starts Logstash as a process. If the only ERROR logs you see are about connection issues, the pipeline itself is working. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start the logstash service and wait for a while. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can confirm the connection from the Kibana UI: &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Left Menu =&amp;gt; Stack Monitoring =&amp;gt; Nodes (click “set up basic config” if you don't see any)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, if connected, you will see all the Elasticsearch, Kibana, and Logstash nodes. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  4. Filebeat Agent
&lt;/h1&gt;

&lt;p&gt;To enable monitoring for any server, install Filebeat on that server&lt;/p&gt;

&lt;h2&gt;
  
  
  4.1  Download Filebeat
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Download Filebeat from the Elastic site (Lightweight Log Analysis | Elastic)&lt;/li&gt;
&lt;li&gt;Extract the Filebeat zip file&lt;/li&gt;
&lt;li&gt;Create a user and group for Filebeat using the commands below&lt;/li&gt;
&lt;li&gt;As the root user, do the following:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;-m&lt;/span&gt; filebeat &lt;span class="nt"&gt;-p&lt;/span&gt; filebeat
&lt;span class="nb"&gt;sudo &lt;/span&gt;groupadd filebeat
&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-G&lt;/span&gt; filebeat filebeat

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Change the Owner of extracted folder from root to filebeat user
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chown &lt;/span&gt;filebeat:filebeat &lt;span class="nt"&gt;-R&lt;/span&gt; filebeat-7.8.1-linux-x86_64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add the following lines to the &lt;code&gt;filebeat.yml&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;filebeat.inputs:
- type: log
  enabled: true
  encoding: iso8859-1
  paths:
    - /var/log/icinga2/icinga2.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  tags: ["icinga-log"]
output.logstash:
        hosts: ["dns_name:5044"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5. Start Filebeat in the background&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/etc/filebeat-7.8.1-linux-x86_64/filebeat &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To send logs from Filebeat to the Logstash server, raise a firewall request&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: if the systems are on the same network, no firewall request is needed; adding a firewall entry in iptables is enough&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Destination: IP address&lt;/li&gt;
&lt;li&gt;Port: 5044 (default port for Logstash)&lt;/li&gt;
&lt;li&gt;Create a pipeline configuration in Logstash&lt;/li&gt;
&lt;li&gt;Go to the path /etc/logstash/conf.d on the Logstash server&lt;/li&gt;
&lt;li&gt;Create a file using the command: vi filename.conf
Add the content below, adjusted to your system's applications:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;filter {
  if [log][file][path] =~ "/path/to/log/filename.log" {
    grok {
           match =&amp;gt; { "message" =&amp;gt; "%{TIMESTAMP_ISO8601:date} %{LOGLEVEL} \(%{DATA:worker_thread}\)\[%{SPACE}%{JAVACLASS:class}:%{GREEDYDATA:message_body}" }
      }
  }
}
output {
  if "icinga-log" in [tags] {
    elasticsearch {
      cacert =&amp;gt; '/etc/logstash/elasticsearch-ca.crt'
      hosts =&amp;gt; ["https://0.0.0.0:9200", "https://1.1.1.1:9200", "https://2.2.2.2:9200"]
      user =&amp;gt; 'user_name'
      password =&amp;gt; 'password'
      ilm_enabled =&amp;gt; true
      ilm_pattern =&amp;gt; "{now/d}-000001"
      ilm_rollover_alias =&amp;gt; "icinga-log"
    }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Make sure the grok pattern matches the log format on your server, and that the tags are the same in both the &lt;code&gt;filebeat.yml&lt;/code&gt; and &lt;code&gt;filename.conf&lt;/code&gt; files&lt;/li&gt;
&lt;li&gt;Now you can see the logs for your server in Kibana.&lt;/li&gt;
&lt;/ul&gt;
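Before wiring up the full pipeline, the timestamp anchor used in `multiline.pattern` can be sanity-checked against a sample line from your log with plain `grep`. The sample line below is hypothetical; substitute a real line from your own log file:

```shell
# Hypothetical sample line in the icinga2-style format assumed above
line='2024-02-20 06:35:14 INFO (worker-1)[ com.example.App: started'

# Same anchor regex as the multiline.pattern in filebeat.yml
if echo "$line" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'; then
  echo "pattern matches"
else
  echo "pattern does NOT match"
fi
```

If the pattern does not match, multiline stack traces will be split into separate events, so it is worth checking this first.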

</description>
      <category>elasticsearch</category>
      <category>devops</category>
      <category>monitoring</category>
      <category>security</category>
    </item>
    <item>
      <title>GitLab on premise to GitHub cloud Migration</title>
      <dc:creator>Bhargavi Chiluka</dc:creator>
      <pubDate>Tue, 06 Feb 2024 13:07:28 +0000</pubDate>
      <link>https://dev.to/bhargavi_chilukaa/gitlab-on-premise-to-github-cloud-migration-55ld</link>
      <guid>https://dev.to/bhargavi_chilukaa/gitlab-on-premise-to-github-cloud-migration-55ld</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Per the latest public notice from GitLab, security updates and maintenance for on-premise server instances will stop in 2024.&lt;/p&gt;

&lt;p&gt;Once this takes effect, we would be left with on-premise code-hosting servers receiving no updates, and on-demand maintenance would become a special case to consider.&lt;/p&gt;

&lt;h2&gt;
  
  
  Goal
&lt;/h2&gt;

&lt;p&gt;Understanding this situation, let's see how we can move our on-premise code hosting to our organization's GitHub Enterprise platform.&lt;/p&gt;

&lt;p&gt;The migration process is planned to achieve:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Zero loss of commit history&lt;/li&gt;
&lt;li&gt;No loss of access for teams/groups&lt;/li&gt;
&lt;li&gt;No halt or impact to our current deployment strategies.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The migration process involves&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;160+ repositories,&lt;/li&gt;
&lt;li&gt;85+ deployment pipelines, and&lt;/li&gt;
&lt;li&gt;3 Jenkins server instances,&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;which makes it a critical and complex migration with no room for error.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategy
&lt;/h2&gt;

&lt;p&gt;Considering all the use cases, I drafted the migration strategy as follows…&lt;/p&gt;

&lt;p&gt;Since we are migrating from ON PREMISE to CLOUD, we cannot use the migration tools provided by either GitLab or GitHub…&lt;/p&gt;

&lt;p&gt;Hence, it's an entirely manual process with a lot of repeated steps, which evolved into the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repo to an intermediate device that can reach both the private on-premise instance and the cloud over the internet.&lt;/li&gt;
&lt;li&gt;Detach the on-premise server configuration from the cloned repo.&lt;/li&gt;
&lt;li&gt;Create a new repo under the organization's GitHub Cloud account with the necessary naming conventions and role-based access.&lt;/li&gt;
&lt;li&gt;Point the cloned repo's remote configuration at the new repo.&lt;/li&gt;
&lt;li&gt;Synchronise the code base with all branches and commit history. (A tough job)&lt;/li&gt;
&lt;li&gt;So the deployment process does not fail, create the necessary public/private key pair and store it in the repo-level deploy-key store.&lt;/li&gt;
&lt;li&gt;Generate a unique SSH config and clone URL per repo per key to avoid key-collision issues on CI/CD instances (Jenkins).&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Cloning the repo
&lt;/h2&gt;

&lt;p&gt;As explained above, the clone device must be interoperable between the two code-hosting platforms for a seamless migration.&lt;/p&gt;

&lt;p&gt;Cloning is achieved with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone &amp;lt;ssh-based git clone URL&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Detach the server configuration
&lt;/h2&gt;

&lt;p&gt;This is as simple as deleting a parameter from the configuration after the full clone&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git remote remove &amp;lt;remote-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
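If you want to confirm the detach behaves as expected before touching a real repo, here is a minimal self-contained sketch using a throwaway repo (the remote URL is a hypothetical stand-in):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"

git remote add origin git@example.com:org/repo.git   # stand-in for the on-premise remote
git remote remove origin                             # the detach step
git remote                                           # prints nothing: no remotes left
```

After the remove, `git remote -v` on the real clone should likewise show no remotes.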



&lt;h2&gt;
  
  
  3. Create new repo in cloud with RBAC
&lt;/h2&gt;

&lt;p&gt;Pertaining to organization standard practice, we should either create private or internal repo. Considering requirements of access, I opted for private repo creation.&lt;/p&gt;

&lt;p&gt;But with a huge number of repositories, creating each repo through the UI is tedious, and the git CLI offers no support for it.&lt;/p&gt;

&lt;p&gt;Hence, I adopted the portable GitHub CLI (&lt;code&gt;gh&lt;/code&gt;) and logged in with GitHub credentials to achieve the task via terminal scripts, as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gh repo create &amp;lt;org&amp;gt;/&amp;lt;repo-name&amp;gt; --private --source=&amp;lt;cloned-repo-path&amp;gt; --team=&amp;lt;team-handle&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
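With 160+ repos, the creation is worth scripting. A sketch (the repo names, org `my-org`, and team `my-team` are hypothetical) that echoes each `gh` command as a dry run; drop the `echo` to actually execute:

```shell
set -e
# One repo name per line; in practice this list comes from the GitLab export
printf '%s\n' repo-one repo-two > repos.txt

while read -r repo; do
  echo gh repo create "my-org/$repo" --private \
    --source="./$repo" --team="my-team"
done < repos.txt
```

The dry-run output lets you review every command before any repo is created in the org.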



&lt;h2&gt;
  
  
  4. Establish the connection with local repo to new repo
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;--source&lt;/code&gt; flag above actually created a sync problem, since it considers only the main branch. Our target of pushing all branches with all commits isn't met by it.&lt;/p&gt;

&lt;p&gt;Hence, manually adding the remote and pushing to the new repo is required, done with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd &amp;lt;cloned-repo-path&amp;gt;
git remote add origin &amp;lt;new-repo-ssh-url&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
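A self-contained sanity check of the re-pointing, again with a throwaway repo and a hypothetical new-repo URL:

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"

# Point origin at the (hypothetical) new GitHub repo
git remote add origin git@github.com:my-org/my-new-repo.git
git remote -v    # shows origin fetch/push URLs pointing at the new repo
```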



&lt;h2&gt;
  
  
  5. Syncing all branches and commits
&lt;/h2&gt;

&lt;p&gt;As explained above, a clone always checks out only the main/default branch.&lt;/p&gt;

&lt;p&gt;Though all the branches are cloned locally, they cannot be pushed directly to the new repo unless they are checked out.&lt;/p&gt;

&lt;p&gt;Reason: when something is cloned, git downloads the branches as objects and stores them in the respective tree; they are not materialised as local branches unless you check them out. Hence a direct push from local pushes only the default/main branch.&lt;/p&gt;

&lt;p&gt;From the above, it's clear we would have to check out every branch to push it to the new repo. But how do we know how many branches there are? Listing all branches and looping over them is brute force and a bad idea.&lt;/p&gt;

&lt;p&gt;So instead we force-push all the remote refs directly, without any local checkout, as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push --mirror
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
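The effect of `--mirror` can be seen in a self-contained demo that uses a throwaway local bare repository in place of GitHub (all paths and branch names here are illustrative):

```shell
set -e
tmp=$(mktemp -d)

# A source repo with two branches, standing in for the on-premise clone
git init -q "$tmp/src"
cd "$tmp/src"
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m init
git branch -q feature

# A bare repo standing in for the new GitHub repo
git init -q --bare "$tmp/new-repo.git"
git remote add origin "$tmp/new-repo.git"

git push -q --mirror            # pushes every branch (and tag) in one go
git ls-remote --heads origin    # both branches now exist on the "new" repo
```

Comparing `git ls-remote --heads origin` against the old server's branch list is a quick way to verify the sync on real repos.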



&lt;h2&gt;
  
  
  6. Deploy Keys Setup
&lt;/h2&gt;

&lt;p&gt;Since we have 160+ repos used across 85+ pipelines, and the servers were on premise earlier, we had the SSH key of one user account added to the Jenkins instances to make code pulls succeed.&lt;/p&gt;

&lt;p&gt;But with GitHub Cloud involved, the same strategy became impossible to use, given the access-management and access-lifecycle policies of the org.&lt;/p&gt;

&lt;p&gt;This ruled out, in turn:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Adding SSH keys to existing developer accounts&lt;/li&gt;
&lt;li&gt;A bot user account with SSH setup&lt;/li&gt;
&lt;li&gt;A personal access token for either of the above&lt;/li&gt;
&lt;li&gt;A fine-grained access token for either of the above.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That left us with the last option: a repo-level, read-only deploy-key setup.&lt;/p&gt;

&lt;p&gt;As mentioned, these keys are&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Repo level&lt;/li&gt;
&lt;li&gt;Read only Access controlled&lt;/li&gt;
&lt;li&gt;Unique to usage. (One device-one key)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hence the deploy key addition is handled via portable Github Cli as follows&lt;/p&gt;

&lt;p&gt;First, create a passwordless key&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t rsa -b 4096 -N "" -f &amp;lt;path&amp;gt;/&amp;lt;repo-name-as-certificate-name&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
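A quick self-contained check of the key generation (the temp directory and key name `my-repo` are illustrative stand-ins for your path and repo name):

```shell
set -e
dir=$(mktemp -d)

# -N "" gives a passwordless key, as required for unattended CI/CD pulls
ssh-keygen -q -t rsa -b 4096 -N "" -f "$dir/my-repo"

ls "$dir"   # the private key (my-repo) and public key (my-repo.pub)
```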



&lt;p&gt;The above generates a private/public key pair at the specified path.&lt;/p&gt;

&lt;p&gt;Add the public key to the repo via the CLI&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gh repo deploy-key add &amp;lt;path-to-certificate&amp;gt;/&amp;lt;repo-name-as-certificate&amp;gt;.pub --repo &amp;lt;org&amp;gt;/&amp;lt;repo_name&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. Generate unique SSH Config to avoid key collision.
&lt;/h2&gt;

&lt;p&gt;With a unique deploy key for each repo, the git clone command cannot automatically pick the right key for a repo, even if we dump all the keys in the relative .ssh directory. This leads to time-out issues while cloning.&lt;/p&gt;

&lt;p&gt;To solve this problem, we must tell the git clone command which key to use while cloning.&lt;/p&gt;

&lt;p&gt;This might be feasible for a limited number of use cases, but not for automation stages like Jenkins.&lt;/p&gt;

&lt;p&gt;For scalability, we can instead write an SSH configuration that uses a different key for each host (domain).&lt;/p&gt;

&lt;p&gt;But with every repo on the same GitHub host, how do we write different host names for different repos? This is solved with host-name resolution in the SSH config.&lt;/p&gt;

&lt;p&gt;As a whole, we write a config telling SSH to use key1 for test1.com, and then have SSH resolve test1.com to github.com, with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host test1.com
    HostName github.com
    IdentityFile &amp;lt;path-to-key-file&amp;gt;
    # If ci/cd is behind a firewall, add a proxy
    ProxyCommand nc --proxy-type socks5 --proxy &amp;lt;proxy-address&amp;gt; %h %p
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
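Generating one such Host entry per repo is easy to script. A sketch (the repo names and key paths are hypothetical) that writes to a temp file instead of `~/.ssh/config`, so it is safe to run as-is:

```shell
set -e
cfg=$(mktemp)

for repo in repo-one repo-two; do
  cat >> "$cfg" <<EOF
Host github-$repo.com
    HostName github.com
    IdentityFile ~/.ssh/$repo
EOF
done

grep -c '^Host github-' "$cfg"   # → 2 (one Host alias per repo)
```

Pointing the loop at the real repo list and appending to `~/.ssh/config` gives each pipeline its own alias and key with no collisions.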



&lt;p&gt;And the git clone command that picks up the above config looks like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@test1.com:&amp;lt;org&amp;gt;/&amp;lt;new-repo-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The whole strategy explained above looks like the following for a single repo, looped over the list of repos to be migrated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#step 1
git clone &amp;lt;on premise server URL&amp;gt;/&amp;lt;repo-name&amp;gt;
#step 2
cd &amp;lt;repo-name&amp;gt;
git remote remove origin
cd ..
#step 3
gh repo create &amp;lt;org&amp;gt;/&amp;lt;new-repo-name&amp;gt; --private --source=&amp;lt;cloned-repo-path&amp;gt; --team=&amp;lt;team-handle&amp;gt;
#step 4
cd &amp;lt;repo-name&amp;gt;
git remote add origin git@github.com:&amp;lt;org&amp;gt;/&amp;lt;new-repo-name&amp;gt;.git
#step 5
git push --mirror
#step 6
ssh-keygen -t rsa -b 4096 -N "" -f &amp;lt;path&amp;gt;/&amp;lt;new-repo-name&amp;gt;
cd &amp;lt;repo-name&amp;gt;
# if we are in the repo dir, gh will auto-pick the correct repo from the origin config
gh repo deploy-key add &amp;lt;path-to-certificate&amp;gt;/&amp;lt;new-repo-name&amp;gt;.pub

#step 7
# this config needs to be stored in the relevant ssh config of the device/user
Host github-&amp;lt;new-repo-name&amp;gt;.com
    HostName github.com
    IdentityFile &amp;lt;path-to-key-file&amp;gt;/&amp;lt;new-repo-name-as-private-key-cert&amp;gt;
    # If ci/cd is behind a firewall, add a proxy
    ProxyCommand nc --proxy-type socks5 --proxy &amp;lt;proxy-address&amp;gt; %h %p

# updated git clone command with the new URL to be used in the pipeline
git clone git@github-&amp;lt;new-repo-name&amp;gt;.com:&amp;lt;org&amp;gt;/&amp;lt;new-repo-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>github</category>
      <category>gitlab</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
