<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Peter Shekindo</title>
    <description>The latest articles on DEV Community by Peter Shekindo (@petershekiondo).</description>
    <link>https://dev.to/petershekiondo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F489451%2F38aade7f-f9d0-4c3c-8576-6a275d8edf95.jpeg</url>
      <title>DEV Community: Peter Shekindo</title>
      <link>https://dev.to/petershekiondo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/petershekiondo"/>
    <language>en</language>
    <item>
      <title>How to appropriately git stash</title>
      <dc:creator>Peter Shekindo</dc:creator>
      <pubDate>Mon, 12 Sep 2022 05:32:06 +0000</pubDate>
      <link>https://dev.to/clickpesa/how-to-appropriately-git-stash-1ed3</link>
      <guid>https://dev.to/clickpesa/how-to-appropriately-git-stash-1ed3</guid>
      <description>&lt;p&gt;Imagine these scenarios&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You want to switch branches safely without worrying about the state of your working directory. You shouldn't have to commit half-finished changes every time you switch between branches. &lt;/li&gt;
&lt;li&gt;Imagine temporarily committing on the current branch, switching to a different branch, cherry-picking your commit, and then removing it from where it was originally created.  🥵🤢&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I am not done with what I am currently working on and there comes an urgent update that needs to be implemented in a production environment. &lt;/p&gt;

&lt;p&gt;🤔  should I just commit the unfinished updates and move to the production branch?&lt;/p&gt;

&lt;p&gt;🤔  which commit message should I write, “&lt;em&gt;Just committed this for the sake of not losing my updates&lt;/em&gt;”? &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s where &lt;code&gt;git stash&lt;/code&gt; becomes your Swiss Army knife, letting you multitask and constantly shift to work on something different without breaking a sweat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git stash is a powerful tool that temporarily saves changes made to tracked files without committing them, and lets you retrieve or reapply those changes at any point, regardless of which branch you are currently on&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So how do you use it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To stash the current changes, just run the &lt;code&gt;git stash&lt;/code&gt; command as shown below, and your changes are safely stashed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: &lt;code&gt;git stash&lt;/code&gt; works only on tracked files. For example, if you have added a new file to your project, that file is untracked and its changes will not be stashed. To have it tracked, stage it with the &lt;code&gt;git add&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;To confirm that all the files are tracked, run the &lt;code&gt;git status&lt;/code&gt; command as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should look like so.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FbRwcHjbcI2fBwtEl8ZATo9zucSNQKpxtlsEyLV4X_JkK10JBPOlHL8UXU9lHFt8fUyvvZI4xHA6LP1ZJxrGTMRo1nupZ7TR9udhJznE0eq_ogKEbw7OyHnqjLl-ysIrDQjgayajrV5lnYABxeErS41E-MTT_wjTrLz7Z8KN-vMqb9ZGgQEqD7U70_Q" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FbRwcHjbcI2fBwtEl8ZATo9zucSNQKpxtlsEyLV4X_JkK10JBPOlHL8UXU9lHFt8fUyvvZI4xHA6LP1ZJxrGTMRo1nupZ7TR9udhJznE0eq_ogKEbw7OyHnqjLl-ysIrDQjgayajrV5lnYABxeErS41E-MTT_wjTrLz7Z8KN-vMqb9ZGgQEqD7U70_Q" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As seen above, there are files with unstaged changes in the &lt;code&gt;/number_trivia.dart&lt;/code&gt; directory and an untracked file, &lt;code&gt;/sampleNumberTrivia.dart&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To start tracking files, we use the &lt;code&gt;git add&lt;/code&gt; command followed by a specific file name, or by a dot (&lt;code&gt;.&lt;/code&gt;) if you would like to track all untracked files. &lt;/p&gt;

&lt;p&gt;This adds tracking to a specific file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add &amp;lt;file name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This adds tracking to all untracked files&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we run &lt;code&gt;git status&lt;/code&gt; again, the output should now look different.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2Fj6h-uwXsaHJkQFL_DSpBzXsJuy_C8-m7VCLQQ6wTPGpxz2Codf5RQ03pD7-xGlsz561w8jA21cGz29Sqv1PALZRaoAHAE_ZPooq5njWxnj4GdgF0mRwgaX9XjmbSqWhcuZkPLNBREIMMPwFi-8e6ixeT_byNP787qkstjt7vVqbcnveOzSPjsDSwDQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2Fj6h-uwXsaHJkQFL_DSpBzXsJuy_C8-m7VCLQQ6wTPGpxz2Codf5RQ03pD7-xGlsz561w8jA21cGz29Sqv1PALZRaoAHAE_ZPooq5njWxnj4GdgF0mRwgaX9XjmbSqWhcuZkPLNBREIMMPwFi-8e6ixeT_byNP787qkstjt7vVqbcnveOzSPjsDSwDQ" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To view a list of all stashed changes you simply run the &lt;code&gt;git stash list&lt;/code&gt; command as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should be something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F7kePylI7T9HYVVNp-0YZHm1adQA2J_lVh_yj6I8MDvaTdYdwqGq80NGbtONIFpRFXTM1z4a5YW2OadYvoaef4vkK2h_n0svSoWdroKvMx7TFsiOqA4wXTkh8ILceXAlqVTxYq1ytF8T-hsC5sYrLIiLLGo2ry4YiVyvlVGuZD4WJEvkARVHwD9ZPeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F7kePylI7T9HYVVNp-0YZHm1adQA2J_lVh_yj6I8MDvaTdYdwqGq80NGbtONIFpRFXTM1z4a5YW2OadYvoaef4vkK2h_n0svSoWdroKvMx7TFsiOqA4wXTkh8ILceXAlqVTxYq1ytF8T-hsC5sYrLIiLLGo2ry4YiVyvlVGuZD4WJEvkARVHwD9ZPeg" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But how do you remember which stash contains what? The entries are named &lt;code&gt;stash@{0}&lt;/code&gt;, &lt;code&gt;stash@{1}&lt;/code&gt; … &lt;code&gt;stash@{n}&lt;/code&gt;, but this naming format alone is not helpful enough. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stash your changes with a message ✍️&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To make this simpler, it is possible to stash changes with a message. I usually stash my changes with a message.&lt;/p&gt;

&lt;p&gt;Run the &lt;code&gt;git stash -m&lt;/code&gt; command as shown below with a message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash -m “stashing message”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the output should be like so.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2Fnurm2uu3xDLJDoFwAl6YunxI2fhJmYJ9BuaehLD9M-OKD8cEiK1Fa-OtakIL3FKMyPeXoN_9nkvYd-3PNMFreOQUc4GS4ULEfYxt_XqcODpyIBvnDPW7PF-ue4ixJoLOo6lHlgj835-rTM4mTjdnjUiaeTnlImRdXt5Dc7R7hhGiwDfZ__ENu1i0sg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2Fnurm2uu3xDLJDoFwAl6YunxI2fhJmYJ9BuaehLD9M-OKD8cEiK1Fa-OtakIL3FKMyPeXoN_9nkvYd-3PNMFreOQUc4GS4ULEfYxt_XqcODpyIBvnDPW7PF-ue4ixJoLOo6lHlgj835-rTM4mTjdnjUiaeTnlImRdXt5Dc7R7hhGiwDfZ__ENu1i0sg" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So if we list the stashed changes using the &lt;code&gt;git stash list&lt;/code&gt; command now, the output should look as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F49SGMZ-NkYOagGkrvKPX_VIKxwfyRu1v3EjYHitvIHW54B3sTmxpkryBJ0FeQucj_hjALggxZKNhi9_wAMEEkS8L7Uzlz6LO-NhyRSX5KBOSuxATowAnWYJWBk4i7UBzYlw7YVjEiXTYyiYaOMovGM7U7QkECFAbzTrgJ-eoYFkRFCV-8E3dRT1fYw" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F49SGMZ-NkYOagGkrvKPX_VIKxwfyRu1v3EjYHitvIHW54B3sTmxpkryBJ0FeQucj_hjALggxZKNhi9_wAMEEkS8L7Uzlz6LO-NhyRSX5KBOSuxATowAnWYJWBk4i7UBzYlw7YVjEiXTYyiYaOMovGM7U7QkECFAbzTrgJ-eoYFkRFCV-8E3dRT1fYw" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the message on index &lt;code&gt;stash@{0}&lt;/code&gt;. This makes it easy to see which stash contains what.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How git stash works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we start looking at how to retrieve a stashed record, we need to understand how stashing works.&lt;/p&gt;

&lt;p&gt;Consider the following diagram&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2F00RDVliqIu_PrvkXeDqplg6cEfgZF9sHB_dq1XIgENCFMNZP6Zb10hpCejarLK9radTD7_kNRLB30BhW8TCFPkCcctZdZe1jOaRMkf8yejL9zd9z9aqz41RB0PQHDTr6UwpepYgVf70pJMP2yCj8C5hZRlY6EsGgUAQnROVS-XLB6KoPkk0BibTowA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2F00RDVliqIu_PrvkXeDqplg6cEfgZF9sHB_dq1XIgENCFMNZP6Zb10hpCejarLK9radTD7_kNRLB30BhW8TCFPkCcctZdZe1jOaRMkf8yejL9zd9z9aqz41RB0PQHDTr6UwpepYgVf70pJMP2yCj8C5hZRlY6EsGgUAQnROVS-XLB6KoPkk0BibTowA" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every time you stash changes, Git pushes the stash record onto a stack, so the latest stash sits at the top, identified by index 0 (&lt;code&gt;stash@{0}&lt;/code&gt;). This stack ordering plays a key role when you use the &lt;code&gt;apply&lt;/code&gt; and &lt;code&gt;pop&lt;/code&gt; options to retrieve a stash.&lt;/p&gt;
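&lt;p&gt;To make the stack behavior concrete, here is a minimal, self-contained sketch (the paths, file names, and messages are hypothetical). After stashing twice, the most recent record sits at the top as &lt;code&gt;stash@{0}&lt;/code&gt;:&lt;/p&gt;

```shell
# Throwaway demo repo (hypothetical path)
rm -rf /tmp/stash-stack
mkdir /tmp/stash-stack
cd /tmp/stash-stack
git init -q
git config user.email demo@example.com
git config user.name demo
echo "v1" > app.txt
git add app.txt
git commit -q -m "initial commit"
echo "first change" > app.txt
git stash -m "older work"    # goes onto the stack first...
echo "second change" > app.txt
git stash -m "newer work"    # ...and is pushed down to stash@{1} by this one
git stash list               # "newer work" is stash@{0}, "older work" is stash@{1}
```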

&lt;p&gt;&lt;strong&gt;How to retrieve stashed records?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are two ways you can retrieve stashed changes &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pop&lt;/li&gt;
&lt;li&gt;Apply&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;pop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With pop, you retrieve the changes onto the current branch and remove the record from the stash list.&lt;/p&gt;

&lt;p&gt;We use the &lt;code&gt;git stash pop&lt;/code&gt; command to pop a stashed record. It can be used in two ways: with or without specifying a targeted stash record.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash pop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since stashes are stored in a stack, the above command pops &lt;code&gt;stash@{0}&lt;/code&gt; from the stash list and applies it to the current branch. This can be seen in the example below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FuQClSZfJohVa22zYCE28tTwdIPVCxwsFqsXg-AFJhNfM68z5ZNFHU5sfstr2Q-rVwHffvPXVCkdAebe9cq2ODXdo8i5iSWnLR0eXMXmg4nxReqXxnc5or-lGortkmvc90WeraB_MZXL7nh5repk9U6HhyX5RU5JY3S5N2CcOuIhNJPgmy4djEBYLBQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FuQClSZfJohVa22zYCE28tTwdIPVCxwsFqsXg-AFJhNfM68z5ZNFHU5sfstr2Q-rVwHffvPXVCkdAebe9cq2ODXdo8i5iSWnLR0eXMXmg4nxReqXxnc5or-lGortkmvc90WeraB_MZXL7nh5repk9U6HhyX5RU5JY3S5N2CcOuIhNJPgmy4djEBYLBQ" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To pop a specific record from the stash list, we specify the record's index at the end of the &lt;code&gt;git stash pop&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash pop &amp;lt;name index&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This can be seen in the diagram below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FQDK1XAWaq8fXop24lWFuUDlwdA4z8CKsETJlK7jx65NloBTl-32omvIcJ51Os1uUXdl67zrnTcq2By6LWe77te80MAtdgExZZd8BupKKJyA4iPrc6zszFvu8pd623NL9V_kheuOpVDvmK2lc0ZIrkOFjeJI8tSp-_anI5NBCgKEz2EQcxQs0UCdQMw" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FQDK1XAWaq8fXop24lWFuUDlwdA4z8CKsETJlK7jx65NloBTl-32omvIcJ51Os1uUXdl67zrnTcq2By6LWe77te80MAtdgExZZd8BupKKJyA4iPrc6zszFvu8pd623NL9V_kheuOpVDvmK2lc0ZIrkOFjeJI8tSp-_anI5NBCgKEz2EQcxQs0UCdQMw" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apply&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;git stash apply&lt;/code&gt; command works almost the same as &lt;code&gt;git stash pop&lt;/code&gt;, with one difference: pop retrieves the record and deletes it from the stash list, while apply retrieves the record and keeps it in the stash list.&lt;/p&gt;

&lt;p&gt;Apply can also be used in two ways: with a specific target record or without one.&lt;/p&gt;

&lt;p&gt;Without a targeted record, Git will retrieve &lt;code&gt;stash@{0}&lt;/code&gt;, the most recently stashed record. &lt;/p&gt;

&lt;p&gt;Apply without a targeted record&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply a specific record&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash apply &amp;lt;name index&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
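&lt;p&gt;The difference is easy to verify: after &lt;code&gt;git stash apply&lt;/code&gt;, the changes are back in your working tree but the record is still in the stash list. A self-contained sketch (hypothetical paths, file names, and messages):&lt;/p&gt;

```shell
# Throwaway demo repo (hypothetical path)
rm -rf /tmp/stash-apply
mkdir /tmp/stash-apply
cd /tmp/stash-apply
git init -q
git config user.email demo@example.com
git config user.name demo
echo "v1" > app.txt
git add app.txt
git commit -q -m "initial commit"
echo "work in progress" > app.txt
git stash -m "wip"
git stash apply   # restores "work in progress" to the working tree...
git stash list    # ...but "wip" is still listed as stash@{0}
```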



&lt;p&gt;Before we wrap up, let us go through one remaining concept: removing stashed records.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Removing git stash&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most of the time we keep stashing records and reapplying them later without necessarily removing them from the stash list. To remove a stash record we use the &lt;code&gt;drop&lt;/code&gt; option. As with &lt;code&gt;pop&lt;/code&gt; and &lt;code&gt;apply&lt;/code&gt;, if you don't specify which record to drop, it removes the latest stashed record, &lt;code&gt;stash@{0}&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash drop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To specify which record to remove from the stash list, we use&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash drop &amp;lt;name index&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
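&lt;p&gt;Note that &lt;code&gt;drop&lt;/code&gt; discards the record without applying it, so your working tree is left untouched. A self-contained sketch (hypothetical paths, file names, and messages):&lt;/p&gt;

```shell
# Throwaway demo repo (hypothetical path)
rm -rf /tmp/stash-drop
mkdir /tmp/stash-drop
cd /tmp/stash-drop
git init -q
git config user.email demo@example.com
git config user.name demo
echo "v1" > app.txt
git add app.txt
git commit -q -m "initial commit"
echo "older change" > app.txt
git stash -m "older work"
echo "newer change" > app.txt
git stash -m "newer work"
git stash drop 'stash@{1}'  # discards "older work" without touching the working tree
git stash list              # only "newer work" remains
```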



&lt;p&gt;In case you would like to wipe all the stashed records from the stack at once, use the &lt;code&gt;clear&lt;/code&gt; option.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash clear
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is all I have for you today; I hope you find this article useful. Until next time, keep reading, keep growing! &lt;/p&gt;

&lt;p&gt;This article has been prepared on behalf of ClickPesa. For more articles like this, visit &lt;a href="https://clickpesa.hashnode.dev/" rel="noopener noreferrer"&gt;ClickPesa on Hashnode&lt;/a&gt;, &lt;a href="https://dev.to/clickpesa/multisignature-in-stellar-blockchain-k0d"&gt;ClickPesa on dev.to&lt;/a&gt; and &lt;a href="https://medium.com/clickpesa-engineering-blog" rel="noopener noreferrer"&gt;ClickPesa on Medium&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>git</category>
      <category>stash</category>
      <category>pop</category>
      <category>apply</category>
    </item>
    <item>
      <title>How to implement logging in your REST service by using Elasticsearch: PART 2.B</title>
      <dc:creator>Peter Shekindo</dc:creator>
      <pubDate>Mon, 08 Aug 2022 11:37:00 +0000</pubDate>
      <link>https://dev.to/clickpesa/how-to-implement-logging-in-your-rest-service-by-using-elasticsearch-part-2b-55o9</link>
      <guid>https://dev.to/clickpesa/how-to-implement-logging-in-your-rest-service-by-using-elasticsearch-part-2b-55o9</guid>
      <description>&lt;p&gt;This is the last section of this article series which explains how to implement logging in REST services using Elasticsearch.&lt;/p&gt;

&lt;p&gt;Before continuing with this section, I advise you to go through the previous sections first, in case you have not already. Just click the links below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;How to implement logging in your REST service by using Elasticsearch - Part 1&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How to implement logging in your REST service by using Elasticsearch — Part 2&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this last section (&lt;em&gt;I promise it is the last 😉&lt;/em&gt;), we will finish up strong 💪 with the final three steps.&lt;/p&gt;

&lt;p&gt;Step-4 Install and configure Kibana.&lt;/p&gt;

&lt;p&gt;Step-5 Install and configure Logstash&lt;/p&gt;

&lt;p&gt;Step-6 Exploring logs in Kibana dashboard&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-4 Install and configure Kibana.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;According to the &lt;a href="https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html"&gt;official documentation&lt;/a&gt;, you should install Kibana only after installing Elasticsearch. Installing in this order ensures that the components each product depends on are in place.&lt;/p&gt;

&lt;p&gt;Because you’ve already added the Elastic package source in the previous step, you can just install the remaining components of the Elastic Stack using &lt;code&gt;apt&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run the command below to enable and start the Kibana service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl enable kibana
$ sudo systemctl start kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because Kibana is configured to only listen on &lt;code&gt;localhost&lt;/code&gt;, we must set up a &lt;a href="https://www.digitalocean.com/community/tutorials/digitalocean-community-glossary#reverse-proxy"&gt;reverse proxy&lt;/a&gt; to allow external access to it. We will use Nginx for this purpose, which should already be installed on your server. You may use any web server of your choice.&lt;/p&gt;

&lt;p&gt;In this article, we will focus only on setting up Nginx as a reverse proxy; for more on how to configure reverse proxying, load balancing, buffering, and caching with Nginx, follow this link. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/understanding-nginx-http-proxying-load-balancing-buffering-and-caching"&gt;Nginx HTTP proxying.&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;First, use the &lt;code&gt;openssl&lt;/code&gt; command to create an administrative Kibana user which you’ll use to access the Kibana web portal. As an example we will name this account &lt;code&gt;kibanaadmin&lt;/code&gt;, but to ensure greater security we recommend that you choose a non-standard name for your user that would be difficult to guess.&lt;/p&gt;

&lt;p&gt;The following command will create the administrative Kibana user and password, and store them in the &lt;code&gt;htpasswd.users&lt;/code&gt; file. You will configure Nginx to require this username and password and read this file momentarily:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter and confirm a password at the prompt. Remember or take note of this login, as you will need it to access the Kibana web portal.&lt;/p&gt;

&lt;p&gt;Next, we will create an Nginx server block file. As an example, we will refer to this file as &lt;code&gt;example.com&lt;/code&gt;, although you may find it helpful to give yours a more descriptive name. For instance, if you have a &lt;a href="https://en.wikipedia.org/wiki/Fully_qualified_domain_name"&gt;FQDN&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/Domain_Name_System"&gt;DNS&lt;/a&gt; records set up for this server, you could name this file after your FQDN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nano /etc/nginx/sites-available/example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following code block into the file, being sure to update &lt;code&gt;example.com&lt;/code&gt; to match your server’s FQDN or public IP address. This code configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on &lt;code&gt;localhost:5601&lt;/code&gt;. Additionally, it configures Nginx to read the &lt;code&gt;htpasswd.users&lt;/code&gt; file and require basic authentication.&lt;/p&gt;

&lt;p&gt;Delete everything in the file and paste the content below. Only do this if it is your first time configuring this file; if you have already configured it, update it to match the content below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    server_name example.com;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you’re finished, save and close the file.&lt;/p&gt;

&lt;p&gt;Next, enable the new configuration by creating a symbolic link to the &lt;code&gt;sites-enabled&lt;/code&gt; directory. If you already created a server block file with the same name in the Nginx prerequisite, you do not need to run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then check the configuration for syntax errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nginx -t
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If any errors are reported in your output, go back and double-check that the content you placed in your configuration file was added correctly. Once you see &lt;code&gt;syntax is ok&lt;/code&gt; in the output, go ahead and restart the Nginx service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, you should have a UFW firewall enabled. To allow connections to Nginx, we can adjust the rules by typing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo ufw allow 'Nginx Full' 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will allow both HTTP and HTTPS traffic through the firewall. You may use &lt;code&gt;'Nginx HTTP'&lt;/code&gt; for HTTP only and &lt;code&gt;'Nginx HTTPS'&lt;/code&gt; for HTTPS only.&lt;/p&gt;

&lt;p&gt;Kibana is now accessible via your FQDN or the public IP address of your Elastic Stack server. You can check the Kibana server’s status page by navigating to the following address and entering your login credentials when prompted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://your_serve_IP/status 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This status page displays information about the server’s resource usage and lists the installed plugins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hZmlOUZB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/clickpesa/image/upload/v1659950077/engineering-blog/images/ywpwi3ih1jnmdrvhr6cv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hZmlOUZB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/clickpesa/image/upload/v1659950077/engineering-blog/images/ywpwi3ih1jnmdrvhr6cv.jpg" alt="drawing" width="794" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the Kibana dashboard is configured, let’s install the next component: Logstash.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-5 Install and configure Logstash&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Logstash is used to process our saved log files. It collects data from different sources, transforms it into a common format, and exports it to another database.&lt;/p&gt;

&lt;p&gt;Install Logstash with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a configuration file called &lt;code&gt;input.conf&lt;/code&gt;, where you will set up your log source input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/logstash/conf.d/input.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Insert the following &lt;code&gt;input&lt;/code&gt; configuration. It specifies a source file that holds all logs generated in the system: &lt;code&gt;path =&amp;gt; "specify the path to where your logfile is located"&lt;/code&gt; sets the log file path, and &lt;code&gt;start_position =&amp;gt; "beginning"&lt;/code&gt; tells Logstash to read the file from the beginning.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input {
  file {
         path =&amp;gt; "specify the path to where your logfile is located"
    start_position =&amp;gt; "beginning"
    sincedb_path =&amp;gt; "/dev/null"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the file.&lt;/p&gt;

&lt;p&gt;Next, create a configuration file called &lt;code&gt;30-elasticsearch-output.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Insert the following &lt;code&gt;output&lt;/code&gt; configuration. Essentially, it configures Logstash to store the logged data in Elasticsearch, which is running at &lt;code&gt;localhost:9200&lt;/code&gt;. Notice the &lt;code&gt;index =&amp;gt; "filebeat"&lt;/code&gt; setting; it will let us create an index pattern in the Kibana dashboard that acts as the reference for our logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output {
  elasticsearch {
    hosts =&amp;gt; ["localhost:9200"]
    index =&amp;gt; "filebeat"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the file.&lt;/p&gt;

&lt;p&gt;Test your Logstash configuration with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there are no syntax errors, your output will display &lt;code&gt;Config Validation Result: OK. Exiting Logstash&lt;/code&gt; after a few seconds. If you don’t see this in your output, check for any errors noted in your output and update your configuration to correct them. Note that you will receive warnings from OpenJDK, but they should not cause any problems and can be ignored.&lt;/p&gt;

&lt;p&gt;If your configuration test is successful, start and enable Logstash to put the configuration changes into effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl start logstash
$ sudo systemctl enable logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that Logstash is running correctly and is fully configured, let's review the logs in the Kibana dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-6 Exploring logs in Kibana dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s return to the Kibana web interface that we installed earlier.&lt;/p&gt;

&lt;p&gt;In a web browser, go to the FQDN or public IP address of your Elastic Stack server. If your session has been interrupted, you will need to re-enter the credentials you defined in Kibana configuration steps. Once you have logged in, you should receive the Kibana homepage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a-LwrcW7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/clickpesa/image/upload/v1659950212/engineering-blog/images/ojql1nbztvjm8rknxpny.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a-LwrcW7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/clickpesa/image/upload/v1659950212/engineering-blog/images/ojql1nbztvjm8rknxpny.jpg" alt="drawing" width="783" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the Discover link in the left-hand navigation bar (you may have to click the Expand icon at the very bottom left to see the navigation menu items). On the Discover page, select the predefined &lt;code&gt;filebeat-*&lt;/code&gt;  index pattern to see logged data. By default, this will show you all of the log data over the last 15 minutes. You will see a histogram with log events, and some log messages below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6SgiKvbr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/clickpesa/image/upload/v1659950162/engineering-blog/images/yrg8p2sfnzjfxlxklas8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6SgiKvbr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/clickpesa/image/upload/v1659950162/engineering-blog/images/yrg8p2sfnzjfxlxklas8.jpg" alt="drawing" width="768" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this tutorial, you’ve learned how to install and configure the Elastic Stack to collect and analyze system logs. Remember that you can send just about any type of log or indexed data to Logstash, but the data becomes even more useful if it is parsed and structured with a Logstash filter, as this transforms the data into a consistent format that can be read easily by Elasticsearch.&lt;/p&gt;
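To illustrate what a Logstash filter adds, here is a rough Python analogue of a grok-style pattern that splits a syslog-like line into structured fields. The line format and field names below are illustrative assumptions, not the tutorial's actual filter:

```python
import re

# Illustrative syslog-like pattern: "<timestamp> <host> <program>: <message>".
# A real Logstash grok filter expresses the same idea with %{SYSLOGTIMESTAMP} etc.
LINE = re.compile(
    r"(?P<timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<program>[\w./-]+): (?P<message>.*)"
)

def parse_line(line):
    m = LINE.match(line)
    # Keep unparsed lines whole, as Logstash does for grok failures
    return m.groupdict() if m else {"message": line}

event = parse_line("Sep 12 05:32:06 web-1 sshd: Accepted publickey for deploy")
print(event["host"], event["program"])  # web-1 sshd
```

Once every line is broken into named fields like this, Elasticsearch can index and query each field independently instead of treating the log entry as one opaque string.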

&lt;p&gt;To learn more about the ELK stack and other concepts, visit the &lt;a href="https://www.digitalocean.com/community"&gt;DigitalOcean Community&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This article was prepared on behalf of ClickPesa. For more articles like this, check out &lt;a href="https://clickpesa.hashnode.dev/"&gt;ClickPesa on Hashnode&lt;/a&gt;, &lt;a href="https://dev.to/clickpesa/multisignature-in-stellar-blockchain-k0d"&gt;ClickPesa on dev.to&lt;/a&gt; and &lt;a href="https://medium.com/clickpesa-engineering-blog"&gt;ClickPesa on Medium&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to implement logging in your REST service by using Elasticsearch - PART 2</title>
      <dc:creator>Peter Shekindo</dc:creator>
      <pubDate>Wed, 13 Jul 2022 03:46:35 +0000</pubDate>
      <link>https://dev.to/clickpesa/how-to-implement-logging-in-your-rest-service-by-using-elasticsearch-part-2-21fm</link>
      <guid>https://dev.to/clickpesa/how-to-implement-logging-in-your-rest-service-by-using-elasticsearch-part-2-21fm</guid>
      <description>&lt;h3&gt;
  
  
  How to implement logging in your REST service by using Elasticsearch - PART 2
&lt;/h3&gt;

&lt;p&gt;Note: &lt;br&gt;
This article is the second part of the How to implement logging in your REST service by using Elasticsearch series. Click the link below for part 1, or, if you have already gone through it, proceed with part 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://clickpesa.hashnode.dev/how-to-implement-logging-in-your-rest-service-by-using-elasticsearch"&gt;How to implement logging in your REST service by using Elasticsearch - PART 1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In part 1 of this article series, I introduced the ELK stack and how it can be used to implement logging in your REST service. You can find the first part of this series on Medium, dev.to, and Hashnode. In this section, we will discuss how to install, configure, and use the ELK stack.&lt;/p&gt;

&lt;p&gt;For logging purposes, Elasticsearch comes with two other tools, Kibana and Logstash. Together they form the ELK stack, as introduced in part one of this article.&lt;br&gt;
To avoid confusion and reading fatigue, I will divide this second part into two sections as well:&lt;/p&gt;
&lt;h4&gt;
  
  
  Part 2.A: Install and configure Elasticsearch
&lt;/h4&gt;
&lt;h4&gt;
  
  
  Part 2.B: Install and configure Kibana and Logstash as well as how to use the ELK stack
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oihAJe8X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/clickpesa/image/upload/v1657461627/engineering-blog/images/biefup2e4z3luwfmcnna.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oihAJe8X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/clickpesa/image/upload/v1657461627/engineering-blog/images/biefup2e4z3luwfmcnna.jpg" alt="drawing" width="772" height="528"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;Logging Process with ELK stack
&lt;/p&gt;

&lt;h4&gt;
  
  
  Part 2.A: Install and configure Elasticsearch
&lt;/h4&gt;

&lt;p&gt;Initially, we need to download Elasticsearch, Kibana, and Logstash. Below are the links where you can download these tools. I am currently using version 8.3.1 in a Linux environment, but the version may vary depending on your environment and when you read this. In this article we will use APT (Advanced Package Tool), a Linux package manager, to download and install all of the tools.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Files (.zip)&lt;/th&gt;
&lt;th&gt;apt-get installation&lt;/th&gt;
&lt;th&gt;yum&lt;/th&gt;
&lt;th&gt;Docker image&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Kibana&lt;/td&gt;
&lt;td&gt;&lt;a href="https://artifacts.elastic.co/downloads/kibana/kibana-8.3.1-linux-x86_64.tar.gz"&gt;https://artifacts.elastic.co/downloads/kibana/kibana-8.3.1-linux-x86_64.tar.gz&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/kibana/8.3/deb.html#deb-repo"&gt;https://www.elastic.co/guide/en/kibana/8.3/deb.html#deb-repo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/kibana/8.3/rpm.html#rpm-repo"&gt;https://www.elastic.co/guide/en/kibana/8.3/rpm.html#rpm-repo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/kibana/current/docker.html"&gt;https://www.elastic.co/guide/en/kibana/current/docker.html&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logstash&lt;/td&gt;
&lt;td&gt;&lt;a href="https://artifacts.elastic.co/downloads/logstash/logstash-8.3.1-linux-x86_64.tar.gz"&gt;https://artifacts.elastic.co/downloads/logstash/logstash-8.3.1-linux-x86_64.tar.gz&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/logstash/8.3/installing-logstash.html#_apt"&gt;https://www.elastic.co/guide/en/logstash/8.3/installing-logstash.html#_apt&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/logstash/8.3/installing-logstash.html#_yum"&gt;https://www.elastic.co/guide/en/logstash/8.3/installing-logstash.html#_yum&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/logstash/current/docker.html"&gt;https://www.elastic.co/guide/en/logstash/current/docker.html&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Elasticsearch&lt;/td&gt;
&lt;td&gt;&lt;a href="https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.1-linux-x86_64.tar.gz"&gt;https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.1-linux-x86_64.tar.gz&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/8.3/deb.html#deb-repo"&gt;https://www.elastic.co/guide/en/elasticsearch/reference/8.3/deb.html#deb-repo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/8.3/rpm.html#rpm-repo"&gt;https://www.elastic.co/guide/en/elasticsearch/reference/8.3/rpm.html#rpm-repo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html"&gt;https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Note:&lt;br&gt;
In this article, we mainly focus on reading logs that are written to an external file or files. This means your REST service, in whichever framework you use, should already write its logs to a file or files, so that all logs can be read from a known location.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1 - Installing Elasticsearch
&lt;/h3&gt;

&lt;p&gt;To begin, we need to configure the Ubuntu package repository by adding Elastic’s package source list in order to download and install Elasticsearch. This is not configured by default, so we need to do it manually. &lt;/p&gt;

&lt;p&gt;Note:&lt;br&gt;
All of the packages are signed with the Elasticsearch public GPG key to protect your system from package spoofing. Packages authenticated with this key are trusted by your package manager.&lt;/p&gt;

&lt;p&gt;a. Open the terminal and use cURL, a command-line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. The flags &lt;code&gt;-fsSL&lt;/code&gt; silence progress output and most errors (except for a server failure) and allow cURL to follow redirects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. Add the Elastic source list to the &lt;code&gt;sources.list.d&lt;/code&gt; directory, where APT will look for new sources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c. Update your package lists so APT will read the new Elastic source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;d. Install Elasticsearch with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have reached this point without any errors, Elasticsearch is now installed and ready to be configured.  🎉&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2 - Configuring Elasticsearch
&lt;/h4&gt;

&lt;p&gt;All Elasticsearch configuration goes into &lt;code&gt;elasticsearch.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;a. Use the following command to open the &lt;code&gt;elasticsearch.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nano /etc/elasticsearch/elasticsearch.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a lot you can configure in Elasticsearch, such as the cluster, node, path, memory, network, discovery, and gateway. Most of these settings are preconfigured in the file, but you can change them as you see fit.&lt;br&gt;
For the sake of this tutorial, we will only change the network host configuration to allow single-server access.&lt;/p&gt;

&lt;p&gt;Elasticsearch listens for traffic from everywhere on port 9200. For this reason, you may want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its REST API.&lt;/p&gt;

&lt;p&gt;To accomplish this, find the line that specifies &lt;code&gt;network.host&lt;/code&gt;, uncomment it, and replace its value with your desired IP address, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;. . .
# ------------------------Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: custom IP address
. . .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
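For example, on a single-server setup where only local clients need access, binding to the loopback interface is a common choice. This is a sketch; pick the address that matches your deployment:

```yaml
# /etc/elasticsearch/elasticsearch.yml (excerpt)
# Bind only to the local machine; remote hosts cannot reach port 9200.
network.host: localhost
```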



&lt;p&gt;b. If you opened the configuration file with nano, save and close it with &lt;code&gt;CTRL+X&lt;/code&gt; (&lt;code&gt;⌘ + X&lt;/code&gt; on macOS), followed by &lt;code&gt;Y&lt;/code&gt; and then &lt;code&gt;ENTER&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3 - Starting Elasticsearch
&lt;/h4&gt;

&lt;p&gt;We use the &lt;code&gt;systemctl&lt;/code&gt; command to start the Elasticsearch service, which allows Elasticsearch to initialize properly.&lt;/p&gt;

&lt;p&gt;a. Open terminal and run command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl start elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. You can also enable Elasticsearch to automatically run on every system boot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl enable elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c. Run the following command to test your Elasticsearch installation. Note that in my case Elasticsearch is running on &lt;code&gt;localhost:9200&lt;/code&gt;; adjust the &lt;code&gt;IP:port&lt;/code&gt; address to match your own setup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -X GET "localhost:9200"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything went well, you will see a response showing some basic information about your local node, similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
{
  "name" : "Elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "qqhFHPigQ9e2lk-a7AvLNQ",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
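Since the response is plain JSON, you can also check it programmatically. A small Python sketch using a trimmed version of the sample response above (your node's values will differ):

```python
import json

# Trimmed sample response body, as returned by `curl -X GET "localhost:9200"`.
RESPONSE = """
{
  "name" : "Elasticsearch",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "7.7.1", "lucene_version" : "8.5.1" },
  "tagline" : "You Know, for Search"
}
"""

def node_summary(raw):
    # Parse the JSON and pull out the fields worth sanity-checking
    info = json.loads(raw)
    return f"{info['cluster_name']} running Elasticsearch {info['version']['number']}"

print(node_summary(RESPONSE))  # elasticsearch running Elasticsearch 7.7.1
```

The same parsing works against a live node by fetching the root endpoint with any HTTP client instead of using the hard-coded sample.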



&lt;p&gt;Now that Elasticsearch is up and running, in the next section, Part 2.B of this article series, we will install Kibana and Logstash and test our logging configuration.&lt;/p&gt;

&lt;p&gt;In case you didn’t know, there are many more articles like this at &lt;a href="https://clickpesa.hashnode.dev/"&gt;ClickPesa on Hashnode&lt;/a&gt;, &lt;a href="https://dev.to/clickpesa/multisignature-in-stellar-blockchain-k0d"&gt;ClickPesa on dev.to&lt;/a&gt; and &lt;a href="https://medium.com/clickpesa-engineering-blog"&gt;ClickPesa on Medium&lt;/a&gt;. You will thank me later.&lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>kibana</category>
      <category>logstash</category>
      <category>logging</category>
    </item>
    <item>
      <title>How to implement logging in your REST service by using Elasticsearch - PART 2</title>
      <dc:creator>Peter Shekindo</dc:creator>
      <pubDate>Mon, 11 Jul 2022 16:51:08 +0000</pubDate>
      <link>https://dev.to/petershekiondo/how-to-implement-logging-in-your-rest-service-by-using-elasticsearch-part-2-2dl5</link>
      <guid>https://dev.to/petershekiondo/how-to-implement-logging-in-your-rest-service-by-using-elasticsearch-part-2-2dl5</guid>
      <description>&lt;h2&gt;
  
  
  How to implement logging in your REST service by using Elasticsearch - PART 2
&lt;/h2&gt;

&lt;p&gt;In part 1 of this article series I explained the introduction to the ELK stack and how it can be used to implement logging in your REST service. You can find the first section of this series in the following links medium, dev, and hashnode. In this section, we will discuss how we can install, configure and use the ELK stack.&lt;/p&gt;

&lt;p&gt;For logging purposes, Elasticsearch comes with two other tools Kibana and Logstash. Together they form the ELK stack as introduced in part one of this article.&lt;br&gt;
To avoid confusion and reading fatigue, I will divide this second part into two sections as well&lt;/p&gt;
&lt;h3&gt;
  
  
  Part 2.A: Install and configure Elasticsearch
&lt;/h3&gt;
&lt;h3&gt;
  
  
  Part 2.B: Install and configure Kibana and Logstash as well as how to use the ELK stack
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oihAJe8X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/clickpesa/image/upload/v1657461627/engineering-blog/images/biefup2e4z3luwfmcnna.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oihAJe8X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/clickpesa/image/upload/v1657461627/engineering-blog/images/biefup2e4z3luwfmcnna.jpg" alt="drawing" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;Logging Process with ELK stack
&lt;/p&gt;

&lt;h3&gt;
  
  
  Part 2.A: Install and configure Elasticsearch
&lt;/h3&gt;

&lt;p&gt;Initially, we need to download Elasticsearch, Kibana, and Logstash. Below are the links where you may download these tools. Currently, I am using version 8.3.1 in a Linux environment, but the version might vary for different environments and times. Also in this article, we will be using APT (Advanced Package Tool) which is a Linux installation package manager to download and install all tools.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Files (.zip)&lt;/th&gt;
&lt;th&gt;apt-get installation&lt;/th&gt;
&lt;th&gt;yum&lt;/th&gt;
&lt;th&gt;Docker image&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Kibana&lt;/td&gt;
&lt;td&gt;&lt;a href="https://artifacts.elastic.co/downloads/kibana/kibana-8.3.1-linux-x86_64.tar.gz"&gt;https://artifacts.elastic.co/downloads/kibana/kibana-8.3.1-linux-x86_64.tar.gz&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/kibana/8.3/deb.html#deb-repo"&gt;https://www.elastic.co/guide/en/kibana/8.3/deb.html#deb-repo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/kibana/8.3/rpm.html#rpm-repo"&gt;https://www.elastic.co/guide/en/kibana/8.3/rpm.html#rpm-repo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/kibana/current/docker.html"&gt;https://www.elastic.co/guide/en/kibana/current/docker.html&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logstash&lt;/td&gt;
&lt;td&gt;&lt;a href="https://artifacts.elastic.co/downloads/logstash/logstash-8.3.1-linux-x86_64.tar.gz"&gt;https://artifacts.elastic.co/downloads/logstash/logstash-8.3.1-linux-x86_64.tar.gz&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/logstash/8.3/installing-logstash.html#_apt"&gt;https://www.elastic.co/guide/en/logstash/8.3/installing-logstash.html#_apt&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/logstash/8.3/installing-logstash.html#_yum"&gt;https://www.elastic.co/guide/en/logstash/8.3/installing-logstash.html#_yum&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/logstash/current/docker.html"&gt;https://www.elastic.co/guide/en/logstash/current/docker.html&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Elasticsearch&lt;/td&gt;
&lt;td&gt;&lt;a href="https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.1-linux-x86_64.tar.gz"&gt;https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.1-linux-x86_64.tar.gz&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/8.3/deb.html#deb-repo"&gt;https://www.elastic.co/guide/en/elasticsearch/reference/8.3/deb.html#deb-repo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/8.3/rpm.html#rpm-repo"&gt;https://www.elastic.co/guide/en/elasticsearch/reference/8.3/rpm.html#rpm-repo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html"&gt;https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Note:&lt;br&gt;
In this article, we mainly focus on accessing logs logged in an external file or files. This means you need to implement an external logging service to a file or files in the REST architecture of your choice. Thus all logs should be accessed from a specific file or files.&lt;/p&gt;
&lt;h3&gt;
  
  
  Steps 1 - Installing Elasticsearch
&lt;/h3&gt;

&lt;p&gt;To begin, we need to configure the Ubuntu package repository by adding Elastic’s package source list in order to download and install Elasticsearch. This is not configured by default, so we need to do it manually. &lt;/p&gt;

&lt;p&gt;Note:&lt;br&gt;
All of the packages are signed with the Elasticsearch public GPG key in order to protect your system from package spoofing. Packages authenticated using a key are considered secured by the downloading manager.&lt;/p&gt;

&lt;p&gt;a. Open the terminal and use the cURL command-line tool for transferring data with URL, to import the Elasticsearch public GPG key into APT. We are also using the arguments -fsSL to silence all progress and possible errors (except for a server failure) and to allow cURL to make a request on a new location if redirected:&lt;/p&gt;

&lt;p&gt;
  src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;amp;t=seti&amp;amp;wt=none&amp;amp;l=application%2Fx-sh&amp;amp;width=680&amp;amp;ds=true&amp;amp;dsyoff=20px&amp;amp;dsblur=68px&amp;amp;wc=true&amp;amp;wa=true&amp;amp;pv=56px&amp;amp;ph=56px&amp;amp;ln=false&amp;amp;fl=1&amp;amp;fm=Hack&amp;amp;fs=14px&amp;amp;lh=133%25&amp;amp;si=false&amp;amp;es=1x&amp;amp;wm=false&amp;amp;code=%2524%2520curl%2520-fsSL%2520https%253A%252F%252Fartifacts.elastic.co%252FGPG-KEY-elasticsearch%2520%257C%2520sudo%2520apt-key%2520add%2520-"&lt;br&gt;
  style="width: 824px; height: 210px; border:0; transform: scale(1); overflow:hidden;"&lt;br&gt;
  sandbox="allow-scripts allow-same-origin"&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;b. add the Elastic source list to the &lt;code&gt;sources.list.d&lt;/code&gt; directory, where APT will look for new sources:&lt;/p&gt;

&lt;p&gt;
  src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;amp;t=seti&amp;amp;wt=none&amp;amp;l=application%2Fx-sh&amp;amp;width=680&amp;amp;ds=true&amp;amp;dsyoff=20px&amp;amp;dsblur=68px&amp;amp;wc=true&amp;amp;wa=true&amp;amp;pv=56px&amp;amp;ph=56px&amp;amp;ln=false&amp;amp;fl=1&amp;amp;fm=Hack&amp;amp;fs=14px&amp;amp;lh=133%25&amp;amp;si=false&amp;amp;es=1x&amp;amp;wm=false&amp;amp;code=%2524%2520echo%2520%2522deb%2520https%253A%252F%252Fartifacts.elastic.co%252Fpackages%252F7.x%252Fapt%2520stable%2520main%2522%2520%257C%2520sudo%2520tee%2520-a%2520%252Fetc%252Fapt%252Fsources.list.d%252Felastic-7.x.list"&lt;br&gt;
  style="width: 924px; height: 220px; border:0; transform: scale(1); overflow:hidden;"&lt;br&gt;
  sandbox="allow-scripts allow-same-origin"&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;c. update your package lists so APT will read the new Elastic source:&lt;/p&gt;

&lt;p&gt;
  src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;amp;t=seti&amp;amp;wt=none&amp;amp;l=application%2Fx-sh&amp;amp;width=680&amp;amp;ds=true&amp;amp;dsyoff=20px&amp;amp;dsblur=68px&amp;amp;wc=true&amp;amp;wa=true&amp;amp;pv=56px&amp;amp;ph=56px&amp;amp;ln=false&amp;amp;fl=1&amp;amp;fm=Hack&amp;amp;fs=14px&amp;amp;lh=133%25&amp;amp;si=false&amp;amp;es=1x&amp;amp;wm=false&amp;amp;code=%2524%2520sudo%2520apt%2520update"&lt;br&gt;
  style="width: 300px; height: 210px; border:0; transform: scale(1); overflow:hidden;"&lt;br&gt;
  sandbox="allow-scripts allow-same-origin"&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;d. Use this command to install Elasticsearch&lt;/p&gt;

&lt;p&gt;
  src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;amp;t=seti&amp;amp;wt=none&amp;amp;l=application%2Fx-sh&amp;amp;width=680&amp;amp;ds=true&amp;amp;dsyoff=20px&amp;amp;dsblur=68px&amp;amp;wc=true&amp;amp;wa=true&amp;amp;pv=56px&amp;amp;ph=56px&amp;amp;ln=false&amp;amp;fl=1&amp;amp;fm=Hack&amp;amp;fs=14px&amp;amp;lh=133%25&amp;amp;si=false&amp;amp;es=1x&amp;amp;wm=false&amp;amp;code=%2524%2520sudo%2520apt%2520install%2520elasticsearch"&lt;br&gt;
  style="width: 410px; height: 208px; border:0; transform: scale(1); overflow:hidden;"&lt;br&gt;
  sandbox="allow-scripts allow-same-origin"&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;If you have reached this far without any error, that means Elasticsearch is now installed and ready to be configured.  🎉&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps 2 - Configuringing Elasticsearch
&lt;/h3&gt;

&lt;p&gt;All Elastic search configuration goes into  &lt;code&gt;elasticsearch.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;a. Use the command to access elasticsearch.yml file&lt;/p&gt;

&lt;p&gt;
  src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;amp;t=seti&amp;amp;wt=none&amp;amp;l=application%2Fx-sh&amp;amp;width=680&amp;amp;ds=true&amp;amp;dsyoff=20px&amp;amp;dsblur=68px&amp;amp;wc=true&amp;amp;wa=true&amp;amp;pv=56px&amp;amp;ph=56px&amp;amp;ln=false&amp;amp;fl=1&amp;amp;fm=Hack&amp;amp;fs=14px&amp;amp;lh=133%25&amp;amp;si=false&amp;amp;es=1x&amp;amp;wm=false&amp;amp;code=%2524%2520sudo%2520nano%2520%252Fetc%252Felasticsearch%252Felasticsearch.yml"&lt;br&gt;
  style="width: 500px; height: 208px; border:0; transform: scale(1); overflow:hidden;"&lt;br&gt;
  sandbox="allow-scripts allow-same-origin"&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;There is a lot you can configure in Elasticsearch such as cluster, node, path, memory, network, discovery, and gateway. Most of these configurations are already preconfigured in the file but you can change them as you see fit.&lt;br&gt;
For the sake of this tutorial, we will only change the network host configuration to allow single server access.&lt;/p&gt;

&lt;p&gt;Elasticsearch listens for traffic from everywhere on port 9200. For this reason You may want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its [REST API]&lt;/p&gt;

&lt;p&gt;In order to accomplish this, find the line that specifies &lt;code&gt;network.host&lt;/code&gt;, uncomment it, and replace its value with &lt;code&gt;custom IP address&lt;/code&gt; like this:&lt;/p&gt;

&lt;p&gt;
  src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;amp;t=seti&amp;amp;wt=none&amp;amp;l=auto&amp;amp;width=680&amp;amp;ds=true&amp;amp;dsyoff=20px&amp;amp;dsblur=68px&amp;amp;wc=true&amp;amp;wa=true&amp;amp;pv=56px&amp;amp;ph=56px&amp;amp;ln=false&amp;amp;fl=1&amp;amp;fm=Hack&amp;amp;fs=14px&amp;amp;lh=133%25&amp;amp;si=false&amp;amp;es=1x&amp;amp;wm=false&amp;amp;code=.%2520.%2520.%250A%2523%2520-----------------Network%2520-----------------%250A%2523%250A%2523%2520Set%2520the%2520bind%2520address%2520to%2520a%2520specific%2520IP%2520%28IPv4%2520or%2520IPv6%29%253A%250A%2523%250Anetwork.host%253A%2520custom%2520IP%2520address%250A.%2520.%2520."&lt;br&gt;
  style="width: 600px; height: 330px; border:0; transform: scale(1); overflow:hidden;"&lt;br&gt;
  sandbox="allow-scripts allow-same-origin"&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;b. If you accessed the configuration file by nano, use the following key combination to save and close the file &lt;code&gt;CTRL+X&lt;/code&gt; or &lt;code&gt;⌘ + X&lt;/code&gt; in Macintosh, followed by &lt;code&gt;Y&lt;/code&gt; and then &lt;code&gt;ENTER&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps 3 - Configuring Elasticsearch
&lt;/h3&gt;

&lt;p&gt;We use &lt;code&gt;systemctl&lt;/code&gt; command to start Elasticsearch service, this will allow Elasticsearch to initiate properly otherwise it will run into error and fail to start.&lt;/p&gt;

&lt;p&gt;a. Open terminal and run command.&lt;/p&gt;

&lt;p&gt;
  src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;amp;t=seti&amp;amp;wt=none&amp;amp;l=application%2Fx-sh&amp;amp;width=680&amp;amp;ds=true&amp;amp;dsyoff=20px&amp;amp;dsblur=68px&amp;amp;wc=true&amp;amp;wa=true&amp;amp;pv=56px&amp;amp;ph=56px&amp;amp;ln=false&amp;amp;fl=1&amp;amp;fm=Hack&amp;amp;fs=14px&amp;amp;lh=133%25&amp;amp;si=false&amp;amp;es=1x&amp;amp;wm=false&amp;amp;code=%2524%2520sudo%2520systemctl%2520start%2520elasticsearch%250A"&lt;br&gt;
  style="width: 450px; height: 230px; border:0; transform: scale(1); overflow:hidden;"&lt;br&gt;
  sandbox="allow-scripts allow-same-origin"&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;b. You can also enable Elasticsearch to automatically run on every system boot.&lt;/p&gt;

&lt;p&gt;
  src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;amp;t=seti&amp;amp;wt=none&amp;amp;l=auto&amp;amp;width=680&amp;amp;ds=true&amp;amp;dsyoff=20px&amp;amp;dsblur=68px&amp;amp;wc=true&amp;amp;wa=true&amp;amp;pv=56px&amp;amp;ph=56px&amp;amp;ln=false&amp;amp;fl=1&amp;amp;fm=Hack&amp;amp;fs=14px&amp;amp;lh=133%25&amp;amp;si=false&amp;amp;es=1x&amp;amp;wm=false&amp;amp;code=%2524%2520sudo%2520systemctl%2520enable%2520elasticsearch%250A"&lt;br&gt;
  style="width: 450px; height: 230px; border:0; transform: scale(1); overflow:hidden;"&lt;br&gt;
  sandbox="allow-scripts allow-same-origin"&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;c. Run the following command to test your Elasticsearch. note, as for me Elasticsearch is running on &lt;code&gt;localhost:9200&lt;/code&gt;. you may want to specify which &lt;code&gt;IP:Port&lt;/code&gt; address your Elasticsearch is using upon.&lt;/p&gt;

&lt;p&gt;
  src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;amp;t=seti&amp;amp;wt=none&amp;amp;l=auto&amp;amp;width=680&amp;amp;ds=true&amp;amp;dsyoff=20px&amp;amp;dsblur=68px&amp;amp;wc=true&amp;amp;wa=true&amp;amp;pv=56px&amp;amp;ph=56px&amp;amp;ln=false&amp;amp;fl=1&amp;amp;fm=Hack&amp;amp;fs=14px&amp;amp;lh=133%25&amp;amp;si=false&amp;amp;es=1x&amp;amp;wm=false&amp;amp;code=%2524%2520curl%2520-X%2520GET%2520%2522localhost%253A9200%2522"&lt;br&gt;
  style="width: 400px; height: 210px; border:0; transform: scale(1); overflow:hidden;"&lt;br&gt;
  sandbox="allow-scripts allow-same-origin"&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;If every thing went well, you will see a response showing some basic information about your local node, similar to this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "name" : "Elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "qqhFHPigQ9e2lk-a7AvLNQ",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
&lt;/code&gt;&lt;/pre&gt;
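The response body is plain JSON, so you can also verify it programmatically instead of eyeballing it. A minimal sketch in Python, using an abbreviated copy of the response above (field values taken from that output):

```python
import json

# Abbreviated copy of the node-info response shown above; the full
# response also includes build and compatibility fields
response_body = '''
{
  "name" : "Elasticsearch",
  "cluster_name" : "elasticsearch",
  "version" : {"number" : "7.7.1", "lucene_version" : "8.5.1"},
  "tagline" : "You Know, for Search"
}
'''

info = json.loads(response_body)
# A simple scripted check that the node answered with a version number
print(info["version"]["number"])  # prints 7.7.1
```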

&lt;p&gt;Now that Elasticsearch is up and running, in the next section, Part 2.B of this article series, we will install Kibana and Logstash and test our logging configuration.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to implement logging in your REST service by using Elasticsearch</title>
      <dc:creator>Peter Shekindo</dc:creator>
      <pubDate>Mon, 30 May 2022 08:26:22 +0000</pubDate>
      <link>https://dev.to/clickpesa/how-to-implement-logging-in-your-rest-service-by-using-elasticsearch-3bnm</link>
      <guid>https://dev.to/clickpesa/how-to-implement-logging-in-your-rest-service-by-using-elasticsearch-3bnm</guid>
      <description>&lt;h2&gt;
  
  
  How to implement logging in your REST service by using Elasticsearch
&lt;/h2&gt;

&lt;p&gt;This article consists of two parts. Part one focuses on the concept of logging as a service; part two shows how to set up and integrate Elasticsearch and its tools, and how to access logs via Elasticsearch. Below is the outline.&lt;/p&gt;

&lt;h4&gt;
  
  
  Part 1
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;What is Logging?&lt;/li&gt;
&lt;li&gt;What is a Log file?&lt;/li&gt;
&lt;li&gt;Types of logs&lt;/li&gt;
&lt;li&gt;Elasticsearch

&lt;ul&gt;
&lt;li&gt;What is Elasticsearch?&lt;/li&gt;
&lt;li&gt;What is ELK stack?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Part 2
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Elasticsearch

&lt;ul&gt;
&lt;li&gt;How to setup Elasticsearch&lt;/li&gt;
&lt;li&gt;How to integrate Elasticsearch into your REST service&lt;/li&gt;
&lt;li&gt;How to access logs in Elasticsearch&lt;/li&gt;
&lt;li&gt;Elasticsearch in relation to REST API&lt;/li&gt;
&lt;li&gt;Benefits of using Elasticsearch&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Logging advantage&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  PART 1
&lt;/h4&gt;

&lt;h5&gt;
  
  
  What is Logging?
&lt;/h5&gt;

&lt;p&gt;Logging is the process of collecting and processing any type of log file coming from a given source or location, such as server services, applications, or devices.&lt;/p&gt;

&lt;h5&gt;
  
  
  What is a Logfile?
&lt;/h5&gt;

&lt;p&gt;A log file is a file that contains records of events or processes taking place in a system. It may contain a status, a warning, or any other information that explains what is going on in the system.&lt;/p&gt;

&lt;p&gt;In various REST service architectures, several types of log files can be implemented, distinguished by the kind of information they contain. Below are some of them.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;General Purpose logs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a type of log file that contains general information, such as the service port a system is using or the current state of a service.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Warning logs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These are logs for issues that are not fatal or disruptive but should still be taken into consideration. (Example: data was saved to the DB, but only after multiple attempts.)&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Error logs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These contain information about unhandled issues that are fatal or disruptive. (Example: saving data to the DB failed even though all validation passed and the data was cleared to be saved.)&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Debug logs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These contain information that helps us debug the logic in case of an error or warning. They are usually intended for developers.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Verbose logs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These logs provide insights into the behavior of the app and are intended for operators and the support team.&lt;/p&gt;
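These log types map closely onto the severity levels that most logging libraries provide out of the box. A minimal sketch using Python's standard logging module (the logger name and messages are illustrative, not from a real service):

```python
import logging

# One handler for the whole process; WARNING is the default root level,
# but our service logger is opened up to DEBUG below
logging.basicConfig(format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("payments-service")
log.setLevel(logging.DEBUG)

log.info("Service listening on port 8080")             # general-purpose log
log.warning("Saved to DB, but only after 3 attempts")  # warning log
log.error("Failed to save data to DB")                 # error log
log.debug("Request payload: {'amount': 100}")          # debug/verbose log
```

Routing these levels to different destinations (a file, stdout, or a shipper such as Logstash) is then just a matter of attaching handlers.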

&lt;p&gt;Various REST architectures support default logging services out of the box, and these can be configured and accessed without any third-party services. This is suitable when you are working with small data sets and managing small to medium systems, but monitoring extensive systems can become an issue. A solution is to add dedicated services to support your logging process. In this article, I will walk you through how to implement logging on a REST API with &lt;a href="https://www.elastic.co/" rel="noopener noreferrer"&gt;Elasticsearch&lt;/a&gt;, covering deploying, managing, and analyzing logs.&lt;/p&gt;

&lt;h6&gt;
  
  
  What is Elasticsearch?
&lt;/h6&gt;

&lt;p&gt;Elasticsearch is a distributed RESTful search and analytics engine capable of addressing a growing number of use cases. It centrally stores your data for lightning-fast search, fine‑tuned relevancy, and powerful analytics that scale with ease.&lt;/p&gt;

&lt;p&gt;Elasticsearch comes with various companion tools that can be used alongside it to support your needs. &lt;a href="https://www.elastic.co/products/" rel="noopener noreferrer"&gt;Explore more&lt;/a&gt; to see the other tools that Elastic offers.&lt;/p&gt;

&lt;h6&gt;
  
  
  What is ELK stack?
&lt;/h6&gt;

&lt;p&gt;"ELK" is the acronym for three open source projects Elasticsearch, Logstash, and Kibana.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Elasticsearch&lt;br&gt;&lt;br&gt;
A distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logstash&lt;br&gt;&lt;br&gt;
A server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kibana&lt;br&gt;&lt;br&gt;
Lets users visualize Elasticsearch data with charts and graphs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
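To make the division of labor concrete: a Logstash pipeline is typically a small configuration file with input, filter, and output sections. A hedged sketch (the file path, grok pattern, and index name are placeholders, not from this article):

```conf
# Illustrative Logstash pipeline: tail a log file, parse each line,
# and ship the result to a local Elasticsearch node
input {
  file { path => "/var/log/my-rest-service/app.log" }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "rest-service-logs"
  }
}
```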

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres.cloudinary.com%2Fclickpesa%2Fimage%2Fupload%2Fv1653895868%2Fengineering-blog%2Fimages%2Fi1sn5uvkyez7jnttosni.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres.cloudinary.com%2Fclickpesa%2Fimage%2Fupload%2Fv1653895868%2Fengineering-blog%2Fimages%2Fi1sn5uvkyez7jnttosni.jpg" alt="drawing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ELK stack workflow with a Spring Boot logging service&lt;/p&gt;

&lt;p&gt;That is all for this chapter introducing logging as a service and its tooling. In the next chapter, we will see how to implement Elasticsearch in a REST service to manage and monitor logs.&lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>kibana</category>
      <category>logstash</category>
      <category>logging</category>
    </item>
    <item>
      <title>Automating code review on commit</title>
      <dc:creator>Peter Shekindo</dc:creator>
      <pubDate>Mon, 02 May 2022 10:20:33 +0000</pubDate>
      <link>https://dev.to/clickpesa/automating-code-review-on-commit-28lo</link>
      <guid>https://dev.to/clickpesa/automating-code-review-on-commit-28lo</guid>
      <description>&lt;h1&gt;
  
  
  HOW TO AUTOMATE CODE REVIEWS ON COMMITS
&lt;/h1&gt;

&lt;p&gt;Imagine the hassle of spending hours on code review every day. Automating code review can save countless hours of wasted effort.&lt;/p&gt;

&lt;p&gt;This article walks you through the steps to add automated code reviews that run on every commit.&lt;/p&gt;

&lt;p&gt;Codacy is one of the tools that can be used to achieve this.&lt;/p&gt;

&lt;p&gt;Codacy is an automated code review tool that helps developers ship better software, faster. With Codacy, you get static analysis, cyclomatic complexity, duplication, and code unit test coverage changes in every commit and pull request. &lt;/p&gt;

&lt;h2&gt;
  
  
  Steps to integrate your project with Codacy
&lt;/h2&gt;

&lt;h2&gt;
  
  
  1. Sign up
&lt;/h2&gt;

&lt;p&gt;You may use your Git provider such as GitHub, GitLab, or Bitbucket to sign up to Codacy. This links your Codacy user with your Git provider user, making it easier to add repositories to Codacy and invite your teammates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--df5IA-MQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://res.cloudinary.com/clickpesa/image/upload/v1650444797/engineering-blog/images/f47nkua8llyolvnmeqdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--df5IA-MQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://res.cloudinary.com/clickpesa/image/upload/v1650444797/engineering-blog/images/f47nkua8llyolvnmeqdu.png" alt="sign up" width="558" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Grant access
&lt;/h2&gt;

&lt;p&gt;Codacy will request access to your Git provider and ask for a few details about your organization (this helps them evaluate your use case).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6--DR_MI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://res.cloudinary.com/clickpesa/image/upload/v1650444920/engineering-blog/images/vtxfeitocdow3fw4uqqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6--DR_MI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://res.cloudinary.com/clickpesa/image/upload/v1650444920/engineering-blog/images/vtxfeitocdow3fw4uqqa.png" alt="grant access" width="555" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3.  Choose organization
&lt;/h2&gt;

&lt;p&gt;Now you will be required to join the organizations that contain your repositories.&lt;/p&gt;

&lt;p&gt;Note:&lt;/p&gt;

&lt;p&gt;Codacy names organizations after your Git provider username.&lt;/p&gt;

&lt;p&gt;For example, if your repository is under “github.com/Fruitsandvegies”, then the name of your organization will be “Fruitsandvegies”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dP9qTCKe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://res.cloudinary.com/clickpesa/image/upload/v1650444969/engineering-blog/images/zc6idpiadlinetgvui8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dP9qTCKe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://res.cloudinary.com/clickpesa/image/upload/v1650444969/engineering-blog/images/zc6idpiadlinetgvui8i.png" alt="choose organization" width="602" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Adding repository
&lt;/h2&gt;

&lt;p&gt;After selecting the organization in step 3, Codacy will list all the related repositories available. Select the “add” option to add a repository for review. Codacy will immediately set everything up so it is ready to analyze your next commit.&lt;/p&gt;

&lt;p&gt;Note:&lt;/p&gt;

&lt;p&gt;You can add a repository to Codacy only if you have permission to access that repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rGjNSMi2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://res.cloudinary.com/clickpesa/image/upload/v1650445009/engineering-blog/images/eiwpirbmddgpnq1mwt8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rGjNSMi2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://res.cloudinary.com/clickpesa/image/upload/v1650445009/engineering-blog/images/eiwpirbmddgpnq1mwt8d.png" alt="adding repository" width="602" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5.  Review result of the analysis
&lt;/h2&gt;

&lt;p&gt;Once the analysis has completed, select “Go to repository” to review the results.&lt;/p&gt;

&lt;p&gt;Detailed analysis result&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nanMasZN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://res.cloudinary.com/clickpesa/image/upload/v1650445048/engineering-blog/images/yxdypvp8vbvhpsyav2kx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nanMasZN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://res.cloudinary.com/clickpesa/image/upload/v1650445048/engineering-blog/images/yxdypvp8vbvhpsyav2kx.png" alt="review result of the analysis" width="586" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The initial analysis from Codacy is based on the default configuration. Codacy lets you set up a custom configuration, such as files to ignore, code patterns, and quality settings.&lt;/p&gt;
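For example, repository-level configuration can live in a .codacy.yaml file at the root of the repository. A hedged sketch (the excluded paths are placeholders; check Codacy's documentation for the full schema):

```yaml
# .codacy.yaml -- illustrative only; adjust paths to your project
exclude_paths:
  - "tests/**"
  - "docs/**"
```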

</description>
      <category>codacy</category>
      <category>git</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
