<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: David Bros</title>
    <description>The latest articles on DEV Community by David Bros (@thehunter896).</description>
    <link>https://dev.to/thehunter896</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F512513%2F944c1f2a-c208-409a-b7d5-ebda0398f1f0.jpeg</url>
      <title>DEV Community: David Bros</title>
      <link>https://dev.to/thehunter896</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thehunter896"/>
    <language>en</language>
    <item>
      <title>5 tips to make your Gitlab CI/CD cleaner</title>
      <dc:creator>David Bros</dc:creator>
      <pubDate>Mon, 27 Feb 2023 11:16:36 +0000</pubDate>
      <link>https://dev.to/thehunter896/5-tips-to-make-your-gitlab-ci-cleaner-514d</link>
      <guid>https://dev.to/thehunter896/5-tips-to-make-your-gitlab-ci-cleaner-514d</guid>
      <description>&lt;p&gt;Gitlab can be a bit of a challenge if you have never written any CI/CD pipelines, forget about your software development mindset and quickly adapt to the pipeline mindset with these 5 easy to remember tips.&lt;/p&gt;




&lt;ol&gt;
&lt;li&gt;Templates!&lt;/li&gt;
&lt;li&gt;Abstract Logins&lt;/li&gt;
&lt;li&gt;Keep jobs simple&lt;/li&gt;
&lt;li&gt;Keep jobs neatly linked&lt;/li&gt;
&lt;li&gt;Keep important steps manual&lt;/li&gt;
&lt;/ol&gt;



&lt;h3&gt;
  
  
  Templates! &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Pipelines can get messy: their top-down execution means you need to carefully analyze which rules apply to each job before it runs. You can simplify most of these rules with templates.&lt;/p&gt;

&lt;p&gt;Templates can be defined in separate repos and then used in your project's gitlab-ci.yml for extra convenience and abstraction. You can enable this behavior with &lt;strong&gt;include&lt;/strong&gt; and &lt;strong&gt;extends&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ci-cd-templates repo&lt;/strong&gt;&lt;br&gt;
templates.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;.echo-statement&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;before_script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo "this is an inherited step!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;my-project repo&lt;/strong&gt;&lt;br&gt;
gitlab-ci.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ci-cd-templates'&lt;/span&gt;
    &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;templates.yml'&lt;/span&gt;
    &lt;span class="na"&gt;rev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(tag or branch)&lt;/span&gt;

&lt;span class="na"&gt;example-job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;extends&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.echo-statement&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo "this is a normal step!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important caveat&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keys in your job are inherited from the template; if you define the same keys in example-job, they will overwrite the template's keys and values. This is why my templates normally use a before_script/after_script.&lt;/li&gt;
&lt;/ul&gt;
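
&lt;p&gt;As a minimal sketch of that caveat (the job and template names here are made up for illustration): if both the template and the job define the same key, the job's value wins entirely, while keys only the template defines still apply:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;.build-template:
  before_script:
    - echo "kept from the template"
  script:
    - echo "this value gets overwritten"

example-job:
  extends:
    - .build-template
  script:
    - echo "this replaces the template's script entirely"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;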



&lt;h3&gt;
  
  
  Abstract Logins &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Let's use the freshly acquired knowledge from the last tip to simplify our pipelines. When writing pipelines you will inevitably reach a point where you need to log in to something: a cloud provider, an API, the GitLab API, anything really.&lt;/p&gt;

&lt;p&gt;Keep your pipelines cleaner by abstracting the logins into your templates. Since logins almost always generate files that hold the access tokens, combine this with &lt;strong&gt;artifacts&lt;/strong&gt; to abstract your logins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ci-cd-templates repo&lt;/strong&gt;&lt;br&gt;
templates.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;.aws-login&lt;/span&gt;
&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MY-ROLE&lt;/span&gt;
  &lt;span class="na"&gt;role_session_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
  &lt;span class="na"&gt;account_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0000000&lt;/span&gt;
&lt;span class="na"&gt;before_script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;aws sts assume-role --role-arn "arn:aws:iam::${ACCOUNT_ID}:role/${ROLE}" --role-session-name "${ROLE_SESSION_NAME}" &amp;gt; creds.tmp.json&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo AWS_ACCESS_KEY_ID=$(cat creds.tmp.json | jq -r ".Credentials.AccessKeyId") &amp;gt; build.env&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo AWS_SECRET_ACCESS_KEY=$(cat creds.tmp.json | jq -r ".Credentials.SecretAccessKey") &amp;gt;&amp;gt; build.env&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo AWS_SESSION_TOKEN=$(cat creds.tmp.json | jq -r ".Credentials.SessionToken") &amp;gt;&amp;gt; build.env&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;rm creds.tmp.json&lt;/span&gt;
&lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;reports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;dotenv&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build.env&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Attach this login template to your deployment jobs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;deploy-dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;extends&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.aws-login&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker build -t [...]&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker push [...]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;
  
  
  Keep jobs simple &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;When developing jobs, keep them as purpose-fit as possible; this ensures readability and maintainability.&lt;/p&gt;

&lt;p&gt;Declare the same stages every time and give each environment its own individual jobs. Those who have read "The Pragmatic Programmer" will kill me for saying this, but remember: forget your programmer mindset.&lt;/p&gt;

&lt;p&gt;A template we use in almost every project:&lt;/p&gt;

&lt;p&gt;gitlab-ci.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;

&lt;span class="na"&gt;make-test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="c1"&gt;# executes unittests / integration tests&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;

&lt;span class="na"&gt;init-dev-dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# downloads dependencies&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;

&lt;span class="na"&gt;build-dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# builds a docker image / gz / tar / etc&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;

&lt;span class="na"&gt;deploy-dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# deploys / uploads a docker image / gz / tar / etc&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;

&lt;span class="na"&gt;init-tst-dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# same steps, different enviornment&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;

&lt;span class="na"&gt;build-tst&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;

&lt;span class="na"&gt;deploy-tst&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;

&lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;other environments&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;
  
  
  Keep jobs neatly linked &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Neatly tie jobs in order to form dependent pipelines with &lt;strong&gt;needs&lt;/strong&gt; and &lt;strong&gt;dependencies&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Needs: requires the listed job to run before this one; defined correctly, this creates independent pipelines for each environment.&lt;/li&gt;
&lt;li&gt;Dependencies: fetches the artifacts (files) from the listed jobs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;gitlab-ci.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;make-test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;

&lt;span class="na"&gt;init-dev-dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;make-test&lt;/span&gt;

&lt;span class="na"&gt;build-dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;init-dev-dependencies&lt;/span&gt;
  &lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;init-dev-dependecies&lt;/span&gt;

&lt;span class="na"&gt;deploy-dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build-dev&lt;/span&gt;
  &lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build-dev&lt;/span&gt;

&lt;span class="na"&gt;init-tst-dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;make-test&lt;/span&gt;

&lt;span class="na"&gt;build-tst&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;init-tst-dependencies&lt;/span&gt;
  &lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;init-tst-dependencies&lt;/span&gt;

&lt;span class="na"&gt;deploy-tst&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build-tst&lt;/span&gt;
  &lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build-tst&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a clearer view of what &lt;strong&gt;needs&lt;/strong&gt; actually does:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With&lt;/strong&gt; needs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkb6693y7a6zqklknaqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkb6693y7a6zqklknaqx.png" alt="With needs" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without&lt;/strong&gt; needs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x90kwbw5e0cxmw70bp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x90kwbw5e0cxmw70bp5.png" alt="Without needs" width="800" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, in essence, it parallelizes.&lt;/p&gt;



&lt;h3&gt;
  
  
  Keep important steps manual &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Lastly, keep the most important or dangerous steps manual by adding &lt;strong&gt;when: manual&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;when: manual&lt;/strong&gt; ensures a user must press the button in the GitLab UI for the step to run. Naturally, jobs linked to a manual job via needs will not run until the button is pressed. &lt;u&gt;It is also highly recommended to tie the initial production job to the acceptance / test deployment, for extra protection against unwanted deployments.&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;gitlab-ci.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;init-prd-dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="c1"&gt;# no need for unit tests when deploying to prod, since it should already be tested in the dev/tst/acc pipelines&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;manual&lt;/span&gt;

&lt;span class="na"&gt;build-prd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;init-prd-dependencies&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy-acc&lt;/span&gt;
  &lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;init-prd-dependecies&lt;/span&gt;

&lt;span class="na"&gt;deploy-prd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build-prd&lt;/span&gt;
  &lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build-prd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are some tips we have learned over the past months; these GitLab features have helped our team create better, safer, more readable, and more robust pipelines.&lt;/p&gt;

</description>
      <category>career</category>
      <category>worklifebalance</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Mastering Step Functions States and Paths</title>
      <dc:creator>David Bros</dc:creator>
      <pubDate>Sun, 04 Dec 2022 11:01:22 +0000</pubDate>
      <link>https://dev.to/thehunter896/mastering-step-functions-states-and-paths-2d36</link>
      <guid>https://dev.to/thehunter896/mastering-step-functions-states-and-paths-2d36</guid>
      <description>&lt;p&gt;AWS has gained exponential traction over the past years, and for all the good reasons.&lt;/p&gt;

&lt;p&gt;While AWS can be a bit of a headache for beginners (you know, due to the fact that there are over 200 services to choose from), the barrier to entry for the vast majority of them is relatively low, especially if you have some software engineering experience.&lt;/p&gt;

&lt;p&gt;Such is the case for Step Functions: easy to understand, relatively hard to master. In this article I want to guide you through some techniques and introduce you to some concepts that will help you understand and efficiently manage the state.&lt;/p&gt;

&lt;h3&gt;
  
  
  Table of contents:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Intro&lt;/li&gt;
&lt;li&gt;Set up&lt;/li&gt;
&lt;li&gt;The State&lt;/li&gt;
&lt;li&gt;Execution State&lt;/li&gt;
&lt;li&gt;Local State&lt;/li&gt;
&lt;li&gt;Manipulating the local state&lt;/li&gt;
&lt;li&gt;End Note&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Intro &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Wait, what are step functions?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step Functions are infrastructure as code that represent the series of steps in your logical process. This process can be anything: moving data between S3 buckets, executing Lambdas, sending alarms. Any AWS service can be used in custom-made flows that you define and later have available as IaC.&lt;/p&gt;

&lt;p&gt;In this article, rather than explain what you can do with them, I will detail what the state is and how to manage it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Set Up (Only if you want to follow along practically)&lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;AWS offers a free tier for 12 months (with certain limits). If you have never developed Step Functions on AWS, I recommend you build the step function as you follow along with the article.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://portal.aws.amazon.com/billing/signup#/start/email" rel="noopener noreferrer"&gt;AWS Free Tier Account&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/getting-started/hands-on/run-serverless-code/" rel="noopener noreferrer"&gt;Hello World Lambda&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/getting-started.html" rel="noopener noreferrer"&gt;Step Function&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make sure the step function has rights of execution by adding the &lt;strong&gt;AWSLambdaRole&lt;/strong&gt; policy to the Step Function's role.&lt;/p&gt;

&lt;h3&gt;
  
  
  The State &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The state is a JSON object that serves as the input of the logical step you are executing and later becomes its output. This JSON structure is completely customizable; however, you need to take care of its structure and content, otherwise it will become unreadable and unmaintainable.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Types of state&lt;/u&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Execution State (Execution Context)&lt;/li&gt;
&lt;li&gt;Local State (Local Context)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Execution state&lt;/strong&gt; &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The execution state is the input the step function was executed with. &lt;/p&gt;

&lt;p&gt;It can be defined when executing the step function manually, or by schedulers/orchestrators that do it for you. You can modify it, &lt;u&gt;but it is highly recommended you don't; this way you will always be able to tell how the step function was executed&lt;/u&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8rj1ta1yz5wno4jldud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8rj1ta1yz5wno4jldud.png" alt="Execute State Machine"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frph49ey62o8r5pqbyoe9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frph49ey62o8r5pqbyoe9.png" alt="Input"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The execution state will be accessible during the &lt;strong&gt;whole&lt;/strong&gt; flow of the function, leverage it!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessing the execution state&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By using a double $ as a prefix you can access the Execution State and use its keys as if they were variables. You will also need to use .$ as a suffix on the key where you want to use it.&lt;/p&gt;

&lt;p&gt;For example, let's run the hello-world Lambda using the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Execution State (Execution Input)&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "key1": "This is an execution argument!",
  "key2": "No, I can't fix your printer!",
  "key3": "HTML is not a programming language!"
} 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Lambda Payload&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "key1.$": "$$.Execution.Input.key1",
  "key2.$": "$$.Execution.Input.key2",
  "key3.$": "$$.Execution.Input.key3"
} 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Remember&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;.$ as a suffix on a key tells Step Functions that the value will be a path into a state.&lt;/li&gt;
&lt;li&gt;$$. as a prefix tells Step Functions to access the Execution state.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Lambda step should now look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filt4372g3hoxjbseeus2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filt4372g3hoxjbseeus2.png" alt="Result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the function:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwjpx1dnbt33axndoni7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwjpx1dnbt33axndoni7.png" alt="Function Run"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you look at the results, you will see the values have been successfully replaced:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwr54lvypbdula3297rl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwr54lvypbdula3297rl.png" alt="Replaced Arguments"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Local State &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The local state is the one you get to play with the most, since it gets changed / manipulated in every step. &lt;/p&gt;

&lt;p&gt;In the first step of the step function, the Execution State will be copied and used as an input. &lt;/p&gt;

&lt;p&gt;The flow goes as follows:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;local state is used as input for a step (task) &amp;gt; local state is now the output of that step (task)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;However, sometimes you want to be able to keep the output of a step for another one which is further down the line.&lt;/p&gt;

&lt;p&gt;Whatever the case is, in 90% of situations you will need to manipulate it in order to fit your flow's needs.&lt;/p&gt;
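
&lt;p&gt;To make the flow above concrete, here is a rough sketch (the keys are invented for illustration) of how a Lambda invoke step replaces the local state:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Local state before the step (input):

{ "order_id": "123", "customer": "ACME" }

Local state after the step (output) - the original keys are gone,
replaced by the Lambda invoke result:

{ "StatusCode": 200, "Payload": { "status": "processed" } }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;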

&lt;p&gt;Left unchecked, your state will become messy and unreadable (truncated, some of these objects are very long):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4m06l6gkuqag1vgbzdoh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4m06l6gkuqag1vgbzdoh.png" alt="Results-4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this picture, the Lambda invoke is the first step, so the Input is a copy of the Execution State. After that, the local state becomes the Output shown; in the next step, the Input would be that Output.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manipulating the local state &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;You can manipulate the state with the given paths:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;InputPath&lt;/li&gt;
&lt;li&gt;ResultSelector&lt;/li&gt;
&lt;li&gt;OutputPath&lt;/li&gt;
&lt;li&gt;ResultPath&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is key that you understand what all these paths mean in order to master manipulating and leveraging your state.&lt;/p&gt;

&lt;p&gt;All the paths can be enabled at every step of your flow:&lt;/p&gt;

&lt;p&gt;Input:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmp0qrzdzcyrsxslz5lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmp0qrzdzcyrsxslz5lg.png" alt="Input"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fclxplglydz2obz66lc8x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fclxplglydz2obz66lc8x.png" alt="Output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How are they executed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They are optional, so you can activate or deactivate them at any point, creating any combination with them.&lt;/li&gt;
&lt;li&gt;The flow goes as follows (this is the case for every step in your flow):&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;InputPath &amp;gt; Step Execution (Lambda) &amp;gt; ResultSelector &amp;gt; ResultPath &amp;gt; OutputPath&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  InputPath &lt;a&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;The InputPath filters the local state that acts as input for a step: you can select a certain key to use as input, instead of the whole local state that was passed from the previous step.&lt;/p&gt;

&lt;p&gt;Add the following to the InputPath:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$.key1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0lkaz5dd4jvtav41vjdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0lkaz5dd4jvtav41vjdu.png" alt="Filtering Input"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It looks like nothing happened; activate the advanced view:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x3yl1iw6o49ozq2pv9wd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x3yl1iw6o49ozq2pv9wd.png" alt="Remapped payload"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The payload was successfully changed. (The Lambda will fail, since the input is not supposed to be a string.)&lt;/p&gt;

&lt;h4&gt;
  ResultSelector &lt;a&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;The ResultSelector filters and transforms the step's output; with it you can shape the output into whatever form you want.&lt;/p&gt;

&lt;p&gt;Add the following to ResultSelector:&lt;br&gt;
&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "im_changing_the_field.$": "$",
    "im_a_copy.$": "$",
    "im_another_copy.$": "$.Payload"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Run the function, look at the results (truncated):

![Results](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xx4yf1wyyv86hr9z4mqn.png)

#### ResultPath &amp;lt;a name="result-path"&amp;gt;&amp;lt;/a&amp;gt;

The ResultPath (different than ResultSelector) takes care of adding or substracting structures from your state. This step is limited to two options:

- Add the output from OutputPath to your original unfiltered input with a new key.
- Discard the output from OutputPath and return your original unfiltered input. 

Copy the following to your ResultPath (disable ResultSelector): ```

 $.lambda_result 

``` and select the first option. 

Look at the results:

![Results](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pn7a0rpai7qgl9mjq3la.png)

Do the same with the second option and look at the results:

![Results-2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/puxa6c0vhykoakz98bn4.png)

#### OutputPath &amp;lt;a name="output-path"&amp;gt;&amp;lt;/a&amp;gt;

The OutputPath does the same as the InputPath, but for the output. Take in account this output &amp;lt;u&amp;gt;can be the raw output from the step or, a transformed output that comes from ResultSelector/ResultPath&amp;lt;/u&amp;gt;.

Add the following to OutputPath (disable ResultsPath): ```

 $.Payload 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run the function, look at the results (enable advanced view):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro577bwwymv3lg99y5no.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro577bwwymv3lg99y5no.png" alt="Output Result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice how the Result was filtered by the OutputPath.&lt;/p&gt;

&lt;p&gt;Now try to combine all of them at the same time ;)&lt;/p&gt;
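&lt;p&gt;As a reference for combining them, a state definition that uses all four fields at once could look like this (a sketch; the Lambda ARN and the key names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"MyStep": {
  "Type": "Task",
  "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:FUNCTION",
  "InputPath": "$.key1",
  "ResultSelector": { "status.$": "$.Payload" },
  "ResultPath": "$.lambda_result",
  "OutputPath": "$.lambda_result",
  "End": true
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Here the step receives only &lt;strong&gt;key1&lt;/strong&gt;, the ResultSelector keeps just the Lambda's Payload, the ResultPath merges it into the original input under &lt;strong&gt;lambda_result&lt;/strong&gt;, and the OutputPath passes only that key forward.&lt;/p&gt;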

&lt;h3&gt;
  
  
  End Note
&lt;/h3&gt;

&lt;p&gt;Now that you know how the state works and how to manipulate it, it falls on your shoulders to keep it clean and readable. &lt;/p&gt;

&lt;p&gt;Reflect on the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What does my previous step output? Is all of that output needed for the next step?&lt;/li&gt;
&lt;li&gt;What does my next step need as input? Is all of the local state needed for it?&lt;/li&gt;
&lt;li&gt;Do I need my local state in the output at the end of the function, or can I discard it?&lt;/li&gt;
&lt;li&gt;Will my step function have an execution input? How can I leverage that to keep the local state clean and simple?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These questions will help you design clean and readable step functions.&lt;/p&gt;

&lt;p&gt;Thank you for reading and see you in the next one!&lt;/p&gt;

</description>
      <category>stepfunctions</category>
      <category>cloud</category>
      <category>beginners</category>
      <category>aws</category>
    </item>
    <item>
      <title>ELK: Logstash in 5 minutes</title>
      <dc:creator>David Bros</dc:creator>
      <pubDate>Tue, 22 Mar 2022 23:19:18 +0000</pubDate>
      <link>https://dev.to/thehunter896/elk-logstash-in-5-minutes-530o</link>
      <guid>https://dev.to/thehunter896/elk-logstash-in-5-minutes-530o</guid>
<description>&lt;p&gt;This is a continuation of the &lt;a href="https://dev.to/thehunter896/elk-elasticsearch-in-5-minutes-5bfn"&gt;previous&lt;/a&gt; article: Elasticsearch in 5 minutes. &lt;/p&gt;

&lt;p&gt;Logstash is the second essential tool for Data Engineering. Like Elasticsearch, it is developed and maintained by the Elastic team, and it also has a free version available (which will cover all your needs).&lt;/p&gt;

&lt;p&gt;In this article we will learn what Logstash is and how it works; we will also set up a Logstash processor and play with some of the plugins it has to offer. All in under 5 minutes!&lt;/p&gt;

&lt;p&gt;For this article I have set up a &lt;a href="https://github.com/TheHunter896/logstash-in-five-minutes" rel="noopener noreferrer"&gt;Github Repository&lt;/a&gt; where you can find all the files needed to complete some of the steps as well as the images within this article. &lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What is Logstash&lt;/strong&gt;&lt;br&gt;
Logstash is a data processor: it acts as the pipeline between your databases or nodes and other resources, which could be other databases, nodes, or applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frznkk8h6pj1jz3o7pnol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frznkk8h6pj1jz3o7pnol.png" alt="Logstash Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, the true power of Logstash comes from its data manipulation functions: transforming and filtering operations. &lt;/p&gt;
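&lt;p&gt;As a taste of those functions, a filter block can chain several plugins; this sketch uses the standard &lt;strong&gt;grok&lt;/strong&gt; and &lt;strong&gt;mutate&lt;/strong&gt; plugins (the field names here are made up for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;filter {
  grok {
    # parse a raw log line into structured fields
    match =&gt; { "message" =&gt; "%{IPORHOST:client_ip} %{WORD:http_method} %{URIPATH:request_path}" }
  }
  mutate {
    # drop the raw line and tag the event
    remove_field =&gt; ["message"]
    add_field =&gt; { "source" =&gt; "web" }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;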

&lt;p&gt;&lt;strong&gt;Where Logstash Shines&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Processing&lt;/li&gt;
&lt;li&gt;Data Streaming Environments&lt;/li&gt;
&lt;li&gt;Data Analytics Environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why does it shine in these environments&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High compatibility with input and output resources: Logstash can read from almost anything, including Cloud applications, message Brokers and websockets. Here is the list for &lt;a href="https://www.elastic.co/guide/en/logstash/current/input-plugins.html" rel="noopener noreferrer"&gt;input&lt;/a&gt; and &lt;a href="https://www.elastic.co/guide/en/logstash/current/output-plugins.html" rel="noopener noreferrer"&gt;output&lt;/a&gt; plugins. &lt;/li&gt;
&lt;li&gt;Fast and flexible transformation functions: Transform data structures with simple JSON defined operations. &lt;/li&gt;
&lt;li&gt;Filtering: A huge range of filtering functions is at your disposal; you can use them in different pipelines to create multiple sources from one raw source. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What version are we setting up&lt;/strong&gt;&lt;br&gt;
We will be setting up a free Logstash processor, version 8.x&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 1, Docker&lt;/strong&gt;&lt;br&gt;
For ease of use we will be using a Centos7 Docker container.&lt;/p&gt;

&lt;p&gt;Pull the image:&lt;br&gt;
&lt;code&gt;docker pull centos:7&lt;/code&gt;&lt;br&gt;
You can find the image &lt;a href="https://hub.docker.com/_/centos" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the container with SYS Admin permissions and a mounted volume:&lt;br&gt;
&lt;code&gt;docker run -id --cap-add=SYS_ADMIN --name=logstash-centos7 -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos:7 /sbin/init&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We run the container with SYS ADMIN permissions because we need systemctl to work inside the container&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Connect to the container:&lt;br&gt;
&lt;code&gt;docker exec -it logstash-centos7 /bin/bash&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 2, Machine Setup&lt;/strong&gt;&lt;br&gt;
Update the machine:&lt;br&gt;
&lt;code&gt;yum update -y &amp;amp;&amp;amp; yum upgrade -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install a text editor (any): &lt;br&gt;
&lt;code&gt;yum install vim -y&lt;/code&gt; &lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 3, Install Logstash&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install the GPG key for elastic:&lt;br&gt;
&lt;code&gt;sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add this logstash repo under &lt;strong&gt;/etc/yum.repos.d/logstash.repo&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[logstash-8.x]&lt;br&gt;
name=Elastic repository for 8.x packages&lt;br&gt;
baseurl=&lt;a href="https://artifacts.elastic.co/packages/8.x/yum" rel="noopener noreferrer"&gt;https://artifacts.elastic.co/packages/8.x/yum&lt;/a&gt;&lt;br&gt;
gpgcheck=1&lt;br&gt;
gpgkey=&lt;a href="https://artifacts.elastic.co/GPG-KEY-elasticsearch" rel="noopener noreferrer"&gt;https://artifacts.elastic.co/GPG-KEY-elasticsearch&lt;/a&gt;&lt;br&gt;
enabled=1&lt;br&gt;
autorefresh=1&lt;br&gt;
type=rpm-md&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Remember to save!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Download and install logstash from the repo&lt;br&gt;
&lt;code&gt;yum install logstash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Start logstash&lt;br&gt;
&lt;code&gt;systemctl start logstash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Verify logstash is running&lt;br&gt;
&lt;code&gt;systemctl status logstash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1phahtfbynr0clyxopm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1phahtfbynr0clyxopm.png" alt="Running Logstash Service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now stop logstash; we'll restart it later down the line&lt;br&gt;
&lt;code&gt;systemctl stop logstash&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 4, set up an elastic node&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We will be using Elasticsearch as input for our pipelines, so you will need to set up another Docker image with an elastic node, you can find a guide &lt;a href="https://dev.to/thehunter896/elk-elasticsearch-in-5-minutes-5bfn"&gt;here&lt;/a&gt; if you don't have one already running. &lt;/p&gt;

&lt;p&gt;Note down the IPs of your containers&lt;/p&gt;

&lt;p&gt;By default, Docker sets up a network called bridge, which containers can use to communicate among themselves. You can find the IPs by first listing your container IDs and then inspecting each container:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker ps&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fva7g20z5c61rom0udpm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fva7g20z5c61rom0udpm6.png" alt="Docker ps"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker inspect {docker container id}&lt;/code&gt;&lt;br&gt;
Docker inspect will yield a very large JSON, search for the field IPAddress&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lx68bfe8aa5tvguhaec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lx68bfe8aa5tvguhaec.png" alt="IPAddress field"&gt;&lt;/a&gt;&lt;/p&gt;
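&lt;p&gt;Alternatively, you can extract just the IP in one step with a Go-template filter (this assumes the container sits on the default bridge network):&lt;br&gt;
&lt;code&gt;docker inspect -f '{{ .NetworkSettings.IPAddress }}' logstash-centos7&lt;/code&gt;&lt;/p&gt;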

&lt;p&gt;Note down the IPs of both containers; we'll use them in the next step. &lt;/p&gt;

&lt;p&gt;Create a &lt;strong&gt;test_pipeline&lt;/strong&gt; index and insert sample data, you need to do this inside your Elasticsearch container. You can copy &lt;a href="https://github.com/TheHunter896/logstash-in-five-minutes/blob/master/sample_data.txt" rel="noopener noreferrer"&gt;this&lt;/a&gt; sample data and paste it under &lt;strong&gt;/tmp/sample_data.json&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vim /tmp/sample_data.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh71jokw58qimimk9jp1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh71jokw58qimimk9jp1h.png" alt="Create index"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the index has been created correctly&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -X GET --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200/_cat/indices&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffibuadm4rlyuno8e66vw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffibuadm4rlyuno8e66vw.png" alt="Cat Indices"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Index the sample data we previously saved&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -X POST --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -H 'Content-Type: application/x-ndjson' https://localhost:9200/test_pipeline/_bulk --data-binary @/tmp/sample_data.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc16j27f6fsd7hddfpixb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc16j27f6fsd7hddfpixb.png" alt="Index Documents"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create another index called &lt;strong&gt;test_pipeline_output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Due to time constraints (hint: the title) we are going to take some shortcuts: disabling SSL and setting our replicas to 0. &lt;/p&gt;

&lt;p&gt;To disable SSL you need only change &lt;strong&gt;xpack.security.enabled&lt;/strong&gt; to &lt;strong&gt;false&lt;/strong&gt; in the file &lt;strong&gt;/etc/elasticsearch/elasticsearch.yml&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frscxjr78gbh9e6p247az.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frscxjr78gbh9e6p247az.png" alt="XPACK Security Deactivation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Restart elasticsearch &lt;br&gt;
&lt;code&gt;systemctl restart elasticsearch&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now set your replicas to 0; do this step for both the &lt;strong&gt;test_pipeline&lt;/strong&gt; and &lt;strong&gt;test_pipeline_output&lt;/strong&gt; indices:&lt;br&gt;
&lt;code&gt;curl -X PUT -u elastic http://localhost:9200/test_pipeline/_settings -H 'Content-Type: application/json' --data '{"index": {"number_of_replicas": 0}}'&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 5, set up a logstash pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Copy &lt;a href="https://github.com/TheHunter896/logstash-in-five-minutes/blob/master/conf/pipeline.conf" rel="noopener noreferrer"&gt;this&lt;/a&gt; configuration under&lt;br&gt;
&lt;strong&gt;/etc/logstash/conf.d/pipeline.conf&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The purpose of this pipeline is to get the messages from &lt;strong&gt;index test_pipeline&lt;/strong&gt; to another index called &lt;strong&gt;test_pipeline_output&lt;/strong&gt;&lt;/p&gt;
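&lt;p&gt;For reference, such a pipeline boils down to an elasticsearch input and an elasticsearch output; this is a minimal sketch (the IP, password and schedule are placeholders, and the linked repo contains the actual file):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input {
  elasticsearch {
    hosts =&gt; ["172.17.0.2:9200"]
    index =&gt; "test_pipeline"
    user =&gt; "elastic"
    password =&gt; "your_password"
    # re-run the query on a cron-like schedule
    schedule =&gt; "* * * * *"
  }
}

output {
  elasticsearch {
    hosts =&gt; ["172.17.0.2:9200"]
    index =&gt; "test_pipeline_output"
    user =&gt; "elastic"
    password =&gt; "your_password"
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;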

&lt;p&gt;The only thing left now is to restart logstash in our logstash docker container.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;systemctl restart logstash&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;Let it run for a couple of minutes; after that, check the indices in your Elasticsearch node:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -X GET -u elastic http://localhost:9200/_cat/indices/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2gpik7x9jl47lq4ysrd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2gpik7x9jl47lq4ysrd.png" alt="Indices after logstash"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the image you can see that the output index has 100 items while the original has 10; this is because logstash continuously re-runs this pipeline every few seconds. &lt;/p&gt;

&lt;p&gt;In any case, with this article we have successfully set up a Logstash processor and moved data around with a very basic setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wwi35zlkd9g6y15k2f0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wwi35zlkd9g6y15k2f0.gif" alt="It just works"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow me for part 3, where we'll set up a Kibana UI in 5 minutes!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>centos</category>
      <category>logstash</category>
      <category>devops</category>
    </item>
    <item>
      <title>ELK: Elasticsearch in 5 minutes</title>
      <dc:creator>David Bros</dc:creator>
      <pubDate>Tue, 15 Mar 2022 20:59:23 +0000</pubDate>
      <link>https://dev.to/thehunter896/elk-elasticsearch-in-5-minutes-5bfn</link>
      <guid>https://dev.to/thehunter896/elk-elasticsearch-in-5-minutes-5bfn</guid>
<description>&lt;p&gt;If you claim to be a Data Engineer or a DevOps engineer then you have surely heard of &lt;strong&gt;Elasticsearch&lt;/strong&gt;; this technology has been gaining traction for all the right reasons. &lt;/p&gt;

&lt;p&gt;Although you can use Elasticsearch only for storage, it is highly recommended to use it alongside its brothers, Logstash and Kibana; we will get to those in other posts.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What is Elasticsearch&lt;/strong&gt;&lt;br&gt;
Elasticsearch is a NoSQL document database (JSON), its environment revolves around 3 main concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nodes: An Elasticsearch node is a server that stores data; it does this through shards.&lt;/li&gt;
&lt;li&gt;Shards: Shards are a storage abstraction that provides easier access to your data as well as support for replication. &lt;/li&gt;
&lt;li&gt;Indices: Indices are the second storage abstraction, they are what you are going to interact with when fetching or aggregating data from the Elasticsearch API.&lt;/li&gt;
&lt;/ul&gt;
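&lt;p&gt;These three concepts come together when you create an index: you choose how many shards and replicas back it. For example (a sketch with a hypothetical index name; run it once your node is up):&lt;br&gt;
&lt;code&gt;curl -X PUT --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200/my_index -H 'Content-Type: application/json' --data '{"settings": {"number_of_shards": 1, "number_of_replicas": 0}}'&lt;/code&gt;&lt;/p&gt;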



&lt;center&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnizubkxockvhpi2rqsv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnizubkxockvhpi2rqsv.png" alt="Elastic Node"&gt;&lt;/a&gt;
&lt;/center&gt;



&lt;p&gt;Where Elasticsearch shines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Streaming Environments: Together with Logstash&lt;/li&gt;
&lt;li&gt;Data Analytics Environments: Together with an Analytics Frontend such as Kibana&lt;/li&gt;
&lt;li&gt;Data Lake: Use the Elasticsearch REST API, compatible with any data analytics processing application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why does it shine in these environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast and efficient I/O operations: Provided by sharding and indexing.&lt;/li&gt;
&lt;li&gt;Aggregation support: Elasticsearch has an incredibly flexible REST API that you can use from any client to make custom aggregations and data structures.&lt;/li&gt;
&lt;li&gt;Kibana: Kibana is a front end application that fully integrates with Elasticsearch, it uses the Elasticsearch REST API to support all sorts of graph visualizations.
&lt;/li&gt;
&lt;/ul&gt;
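&lt;p&gt;As a small taste of the aggregation support, this is what a terms aggregation looks like against the search API (a sketch; the index and field names are hypothetical):&lt;br&gt;
&lt;code&gt;curl -X GET --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200/my_index/_search -H 'Content-Type: application/json' --data '{"size": 0, "aggs": {"by_status": {"terms": {"field": "status.keyword"}}}}'&lt;/code&gt;&lt;/p&gt;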

&lt;p&gt;&lt;strong&gt;What version are we setting up&lt;/strong&gt;&lt;br&gt;
Elasticsearch has a paid and a free version; you'll learn how to set up the free version.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 1, Docker&lt;/strong&gt;: &lt;br&gt;
For ease of use, we'll use Docker with a CentOS 7 image.&lt;/p&gt;

&lt;p&gt;Pull the image:&lt;br&gt;
&lt;code&gt;docker pull centos:7&lt;/code&gt;&lt;br&gt;
You can find the image &lt;a href="https://hub.docker.com/_/centos" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the container with SYS Admin permissions and a mounted volume:&lt;br&gt;
&lt;code&gt;docker run -id --cap-add=SYS_ADMIN --name=elasticsearch-centos7 -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos:7 /sbin/init&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;We run the container with SYS ADMIN permissions because we need systemctl to work inside the container&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Connect to the container:&lt;br&gt;
&lt;code&gt;docker exec -it elasticsearch-centos7 /bin/bash&lt;/code&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 2, Machine Setup&lt;/strong&gt;&lt;br&gt;
Update the machine:&lt;br&gt;
&lt;code&gt;yum update -y &amp;amp;&amp;amp; yum upgrade -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install a text editor (any): &lt;br&gt;
&lt;code&gt;yum install vim -y&lt;/code&gt; &lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 3, Install Elasticsearch&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install the elasticsearch repo under &lt;strong&gt;/etc/yum.repos.d/elasticsearch.repo&lt;/strong&gt;:&lt;br&gt;
&lt;code&gt;vim /etc/yum.repos.d/elasticsearch.repo&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;Paste the following text: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[elasticsearch]&lt;br&gt;
name=Elasticsearch repository for 8.x packages&lt;br&gt;
baseurl=&lt;a href="https://artifacts.elastic.co/packages/8.x/yum" rel="noopener noreferrer"&gt;https://artifacts.elastic.co/packages/8.x/yum&lt;/a&gt;&lt;br&gt;
gpgcheck=1&lt;br&gt;
gpgkey=&lt;a href="https://artifacts.elastic.co/GPG-KEY-elasticsearch" rel="noopener noreferrer"&gt;https://artifacts.elastic.co/GPG-KEY-elasticsearch&lt;/a&gt;&lt;br&gt;
enabled=0&lt;br&gt;
autorefresh=1&lt;br&gt;
type=rpm-md&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Remember to save!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can find the latest version of this configuration &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/8.1/rpm.html#rpm-repo" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Install Elasticsearch:&lt;br&gt;
&lt;code&gt;yum install --enablerepo=elasticsearch elasticsearch -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;There are other options to download the repository, but this is the easiest. &lt;/p&gt;

&lt;p&gt;Reload the daemon&lt;br&gt;
&lt;code&gt;systemctl daemon-reload&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;Enable and Start elasticsearch:&lt;br&gt;
&lt;code&gt;systemctl enable elasticsearch &amp;amp;&amp;amp; systemctl start elasticsearch&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 4, Verify&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check the service is running&lt;br&gt;
&lt;code&gt;systemctl status elasticsearch&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkkl6d4dlhqje3grxfjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkkl6d4dlhqje3grxfjd.png" alt="ElasticSearch Service Running" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reset your password for user &lt;strong&gt;elastic&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Confirm that elasticsearch works by querying the Elasticsearch API (https enabled by default):&lt;br&gt;
&lt;code&gt;curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp6cxmfqe5od8tinwwn8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp6cxmfqe5od8tinwwn8.png" alt="ElasticSearch API Repsponse" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You are now the proud owner of a free ElasticSearch Node!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wwi35zlkd9g6y15k2f0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wwi35zlkd9g6y15k2f0.gif" alt="It just works" width="220" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow me for part 2, where we'll set up Logstash in 5 minutes!&lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>devops</category>
      <category>centos7</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
