<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Djomkam Kevin</title>
    <description>The latest articles on DEV Community by Djomkam Kevin (@dj_kev).</description>
    <link>https://dev.to/dj_kev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F689059%2F740d6d3f-7674-4093-95d8-33cc92a65f49.jpg</url>
      <title>DEV Community: Djomkam Kevin</title>
      <link>https://dev.to/dj_kev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dj_kev"/>
    <language>en</language>
    <item>
      <title>Covid Symptom Reporting using Amazon Connect, Lex and Lambda (High Level View)</title>
      <dc:creator>Djomkam Kevin</dc:creator>
      <pubDate>Thu, 19 Aug 2021 18:43:52 +0000</pubDate>
      <link>https://dev.to/dj_kev/covid-symptom-reporting-using-amazon-connect-lex-and-lambda-high-level-view-34g4</link>
      <guid>https://dev.to/dj_kev/covid-symptom-reporting-using-amazon-connect-lex-and-lambda-high-level-view-34g4</guid>
      <description>&lt;h2&gt;
  
  
  Amazon Connect
&lt;/h2&gt;

&lt;p&gt;Amazon Connect makes it easy for you to set up and manage a customer contact center and provide reliable customer engagement at any scale. With Amazon Connect you can deploy a customer contact center with just a few clicks in the AWS management console, on-board agents from anywhere, and quickly begin to engage with your customers. &lt;/p&gt;

&lt;p&gt;Amazon Connect provides a seamless experience across voice and chat for your customers and agents. This includes one set of tools for skills-based routing, task management, powerful real-time and historical analytics, and intuitive management tools – all with pay-as-you-go pricing, which means Amazon Connect simplifies contact center operations, improves agent efficiency, and lowers costs.&lt;/p&gt;

&lt;p&gt;After creating a contact flow in Connect, you can set up a phone number that customers can use to interact with your applications or reach your agents for support.&lt;/p&gt;

&lt;p&gt;For this use case, we are setting up a Covid symptom reporting line through which users can call to report their status or ask for Covid-related information.&lt;/p&gt;

&lt;p&gt;When users call, they are connected to the contact flow, which presents the available options so they can select what they are interested in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9ePF5CXV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/we6sms31cdc3wynuaktq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9ePF5CXV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/we6sms31cdc3wynuaktq.jpg" alt="Covid Symptom Reporting high level architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If a user wants to report their Covid status, they press 1 on their phone and are taken to the status reporting flow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OoM0L7NT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipm78kgpos4xk6oytw18.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OoM0L7NT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipm78kgpos4xk6oytw18.jpg" alt="Report status flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the figure above, Amazon Lex enables us to create a chatbot that we can link to our contact flow in Connect.&lt;/p&gt;

&lt;p&gt;Lex is backed by a Lambda function that performs all the necessary processing and can also interact with an API hosted on premises or on any other server. Once processing is complete, a success message is returned to the user, who can hang up or ask for other information.&lt;/p&gt;
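
&lt;p&gt;As an illustration only (the message text here is made up, not the exact payload from this project), a Lex fulfillment Lambda typically closes the conversation by returning a response shaped like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "dialogAction": {
    "type": "Close",
    "fulfillmentState": "Fulfilled",
    "message": {
      "contentType": "PlainText",
      "content": "Your symptom report has been recorded. Thank you."
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;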

&lt;p&gt;This post is a high-level view of what is possible with Amazon Connect. Future posts will provide more detail on how to create a contact flow, create a queue in a contact flow, create a bot in Amazon Lex and link it to Connect, and create a Lambda function to process requests from Lex.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>architecture</category>
      <category>cloud</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Installing ELK stack on Ubuntu 14.04</title>
      <dc:creator>Djomkam Kevin</dc:creator>
      <pubDate>Thu, 19 Aug 2021 02:29:43 +0000</pubDate>
      <link>https://dev.to/dj_kev/installing-elk-stack-on-ubuntu-14-04-1ofb</link>
      <guid>https://dev.to/dj_kev/installing-elk-stack-on-ubuntu-14-04-1ofb</guid>
      <description>&lt;h2&gt;
  
  
  Pre-requisite
&lt;/h2&gt;

&lt;p&gt;Make sure Java 8 is installed, and set JAVA_HOME in /etc/default/elasticsearch.&lt;/p&gt;
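
&lt;p&gt;For example, you might add a line like the following to /etc/default/elasticsearch (the path shown is an example; adjust it to your own JDK installation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;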

&lt;h2&gt;
  
  
  Installing Elasticsearch
&lt;/h2&gt;

&lt;p&gt;Before starting, you will need to import the Elasticsearch public GPG key into apt. You can do this with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, you will need to add Elastic's package source list to apt.&lt;br&gt;
To do this open the sources.list file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/apt/sources.list&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add the following line:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;deb http://packages.elastic.co/elasticsearch/2.x/debian stable main&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Save the file and update the repository with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, install Elasticsearch with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get -y install elasticsearch&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once Elasticsearch is installed, you will need to restrict outside access to the instance; you can do this by editing the elasticsearch.yml file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/elasticsearch/elasticsearch.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Find the line that sets network.host and replace its value (192.168.0.1 by default) with localhost.&lt;/p&gt;
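
&lt;p&gt;After the change, the line should read:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;network.host: localhost&lt;/code&gt;&lt;/p&gt;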

&lt;p&gt;Save the file and start the Elasticsearch service:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo /etc/init.d/elasticsearch start&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, enable the Elasticsearch service to start at boot with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo update-rc.d elasticsearch defaults&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now that Elasticsearch is up and running, it's time to test it.&lt;br&gt;
You can test Elasticsearch with the following curl command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl localhost:9200&lt;/code&gt;&lt;/p&gt;
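
&lt;p&gt;If Elasticsearch is running, it responds with a JSON document similar to the following (the node name, version number, and build details will vary with your installation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name" : "Node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.4.6",
    ...
  },
  "tagline" : "You Know, for Search"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;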
&lt;h2&gt;
  
  
  Installing Logstash
&lt;/h2&gt;

&lt;p&gt;By default, Logstash is not available in the Ubuntu repository, so you will need to add the Logstash source list to apt.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/apt/sources.list&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add the following line:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;deb http://packages.elastic.co/logstash/2.2/debian stable main&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Save the file and update the repository:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, install Logstash with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get install logstash&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Configure Logstash
&lt;/h2&gt;

&lt;p&gt;Once Logstash is installed, you will need to create its configuration files in the /etc/logstash/conf.d directory. The configuration consists of three parts: inputs, filters, and outputs.&lt;br&gt;
Before configuring Logstash, create directories for storing the certificate and key for Logstash:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo mkdir -p /etc/pki/tls/certs&lt;br&gt;
sudo mkdir /etc/pki/tls/private&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, add the IP address of the ELK server to the OpenSSL configuration file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/ssl/openssl.cnf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Find the section [ v3_ca ] and add the following line:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;subjectAltName = IP: 192.168.1.7&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Save the file and generate the SSL certificate by running the following commands, where 192.168.1.7 is your ELK server's IP address:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd /etc/pki/tls &lt;br&gt;
sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/filebeat.key -out certs/filebeat.crt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note that you will need to copy this certificate to every client whose logs you want to send to the ELK server.&lt;/p&gt;

&lt;p&gt;Now, create the filebeat input configuration file with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/logstash/conf.d/beats-input.conf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input {  
      beats {
        port =&amp;gt; 5044
        type =&amp;gt; "logs"
        ssl =&amp;gt; true
        ssl_certificate =&amp;gt; "/etc/pki/tls/certs/filebeat.crt"
        ssl_key =&amp;gt; "/etc/pki/tls/private/filebeat.key"
      }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create logstash filters config file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/logstash/conf.d/syslog-filter.conf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;filter {  
      if [type] == "syslog" {
        grok {
          match =&amp;gt; { "message" =&amp;gt; "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:[%{POSINT:syslog_pid}])?: %{GREEDYDATA:syslog_message}" }
          add_field =&amp;gt; [ "received_at", "%{@timestamp}" ]
          add_field =&amp;gt; [ "received_from", "%{host}" ]
        }
        syslog_pri { }
        date {
          match =&amp;gt; [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Last, create logstash outputs config file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/logstash/conf.d/output.conf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output {  
      elasticsearch {
        hosts =&amp;gt; ["localhost:9200"]
      }
      stdout { codec =&amp;gt; rubydebug }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file.&lt;br&gt;
Edit the file /etc/default/logstash to point JAVACMD at your Java installation (adjust the path to match your JDK):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;JAVACMD=/home/djomkam/Desktop/jdk1.8.0_162 /bin/java
export  JAVACMD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test your Logstash configuration with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo service logstash configtest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The output will display Configuration OK if there are no errors. Otherwise, check the logstash log to troubleshoot problems.&lt;/p&gt;

&lt;p&gt;Next, restart the Logstash service and enable it to run automatically at boot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo /etc/init.d/logstash restart 
sudo update-rc.d logstash defaults
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installing Kibana
&lt;/h2&gt;

&lt;p&gt;To install Kibana, you will need to add Elastic's package source list to apt.&lt;br&gt;
You can create the Kibana source list file with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo "deb http://packages.elastic.co/kibana/4.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/kibana-4.4.x.list&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, update the apt repository with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Finally, install Kibana by running the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get -y install kibana&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once Kibana is installed, you will need to configure it. You can do this by editing its configuration file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /opt/kibana/config/kibana.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Change the following lines so that Kibana listens only on localhost:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server.port: 5601
server.host: localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, start the kibana service and enable it to start at boot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo /etc/init.d/kibana start 
sudo update-rc.d kibana defaults
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can verify whether Kibana is running with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;netstat -pltn&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, you will need to download the sample Kibana dashboards and Beats index patterns. You can download the sample dashboards with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once download is complete, unzip the downloaded file with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;unzip beats-dashboards-1.1.0.zip&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, load the sample dashboards, visualizations and Beats index patterns into Elasticsearch by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd beats-dashboards-1.1.0 
./load.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will find the following index patterns in the Kibana dashboard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packetbeat-*
topbeat-*
filebeat-*
winlogbeat-*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we will use only Filebeat to forward logs to Elasticsearch, so we will load a Filebeat index template into Elasticsearch.&lt;br&gt;
To do this, download the Filebeat index template:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now load the template by running the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If the template loaded properly, you should see the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
      "acknowledged" : true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installing Nginx
&lt;/h2&gt;

&lt;p&gt;You will also need to install Nginx to set up a reverse proxy that allows external access to Kibana, which is configured to listen only on localhost.&lt;br&gt;
To install Nginx, run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get install nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You will also need to install apache2-utils for htpasswd utility:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get install apache2-utils&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, create an admin user to access the Kibana web interface using the htpasswd utility:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo htpasswd -c /etc/nginx/htpasswd.users admin&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Enter a password of your choice; you will need it to access the Kibana web interface.&lt;br&gt;
Next, open the Nginx default configuration file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/nginx/sites-available/default&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Delete all the lines and add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
        listen 80;
        server_name 192.168.1.7;
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/htpasswd.users;
        location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and exit the file. Nginx now directs your server's traffic to the Kibana server, which is listening on localhost:5601. Now restart Nginx service and enable it to start at boot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start nginx
sudo systemctl enable nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, Kibana is accessible via the public IP address of your ELK server, and the ELK server is ready to receive Filebeat data. It's time to set up Filebeat on each client server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Filebeat on the Client Server
&lt;/h2&gt;

&lt;p&gt;You will need to set up Filebeat on each Ubuntu server whose logs you want to send to Logstash on your ELK server.&lt;/p&gt;

&lt;p&gt;Before setting up Filebeat on the client server, you will need to copy the SSL certificate from the ELK server to the client server.&lt;/p&gt;

&lt;p&gt;On the ELK server, run the following command to copy the SSL certificate to the client server:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scp /etc/pki/tls/certs/filebeat.crt user@client-server-ip:/tmp/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here, user is the username on the client server and client-server-ip is the IP address of the client server.&lt;/p&gt;

&lt;p&gt;Now, on the client server, copy the ELK server's SSL certificate into the appropriate location.&lt;br&gt;
First, create the directory structure for the SSL certificate:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo mkdir -p /etc/pki/tls/certs/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, copy the certificate into it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo cp /tmp/filebeat.crt /etc/pki/tls/certs/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, it's time to install the filebeat package on the client server.&lt;/p&gt;

&lt;p&gt;To install Filebeat, you will need to create a source list for it; you can do this with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, add the GPG key with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, update the repository with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Finally, install Filebeat by running the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get install filebeat&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once Filebeat is installed, start the Filebeat service and enable it to start at boot:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo /etc/init.d/filebeat start&lt;br&gt;
sudo update-rc.d filebeat defaults&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, you will need to configure Filebeat to connect to Logstash on your ELK Server. You can do this by editing the Filebeat configuration file located at /etc/filebeat/filebeat.yml.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/filebeat/filebeat.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Change the file as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;filebeat:
      prospectors:
        -
          paths:
            - /var/log/auth.log
            - /var/log/syslog
          #  - /var/log/*.loginput_type: logdocument_type: syslogregistry_file: /var/lib/filebeat/registryoutput:
      logstash:
        hosts: ["192.168.1.7:5044"]
        bulk_max_size: 1024tls:
          certificate_authorities: ["/etc/pki/tls/certs/filebeat.crt"]shipper:
logging:
      files:
        rotateeverybytes: 10485760 # = 10MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file and restart filebeat service:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo /etc/init.d/filebeat restart&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now Filebeat is sending syslog and auth.log to Logstash on your ELK server.&lt;/p&gt;

&lt;p&gt;Once everything is set up, you should test whether Filebeat on your client server is shipping logs to Logstash on your ELK server.&lt;/p&gt;

&lt;p&gt;To do this, run the following command on your ELK server:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can also test Filebeat by running the following command on the client server:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo filebeat -c /etc/filebeat/filebeat.yml -e -v&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Allow ELK Through Your Firewall
&lt;/h2&gt;

&lt;p&gt;Next, you will need to configure your firewall to allow traffic to the following ports. You can do this by running the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5601 -j ACCEPT&lt;br&gt;
sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT&lt;br&gt;
sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT&lt;br&gt;
sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5044 -j ACCEPT&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Ubuntu does not ship an iptables service, so to persist these rules across reboots, install the iptables-persistent package:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get install iptables-persistent&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;During installation you will be prompted to save the current rules; to save them again later, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo dpkg-reconfigure iptables-persistent&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Access the Kibana Web Interface
&lt;/h2&gt;

&lt;p&gt;When everything is up and running, it's time to access the Kibana web interface.&lt;/p&gt;

&lt;p&gt;On a client computer, open your web browser and go to &lt;a href="http://your-elk-server-ip"&gt;http://your-elk-server-ip&lt;/a&gt;. Enter the admin credentials that you created earlier, and you will be redirected to the Kibana welcome page.&lt;/p&gt;

&lt;p&gt;Then, click filebeat-* in the top-left sidebar; you should see the log data arriving from your client servers.&lt;/p&gt;

</description>
      <category>ubuntu</category>
      <category>elasticsearch</category>
      <category>logstash</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Set Up Distributed Tracing in Microservices with Spring Boot, Zipkin or the ELK Stack</title>
      <dc:creator>Djomkam Kevin</dc:creator>
      <pubDate>Wed, 18 Aug 2021 22:42:00 +0000</pubDate>
      <link>https://dev.to/dj_kev/how-to-set-up-distributed-tracing-in-microservices-with-spring-boot-zipkin-or-the-elk-stack-444a</link>
      <guid>https://dev.to/dj_kev/how-to-set-up-distributed-tracing-in-microservices-with-spring-boot-zipkin-or-the-elk-stack-444a</guid>
      <description>&lt;h2&gt;
  
  
  Pre-requisite
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Have a basic understanding of how to set up a microservice using Spring Boot and Spring Cloud. &lt;/li&gt;
&lt;li&gt;Install the Zipkin server&lt;/li&gt;
&lt;li&gt;Install Elasticsearch, Logstash, and Kibana&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installing and running Zipkin Server
&lt;/h2&gt;

&lt;p&gt;There are two ways to install the Zipkin server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you have Java 8 or higher installed, the quickest way to get started is to fetch the &lt;a href="https://search.maven.org/remote_content?g=io.zipkin&amp;amp;a=zipkin-server&amp;amp;v=LATEST&amp;amp;c=exec"&gt;Latest release&lt;/a&gt; as a self-contained executable jar:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;curl -sSL https://zipkin.io/quickstart.sh | bash -s&lt;br&gt;
java -jar zipkin.jar&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you have docker installed, you can use the following to run the latest image directly:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;docker run -d -p 9411:9411 openzipkin/zipkin&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Install Elasticsearch, Logstash, Kibana
&lt;/h2&gt;

&lt;p&gt;There are also two ways to install and use the ELK stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ELK stack can be run through Docker. First create a user-defined network so the containers can reach each other by name (&lt;code&gt;docker network create es&lt;/code&gt;), then run the following command:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;docker run -d --name elasticsearch --net es -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.7.2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Create a file logstash.conf with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input {
  tcp {
    port =&amp;gt; 5000
    codec =&amp;gt; json
  }
}
output {
  elasticsearch {
    hosts =&amp;gt; ["http://elasticsearch:9200"]
    index =&amp;gt; "micro-%{appname}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d --name logstash --net es -p 5000:5000 -v ~/logstash.conf:/usr/share/logstash/pipeline/logstash.conf docker.elastic.co/logstash/logstash:6.7.2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Finally, run Kibana with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d --name kibana --net es -e "ELASTICSEARCH_URL=http://elasticsearch:9200" -p 5601:5601 docker.elastic.co/kibana/kibana:6.7.2&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ELK stack can also be installed by navigating to &lt;a href="https://www.elastic.co/downloads/"&gt;https://www.elastic.co/downloads/&lt;/a&gt;, downloading the Elasticsearch, Logstash, and Kibana archives, and unzipping them.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ElasticSearch
&lt;/h3&gt;

&lt;p&gt;Unzip the archive&lt;br&gt;
Run bin/elasticsearch (or bin\elasticsearch.bat on Windows)&lt;br&gt;
Run curl &lt;a href="http://localhost:9200/"&gt;http://localhost:9200/&lt;/a&gt; or Invoke-RestMethod &lt;a href="http://localhost:9200"&gt;http://localhost:9200&lt;/a&gt; with PowerShell&lt;/p&gt;
&lt;h3&gt;
  
  
  Kibana
&lt;/h3&gt;

&lt;p&gt;Unzip the archive&lt;br&gt;
Open config/kibana.yml in an editor&lt;br&gt;
Set elasticsearch.hosts to point at your Elasticsearch instance&lt;br&gt;
Run bin/kibana (or bin\kibana.bat on Windows)&lt;br&gt;
Point your browser at &lt;a href="http://localhost:5601"&gt;http://localhost:5601&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  LogStash
&lt;/h3&gt;

&lt;p&gt;Unzip the archive&lt;br&gt;
Prepare a logstash.conf &lt;a href="https://www.elastic.co/guide/en/logstash/current/configuration.html"&gt;config&lt;/a&gt; file&lt;br&gt;
Run bin/logstash -f logstash.conf&lt;/p&gt;
&lt;h2&gt;
  
  
  Building the Microservice architecture and integrating tracing
&lt;/h2&gt;
&lt;h3&gt;
  
  
  STEP 1: Building the config server with spring cloud config
&lt;/h3&gt;

&lt;p&gt;To enable the Spring Cloud Config feature for an application, first add spring-cloud-config-server to your project dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-config-server&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, to run the embedded configuration server during application boot, use the @EnableConfigServer annotation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, Spring Cloud Config Server stores its configuration data inside a Git repository. This is a very good choice in production, but for the purposes of this tutorial a file-system backend will be enough. It is really easy to start with the config server, because we can place all the properties on the classpath. Spring Cloud Config by default searches for property sources in the following locations: classpath:/, classpath:/config, file:./, file:./config.&lt;/p&gt;

&lt;p&gt;We place all the property sources inside src/main/resources/config. Each YAML file is named after its service. For example, the YAML file for discovery-service will be located at src/main/resources/config/discovery-service.yml.&lt;/p&gt;

&lt;p&gt;Finally, two important things. If you would like to start the config server with the file-system backend, you have to activate the &lt;strong&gt;native&lt;/strong&gt; profile. This may be achieved by setting the parameter --spring.profiles.active=native during application boot, or by setting it in the properties file. We also set the server port with the server.port property in the bootstrap.yml file; here we will use 8888. All other applications, including discovery-service, then need to add the spring-cloud-starter-config dependency in order to enable the config client.&lt;/p&gt;
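
&lt;p&gt;For reference, a minimal bootstrap.yml for the config server along these lines might look as follows (a sketch; the application name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  application:
    name: config-service
  profiles:
    active: native
server:
  port: 8888
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;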

&lt;h3&gt;
  
  
  STEP 2: Building the discovery Service with spring cloud Netflix Eureka
&lt;/h3&gt;

&lt;p&gt;To set up the discovery service, we also have to include the spring-cloud-starter-netflix-eureka-server dependency.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-starter-netflix-eureka-server&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then enable the embedded discovery server during application boot by adding the @EnableEurekaServer annotation to the main class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServiceApplication.class, args);
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The application has to fetch its property sources from the configuration server. The minimal configuration required on the client side is the application name and the config server's connection settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  application:
    name: discovery-service
  cloud:
    config:
      uri: http://localhost:8888
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration file discovery-service.yml should contain the settings below and should be placed inside the config-service module. For a standalone Eureka instance we have to disable registration and registry fetching, since the server should not try to register with itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  STEP 3: Building a Microservice Using Spring Boot and Spring Cloud
&lt;/h3&gt;

&lt;p&gt;Our microservice has to perform some operations during boot. It needs to fetch configuration from config-service, register itself in discovery-service, and expose an HTTP API. To enable all these mechanisms we need to include some dependencies in pom.xml. The config client is enabled by the spring-cloud-starter-config starter. The discovery client is enabled by including spring-cloud-starter-netflix-eureka-client and annotating the main class with @EnableDiscoveryClient. Here is the list of dependencies required for the sample microservice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-starter-netflix-eureka-client&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-starter-config&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-boot-starter-web&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here is the main class of application that enables &lt;strong&gt;Discovery Client&lt;/strong&gt; for the microservice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient
public class SiteServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(SiteServiceApplication.class, args);
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The application has to fetch its configuration from a remote server, so we only need to provide a bootstrap.yml file with the service name and the server URL. In fact, this is an example of the &lt;strong&gt;Config First Bootstrap&lt;/strong&gt; approach, where an application first connects to a config server and takes the discovery server address from a remote property source. There is also the &lt;strong&gt;Discovery First Bootstrap&lt;/strong&gt; approach, where the config server address is fetched from a discovery server.&lt;br&gt;
&lt;strong&gt;bootstrap.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  application:
    name: site-service
  cloud:
    config:
      uri: http://localhost:8888
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
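For comparison, here is a hedged sketch of the Discovery First Bootstrap variant: the application registers with Eureka first and locates the config server through discovery. The service-id must match the name under which the config server registers itself; config-service is an assumption here.

```yaml
spring:
  application:
    name: site-service
  cloud:
    config:
      discovery:
        enabled: true                 # look up the config server in Eureka
        service-id: config-service    # assumed registered name of the config server

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
```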



&lt;p&gt;There are not many configuration settings. Here's the application's configuration file (site-service.yml) stored on the config server. It contains only the HTTP port and the Eureka URL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;site-service.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server:
  port: 8090

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the implementation of the REST controller class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.List;

import com.cinema.site.model.Site;
import com.cinema.site.repository.SiteRepository;

import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import lombok.AllArgsConstructor;
import lombok.NonNull;

@AllArgsConstructor
@RestController
@RefreshScope
@RequestMapping("/api")
public class SiteController {

    private final SiteRepository siteRepository;

    @GetMapping("/sites/{userId}")
    public ResponseEntity&amp;lt;List&amp;lt;Site&amp;gt;&amp;gt; getSitesByUser(@NonNull @PathVariable Long userId) {
        return new ResponseEntity&amp;lt;&amp;gt;(siteRepository.findByUserId(userId), HttpStatus.OK);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  STEP 4: Communication Between Microservices with Spring Cloud OpenFeign
&lt;/h3&gt;

&lt;p&gt;Now, we will add another microservice (user-service) that communicates with site-service. The user service needs to get the list of sites for a given user ID. That's why we need to include an additional dependency in that module: spring-cloud-starter-openfeign. Spring Cloud OpenFeign is a declarative REST client that uses the Ribbon client-side load balancer to communicate with other microservices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-starter-openfeign&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The alternative to OpenFeign is Spring's RestTemplate annotated with @LoadBalanced. However, Feign provides a more elegant way of defining a client, so I prefer it over RestTemplate. After including the required dependency we should also enable Feign clients with the @EnableFeignClients annotation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class UserServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(UserServiceApplication.class, args);
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we need to define a client interface. Because user-service communicates with site-service, we should create an interface for it. Every client interface should be annotated with @FeignClient. One field inside the annotation is required – name. This name should match the name of the target service as registered in service discovery. Here's the interface of the client that calls the endpoint GET /api/sites/{userId} exposed by site-service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.List;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

@FeignClient(name = "site-service", fallbackFactory = SiteClientFallbackFactory.class)
public interface SiteClient {

    @GetMapping("/api/sites/{userId}")
    List findAllByUser(@PathVariable(value="userId") Long userId);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sometimes we want a fallback method to be executed if the Feign client is not able to reach the target service. SiteClientFallbackFactory helps in achieving that.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.ArrayList;
import java.util.List;

import org.springframework.stereotype.Component;

import feign.hystrix.FallbackFactory;
import lombok.extern.slf4j.Slf4j;

@Component
@Slf4j
public class SiteClientFallbackFactory implements FallbackFactory&amp;lt;SiteClient&amp;gt; {
    @Override
    public SiteClient create(Throwable cause) {
        return new SiteClient() {
            @Override
            public List findAllByUser(Long id) {
                log.error(cause.getMessage(), cause);
                return new ArrayList&amp;lt;&amp;gt;();
            }
        };
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
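The fallback idea itself is independent of Feign and Hystrix. As a minimal plain-Java sketch (SiteFinder and all names below are hypothetical, purely for illustration): a factory receives the failure cause and produces a safe substitute implementation, so callers always get a usable result.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class FallbackDemo {

    // Hypothetical client interface, standing in for the Feign client.
    interface SiteFinder {
        List<String> findAllByUser(Long userId);
    }

    // A "fallback factory": given the failure cause, build a substitute client
    // that degrades gracefully (here: an empty result).
    static final Function<Throwable, SiteFinder> FALLBACK =
            cause -> userId -> new ArrayList<>();

    // Try the real client; on any failure, delegate to the fallback instead.
    static List<String> findSites(SiteFinder real, Long userId) {
        try {
            return real.findAllByUser(userId);
        } catch (RuntimeException cause) {
            return FALLBACK.apply(cause).findAllByUser(userId);
        }
    }

    public static void main(String[] args) {
        SiteFinder failing = userId -> { throw new RuntimeException("site-service unreachable"); };
        SiteFinder working = userId -> List.of("site-1", "site-2");

        System.out.println(findSites(failing, 1L)); // falls back to an empty list
        System.out.println(findSites(working, 1L));
    }
}
```

The Spring version above does the same thing, except that Feign decides when the call has failed and the factory is discovered as a bean.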



&lt;p&gt;Finally, we have to inject the Feign client's bean into the REST controller through the service layer. Now we may call the methods defined in SiteClient, which is equivalent to calling the REST endpoints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.List;
import java.util.Optional;

import com.cinema.user.client.SiteClient;
import com.cinema.user.model.User;
import com.cinema.user.repository.UserRepository;

import org.springframework.stereotype.Service;

import lombok.AllArgsConstructor;

@AllArgsConstructor
@Service
public class UserServiceImpl implements UserService {

    private final UserRepository userRepository;
    private final SiteClient siteClient;

    @Override
    public List&amp;lt;User&amp;gt; findAll() {
        return userRepository.findAll();
    }

    @Override
    public Optional&amp;lt;User&amp;gt; findOne(final Long userId) {
        return userRepository.findById(userId);
    }

    @Override
    public List findAllSitesByUser(final Long userId) {
        return siteClient.findAllByUser(userId);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;And here is the REST controller of user-service, which delegates to the service layer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.List;
import java.util.Optional;

import com.cinema.user.model.User;
import com.cinema.user.service.UserService;

import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import lombok.AllArgsConstructor;
import lombok.NonNull;

@AllArgsConstructor
@RestController
@RefreshScope
@RequestMapping("/api")
public class UserController {

    private final UserService userService;

    @GetMapping("/users")
    public ResponseEntity&amp;lt;List&amp;lt;User&amp;gt;&amp;gt; getUsers() {
        return new ResponseEntity&amp;lt;&amp;gt;(userService.findAll(), HttpStatus.OK);
    }

    @GetMapping("/users/sites/{userId}")
    public ResponseEntity&amp;lt;List&amp;gt; getUserSites(@PathVariable("userId") Long id) {
        Optional&amp;lt;User&amp;gt; user = userService.findOne(id);
        if(user.isPresent())
            return new ResponseEntity&amp;lt;&amp;gt;(userService.findAllSitesByUser(id), HttpStatus.OK);
        else
            return new ResponseEntity&amp;lt;&amp;gt;(HttpStatus.BAD_REQUEST);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  STEP 5: Building an API Gateway Using Spring Cloud Netflix Zuul (edge-service)
&lt;/h3&gt;

&lt;p&gt;Spring Cloud Netflix Zuul is a Spring Cloud project providing an API gateway for microservices. The API gateway is implemented inside the edge-service module. First, we should include the spring-cloud-starter-netflix-zuul starter in the project dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-starter-netflix-zuul&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need the discovery client enabled, because edge-service integrates with Eureka in order to route requests to the downstream services. Its bootstrap.yml looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  application:
    name: edge-service
  cloud:
    config:
      uri: http://localhost:8888
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the application's configuration file (edge-service.yml) stored on the config server. It contains only the HTTP port and the Eureka URL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server:
  port: 8190

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;The Zuul proxy is enabled by annotating the main class with @EnableZuulProxy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
public class EdgeServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(EdgeServiceApplication.class, args);
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
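Out of the box, Zuul maps every service discovered in Eureka under its service id (e.g. /site-service/**). Routes can also be declared explicitly; as a sketch, something like the following could be added to edge-service.yml (the path prefixes are illustrative, not taken from the project):

```yaml
zuul:
  routes:
    site-service:
      path: /sites/**          # forwarded to site-service instances found in Eureka
      serviceId: site-service
    user-service:
      path: /users/**
      serviceId: user-service
```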



&lt;h3&gt;
  
  
  STEP 6: Correlating Logs Between Microservices Using Spring Cloud Sleuth and Zipkin
&lt;/h3&gt;

&lt;p&gt;Correlating logs between different microservices with Spring Cloud Sleuth is very easy. In fact, the only thing you have to do is add the spring-cloud-starter-sleuth starter to the dependencies of every single microservice and the gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-starter-sleuth&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to configure Zipkin, add the dependency below to every microservice's pom.xml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-starter-zipkin&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add the following to the yml file of each microservice in the config server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  zipkin:
    baseUrl: http://localhost:9411/
  sleuth:
    sampler:
      probability: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This assumes the Zipkin server is listening on localhost at port 9411. A sampler probability of 1 means every request is traced; in production you would typically sample a smaller fraction.&lt;/p&gt;

&lt;h3&gt;
  
  
  STEP 7: Configuring Microservices to Send Logs to Logstash
&lt;/h3&gt;

&lt;p&gt;Sending microservice logs to Logstash requires the following dependencies to be added to each and every microservice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;net.logstash.logback&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;logstash-logback-encoder&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;5.3&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;ch.qos.logback&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;logback-core&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;1.2.3&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to create a file called logback.xml in the resources folder of every microservice with the following contents (change the appName value to match each service's name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;configuration debug="false"&amp;gt;
    &amp;lt;include resource="org/springframework/boot/logging/logback/base.xml"/&amp;gt;
    &amp;lt;appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender"&amp;gt;
        &amp;lt;destination&amp;gt;localhost:5044&amp;lt;/destination&amp;gt;
        &amp;lt;encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder"&amp;gt;
            &amp;lt;providers&amp;gt;
                &amp;lt;mdc/&amp;gt;
                &amp;lt;context/&amp;gt;
                &amp;lt;version/&amp;gt;
                &amp;lt;logLevel/&amp;gt;
                &amp;lt;loggerName/&amp;gt;
                &amp;lt;message/&amp;gt;
                &amp;lt;pattern&amp;gt;
                    &amp;lt;pattern&amp;gt;
                        {
                            "appName": "site-service"
                        }
                    &amp;lt;/pattern&amp;gt;
                &amp;lt;/pattern&amp;gt;
                &amp;lt;threadName/&amp;gt;
                &amp;lt;stackTrace/&amp;gt;
            &amp;lt;/providers&amp;gt;
        &amp;lt;/encoder&amp;gt;
    &amp;lt;/appender&amp;gt;
    &amp;lt;root level="INFO"&amp;gt;
        &amp;lt;appender-ref ref="CONSOLE"/&amp;gt;
        &amp;lt;appender-ref ref="logstash"/&amp;gt;
    &amp;lt;/root&amp;gt;
    &amp;lt;logger name="org.springframework" level="INFO"/&amp;gt;
    &amp;lt;logger name="com.cinema" level="INFO"/&amp;gt;
&amp;lt;/configuration&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
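On the receiving side, a minimal Logstash pipeline sketch could accept these JSON events over TCP on port 5044 and forward them to Elasticsearch. The Elasticsearch host and the index pattern below are assumptions; adjust them to your setup.

```
input {
  tcp {
    port  => 5044
    codec => json_lines      # matches the JSON encoder used in logback.xml
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "micro-logs-%{+YYYY.MM.dd}"   # assumed index name, one index per day
  }
}
```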



&lt;p&gt;The steps outlined above, if followed diligently, will let you put distributed tracing in place in your microservices architecture, visualise your logs through Kibana, and search them using Elasticsearch.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>elasticsearch</category>
      <category>java</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
