<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Artem</title>
    <description>The latest articles on DEV Community by Artem (@lbatters).</description>
    <link>https://dev.to/lbatters</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F594000%2F995f0a81-7e56-408e-8620-d32428dad97d.png</url>
      <title>DEV Community: Artem</title>
      <link>https://dev.to/lbatters</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lbatters"/>
    <language>en</language>
    <item>
      <title>SSL certificate for a Java application</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Wed, 28 Apr 2021 19:50:41 +0000</pubDate>
      <link>https://dev.to/lbatters/ssl-certificate-for-java-application-5a0o</link>
      <guid>https://dev.to/lbatters/ssl-certificate-for-java-application-5a0o</guid>
      <description>&lt;p&gt;A third-party service is available via https, so how can java app connect to that service?&lt;/p&gt;

&lt;h3&gt;
  
  
  Truststore and Keystore
&lt;/h3&gt;

&lt;p&gt;Java has two places to store certificates: the &lt;em&gt;truststore&lt;/em&gt; and the &lt;em&gt;keystore&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Truststore&lt;/em&gt; - holds the certificates (public keys) of parties you trust&lt;br&gt;
&lt;em&gt;Keystore&lt;/em&gt; - holds your own private keys and certificates&lt;/p&gt;

&lt;p&gt;For our task we need the &lt;em&gt;truststore&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Tools
&lt;/h3&gt;

&lt;p&gt;For working with SSL certificates, use tools such as &lt;a href="https://www.openssl.org/"&gt;openssl&lt;/a&gt; and &lt;em&gt;keytool&lt;/em&gt; from the JDK&lt;/p&gt;
&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;First of all, download the certificate from the third-party service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm -f thirdPartyCert.pem &amp;amp;&amp;amp; sudo echo -n | openssl s_client -showcerts -connect third-party-service:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' &amp;gt; thirdPartyCert.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
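&lt;p&gt;Before importing, it is worth verifying that you actually downloaded the certificate you expect. A minimal sketch, where a throwaway self-signed certificate stands in for the real &lt;code&gt;thirdPartyCert.pem&lt;/code&gt;:&lt;/p&gt;

```shell
# Stand-in self-signed certificate; with a real service, inspect the
# thirdPartyCert.pem downloaded above instead
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/thirdPartyCert.pem -days 1 -subj "/CN=third-party-service"

# Check the subject and validity dates before importing
openssl x509 -in /tmp/thirdPartyCert.pem -noout -subject -dates
```

&lt;p&gt;If the subject or the dates look wrong, fix the download before touching the truststore.&lt;/p&gt;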



&lt;p&gt;Copy the current &lt;em&gt;truststore&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp $JAVA_HOME/lib/security/cacerts currentCacerts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Import the new certificate into the &lt;em&gt;truststore&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;keytool -import -trustcacerts -keystore "currentCacerts" -alias third-party-service -file "thirdPartyCert.pem" -storepass changeit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the imported certificate&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;keytool -list -v -keystore currentCacerts -alias third-party-service -storepass changeit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pass this JVM option when launching your app so it picks up the new &lt;em&gt;truststore&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-Djavax.net.ssl.trustStore=mySuperCacerts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
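&lt;p&gt;Putting it together, a launch command might look like this (a sketch; &lt;code&gt;myApp.jar&lt;/code&gt; is a placeholder, and &lt;code&gt;changeit&lt;/code&gt; is the default password of the stock cacerts file):&lt;/p&gt;

```shell
# Point the JVM at the truststore prepared above; myApp.jar is hypothetical
java -Djavax.net.ssl.trustStore=currentCacerts \
     -Djavax.net.ssl.trustStorePassword=changeit \
     -jar myApp.jar
```

&lt;p&gt;Note that a relative path is resolved against the working directory, so an absolute path is safer in practice.&lt;/p&gt;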



&lt;p&gt;Perfect!&lt;br&gt;
&lt;a href="https://i.giphy.com/media/1yiPWNsQ1vq7V90fRY/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/1yiPWNsQ1vq7V90fRY/giphy.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>java</category>
      <category>openssl</category>
      <category>programming</category>
      <category>ssl</category>
    </item>
    <item>
      <title>How to back up PostgreSQL</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Sun, 25 Apr 2021 13:26:40 +0000</pubDate>
      <link>https://dev.to/lbatters/how-to-backup-postgresql-29me</link>
      <guid>https://dev.to/lbatters/how-to-backup-postgresql-29me</guid>
      <description>&lt;p&gt;If your Postgres in docker container use &lt;code&gt;pg_dumpall&lt;/code&gt; to backup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -t your-db-container pg_dumpall -c -U postgres &amp;gt; dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
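&lt;p&gt;The backtick expression in the command above just builds a timestamped file name; you can check what it expands to on its own:&lt;/p&gt;

```shell
# Build the timestamped dump file name used by the backup command
fname="dump_$(date +%d-%m-%Y_%H_%M_%S).sql"
echo "$fname"
```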



&lt;p&gt;and then restore it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat dump_24-04-2021_20_50_17.sql | docker exec -i some-postgres  psql -U postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to play with these commands, you can run a new Docker container with Postgres, create a table, and check how they work.&lt;/p&gt;

&lt;p&gt;Start a container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --name some-postgres -e POSTGRES_PASSWORD=pass -d postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go inside the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it some-postgres bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start psql console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql -U postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE first_table (column1 int);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then you can do anything you want :)&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>devops</category>
      <category>docker</category>
      <category>programming</category>
    </item>
    <item>
      <title>Rsyslog, Kafka and ELK</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Sun, 04 Apr 2021 16:45:39 +0000</pubDate>
      <link>https://dev.to/lbatters/rsyslog-kafka-and-elk-3kgk</link>
      <guid>https://dev.to/lbatters/rsyslog-kafka-and-elk-3kgk</guid>
      <description>&lt;h2&gt;
  
  
  Intro.
&lt;/h2&gt;

&lt;p&gt;The ELK stack is popular, but I have not found any suitable articles about connecting Rsyslog to Kafka and ELK. You can find pieces about Kafka, Logstash, or Rsyslog separately, but not all together.&lt;/p&gt;

&lt;p&gt;In this article, we will make a docker-compose file that will launch the entire system, and build an image that simulates an application with logs. We will also consider how you can check each system separately.&lt;/p&gt;

&lt;p&gt;I don't want to create a detailed description of each application. This is just a starting point for learning Rsyslog and ELK&lt;/p&gt;

&lt;p&gt;Github with full project: &lt;a href="https://github.com/ArtemMe/rsyslog_kafka_elk" rel="noopener noreferrer"&gt;https://github.com/ArtemMe/rsyslog_kafka_elk&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I split the project into releases (tags). Every release adds a new service, like Rsyslog (tag 0.1) or Kibana (tag 0.4). You can switch to the desired release and start the project to test that build&lt;/p&gt;

&lt;p&gt;Below, I describe each service. You can download the project, go to its root, and enter the command:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose up&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Rsyslog, Kafka, Logstash, Elasticsearch, and Kibana will come up, so you can open Kibana at &lt;code&gt;localhost:5601&lt;/code&gt; and check the launch.&lt;br&gt;
Each section also contains tips on how to check that service %)&lt;/p&gt;
&lt;h2&gt;
  
  
  Rsyslog. (tag 0.1)
&lt;/h2&gt;

&lt;p&gt;We will need two configuration files: one with the basic settings, &lt;code&gt;/etc/rsyslog.conf&lt;/code&gt;; the second, &lt;code&gt;/etc/rsyslog.d/kafka-sender.conf&lt;/code&gt;, with the settings for our needs&lt;/p&gt;

&lt;p&gt;We will not delve into the rsyslog settings because you could dig into them for a long time. Just remember the basic building blocks: module, template, action&lt;br&gt;
Let's take a look at an example of the file &lt;code&gt;/etc/rsyslog.d/kafka-sender.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# load module which use for sending message to kafka
module(load="omkafka") 

# Declare a log template named "json_lines":
template(name="json_lines" type="list" option.json="on") {  
        constant(value="{")
        constant(value="\"timestamp\":\"")      property(name="timereported" dateFormat="rfc3339")
        constant(value="\",\"message\":\"")     property(name="msg")
        constant(value="\",\"host\":\"")        property(name="hostname")
        constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
        constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
        constant(value="\",\"syslog-tag\":\"")  property(name="syslogtag")
        constant(value="\"}")
}

# Declare the action that sends messages to the Kafka broker's test_topic_1.
# Note how it uses the json_lines template and the omkafka module
action(
        broker=["host.docker.internal:9092"]
        type="omkafka"
        template="json_lines"
        topic="test_topic_1"
        action.resumeRetryCount="-1"
        action.reportsuspension="on"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember the topic name: test_topic_1&lt;/p&gt;

&lt;p&gt;You can find the full list of property names for templates there: &lt;a href="https://www.rsyslog.com/doc/master/configuration/properties.html" rel="noopener noreferrer"&gt;https://www.rsyslog.com/doc/master/configuration/properties.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also note that the main file &lt;code&gt;/etc/rsyslog.conf&lt;/code&gt; contains a line like &lt;code&gt;$IncludeConfig /etc/rsyslog.d/*.conf&lt;/code&gt;&lt;br&gt;
This directive tells rsyslog where else to read settings from. It is useful for separating common settings from specific ones&lt;/p&gt;
&lt;h3&gt;
  
  
  Create an image for generating logs
&lt;/h3&gt;

&lt;p&gt;The image will essentially just start rsyslog. In the future, we will be able to enter this container and generate logs.&lt;/p&gt;

&lt;p&gt;You can find the Dockerfile in the &lt;code&gt;/rsyslog&lt;/code&gt; folder. Let's look at the chunk of that file where the first two lines copy our configs and the third declares a volume for the generated logs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY rsyslog.conf /etc/
COPY rsyslog.d/*.conf /etc/rsyslog.d/

VOLUME ["/var/log"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To build the image, go to the &lt;code&gt;/rsyslog&lt;/code&gt; folder and execute&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build . -t rsyslog_kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Launch the container to check the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run rsyslog_kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check that rsyslog is writing logs, run &lt;code&gt;logger&lt;/code&gt; in a container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm --network=rsyslog_kafka_elk_elk rsyslog_kafka bash -c `logger -p daemon.debug "This is a test."`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look in the &lt;code&gt;/logs&lt;/code&gt; folder; you should find a line containing &lt;code&gt;This is a test.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! You have configured rsyslog in your docker container!&lt;/p&gt;

&lt;h2&gt;
  
  
  A bit about networking in docker containers.
&lt;/h2&gt;

&lt;p&gt;Let's create our network in the docker-compose.yml file. In the future, each service could even be launched on a different machine with no problem.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;networks:
  elk:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Kafka (tag 0.2)
&lt;/h2&gt;

&lt;p&gt;I took this repository as a basis: &lt;code&gt;https://github.com/wurstmeister/kafka-docker&lt;/code&gt;&lt;br&gt;
The resulting service is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zookeeper:
  image: wurstmeister/zookeeper:latest
  ports:
    - "2181:2181"
  container_name: zookeeper
  networks:
    - elk

kafka:
  image: wurstmeister/kafka:0.11.0.1
  ports:
    - "9092:9092"
  environment:
    # The below only works for a macOS environment if you installed Docker for
    # Mac. If your Docker engine is using another platform/OS, please refer to
    # the relevant documentation with regards to finding the Host IP address
    # for your platform.
    KAFKA_ADVERTISED_HOST_NAME: docker.for.mac.localhost
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_CREATE_TOPICS: "logstash_logs:1:1"
  links:
    - zookeeper
  depends_on:
    - zookeeper
  container_name: kafka
  networks:
    - elk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tips on how to check Kafka (you can do this after starting the containers):&lt;/p&gt;

&lt;p&gt;We will be able to see what is in the Kafka topic once our containers are running. First, you need to download Kafka. Here is a cool tutorial &lt;code&gt;https://kafka.apache.org/quickstart&lt;/code&gt;, but in short: download it from &lt;code&gt;https://www.apache.org/dyn/closer.cgi?path=/kafka/2.7.0/kafka_2.13-2.7.0.tgz&lt;/code&gt; and unpack it into the &lt;code&gt;/app&lt;/code&gt; folder.&lt;br&gt;
Actually, we only need the scripts in the &lt;code&gt;/bin&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;Now, we can connect to the container and execute a script to see if there are any entries inside the topic &lt;code&gt;test_topic_1&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm --network=rsyslog_kafka_elk_elk -v /app/kafka_2.13-2.7.0:/kafka wurstmeister/kafka:0.11.0.1 bash -c "/kafka/bin/kafka-console-consumer.sh --topic test_topic_1 --from-beginning --bootstrap-server 172.23.0.4:9092"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;About the command itself: we connect to the rsyslog_kafka_elk_elk network (rsyslog_kafka_elk is the name of the folder where the docker-compose.yml file is located, and elk is the network we specified). With the -v option, we mount the Kafka scripts into our container.&lt;/p&gt;

&lt;p&gt;The result of the command should be something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"timestamp":"2021-02-27T17:43:38.828970+00:00","message":" action 'action-1-omkafka' resumed (module 'omkafka') [v8.1901.0 try https://www.rsyslog.com/e/2359 ]","host":"c0dcee95ffd0","severity":"info","facility":"syslog","syslog-tag":"rsyslogd:"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
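&lt;p&gt;Each line the consumer prints is one JSON document produced by the &lt;code&gt;json_lines&lt;/code&gt; template. To pull a single field out of such a line, python3 works where jq is not installed (a sketch on a shortened sample line):&lt;/p&gt;

```shell
# One captured log line (shortened); extract the severity field
line='{"timestamp":"2021-02-27T17:43:38+00:00","message":"test","host":"c0dcee95ffd0","severity":"info","facility":"syslog","syslog-tag":"rsyslogd:"}'
echo "$line" | python3 -c 'import json,sys; print(json.load(sys.stdin)["severity"])'
# prints: info
```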



&lt;h3&gt;
  
  
  Logstash (tag 0.3)
&lt;/h3&gt;

&lt;p&gt;Configs are located in the &lt;code&gt;/logstash&lt;/code&gt; folder. &lt;code&gt;logstash.yml&lt;/code&gt; - here we specify parameters for connecting to Elasticsearch&lt;/p&gt;

&lt;p&gt;The pipeline config declares Kafka as an input and Elasticsearch as an output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input {
    beats {
        port =&amp;gt; 5044
    }

    tcp {
        port =&amp;gt; 5000
    }
    kafka
    {
        bootstrap_servers =&amp;gt; "kafka:9092"
        topics =&amp;gt; "test_topic_1"
    }
}

## Add your filters / logstash plugins configuration here

output {
    elasticsearch {
        hosts =&amp;gt; "elasticsearch:9200"
        user =&amp;gt; "elastic"
        password =&amp;gt; "changeme"
        ecs_compatibility =&amp;gt; disabled
    }

    file {

        path =&amp;gt; "/var/logstash/logs/test.log"
        codec =&amp;gt; line { format =&amp;gt; "custom format: %{message}"}
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To monitor what goes into Elasticsearch and check that Logstash is working properly, I added a file output, so logs are also written to the &lt;code&gt;test.log&lt;/code&gt; file. The main thing is not to forget to add the volume to docker-compose.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumes:
  - type: bind
    source: ./logstash/config/logstash.yml
    target: /usr/share/logstash/config/logstash.yml
    read_only: true
  - type: bind
    source: ./logstash/pipeline
    target: /usr/share/logstash/pipeline
    read_only: true
  - ./logs:/var/logstash/logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you start the service, check the &lt;code&gt;test.log&lt;/code&gt; file in your project. You should find logs from Kafka&lt;/p&gt;

&lt;h3&gt;
  
  
  Elasticsearch (tag 0.3)
&lt;/h3&gt;

&lt;p&gt;This is the simplest configuration. We will launch the trial version, but you can switch to the open-source one if you wish. Configs are, as usual, in &lt;code&gt;/elasticsearch/config/&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tips on how to check Elasticsearch (you can do this after starting the containers):&lt;/p&gt;

&lt;p&gt;Let's check Elasticsearch's indices using the handy &lt;code&gt;praqma/network-multitool&lt;/code&gt; image and curl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm --network=rsyslog_kafka_elk_elk praqma/network-multitool bash -c "curl elasticsearch:9200/_cat/indices?s=store.size:desc -u elastic:changeme"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result of the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The directory /usr/share/nginx/html is not mounted.
Over-writing the default index.html file with some useful information.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0green  open .monitoring-es-7-2021.02.28       QP1RL9ezRwmCFLe38dnlTg 1 0 1337 442   1.4mb   1.4mb
green  open .monitoring-es-7-2021.03.07       z0f-K-g7RhqDEbqnupfzPA 1 0  576 428   1.2mb   1.2mb
green  open .monitoring-logstash-7-2021.03.07 rKMYIZE9Q6mSR6_8SG5kUw 1 0  382   0 340.4kb 340.4kb
green  open .watches                          nthHo2KlRhe0HC-8MuT6rA 1 0    6  36 257.1kb 257.1kb
green  open .monitoring-logstash-7-2021.02.28 x98c3c14ToSqmBSOX8gmSg 1 0  363   0 230.1kb 230.1kb
green  open .monitoring-alerts-7              nbdSRkOSSGuLTGYv0z2L1Q 1 0    3   5  62.4kb  62.4kb
yellow open logstash-2021.03.07-000001        22YB7SzYR2a-BAgDEBY0bg 1 1   18   0  10.6kb  10.6kb
green  open .triggered_watches                sp7csXheQIiH7TGmY-EiIw 1 0    0  12   6.9kb   6.9kb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the indices are being created and our elastic is alive. Let's connect Kibana now&lt;/p&gt;

&lt;h3&gt;
  
  
  Kibana (tag 0.4)
&lt;/h3&gt;

&lt;p&gt;This is what the service looks like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kibana:
  build:
    context: kibana/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./kibana/config/kibana.yml
      target: /usr/share/kibana/config/kibana.yml
      read_only: true
  ports:
    - "5601:5601"
  networks:
    - elk
  depends_on:
    - elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;code&gt;/kibana&lt;/code&gt; folder we have a docker file to build an image and also settings for kibana:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true

## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: changeme
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To enter the Kibana UI, open &lt;code&gt;localhost:5601&lt;/code&gt; in your browser and log in (login/password: elastic/changeme)&lt;br&gt;
In the left menu, find Discover, click it, and create an index pattern. I suggest &lt;code&gt;logstash-*&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F12798761%2F113514744-191f5f80-9579-11eb-8fb1-3fc9d22236b2.png" class="article-body-image-wrapper"&gt;&lt;img alt="Create index pattern" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F12798761%2F113514744-191f5f80-9579-11eb-8fb1-3fc9d22236b2.png" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>elasticsearch</category>
      <category>rsyslog</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
