Peter Shekindo for ClickPesa

How to implement logging in your REST service by using Elasticsearch - PART 2

Note:
This article is the second part of the How to implement logging in your REST service by using Elasticsearch series. Follow the link below for part 1 of this series; if you have already gone through it, you are good to proceed with part 2.

How to implement logging in your REST service by using Elasticsearch - PART 1

In part 1 of this series, I introduced the ELK stack and explained how it can be used to implement logging in your REST service. You can find that first part on medium, dev, and hashnode. In this part, we will discuss how to install, configure, and use the ELK stack.

For logging purposes, Elasticsearch comes with two other tools, Kibana and Logstash. Together they form the ELK stack introduced in part one of this article.
To avoid confusion and reading fatigue, I will divide this second part into two sections as well:

Part 2.A: Install and configure Elasticsearch

Part 2.B: Install and configure Kibana and Logstash as well as how to use the ELK stack

Logging Process with ELK stack

Part 2.A: Install and configure Elasticsearch

Initially, we need to download Elasticsearch, Kibana, and Logstash. All three tools are available from the Elastic website. The examples in this article install the 7.x branch in a Linux environment, but the exact version may vary for different environments and over time. We will be using APT (Advanced Package Tool), a Linux package manager, to download and install all of the tools.

Note:
In this article, we mainly focus on accessing logs written to an external file or files. This means you need to implement file-based logging in the REST architecture of your choice, so that all logs can be read from a specific file or files.
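For illustration only, a file-based logger in your service might produce entries like the ones below. The path /var/log/my-rest-service/app.log and the log format here are placeholders; your actual path and format will depend on the logging library you use:

$ tail -n 3 /var/log/my-rest-service/app.log
2022-07-12T09:15:03.412Z INFO  GET /api/v1/orders 200 35ms
2022-07-12T09:15:04.101Z WARN  GET /api/v1/orders/999 404 12ms
2022-07-12T09:15:05.870Z ERROR POST /api/v1/payments 500 148ms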

Step 1 - Installing Elasticsearch

To begin, we need to configure the Ubuntu package repository by adding Elastic’s package source list in order to download and install Elasticsearch. This is not configured by default, so we need to do it manually.

Note:
All of the packages are signed with the Elasticsearch public GPG key in order to protect your system from package spoofing; APT treats packages authenticated with this key as trusted.

a. Open the terminal and use the cURL command-line tool, which transfers data with URLs, to import the Elasticsearch public GPG key into APT. The -fsSL arguments tell cURL to fail silently on a server error, to hide the progress meter while still reporting other errors, and to follow the request to a new location if redirected:

$ curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
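If you want to confirm the key was imported, you can list the keys APT currently trusts and look for the Elastic entry (the exact key name shown may differ between releases):

$ sudo apt-key list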

b. Add the Elastic source list to the sources.list.d directory, where APT will look for new sources:

$ echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
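You can verify that the source list was written correctly by printing the file back out; it should contain the exact line you just appended:

$ cat /etc/apt/sources.list.d/elastic-7.x.list
deb https://artifacts.elastic.co/packages/7.x/apt stable main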

c. Update your package lists so APT will read the new Elastic source:

$ sudo apt update

d. Use this command to install Elasticsearch:

$ sudo apt install elasticsearch

If you have reached this far without any error, that means Elasticsearch is now installed and ready to be configured. 🎉
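Optionally, you can ask APT which version was installed and where the package came from; the version number in the output will reflect whatever 7.x release was current at install time:

$ apt-cache policy elasticsearch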

Step 2 - Configuring Elasticsearch

All Elasticsearch configuration goes into the elasticsearch.yml file.

a. Use this command to access the elasticsearch.yml file:

$ sudo nano /etc/elasticsearch/elasticsearch.yml

There is a lot you can configure in Elasticsearch, such as the cluster, node, path, memory, network, discovery, and gateway settings. Most of these are already preconfigured in the file, but you can change them as you see fit.
For the sake of this tutorial, we will only change the network host configuration to allow single-server access.
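For reference, here is roughly what a few of these settings look like inside elasticsearch.yml. The values below are the example values and Debian-package defaults shipped in the file; treat them as a sketch rather than a recommended configuration:

cluster.name: my-application
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch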

Elasticsearch listens for traffic from everywhere on port 9200. For this reason, you may want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its REST API.

In order to accomplish this, find the line that specifies network.host, uncomment it, and replace its value with localhost (or with the specific IP address you want Elasticsearch to bind to), like this:

. . .
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
. . .


b. If you accessed the configuration file with nano, save and close the file with CTRL+X, followed by Y and then ENTER.
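Before moving on, you can quickly double-check that the setting was saved by grepping for it; the line should come back uncommented, with whatever value you chose above:

$ sudo grep "network.host" /etc/elasticsearch/elasticsearch.yml
network.host: localhost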

Step 3 - Starting Elasticsearch

We use the systemctl command to start the Elasticsearch service; this allows Elasticsearch to initialize properly, otherwise it may run into errors and fail to start.

a. Open the terminal and run this command:

$ sudo systemctl start elasticsearch
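To confirm the service actually came up, check its status; you should see active (running) in the output:

$ sudo systemctl status elasticsearch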

b. You can also enable Elasticsearch to run automatically on every system boot:

$ sudo systemctl enable elasticsearch
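You can confirm that the boot-time setting took effect with:

$ systemctl is-enabled elasticsearch
enabled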

c. Run the following command to test your Elasticsearch installation. Note that in my case Elasticsearch is running on localhost:9200; adjust the IP:port to wherever your Elasticsearch instance is listening.

$ curl -X GET "localhost:9200"

If everything went well, you will see a response showing some basic information about your local node, similar to this:

Output
{
  "name" : "Elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "qqhFHPigQ9e2lk-a7AvLNQ",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
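As an additional sanity check, you can query the cluster health endpoint. A status of green or yellow both mean the node is reachable and working; yellow is common on single-node setups once indices with replicas exist, because the replica shards have nowhere to be allocated:

$ curl -X GET "localhost:9200/_cluster/health?pretty"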

Now that Elasticsearch is up and running, in the next section, Part 2.B of this series, we will install Kibana and Logstash and test our logging configuration.

Just in case you don't know, there are many more articles like this at ClickPesa on hashnode, ClickPesa on dev.to, and ClickPesa on medium. You will thank me later.
