S Karthik

Setting Up Elasticsearch and Kibana Single-Node with Docker Compose

Introduction

Setting up Elasticsearch and Kibana on a single-node cluster can be a straightforward process with Docker Compose. In this guide, we’ll walk through the steps to get your Elasticsearch and Kibana instances up and running smoothly.

Hardware Prerequisites

According to the Elastic Cloud Enterprise documentation, these are the hardware recommendations for running Elasticsearch and Kibana:

  • CPU: A minimum of 2 CPU cores is recommended, but the actual requirement depends on your workload. More CPU cores may be required for intensive tasks or larger datasets.

  • RAM: Elastic recommends a minimum of 8GB of RAM for Elasticsearch, but 16GB or more is recommended for production use, especially when running both Elasticsearch and Kibana on the same machine.

  • Storage: SSD storage is recommended for better performance, especially for production use. The amount of storage required depends on your data volume and retention policies.

For more detailed hardware requirements and recommendations, refer to the Elastic Cloud Enterprise documentation.
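
If you want to confirm what the host actually provides before continuing, a few standard Linux commands are enough (shown here for Ubuntu; flags and output formats can vary slightly between distributions):

nproc     # number of CPU cores
free -h   # total and available RAM
df -h /   # free disk space on the root filesystem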

Software Prerequisites

Before getting started, make sure you have Docker installed on your system. You can download and install Docker from the official website.
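
To confirm that Docker and the Compose plugin are installed and on your PATH, you can run the following (this assumes Compose v2, which ships as the docker compose plugin rather than the older standalone docker-compose binary):

docker --version
docker compose version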

Setup Instructions

In this guide, I will perform these operations on a machine with the following specifications:

  • OS: Ubuntu 22.04

  • RAM: 8GB

  • Storage: 30GB SSD

1. Adjust Kernel Settings

The vm.max_map_count kernel setting must be set to at least 262144. How you set it depends on your platform; see the Elasticsearch Docker documentation for more information. Since I'm using Linux, I will set vm.max_map_count as follows.
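
Before editing anything, you can check the current value as a quick sanity check (on most distributions the default is 65530):

sysctl vm.max_map_count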

Open the /etc/sysctl.conf file in a text editor with root privileges. You can use the following command:

sudo nano /etc/sysctl.conf

Navigate to the end of the file or search for the line containing vm.max_map_count. If the line exists, modify it to set the desired value:

vm.max_map_count=262144

If the line doesn’t exist, add it at the end of the file:

# Set vm.max_map_count to increase memory map areas
vm.max_map_count=262144

Save the file and exit the text editor. Apply the changes by running the following command:

sudo sysctl -p

This command reloads the sysctl settings from the configuration file. Now, the value of vm.max_map_count should be updated to 262144.
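
You can confirm the new value with sysctl vm.max_map_count. Alternatively, if you only need the setting until the next reboot, for example on a throwaway test machine, you can apply it directly without editing /etc/sysctl.conf:

sudo sysctl -w vm.max_map_count=262144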

2. Prepare Environment Variables

Create or navigate to an empty directory for the project. Inside this directory, create a .env file and set up the necessary environment variables.

Copy the following content and paste it into the .env file.

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=

# Version of Elastic products
STACK_VERSION={version}

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200

# Port to expose Kibana to the host
KIBANA_PORT=5601

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=2147483648

In the .env file, specify a password for the ELASTIC_PASSWORD and KIBANA_PASSWORD variables.

The passwords must be alphanumeric and can’t contain special characters, such as ! or @. The bash script included in the compose.yml file only works with alphanumeric characters. Example:

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=Secure123

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=Secure123

...
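
If you would rather not invent passwords by hand, one option is to generate them; the hex output below contains only the characters 0-9 and a-f, so it satisfies the alphanumeric constraint (openssl is preinstalled on Ubuntu):

openssl rand -hex 16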

In the .env file, set STACK_VERSION to the Elastic Stack version. Example:

...

# Version of Elastic products
STACK_VERSION=8.13.2

...

3. Create Docker Compose Configuration

Now, create a compose.yml file in the same directory and paste the following content into it.

version: "2.2"

services:
  setup:
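    # One-shot helper container: generates the CA and node certificates, fixes file permissions, waits for Elasticsearch, then sets the kibana_system password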
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
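    # Single-node Elasticsearch instance with security and TLS enabled on both the HTTP and transport layers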
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
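    # Kibana, started only after Elasticsearch reports healthy; connects to es01 over HTTPS using the generated CA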
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - SERVER_PUBLICBASEURL=http://localhost:5601
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  esdata:
    driver: local
  kibanadata:
    driver: local
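
Before starting anything, it is worth letting Compose render the fully resolved configuration; this catches YAML mistakes and missing .env values early. Run it from the directory that contains compose.yml and .env:

docker compose config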

4. Start Docker Compose

Now you can start Elasticsearch and Kibana using Docker Compose. Run the following command from your project directory:

docker compose up -d
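
The first start can take several minutes while images are pulled, certificates are generated, and the health checks pass. You can follow the progress with the usual Compose commands (the service names match the compose.yml above):

docker compose ps
docker compose logs -f setup
docker compose logs -f es01 kibana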

5. Access Elasticsearch and Kibana

Once Docker Compose has started the services, you can access Elasticsearch at https://<localhost or server IP>:9200 and Kibana at http://<localhost or server IP>:5601 in your web browser. Because the Elasticsearch certificate is signed by a locally generated CA that your browser does not trust, expect a certificate warning when you open the HTTPS URL.

Log in to Elasticsearch or Kibana as the elastic user, using the password you set for ELASTIC_PASSWORD in the .env file earlier.
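
If you prefer to check Elasticsearch from the command line instead of the browser, one approach (a sketch, assuming Compose v2 and that you run the commands from the project directory so Compose can resolve the es01 service) is to copy the generated CA certificate out of the container and pass it to curl, which will then prompt for the elastic user's password:

docker compose cp es01:/usr/share/elasticsearch/config/certs/ca/ca.crt ./ca.crt
curl --cacert ca.crt -u elastic https://localhost:9200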

Conclusion

You’ve successfully set up Elasticsearch and Kibana on a single node using Docker Compose.

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
