Preface
Elasticsearch is developer-friendly and needs minimal configuration and manual management. But sometimes we still want to know more about our cluster.
You can fetch this information from the Elasticsearch APIs, so if you're an Elasticsearch API ninja you can skip this article.
Here are some tools I prefer that help me quickly get an overview of the cluster.
Prerequisite
Usually our cluster sits in a private VPC, so we have to port-forward it to our local machine.
What I usually do is forward the Kubernetes API to my local machine over SSH, and use kubectl port-forward
to handle the rest.
# I modified the kubeconfig to use port 16443 for remote environments
$ ssh -L 16443:127.0.0.1:6443 ssh-jumper
# port-forward elasticsearch to localhost
$ kubectl -n logging port-forward elasticsearch-data-0 9200:9200
# check if it's ready
$ curl localhost:9200
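With the tunnel in place, you can also pull a quick overview straight from the standard cat APIs mentioned in the preface, no extra tooling needed:
# cluster health and per-node load/heap
$ curl 'localhost:9200/_cat/health?v'
$ curl 'localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,cpu,load_1m'
# per-index document counts and store sizes, largest first
$ curl 'localhost:9200/_cat/indices?v&s=store.size:desc'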
ElasticHQ
ElasticHQ is an open source application that offers a simplified interface for managing and monitoring Elasticsearch clusters.
$ git clone https://github.com/ElasticHQ/elasticsearch-HQ.git
$ cd elasticsearch-HQ
# Python 3 is required.
$ sudo pip3 install -r requirements.txt
$ python3 application.py
# Access HQ with: http://localhost:5000
# If you're using docker version, access ES via host.docker.internal:9200
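Since the note above mentions the Docker version, here is a minimal sketch of running HQ that way (assuming the image is published on Docker Hub as elastichq/elasticsearch-hq; check the project README for the current name and tag):
# run ElasticHQ in Docker instead of installing the Python dependencies
$ docker run -p 5000:5000 elastichq/elasticsearch-hq
# then connect to the cluster at http://host.docker.internal:9200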
On this page we can check the cluster load, free space, and heap usage, as well as the size of each index, its document count, and the total storage size.
Another goodie is the Diagnostics
tab. HQ highlights metrics that carry potential risk and gives you some advice. This can be a reference when you're troubleshooting or doing performance tuning on your Elasticsearch cluster.
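If you want to cross-check what the Diagnostics tab flags against the raw numbers, the usual node stats and allocation APIs return the same data:
# JVM heap usage per node
$ curl 'localhost:9200/_nodes/stats/jvm?pretty'
# disk usage and shard counts per node
$ curl 'localhost:9200/_cat/allocation?v'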
Cerebro
https://github.com/lmenezes/cerebro
# Java 1.8 or newer is required. brew cask install java
# Download the latest tarball from https://github.com/lmenezes/cerebro/releases/latest
$ wget https://github.com/lmenezes/cerebro/releases/download/v0.8.1/cerebro-0.8.1.tgz
$ tar zxvf cerebro-0.8.1.tgz
$ cd cerebro-0.8.1
$ ./bin/cerebro
# Open cerebro with http://localhost:9000
# If you're using docker version, access ES via host.docker.internal:9200
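Likewise, if you'd rather use the Docker version mentioned above, a minimal sketch (assuming the image is published as lmenezes/cerebro; verify the tag on Docker Hub):
# run Cerebro in Docker instead of downloading the tarball
$ docker run -p 9000:9000 lmenezes/cerebro
# then connect to the cluster at http://host.docker.internal:9200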
I usually use Cerebro to observe the index shard allocation. It's clear and intuitive.
And the cluster settings, aliases, and index templates under the more menu come in handy when you need them.
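Everything Cerebro shows here is also reachable through the standard APIs, which is handy when you want to script a check rather than click through the UI:
# shard allocation per index
$ curl 'localhost:9200/_cat/shards?v'
# persistent and transient cluster settings
$ curl 'localhost:9200/_cluster/settings?pretty'
# aliases and (legacy) index templates
$ curl 'localhost:9200/_cat/aliases?v'
$ curl 'localhost:9200/_cat/templates?v'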
Top comments (3)
Elasticsearch Comrade - github.com/moshe/elasticsearch-com... is a new open-source admin panel and management GUI for Elasticsearch (I'm the maintainer of it).
moshe / elasticsearch-comrade
Elasticsearch admin panel built for ops and monitoring
Elasticsearch Comrade is an open-source Elasticsearch admin and monitoring panel highly inspired by Cerebro. It is built with Python 3, VueJS, Sanic, Vuetify 2, and Cypress.
Quickstart
Comrade discovers clusters using the --clusters-dir param.
# Using Docker (recommended)
$ docker run -v $PWD/clusters/:/app/comrade/clusters/ -it -p 8000:8000 mosheza/elasticsearch-comrade
# Using the Python package
$ pip install elasticsearch-comrade
$ comrade --clusters-dir clusters
Great! Didn't know ElasticHQ was back from the dead after the great "site plugins are removed" Elasticsearch move! Thanks!
Thanks for this! Found it helpful today.