Santhosh S

Deploying EFK stack on Kubernetes cluster using Helm Charts

Prerequisites

  • A running Kubernetes cluster (e.g., Minikube, EKS, GKE).
  • kubectl CLI installed and configured.
  • Helm installed on your local machine.
  • Sufficient cluster resources (memory and CPU) for Elasticsearch.
  • Your applications are deployed on Kubernetes using Helm charts. If not, I am confident you can figure that out, as we will be focusing on a Helm-based solution.

Step 1: Create a Storage Class for Elasticsearch

To set up the necessary storage class, you can create one using the following YAML configuration:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
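Assuming you save the manifest above as ebs-sc.yaml (the filename is arbitrary), apply it and confirm the StorageClass exists:

kubectl apply -f ebs-sc.yaml
kubectl get storageclass ebs-sc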

Step 2: Installing the AWS EBS CSI Driver using Helm Chart

To proceed with the installation, follow these steps:

helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
helm install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver
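You can then verify that the driver pods are running; the chart's pods should carry the label app.kubernetes.io/name=aws-ebs-csi-driver (note that on EKS the driver also needs IAM permissions to manage EBS volumes, for example via an IAM role for its service account):

kubectl get pods -l app.kubernetes.io/name=aws-ebs-csi-driver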

Step 3: Installing Elasticsearch with Helm Chart

Now is the time to install the Helm chart for Elasticsearch, but first, let’s create a namespace for our logging stack:

$ kubectl create namespace efk
  • We create a dedicated namespace called efk to keep our Elasticsearch and other logging components organized within the Kubernetes cluster.

Next, add the Elasticsearch repository to your Helm:

helm repo add elastic https://helm.elastic.co
  • This command adds the Elastic Helm chart repository, allowing you to access and install Elasticsearch with Helm.

Now, you can install Elasticsearch using the following Helm chart:

helm install elasticsearch \
 --set service.type=LoadBalancer \
 --set volumeClaimTemplate.storageClassName=ebs-sc \
 --set persistence.labels.enabled=true \
 elastic/elasticsearch -n efk
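Elasticsearch can take a few minutes to come up. You can watch the pods in the efk namespace until the elasticsearch-master pods report Ready:

kubectl get pods --namespace=efk -w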

After the installation, you can retrieve the username and password for Elasticsearch using the following commands:

kubectl get secrets --namespace=efk elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
kubectl get secrets --namespace=efk elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
  • These commands fetch the credentials required to access Elasticsearch securely. The username and password are stored in a Kubernetes secret called elasticsearch-master-credentials within the efk namespace. The base64 -d part is used to decode the base64-encoded values for human-readable access.
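For convenience in later steps, you can store the password in a shell variable (using the same secret shown above):

export ELASTIC_PASSWORD=$(kubectl get secrets --namespace=efk elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d)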

You can obtain the LoadBalancer IP for the Elasticsearch service and test it using the following command:

kubectl get svc -n efk

Look for the service with the name associated with Elasticsearch, and under the “EXTERNAL-IP” column, you should see the LoadBalancer IP. It might be pending at first but will eventually get an IP assigned.

https://<LoadBalancer-IP>:9200

When you access this URL, your browser may warn about the chart's self-signed certificate; after accepting it, you will be prompted for the username and password. Enter the credentials you obtained earlier, and you should be able to access Elasticsearch securely over HTTPS.
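You can also test from the command line, using the password exported earlier; -k skips verification of the self-signed certificate:

curl -k -u elastic:$ELASTIC_PASSWORD https://<LoadBalancer-IP>:9200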

Kibana Installation

Step 1: Installing Kibana with LoadBalancer Service Type

To set up Kibana and make it accessible through a LoadBalancer service type, use the following Helm command. The elastic repository added earlier also provides the Kibana chart:

helm install kibana --set service.type=LoadBalancer elastic/kibana -n efk
  • Watch the pods in the efk namespace until the Kibana pod is ready:
kubectl get pods --namespace=efk
  • Retrieve the Kibana service account token:
kubectl get secrets --namespace=efk kibana-kibana-es-token -ojsonpath='{.data.token}' | base64 -d

You can obtain the LoadBalancer IP for the Kibana service and open the Kibana UI using the following command:

kubectl get svc -n efk

After obtaining the LoadBalancer IP, you can open Kibana in your browser. Kibana typically runs on port 5601. Open your web browser and enter the following URL, replacing <LoadBalancer-IP> with the actual LoadBalancer IP:

http://<LoadBalancer-IP>:5601

Use the username and password you obtained for Elasticsearch above.
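If you want to confirm from the command line that Kibana is reachable before opening the browser, a plain HTTP check is enough; a redirect or 200 response indicates Kibana is up:

curl -s -I http://<LoadBalancer-IP>:5601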

After logging in, you should see the Kibana home dashboard.

Fluent Bit Installation

To set up Fluent Bit for log processing, follow these steps:

Add the Fluent Bit Helm repository to access the necessary charts:

helm repo add fluent https://fluent.github.io/helm-charts

Configure Fluent Bit:

Before installing the Fluent Bit Helm chart, you need to configure Fluent Bit so that it can reach the Elasticsearch instance in your EFK (Elasticsearch, Fluent Bit, Kibana) stack. To do this, obtain the chart's default values file and save it in YAML format:

helm show values fluent/fluent-bit > fluentbit-values.yaml

This command fetches the default configuration values for Fluent Bit and saves them in a file named fluentbit-values.yaml. You can modify this file to customize Fluent Bit's settings for your specific Elasticsearch setup and logging needs. After making your changes, you can install Fluent Bit using Helm with your customized configuration.

Update the following in the fluentbit-values.yaml file:

1. Set the Elasticsearch username and password (HTTP_User and HTTP_Passwd) in the es [OUTPUT] sections, for example:

    HTTP_User elastic
    HTTP_Passwd e7uafnP9WARJEaZX

The relevant part of the values file looks like this:
config:
  service: |
    [SERVICE]
        Daemon Off
        Flush {{ .Values.flush }}
        Log_Level {{ .Values.logLevel }}
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On


  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Index fluent-bit
        Type  _doc
        Host elasticsearch-master
        Port 9200
        HTTP_User elastic
        HTTP_Passwd e7uafnP9WARJEaZX
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix logstash
        Retry_Limit False
        Suppress_Type_Name On

    [OUTPUT]
        Name es
        Match host.*
        Index fluent-bit
        Type  _doc
        Host elasticsearch-master
        Port 9200
        HTTP_User elastic
        HTTP_Passwd e7uafnP9WARJEaZX
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix node
        Retry_Limit False
        Suppress_Type_Name On

......
......
......

We can use a Lua script with a simple Lua filter to add an index field to each log record, and then reference that field in the es output plugin through the Logstash_Prefix_Key option:

.........
.........
.........
luaScripts:
  setIndex.lua: |
    function set_index(tag, timestamp, record)
        index = "somePrefix-"
        if record["kubernetes"] ~= nil then
            if record["kubernetes"]["namespace_name"] ~= nil then
                if record["kubernetes"]["container_name"] ~= nil then
                    record["es_index"] = index
                        .. record["kubernetes"]["namespace_name"]
                        .. "-"
                        .. record["kubernetes"]["container_name"]
                    return 1, timestamp, record
                end
                record["es_index"] = index
                    .. record["kubernetes"]["namespace_name"]
                return 1, timestamp, record
            end
        end
        return 1, timestamp, record
    end
## https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/configuration-file
config:
  service: |
    [SERVICE]
        Daemon Off
        Flush {{ .Values.flush }}
        Log_Level {{ .Values.logLevel }}
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

    [FILTER]
        Name lua
        Match kube.*
        script /fluent-bit/scripts/setIndex.lua
        call set_index

  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Type  _doc
        Host elasticsearch-master
        Port 9200
        HTTP_User elastic
        HTTP_Passwd e7uafnP9WARJEaZX
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix logstash
        Retry_Limit False
        Suppress_Type_Name On

    [OUTPUT]
        Name es
        Match host.*
        Type  _doc
        Host elasticsearch-master
        Port 9200
        HTTP_User elastic
        HTTP_Passwd e7uafnP9WARJEaZX
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix node
        Retry_Limit False
        Suppress_Type_Name On

......
......
......
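The outputs above still use the static Logstash_Prefix, so to actually route container logs into the per-namespace/per-container indices computed by the Lua filter, you can point the kube.* es output at the es_index field via the Logstash_Prefix_Key option (Logstash_Prefix then only serves as a fallback when the field is missing). A minimal sketch of the modified output, keeping the same credentials as above:

    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master
        Port 9200
        HTTP_User elastic
        HTTP_Passwd e7uafnP9WARJEaZX
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix logstash
        Logstash_Prefix_Key es_index
        Retry_Limit False
        Suppress_Type_Name On

With this change, container-log indices are created with names like somePrefix-<namespace>-<container>-yyyy.MM.dd instead of logstash-yyyy.MM.dd.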

Install the Fluent Bit Helm chart using the custom values file:

helm install fluent-bit fluent/fluent-bit -f fluentbit-values.yaml -n efk

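Fluent Bit runs as a DaemonSet, so you should see one Fluent Bit pod per node; the chart's pods should carry the label app.kubernetes.io/name=fluent-bit:

kubectl get pods --namespace=efk -l app.kubernetes.io/name=fluent-bit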

Conclusion

By default, Fluent Bit will start gathering container logs from all the pods in the cluster and push them to the newly deployed Elasticsearch cluster.

It also tails the kubelet service logs from the systemd journal and pushes them to the same cluster.

For exploring the logs, first, verify whether the newly created indices are showing on Kibana.

Go to Kibana → Stack Management → Index Management, and under the Indices tab you should see the two newly created indices, named logstash-yyyy.MM.dd and node-yyyy.MM.dd.
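Alternatively, you can list the indices directly against Elasticsearch (replace <password> with the elastic password retrieved earlier):

curl -k -u elastic:<password> "https://<LoadBalancer-IP>:9200/_cat/indices/logstash-*,node-*?v"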

To check your application logs in Kibana, you need to create an index pattern (data view) for your app; this is a one-time activity.

Index patterns can be created by going to Kibana → Stack Management → Data Views → Create data view → specify your index pattern (for example, logstash-*), select a timestamp field, and save the data view.

Now you can check your logs by going to Discover, selecting your newly created data view from the dropdown, and searching. Your application logs will appear there.
