
Bhanvendra Singh Gaur for AWS Community Builders

Posted on • Originally published at Medium

Setting Up Grafana, Prometheus, and ELK for Monitoring and Logging in AWS EKS

Introduction

Monitoring and logging are essential for keeping cloud-native applications healthy and performing well, particularly in Kubernetes environments such as AWS EKS. By combining tools like Grafana for visualization, Prometheus for metrics, and the ELK stack (Elasticsearch, Logstash, and Kibana) for logging, you can build a robust observability system. In this article, I'll walk you through setting up monitoring and logging for applications deployed on Amazon EKS.


The Significance of Observability in EKS Applications

As microservices and containerized applications in Kubernetes grow more complex, monitoring system- and application-level metrics and logging events becomes increasingly important. Metrics help identify patterns and anomalies, while logs provide the detailed information needed for debugging and root-cause analysis. With a strong observability stack, you gain insight into:

  • Resource utilization (CPU, memory & I/O)
  • Performance of deployed applications in EKS
  • EKS cluster health
  • Notifications & alerts

Before getting started, make sure you have the following in place:
  • A running EKS cluster
  • Helm and kubectl installed and configured on your local machine to interact with your EKS cluster

Step 1: Install Prometheus to collect metrics

Prometheus is a powerful open-source toolkit for monitoring and alerting. It scrapes metrics from configured endpoints, stores them in a time-series database, and lets you query them with PromQL.

1.1 Installing Prometheus

First, add the Prometheus Helm repository:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

Then, install Prometheus in the desired namespace in your Kubernetes cluster:
helm install prometheus prometheus-community/prometheus --namespace monitoring --create-namespace

1.2 Configuring Prometheus

Now we need to configure Prometheus. Here is a sample prometheus.yml configuration that scrapes metrics from your Kubernetes nodes, pods, and application:

global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'kubernetes-cadvisor'
    kubernetes_sd_configs:
      - role: node
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
  - job_name: 'application'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['<application-service>:<port>'] # the service and port where your application exposes metrics

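As written, the kubernetes-pods job discovers every pod in the cluster. In practice this job is usually narrowed with relabeling so Prometheus only scrapes pods that opt in through annotations; a sketch of the common convention:

```yaml
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Allow pods to override the metrics path via the prometheus.io/path annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
```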

1.3 Access & verify Prometheus UI

The following command lets you check whether the scrape targets are working properly via port-forwarding:
kubectl port-forward service/prometheus-server 9090:80 -n monitoring
Open http://localhost:9090 in your browser and explore the collected metrics.
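Once the UI is up, a few starter queries can confirm that data is arriving. The metric names below assume the chart's default setup (the bundled node-exporter, plus the cAdvisor metrics exposed by the kubelet):

```promql
# 1 for every healthy scrape target, 0 for failing ones
up

# Per-pod CPU usage over the last 5 minutes (cAdvisor)
sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)

# Available memory per node (node-exporter)
node_memory_MemAvailable_bytes
```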

Step 2: Setting Up Grafana for Visualizing Metrics

Grafana is a popular tool for visualizing time-series data. It integrates with Prometheus and offers dynamic dashboards for tracking metrics.
In the previous step we set up Prometheus to scrape metrics from the configured endpoints; now we'll connect it to Grafana for graphical visualization.

2.1 Installing Grafana

To install Grafana using Helm:
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana --namespace monitoring # change the namespace if desired

2.2 Accessing Grafana

Now that Grafana is installed, access its UI using port-forwarding:
kubectl port-forward service/grafana 3000:80 -n monitoring
You can access Grafana by visiting http://localhost:3000. Log in as admin; the Helm chart stores the generated admin password in a Kubernetes Secret (you can change the credentials after first login).
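A note on credentials: rather than a fixed admin/admin default, the grafana chart typically generates a random admin password and stores it in a Secret named after the release. Assuming the release is called grafana in the monitoring namespace, it can be retrieved like this:

```shell
# Read the generated admin password from the chart-created Secret
# (release name "grafana" and namespace "monitoring" are assumptions)
kubectl get secret grafana -n monitoring -o jsonpath="{.data.admin-password}" | base64 --decode; echo
```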

2.3 Configure Grafana to connect with Prometheus

Once Grafana is running, connect it to Prometheus:
  • Navigate to Configuration > Data Sources.
  • Click Add data source.
  • Select Prometheus and set the URL to http://prometheus-server.monitoring.svc.cluster.local:80.
  • Click Save & test, then create dashboards for visualizing EKS cluster and application metrics.
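As an alternative to clicking through the UI, the data source can be provisioned declaratively using Grafana's provisioning format (with the Helm chart, this content can be supplied through the chart's datasources value):

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-server.monitoring.svc.cluster.local:80
    isDefault: true
```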

Step 3: Configuring the ELK Stack for Logging

We'll set up Elasticsearch for log storage and indexing, Logstash for log aggregation and processing, and Kibana for log viewing.


3.1 Installing Elasticsearch

To install Elasticsearch via Helm:
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch --namespace logging --create-namespace

3.2 Installing Logstash

Logstash processes logs from different sources (such as Filebeat, Fluentd, or Fluent Bit) and forwards them to Elasticsearch. This is how to set up Logstash:
helm install logstash elastic/logstash --namespace logging
Next, create a ConfigMap that configures Logstash to ship logs to Elasticsearch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: logging
data:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch-master:9200"]
        index => "logs-%{+YYYY.MM.dd}"
      }
    }

Apply the ConfigMap:
kubectl apply -f logstash-config.yaml
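Note that the beats input above only listens on port 5044; a shipper still has to collect container logs from the nodes and send them there. A minimal Filebeat configuration sketch (the Logstash service name is an assumption based on the chart's naming):

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
output.logstash:
  hosts: ["logstash-logstash.logging.svc.cluster.local:5044"]
```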

3.3 Installing Kibana

To visualize logs in Elasticsearch, install Kibana using Helm:
helm install kibana elastic/kibana --namespace logging
Access Kibana using port-forwarding:
kubectl port-forward service/kibana-kibana 5601:5601 -n logging
You can now access Kibana at http://localhost:5601 and start exploring your logs.
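To confirm that logs are flowing end to end, you can also query Elasticsearch directly through a port-forward (elasticsearch-master is the elastic chart's default service name):

```shell
# Forward the Elasticsearch HTTP port locally
kubectl port-forward service/elasticsearch-master 9200:9200 -n logging &
sleep 2
# List the daily indices written by the Logstash output (pattern logs-YYYY.MM.dd)
curl -s "http://localhost:9200/_cat/indices/logs-*?v" || echo "Elasticsearch not reachable"
# Today's index name, per the logs-%{+YYYY.MM.dd} pattern in the Logstash config
echo "logs-$(date +%Y.%m.%d)"
```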

Step 4: Using Alertmanager and Prometheus to Set Up Alerts

You should be notified of critical issues in your application or cluster. To do this, we'll configure Prometheus alerting rules and set up Alertmanager to handle the notifications.

4.1 Define Alerting Rules

Create and deploy an alerting rules file (alert.rules.yml) for Prometheus:
groups:
- name: example
  rules:
  - alert: HighCpuUsage
    expr: sum(rate(container_cpu_usage_seconds_total[1m])) by (container_name) > 0.9
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "High CPU usage detected"

Then update the Prometheus configuration to load this rules file (with the Helm chart, alerting rules can be supplied through the chart's serverFiles values).
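It's worth validating the rules file before loading it. promtool ships with Prometheus releases; a sketch that writes the file above and checks it:

```shell
# Write the alerting rules shown above to alert.rules.yml
cat > alert.rules.yml <<'EOF'
groups:
- name: example
  rules:
  - alert: HighCpuUsage
    expr: sum(rate(container_cpu_usage_seconds_total[1m])) by (container_name) > 0.9
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "High CPU usage detected"
EOF
# Validate syntax and PromQL expressions before Prometheus loads the file
promtool check rules alert.rules.yml || echo "promtool not installed - skipping validation"
```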

4.2 Set Up Alertmanager

Install Alertmanager using Helm:
helm install alertmanager prometheus-community/alertmanager --namespace monitoring
Configure Alertmanager to send notifications (e.g., via email):

apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config
  namespace: monitoring
data:
  alertmanager.yml: |
    global:
      smtp_smarthost: 'smtp.example.com:587'
      smtp_from: 'alertmanager@example.com'
      smtp_auth_username: 'username'
      smtp_auth_password: 'password'
    route:
      receiver: 'email-config'
    receivers:
    - name: 'email-config'
      email_configs:
      - to: 'you@example.com'
Apply the ConfigMap so Alertmanager picks up the configuration:
kubectl apply -f alertmanager-config.yaml

Additional Considerations

  • Security: For Prometheus, Grafana, and ELK, use TLS encryption and Role-Based Access Control (RBAC).
  • Data Persistence: Set up Prometheus and Elasticsearch to use persistent storage.
  • Cost management: Because log data accumulates quickly, be aware of the costs, particularly when using Elasticsearch.

Conclusion

With this configuration, your AWS EKS applications now have a fully functional monitoring and logging system. Grafana gives you insightful visualizations of your metrics, while the ELK stack captures and analyzes your logs. Prometheus and Alertmanager keep you informed about critical events, helping ensure your applications run smoothly and efficiently.
By following these steps, you can make your Kubernetes applications highly observable, robust, and simple to debug. Feel free to adapt this configuration to your needs.
