Arbythecoder
Day 36: Monitoring Kubernetes with Prometheus and Grafana

In modern Kubernetes environments, effective monitoring is essential for maintaining cluster performance and system reliability. Monitoring tools like Prometheus and Grafana enable teams to collect, analyze, and visualize metrics, helping to prevent downtime and optimize resource usage.

This article introduces Prometheus and Grafana, explains their significance, and provides a step-by-step guide to setting them up in a Kubernetes cluster.


Why Monitor Kubernetes?

Kubernetes manages complex workloads across distributed environments, but without visibility into cluster health, troubleshooting becomes a challenge. Monitoring provides insights into:

  • Cluster Performance: CPU, memory, and storage usage.
  • Pod Health: Ensure workloads are running smoothly.
  • Alerting: Notify teams of issues before they escalate.

Prometheus serves as the engine for collecting and querying time-series metrics, while Grafana turns these metrics into actionable visualizations through intuitive dashboards.


Step-by-Step Guide: Setting Up Prometheus and Grafana

1. Create a Monitoring Namespace

Organize your monitoring tools by isolating them in a dedicated namespace.

```shell
kubectl create namespace monitoring
```

2. Deploy Prometheus

The prometheus-community Helm charts make deploying Prometheus in Kubernetes straightforward. We'll use the `prometheus` chart here; if you want the full Prometheus Operator stack (Prometheus, Alertmanager, and pre-wired exporters), the `kube-prometheus-stack` chart from the same repository provides it.

  1. Add the Helm repository for Prometheus:

   ```shell
   helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
   helm repo update
   ```

  2. Install Prometheus:

   ```shell
   helm install prometheus prometheus-community/prometheus \
     --namespace monitoring \
     --create-namespace \
     --set server.service.type=LoadBalancer
   ```

   (In this chart the Prometheus server's service type is set via `server.service.type`.)

Tip: If your environment doesn't support LoadBalancer services (for example, a local cluster), use NodePort instead, or omit the service-type override entirely to keep Prometheus accessible only within the cluster.
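After the install completes, it's worth confirming that everything came up before moving on. A quick sanity check (pod and service names may vary slightly with chart version and release name):

```shell
# List the Prometheus components; all pods should reach Running/Ready.
kubectl get pods -n monitoring

# Note the Prometheus server's Service name, type, and port -- you'll need
# the in-cluster DNS name later when wiring up Grafana.
kubectl get svc -n monitoring
```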


3. Deploy Grafana

Grafana works seamlessly with Prometheus to visualize collected metrics.

  1. Add the Grafana Helm repository:

   ```shell
   helm repo add grafana https://grafana.github.io/helm-charts
   helm repo update
   ```

  2. Install Grafana:

   ```shell
   helm install grafana grafana/grafana \
     --namespace monitoring \
     --set service.type=LoadBalancer \
     --set adminPassword=your_strong_password
   ```

Replace `your_strong_password` with a secure password for the Grafana admin user.
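If you'd rather not put the password on the command line, omit the `adminPassword` flag; the chart then generates a random password and stores it in a Kubernetes Secret. One way to retrieve it (assuming the release name `grafana` used above):

```shell
# Decode the generated admin password from the chart-managed Secret.
kubectl get secret grafana -n monitoring \
  -o jsonpath="{.data.admin-password}" | base64 --decode; echo
```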


4. Access Grafana

Once deployed, access Grafana through your browser:

  • External IP Access: Get the external IP for Grafana using:

  ```shell
  kubectl get svc grafana -n monitoring
  ```

Visit `http://<external-ip>` (the Grafana chart's service listens on port 80 by default) and log in with the admin username (`admin`) and the password you set.

  • Port Forwarding (For Testing):

  ```shell
  kubectl port-forward svc/grafana -n monitoring 3000:80
  ```

Open http://localhost:3000 in your browser.


5. Configure Prometheus as a Data Source

  1. Log in to Grafana and go to Configuration → Data Sources.
  2. Add a new data source of type Prometheus.
  3. Set the URL to Prometheus's service endpoint, e.g.:

   ```
   http://prometheus-server.monitoring.svc.cluster.local
   ```

  4. Save the configuration.
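Before relying on the data source, you can sanity-check that Prometheus answers queries. A minimal check via port-forwarding and the HTTP API (the service name `prometheus-server` assumes the release name used above; the chart's service listens on port 80):

```shell
# Forward a local port to the Prometheus server service.
kubectl port-forward svc/prometheus-server -n monitoring 9090:80 &

# Run an instant query for the `up` metric; a response containing
# "status":"success" means Prometheus is reachable and scraping targets.
curl -s 'http://localhost:9090/api/v1/query?query=up'
```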

6. Import Kubernetes Dashboards

Grafana provides pre-built Kubernetes dashboards for quick insights:

  1. Navigate to Dashboards → Import.
  2. Use dashboard IDs from the Grafana Dashboard Library (for example, ID 315, "Kubernetes cluster monitoring (via Prometheus)", is a popular starting point).
  3. Customize them as needed for your cluster.

Background: Horizontal Pod Autoscalers (HPAs)

Monitoring isn’t just about dashboards—it’s about action. Kubernetes’ Horizontal Pod Autoscaler (HPA) ensures resource efficiency by dynamically scaling pods based on metrics like CPU or memory usage.

Here’s a quick example:

  1. Deploy an Nginx app:

   ```shell
   kubectl create deployment nginx --image=nginx
   kubectl expose deployment nginx --port=80 --target-port=80
   ```

  2. Enable autoscaling:

   ```shell
   kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=10
   ```

   Note: CPU-based autoscaling requires the metrics-server add-on, and the deployment's containers need CPU resource requests; otherwise the HPA reports its targets as `<unknown>`.

  3. Test the autoscaler: simulate load using tools like `kubectl run` or Apache Bench, and monitor scaling behavior with:

   ```shell
   kubectl get hpa
   ```
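One common way to drive load, sketched here with illustrative names (the `nginx` service is the one exposed above; metrics-server must be running for the HPA to react):

```shell
# Run a throwaway pod that requests the nginx service in a tight loop.
kubectl run load-generator --rm -it --image=busybox:1.36 --restart=Never \
  -- /bin/sh -c "while true; do wget -q -O- http://nginx; done"

# In a second terminal, watch replica counts change as CPU climbs.
kubectl get hpa nginx --watch
```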
