Integrating Prometheus & Grafana to monitor my self-hosted Kubernetes cluster, with public HTTPS access, a customized default dashboard, and baseline security, all following an MVP GitOps approach.
In this sixth step, I’m integrating a monitoring system to supervise my Kubernetes cluster, keeping things minimalist and self-hosted while avoiding unnecessary complexity.
Objective: simple and effective visibility
I want to view the state of my cluster at a glance: CPU, memory, pods, network. No over-engineering, no email alerts, just a clear dashboard.
I’m using the Helm chart kube-prometheus-stack, widely used in production, even though I’m underutilizing it here for educational purposes.
Installation with Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
--values prometheus-stack-values.yaml
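Once the release is deployed, a quick sanity check (a sketch; the exact resource names depend on the monitoring release name):
kubectl get pods -n monitoring
kubectl get svc -n monitoring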
I disabled alertmanager in the values.yaml file, since I don’t want to manage alerts for this MVP:
alertmanager:
  enabled: false
Why not use the MicroK8s Prometheus module?
MicroK8s offers a prometheus module that can be enabled in one line:
microk8s enable prometheus
But this module is a black box and hard to integrate into a GitOps workflow:
- It’s not versioned in your Git repo
- It offers almost no control over configuration or versions
- It doesn’t separate Grafana and Prometheus cleanly
By choosing the Helm chart kube-prometheus-stack, I keep full control over the configuration via my values.yaml, can version my infrastructure, and make my setup portable to any cloud or local Kubernetes cluster.
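For illustration, a possible layout for this part of the infra repo (the paths are hypothetical; only prometheus-stack-values.yaml comes from the commands above, and the dashboard ConfigMap is covered further down):
infra/
└── monitoring/
    ├── prometheus-stack-values.yaml
    └── grafana-dashboard-home/
        └── configmap.yaml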
Grafana access
I created an Ingress at grafana.woulf.fr with an HTTPS certificate managed by cert-manager.
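The Ingress itself is declared through the Helm values; a minimal sketch, where the ingress class, ClusterIssuer, and TLS secret names are assumptions (only the hostname is real):
grafana:
  ingress:
    enabled: true
    ingressClassName: public                             # assumption: MicroK8s ingress class
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod   # assumption: issuer name
    hosts:
      - grafana.woulf.fr
    tls:
      - secretName: grafana-woulf-fr-tls                 # assumption: certificate Secret name
        hosts:
          - grafana.woulf.fr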
Admin access is protected by a Kubernetes Secret (not committed), defined like this:
grafana:
  admin:
    existingSecret: monitoring-grafana
Secret creation
kubectl -n monitoring create secret generic monitoring-grafana \
  --from-literal=admin-user=admin \
  --from-literal=admin-password=********
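By default, the Grafana chart reads the admin-user and admin-password keys from that Secret. A quick check that the Secret exists before deploying:
kubectl -n monitoring get secret monitoring-grafana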
To enable easy public access, I allowed anonymous access with the Viewer role (read-only):
grafana:
  grafana.ini:
    auth.anonymous:
      enabled: true
      org_name: Main Org.
      org_role: Viewer
      hide_version: true
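For reference, these values render to roughly the following section in grafana.ini:
[auth.anonymous]
enabled = true
org_name = Main Org.
org_role = Viewer
hide_version = true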
Default dashboard
I chose the built-in Kubernetes / Compute Resources / Cluster dashboard, then exported and versioned it into a ConfigMap, mounted as the home dashboard:
grafana:
  grafana.ini:
    dashboards:
      default_home_dashboard_path: /var/lib/grafana/dashboards/grafana-dashboard-home/default.json
  dashboardsConfigMaps:
    grafana-dashboard-home: grafana-dashboard-home
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
      searchNamespace: ALL
The related ConfigMap is versioned in my infra repo and labeled with grafana_dashboard: "1" so that it is auto-loaded by Grafana’s sidecar.
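A trimmed sketch of that ConfigMap (the exported dashboard JSON is elided):
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-home
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  default.json: |
    { ...exported "Kubernetes / Compute Resources / Cluster" dashboard JSON... }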
📁 A ConfigMap is a Kubernetes resource that lets you mount non-sensitive files into a pod. When mounted as a volume, its contents are refreshed automatically when the ConfigMap is modified.
Result
- Grafana is publicly accessible over HTTPS
- Default dashboard is readable and useful
- No login required to monitor the cluster
- Admin account is secured via a Kubernetes Secret
This approach follows DevOps best practices while staying simple and understandable for a visitor or recruiter.
🧠 What about production?
This setup is intentionally minimalist and educational, but in a production context, several aspects would be hardened:
- Alertmanager would be enabled with alert routing to services (email, Slack, etc.) to notify when components fail.
- Grafana access wouldn’t be anonymous: it would be IP-restricted, proxied, or SSO/LDAP-protected.
- Admin passwords wouldn’t be handled via static Secrets, but through Vault or a secrets manager (SealedSecrets, ExternalSecrets).
- Dashboards would be provisioned via API or dedicated files with more modular versioning strategies.
- TLS certificates would be managed via larger-scale automatic rotation mechanisms (wildcard DNS, ACME DNS challenge, etc.).
But for this MVP, the setup offers a great balance between simplicity, readability, baseline security, and GitOps maintainability.
⚡ Next step: adding loki for centralized log collection? Or testing ArgoCD for advanced GitOps?
Stay tuned!