upinder sujlana

Prometheus and Grafana install on a Kubernetes cluster using helm

Below are some quick notes on how I set up Helm and then installed Prometheus
and Grafana on a Kubernetes cluster.

[+] Prerequisite: have a K8S cluster already running.
kmaster2@kmaster2:~$ kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION       CONTAINER-RUNTIME
kmaster2   Ready    control-plane,master   9d    v1.22.2   192.168.1.86   <none>        Ubuntu 18.04 LTS   4.15.0-161-generic   docker://20.10.9
knode3     Ready    <none>                 9d    v1.22.2   192.168.1.87   <none>        Ubuntu 18.04 LTS   4.15.0-161-generic   docker://20.10.9
knode4     Ready    <none>                 9d    v1.22.2   192.168.1.88   <none>        Ubuntu 18.04 LTS   4.15.0-161-generic   docker://20.10.9
kmaster2@kmaster2:~$


All three nodes run the same OS:
kmaster2@kmaster2:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04 LTS
Release:        18.04
Codename:       bionic
kmaster2@kmaster2:~$

[+] Install helm on kmaster2 ( https://helm.sh/docs/intro/install/ ). I preferred using apt-get:

    curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
    sudo apt-get install apt-transport-https --yes
    echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
    sudo apt-get update
    sudo apt-get install helm
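A quick sanity check after installing is worthwhile: confirm the Helm client is on the PATH and note which kubeconfig context it will act against (a sketch; the exact version output will differ on your system):

```shell
# Confirm the Helm client installed correctly.
helm version --short
# Helm talks to whatever cluster the current kubeconfig context points at.
kubectl config current-context
```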

[+] Add the prometheus-community chart repository (helm reads your user kubeconfig, so sudo is not needed):


helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

helm repo update

[+] Create a namespace so the release's objects live separately from everything else:

kubectl create ns prometheus

[+] Install prometheus-community/kube-prometheus-stack

helm install prometheus prometheus-community/kube-prometheus-stack -n prometheus
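Before poking at the dashboards, it can help to block until the whole stack is up. A minimal sketch (the 300s timeout is an arbitrary choice):

```shell
# Wait until every pod in the namespace reports Ready, or time out.
kubectl wait pods --all -n prometheus --for=condition=Ready --timeout=300s
```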

kmaster2@kmaster2:~$ helm list -n prometheus
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                          APP VERSION
prometheus      prometheus      1               2021-10-22 10:07:13.399835228 -0700 PDT deployed        kube-prometheus-stack-19.2.2   0.50.0
kmaster2@kmaster2:~$
https://helm.sh/docs/helm/helm_list/

[+] Check all the objects created:

kmaster2@kmaster2:~/$ kubectl get all -n prometheus
NAME                                                         READY   STATUS    RESTARTS   AGE
pod/alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running   0          44s
pod/prometheus-grafana-b8cd4d67-4t9wb                        2/2     Running   0          3m39s
pod/prometheus-kube-prometheus-operator-bcdfdbc79-cf8cc      1/1     Running   0          3m39s
pod/prometheus-kube-state-metrics-58c5cd6ddb-9xtmt           1/1     Running   0          3m39s
pod/prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running   0          44s
pod/prometheus-prometheus-node-exporter-46f6g                1/1     Running   0          3m41s
pod/prometheus-prometheus-node-exporter-sc6c7                1/1     Running   0          3m41s
pod/prometheus-prometheus-node-exporter-zzq2q                1/1     Running   0          3m41s

NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   3m14s
service/prometheus-grafana                        ClusterIP   10.96.216.191    <none>        80/TCP                       3m44s
service/prometheus-kube-prometheus-alertmanager   ClusterIP   10.109.69.174    <none>        9093/TCP                     3m45s
service/prometheus-kube-prometheus-operator       ClusterIP   10.97.223.67     <none>        443/TCP                      3m48s
service/prometheus-kube-prometheus-prometheus     ClusterIP   10.110.169.144   <none>        9090/TCP                     3m50s
service/prometheus-kube-state-metrics             ClusterIP   10.109.193.189   <none>        8080/TCP                     3m50s
service/prometheus-operated                       ClusterIP   None             <none>        9090/TCP                     3m11s
service/prometheus-prometheus-node-exporter       ClusterIP   10.104.193.94    <none>        9100/TCP                     3m47s

NAME                                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-prometheus-node-exporter   3         3         3       3            3           <none>          3m44s

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-grafana                    1/1     1            1           3m43s
deployment.apps/prometheus-kube-prometheus-operator   1/1     1            1           3m43s
deployment.apps/prometheus-kube-state-metrics         1/1     1            1           3m43s

NAME                                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-grafana-b8cd4d67                     1         1         1       3m42s
replicaset.apps/prometheus-kube-prometheus-operator-bcdfdbc79   1         1         1       3m42s
replicaset.apps/prometheus-kube-state-metrics-58c5cd6ddb        1         1         1       3m42s

NAME                                                                    READY   AGE
statefulset.apps/alertmanager-prometheus-kube-prometheus-alertmanager   1/1     3m13s
statefulset.apps/prometheus-prometheus-kube-prometheus-prometheus       1/1     3m11s
kmaster2@kmaster2:~/$


[+] Get the dashboard port information and the default admin user that was created

kmaster2@kmaster2:~/$ kubectl get pods -o=custom-columns=NameSpace:.metadata.namespace,NAME:.metadata.name,CONTAINERS:.spec.containers[*].name -n prometheus
NameSpace    NAME                                                     CONTAINERS
prometheus   alertmanager-prometheus-kube-prometheus-alertmanager-0   alertmanager,config-reloader
prometheus   prometheus-grafana-b8cd4d67-4t9wb                        grafana-sc-dashboard,grafana
prometheus   prometheus-kube-prometheus-operator-bcdfdbc79-cf8cc      kube-prometheus-stack
prometheus   prometheus-kube-state-metrics-58c5cd6ddb-9xtmt           kube-state-metrics
prometheus   prometheus-prometheus-kube-prometheus-prometheus-0       prometheus,config-reloader
prometheus   prometheus-prometheus-node-exporter-46f6g                node-exporter
prometheus   prometheus-prometheus-node-exporter-sc6c7                node-exporter
prometheus   prometheus-prometheus-node-exporter-zzq2q                node-exporter
kmaster2@kmaster2:~/$

Note from the output above that the pod of interest is "prometheus-grafana-b8cd4d67-4t9wb" and the container of interest is "grafana".

[+] Get the HTTP port number and the default user info from its logs:
kmaster2@kmaster2:~/$ kubectl logs prometheus-grafana-b8cd4d67-4t9wb -c grafana -n prometheus | grep -E "Listen|default admin"
t=2021-10-22T17:09:45+0000 lvl=info msg="Created default admin" logger=sqlstore user=admin
t=2021-10-22T17:09:46+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=http subUrl= socket=
kmaster2@kmaster2:~/$

[+] The default Grafana password is "prom-operator"; it can be looked up in the chart's default values:
https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml
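Rather than reading values.yaml, the password can also be pulled from the Secret the chart creates; I'm assuming the default Secret name `prometheus-grafana` (release name + `-grafana`) here, so adjust if you overrode the chart naming:

```shell
# Read the admin password out of the chart-managed Secret and decode it.
# The Secret name is an assumption based on the chart's default naming.
kubectl get secret prometheus-grafana -n prometheus \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo
```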

[+] Review the Grafana dashboard by port-forwarding directly to the pod:

kmaster2@kmaster2:~$ kubectl port-forward -n prometheus pod/prometheus-grafana-b8cd4d67-4t9wb 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

Go to http://127.0.0.1:3000  ( admin / prom-operator )
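Port-forwarding to the Service instead of the pod avoids looking up the generated pod name (and keeps working across pod restarts). This targets the `prometheus-grafana` Service from the listing above, whose port 80 maps to the container's 3000:

```shell
# Forward local 3000 to the Service's port 80 (which targets Grafana's 3000).
kubectl port-forward -n prometheus svc/prometheus-grafana 3000:80
```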

[+] Review the Prometheus dashboard the same way; first confirm the port from the container logs, then port-forward:

kmaster2@kmaster2:~$ kubectl logs prometheus-prometheus-kube-prometheus-prometheus-0  -n prometheus -c prometheus | grep -i 9090
level=info ts=2021-10-22T17:38:39.008Z caller=web.go:541 component=web msg="Start listening for connections" address=0.0.0.0:9090
kmaster2@kmaster2:~$


kmaster2@kmaster2:~$ kubectl port-forward -n prometheus prometheus-prometheus-kube-prometheus-prometheus-0 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
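As with Grafana, the Prometheus UI can also be reached through its Service rather than the pod, using the `prometheus-kube-prometheus-prometheus` Service shown in the listing above:

```shell
# Forward local 9090 to the Prometheus Service.
kubectl port-forward -n prometheus svc/prometheus-kube-prometheus-prometheus 9090
```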

[+] Create a quick Service to expose the Grafana deployment on a NodePort
kmaster2@kmaster2:~$ kubectl get pod -n prometheus -l app.kubernetes.io/name=grafana
NAME                                READY   STATUS    RESTARTS      AGE
prometheus-grafana-b8cd4d67-4t9wb   2/2     Running   2 (54m ago)   80m
kmaster2@kmaster2:~$ 
kmaster2@kmaster2:~$ kubectl get deployment -n prometheus -l app.kubernetes.io/name=grafana
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
prometheus-grafana   1/1     1            1           80m
kmaster2@kmaster2:~$
kmaster2@kmaster2:~$

kmaster2@kmaster2:~$ kubectl expose deployment prometheus-grafana -n prometheus --name=prometheus-svc --port=3000 --type=NodePort
service/prometheus-svc exposed
kmaster2@kmaster2:~$

kmaster2@kmaster2:~$ kubectl get svc -n prometheus | grep -i prometheus-svc
prometheus-svc                            NodePort    10.109.152.21    <none>        3000:30371/TCP               73s
kmaster2@kmaster2:~$


Now, in a browser, go to any cluster node's IP on port 30371 to reach the Grafana dashboard.
In my cluster I went to:
http://192.168.1.86:30371/login  ( admin / prom-operator )


[+] For a permanent service with a fixed NodePort, generate a manifest first:
kmaster2@kmaster2:~$ kubectl expose deployment prometheus-grafana -n prometheus --name=prometheus-svc --port=3000 --type=NodePort --dry-run=client -o yaml > grafana.yaml
kmaster2@kmaster2:~$
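Since the ad-hoc `prometheus-svc` from the earlier `kubectl expose` is still present under the same name, it has to be deleted before the edited manifest can be applied cleanly (otherwise the apply reports a conflict instead of "created"):

```shell
# Remove the temporary NodePort Service created by the earlier expose.
kubectl delete svc prometheus-svc -n prometheus
```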

Edit the YAML file to add "nodePort: 30000" under the port entry:

kmaster2@kmaster2:~$ cat grafana.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 8.2.1
    helm.sh/chart: grafana-6.17.2
  name: prometheus-svc
  namespace: prometheus
spec:
  ports:
  - port: 3000
    nodePort: 30000
    protocol: TCP
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/name: grafana
  type: NodePort
status:
  loadBalancer: {}
kmaster2@kmaster2:~$

kmaster2@kmaster2:~$ kubectl apply -f grafana.yaml
service/prometheus-svc created
kmaster2@kmaster2:~$
kmaster2@kmaster2:~$ kubectl get svc -n prometheus | grep -i prometheus-svc
prometheus-svc                            NodePort    10.111.152.221   <none>        3000:30000/TCP               20s
kmaster2@kmaster2:~$
kmaster2@kmaster2:~$ kubectl describe svc prometheus-grafana -n prometheus
Name:              prometheus-grafana
Namespace:         prometheus
Labels:            app.kubernetes.io/instance=prometheus
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=grafana
                   app.kubernetes.io/version=8.2.1
                   helm.sh/chart=grafana-6.17.2
Annotations:       meta.helm.sh/release-name: prometheus
                   meta.helm.sh/release-namespace: prometheus
Selector:          app.kubernetes.io/instance=prometheus,app.kubernetes.io/name=grafana
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.216.191
IPs:               10.96.216.191
Port:              service  80/TCP
TargetPort:        3000/TCP
Endpoints:         10.44.0.1:3000
Session Affinity:  None
Events:            <none>
kmaster2@kmaster2:~$

Now, in a browser, go to any cluster node's IP on port 30000 to reach the Grafana dashboard.
In my cluster I went to:
http://192.168.1.86:30000/login  ( admin / prom-operator )
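To tear everything down afterwards, here is a cleanup sketch (it assumes the release, Service, and namespace names used throughout these notes):

```shell
# Remove the hand-made NodePort Service, the Helm release, and the namespace.
kubectl delete svc prometheus-svc -n prometheus
helm uninstall prometheus -n prometheus
kubectl delete ns prometheus
```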

and it works :)