What is Helm?
Helm is the package manager for Kubernetes. Think of it like apt for Ubuntu or npm for Node.js. Instead of writing many YAML files and running kubectl apply on each, you bundle them into a chart and install the whole app with one command. Helm handles:
- Packaging applications as "Charts"
- Managing complex deployments
- Handling versioning and rollbacks
- Templating Kubernetes manifests
Helm Architecture & Components
Helm installs charts into Kubernetes, creating a new release for each installation. To find new charts, you can search Helm chart repositories.
- Repository: where charts are stored and shared (like Docker Hub for containers)
- Chart: a package containing Kubernetes manifests as templates
- Release: an instance of a chart running in your cluster
- Values: configuration that fills in the template variables
Chart Structure:
mychart/
├── Chart.yaml # Chart metadata
├── values.yaml # Default configuration values
├── templates/ # Kubernetes manifest templates
│ ├── deployment.yaml
│ ├── service.yaml
│ └── _helpers.tpl # Template helpers
└── charts/ # Dependencies
Key Files:
- Chart.yaml: Chart metadata (name, version, description)
- values.yaml: Default values for template variables
- templates/: Go templates that generate Kubernetes manifests
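A simplified sketch of how these pieces fit together (the fragments below are illustrative, not the full files that helm create generates):

```yaml
# values.yaml (excerpt)
replicaCount: 2

# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: {{ .Values.replicaCount }}
```

When the chart is rendered (by helm install or helm template), .Values.replicaCount is read from values.yaml (or a -f/--set override) and .Release.Name is the release name you pass to helm install, so the output manifest gets replicas: 2.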
Basic Commands
# Add a repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Search for charts
helm search repo nginx
# Install a chart (creates a release)
helm install my-nginx bitnami/nginx
# List releases
helm list
# Upgrade a release
helm upgrade my-nginx bitnami/nginx --set replicaCount=3
# Rollback
helm rollback my-nginx 1
# Uninstall
helm uninstall my-nginx
Your First Chart:
helm create mychart
This generates a directory:
mychart/
├── Chart.yaml       # metadata (name, version, description)
├── values.yaml      # default configuration
├── templates/       # Kubernetes manifests with Go templating
│   ├── deployment.yaml
│   ├── service.yaml
│   └── _helpers.tpl
└── charts/          # subchart dependencies
Hands-On Practice
- Prerequisites: a Kubernetes cluster, kubectl, and Helm
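Before starting, a quick way to check that the CLI tools are on your PATH (cluster connectivity can then be verified with kubectl cluster-info):

```shell
# Report whether each required CLI is installed
for tool in kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT installed"
  fi
done
```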
Basic Repository and Install
# Add the repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
# List repos and remove ones you no longer need
helm repo list
helm repo remove bitnami
# See what's available
helm search repo kube-prometheus-stack
helm search repo kube-prometheus-stack --versions | head -5
# Inspect before installing
helm show chart prometheus-community/kube-prometheus-stack # inspect a chart's metadata without installing it
helm show values prometheus-community/kube-prometheus-stack > default-values.yaml
# Install with defaults
kubectl create namespace monitoring
helm install monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--version 65.1.1 # 65.1.1 is the Helm chart version, not the app version.
# Watch it come up
kubectl get pods -n monitoring -w
Once pods are running, access the UIs:
# Grafana (default admin / prom-operator)
kubectl port-forward -n monitoring svc/monitoring-grafana 3000:80
# Prometheus
kubectl port-forward -n monitoring svc/monitoring-kube-prometheus-prometheus 9090:9090
# Alertmanager
kubectl port-forward -n monitoring svc/monitoring-kube-prometheus-alertmanager 9093:9093
Inspect the Release
helm list -n monitoring
helm status monitoring -n monitoring # shows status and chart notes, including how to get credentials
helm get values monitoring -n monitoring
helm get values monitoring -n monitoring --all # includes defaults
helm get manifest monitoring -n monitoring | head -50 # full rendered Kubernetes manifests for the release
helm history monitoring -n monitoring
Upgrade and Clean up
# Upgrade with --set
helm upgrade monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--set grafana.replicas=2
helm history monitoring -n monitoring
# roll it back
helm rollback monitoring 1 -n monitoring
kubectl get pods -n monitoring -l app.kubernetes.io/name=grafana
# Clean up
helm uninstall monitoring -n monitoring
kubectl get all -n monitoring
kubectl delete namespace monitoring
Pull the Chart
# Download the .tgz
helm pull prometheus-community/kube-prometheus-stack --version 65.1.1
# Download AND extract it
helm pull prometheus-community/kube-prometheus-stack \
--version 65.1.1 \
--untar \
--untardir ./charts
ls charts/kube-prometheus-stack/
# Chart.yaml values.yaml templates/ charts/ README.md ...
helm dependency list charts/kube-prometheus-stack
ls charts/kube-prometheus-stack/charts/
helm dependency update charts/kube-prometheus-stack # download/update dependencies
# Reads Chart.yaml: looks at the dependencies: block
# Downloads the required charts from the specified repositories
# Stores them locally inside the chart's charts/ directory
# Generates/updates Chart.lock: locks exact versions
ls charts/kube-prometheus-stack/charts/ # the dependency charts (grafana, kube-state-metrics, ...)
grep version charts/kube-prometheus-stack/Chart.yaml
cat charts/kube-prometheus-stack/Chart.lock # lock file for chart dependencies; commit this to Git
# dependencies: exact versions used
# digest: checksum of dependencies (integrity check)
# generated: timestamp
Build the Chart
mkdir -p monitoring-stack/environments
tree monitoring-stack
cd monitoring-stack
Create Chart.yaml
apiVersion: v2
name: company-monitoring
description: Company-wide monitoring stack
type: application
version: 1.0.0
appVersion: "1.0.0"
dependencies:
  - name: kube-prometheus-stack
    version: "65.1.1"
    repository: "https://prometheus-community.github.io/helm-charts"
helm dependency update
ls charts/
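helm dependency update also writes a Chart.lock next to Chart.yaml. For the chart above it looks roughly like this (the digest and timestamp are illustrative, not real output):

```yaml
dependencies:
- name: kube-prometheus-stack
  repository: https://prometheus-community.github.io/helm-charts
  version: 65.1.1
digest: sha256:... # checksum over the resolved dependencies (real value varies)
generated: "2024-01-01T00:00:00Z" # when the lock was generated (illustrative)
```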
Create values.yaml
kube-prometheus-stack:
  fullnameOverride: monitoring
  prometheus:
    prometheusSpec:
      retention: 15d
      retentionSize: "45GB"
      serviceMonitorSelectorNilUsesHelmValues: false
      podMonitorSelectorNilUsesHelmValues: false
      ruleSelectorNilUsesHelmValues: false
      resources:
        requests:
          cpu: 200m
          memory: 1Gi
        limits:
          memory: 2Gi
  grafana:
    adminPassword: "grafana123" # override per environment
    defaultDashboardsTimezone: browser
    sidecar:
      dashboards:
        enabled: true
        label: grafana_dashboard
        searchNamespace: ALL
      datasources:
        enabled: true
  alertmanager:
    alertmanagerSpec:
      resources:
        requests:
          cpu: 50m
          memory: 128Mi
  nodeExporter:
    enabled: true
  kubeStateMetrics:
    enabled: true
Environment Overrides:
Create environments/values-dev.yaml
kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      retention: 3d
      retentionSize: "8GB"
      resources:
        requests:
          cpu: 100m
          memory: 512Mi
  grafana:
    adminPassword: "admindev123"
    persistence:
      enabled: false
  alertmanager:
    enabled: false
Create environments/values-prod.yaml
kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      retention: 30d
      retentionSize: "180GB"
      replicas: 2
      resources:
        requests:
          cpu: 1
          memory: 4Gi
        limits:
          memory: 8Gi
  grafana:
    replicas: 2
    adminPassword: "adminprod123"
    persistence:
      enabled: true
      size: 10Gi
  alertmanager:
    alertmanagerSpec:
      replicas: 3
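When several -f files are passed, Helm merges them key by key and later files win on conflicts, so an environment file only needs the deltas. A minimal illustration using the files above:

```yaml
# values.yaml sets retention: 15d; environments/values-dev.yaml sets retention: 3d.
# With -f values.yaml -f environments/values-dev.yaml, a dev install effectively runs:
kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      retention: 3d         # from values-dev.yaml (overrides the 15d base)
      retentionSize: "8GB"  # from values-dev.yaml (overrides the "45GB" base)
```

Keys the override file doesn't touch (for example serviceMonitorSelectorNilUsesHelmValues) keep their base values.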
Lint and Render Before Installing
# Check for syntax issues
helm lint . # Lint (validate) the Helm chart in the current folder
helm lint . -f environments/values-dev.yaml
# Render locally — no cluster needed
helm template monitoring . -f environments/values-dev.yaml > rendered-dev.yaml
# Dry-run against the cluster (validates against API)
helm install monitoring . \
--namespace monitoring \
--create-namespace \
-f environments/values-dev.yaml \
--dry-run --debug 2>&1 | head -80
helm install monitoring . \
--namespace monitoring \
--create-namespace \
-f values.yaml \
-f environments/values-dev.yaml \
--atomic \
--timeout 10m
kubectl get secret monitoring-grafana -n monitoring -o jsonpath="{.data.admin-password}" | base64 --decode
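The jsonpath output is plain base64; the decode step works on any string, which you can verify locally without a cluster ("admindev123" is the dev adminPassword from environments/values-dev.yaml above):

```shell
# Round-trip a sample password through base64, the same transform
# kubectl applies when storing Secret data
encoded=$(printf 'admindev123' | base64)
echo "$encoded"                            # the encoded form stored in the Secret
printf '%s' "$encoded" | base64 --decode   # prints admindev123
```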
kubectl port-forward -n monitoring svc/monitoring-grafana 3000:80
# add Grafana replicas to environments/values-dev.yaml:
# grafana:
#   replicas: 3
helm upgrade monitoring . \
--namespace monitoring \
-f values.yaml \
-f environments/values-dev.yaml \
--timeout 10m
# Verify
helm list -n monitoring
kubectl get pods -n monitoring
kubectl get prometheusrules -n monitoring
kubectl get servicemonitors -n monitoring
Teardown
helm uninstall monitoring -n monitoring
kubectl delete ns monitoring
helm repo remove prometheus-community grafana
helm repo list
helm list -A
Don't forget to destroy your cloud Kubernetes cluster when you're done.