Helm is the package manager for Kubernetes. It packages Kubernetes resources and configuration into reusable, versioned units called charts, and provides commands to install, upgrade, inspect, and roll back those packages on a cluster. Think of a Helm chart as a tarball that contains templates (YAML with placeholders), default values, and metadata so you can deploy an app with one command instead of hand-editing many manifests.
Why we need Helm
Kubernetes apps usually require many manifests (Deployments, Services, ConfigMaps, Secrets, CRDs, etc.). Helm solves several real problems:
Templating & reuse — one chart can be parameterized with a values.yaml so the same chart deploys different environments (dev/stage/prod).
Versioning — charts are versioned, so you can track and roll back releases.
Dependency management — charts can declare dependencies on other charts (databases, ingress controllers).
Release lifecycle — Helm tracks releases (what was installed, with what values), enabling upgrades and rollbacks.
Easier collaboration — teams can publish charts in repos and share curated, tested deployment recipes.
Helm speeds up developer workflows and reduces repetitive YAML editing while giving operations teams repeatable, auditable deployments.
Helm architecture (practical, Helm v3 view)
Helm historically had two parts: a client and a server component called Tiller (Helm v2). Tiller raised security and complexity concerns, so Helm v3 removed it: the CLI is now client-only and talks directly to the Kubernetes API server using your cluster credentials (kubeconfig). The important pieces to understand now:
Helm CLI (client) — what you run locally or inside CI. It renders templates and issues Kubernetes API requests to create/update resources.
Charts — the package format: Chart.yaml, templates/, values.yaml, optional charts/ for dependencies, and helpers. Charts are what you version and ship.
Repositories — HTTP locations (index + tarballs) that host charts; helm repo add & helm repo update manage them.
Releases — an installed instance of a chart with a release name and values. Helm stores release history as Kubernetes Secrets (or ConfigMaps) in the target namespace to enable upgrades and rollbacks; see the example after this list.
Because Helm v3 is client-side, security works with normal Kubernetes RBAC and kubeconfig contexts (no Tiller privileges).
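For a concrete look at that bookkeeping, you can list the release records Helm keeps. A minimal sketch, assuming a release named myrelease installed into the web namespace (Helm labels its release Secrets with owner=helm):

kubectl get secrets -n web -l owner=helm
# Each revision is stored as a Secret of type helm.sh/release.v1,
# named sh.helm.release.v1.myrelease.v1, .v2, and so on.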
How to install & configure Helm (step-by-step)
These steps assume you have kubectl configured with access to a cluster and kubeconfig set.
- Install Helm CLI
 
macOS (Homebrew): brew install helm
Linux: download the official tarball from helm.sh and extract helm into /usr/local/bin.
Windows: use Chocolatey or scoop.
Verify: helm version should print the client version (see the official docs for platform-specific details).
- Add a chart repository. For example, add the Bitnami repo (the old community stable repo at https://charts.helm.sh/stable is archived and no longer maintained):
 
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm repo list   # shows configured repos
- Search and inspect charts
 
helm search repo nginx
helm show chart bitnami/nginx
helm show values bitnami/nginx   # default values
- Create a new chart scaffold (start developing)
 
helm create myapp
This creates myapp/Chart.yaml, values.yaml, and templates/ with example manifests. Edit values.yaml to set the image name, replicas, ports, and any config; the files under templates/ reference those values, as the sketch below shows.
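As an illustration of how templating works, a trimmed-down Deployment template might read values like this (a simplified sketch, not the full manifest helm create generates; replicaCount, image.repository, image.tag, and service.port are keys the default scaffold defines in values.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-myapp
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-myapp
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-myapp
    spec:
      containers:
        - name: myapp
          # image settings come from values.yaml (or -f/--set overrides)
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}

Because every environment-specific detail lives in values.yaml, the same template deploys dev, stage, and prod unchanged.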
- Install a chart into the cluster
 
helm install myrelease ./myapp
or from a repository:
helm install myrelease bitnami/nginx --namespace web --create-namespace
Helm renders the templates, applies them to the cluster, and records the release.
- Override values at install. Use CLI overrides or a custom values file:
 
helm install myrelease ./myapp -f prod-values.yaml --set image.tag=1.2.3
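Values passed with -f override the chart defaults, and --set entries override both. A prod-values.yaml for the scaffold above might look like this (illustrative only; the keys match the template sketch earlier):

replicaCount: 3
image:
  repository: ghcr.io/myorg/myapp   # hypothetical registry path
  tag: "1.2.3"
service:
  port: 8080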
- Upgrade & rollback Upgrade:
 
helm upgrade myrelease ./myapp -f prod-values.yaml
List history and rollback:
helm history myrelease
helm rollback myrelease 2
- Uninstall
 
helm uninstall myrelease --namespace web
For more commands and details see Helm’s cheat sheet and CLI docs.
Important helm commands (practical list with one-line meaning)
Here are the commands you’ll use day-to-day, each with a one-line meaning.
helm repo add <name> <url> — add a chart repo.
helm repo update — refresh repo indexes.
helm search repo <keyword> — search charts in configured repos.
helm show values <chart> — show a chart’s default values.
helm create <name> — scaffold a new chart.
helm install <release> <chart> [-f values.yaml] [--set k=v] — install a release.
helm list [--namespace N] — list releases.
helm upgrade <release> <chart> [-f values.yaml] — upgrade a release.
helm rollback <release> <revision> — roll back to a previous revision.
helm uninstall <release> — remove a release.
helm template <chart> — render templates locally (useful in CI or for review; see the combined example after this list).
helm lint <chart> — validate a chart for common mistakes.
helm package <chart> — produce a .tgz chart package.
helm push — push a packaged chart to an OCI registry (built in since Helm 3.8) or to ChartMuseum via the cm-push plugin.
helm history <release> — show release history.
helm status <release> — show the current release status.
The official docs and cheat sheet are great reference pages for full options.
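As a concrete pre-deploy check, you can combine linting, local rendering, and a server-side dry run. A sketch using the myapp chart and prod-values.yaml from earlier:

helm lint ./myapp -f prod-values.yaml
helm template myrelease ./myapp -f prod-values.yaml > rendered.yaml
# ask the API server to validate the manifests without persisting them
kubectl apply --dry-run=server -f rendered.yaml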
Best practices for charts and values (short list)
Keep environment-specific settings in per-environment files such as values-<env>.yaml (values-prod.yaml, values-stage.yaml) and avoid huge --set chains.
Use helm lint and helm template to validate before applying.
Keep templates idempotent and avoid embedding cluster-specific secrets in charts (use secrets managers or sealed-secrets).
Version your charts (Chart.yaml version and appVersion) and follow semantic versioning.
Prefer provisioning external dependencies (databases, for example) via separate charts or managed services, and declare chart dependencies in Chart.yaml when appropriate; see the sketch below.
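For reference, a dependency declaration in Chart.yaml looks like this (a sketch; the chart name and version constraint are illustrative):

apiVersion: v2
name: myapp
version: 0.1.0
appVersion: "1.2.3"
dependencies:
  - name: postgresql
    version: "13.x.x"                              # any matching 13.x release
    repository: https://charts.bitnami.com/bitnami

Run helm dependency update ./myapp afterwards to download the declared charts into myapp/charts/.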
How to implement Helm in a CI/CD pipeline — principles first
CI builds and pushes container images; CD deploys them. Helm fits into CD as the tool that installs/upgrades the Kubernetes resources with the new image tag. Two patterns are common:
Push-based CD from CI — CI (Jenkins/GitHub Actions/GitLab CI) builds the image, pushes it to a registry, then runs helm upgrade --install (or helm template + kubectl apply) to update the cluster. Simple and widely used.
GitOps / Pull-based CD — CI updates a Git ops repo (committed values.yaml or Helm chart version bump); a GitOps controller (Argo CD / Flux) watches the repo and reconciles cluster state. This decouples build from deployment and adds an audit trail. Both are valid; choose depending on team needs.
Below are ready-to-use CI pipeline snippets you can paste and adapt.
Example 1 — GitHub Actions: build image, push, helm upgrade
This example assumes:
Your repo contains chart/ (Helm chart) or you reference a chart repo.
You store KUBECONFIG as a GitHub secret or use azure/k8s-set-context / runner with cluster access.
The image is pushed to GitHub Container Registry (ghcr.io) or Docker Hub.
.github/workflows/deploy.yml (simplified):
name: Build and Deploy
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GHCR_TOKEN }}
      - name: Build and push image
        env:
          DOCKER_BUILDKIT: 1
        run: |
          IMAGE=ghcr.io/${{ github.repository_owner }}/myapp:${{ github.sha }}
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: '1.29.0'
      - name: Set up Helm
        uses: azure/setup-helm@v3
        with:
          version: '3.11.0'
      - name: Configure kubeconfig
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBECONFIG }}" > ~/.kube/config
          chmod 600 ~/.kube/config
      - name: Helm upgrade (deploy)
        run: |
          helm upgrade --install myapp ./chart \
            --namespace web --create-namespace \
            -f ./chart/values.yaml \
            --set image.repository=ghcr.io/${{ github.repository_owner }}/myapp \
            --set image.tag=${{ github.sha }}
Notes:
We use helm upgrade --install to create the release if missing.
Replace secret and registry parts with your provider.
You can use helm template + kubectl apply -f - if you prefer rendering locally and applying with kubectl directly, as sketched below.
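That alternative looks roughly like this (a sketch reusing the chart path and namespace assumed above; note that applying with kubectl bypasses Helm’s release tracking, so you lose helm rollback):

helm template myapp ./chart --namespace web \
  -f ./chart/values.yaml \
  --set image.tag=$GITHUB_SHA \
  | kubectl apply -n web -f -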
Example 2 — Jenkins pipeline (Declarative) with Helm
Assumptions: Jenkins has agents with Docker & Helm installed, and credentials (registry login and kubeconfig) are stored in Jenkins Credentials.
Jenkinsfile (stripped to essentials):
pipeline {
  agent any
  environment {
    IMAGE = "myregistry.example.com/myapp:${env.BUILD_NUMBER}"
  }
  stages {
    stage('Checkout') {
      steps { checkout scm }
    }
    stage('Build & Push') {
      steps {
        withCredentials([usernamePassword(credentialsId: 'registry-creds-id',
                                          usernameVariable: 'DOCKER_USER',
                                          passwordVariable: 'DOCKER_PASS')]) {
          sh '''
            docker build -t $IMAGE .
            echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin myregistry.example.com
            docker push $IMAGE
          '''
        }
      }
    }
    stage('Deploy with Helm') {
      steps {
        withCredentials([file(credentialsId: 'kubeconfig-id', variable: 'KUBECONFIG_FILE')]) {
          sh '''
            export KUBECONFIG=$KUBECONFIG_FILE
            helm upgrade --install myapp ./chart \
              --namespace web --create-namespace \
              --set image.repository=myregistry.example.com/myapp \
              --set image.tag=$BUILD_NUMBER
          '''
        }
      }
    }
  }
}
Notes and tips:
Use Jenkins credentials plugin to securely store kubeconfig.
Consider helm lint and helm template in a test stage before deploying.
For blue-green or canary deployments, combine Helm with appropriate chart hooks or use service meshes/Ingress strategies.
Example 3 — GitLab CI (short)
GitLab CI can build and then run helm upgrade in a job with kubectl configured via a CI/CD variable KUBE_CONFIG or using the Kubernetes integration. The pattern mirrors GitHub Actions: build image, push, then helm upgrade --install passing --set image.tag=$CI_COMMIT_SHA.
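A minimal .gitlab-ci.yml sketch of that pattern, assuming a base64-encoded kubeconfig stored in a KUBE_CONFIG CI/CD variable and GitLab’s predefined registry variables:

stages: [build, deploy]

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

deploy:
  stage: deploy
  image:
    name: alpine/helm:3.14.0
    entrypoint: [""]             # the image's default entrypoint is helm itself
  script:
    - echo "$KUBE_CONFIG" | base64 -d > kubeconfig
    - export KUBECONFIG=$PWD/kubeconfig
    - helm upgrade --install myapp ./chart --namespace web --create-namespace --set image.repository=$CI_REGISTRY_IMAGE --set image.tag=$CI_COMMIT_SHA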
GitOps alternative (Argo CD / Flux)
If you prefer pull-based deployments, CI only updates a Git repo (commits a change to values.yaml or bumps a chart version in requirements), and Argo CD or Flux picks it up and applies to cluster. This gives strong visibility and easier rollbacks via Git. Use Helm charts as the unit of packaging and versioning; Argo/Flux can pull charts from chart repos or render from Git.
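For concreteness, an Argo CD Application that renders a Helm chart from a Git repo might look like this (a sketch; the repository URL and paths are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/deploy-repo.git   # hypothetical GitOps repo
    targetRevision: main
    path: chart                    # directory containing Chart.yaml
    helm:
      valueFiles:
        - values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: web
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift

CI then only commits a new image tag to values.yaml; Argo CD notices the change and reconciles the cluster.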
Quick troubleshooting checklist
If helm install fails, run helm template to see rendered manifests.
Check Kubernetes events and pods (kubectl -n <namespace> get pods, kubectl -n <namespace> describe pod <pod-name>).
Use helm status and helm history to inspect release state.
If RBAC errors occur, ensure the kubeconfig used by CI has appropriate permissions.
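Put together, a quick inspection pass might look like this (assuming the myrelease release in the web namespace from earlier examples):

helm status myrelease -n web
helm history myrelease -n web
helm get values myrelease -n web     # the values the release was actually installed with
helm get manifest myrelease -n web   # the manifests Helm applied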