Learning GitOps with Helm Charts + Flux

Earlier this month I built and deployed my first ever GitOps continuous delivery pipeline using Kubernetes and Flux. Throughout the process I found myself wading through quite a bit of outdated and confusing documentation, so having made it to the other side I wanted to write up a walk-through of the steps I took, in the hopes that other developers can have an easier time.

The Stack:

AWS EKS for the Kubernetes cluster
Helm Operator by Flux for GitOps
Terraform + CloudFormation for infrastructure

The application I deployed to the cluster was a GraphQL API that lives in a Docker image on AWS ECR, but this can be adapted to any Docker image or other containerized build.

Since there are a lot of good resources on creating a Kubernetes cluster specific to whatever service you are using (AWS EKS, Google GKE, etc.), I am going to start this walk-through with the assumption that you have an existing cluster that you want to deploy an image to.
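Before starting, it's worth double-checking that kubectl is pointed at the cluster you intend to deploy to. A quick sanity check:

## confirm the active context and that the cluster's nodes are reachable
kubectl config current-context
kubectl get nodes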

Step 1: GitHub

You will need a dedicated GitHub repository for this process. This is where you will commit all of the following configuration (your Helm charts, HelmReleases, etc.), and it should be separate from whatever repository you are using for building the application that you are deploying.
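For reference, here's roughly how a config repo like this can be laid out (the directory names are just my own convention, not anything Flux requires):

releases/
  graphql.yaml   ## HelmRelease manifests (step 4)
charts/
  graphql/       ## the Helm chart for the app (step 5)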

Step 2: Setting up Fluxcd

Follow the instructions for installing Helm and fluxctl.

Then, use Helm to add the fluxcd chart repository:

helm repo add fluxcd https://charts.fluxcd.io

Create a namespace on your cluster for flux-related pods to live:

I used 'fluxcd', but you can choose whatever namespace makes sense for you.

kubectl create ns fluxcd

Create your Flux pod (the GitHub repo + branch will be from Step 1):

helm upgrade -i flux fluxcd/flux --wait \
  --namespace fluxcd \
  --set git.url=git@github.com:<GITHUB REPO> \
  --set git.branch=<BRANCH>

If your repository is private you will also need to add the ssh.known_hosts flag:

--set-file ssh.known_hosts=$known_hosts_path

pointing to the path where your known_hosts file lives, usually in the ~/.ssh directory.
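Putting it all together for a private repo, the install command might look something like this (the repo path is a made-up example, and $HOME/.ssh/known_hosts assumes the default SSH setup):

helm upgrade -i flux fluxcd/flux --wait \
  --namespace fluxcd \
  --set git.url=git@github.com:oliver/gitops-config \
  --set git.branch=master \
  --set-file ssh.known_hosts=$HOME/.ssh/known_hosts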

Step 3: The Helm Operator

The Helm Operator is a controller that works in tandem with Flux and allows you to create HelmReleases, which will deploy an image as well as update it to the latest version whenever Flux detects a change.

Apply the HelmRelease CRD to your cluster:

kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/flux-helm-release-crd.yaml

Then install the Helm Operator itself:

helm upgrade -i helm-operator fluxcd/helm-operator --wait \
  --namespace fluxcd \
  --set helm.versions=v3

Retrieve the SSH Key:

fluxctl identity --k8s-fwd-ns fluxcd

and add it to your GitHub repository under deploy keys.

You will want to give this key write access so that Flux can update your HelmReleases when it detects a new version of your image.

Step 4: HelmReleases

Now that Flux is all set up, it's time to get to the good stuff. As I mentioned previously, HelmReleases are YAML files that Flux will use to deploy an image.

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: <app name -- name of the application you're deploying>
  namespace: default
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.chart-image: glob:dev-*
spec:
  releaseName: <app name>
  helmVersion: v3
  chart:
    git: <github repo from step 1>
    path: <the directory where your charts live -- ex. charts/app-name >
    ref: <github branch set in step 2>
  values:
    image: <image endpoint of your application>
    replicaCount: 1
    hpa:
      enabled: true
      maxReplicas: 3
      cpu: 1
    extraEnvs:
      ## This is where you can add any public env variables
      username: oliver_2020
      dbServer: 11.0.1.213 
    envFrom:
      secretRef:
        ## The Secret Name for private env variables (more on that later)
        name: mysecrets

Each HelmRelease you have will need a corresponding Helm chart. Helm has some great documentation on building custom Helm charts + templating, but I'll add the basics.

Step 5: Helm Charts

If Kubernetes is the pilot, Helm is the steering wheel. Most Helm charts contain the following components:
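Laid out on disk, a chart for the graphql app above might look like this (service.yaml and _helpers.tpl are optional, but you'll see both later in this post):

charts/graphql/
  Chart.yaml         ## chart metadata (name, version)
  values.yaml        ## default variables consumed by the templates
  templates/
    deployment.yaml  ## kubernetes resource templates
    service.yaml
    _helpers.tpl     ## named template helpers (step 6)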

The Chart.yaml file is almost like the 'title page' of a Helm chart. It consists of the chart name and version information.

apiVersion: v1
name: graphql
description: A Super Awesome GraphQL API
version: 1.0.0

The values.yaml file is where you will define all the variables you will need for your chart.

image: <image endpoint of your application>
replicaCount: 1
hpa:
  enabled: true
  maxReplicas: 3
  cpu: 1
extraEnvs:
  username: oliver_2020

These variables are then consumed by the various template files.

A common template file is the deployment template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ template "app.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "app.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "app.name" . }}
        release: {{ .Release.Name }}
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image }}"
          imagePullPolicy: {{ .Values.imagePullPolicy }}
          env:
            - name: publicEnvKey
              value: {{ .Values.extraEnvs.username | quote }}
            - name: secretEnvKey
              valueFrom:
                secretKeyRef:
                  key: dbPassword
                  name: mysecrets
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
      volumes:
      - name: data
        emptyDir: {}
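Alongside the deployment, most charts also include a service template so the pods are reachable inside the cluster. A minimal sketch (the ports here are assumptions -- adjust them to whatever your container actually listens on):

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}
spec:
  type: ClusterIP
  ports:
    - port: 80          ## port the service exposes
      targetPort: 4000  ## port the container listens on (example)
      protocol: TCP
  selector:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}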

You will have access to variables defined throughout your chart using the templating syntax:

.Values -- the variables defined in values.yaml (plus any overrides from the HelmRelease values block)
.Release -- metadata about the release itself; .Release.Name comes from the releaseName field in your HelmRelease
.Chart -- the contents of Chart.yaml
However, if you need to change the structure of a value, for example to remove or add a hyphen, you can add a _helpers.tpl file. Template helpers are also useful for generating values like timestamps, which wouldn't make sense to put in your values.yaml file because they're not really integral to the chart.

Step 6: Template Helpers

Variables are defined in a helper file with the define action:

{{- define "mychart.labels" }}
  labels:
    generator: helm
    date: {{ now | htmlDate }}
{{- end }}

You can then call this value in any of your template files with the template action

{{ template "mychart.labels" }}

which the template engine will expand into the labels block above, with the current date filled in.
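Rendered, that helper would produce something like this (the date is just an example; it will be whatever the current date is at render time):

  labels:
    generator: helm
    date: 2020-04-01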

For a thorough reference on declaring custom values, see the Helm named templates guide.

Step 7: Sealed Secrets

If you have any private env variables you want to use, it's good practice to store them in a secret. Kubernetes has a native Secret resource, but its values are only base64 encoded, which isn't the safest strategy. Instead, I've been using a tool called sealed-secrets, which allows you to create SealedSecret resources that are safe to store in GitHub because they can only be decrypted by the controller that created them.

A SealedSecret resource looks something like this:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mysecrets
  namespace: default
spec:
  encryptedData:
    dbPassword: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....

and can be referenced in your HelmRelease by the secret name:

envFrom:
  secretRef:
    name: mysecrets


To begin, add the stable Helm repository:

helm repo add stable https://kubernetes-charts.storage.googleapis.com/

Update your Helm repositories:

helm repo update

and then install sealed-secrets from the stable repository:

helm install --namespace kube-system stable/sealed-secrets --generate-name

Once it's installed you can apply the sealed-secrets controller to your cluster:

kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.6/controller.yaml

and then fetch the generated public key (this one is safe to commit to GitHub), passing in the name of the generated sealed-secrets controller:

kubeseal --fetch-cert \
  --controller-namespace=kube-system \
  --controller-name=sealed-secrets-230493143 \
  > pub-cert.pem

You can find the controller name with kubectl get deployments -n kube-system (or take the pod name from kubectl get pods -n kube-system and drop the random suffix).

Step 8: Creating a SealedSecret Resource

To create a secret you will first generate it with kubectl and then encrypt it with kubeseal:

kubectl -n <env namespace> create secret generic <secretname> \
--from-literal=<key>=<value> \
--from-literal=<key>=<value> \
--dry-run \
-o json > <secretname>.json


The --dry-run flag will write the JSON file with base64-encoded values, but will not create an actual Kubernetes secret on your cluster.
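As a concrete (hypothetical) example, the mysecrets secret referenced in the HelmRelease above could be generated with:

kubectl -n default create secret generic mysecrets \
--from-literal=dbPassword=supersecretvalue \
--dry-run \
-o json > mysecrets.json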

To encrypt it with kubeseal, pass in the location of the public key from the previous step, the name of the JSON file you just created, and the filename you want to use for your secret:

kubeseal --format=yaml --cert=pub-cert.pem < secretname.json > secretname.yaml

This process will generate a custom SealedSecret resource that contains encrypted credentials that can only be unsealed by the controller that created it.

Once you have your SealedSecret resource you can delete or .gitignore the generated JSON file. When you commit the pub-cert.pem file and the new SealedSecret resource to GitHub, Flux will apply the secret to your cluster, and the controller will unseal it into a native Kubernetes secret for your workloads to read.
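You can verify the unsealing worked by checking that a regular Kubernetes Secret now exists alongside the SealedSecret (the names here match the earlier examples):

kubectl get sealedsecret mysecrets -n default
kubectl get secret mysecrets -n default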

Step 9: Push To Git

Once your HelmReleases and corresponding Helm charts are all set, you can go ahead and push your commits to the GitHub repository from step 1.

When your commits are merged or pushed to the branch you specified in step 2, Flux will pick up the changes and deploy your first Helm release.

Flux generally takes a minute or two to sync, but you can force a sync manually with

fluxctl sync --k8s-fwd-ns fluxcd

replacing fluxcd with whatever namespace you chose for your Flux pod in step 2.
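You can also ask Flux what it's currently managing, which is a quick way to confirm your release was picked up:

fluxctl list-workloads --k8s-fwd-ns fluxcd --all-namespaces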

Step 10: Troubleshooting

If you pushed your commits, waited for Flux to sync, and still aren't seeing the pod with the app name from your Helm release, there are a few ways to troubleshoot.

Get a list of all pods with kubectl get pods --all-namespaces and copy the name of the flux pod. Then run kubectl logs <fluxpodname> -n fluxcd

You should be able to see the logs of Flux cloning the repo and polling for updates, and can check them for any errors.

Another strategy is kubectl get hr, which will list all HelmRelease instances and will often show where each release is in the deployment process, along with any errors that came up.

Lastly, I always find it helpful to describe the resources and make sure they are configured as expected. You can describe any Kubernetes resource with kubectl describe <resourcetype> <resourcename>.


Conclusion

This is only one of many ways to configure a continuous delivery pipeline for your Kubernetes cluster. Flux is a great tool because it integrates well with Helm and is a relatively straightforward way to automate the deployment process. Helm recently updated to version 3, and Flux support for Helm v3 is still in beta (hence the somewhat confusing/outdated documentation), but the team at Weaveworks has been very responsive on their Slack channel at https://cloud-native.slack.com/ and hopefully this walkthrough can help navigate some of the complexity as well.

Have questions, feedback, pictures of your pets? Don't hesitate to comment down below and happy coding!

Oldest comments (1)

angeloplsight • Edited

.Release.Name: where does that value come from? The HelmRelease has a releaseName, but where is it specified that the releaseName in the HelmRelease corresponds to .Release.Name?