Chirag Modi

Kubernetes Helm Charts Testing

Tools to use for Helm Chart Testing during Development to Release


Introduction

Helm is a package manager for Kubernetes that lets you write Kubernetes templates and package them as a chart together with all their dependencies. A single chart can deploy nginx, memcached, or a full-stack web application. You can deploy any application chart with a single command.

```
helm install my-release bitnami/nginx
```

Scope

This article does not cover Helm chart development in detail; the official Helm documentation is very good, and you can go through it to learn more.

I am going to cover how to test Helm charts as part of development and which tools you can use to test charts, from unit tests to integration tests.

Helm Chart Development — Not a Pleasant Experience

Helm charts are written in Go templates, and writing those templates to render Kubernetes manifests is a painful experience. There is no good debugger support, and error messages give few clues, so you can sometimes spend hours fixing a minor indentation issue. Helm provides a debug flag for rendering templates, but it does not pinpoint the exact line where the error is, so issues remain difficult to find. I hope better tools become available in the future to make Helm chart developers' lives easier.


Are your chart templates correct?

As mentioned earlier, Helm chart templates are Go templates, so during development you need to catch syntax errors early so that you don't get last-minute surprises when you release your chart.

Helm provides a lint command which finds and reports these template issues, so execute it frequently during development to catch compile-time errors early.

Here is a helm chart deployment template with errors.
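
The original embedded template is not shown here, but below is a minimal sketch (my reconstruction, not the original gist) that reproduces the same two errors: it calls an undefined named template and references `Values` without the leading dot.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  # Error: no named template called "namespace" is defined anywhere in the chart.
  namespace: {{ include "namespace" . }}
spec:
  # Error: should be {{ .Values.replicaCount }}; "Values" alone is not a defined function.
  replicas: {{ Values.replicaCount }}
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
```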

Running helm lint on this chart reports the expected issues.

```
➜ mychart helm lint .
==> Linting .
[INFO] Chart.yaml: icon is recommended

[ERROR] templates/: parse error at (mychart/templates/deployment.yaml:19): function "Values" not defined

[ERROR] templates/: template: mychart/templates/deployment.yaml:7:16: executing "mychart/templates/deployment.yaml" at <include "namespace" .>: error calling include: template: no template "namespace" associated with template "gotpl"

Error: 1 chart(s) linted, 1 chart(s) failed
```

I know this is a very simple example, but it is good enough for understanding how lint works.

Validate against Kubernetes Manifests

helm template is the command you can use to render Kubernetes manifests from your Helm chart templates.

There is also a helm install command to deploy charts to a Kubernetes cluster. Internally, it first renders the templates (as helm template does) and then applies the generated output to the cluster.

```
helm template . > deployment.yaml
```

Are your Kubernetes Manifests Valid?

If you make mistakes while developing a chart, the generated Kubernetes manifests may produce errors when applied to the Kubernetes cluster, and I want to know about those errors before deployment.

Kubeval is the tool to the rescue. It validates your generated manifests against the official Kubernetes specifications and reports any issues.

Can you spot any issue in this Deployment template?
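
The manifest itself is not reproduced here, but a sketch (my reconstruction) containing the three problems kubeval reports below would look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: myservice
spec:
  # Invalid: replicas must be an integer, not a string.
  replicas: "2"
  # Invalid: the required selector field is missing entirely.
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            # Invalid: each port entry requires containerPort.
            - name: http
```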

Run kubeval against this deployment manifest and look at the issues.

```
➜ mychart kubeval deployment.yaml

WARN - mychart/templates/deployment.yaml contains an invalid Deployment (myservice.nginx-deployment) - selector: selector is required

WARN - mychart/templates/deployment.yaml contains an invalid Deployment (myservice.nginx-deployment) - containerPort: containerPort is required

WARN - mychart/templates/deployment.yaml contains an invalid Deployment (myservice.nginx-deployment) - spec.replicas: Invalid type. Expected: [integer,null], given: string

##### This is the output if it was a valid deployment #####
PASS - mychart/templates/deployment.yaml contains a valid Deployment (myservice.nginx-deployment)
```

Additionally, you can specify the Kubernetes version to validate the generated templates against using the option `--kubernetes-version v1.20.4`.

Custom Validations against Kubernetes Manifests

Let’s say I have the following simple requirements.

  • Containers should not run as root.

  • Docker images should come from my org repository.

You could address this by implementing an admission controller in Kubernetes that checks resources as they are deployed to the cluster, but wouldn't it be nice if we could apply this custom validation before deployment?

Conftest is a framework which allows you to write rules as OPA (Open Policy Agent) policies and run them against Kubernetes manifests.
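
The policies are written in Rego. The original policy file is not shown here; a minimal sketch implementing the two rules above could look like this (the exact fields you check, such as securityContext.runAsNonRoot or the myorg.com registry prefix, depend on how your manifests express these requirements):

```rego
package main

# Rule 1: containers should not run as root.
deny[msg] {
  input.kind == "Deployment"
  not input.spec.template.spec.securityContext.runAsNonRoot
  msg := "Containers must not run as root"
}

# Rule 2: images must come from the org repository (hypothetical registry prefix).
deny[msg] {
  input.kind == "Deployment"
  container := input.spec.template.spec.containers[_]
  not startswith(container.image, "myorg.com/")
  msg := sprintf("image '%s' doesn't come from myorg.com repository", [container.image])
}
```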

Run these custom rules against the deployment manifest using conftest, and it will report issues based on the configured policies.

```
➜ mychart conftest test --policy . deployment.yaml

FAIL - deployment.yaml - main - Containers must not run as root
FAIL - deployment.yaml - main - image 'nginx' doesn't come from myorg.com repository

2 tests, 0 passed, 0 warnings, 2 failures, 0 exceptions
```

You can write custom policies like this for all your resources and execute them against Kubernetes manifests before deployment. That's pretty cool.

Schema Validations for Custom Values

As I explained, you can verify Kubernetes manifests using kubeval and conftest, but when you are creating a Helm chart you also need to let the users of the chart supply custom values for new features. You need to validate that those custom values are in the correct format to be consumed by the chart; otherwise chart rendering fails, which is very difficult to debug. How can we apply a first level of defense that ensures the provided custom values are well formed and otherwise errors out with a proper validation message?

Helm provides a schema validation feature: you add a schema file to your chart containing rules for all your custom values. Helm validates the supplied values against this schema before executing any of these commands.

  • helm lint
  • helm template
  • helm install
  • helm upgrade

Here are a few use cases involving custom values.

  • Users should be able to specify memory and cpu requirements.

  • Some users want to specify different log locations for application logs.

  • Users want to supply environment variables for the application container.

Let's implement the first use case: if the user specifies custom values for memory and cpu, the chart uses them; otherwise it sets default values.

Here is the values.schema.json file to validate the custom values against.
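
The file itself is not reproduced here; a minimal sketch matching the validation errors shown below would be:

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "memory": {
      "type": "string",
      "pattern": "^[0-9.]+[M|G]i$"
    },
    "cpu": {
      "type": "string",
      "pattern": "^[0-9.]+m*$"
    }
  }
}
```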

This is the custom-values.yaml which users can provide while consuming the chart.
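
Again a sketch: the cpu value comes from the original example, while the memory value is my assumption.

```yaml
# Both values are intentionally malformed and will fail schema validation.
memory: "512MB"   # should match ^[0-9.]+[M|G]i$, e.g. "512Mi"
cpu: "500ml"      # should match ^[0-9.]+m*$, e.g. "500m"
```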

Since the user has provided wrong custom values, rendering should fail.

```
➜ mychart helm template . -f custom-values.yaml

Error: values don't meet the specifications of the schema(s) in the following chart(s):
mychart:
- memory: Does not match pattern '^[0-9.]+[M|G]i$'
- cpu: Does not match pattern '^[0-9.]+m*$'
```

You can do any type of validation as long as it's supported by the JSON Schema specification; the only requirement is that all the rules live in a file named values.schema.json in your chart.

Unit Testing

As in any other programming language, unit tests are the first thing developers should consider in the early stages of development, but good unit testing frameworks for Helm charts are scarce.

There is a unit test framework, helm-unittest. It is a very nice framework for unit testing with lots of active development going on, so one should definitely go for it.
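
As an illustration (my example, not from the original article), a hypothetical tests/deployment_test.yaml for helm-unittest might look like this, assuming the chart exposes a replicaCount value and an nginx-based deployment. You run it with the plugin, e.g. helm unittest . from the chart directory.

```yaml
suite: deployment tests
templates:
  - deployment.yaml
tests:
  - it: renders the configured number of replicas
    set:
      replicaCount: 3
    asserts:
      - equal:
          path: spec.replicas
          value: 3
  - it: uses an nginx image by default
    asserts:
      - matchRegex:
          path: spec.template.spec.containers[0].image
          pattern: ^nginx
```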

There is another, hacky way of testing Helm charts, which is a mix of unit tests and regression tests.

The idea is very simple. For each new feature you implement in the chart, add a binding file containing its custom values and generate a fixture file from it using the helm template command; commit both the binding and its fixture to the repository. Then a simple shell script in CI regenerates the fixture from the binding on the fly against your chart changes and compares it with the committed fixture. If the fixtures differ, the test fails, and you either fix your Helm chart or, if the new output is the expected behavior, update the committed fixture (as sketched below).
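
A minimal sketch of such a CI script, assuming hypothetical tests/bindings and tests/fixtures directories:

```bash
#!/usr/bin/env bash
set -euo pipefail

# For every committed binding file, re-render the chart and compare the
# result with the committed fixture of the same name.
for binding in tests/bindings/*.yaml; do
  fixture="tests/fixtures/$(basename "$binding")"
  helm template . -f "$binding" > /tmp/rendered.yaml
  if ! diff -u "$fixture" /tmp/rendered.yaml; then
    echo "Fixture mismatch for $binding: fix the chart or update the fixture."
    exit 1
  fi
done
```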

Testing with fixtures is very helpful when refactoring, and it can also serve as a form of unit testing while writing new features.

Integration Tests using Kubetest

So far, we have tested Helm chart templates and Kubernetes manifests using different tools, but we have not verified anything by actually deploying the manifests to a Kubernetes cluster.

Why do we need integration tests?

Here are a few use cases which can only be verified by deploying resources in a cluster.

  • I have mounted a volume as writable, which I want to verify by creating a file in the volume.

  • There are some custom resources which I want to verify.

  • I want to verify the health-check of the internal load balancer created as part of Kubernetes service creation.

There are many tools available, but I really liked kubetest, which is a pytest plugin. Kubetest makes it easy to write integration tests by providing an abstraction on top of the Kubernetes client.

It provides many helper functions so you don’t need to write complex code using Kubernetes client unless it’s absolutely necessary.

It's very intuitive and fun to write integration tests using kubetest, and the code is largely self-explanatory once you look at an example.
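
Here is a small sketch of what such a test can look like (my example; it assumes the chart has already been rendered to a manifests/deployment.yaml with a single nginx replica). Run it with pytest against a real cluster, e.g. pytest --kube-config=$HOME/.kube/config.

```python
# test_deployment.py: kubetest injects the "kube" fixture into pytest tests.

def test_deployment_becomes_ready(kube):
    # Load the rendered manifest and create the Deployment in the test namespace.
    deployment = kube.load_deployment("manifests/deployment.yaml")
    deployment.create()

    # Block until the Deployment reports ready, or fail the test after 60s.
    deployment.wait_until_ready(timeout=60)

    # Inspect the pods and containers it spawned.
    pods = deployment.get_pods()
    assert len(pods) == 1

    containers = pods[0].get_containers()
    assert len(containers) == 1
```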

Summary

I have covered the basics of Helm charts and the various tools you can use for different types of testing, including unit tests and integration tests, across the lifecycle from developing to releasing Helm charts.

If anything is unclear, don't miss the GitHub repository I have used for the examples throughout the article.

Hope you enjoyed it. Cheers!

