One Chart to rule them all

Describing Kubernetes deployments with best practices applied is quite verbose and maintenance-intensive. Doing that for multiple applications across various stages by different teams results in lots of yam(l)mer. Helm charts provide a convenient solution. For the best deployment experience, we maintain and use a generic application chart.

Resources over Resources

Operating applications on Kubernetes beyond basic tutorials requires a few resources:

  • the Deployment itself
  • a ServiceAccount most of the time
  • one to many ConfigMaps as well as Secrets
  • Service and Ingress if anyone is supposed to interact with your deployment
  • often a [Horizontal|Vertical]PodAutoscaler referring to the deployment

As we've been adopting GitOps practices for quite some time with great success - you may have heard that 😇 - the manifests of all those resources literally have to be written into a Git repository.
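
For illustration, a per-application directory in such a repository might simply enumerate those manifests - a minimal sketch with purely hypothetical file names:

# kustomization.yaml - file names are only examples
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - serviceaccount.yaml
  - configmap.yaml
  - service.yaml
  - ingress.yaml
  - hpa.yaml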

Speaking of GitOps: for automating rollouts, even more (custom) resources are required to instruct our GitOps controllers (Flux & friends), as sketched below:

  • the ImageRepository targeting the container registry
  • an ImagePolicy declaring version patterns to automate
  • and eventually a Canary as no one wants to observe rollouts manually
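
A minimal sketch of the first two, with image name and version range as placeholders:

apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-app
spec:
  image: ghcr.io/example/my-app  # placeholder image
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-app
spec:
  imageRepositoryRef:
    name: my-app
  policy:
    semver:
      range: "~1"  # automate all releases within major version 1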

Observability - another great topic: To announce applications to the Prometheus monitoring stack, a ServiceMonitor resource comes on top.
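
Such a ServiceMonitor is small, but yet another manifest to write - roughly like this (label selector and port name are placeholders):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app  # selects the application's Service
  endpoints:
    - port: metrics  # named port on the Service
      path: /metrics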

Well... over time you'll probably add a PodDisruptionBudget or want to have a service mesh in place, adding further resources, for instance DestinationRules and VirtualServices in case of Istio.
Wait - with Istio in place, the Ingress resource gets replaced by a VirtualService.
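
A rough sketch of such a VirtualService taking over the Ingress' job (hostname, gateway and service name are placeholders):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app.example.com  # placeholder hostname
  gateways:
    - istio-system/public-gateway  # placeholder gateway
  http:
    - route:
        - destination:
            host: my-app  # the Kubernetes Service name
            port:
              number: 8080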

As we're constantly evolving, our added service mesh may get superseded by the Kubernetes Gateway API, introducing Gateway or HTTPRoute resources in place of the Istio CRDs.
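
The Gateway API equivalent looks different again - for instance an HTTPRoute (names are placeholders once more):

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
spec:
  parentRefs:
    - name: public-gateway  # placeholder Gateway
  hostnames:
    - my-app.example.com  # placeholder hostname
  rules:
    - backendRefs:
        - name: my-app  # the Kubernetes Service name
          port: 8080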

Guess you got the point. Writing all those manifests isn't even the hardest part - they need to be maintained constantly. Kubernetes APIs come and go, each with its own version lifecycle in between.

Scaling pains

Of course, some team wrote all those resource manifests for the first application that was deployed. And they quickly put them into a Helm chart when it came to replicating the deployment across multiple stages. The Helm chart could also be reused for the team's further applications, because - honestly - all applications ("microservices") look very similar.
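
To illustrate the idea: templating turns the hard-coded fields into values that differ per stage or application - a minimal, hypothetical excerpt of such a template:

# templates/deployment.yaml (excerpt, value names are hypothetical)
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"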

To ensure that changes to the templated manifests don't break everything immediately, the team linted their Helm chart using different tools that hint at upcoming Kubernetes API changes or recommend best practices.

That's all nice and works out well - if only there weren't other teams doing exactly the same, just slightly differently. After a while, there were countless charts around that all served the same purpose but varied only slightly.

Like everywhere else, reuse was practiced by duplication, yet we couldn't simply copy solutions between teams because of the minor differences in how our applications were declared.

Combine and generalize

So we did the obvious thing: we took the best from all those Helm charts and brewed a generic application chart.

This chart even became our first open-sourced component. It describes a typical application workload, incorporating the requirements of many, many product teams - and guess what: they're all nearly the same.

Our application chart is tested against the latest seven Kubernetes versions (1.23 to 1.29 at the time of writing) in lots of variations using the Helm chart-testing project. Each permutation is also validated using a specialized Kubernetes manifest linter for best practices and security hardening.

Every time someone requires something not yet possible, the chart is improved for the benefit of all. In the meantime, quite an impressive feature list has accumulated, which you can easily study by reading its values.yaml - which also serves as the chart's documentation. Some highlights are:

  • Init container and sidecar configuration without hassle
  • Google Kubernetes Engine (GKE) support for workload identity and ingress config
  • Automated canary deployment support using Flagger, on top of service meshes (Istio or Linkerd) as well as the Gateway API
  • Various configuration, secret and volume mount options
  • Job execution before rollout for preparation tasks like database schema updates

All in all, the following [custom] resource manifests are managed by the chart (at the time of writing):

  • flagger.app Canary
  • image.toolkit.fluxcd.io ImagePolicy
  • image.toolkit.fluxcd.io ImageRepository
  • cloud.google.com BackendConfig
  • networking.istio.io DestinationRule
  • networking.istio.io VirtualService
  • ConfigMap
  • apps Deployment
  • autoscaling HorizontalPodAutoscaler
  • gateway.networking.k8s.io HTTPRoute
  • networking.k8s.io NetworkPolicy
  • policy PodDisruptionBudget
  • batch Job (as pre-install,pre-upgrade Helm hook)
  • Secret
  • ServiceAccount
  • Service
  • monitoring.coreos.com ServiceMonitor

Needless to say, the highest available API version is also respected, so full compatibility with all supported Kubernetes releases is ensured.
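
Inside the templates, this boils down to Helm's capability checks - a simplified sketch of the pattern:

{{- /* pick the newest HPA API the cluster supports */ -}}
{{- if .Capabilities.APIVersions.Has "autoscaling/v2" }}
apiVersion: autoscaling/v2
{{- else }}
apiVersion: autoscaling/v2beta2
{{- end }}
kind: HorizontalPodAutoscaler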

Sample time

As stated earlier, this chart, which we're using for a wide spectrum of application types across different Kubernetes versions, is available for everyone.

The chart's repository can be added using the Helm CLI

helm repo add mms-tech https://helm-charts.mms.tech

or Flux

---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: mms-tech
  namespace: default
spec:
  interval: 120m
  url: https://helm-charts.mms.tech
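
With the repository in place, a plain Helm CLI install could then look like this (release name, namespace and values file are just placeholders):

helm install my-app mms-tech/application --namespace app --values my-values.yaml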

To give you an impression of its usage, this is how our (well-known and beloved) GitHub app Technolinator is configured:

apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: technolinator
  namespace: app
spec:
  chart:
    spec:
      chart: application
      version: "~1"
      sourceRef:
        kind: HelmRepository
        name: chart-repo
        namespace: default
      interval: 60m
  interval: 10m
  values:
    image:
      repository: ghcr.io/mediamarktsaturn/technolinator
      tag: 1.48.11 # {"$imagepolicy": "app:technolinator:tag"}
      tagSemverRange: "~1"
    secretEnvFrom:
      - technolinator-config
    resources:
      requests:
        cpu: "1"
        memory: 10Gi
      limits:
        cpu: "6"
        memory: 35Gi
    container:
      port: 8080
    livenessProbe:
      path: /q/health/live
    readinessProbe:
      path: /q/health/ready
    monitoring:
      serviceMonitor: true
      metricsPath: /q/metrics
    podSecurityContext:
      runAsUser: 201
      runAsGroup: 101
      fsGroup: 101
      fsGroupChangePolicy: Always
    volumeMounts:
      - name: data
        pvcName: technolinator-data
        mountPath: /data
    configuration:
      APP_ANALYSIS_TIMEOUT: 120M
      APP_PROCESS_LOGLEVEL: DEBUG
      APP_PULL_REQUESTS_CONCURRENCY_LIMIT: 10

Again: all features are explorable in the values.yaml.
We'd be happy if this chart could also help others. And we welcome contributions of any kind for further improvements and use cases we haven't considered yet.

The end

There's a very divisive discussion around such generic (monster/mega) charts. Sure, unique applications shipped by their own projects come with their own charts, and it doesn't make any sense to squeeze them into a generic one.

But in our opinion it's different for a company's internal business applications. Using our generic chart is a matter of hardening, reuse and easy knowledge sharing around application deployment configuration. The first team demanding a certain setup crafts it once, for everyone's benefit.

Especially when it comes to repetitive complex configurations like sidecars used for integration with databases or database schema update jobs, there's no need to reinvent the wheel over and over.

Reach out (for instance via GitHub issues or discussions) if you'd like to discuss with us.


get to know us 👉 https://mms.tech 👈

Top comments (4)

Philip Miglinci

Hi Florian,

That's an interesting concept! If you only create one Helm chart, all these components will be installed in the same namespace, right? What if multiple applications depend on the same base components?

Florian Heubeck

Some manifests (e.g. the ServiceMonitor, or the ImagePolicy) have configurable namespaces, but yes, generally everything goes into the same namespace.
What do you consider a base component?

Philip Miglinci

Operators usually prefer to be installed in a dedicated operator-system namespace, and if multiple applications require a specific version of that operator, the Helm chart always needs to be touched. I'd rather prefer to have multiple smaller packages that depend on each other.

A base component could be the cert-manager, kube-prometheus stack or any other operator.

Florian Heubeck

Think I got it.

Our application Helm chart is mostly used for our own applications, only in rare cases for 3rd-party ones.
For those, we of course use the charts provided, for instance, by the prometheus-community project.

In terms of version compatibility with, for instance, the ServiceMonitor or other external CRDs, we use .Capabilities.APIVersions.Has checks so that the appropriate API version is addressed in each environment.