DEV Community

Stanislas Bruhière
Setting up a helm OCI registry with ArgoCD hosted on Azure

With an increasing number of helm charts, some configuration blocks are bound to get duplicated. This is not usually a big problem when writing Kubernetes configuration, as readability and simplicity are generally valued more than factorization.

However, keeping 10+ helm charts consistent often means a lot of copy-pasting back and forth to keep the same naming conventions, labels and logic up to date. This is why I wanted a way to share a _helpers.tpl file between multiple charts.

There are a few ways to achieve that:

  • Creating a _helpers.tpl file at the root of the helm chart repository and creating symlinks to it in every chart directory. This only works if all of your charts live in the same repository. However, some of my charts were stored alongside the application code to make deployment easier, so this solution could not work for me.

  • Creating a _helpers.tpl file in a separate repository, then using that repository as a git sub-module in every chart repository. This can (and has) worked for me in the past. However, sub-modules can be a pain to integrate into CI and into ArgoCD.

  • Using helm's OCI registry support. Bitnami actively uses this for their common chart. Since helm 3.8.0, this feature is enabled by default.

Hosting and logging into an OCI registry on Azure

OCI images are supported by default by Azure Container Registries (ACR).

Here is a minimal Terraform configuration that creates an ACR and a service principal, and exports the service principal's username and password so that the CI can log in to push OCI images and ArgoCD can pull them.

# oci-registry.tf
data "azuread_client_config" "current" {}

# Deploy the ACR
resource "azurerm_container_registry" "registry" {
  name                          = "<acr-name>"
  resource_group_name           = "<resource-group-name>"
  location                      = "<my-location>"
  admin_enabled                 = false
  sku                           = "Basic"
  public_network_access_enabled = true
  zone_redundancy_enabled       = false
}

# Deploy an application to contribute to the ACR
resource "azuread_application" "oci_contributor" {
  display_name             = "OCI contributor"
  owners                   = [data.azuread_client_config.current.object_id]
  prevent_duplicate_names  = true
  device_only_auth_enabled = true
}

# Associate an azure service principal (SP) to generate credentials
resource "azuread_service_principal" "oci_contributor" {
  application_id = azuread_application.oci_contributor.application_id
  description    = "OCI contributor"
  owners         = [data.azuread_client_config.current.object_id]
}

# Create a password for the SP
resource "azuread_service_principal_password" "oci_contributor" {
  service_principal_id = azuread_service_principal.oci_contributor.object_id
}

# Give the SP the right to contribute to the ACR
resource "azurerm_role_assignment" "oci_contributor" {
  scope                = azurerm_container_registry.registry.id
  role_definition_name = "AcrPush"
  principal_id         = azuread_service_principal.oci_contributor.object_id
  description          = "Give OCI Contributor rights to contribute to container registry"
}

# Output the SP client_id to reference it in the CI
output "oci_contributor_service_principal_client_id" {
  value = azuread_service_principal.oci_contributor.application_id
}

# Output the SP password to reference it in the CI
output "oci_contributor_service_principal_password" {
  value     = azuread_service_principal_password.oci_contributor.value
  sensitive = true
}

Once this is deployed, we can create a new repository that will contain our chart(s) we want to share.

Creating the helpers repository chart

Create a repository and populate it as follows:

.
├── .github/
│   └── workflows/
│       ├── deploy-test.yaml
│       └── release.yaml
├── common/
│   ├── templates/
│   │   └── security.yaml
│   └── Chart.yaml
├── .gitignore
└── README.md

The Chart.yaml will contain the following:

# Chart.yaml
apiVersion: v2
name: common
description: Shared helper function across different helm charts
type: application
appVersion: "1.16.0"
version: 0.1.0

Make sure to ignore the .tgz files generated when testing locally:

# .gitignore
*.tgz

Writing a simple helper function

Here we will write two simple helper functions that populate the securityContext for a pod and for a container.

# security.yaml

{{/*
# DESCRIPTION
# Generate the pod's securityContext, to comply with namespaces that have the label pod-security.kubernetes.io/enforce set to restricted
# PARAMETERS
  - user (optional): The user to run the container as. Defaults to 10000
# USAGE
# {{ include "common.security.podSecurityContext.restricted" dict | indent 4 }}
*/}}
{{- define "common.security.podSecurityContext.restricted" -}}
{{- $user := .user | default 10000 -}}
runAsNonRoot: true
runAsUser: {{ $user }}
runAsGroup: {{ $user }}
fsGroup: {{ $user }}
seccompProfile:
  type: RuntimeDefault
{{- end -}}

{{/*
# DESCRIPTION
# Generate the container's securityContext, to comply with namespaces that have the label pod-security.kubernetes.io/enforce set to restricted
# PARAMETERS
  No parameters, just include the snippet and give it an empty dict
# USAGE
# {{ include "common.security.containerSecurityContext.restricted" dict | indent 4 }}
*/}}
{{- define "common.security.containerSecurityContext.restricted" -}}
allowPrivilegeEscalation: false
capabilities:
  drop: ["ALL"]
{{- end -}}

I won't go into detail on this file as it is standard helm templating; it is just an example.
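For reference, including the pod helper with the default user renders YAML along these lines (values follow directly from the template above):

```yaml
# Result of {{ include "common.security.podSecurityContext.restricted" dict }}
runAsNonRoot: true
runAsUser: 10000
runAsGroup: 10000
fsGroup: 10000
seccompProfile:
  type: RuntimeDefault
```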

Building the OCI image manually

First, let's check that we are able to build the OCI image and push it to the registry.

Get your username and password from the two outputs of the Terraform module.

Then run:

cd common
helm registry login <acr-name>.azurecr.io --username <acr-username> --password <acr-password> 
helm package .
helm push *.tgz "oci://<acr-name>.azurecr.io/helm"

This builds your helm package and pushes it to the ACR under the helm/common path (since the chart is named common), tagged 0.1.0 as defined in Chart.yaml.

Using the common chart as a dependency

In another helm chart, you can use the common helm chart as a dependency by adding these lines in the Chart.yaml

# other-chart/Chart.yaml
[...]
dependencies:
- name: common
  repository: oci://<acr-name>.azurecr.io/helm
  version: 0.1.0

You will then need to run helm dependency update (or helm dep up if you hate typing).
This creates (or updates) the Chart.lock file, which is required for deploying the chart.
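The resulting Chart.lock looks roughly like this (the digest and timestamp are placeholders; helm computes the real values):

```yaml
# other-chart/Chart.lock (illustrative)
dependencies:
- name: common
  repository: oci://<acr-name>.azurecr.io/helm
  version: 0.1.0
digest: sha256:<computed-by-helm>
generated: "<timestamp>"
```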

Then calling the function will look something like this:

# other-chart/templates/deployment.yaml
[...]
    spec:
      securityContext:
        {{- include "common.security.podSecurityContext.restricted" (dict "user" 101) | nindent 8 }}
      containers:
[...]

Passing credentials to ArgoCD

In your ArgoCD helm chart, you will need to add the following config to the values file.

argo-cd:
  configs:
    repositories:
      helm-oci:
        username: <acr-username>
        password: <acr-password>
        url: <acr-name>.azurecr.io/helm
        type: helm
        enableOCI: "true"
        name: helm-oci
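If you manage ArgoCD repositories declaratively instead of through the chart values, the equivalent is a Kubernetes Secret labelled as a repository (secret name and namespace are illustrative):

```yaml
# A repository credential Secret, as supported by ArgoCD's declarative setup
apiVersion: v1
kind: Secret
metadata:
  name: helm-oci
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: helm-oci
  url: <acr-name>.azurecr.io/helm
  type: helm
  enableOCI: "true"
  username: <acr-username>
  password: <acr-password>
```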

Once you have updated ArgoCD with this config, you should see the following in the settings:
[Image: ArgoCD settings showing the helm-oci repository]

Automating the helm chart release

To avoid minting a billion release tags while testing, I set up an alpha/beta mechanism for the helm chart's release process:

  • When working on a branch, immediately bump the version in Chart.yaml and start working.
  • On every commit, a tag <helm chart version>-alpha is created, which you can use to test the chart that consumes the common dependency.
  • When everything has been tested, merge to main.
  • On every commit to main, a tag <helm chart version>-beta is created.
  • To do a proper release, tag the commit you want to release with v<helm chart version>; this creates the tag <helm chart version>.
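The tag computation above can be sketched as a small shell function (the function name is illustrative and not part of the workflows below):

```shell
#!/usr/bin/env sh
# Compute the OCI tag to publish for a given git ref and chart version:
# - a v<version> tag publishes <version> as-is (proper release)
# - main publishes <version>-beta
# - any other branch publishes <version>-alpha
oci_tag() {
  ref="$1"      # e.g. refs/heads/main, refs/tags/v0.2.0
  version="$2"  # version from Chart.yaml, e.g. 0.2.0
  case "$ref" in
    "refs/tags/v$version") echo "$version" ;;
    refs/heads/main)       echo "$version-beta" ;;
    refs/heads/*)          echo "$version-alpha" ;;
    *)                     return 1 ;;
  esac
}
```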

We will write two GitHub Actions workflow files:

# deploy-test.yaml
name: Build image and push to registry
on:
  push:
    branches:
      - '**'

concurrency:
  group: ${{ github.ref }}
  cancel-in-progress: true

jobs:
  package-and-push-common-main:
    runs-on: helm-helpers-release-runner
    defaults:
      run:
        working-directory: ./common
    steps:
      - uses: actions/checkout@v3
      - name: Set up Helm
        uses: azure/setup-helm@v3
      - name: Login to Azure Container Registry
        run: |
          helm registry login ${{ vars.OCI_REGISTRY_URL }} \
          --username ${{ secrets.OCI_REGISTRY_USERNAME }} \
          --password ${{ secrets.OCI_REGISTRY_PASSWORD }}
      - name: Get chart version
        id: get_chart_version
        uses: mikefarah/yq@v4.40.5
        with:
          cmd: yq e '.version' ./common/Chart.yaml
      - name: Set calculated chart version (branch)
        if: ${{ github.ref != 'refs/heads/main' }}
        run: |
          echo "CURRENT_VERSION=${{ steps.get_chart_version.outputs.result }}-alpha" >> $GITHUB_ENV
      - name: Set calculated chart version (main)
        if: ${{ github.ref == 'refs/heads/main' }}
        run: |
          echo "CURRENT_VERSION=${{ steps.get_chart_version.outputs.result }}-beta" >> $GITHUB_ENV
      - name: Build and push chart
        run: |
          helm package . --version "$CURRENT_VERSION"
          helm push "common-${CURRENT_VERSION}.tgz" "oci://${{ vars.OCI_REGISTRY_URL }}/helm"

and

# release.yaml
name: Deploy
on:
  push:
    tags:
    - v*.*.*

concurrency:
  group: ${{ github.ref }}
  cancel-in-progress: true

jobs:
  release:
    runs-on: helm-helpers-release-runner
    defaults:
      run:
        working-directory: ./common
    steps:
    - uses: actions/checkout@v3
    - name: Set up Helm
      uses: azure/setup-helm@v3
    - name: Login to Azure Container Registry
      run: |
        helm registry login ${{ vars.OCI_REGISTRY_URL }} \
        --username ${{ secrets.OCI_REGISTRY_USERNAME }} \
        --password ${{ secrets.OCI_REGISTRY_PASSWORD }}
    - name: Get chart version
      id: get_chart_version
      uses: mikefarah/yq@v4.40.5
      with:
        cmd: yq e '.version' ./common/Chart.yaml
    - name: Ensure tag matches chart version
      run: |
        current_version=${{ steps.get_chart_version.outputs.result }}
        if [[ "${{ github.ref }}" != "refs/tags/v$current_version" ]]; then
          echo "Tag does not match chart version"
          exit 1
        fi
    - name: Build and push chart
      run: |
        helm package .
        helm push *.tgz "oci://${{ vars.OCI_REGISTRY_URL }}/helm"

In the latter, I added a check to make sure the tag matches the actual chart version (mismatches happened a lot while I was working on it :)).


Learning Planet Institute

This article was written in collaboration with the Learning Planet Institute; check them out on Twitter.
