FlareCanary
Kubernetes 1.36 Removed gitRepo Volumes — Your Helm Charts Pass Validation, Your Pods Don't Schedule

Kubernetes v1.36 shipped on April 22, 2026, and one removal is worth a runbook entry on its own: the in-tree gitRepo volume driver is gone.

gitRepo was the volume type that let a pod clone a git repository at startup straight into a mounted volume. It had been deprecated since v1.11 (mid-2018), flagged for security issues (CVE-2024-10220, the symlink escape that let containers traverse out of the cloned tree), and finally retired in 1.36. There is no replacement field. There is no compatibility shim. The schema validator on a 1.36 API server still accepts the field — it has to, because old manifests in etcd reference it — but the kubelet refuses to mount it.

The result is a deploy-time failure that sails through every pre-deploy gate.

The CI/CD pipeline that lies to you

A Helm chart that includes a gitRepo volume looks fine to every tool in the chain:

volumes:
  - name: app-config
    gitRepo:
      repository: "https://github.com/acme/config-repo.git"
      revision: "main"

helm lint passes. helm template renders. kubectl apply --dry-run=server against a 1.36 cluster returns pod/foo created (dry run) — because the API server still validates the schema. The CI pipeline goes green.

At actual apply time on a 1.36 node, the kubelet refuses to mount the volume and the pod sits in ContainerCreating forever, with events that say:

Warning  FailedMount  kubelet  MountVolume.SetUp failed for volume "app-config":
gitRepo volume plugin is no longer supported

Nothing in the Helm chart's metadata indicates that this volume type is gone. Nothing in the chart's Chart.yaml signals incompatibility with k8s 1.36. The chart's kubeVersion field only protects you if the publisher maintains it, and most public charts don't update it for individual volume removals.
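For contrast, this is what a chart that did declare the incompatibility would look like. A sketch of a hypothetical Chart.yaml (the chart name and version are made up for illustration); Helm does enforce kubeVersion at install time, but only when the field is present and current:

```yaml
# Chart.yaml (hypothetical legacy chart, still shipping a gitRepo volume)
apiVersion: v2
name: app-config-chart          # assumed name for illustration
version: 1.4.2
# Helm refuses to install outside this range, failing fast instead of
# letting the pod hang in ContainerCreating on a 1.36 node.
kubeVersion: ">=1.21.0-0 <1.36.0-0"
```

In practice almost no published chart carries a guard this narrow, which is exactly why the failure sails through.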

Where this hides in real codebases

Three places where gitRepo volumes survive in 2026 codebases:

  1. Internal "config-as-code" sidecars. Pre-2020 pattern: a sidecar mounts a gitRepo volume to pull the latest config repo on pod restart. Replaced in most teams by ConfigMaps or Vault, but legacy clusters and forgotten staging environments often still run it.
  2. Helm charts pinned to an old version. Charts on Artifact Hub from 2018-2020 era that haven't been re-published. helm install against a pinned version still pulls the old manifest. The chart's stated kubeVersion range often hasn't been narrowed to exclude 1.36.
  3. Custom operators that generate Pod specs. Operators written by platform teams that emit gitRepo volumes for config-loading. The operator itself doesn't fail upgrade, but the pods it generates do — and the operator's own readiness probe is usually unaware that its child workloads aren't running.

The third category is the most painful. The operator reports "healthy." The CRD instances are Reconciled. Only when you look at the pods directly do you see they've been stuck ContainerCreating for hours.

The migration that's not a search-and-replace

Kubernetes' own deprecation notice points at three replacements: an init container that runs git clone, the git-sync sidecar project, or a GitOps operator such as Flux or Argo CD. Each one has different semantics from gitRepo:

| Approach | Volume mode | Auth | Refresh |
| --- | --- | --- | --- |
| gitRepo (removed) | Cloned once at pod create | None (public repos only) | Pod restart only |
| Init container | Cloned once at pod create | Pass via secret-mounted volume | Pod restart only |
| git-sync sidecar | Continuously synced | Service account, SSH key, or PAT | Configurable interval |
| Argo/Flux operator | Cluster-wide reconcile | OIDC / GitHub App | Continuous |

The init-container approach is closest to drop-in, but it requires an emptyDir volume that the init container writes into and the main container reads from, plus a secret mount if your repo isn't public. (gitRepo never supported auth, so anyone migrating from it is by definition working with public repos; that also means they inherit the same supply-chain exposure that made gitRepo unsafe in the first place.)
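A minimal sketch of the init-container replacement, reusing the volume name and repo URL from the chart snippet earlier; the application image and mount paths are assumptions, and alpine/git is just one commonly used image with a git binary:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
    - name: app-config
      emptyDir: {}                       # replaces the gitRepo volume
  initContainers:
    - name: clone-config
      image: alpine/git:latest           # entrypoint is git; any git-capable image works
      args: ["clone", "--depth=1", "--branch", "main",
             "https://github.com/acme/config-repo.git", "/config"]
      volumeMounts:
        - name: app-config
          mountPath: /config             # init container writes here
  containers:
    - name: app
      image: acme/app:latest             # assumed application image
      volumeMounts:
        - name: app-config
          mountPath: /etc/app-config     # main container reads the clone
          readOnly: true
```

The semantics match gitRepo exactly: one clone at pod create, refresh only on pod restart, and for a private repo you would additionally mount a credential secret into the init container.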

git-sync is the closest in spirit. It's an actively maintained project, lives as a sidecar, and does periodic refresh. But it changes pod resource accounting, and on small clusters the extra container can push pods over their memory limits.
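A git-sync sidecar sketch, assuming the v4 flag names (--repo, --ref, --period, --root, --link); check the project's README for the flags and current release tag of the version you deploy:

```yaml
spec:
  volumes:
    - name: app-config
      emptyDir: {}
  containers:
    - name: git-sync
      image: registry.k8s.io/git-sync/git-sync:v4.2.3  # pin whatever tag the registry actually carries
      args:
        - --repo=https://github.com/acme/config-repo.git
        - --ref=main
        - --period=60s        # the periodic refresh gitRepo never had
        - --root=/tmp/git
        - --link=config       # stable symlink the app container follows
      resources:              # the extra accounting the paragraph above warns about
        requests: {memory: "32Mi", cpu: "10m"}
        limits: {memory: "64Mi"}
      volumeMounts:
        - name: app-config
          mountPath: /tmp/git
    - name: app
      image: acme/app:latest  # assumed application image
      volumeMounts:
        - name: app-config
          mountPath: /etc/app-config
          readOnly: true
```

The resources stanza is the part to watch on small clusters: the sidecar's requests count against the pod, which is how it can tip workloads over their limits.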

The Helm-chart fix isn't a one-line swap. Expect it to touch the pod spec, add an emptyDir volume, add an init container or sidecar, configure auth if the repo isn't public, and update probes if your liveness check assumed instant volume availability at container start.

What to do this week

Three actions, in order:

  1. Grep your manifests, charts, and operator code for gitRepo:. If you're still on 1.35 or earlier, you have until your next upgrade to fix it. If you're on 1.36 already and apply hasn't broken yet, you're coasting on pods that haven't restarted since the upgrade; the first node drain, eviction, or rollout will surface the FailedMount.
  2. Audit your operator-generated Pod specs. Run kubectl get pods -A -o json | jq '.items[].spec.volumes[]? | select(.gitRepo)' against your clusters. Anything that comes back is a future FailedMount.
  3. Pin chart versions explicitly with kubeVersion guards. When you migrate, narrow the chart's kubeVersion to < 1.36 for the legacy version and bump the major for the migrated version. This stops helm upgrade --install from silently rolling forward to a broken combination.
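Step 1 can be sketched as a grep over your sources. The demo below writes a sample manifest into a temp directory so it runs standalone; in practice, point the grep at your repo root, and also grep rendered helm template output to catch values-driven volumes:

```shell
# Demo setup: a sample manifest in a temp dir (replace with your repo root).
audit_dir=$(mktemp -d)
cat > "$audit_dir/deploy.yaml" <<'EOF'
volumes:
  - name: app-config
    gitRepo:
      repository: "https://github.com/acme/config-repo.git"
EOF

# Any file this prints still carries the removed volume type.
grep -rln "gitRepo:" "$audit_dir"
```

The same pattern applied to rendered charts would be something like `helm template my-release ./charts/app | grep -n "gitRepo:"` (release and chart path are placeholders).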

Bonus item for the same runbook page: Ingress NGINX was retired entirely on March 24, 2026, with no more releases, bugfixes, or security updates. If your cluster's Ingress controller is ingress-nginx (the Kubernetes-project one, not NGINX Inc.'s nginx-ingress), plan the migration to InGate or NGINX Gateway Fabric alongside the gitRepo work.

The "everything passes, then breaks at deploy" pattern is what makes K8s upgrades load-bearing for an entire engineering org. The 1.36 gitRepo removal is the cleanest example we've seen this year of CI/CD validation that proves nothing about runtime behavior.

We built FlareCanary for the API-side version of this same pattern: schema accepts the request, response shape passes validation, but the field semantics changed. Same problem, different layer.


If your org runs Kubernetes 1.35 or earlier and any team has a gitRepo volume in their chart, mark the next 1.36 upgrade window as a known-risk deploy. Operator-generated pods are the easiest to miss.
