The Problem: GitOps Auto-Sync Doesn't Work with Helm Values
If you're using GitHub Actions Runner Controller (ARC) v2 with ArgoCD for GitOps deployment, you've probably hit this frustrating wall:
ArgoCD doesn't auto-sync when you change Helm values.
You modify your runner configuration in Git, push, and... nothing happens. ArgoCD shows "Synced" but your changes aren't applied. You're forced to manually trigger sync every single time.
Why? Because ArgoCD's Git polling doesn't detect changes in inline Helm values or even external valueFiles. It's a known limitation when using Helm charts with ArgoCD.
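For context, here is a minimal sketch of the kind of setup that hits this wall: the gha-runner-scale-set chart pulled from GitHub's OCI registry, with the runner configuration living as inline Helm values inside the Application spec. The repo URL, the values, and the assumption that the OCI registry is registered as a Helm repo in ArgoCD are illustrative, not prescriptive:

# Illustrative only: an Application that installs the runner chart with inline
# Helm values -- edits to these values are exactly what ArgoCD fails to pick up.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: github-runners
  namespace: argocd
spec:
  project: default
  source:
    repoURL: ghcr.io/actions/actions-runner-controller-charts
    chart: gha-runner-scale-set
    targetRevision: 0.9.2
    helm:
      values: |
        githubConfigUrl: https://github.com/your-org
        githubConfigSecret: github-token
        minRunners: 1
        maxRunners: 10
  destination:
    server: https://kubernetes.default.svc
    namespace: arc-runners
  syncPolicy:
    automated:
      prune: true
      selfHeal: true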
The Journey: From Helm to Pure YAML
What We Tried (And Failed)
- GitHub Webhooks ❌ - Won't work if your cluster is in a private network
- External valueFiles ❌ - ArgoCD still doesn't detect changes reliably
- Checksum annotations ❌ - Too complex, requires a CI/CD pipeline (sketched below)
- Manual sync scripts ❌ - Defeats the purpose of GitOps
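To give a sense of why the checksum route collapses under its own weight: you end up with a CI job that hashes the values file and commits the hash back into a manifest, just so ArgoCD sees a diff. A purely hypothetical GitHub Actions step (paths and names invented):

# Hypothetical workflow step: stamp a checksum of the Helm values into the
# Application manifest so ArgoCD notices a change. It works, but your "GitOps"
# flow now depends on CI rewriting manifests and pushing extra commits.
- name: Stamp values checksum
  run: |
    CHECKSUM=$(sha256sum runners/values.yaml | cut -d' ' -f1)
    yq -i ".metadata.annotations[\"values-checksum\"] = \"$CHECKSUM\"" runners/application.yaml
    git commit -am "chore: bump runner values checksum"
    git push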
The Breakthrough: Use AutoScalingRunnerSet Directly
ARC v2 uses a Custom Resource called AutoScalingRunnerSet. While the official docs recommend deploying it via the gha-runner-scale-set Helm chart, you can actually deploy it as a pure YAML manifest.
Benefits:
- ✅ ArgoCD detects changes immediately
- ✅ Auto-sync works perfectly
- ✅ No Helm templating complexity
- ✅ Direct control over runner configuration
- ✅ True GitOps experience
But there's a catch...
The Mystery: Controller Keeps Deleting the Resource
When we tried deploying AutoScalingRunnerSet as a pure YAML manifest, something strange happened:
$ kubectl apply -f autoscaling-runner-set.yaml
autoscalingrunnerset.actions.github.com/runners created
$ kubectl get autoscalingrunnersets
No resources found.
The resource disappeared immediately!
Investigating the Controller Logs
$ kubectl logs deployment/gha-rs-controller -n arc-system
INFO  Autoscaling runner set version doesn't match the build version. Deleting the resource.  {"targetVersion": "0.9.2", "actualVersion": ""}
Aha! The controller was checking for a version and finding it empty, then immediately deleting the resource.
The Solution: A Hidden Label Requirement
After diving into the controller source code, we found the culprit:
if !v1alpha1.IsVersionAllowed(autoscalingRunnerSet.Labels[LabelKeyKubernetesVersion], build.Version) {
    // Delete the resource
}
The controller checks for a specific label!
We used helm template to see what labels the Helm chart adds:
$ helm template test oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set \
    --version 0.9.2 \
    --set githubConfigUrl=https://github.com/myorg \
    --set githubConfigSecret=github-token \
    --set controllerServiceAccount.name=gha-rs-controller \
    --set controllerServiceAccount.namespace=arc-system
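In the rendered output, look at the AutoscalingRunnerSet's metadata. Abridged here, and the exact set of labels varies with chart version and release name, but it includes labels along these lines:

apiVersion: actions.github.com/v1alpha1
kind: AutoscalingRunnerSet
metadata:
  labels:
    app.kubernetes.io/component: autoscaling-runner-set
    app.kubernetes.io/name: test
    app.kubernetes.io/version: "0.9.2"
  # ...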
Key finding: the Helm chart adds the label app.kubernetes.io/version: "0.9.2"!
The Working Manifest
Here's a complete, working AutoScalingRunnerSet manifest that you can deploy with ArgoCD:
apiVersion: actions.github.com/v1alpha1
kind: AutoscalingRunnerSet
metadata:
  name: github-runners
  namespace: arc-runners
  labels:
    # CRITICAL: Controller checks this label for version compatibility
    app.kubernetes.io/version: "0.9.2"
    app.kubernetes.io/name: github-runners
    app.kubernetes.io/component: autoscaling-runner-set
  annotations:
    # Optional: Bump this to force a rollout without changing spec
    rollout-token: "2025-11-11"
spec:
  # GitHub configuration
  githubConfigUrl: "https://github.com/your-org"
  githubConfigSecret: github-token

  # Runner scale set name (appears in GitHub UI)
  runnerScaleSetName: "k8s-runners"

  # Runner group
  runnerGroup: "default"

  # Scaling configuration
  minRunners: 1
  maxRunners: 10

  # Runner pod template
  template:
    spec:
      containers:
        - name: runner
          image: ghcr.io/actions/actions-runner:latest
          command: ["/home/runner/run.sh"]
          # Docker-in-Docker configuration
          env:
            - name: DOCKER_HOST
              value: tcp://localhost:2376
            - name: DOCKER_TLS_VERIFY
              value: "1"
            - name: DOCKER_CERT_PATH
              value: /certs/client
          volumeMounts:
            - name: docker-certs
              mountPath: /certs/client
              readOnly: true
            - name: work
              mountPath: /home/runner/_work
          resources:
            requests:
              cpu: 250m
              memory: 1Gi
            limits:
              memory: 3Gi

        # Docker-in-Docker sidecar
        - name: dind
          image: docker:24-dind
          securityContext:
            privileged: true
          env:
            - name: DOCKER_TLS_CERTDIR
              value: /certs
          volumeMounts:
            - name: docker-certs
              mountPath: /certs/client
            - name: docker-storage
              mountPath: /var/lib/docker
            - name: work
              mountPath: /home/runner/_work
          resources:
            requests:
              cpu: 250m
              memory: 2Gi
            limits:
              memory: 7Gi

      volumes:
        - name: docker-certs
          emptyDir: {}
        - name: docker-storage
          emptyDir: {}
        - name: work
          emptyDir: {}
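Before wiring it into ArgoCD, you can sanity-check the manifest against the cluster. This assumes the ARC controller (and therefore the CRD) is already installed; a server-side dry run never persists the object, so the controller's version check doesn't get a chance to delete anything:

$ kubectl apply --dry-run=server -f autoscaling-runner-set.yaml
autoscalingrunnerset.actions.github.com/github-runners created (server dry run)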
ArgoCD Application Configuration
Deploy this with ArgoCD:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: github-runners
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/infra
    targetRevision: main
    path: platform/github-runners
    directory:
      include: "autoscaling-runner-set.yaml"
  destination:
    server: https://kubernetes.default.svc
    namespace: arc-runners
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
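The Application itself is just another Kubernetes object. Assuming you save it as github-runners-app.yaml (the filename is arbitrary), bootstrap it once and let ArgoCD manage everything from there:

$ kubectl apply -f github-runners-app.yaml
application.argoproj.io/github-runners created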
How It Works
- You modify autoscaling-runner-set.yaml in Git
- You push to your repository
- ArgoCD detects the change (within 3 minutes with default polling)
- Auto-sync triggers automatically
- Runners update without manual intervention
True GitOps! ✅
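One tuning note: the roughly 3-minute delay comes from ArgoCD's default repository polling interval. If that feels slow, it can be shortened via timeout.reconciliation in the argocd-cm ConfigMap (60s here is just an example; the application controller may need a restart to pick up the change):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # Poll Git every 60 seconds instead of the default 180
  timeout.reconciliation: 60s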
Important Notes
Version Label is Critical
The app.kubernetes.io/version label must match your ARC controller version:
$ kubectl get deployment gha-rs-controller -n arc-system -o yaml | grep image:
image: ghcr.io/actions/actions-runner-controller:0.9.2
If your controller is version 0.9.2, use:
labels:
  app.kubernetes.io/version: "0.9.2"
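If you'd rather not grep the image tag, and assuming the controller Deployment carries the standard chart labels, you can read the version straight from its metadata:

$ kubectl get deployment gha-rs-controller -n arc-system \
    -o jsonpath='{.metadata.labels.app\.kubernetes\.io/version}'
0.9.2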
Don't Specify listenerTemplate
The listener pod is managed automatically by the controller. Don't add a listenerTemplate section to your manifest - it will cause image pull errors.
Controller Must Be Installed First
Install the ARC controller before deploying runners:
$ helm install arc \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller \
    --namespace arc-system \
    --create-namespace
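Give the controller a quick health check before applying the runner set (the Deployment name depends on your Helm release name; in this post it shows up as gha-rs-controller):

$ kubectl get deployment -n arc-system
# Expect the controller Deployment to be READY 1/1 before moving on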
Testing Your Setup
After deploying, verify everything works:
# Check AutoScalingRunnerSet
$ kubectl get autoscalingrunnersets -n arc-runners
NAME MINIMUM MAXIMUM CURRENT STATE PENDING RUNNING
github-runners 1 10 1 1
# Check pods
$ kubectl get pods -n arc-runners
NAME READY STATUS
github-runners-55655b45-listener 1/1 Running
github-runners-fddsm-runner-pjjqx 2/2 Running
# Check in GitHub
# Go to: Settings → Actions → Runners
# You should see your runner scale set online!
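If the runner scale set doesn't show up in GitHub, the listener pod's logs (the *-listener pod from the output above) are the first place to look for registration or credential errors:

$ kubectl logs github-runners-55655b45-listener -n arc-runners --tail=50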
Benefits Over Helm Chart Approach
| Aspect | Helm Chart | Pure YAML |
|---|---|---|
| ArgoCD Auto-Sync | ❌ Doesn't work | ✅ Works perfectly |
| Configuration Changes | Manual sync needed | Automatic deployment |
| Complexity | Helm templating | Simple YAML |
| Version Control | Values in Git, hard to diff | Direct manifest, easy to diff |
| Learning Curve | Need to know Helm | Just Kubernetes |
Conclusion
By deploying AutoScalingRunnerSet as a pure YAML manifest with the correct app.kubernetes.io/version label, you get:
- True GitOps - Push to Git, auto-deploy
- Simplified Configuration - No Helm complexity
- Better Visibility - Clear, readable manifests
- Faster Iterations - No manual sync needed
This approach is not documented anywhere in the official ARC documentation, but it works perfectly and makes GitOps actually... GitOps.
Troubleshooting
Resource Gets Deleted Immediately
Symptom: AutoScalingRunnerSet disappears right after creation
Solution: Add the app.kubernetes.io/version label matching your controller version
Listener Pod Has ImagePullBackOff
Symptom: Listener pod fails with "image not found"
Solution: Remove the listenerTemplate section - let the controller manage it
ArgoCD Shows OutOfSync But Doesn't Sync
Symptom: Application stuck in OutOfSync state
Solution:
- Check that the automated block (ideally with selfHeal: true) is present in syncPolicy
- Manually trigger a refresh:
  $ kubectl annotate application github-runners -n argocd argocd.argoproj.io/refresh=hard --overwrite