ArgoCD 3.3 PreDelete Hook - Transforming GitOps Deletion into a Safe Lifecycle
ArgoCD 3.3: The Era of Complete GitOps Lifecycle Management
On February 28, 2026, ArgoCD 3.3 was officially released. Rather than sweeping architecture changes, the release focuses on precisely filling in features that practitioners have long demanded. The most significant of these is the introduction of the PreDelete Hook.
In a GitOps workflow, deletion has always been the riskiest operation. Stateful databases, persistent volumes, and applications with external system dependencies disappearing without warning have repeatedly caused data loss and left dependent microservices broken without notice. ArgoCD 3.3 elevates this dangerous deletion operation into an explicit lifecycle stage.
Now you can declaratively define the policy: "this action must be performed before deletion."
Understanding PreDelete Hook: Completing the Sync Lifecycle
ArgoCD previously provided a three-stage hook sequence: PreSync → Sync → PostSync. With the addition of PreDelete in 3.3, the application lifecycle is now covered end to end.
Application Lifecycle
────────────────────────────
On Create/Update:
  PreSync Hook → Sync → PostSync Hook
On Delete (3.3+):
  PreDelete Hook → (actual deletion proceeds) → (PostDelete not yet available)
When a deletion request arrives, the PreDelete Hook executes a specified Kubernetes Job before any resources are actually removed. Deletion proceeds only if this Job succeeds; if it fails, the deletion itself is blocked. This fully realizes GitOps's "declarative infrastructure" philosophy for the last stage of an application's life.
Practical Application: 5 PreDelete Hook Examples
1) Data Protection for Stateful Applications
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-delete-db-backup
  annotations:
    argocd.argoproj.io/hook: PreDelete
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          # Note: postgres:16-alpine ships pg_dump but not the AWS CLI;
          # for the S3 upload below, use an image that bundles both or install awscli.
          image: postgres:16-alpine
          command:
            - sh
            - -c
            - |
              echo "[$(date)] PreDelete: Starting database backup..."
              # Create PostgreSQL dump
              pg_dump "$DATABASE_URL" > /backup/dump-$(date +%Y%m%d%H%M%S).sql
              # Upload to S3
              aws s3 cp /backup/ s3://backups/pre-delete/ --recursive
              # Success log
              echo "[$(date)] PreDelete: Backup complete, allowing deletion to proceed"
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: aws-credentials
                  key: access_key
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: aws-credentials
                  key: secret_key
          volumeMounts:
            - name: backup-storage
              mountPath: /backup
      volumes:
        - name: backup-storage
          emptyDir:
            sizeLimit: 10Gi
The key is a single annotation: argocd.argoproj.io/hook: PreDelete. It follows the same pattern as the existing PreSync/PostSync Hooks, so the learning cost is minimal.
argocd.argoproj.io/hook-delete-policy controls when the hook Job itself is cleaned up after running; it does not decide whether the application deletion proceeds. The options are:
HookSucceeded: delete the hook Job once it has succeeded, so failed Jobs stay behind for debugging (recommended)
HookFailed: delete the hook Job if it failed
BeforeHookCreation: delete any leftover hook Job just before a new one is created (the default behavior)
2) Service Mesh Traffic Draining
In Istio environments, when removing an app, ensure in-flight requests are completed and no new requests are accepted.
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-delete-drain-traffic
  annotations:
    argocd.argoproj.io/hook: PreDelete
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: argocd-drain-traffic
      containers:
        - name: drain
          image: bitnami/kubectl:latest
          command:
            - /bin/sh
            - -c
            - |
              echo "PreDelete: Starting Istio traffic drain..."
              # Block new connections by setting the connection pool limit to 0
              # (custom resources require a merge patch, not the default strategic patch)
              kubectl patch destinationrule api-destination --type merge \
                -p '{"spec":{"trafficPolicy":{"connectionPool":{"tcp":{"maxConnections":0}}}}}'
              # Wait for in-flight requests to complete (30 seconds)
              sleep 30
              echo "PreDelete: Traffic drain complete"
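For the drain Job above to patch a DestinationRule, its ServiceAccount needs RBAC permissions in the target namespace. A minimal sketch (the names mirror the example; adjust the namespace to wherever the DestinationRule lives):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-drain-traffic
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-drain-traffic
rules:
  # Only what the drain script needs: read and patch DestinationRules
  - apiGroups: ["networking.istio.io"]
    resources: ["destinationrules"]
    verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-drain-traffic
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argocd-drain-traffic
subjects:
  - kind: ServiceAccount
    name: argocd-drain-traffic
```

Keeping the Role this narrow means a compromised or buggy hook Job cannot touch anything beyond the one resource it is supposed to drain.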
3) External System Notification and Logging
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-delete-notification
  annotations:
    argocd.argoproj.io/hook: PreDelete
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: notify
          image: curlimages/curl:latest
          command:
            - /bin/sh
            - -c
            - |
              # The service account mount exposes the namespace name,
              # so this is the namespace, not the Application name
              APP_NAME="$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)"
              TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)
              # Slack notification
              curl -X POST "$SLACK_WEBHOOK" \
                -H 'Content-Type: application/json' \
                -d "{
                  \"text\": \"🗑️ Application deletion in progress\",
                  \"blocks\": [
                    {
                      \"type\": \"section\",
                      \"text\": {
                        \"type\": \"mrkdwn\",
                        \"text\": \"*$APP_NAME* is being deleted\n_Time: $TIMESTAMP_\"
                      }
                    }
                  ]
                }"
              # PagerDuty notification (optional)
              # curl -X POST https://api.pagerduty.com/incidents ...
          env:
            - name: SLACK_WEBHOOK
              valueFrom:
                secretKeyRef:
                  name: slack-credentials
                  key: webhook-url
4) DNS/CDN Cleanup
Clean up external resources like Route53 and CloudFront when app is deleted.
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-delete-cleanup-dns
  annotations:
    argocd.argoproj.io/hook: PreDelete
spec:
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: pre-delete-cleanup
      containers:
        - name: cleanup
          image: amazon/aws-cli:latest
          command:
            - /bin/bash
            - -c
            - |
              echo "PreDelete: Starting DNS/CDN cleanup..."
              # Remove record from Route53
              aws route53 change-resource-record-sets \
                --hosted-zone-id "$HOSTED_ZONE_ID" \
                --change-batch '{
                  "Changes": [{
                    "Action": "DELETE",
                    "ResourceRecordSet": {
                      "Name": "'$DOMAIN_NAME'",
                      "Type": "CNAME",
                      "TTL": 300,
                      "ResourceRecords": [{"Value": "'$LB_HOSTNAME'"}]
                    }
                  }]
                }'
              # Invalidate CloudFront cache
              aws cloudfront create-invalidation \
                --distribution-id "$DISTRIBUTION_ID" \
                --paths "/*"
              echo "PreDelete: DNS/CDN cleanup complete"
          env:
            - name: HOSTED_ZONE_ID
              valueFrom:
                configMapKeyRef:
                  name: dns-config
                  key: hosted_zone_id
            - name: DOMAIN_NAME
              valueFrom:
                configMapKeyRef:
                  name: dns-config
                  key: domain
            # LB_HOSTNAME and DISTRIBUTION_ID are also referenced above and must be
            # provided the same way (the keys shown here are illustrative)
            - name: LB_HOSTNAME
              valueFrom:
                configMapKeyRef:
                  name: dns-config
                  key: lb_hostname
            - name: DISTRIBUTION_ID
              valueFrom:
                configMapKeyRef:
                  name: dns-config
                  key: distribution_id
5) Compliance Audit Logging
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-delete-audit-log
  annotations:
    argocd.argoproj.io/hook: PreDelete
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: audit
          # This script only calls the aws CLI, so use an image that ships it
          # (curlimages/curl does not)
          image: amazon/aws-cli:latest
          command:
            - /bin/sh
            - -c
            - |
              TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)
              # The service account mount exposes the namespace name
              APP_NAME=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
              # Log audit event to CloudWatch
              aws logs put-log-events \
                --log-group-name /argocd/deletion-audit \
                --log-stream-name production \
                --log-events \
                timestamp=$(date +%s000),message="{\"event\": \"app_deletion\", \"app\": \"$APP_NAME\", \"timestamp\": \"$TIMESTAMP\", \"deletedBy\": \"argocd-automated\"}"
              # Also back up to S3
              echo "{
                \"timestamp\": \"$TIMESTAMP\",
                \"application\": \"$APP_NAME\",
                \"action\": \"PreDelete Hook executed\",
                \"status\": \"success\"
              }" | aws s3 cp - s3://audit-logs/argocd/$APP_NAME-$TIMESTAMP.json
💡 Practical Tip: Unlike the existing PreSync/PostSync Hooks, a PreDelete Hook must be designed very carefully. Critical operations like backups and traffic draining must be guaranteed to succeed, and on failure the Hook must emit clear error messages so the operations team can intervene manually.
ArgoCD 3.3 Major Improvements Summary
1) OIDC Background Token Refresh
The sudden-logout problem when integrating with OIDC providers like Keycloak and Okta has been resolved. ArgoCD 3.3 automatically refreshes OIDC tokens in the background before they expire, and the refresh threshold is configurable.
# argocd-cm ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # OIDC token refresh lead time before expiry (default 10m)
  oidc.token.refresh.before.expiry: "10m"
2) Shallow Git Cloning
Instead of cloning the entire Git history, ArgoCD can now fetch only the commits it needs, which matters for large monorepos and long-lived projects. Repository fetch time can drop from minutes to seconds.
# argocd-cm ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Set shallow clone depth
  reposerver.git.shallow.depth: "1"
  # Can also be overridden per repository
  # repo.https://github.com/example/large-repo.git.shallow.depth: "10"
For very large repositories, set depth to 5-10 to preserve some history while improving performance.
3) Fine-grained Cluster Resource Control
AppProject's clusterResourceWhitelist has been expanded to restrict not only API groups and Kind but also individual resource names.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
spec:
  sourceRepos:
    - https://github.com/example/repo.git
  destinations:
    - namespace: team-a-*
      server: https://kubernetes.default.svc
  # Fine-grained resource control
  clusterResourceWhitelist:
    - group: ""
      kind: Namespace
      name: "team-a-*"  # Pattern matching supported
    - group: ""
      kind: ConfigMap
      name: "team-a-config"
    - group: apps
      kind: Deployment
      # If name is not specified, all Deployments are allowed
4) Native KEDA Support
ScaledObject and ScaledJob pause/resume can now be controlled directly from the UI, and an accurate health status is displayed instead of the previous 'Unknown' state.
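KEDA itself exposes pausing through an annotation on the ScaledObject; it is plausible (though an assumption here) that the UI toggle manages this annotation under the hood. A sketch of a paused ScaledObject:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: api-scaler  # hypothetical name for illustration
  annotations:
    # Real KEDA annotation: pin the workload at a fixed replica count,
    # ignoring triggers until the annotation is removed
    autoscaling.keda.sh/paused-replicas: "0"
spec:
  scaleTargetRef:
    name: api-deployment
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "80"
```

Removing the annotation (or toggling resume in the UI) hands scaling control back to the triggers.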
3.2 → 3.3 Upgrade Practical Guide
⚠️ Warning: Before upgrading to 3.3, always back up the current installation and verify the upgrade in a staging environment first. In particular, a PreDelete Hook cannot be reverted once it has executed, so it must be designed carefully.
Step 1: Version Check and Planning
# Check current ArgoCD version
argocd version
# Check current Application status
argocd app list --output wide
# Save status of major Applications (run per app)
argocd app get <app-name> > /tmp/app-backup-before-upgrade.txt
Step 2: Helm Chart Update (Recommended)
# Update Helm repository
helm repo update
# Check available versions
helm search repo argo/argo-cd --versions | head -10
# Test in staging environment first
helm upgrade argocd argo/argo-cd \
-n argocd \
-f values-staging.yaml \
--version 7.8.0 \
--dry-run \
--debug
# Actual upgrade (production)
helm upgrade argocd argo/argo-cd \
-n argocd \
-f values-prod.yaml \
--version 7.8.0 \
--wait \
--timeout 5m
# Check upgrade status
kubectl rollout status deployment/argocd-server -n argocd
kubectl rollout status statefulset/argocd-application-controller -n argocd
Step 3: Post-Upgrade Verification
# Check version
argocd version
# Re-check Application status
argocd app list --output wide
# Check sync status of major Applications (run per app)
argocd app sync <app-name> --dry-run
# Monitor Pod logs (check for issues)
kubectl logs -n argocd -l app.kubernetes.io/name=argocd-server --tail=50 -f
Step 4: Gradual PreDelete Hook Application
# Step 1: Apply first to non-critical apps in a staging environment
# Add the PreDelete Hook Job manifest under staging-app/templates/
# Step 2: Test an actual deletion in staging
argocd app delete staging-app --propagation-policy background
# Step 3: Verify Hook execution
kubectl get jobs -n staging-namespace | grep pre-delete
# Step 4: If no issues, apply to production critical apps
# Add the PreDelete Hook Job manifest under prod-app/templates/
# Step 5: Gradual rollout (canary style)
# Start with ordinary Applications, gradually expand to important apps
💡 Practical Tip: PreDelete Hook can be applied gradually to existing applications. Add Hook manifests starting with newly deployed apps, test deletion scenarios in staging, then apply to production. For existing apps, it's safer to start with less important apps.
Frequently Asked Questions
Q: What happens if PreDelete Hook fails?
The entire deletion operation is blocked. After the Job retries as many times as backoffLimit allows, if it still fails, deletion is suspended and the failure status is shown in the ArgoCD UI. Resolve the cause manually and retry the deletion.
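To keep a hung hook from blocking deletion indefinitely, it is worth bounding the Job's runtime in addition to backoffLimit. A sketch using standard Kubernetes Job fields:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-delete-db-backup
  annotations:
    argocd.argoproj.io/hook: PreDelete
spec:
  backoffLimit: 2             # retry up to 2 times on failure
  activeDeadlineSeconds: 600  # fail the Job (surfacing it in the UI) after 10 minutes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: postgres:16-alpine
          command: ["sh", "-c", "pg_dump \"$DATABASE_URL\" > /backup/dump.sql"]
```

With activeDeadlineSeconds set, a hook stuck on an unreachable database turns into an explicit failure that operators can act on, instead of a silently pending deletion.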
Q: Can I use PreDelete Hook with existing PreSync/PostSync Hooks?
Yes, all Hook types operate independently. You can define PreSync, Sync, PostSync, and PreDelete Hooks on a single application, and only the relevant Hook executes at each lifecycle stage.
Q: Is Shallow Clone recommended for all projects?
Recommended in most cases. However, custom plugins that generate manifests based on Git history may need the full history, so you should disable shallow cloning for those repositories.
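If the configuration keys shown earlier hold, opting a single history-dependent repository out might look like the fragment below. Both the per-repo key format and the use of depth "0" to mean a full clone are extrapolated from the example above, not confirmed behavior:

```yaml
# argocd-cm ConfigMap (fragment; keys and semantics are assumptions)
data:
  # Global default: shallow clone
  reposerver.git.shallow.depth: "1"
  # Per-repo override for a plugin that needs full history
  # repo.https://github.com/example/history-plugin-repo.git.shallow.depth: "0"
```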
Q: Can I define multiple PreDelete Hooks?
Yes, you can define multiple Jobs as PreDelete Hooks. However, since ArgoCD executes them in parallel, if order matters (e.g., back up data first, then clean up external services), run the steps sequentially within a single Job instead.
Conclusion: Complete GitOps Lifecycle Management
ArgoCD 3.3's PreDelete Hook elevates GitOps from a simple deployment tool to a complete infrastructure management platform. Now you can declaratively define all stages: application creation, update, and deletion.
Especially in environments with many stateful applications, the PreDelete Hook is essential. It automates and validates the most dangerous day-2 operations: data protection, service draining, and external system cleanup.
We recommend testing PreDelete Hook in staging environment right now and gradually applying it to production applications.
This article was written with AI technology support. For more cloud-native engineering insights, visit the ManoIT Tech Blog.