Data loss is a disaster waiting to happen. In this part of the guide, we will implement a robust, atomic backup strategy that protects your n8n workflows, credentials, Redis queues, and custom nodes.
Your Stack:
- Target: PostgreSQL (n8n data), Redis (Queue state), N8N (Custom nodes & config)
- Storage: Central Backup PVC (PG/Redis) + Direct Upload (N8N)
- Cloud: Mega.nz (Off-site with 15-day retention)
- Encryption: Optional GPG encryption for sensitive data
🎯 What You'll Accomplish in Part 4
✔️ Create a dedicated backup namespace & infrastructure
✔️ Prepare secrets for both namespaces (backup & prod)
✔️ Implement modular backup jobs (Postgres, Redis, N8N)
✔️ Use an atomic staging strategy (prevent partial file uploads)
✔️ Add optional GPG encryption (secure credentials)
✔️ Upload verified backups to Mega.nz (central & direct)
✔️ Implement dual retention (7 days local, 15 days cloud)
✔️ Master the **restore procedure** (encrypted or plain, for all apps)
🧩 Step 1: The Architecture (Hybrid Modular V3)
To avoid race conditions (uploading half-written files) and cross-namespace access issues without heavy machinery, we use a hybrid approach:
- Central jobs (Postgres & Redis): run in the `backup` namespace and write to the central `backup-pvc` (`staging` → `ready`). A central sync job uploads them.
- Direct job (N8N): runs in the `prod` namespace, mounts `n8n-pvc`, compresses, and uploads directly to Mega, avoiding RBAC/cross-namespace complexity.
- Sync job: scans only `/backup/ready` (Postgres/Redis) and uploads it to Mega.
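The "atomic" part is a directory contract on the backup PVC: jobs write in-progress files under `staging/` and only `mv` them into `ready/` once complete. A rename within the same volume is atomic, so the sync job can never pick up a half-written dump. A sketch of the layout the jobs below create:

/backup
├── staging/          # in-progress writes, never uploaded
│   ├── postgres/
│   └── redis/
└── ready/            # complete files only, the sync job uploads from here
    ├── postgres/
    └── redis/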
🧩 Step 2: Prepare Secrets
We need credentials in BOTH namespaces because the N8N backup job runs in `prod`.
1. Configure Rclone (Local Machine)
Do this on your local computer to generate the config file.
rclone config
# Name: mega
# Type: mega
# Account: Enter your Mega email/pass
Extract the config:
cat ~/.config/rclone/rclone.conf
Copy the entire output.
2. Create Rclone Secret (Server - Backup NS)
Paste the content into a file on your server:
nano rclone.conf
# ... paste content ...
kubectl create secret generic rclone-secret \
--from-file=./rclone.conf \
-n backup
3. Create Rclone Secret (Server - Prod NS)
Critical: Since the N8N job runs in prod, it needs the secret there too.
kubectl create secret generic rclone-secret \
--from-file=./rclone.conf \
-n prod
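A quick sanity check that the secret now exists in both namespaces:

kubectl get secret rclone-secret -n backup
kubectl get secret rclone-secret -n prod
# Both commands should list the secret with DATA = 1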
4. Create Encryption Secret (Optional but Recommended)
This protects your n8n credentials and workflows. (Only needed in backup namespace).
# Generate a strong passphrase
openssl rand -base64 32 > backup-passphrase.txt
# Create secret
kubectl create secret generic backup-encryption-secret \
--from-file=passphrase=backup-passphrase.txt \
-n backup
# **IMPORTANT:** Save this passphrase in a password manager!
# You will need it to restore your backups.
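If you ever lose the local passphrase file while the cluster is still healthy, you can read it back from the secret itself:

kubectl get secret backup-encryption-secret -n backup \
  -o jsonpath='{.data.passphrase}' | base64 -d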
5. Copy Database Password
The database CronJobs run in the `backup` namespace, but your database secret lives in `prod`. Copy it across, stripping the instance-specific metadata so the apply succeeds:
kubectl get secret postgres-secret -n prod -o yaml | \
  sed 's/namespace: prod/namespace: backup/' | \
  sed '/resourceVersion:/d; /uid:/d; /creationTimestamp:/d' | \
  kubectl apply -f -
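To verify the copy, diff the decoded passwords from both namespaces (this assumes bash process substitution on your local machine):

diff <(kubectl get secret postgres-secret -n prod -o jsonpath='{.data.password}') \
     <(kubectl get secret postgres-secret -n backup -o jsonpath='{.data.password}') \
  && echo "Secrets match"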
🧩 Step 3: Infrastructure
Create a namespace and a dedicated PVC.
File: backups/01-infrastructure.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: backup
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc
  namespace: backup
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
Apply it:
kubectl apply -f backups/01-infrastructure.yaml
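Worth a quick check before building jobs on top of it:

kubectl get pvc backup-pvc -n backup
# STATUS should show Bound. On storage classes that use WaitForFirstConsumer
# volume binding it stays Pending until the first backup job runs; that's normal.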
🧩 Step 4: Job 1 – PostgreSQL Backup
This job runs in `backup`, dumps to `staging`, optionally encrypts, and atomically moves the result to `ready`.
File: backups/02-postgres-backup.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
  namespace: backup
spec:
  schedule: "0 2 * * *"
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup-client
              image: postgres:17-alpine
              resources:
                requests:
                  memory: "256Mi"
                  cpu: "100m"
                limits:
                  memory: "512Mi"
                  cpu: "500m"
              command:
                - /bin/sh
                - -c
                - |
                  set -e
                  TIMESTAMP=$(date +%Y%m%d_%H%M%S)
                  STAGING_DIR="/backup/staging/postgres"
                  READY_DIR="/backup/ready/postgres"
                  mkdir -p $STAGING_DIR $READY_DIR

                  FILENAME="n8n_db_${TIMESTAMP}.dump"
                  STAGING_FILE="${STAGING_DIR}/${FILENAME}"
                  READY_FILE="${READY_DIR}/${FILENAME}"

                  echo "💾 Starting PostgreSQL backup to staging..."
                  PGPASSWORD=$POSTGRES_PASSWORD pg_dump \
                    -h postgres.prod.svc.cluster.local \
                    -U n8n \
                    -d n8n \
                    -F c \
                    -b \
                    -f $STAGING_FILE

                  if [ ! -s $STAGING_FILE ]; then
                    echo "❌ Backup failed or empty!"
                    exit 1
                  fi
                  DUMP_SIZE=$(du -h $STAGING_FILE | cut -f1)
                  echo "✅ Dump created & verified: $DUMP_SIZE"

                  # Encrypt if a passphrase is mounted
                  if [ -f /etc/backup/passphrase ]; then
                    echo "🔐 Installing GPG and encrypting..."
                    apk add --no-cache gnupg > /dev/null 2>&1
                    gpg \
                      --batch \
                      --yes \
                      --pinentry-mode loopback \
                      --passphrase-file /etc/backup/passphrase \
                      --symmetric \
                      --cipher-algo AES256 \
                      -o ${STAGING_FILE}.gpg \
                      $STAGING_FILE
                    if [ ! -s ${STAGING_FILE}.gpg ]; then
                      echo "❌ Encryption failed!"
                      exit 1
                    fi
                    ENC_SIZE=$(du -h ${STAGING_FILE}.gpg | cut -f1)
                    echo "✅ Encrypted: $ENC_SIZE"
                    # Atomic handover: a rename on the same volume can never
                    # be seen half-written by the sync job
                    mv ${STAGING_FILE}.gpg ${READY_FILE}.gpg
                    rm $STAGING_FILE
                    echo "✅ Encrypted backup ready: ${READY_FILE}.gpg"
                  else
                    echo "⚠️ No encryption key found. Moving plain backup."
                    mv $STAGING_FILE $READY_FILE
                    echo "✅ Plain backup ready: $READY_FILE"
                  fi

                  # Local retention: 7 days
                  find $READY_DIR -name "*.dump*" -mtime +7 -delete
                  echo "========================================="
                  echo "✨ PostgreSQL backup complete!"
                  echo "========================================="
              env:
                - name: POSTGRES_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secret
                      key: password
              volumeMounts:
                - name: backup-storage
                  mountPath: /backup
                - name: encryption-key
                  mountPath: /etc/backup
                  readOnly: true
          volumes:
            - name: backup-storage
              persistentVolumeClaim:
                claimName: backup-pvc
            - name: encryption-key
              secret:
                secretName: backup-encryption-secret
                optional: true
          restartPolicy: OnFailure
Apply it:
kubectl apply -f backups/02-postgres-backup.yaml
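You don't have to wait for 02:00 to confirm the schedule registered:

kubectl get cronjob postgres-backup -n backup
# Shows SCHEDULE "0 2 * * *" and, after the first run, a LAST SCHEDULE timestamp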
🧩 Step 5: Job 2 – Redis Backup
This job triggers a non-blocking BGSAVE, waits for it to finish, then streams the RDB snapshot to staging.
File: backups/03-redis-backup.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: redis-backup
  namespace: backup
spec:
  schedule: "0 2 * * *"
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: redis-client
              image: redis:8.0-alpine
              resources:
                requests:
                  memory: "128Mi"
                limits:
                  memory: "256Mi"
              command:
                - /bin/sh
                - -c
                - |
                  set -e
                  TIMESTAMP=$(date +%Y%m%d_%H%M%S)
                  STAGING_DIR="/backup/staging/redis"
                  READY_DIR="/backup/ready/redis"
                  mkdir -p $STAGING_DIR $READY_DIR

                  FILENAME="redis_${TIMESTAMP}.rdb"
                  STAGING_FILE="${STAGING_DIR}/${FILENAME}"
                  READY_FILE="${READY_DIR}/${FILENAME}"

                  echo "💾 Starting Redis backup..."
                  # Record LASTSAVE *before* triggering BGSAVE, then poll until
                  # it changes; otherwise a fast BGSAVE could finish before we
                  # read the baseline and the loop would wait forever.
                  LAST_SAVE=$(redis-cli -h redis.prod.svc.cluster.local LASTSAVE)
                  redis-cli -h redis.prod.svc.cluster.local BGSAVE
                  echo "Waiting for BGSAVE to complete..."
                  while true; do
                    CURRENT_SAVE=$(redis-cli -h redis.prod.svc.cluster.local LASTSAVE)
                    if [ "$CURRENT_SAVE" -gt "$LAST_SAVE" ]; then
                      break
                    fi
                    sleep 2
                  done
                  echo "✅ Redis save complete"

                  # Stream the RDB snapshot to staging
                  redis-cli -h redis.prod.svc.cluster.local --rdb - > $STAGING_FILE
                  if [ -s $STAGING_FILE ]; then
                    mv $STAGING_FILE $READY_FILE
                    find $READY_DIR -name "*.rdb" -mtime +7 -delete
                    echo "✅ Redis backup ready"
                  else
                    echo "❌ Redis backup failed (empty file)"
                    exit 1
                  fi
              volumeMounts:
                - name: backup-storage
                  mountPath: /backup
          volumes:
            - name: backup-storage
              persistentVolumeClaim:
                claimName: backup-pvc
          restartPolicy: OnFailure
Apply it:
kubectl apply -f backups/03-redis-backup.yaml
🧩 Step 6: Job 3 – N8N Backup (Direct Upload)
Strategy: this job runs in the `prod` namespace, mounts `n8n-pvc`, compresses the data, and uploads it directly to `mega:n8n-backups` (a separate folder).
File: backups/04-n8n-backup.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: n8n-backup
  namespace: prod
spec:
  schedule: "15 2 * * *"
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: n8n-client
              image: rclone/rclone:latest
              resources:
                requests:
                  memory: "256Mi"
                limits:
                  memory: "512Mi"
              command:
                - /bin/sh
                - -c
                - |
                  set -e
                  TIMESTAMP=$(date +%Y%m%d_%H%M%S)
                  STAGING_DIR="/tmp/staging/n8n"
                  mkdir -p $STAGING_DIR

                  FILENAME="n8n_data_${TIMESTAMP}.tar.gz"

                  # The secret mount is read-only; copy the config so rclone
                  # can refresh its session token
                  cp /secret/rclone.conf /tmp/rclone.conf

                  echo "💾 Starting N8N backup..."
                  # 1. Compress the .n8n folder
                  tar -czf ${STAGING_DIR}/${FILENAME} -C /n8n_data .n8n
                  if [ -s ${STAGING_DIR}/${FILENAME} ]; then
                    SIZE=$(du -h ${STAGING_DIR}/${FILENAME} | cut -f1)
                    echo "✅ N8N data compressed: $SIZE"
                    # 2. Upload directly to Mega (folder: n8n-backups)
                    echo "🚀 Uploading to Mega.nz..."
                    rclone copy $STAGING_DIR mega:n8n-backups \
                      --config=/tmp/rclone.conf \
                      --verbose
                    echo "✅ N8N backup successful!"
                    # 3. Cloud retention: 15 days
                    rclone delete mega:n8n-backups --min-age 15d \
                      --config=/tmp/rclone.conf
                    echo "🧹 Cloud cleanup complete."
                    # 4. Local cleanup
                    rm -rf $STAGING_DIR
                  else
                    echo "❌ Compression failed!"
                    exit 1
                  fi
              volumeMounts:
                - name: n8n-pvc
                  mountPath: /n8n_data
                - name: rclone-config
                  mountPath: /secret
                  readOnly: true
          volumes:
            - name: n8n-pvc
              persistentVolumeClaim:
                claimName: n8n-pvc
            - name: rclone-config
              secret:
                secretName: rclone-secret
          restartPolicy: OnFailure
Apply it:
kubectl apply -f backups/04-n8n-backup.yaml
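One operational tip: during a restore (or any maintenance) you can pause a CronJob without deleting it. The same patch works for the jobs in the backup namespace:

# Suspend / resume the n8n backup
kubectl patch cronjob n8n-backup -n prod -p '{"spec":{"suspend":true}}'
kubectl patch cronjob n8n-backup -n prod -p '{"spec":{"suspend":false}}'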
🧩 Step 7: Job 4 – Mega.nz Sync (Central)
Uploads only `/backup/ready` (Postgres & Redis) and enforces the 15-day cloud retention.
File: backups/05-mega-sync.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mega-sync
  namespace: backup
spec:
  schedule: "30 2 * * *"
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: rclone-sync
              image: rclone/rclone:latest
              command:
                - /bin/sh
                - -c
                - |
                  set -e
                  # The secret mount is read-only; copy the config so rclone
                  # can refresh its session token
                  cat /secret/rclone.conf > /tmp/rclone.conf

                  echo "🚀 Starting Mega.nz sync (PG & Redis)..."
                  # Use copy (not sync): sync would mirror the 7-day local
                  # retention to the cloud and delete files we still want there
                  rclone copy /backup/ready mega:k8s-backups \
                    --config=/tmp/rclone.conf \
                    --verbose \
                    --transfers 4
                  echo "✅ Sync to Mega.nz successful!"

                  # Enforce cloud retention explicitly
                  rclone delete mega:k8s-backups --min-age 15d \
                    --config=/tmp/rclone.conf
                  echo "🧹 Cloud cleanup: removed files older than 15 days."
              volumeMounts:
                - name: backup-storage
                  mountPath: /backup
                - name: rclone-secret
                  mountPath: /secret
                  readOnly: true
          volumes:
            - name: backup-storage
              persistentVolumeClaim:
                claimName: backup-pvc
            - name: rclone-secret
              secret:
                secretName: rclone-secret
          restartPolicy: OnFailure
Apply it:
kubectl apply -f backups/05-mega-sync.yaml
🧩 Step 8: Verification & Testing
1. Test All Backups
kubectl create job --from=cronjob/postgres-backup manual-pg -n backup
kubectl create job --from=cronjob/redis-backup manual-redis -n backup
kubectl create job --from=cronjob/n8n-backup manual-n8n -n prod
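Follow each job's output while it runs; Kubernetes labels job pods with `job-name` automatically:

kubectl logs -n backup -l job-name=manual-pg -f
kubectl logs -n backup -l job-name=manual-redis -f
kubectl logs -n prod -l job-name=manual-n8n -f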
2. Verify Central Structure (PG/Redis)
Note: the container name in the overrides matches the pod name so the JSON patches the generated container instead of adding a second one.
kubectl run debug-check --image=busybox -n backup --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"debug-check","image":"busybox","command":["sh","-c","sleep 3600"],"volumeMounts":[{"name":"vol","mountPath":"/data"}]}],"volumes":[{"name":"vol","persistentVolumeClaim":{"claimName":"backup-pvc"}}]}}'
kubectl exec -it debug-check -n backup -- sh
ls -R /data/ready/
# Expect: postgres/ and redis/ folders
exit
kubectl delete pod debug-check -n backup
3. Test Sync
kubectl create job --from=cronjob/mega-sync manual-sync -n backup
Check Mega for the `k8s-backups` folder (central sync) and the `n8n-backups` folder (direct upload).
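Since your local machine already has the rclone remote configured from Step 2, the quickest check is from there:

rclone lsd mega:            # should list k8s-backups and n8n-backups
rclone ls mega:k8s-backups  # dumps under postgres/ and redis/
rclone ls mega:n8n-backups  # n8n_data_*.tar.gz archives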
🧩 Step 9: Restore Procedures
Scenario 1: Restore PostgreSQL
1. Download (if needed): same pattern as the debug pod above, start a long-lived helper pod and exec into it.
kubectl run pg-download --image=rclone/rclone:latest -n backup --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"pg-download","image":"rclone/rclone:latest","command":["sh","-c","sleep 3600"],"volumeMounts":[{"name":"vol","mountPath":"/backup"},{"name":"conf","mountPath":"/secret","readOnly":true}]}],"volumes":[{"name":"vol","persistentVolumeClaim":{"claimName":"backup-pvc"}},{"name":"conf","secret":{"secretName":"rclone-secret"}}]}}'
kubectl exec -it pg-download -n backup -- sh
# Inside shell:
cp /secret/rclone.conf /tmp/rclone.conf
rclone copy mega:k8s-backups /backup/ready --config=/tmp/rclone.conf
exit
# Keep the pod if you also need it for the Redis restore below; otherwise:
kubectl delete pod pg-download -n backup
2. Restore:
kubectl run pg-restore --image=postgres:17-alpine -n backup --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"pg-restore","image":"postgres:17-alpine","command":["sh","-c","sleep 3600"],"env":[{"name":"PGPASSWORD","valueFrom":{"secretKeyRef":{"name":"postgres-secret","key":"password"}}}],"volumeMounts":[{"name":"vol","mountPath":"/data"}]}],"volumes":[{"name":"vol","persistentVolumeClaim":{"claimName":"backup-pvc"}}]}}'
kubectl exec -it pg-restore -n backup -- sh
(The secret reference resolves in the pod's own namespace, which is exactly why we copied postgres-secret into backup in Step 2.)
Inside the shell:
# List backups
ls -l /data/ready/postgres/
# Decrypt if encrypted (gpg is not preinstalled in postgres:17-alpine)
apk add --no-cache gnupg
gpg --batch --pinentry-mode loopback --passphrase "YOUR_PASSPHRASE" \
  --decrypt /data/ready/postgres/n8n_db_DATE.dump.gpg \
  > /data/ready/postgres/restore.dump
# Or copy a plain backup
# cp /data/ready/postgres/n8n_db_DATE.dump /data/ready/postgres/restore.dump
# Scale n8n down first (from your local machine) so no open connections
# block the DROP DATABASE:
#   kubectl scale deployment n8n -n prod --replicas=0
# Restore
psql -h postgres.prod.svc.cluster.local -U n8n -d postgres -c "DROP DATABASE IF EXISTS n8n;"
psql -h postgres.prod.svc.cluster.local -U n8n -d postgres -c "CREATE DATABASE n8n;"
pg_restore -h postgres.prod.svc.cluster.local -U n8n -d n8n /data/ready/postgres/restore.dump
echo "✅ Restore complete!"
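Still inside the restore shell, a quick sanity check that data actually came back. This assumes the standard n8n schema, which stores workflows in a `workflow_entity` table, and that your deployment is named `n8n`; adjust if yours differs:

psql -h postgres.prod.svc.cluster.local -U n8n -d n8n \
  -c "SELECT COUNT(*) FROM workflow_entity;"
# Then exit, clean up, and bring n8n back (from your local machine):
#   kubectl delete pod pg-restore -n backup
#   kubectl scale deployment n8n -n prod --replicas=1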
Scenario 2: Restore Redis
1. Download (if needed): reuse the pg-download helper pod from Scenario 1 (it mounts the backup PVC at /backup).
2. Restore:
# Pull the RDB out of the backup PVC via the helper pod, onto your local machine
kubectl cp backup/pg-download:/backup/ready/redis/redis_DATE.rdb ./dump.rdb
# Find the redis pod name
REDIS_POD=$(kubectl get pod -n prod -l app=redis -o jsonpath='{.items[0].metadata.name}')
# Copy the file into the redis pod
kubectl cp ./dump.rdb prod/${REDIS_POD}:/data/dump.rdb
# Restart so Redis loads the snapshot on startup
kubectl delete pod -n prod -l app=redis
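One caveat, then a check: if your Redis config has `appendonly yes`, Redis restores from the AOF at startup and ignores `dump.rdb`, so disable AOF before relying on this path. To confirm the snapshot loaded:

REDIS_POD=$(kubectl get pod -n prod -l app=redis -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n prod $REDIS_POD -- redis-cli DBSIZE   # should report your expected key count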
Scenario 3: Restore N8N (From Cloud)
Since N8N uploads directly to mega:n8n-backups, we download from there.
1. Download & Extract:
kubectl run n8n-restore --image=rclone/rclone:latest -n prod --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"n8n-restore","image":"rclone/rclone:latest","command":["sh","-c","sleep 3600"],"volumeMounts":[{"name":"n8n-data","mountPath":"/n8n_data"},{"name":"conf","mountPath":"/secret","readOnly":true}]}],"volumes":[{"name":"n8n-data","persistentVolumeClaim":{"claimName":"n8n-pvc"}},{"name":"conf","secret":{"secretName":"rclone-secret"}}]}}'
kubectl exec -it n8n-restore -n prod -- sh
Inside the restore shell:
# Copy the config so rclone can refresh its session token
cp /secret/rclone.conf /tmp/rclone.conf
# Download from the N8N folder
rclone copy mega:n8n-backups /tmp/restore --config=/tmp/rclone.conf
# List files
ls -l /tmp/restore/
# Extract (replace the filename)
tar -xzf /tmp/restore/n8n_data_DATE.tar.gz -C /tmp/restore
# Keep the current .n8n as a safety copy
mv /n8n_data/.n8n /n8n_data/.n8n_old
# Restore
cp -r /tmp/restore/.n8n /n8n_data/
echo "✅ N8N data restored! Restarting pod..."
exit
kubectl delete pod n8n-restore -n prod
2. Restart N8N:
kubectl delete pod -n prod -l app=n8n
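Watch the replacement pod come up and skim its logs for decryption or credential errors before calling the restore done:

kubectl get pods -n prod -l app=n8n -w
kubectl logs -n prod -l app=n8n --tail=50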
📋 Testing Checklist
Before trusting this setup in production, test the complete flow:
- Backup:
  - Run `manual-pg` → log shows "Encrypted backup ready" ✅
  - Run `manual-redis` → log shows "Redis backup ready" ✅
  - Run `manual-n8n` → log shows "N8N backup successful" ✅
  - Run `manual-sync` → log shows "Sync to Mega.nz successful" ✅
  - Check Mega: both folders exist? `k8s-backups` (DB) and `n8n-backups` (N8N) ✅
- Restore:
  - Restore Postgres: restore into a test DB, restart N8N, check that workflows exist. ✅
  - Restore Redis: copy the RDB, restart Redis. ✅
  - Restore N8N: download, extract, restart. Check that custom nodes exist. ✅
📋 Summary Checklist
| Component | Status | Strategy |
|---|---|---|
| ✅ Namespace & PVC | Done | Centralized storage (PG/Redis) |
| ✅ Postgres Backup | Done | Staging → Encrypt → Ready |
| ✅ Redis Backup | Done | BGSAVE → Ready |
| ✅ N8N Backup | Done | Direct upload (prod ns) |
| ✅ Mega Sync (PG/Redis) | Done | Upload only "ready" |
| ✅ Restore Procedure | Done | All apps covered |
You now have a bulletproof, production-ready backup system!