# Running Vault in Kubernetes — Lessons from a Homelab

This post is aimed at engineers running Vault on bare metal or self-hosted Kubernetes — managed cloud clusters handle some of this automatically, but on-prem you're on your own.

Most Vault + Kubernetes tutorials cover the basics: install Vault, store a secret, inject it into a pod. This post covers what comes after that — persistent storage, unsealing, and getting secrets into pods properly using External Secrets Operator.
## Why Vault in the First Place
Kubernetes Secrets are not secret. They're base64 encoded — which is encoding, not encryption. Anyone with the right RBAC permissions can decode them in seconds. There's no audit trail, no rotation, no central management.
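To make that concrete — base64 is trivially reversible, no key required (the secret and key names in the last line are placeholders, not from any real cluster):

```shell
# Any value stored in a K8s Secret is just base64 — reversible by anyone who can read it
printf 'hunter2' | base64
# aHVudGVyMg==
printf 'aHVudGVyMg==' | base64 -d
# hunter2

# With RBAC read access, pulling a real secret out is one line:
# kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d
```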
Vault solves all of this. Secrets are encrypted at rest, every access is logged, and you can rotate credentials without touching your cluster.
The problem is getting there.
## Start with Persistent Storage
The natural starting point is dev mode:
```yaml
server:
  dev:
    enabled: true
    devRootToken: "root"
```
Dev mode is convenient — Vault starts pre-initialized and pre-unsealed. Good for experimenting.
I started there too. Then a network routing issue caused my cluster to restart. I was troubleshooting Tailscale subnet routing — trying to reach cluster services from outside my home network — which left the routing table in a broken state and eventually took down the whole cluster. After bringing everything back up, ESO couldn't sync anything. The secrets were gone because dev mode stores everything in memory.
That's when persistent storage became obvious. Vault is deployed via the official HashiCorp Helm chart. Use a proper storage class from the start — the following are the relevant Helm values:
```yaml
server:
  dev:
    enabled: false
  dataStorage:
    enabled: true
    size: 1Gi
    storageClass: longhorn # replace with your storage class
```
One thing to know: switching from dev mode to production mode isn't a config swap. Dev mode doesn't create a PVC, so there's no data to migrate, but you do need to delete the existing StatefulSet first — otherwise Helm will try to update it in place and hit an immutable-fields error (most StatefulSet spec fields can't be changed after creation). Delete it, redeploy with the new config, then initialize fresh with `vault operator init`.
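The sequence looks roughly like this — the release name, namespace, and values file name are assumptions from my setup, adjust to yours:

```shell
# StatefulSet specs are largely immutable — delete it (the Helm release survives)
kubectl delete statefulset vault -n vault

# Redeploy with dev mode off and persistent storage on
helm upgrade vault hashicorp/vault -n vault -f values.yaml

# Initialize fresh; save the unseal keys and root token somewhere safe
kubectl exec -n vault vault-0 -- vault operator init
```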
## The Unseal Problem
Production Vault starts sealed after every restart. This is a security feature — if someone steals your disk, the data is useless without the unseal keys. But it also means every time your pod restarts, someone has to manually unseal it.
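The manual version looks like this (pod name `vault-0` is an assumption based on the default StatefulSet naming):

```shell
# After a restart, Vault reports Sealed: true
kubectl exec -n vault vault-0 -- vault status

# Any 3 of the 5 unseal keys, one at a time
kubectl exec -n vault vault-0 -- vault operator unseal <UNSEAL_KEY_1>
kubectl exec -n vault vault-0 -- vault operator unseal <UNSEAL_KEY_2>
kubectl exec -n vault vault-0 -- vault operator unseal <UNSEAL_KEY_3>
```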
The proper solution is auto-unseal via a cloud KMS (AWS KMS, Azure Key Vault, etc.). For a homelab, I built a simpler workaround — a Kubernetes CronJob that runs every minute and unseals Vault automatically.
First, store your unseal keys in a K8s Secret. When you initialize Vault with `vault operator init`, it generates 5 unseal keys and requires any 3 of them to unseal. Store 3 of those keys here:
```shell
kubectl create secret generic vault-unseal-keys \
  --from-literal=key1=<UNSEAL_KEY_1> \
  --from-literal=key2=<UNSEAL_KEY_2> \
  --from-literal=key3=<UNSEAL_KEY_3> \
  -n vault
```
Then deploy the CronJob. It runs every minute, reads the keys from that Secret as environment variables, and passes them to `vault operator unseal`:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-unsealer
  namespace: vault
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: vault-unsealer
              image: hashicorp/vault:1.18.1
              command:
                - /bin/sh
                - -c
                - |
                  export VAULT_ADDR=http://vault.vault.svc:8200 # <service-name>.<namespace>.svc
                  vault operator unseal $KEY1
                  vault operator unseal $KEY2
                  vault operator unseal $KEY3
              env:
                - name: KEY1
                  valueFrom:
                    secretKeyRef:
                      name: vault-unseal-keys # the Secret we created above
                      key: key1
                - name: KEY2
                  valueFrom:
                    secretKeyRef:
                      name: vault-unseal-keys
                      key: key2
                - name: KEY3
                  valueFrom:
                    secretKeyRef:
                      name: vault-unseal-keys
                      key: key3
          restartPolicy: OnFailure
```
Worth noting: storing unseal keys in the same cluster as Vault is circular from a security standpoint — if someone compromises the cluster, they have both. For a homelab it's an acceptable trade-off. In production, use cloud KMS where the unseal key lives outside the cluster entirely.
## External Secrets Operator
Vault stores your secrets. But your pods need Kubernetes Secrets. These are two different things and Vault alone doesn't bridge the gap.
That's what External Secrets Operator (ESO) does. It watches for ExternalSecret resources in your cluster and automatically syncs secrets from Vault into native K8s Secrets.
```text
Vault (source of truth)
  → ESO watches ExternalSecret resources
  → Creates/updates K8s Secrets automatically
  → Pod consumes K8s Secret as env var or volume mount
```
### Step 1 — Tell ESO how to connect to Vault
A ClusterSecretStore is a cluster-wide resource that defines the connection to your secret backend — in this case Vault. Think of it as a named connection configuration that all your namespaces can reference.
ESO needs a token to authenticate against Vault. You store that token in a regular K8s Secret first:
```shell
kubectl create secret generic vault-auth-token \
  --from-literal=token=<YOUR_VAULT_TOKEN> \
  -n default
```
Then reference it in the ClusterSecretStore. The `tokenSecretRef` block points to that K8s Secret — `name` is the name of the secret, and `key` is the field inside it that holds the actual Vault token.
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-connection # referenced by ExternalSecrets later
spec:
  provider:
    vault:
      server: "http://vault.vault.svc:8200"
      path: "secret" # the KV engine mount name in Vault
      version: "v2"
      auth:
        tokenSecretRef:
          name: vault-auth-token # K8s Secret that holds the Vault token
          namespace: default
          key: token # the field inside that Secret
```
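After applying it, a quick way to confirm ESO can reach Vault (the manifest file name here is an assumption):

```shell
kubectl apply -f cluster-secret-store.yaml

# STATUS should show Valid once ESO has validated the connection
kubectl get clustersecretstore vault-connection
```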
### Step 2 — Store the secret in Vault
Before ESO can sync anything, the secret needs to exist in Vault. If the path doesn't exist yet, ESO will report a `SecretSyncedError` status on the ExternalSecret resource — easy to miss if you're not watching. Create the secret first (make sure `VAULT_ADDR` and `VAULT_TOKEN` are set in your shell):
```shell
export VAULT_ADDR=http://<vault-address>:8200
export VAULT_TOKEN=<your-vault-token>

# "secret" here is the KV engine mount name, not a literal — adjust if yours is named differently
vault kv put secret/myproject/production/database password="your-db-password"
```
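You can read it back to confirm what ESO will see:

```shell
vault kv get secret/myproject/production/database

# or just the one field:
vault kv get -field=password secret/myproject/production/database
```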
### Step 3 — Define what to sync
An ExternalSecret tells ESO: fetch this specific secret from Vault and create a K8s Secret from it. It references the ClusterSecretStore you defined above.
The `remoteRef.key` is the path to the secret in Vault. A common convention is to organize secrets by project and environment — `myproject/production/database`, `myproject/staging/database` — so different clusters or namespaces can point to the right environment just by changing the path.
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials # name of this ExternalSecret resource
  namespace: production
spec:
  refreshInterval: 1h # how often ESO re-syncs from Vault
  secretStoreRef:
    name: vault-connection # references the ClusterSecretStore above
    kind: ClusterSecretStore
  target:
    name: database-credentials # name of the K8s Secret ESO will create
  data:
    - secretKey: db-password # key name in the resulting K8s Secret
      remoteRef:
        key: myproject/production/database
        property: password # the field inside that Vault secret
```
ESO creates a K8s Secret called database-credentials in the production namespace and keeps it in sync. Update the value in Vault, the K8s Secret updates automatically within the refresh interval — without touching the cluster.
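A quick way to verify the sync actually happened:

```shell
# STATUS SecretSynced means it worked; SecretSyncedError usually means the Vault path is wrong
kubectl get externalsecret database-credentials -n production

# Confirm the resulting K8s Secret holds the value (base64 encoded as usual)
kubectl get secret database-credentials -n production \
  -o jsonpath='{.data.db-password}' | base64 -d
```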
## Env Vars vs Volume Mounts
Once the K8s Secret exists, you have two ways to consume it in a pod.
Environment variables are set at container start time and never update. If the secret rotates, the pod needs a restart to see the new value.
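For completeness, the env var variant looks like this — a container snippet only, and the `DB_PASSWORD` variable name is my choice for illustration:

```yaml
containers:
  - name: my-app
    image: my-app:latest
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: database-credentials # the K8s Secret ESO created
            key: db-password
```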
Volume mounts update automatically. When the K8s Secret changes, the mounted file updates inside the pod within roughly 60-90 seconds (kubelet sync period). No restart needed — as long as the app re-reads the file each time it needs the value rather than caching it at startup.
Here's what it looks like in a full Deployment spec:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          volumeMounts:
            - name: credentials-vol
              mountPath: /etc/secrets # files appear here inside the container
      volumes:
        - name: credentials-vol
          secret:
            secretName: database-credentials # the K8s Secret ESO created
```
The file name inside the pod matches the `secretKey` field defined in the ExternalSecret — in this example `db-password`, so the app reads `/etc/secrets/db-password` as a plain file. For anything that rotates — database passwords, API keys — volume mounts are the right choice.
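One way to honor the "read it each time" advice — a small Python sketch (the helper name and default path are mine, not from any library):

```python
import os
import tempfile

# Hypothetical helper: read the mounted secret fresh on every call instead of
# caching it at import time, so rotations are picked up automatically.
def get_db_password(path="/etc/secrets/db-password"):
    with open(path) as f:
        return f.read().strip()

# Demo with a temp file standing in for the kubelet-managed mount:
tmp = tempfile.NamedTemporaryFile("w", delete=False)
tmp.write("old-password\n")
tmp.close()
print(get_db_password(tmp.name))  # old-password

# Simulate a rotation: ESO updates the Secret, kubelet rewrites the file
with open(tmp.name, "w") as f:
    f.write("new-password\n")
print(get_db_password(tmp.name))  # new-password
os.unlink(tmp.name)
```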
## Summary
The full working stack: Vault with Longhorn persistent storage, auto-unseal CronJob, ESO syncing secrets to K8s, pods consuming via volume mounts.
- Don't use dev mode for anything you care about — persistent storage from the start saves a lot of pain
- Initialize Vault before deploying anything that depends on it — ESO will report sync errors if Vault isn't ready
- Use volume mounts over env vars for secrets that rotate, and make sure your app reads the file on each use rather than caching it
- Test your unseal story before you need it at an inconvenient time