In this guide, you'll learn how to deploy a Persistent Volume Claim (PVC) and attach it to a Pod through a Persistent Volume (PV), all automatically managed by Google Kubernetes Engine (GKE).
Step 1: Introduction
When a pod is deleted or rescheduled, any data written to its container filesystem is lost, because that storage is ephemeral.
To make data persist across pod restarts, we use:
- StorageClass: defines the storage type (e.g., SSD, Balanced Disk)
- PersistentVolume (PV): the actual disk created in GCP
- PersistentVolumeClaim (PVC): a request for storage of a given size and access mode
- Pod: mounts the PVC, which attaches the underlying PV
We'll use the predefined GKE StorageClass standard-rwo, which provisions balanced persistent disks.
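For reference, a StorageClass equivalent to standard-rwo would look roughly like the sketch below. This is an illustration only, not the exact object on your cluster (fields may differ slightly); you can compare it with the real definition using kubectl describe sc standard-rwo in Step 5.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-rwo-example        # illustrative name; the built-in class is simply "standard-rwo"
provisioner: pd.csi.storage.gke.io  # GKE Compute Engine persistent disk CSI driver
parameters:
  type: pd-balanced                 # balanced persistent disk
reclaimPolicy: Delete               # the disk is deleted when the PVC is deleted
volumeBindingMode: WaitForFirstConsumer  # wait for a pod before provisioning (see Step 5)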
Step 2: Check Storage Classes in GKE
# List available Storage Classes
kubectl get sc
In your GKE console:
- Go to Cluster → DETAILS → FEATURES
- Ensure the Compute Engine persistent disk CSI Driver is enabled (you can also check this from the command line, as shown below)
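If you prefer the command line over the console, the checks below give roughly the same information. This is a sketch: CLUSTER_NAME and REGION are placeholders for your own cluster, and the gcloud format path may vary slightly between gcloud versions.
# Check whether the Compute Engine PD CSI driver addon is enabled on the cluster
gcloud container clusters describe CLUSTER_NAME --region REGION \
  --format="value(addonsConfig.gcePersistentDiskCsiDriverConfig.enabled)"
# The driver also registers a CSIDriver object inside the cluster
kubectl get csidrivers | grep pd.csi.storage.gke.io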
Step 3: Persistent Volume Claim (PVC)
File: 01-persistent-volume-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 1Gi
This requests a 1Gi persistent volume using the standard-rwo storage class.
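Optionally, you can sanity-check the manifest before applying it and use kubectl's built-in documentation for the fields involved (this assumes the file lives under kube-manifests/, as used in Step 5):
# Validate the manifest client-side without creating anything on the cluster
kubectl apply -f kube-manifests/01-persistent-volume-claim.yaml --dry-run=client
# Built-in docs for the fields used above
kubectl explain persistentvolumeclaim.spec.accessModes
kubectl explain persistentvolumeclaim.spec.resources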
Step 4: Pod with PVC Mounted
File: 02-mypod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
spec:
  containers:
    - name: pod-demo
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: pvc-demo-vol
  volumes:
    - name: pvc-demo-vol
      persistentVolumeClaim:
        claimName: mypvc1
This pod mounts the PVC at /usr/share/nginx/html.
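Once the pod has been applied (next step), a quick way to confirm the wiring between volumeMounts, volumes, and the claim is a jsonpath query. This is just a convenience sketch; adjust the pod name if yours differs.
# Which PVC does each volume in the pod point to?
kubectl get pod mypod1 -o jsonpath='{range .spec.volumes[*]}{.name}{" -> "}{.persistentVolumeClaim.claimName}{"\n"}{end}'
# Where is that volume mounted inside the container?
kubectl get pod mypod1 -o jsonpath='{.spec.containers[0].volumeMounts}'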
Step 5: Deploy and Verify
# Deploy PVC
kubectl apply -f kube-manifests/01-persistent-volume-claim.yaml
# Check Storage Class details
kubectl describe sc standard-rwo
Observation:
VolumeBindingMode: WaitForFirstConsumer
This means GKE waits until a pod that uses the PVC is scheduled before creating the PV, so that the disk is provisioned in the same zone as the pod's node.
# List PVC and PV
kubectl get pvc
kubectl get pv
The PVC will show as Pending because no pod is using it yet.
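To see why the claim is still Pending, look at its events. With WaitForFirstConsumer you should see a message along the lines of "waiting for first consumer to be created before binding" (the exact wording may differ by driver version).
kubectl describe pvc mypvc1
# Or filter the events directly
kubectl get events --field-selector involvedObject.name=mypvc1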
Now deploy the pod:
kubectl apply -f kube-manifests/02-mypod1.yaml
After pod creation:
kubectl get pods
kubectl get pvc
kubectl get pv
The PV is created automatically once the pod is running.
Check in your GCP console:
Compute Engine → Storage → Disks → you'll see a new persistent disk provisioned.
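The same check can be done from the CLI with gcloud. Dynamically provisioned disks are usually named after the PV, i.e. they start with pvc- (a naming convention of the PD CSI driver, so treat the filter below as an assumption):
# List persistent disks whose name starts with "pvc-"
gcloud compute disks list --filter="name~^pvc-"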
Step 6: Test Data Persistence
# Access the pod
kubectl exec -it mypod1 -- /bin/bash
# Inside the pod
echo "PVC and PV Demo 101" > /usr/share/nginx/html/index.html
cat /usr/share/nginx/html/index.html
ls /usr/share/nginx/html
curl localhost
exit
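Optionally, confirm from outside the pod that the mount is backed by its own block device of roughly the requested size (this assumes the default Debian-based nginx image, which ships df):
kubectl exec mypod1 -- df -h /usr/share/nginx/html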
Now delete the pod:
kubectl delete pod mypod1
Check:
kubectl get pvc
kubectl get pv
Both should still exist: the data is safe.
(The disk remains even though the pod was deleted.)
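The disk survives because the pod only referenced the PVC; the PVC and its bound PV were never deleted. You can confirm the binding and the PV's reclaim policy (which decides what happens to the disk if the PVC itself is deleted) with a custom-columns query:
kubectl get pv -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name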
Step 7: Reattach PVC to Another Pod
File: 03-mypod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
spec:
  containers:
    - name: pod-demo
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: pvc-demo-vol
  volumes:
    - name: pvc-demo-vol
      persistentVolumeClaim:
        claimName: mypvc1
Deploy the new pod:
kubectl apply -f kube-manifests/03-mypod2.yaml
Then verify:
kubectl exec -it mypod2 -- /bin/bash
cat /usr/share/nginx/html/index.html
ls /usr/share/nginx/html
curl localhost
exit
You'll still find the index.html created earlier, showing that your data persisted.
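If you just want the check without opening an interactive shell, the same verification works as a one-liner:
kubectl exec mypod2 -- cat /usr/share/nginx/html/index.html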
Step 8: Clean Up
kubectl delete -f kube-manifests/01-persistent-volume-claim.yaml
kubectl delete -f kube-manifests/02-mypod1.yaml
kubectl delete -f kube-manifests/03-mypod2.yaml
kubectl get pvc
kubectl get pv
Check your GCP Console:
Compute Engine → Storage → Disks → the PV-backed disk should now be deleted.
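The disk is removed because PVs provisioned by standard-rwo use the Delete reclaim policy. If you wanted the underlying disk to outlive the PVC, you could create a custom StorageClass with reclaimPolicy: Retain. A rough sketch (not part of this demo; the name is illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced-retain            # hypothetical name for illustration
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
reclaimPolicy: Retain              # keep the GCE disk when the PVC is deleted
volumeBindingMode: WaitForFirstConsumer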
Summary
Component | Description
--- | ---
StorageClass | Defines the disk type (Balanced / SSD)
PersistentVolume (PV) | The actual GCE disk, created automatically
PersistentVolumeClaim (PVC) | Requests a specific storage type and size
Pod | Mounts the PVC for persistent data
Visual Flow
StorageClass
  Defines the type of disk (standard-rwo, SSD, etc.)
      |
      v
Persistent Volume (PV)
  The actual GCE persistent disk, provisioned automatically
      |
      v
Persistent Volume Claim (PVC)
  Requests storage with a size and access mode
      |
      v
Pod
  Uses the PVC to attach the PV
Now you've successfully deployed persistent storage in GKE!
Your applications can now retain data even if pods or nodes are recreated.
Thanks for reading! If this post added value, a like, follow, or share would encourage me to keep creating more content.
Latchu | Senior DevOps & Cloud Engineer
AWS | GCP | Kubernetes | Security | Automation
Sharing hands-on guides, best practices & real-world cloud solutions