When I first looked at Kubernetes, I saw a wall of YAML and acronyms. PVC, PV, StatefulSet, Endpoint — none of it connected. What helped was stepping back and mapping out what each piece actually does before trying to use any of it.
Here's how I think about it now.
A pod is the smallest unit
A Pod is the smallest thing Kubernetes runs. It's one or more containers running together on the same node, sharing the same network and storage.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: my-app
    image: nginx:latest
    ports:
    - containerPort: 80
```
In practice I rarely create Pods directly. They're disposable. If a Pod dies, nothing brings it back on its own.
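To make the "shared network" point concrete, here's a sketch of a two-container Pod. The sidecar name and image are illustrative; the point is that both containers share one network namespace, so the second container can reach nginx on localhost rather than over a separate IP.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-sidecar
spec:
  containers:
  - name: my-app
    image: nginx:latest
    ports:
    - containerPort: 80
  # Illustrative sidecar: shares the Pod's network namespace,
  # so it can curl the nginx container at localhost:80.
  - name: curl-sidecar
    image: curlimages/curl:latest
    command: ["sleep", "infinity"]
```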
A deployment keeps pods running
A Deployment manages Pods for me. I tell it what to run and how many replicas I want, and it keeps that many alive.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:latest
        ports:
        - containerPort: 80
```
If a Pod crashes, the Deployment notices and spins up a replacement. It also handles rolling updates, swapping old pods for new ones without downtime.
Deployments are for stateless apps. If the app doesn't need to remember anything between restarts, a Deployment is the right call.
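The rolling-update behavior can be tuned under `spec.strategy`. A sketch, with example values — these are real fields, but the numbers are just one reasonable choice:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow at most one extra pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
```

With `maxUnavailable: 0`, Kubernetes brings up each new pod before taking an old one down, which is what makes the zero-downtime swap possible.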
A StatefulSet is for apps with memory
StatefulSets are for apps that need stable identity: databases, queues, anything that writes to disk and needs to know who it is across restarts.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-db
spec:
  serviceName: my-db
  replicas: 3
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
      - name: my-db
        image: postgres:15
        ports:
        - containerPort: 5432
```
Unlike a Deployment, each pod in a StatefulSet gets a stable name: my-db-0, my-db-1, my-db-2. They start in order, and each one gets its own persistent storage that follows it around.
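The per-pod storage comes from `volumeClaimTemplates`, which stamps out one PVC per replica. A sketch of what could be added under the StatefulSet's `spec` (the template name and size are illustrative, and the container would also need a matching `volumeMounts` entry):

```yaml
spec:
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
```

Each replica gets its own claim, named after the template and the pod (`data-my-db-0`, `data-my-db-1`, and so on), which is what lets the storage follow a pod across restarts. The `serviceName` field, for its part, refers to a headless Service that gives each pod a stable DNS name.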
A service gives pods a stable address
Pods come and go. Their IP addresses change every time they restart. A Service gives me a stable endpoint that always points at the right pods.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
```
The selector is what connects it to the pods. It matches the labels on my Deployment, and that's how the Service knows where to send traffic.
There are a few Service types. ClusterIP is only reachable inside the cluster and is the default. NodePort exposes a port on every node. LoadBalancer provisions an external load balancer, usually through a cloud provider.
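For example, switching the Service above to NodePort would expose it on every node. The `nodePort` value here is illustrative; it has to fall within the cluster's node-port range (30000–32767 by default), and you can omit it to let Kubernetes pick one:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080  # illustrative; omit to let Kubernetes choose
```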
Endpoints are what actually back a service
This one confused me for a while. An Endpoints object is the list of pod IPs and ports that a Service routes to. Kubernetes manages it automatically based on which pods match the selector and are ready.
I don't create Endpoints manually, but I do inspect them:
kubectl get endpoints my-app-service
When a Service wasn't routing traffic, this was the first thing I checked. It told me immediately whether Kubernetes had found any pods to send traffic to.
PersistentVolume is the actual storage
A PersistentVolume is storage that exists independently of any pod. It could be a disk on the node, an NFS share, or a cloud disk. Kubernetes abstracts all of that behind a common interface.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/my-app
```
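Worth noting: `hostPath` ties the volume to one node's filesystem, so it's mainly useful for local testing. The same claim could instead be satisfied by, say, an NFS-backed PV — the server address and export path below are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany  # NFS can be mounted by many nodes at once
  nfs:
    server: nfs.example.com  # placeholder
    path: /exports/my-app    # placeholder
```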
PersistentVolumeClaim is how a pod requests it
A PVC is how a pod asks for storage. Instead of pointing directly at a PV, it says "I need 5Gi with ReadWriteOnce access" and Kubernetes finds a PV that fits.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
Then I reference the PVC in my pod:
```yaml
volumes:
- name: my-storage
  persistentVolumeClaim:
    claimName: my-pvc
```
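Putting the pieces together, a minimal Pod that uses the claim might look like this (the `mountPath` is just an example). Note the two halves: `volumes` declares the storage at the Pod level, while `volumeMounts` puts it inside a specific container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: nginx:latest
    volumeMounts:
    - name: my-storage
      mountPath: /data  # example path inside the container
  volumes:
  - name: my-storage
    persistentVolumeClaim:
      claimName: my-pvc
```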
The split between PV and PVC is intentional. Whoever runs the cluster provisions the storage. Whoever deploys the app requests it. Neither needs to know the details of what the other did.
How it stacks up
Ingress
└── Service (stable network endpoint)
└── Deployment / StatefulSet (manages pod lifecycle)
└── Pod (runs the container)
└── PVC → PV (persistent storage, if needed)
Endpoints sit between the Service and the Pods, managed automatically by Kubernetes. Once this hierarchy clicked, everything else started to feel like detail work rather than a new concept to learn.