https://www.youtube.com/watch?v=yqXtFKrsR6Q
Most people learn Kubernetes as a parts list. API server. Scheduler. Controller manager. Kubelet. Etcd. Kube-proxy. Once the names are memorized, the parts list never collapses into a system. You can recite the components, but you can't predict the cluster's behavior.
Forget the parts list.
Kubernetes is one idea, repeated. Once you see it, every component becomes the same shape applied to a different object.
What Kubernetes is for
Docker runs a container on your laptop. One container, one machine, you watch it yourself. Production isn't that. Production is many containers across many machines. Some crash. Some hang. Some get evicted because their host ran out of memory. You want three replicas of your service running on different nodes, restarting forever, no matter what fails.
That's the bridge problem. Someone has to watch what's running, compare it to what you wanted, and fix the gap. Kubernetes is that someone.
Pods, not containers
A Pod is not a container. A Pod contains containers. They share a network namespace, which means they share an IP and a localhost. They can also share storage volumes. Most pods have one container — but the abstraction is plural by design. Sometimes you want a sidecar: a logging agent next to your app, a service-mesh proxy intercepting traffic, two processes that pretend to live on the same tiny machine.
The trick that holds a pod together is a sixty-line C program called pause. It does nothing — it just holds the namespaces alive so the real containers can come, go, and crash without taking the IP with them.
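The real pause lives in the Kubernetes source tree and is written in C. Here is a rough Go equivalent, an illustrative sketch rather than the actual program, to show just how little it does: the process exists purely so the namespaces have something to hang off. (The real one also reaps zombie processes when it runs as PID 1 of the pod.)

```go
// A toy stand-in for the pause container (the real one is C).
// Its only job is to exist: as long as this process runs, the pod's shared
// network namespace, and therefore its IP, stays alive.
package main

import (
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Sleep forever, consuming no CPU, until the kubelet tears the pod down.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)
	<-stop
}
```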
The unit Kubernetes schedules is a pod. Every loop you're about to see is watching pods.
A controller is a thermostat
This is the whole video.
A thermostat reads what you want — sixty-eight degrees. It reads what the room is — sixty-five. If they don't match, it does one thing in the right direction. Then it reads again. That's a control loop.
A Kubernetes controller does the same thing. You declare what you want — three replicas of a web service. The actual state is being measured: there are two pods running. The controller sees the gap, creates one pod, then loops.
An autopilot works the same way. If the autopilot software restarts mid-flight, it doesn't replay every adjustment it ever made. It looks at where the plane is right now, where it should be, closes the gap. That's level-triggered, not edge-triggered. Controllers react to current state, not to the history of how things got there.
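As a sketch, using toy types rather than the real controller-manager code, the loop is almost embarrassingly small: read desired, read actual, take one step in the right direction, repeat. Notice what is missing: there is no event history anywhere.

```go
// A toy, level-triggered reconcile loop. Cluster is a hypothetical stand-in
// for the API server; real controllers use watches instead of polling,
// but the shape is the same.
package main

import (
	"fmt"
	"time"
)

type Cluster struct {
	DesiredReplicas int      // what you declared
	RunningPods     []string // what actually exists
}

func reconcile(c *Cluster) {
	gap := c.DesiredReplicas - len(c.RunningPods)
	switch {
	case gap > 0: // too few: create exactly one pod, then re-observe
		pod := fmt.Sprintf("web-%d", time.Now().UnixNano())
		c.RunningPods = append(c.RunningPods, pod)
		fmt.Println("created", pod)
	case gap < 0: // too many: delete one, then re-observe
		fmt.Println("deleted", c.RunningPods[len(c.RunningPods)-1])
		c.RunningPods = c.RunningPods[:len(c.RunningPods)-1]
	default:
		// Converged. Nothing to do, and crucially, nothing to remember.
	}
}

func main() {
	c := &Cluster{DesiredReplicas: 3, RunningPods: []string{"web-a", "web-b"}}
	for i := 0; i < 5; i++ { // the real loop never exits
		reconcile(c)
		time.Sleep(10 * time.Millisecond)
	}
	fmt.Println("running:", c.RunningPods)
}
```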
This is why a Kubernetes cluster is ridiculously good at recovering from failures. Every controller can be restarted. Every component can crash and reboot. The state lives outside them.
Every part of Kubernetes is one of these loops. Different objects, same shape.
The control plane
So how does the loop move information around?
There's one hub: the API server. Every component in Kubernetes talks to it. Nothing talks to anything else directly. The scheduler doesn't call the kubelet. They both talk to the API server, and they both watch the API server for changes.
Behind the API server sits etcd, a distributed key-value store that reaches consensus through Raft. The rule: nothing else touches etcd. The API server is the only component that opens that connection.
When a controller wants to know what to do, it doesn't poll. It opens a watch on the resources it cares about — a streaming endpoint — and the API server pushes every change through it: ADDED, MODIFIED, DELETED.
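You can see this stream with nothing more than an HTTP client. A minimal sketch, assuming `kubectl proxy` is running locally on its default port 8001 so authentication is already handled:

```go
// Stream pod changes from the API server as they happen.
// Assumes `kubectl proxy` is serving the API on 127.0.0.1:8001.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// watchEvent mirrors the shape of each document the watch endpoint streams:
// {"type":"ADDED","object":{...}}
type watchEvent struct {
	Type   string `json:"type"` // ADDED, MODIFIED, DELETED
	Object struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
	} `json:"object"`
}

func main() {
	resp, err := http.Get("http://127.0.0.1:8001/api/v1/namespaces/default/pods?watch=true")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	dec := json.NewDecoder(resp.Body) // one JSON document per event, streamed
	for {
		var ev watchEvent
		if err := dec.Decode(&ev); err != nil {
			break // connection closed; a real controller would re-list and re-watch
		}
		fmt.Println(ev.Type, ev.Object.Metadata.Name)
	}
}
```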
There's no orchestrator orchestrating. There's a hub, a stream, and a lot of independent loops, all listening.
The scheduler doesn't run anything
Most people, including most people who've used Kubernetes for years, believe the scheduler runs pods. It does not. This is the single most common Kubernetes misconception.
The scheduler is a pure function:
- Inputs: a pending pod, the list of nodes
- Output: a node binding
It scores every node on memory, CPU, taints, affinities. Thirteen filter plugins narrow the candidates; thirteen score plugins rank them zero to a hundred. The winner gets picked. Then the scheduler does one thing: it writes a record back to the API server saying this pod belongs to this node.
That's it. Picks. Writes. Done.
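A toy version of that pure function, with invented types and a single scoring rule instead of the real plugins, makes the point: the only output is a name, recorded, not a process started.

```go
// A toy scheduler: filter, score, pick, record the decision. The real
// scheduler runs its filter/score plugins and then writes a Binding object
// back to the API server; it never talks to a node.
package main

import "fmt"

type Node struct {
	Name    string
	FreeMiB int
	Tainted bool
}

type Pod struct {
	Name       string
	RequestMiB int
}

// schedule is a pure function: pod plus nodes in, chosen node name out.
func schedule(p Pod, nodes []Node) (string, bool) {
	best, bestScore := "", -1
	for _, n := range nodes {
		if n.Tainted || n.FreeMiB < p.RequestMiB { // filter phase
			continue
		}
		if score := n.FreeMiB; score > bestScore { // score phase (toy rule: most free memory)
			best, bestScore = n.Name, score
		}
	}
	return best, best != ""
}

func main() {
	nodes := []Node{{"node-a", 512, false}, {"node-b", 4096, false}, {"node-c", 8192, true}}
	pod := Pod{Name: "web-7f9", RequestMiB: 1024}
	if node, ok := schedule(pod, nodes); ok {
		// The only side effect: record "this pod belongs to this node".
		fmt.Printf("bind %s -> %s\n", pod.Name, node)
	}
}
```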
Who runs the container? The kubelet. Every node runs a kubelet, and the kubelet is also watching the API server. When it sees a binding for a pod assigned to its node, it pulls the spec, calls the container runtime, and starts the workload.
Two loops. One picks where; the other runs what was picked. Neither calls the other.
Networking, briefly
Two ideas you have to know.
Every pod gets a real IP. No NAT. No port mapping. The IP that pod A sees as its own is the same IP every other pod sees for it. No translation. That's a stronger promise than Docker gives you on a single host.
A Service is not a real machine. It's a fake IP that exists only as iptables or IPVS rules. A daemon called kube-proxy maintains those rules on every node, watching the API server for changes. A Service IP isn't an address — it's a routing intention.
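As a mental model, here is that rule set reduced to a toy lookup table rather than actual iptables or IPVS entries: traffic aimed at the virtual address gets rewritten to one of the real pod IPs behind it. The addresses below are made up.

```go
// A toy model of what kube-proxy programs into the kernel: the Service IP
// exists only as a lookup key, never as an interface on any machine.
package main

import (
	"fmt"
	"math/rand"
)

// rules maps a Service's virtual ClusterIP:port to the real pod IPs behind it.
// kube-proxy keeps this in sync by watching Services and their endpoints.
var rules = map[string][]string{
	"10.96.0.42:80": {"10.244.1.7:8080", "10.244.2.3:8080", "10.244.3.9:8080"},
}

// resolve is the rewrite step: swap the virtual destination for a real one.
func resolve(virtual string) (string, bool) {
	backends, ok := rules[virtual]
	if !ok || len(backends) == 0 {
		return "", false
	}
	return backends[rand.Intn(len(backends))], true
}

func main() {
	if real, ok := resolve("10.96.0.42:80"); ok {
		fmt.Println("10.96.0.42:80 ->", real)
	}
}
```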
Operators are just controllers you write
Operators sound exotic. The Postgres operator. The Kafka operator. The Elasticsearch operator. They aren't.
An operator is a controller you wrote. That's the entire definition. Most of the time, an operator is just a pod running a custom controller.
You define a Custom Resource Definition. You write YAML that describes what you want — a PostgresCluster with three replicas, daily backups. You apply it. Then you write the controller: same shape as every other Kubernetes controller. It watches PostgresClusters, sees yours appear, reads the desired state, reads the actual state, computes the next step, loops.
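A sketch of that controller's core, with hypothetical types and no real client-go, is the thermostat again, just pointed at a different object:

```go
// A toy reconcile for a hypothetical PostgresCluster custom resource.
// Same shape as the generic loop earlier; only the object changed.
package main

import "fmt"

type PostgresCluster struct {
	Name     string
	Replicas int  // desired state, from the YAML you applied
	Backups  bool // daily backups requested?
}

type Observed struct {
	RunningReplicas int
	BackupCronJob   bool
}

// reconcile computes the single next step toward the desired state.
func reconcile(want PostgresCluster, got Observed) string {
	switch {
	case got.RunningReplicas < want.Replicas:
		return fmt.Sprintf("create postgres pod %d/%d", got.RunningReplicas+1, want.Replicas)
	case got.RunningReplicas > want.Replicas:
		return "delete one postgres pod"
	case want.Backups && !got.BackupCronJob:
		return "create daily backup CronJob"
	default:
		return "converged; nothing to do"
	}
}

func main() {
	want := PostgresCluster{Name: "mydb", Replicas: 3, Backups: true}
	got := Observed{RunningReplicas: 2, BackupCronJob: false}
	fmt.Println(reconcile(want, got))
}
```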
The pattern is identical to the Deployment controller. To the Job controller. To kube-proxy. To the scheduler. They're all the same machine. Same loop. Same shape. Different objects.
Once that lands, there's no parts list. There's one pattern, applied everywhere. That's Kubernetes.
The honest tradeoff
Kubernetes is easy to use after some exposure. It's also super hard to set up.
Cluster setup is a yak shave. SELinux. Kernel parameters. Container runtime versions. Image registries. Certificates. Networking plugins, each with their own opinions. The kind of small papercuts that compound until you've spent a week on what a Docker container did in five seconds.
Networking is the worst part. Most production K8s outages start there.
Kubernetes wins when you have many services and many machines and you actually need declarative ops. The control loops earn their complexity. It loses when one server runs your app fine — when the system you'd build to manage Kubernetes is more code than the system Kubernetes was supposed to manage.
Use it when it earns itself. Don't use it because it's the default.
So what
The next time a Deployment is stuck, don't open the parts list. Find the controller. Ask what state it wants. Ask what state it's seeing. Find the gap, and find the reason it isn't closing.
It's a control loop. All the way down.