Kubernetes has revolutionized how we deploy and manage infrastructure. But with great power comes great complexity. If you've ever struggled with Custom Resource Definitions (CRDs), hand-rolled controllers, or tangled YAML, you're not alone. This post sets the stage for KRO (Kube Resource Orchestrator)—a project from Google, Microsoft, and AWS—by walking you through how Kubernetes really works under the hood and why KRO is here to save your day.
How Kubernetes Works – The Control Plane
The Kubernetes control plane is the brain of the cluster. It consists of:
- API Server: The front door to the cluster. Everything talks to this.
- etcd: A key-value store that remembers the entire cluster state.
- Controller Manager: Ensures reality matches the desired state.
- Scheduler: Assigns pods to appropriate nodes.
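To see these components in play, consider what happens when you apply a simple Deployment: the API server validates the request and persists the object to etcd, the controller manager notices the new Deployment and creates the underlying Pods, and the scheduler assigns each Pod to a node. A minimal manifest (illustrative names):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.27
```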
Extending Kubernetes with CRDs
Kubernetes is declarative. You declare what you want, and the system figures out how to do it. But what if you want to define your own resource types?
That’s where CRDs (Custom Resource Definitions) come in.
apiVersion: my.example.com/v1
kind: App
metadata:
  name: demo
spec:
  release: 1.0.2
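Before you can create an App object like the one above, you register the type with the API server via a CustomResourceDefinition. A minimal CRD for this hypothetical App kind might look like:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: apps.my.example.com
spec:
  group: my.example.com
  names:
    kind: App
    plural: apps
    singular: app
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                release:
                  type: string
```

Once this CRD is applied, the API server accepts and stores App objects—but nothing acts on them yet.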
Why CRDs Alone Aren’t Enough
You need a controller (or "operator") to make CRDs come alive. A controller watches for objects of your custom kind (like the App above), then acts—e.g., provisioning infrastructure, updating apps, etc.
Problem? Controllers are hard to write.
You typically need:
- Code (often in Go)
- Frameworks like Kubebuilder or Operator SDK
- Deep knowledge of Kubernetes APIs
Common Pain Points:
- Overloaded API server from poorly written watches
- etcd bloat from unused CRDs
- Complex reconcilers and ordering logic
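At the heart of every controller is a reconcile loop: compare desired state against actual state and act on the difference. The sketch below is a toy, language-agnostic illustration using in-memory maps (not the real Kubernetes API or a controller framework), just to show the pattern:

```python
def reconcile(desired: dict, actual: dict) -> list[str]:
    """Toy reconcile loop: compute and apply the actions that
    bring `actual` in line with `desired`."""
    actions = []
    # Create or update anything missing or out of date.
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions.append(f"apply {name}")
            actual[name] = spec
    # Garbage-collect anything no longer desired.
    for name in list(actual):
        if name not in desired:
            actions.append(f"delete {name}")
            del actual[name]
    return actions
```

Real controllers must layer retries, rate limiting, caching, status reporting, and ordering on top of this loop—which is exactly where the complexity (and the API-server pain) comes from.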
Why Operators Like KCC, ACK, and ASO Aren’t Enough on Their Own
While operators like KCC (Google’s Config Connector), ACK (AWS Controllers for Kubernetes), and ASO (Azure Service Operator) let you manage cloud resources through Kubernetes CRDs, each is tightly coupled to its own cloud.
They’re great for cloud integration, but not for composing reusable APIs or simplifying multi-step infrastructure workflows.
That’s where KRO steps in—it’s the glue that turns raw CRDs into full-blown platform APIs.
Introducing KRO (Kube Resource Orchestrator)
KRO is a Kubernetes‑native, cloud‑agnostic framework that lets platform teams define bundles of resources as reusable APIs via a ResourceGraphDefinition (RGD). KRO helps you build a self-service developer platform that works across clouds by leveraging existing cloud operators (KCC, ACK, ASO).
Platform teams define defaults and hide complexity, letting developers consume a clean interface.
When applied, KRO:
- Dynamically creates a CRD for your API.
- Sets up a controller to manage instances of that CRD.
- Manages ordering and dependencies between resources, using CEL expressions to pass values and express conditions.
- Handles both Kubernetes-native and cloud resources (via ACK, KCC, ASO).
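As a concrete sketch (field names follow KRO’s v1alpha1 API; treat the exact schema syntax and names as illustrative), an RGD that bundles a Deployment and a Service behind a single WebApp API might look like:

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: webapp
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApp
    spec:
      name: string
      image: string | default="nginx"
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: ${schema.spec.image}
    - id: service
      template:
        apiVersion: v1
        kind: Service
        metadata:
          name: ${schema.spec.name}
        spec:
          selector:
            app: ${schema.spec.name}
          ports:
            - port: 80
```

The platform team sets defaults (like the image) once; developers never see the Deployment or Service internals.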
Example Use-cases:
- Provisioning a GKE/AKS/EKS cluster + IAM
- Deploying web apps combined with cloud databases and load balancers
How KRO Works: Architecture
- Define a ResourceGraphDefinition (RGD) – schema + resource templates + dependency definitions.
- KRO validates the RGD, creates the CRD, and deploys a micro‑controller for that custom type.
- A user instantiates your API object (like WebApp or GKECluster), and KRO generates underlying Kubernetes objects in the correct order.
- Lifecycle is managed automatically—updates, reconciliation, status reporting.
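For example, once a platform team has published a WebApp API through an RGD, a developer’s entire interface shrinks to a few lines (hypothetical field names, assuming the RGD exposes name and image):

```yaml
apiVersion: kro.run/v1alpha1
kind: WebApp
metadata:
  name: demo
spec:
  name: demo
  image: nginx:1.27
```

KRO expands this one object into the full graph of underlying resources, in dependency order, and keeps them reconciled.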
Conclusion
KRO builds on everything that makes Kubernetes powerful—and fixes what makes it painful. Before you dive into building with KRO, it's essential to understand the journey from kubectl to CRDs to controllers. Once you do, you’ll see why KRO isn’t just a new tool—it’s a new way of thinking about Kubernetes extensibility.