Shared storage works well for many workloads, but once latency and IO consistency start to matter, local disks become very attractive.
Kubernetes supports Local Persistent Volumes (Local PVs), but with a big limitation:
Local PVs must be statically provisioned.
That makes them hard to use in dynamic environments where workloads are created on demand.
We ran into this problem while trying to expose local storage through an Open Service Broker interface.
Why Static Local PVs Are a Problem
With static provisioning:
- PVs must exist before workloads request them
- Capacity planning becomes manual
- Automation pipelines break down
For service brokers and self-service platforms, this is a non-starter. Users expect storage to be provisioned dynamically.
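To make that concrete, a statically provisioned Local PV looks roughly like the sketch below (the node name, path, and size are illustrative). Both this object and the disk behind it must exist up front, before any workload can claim it:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node-1          # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1        # the disk must already exist at this path
  nodeAffinity:                  # required for Local PVs
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1         # pinned to one specific node
```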
The Approach We Took
Instead of fighting Kubernetes’ design, we worked around it.
The key idea was to separate:
- Scheduling decisions (still done by Kubernetes)
- Disk creation (done on the target node)
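Local PVs in a setup like this typically hang off a StorageClass with no provisioner; `volumeBindingMode: WaitForFirstConsumer` additionally delays volume binding until a consuming pod is scheduled. A minimal sketch (the class name `local-storage` is our choice):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner  # no dynamic provisioning for local disks
volumeBindingMode: WaitForFirstConsumer    # bind only once a pod is scheduled
```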
The Provisioning Flow
At a high level, our workflow looked like this:
- A service broker receives a request for local storage
- The broker submits a temporary “dummy” Kubernetes manifest (sketched after this list) with:
  - resource requirements
  - node affinity
- Kubernetes schedules the workload to a specific node
- Once the node is known, the broker:
  - remotely creates the local disk
  - generates the corresponding Local PV object (the static example above, with the hostname now filled in)
- The real workload is deployed and bound to that PV
- When the service is deleted, the local disk is cleaned up
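A minimal sketch of the “dummy” manifest from step 2. The pause image and the node-label key are assumptions; the point is that the probe carries the same resource requests and affinity constraints as the real workload, so Kubernetes makes a realistic placement decision:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: placement-probe-abc123   # hypothetical; one probe per service instance
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: example.com/local-ssd   # assumed node label marking disk-capable nodes
                operator: In
                values: ["true"]
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9   # does nothing; exists only to be scheduled
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
```

Once the scheduler binds the probe, its `spec.nodeName` reveals the target node; the broker creates the disk there, writes the PV, and deletes the probe.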
This gave us something that felt like dynamic provisioning, even though Local PVs remain static under the hood.
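For the final binding step, one way to make the claim deterministic is to set `spec.volumeName` so it binds directly to the broker-generated PV (names here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: svc-abc123-data          # hypothetical; derived from the service instance
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  volumeName: local-pv-node-1    # bind directly to the broker-generated PV
  resources:
    requests:
      storage: 10Gi
```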
Why This Worked
- Kubernetes still decides placement
- Disk creation happens only where needed
- No pre-provisioning of unused capacity
- Storage lifecycle is tied to the service instance
It’s not as elegant as a CSI driver, but for on-prem and hybrid clusters, it proved to be a practical solution.
Trade-offs and Lessons Learned
There are trade-offs:
- Requires node-level access
- Cleanup must be handled carefully
- Failure paths need extra attention
But in exchange, we got predictable performance and a much better developer experience for stateful workloads.
When This Pattern Makes Sense
This approach works best when:
- you control the cluster
- IO performance matters
- cloud block storage isn’t an option
- service brokers are part of your platform
For many internal platforms, this turned out to be “good enough” — and far better than manual PV management.
Open Source Implementation
We documented and open-sourced this approach as part of a larger platform project.
If you’ve built dynamic storage workflows around Local PVs (or decided not to), I’d love to hear what worked — and what didn’t.