<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: laoshanxi</title>
    <description>The latest articles on DEV Community by laoshanxi (@laoshanxi).</description>
    <link>https://dev.to/laoshanxi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3701419%2F368feeea-c1ce-48a1-811c-4b9bee6b6eab.png</url>
      <title>DEV Community: laoshanxi</title>
      <link>https://dev.to/laoshanxi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/laoshanxi"/>
    <language>en</language>
    <item>
      <title>Dynamic Local Persistent Volumes on Kubernetes via Open Service Broker</title>
      <dc:creator>laoshanxi</dc:creator>
      <pubDate>Fri, 09 Jan 2026 00:59:05 +0000</pubDate>
      <link>https://dev.to/laoshanxi/dynamic-local-persistent-volumes-on-kubernetes-via-open-service-broker-3h11</link>
      <guid>https://dev.to/laoshanxi/dynamic-local-persistent-volumes-on-kubernetes-via-open-service-broker-3h11</guid>
      <description>&lt;p&gt;Shared storage works well for many workloads, but once latency and IO consistency start to matter, local disks become very attractive.&lt;/p&gt;

&lt;p&gt;Kubernetes supports Local Persistent Volumes (Local PVs), but with a big limitation:&lt;/p&gt;

&lt;p&gt;Local PVs must be statically provisioned.&lt;/p&gt;

&lt;p&gt;That makes them hard to use in dynamic environments where workloads are created on demand.&lt;/p&gt;

&lt;p&gt;We ran into this problem while trying to expose local storage through an Open Service Broker interface.&lt;/p&gt;

&lt;h2&gt;Why Static Local PVs Are a Problem&lt;/h2&gt;

&lt;p&gt;With static provisioning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PVs must exist before workloads request them&lt;/li&gt;
&lt;li&gt;Capacity planning becomes manual&lt;/li&gt;
&lt;li&gt;Automation pipelines break down&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For service brokers and self-service platforms, this is a non-starter. Users expect storage to be provisioned dynamically.&lt;/p&gt;

&lt;h2&gt;The Approach We Took&lt;/h2&gt;

&lt;p&gt;Instead of fighting Kubernetes’ design, we worked around it.&lt;/p&gt;

&lt;p&gt;The key idea was to separate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scheduling decisions (still done by Kubernetes)&lt;/li&gt;
&lt;li&gt;Disk creation (done on the target node)&lt;/li&gt;
&lt;/ul&gt;
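
&lt;p&gt;The only information the broker needs back from Kubernetes is that placement decision. Once the scheduler binds the dummy workload, its pod spec carries the chosen node in &lt;code&gt;spec.nodeName&lt;/code&gt;. A minimal Python sketch (the pod is shown as the plain dict the API returns; the helper name is ours, not from the project):&lt;/p&gt;

```python
def scheduled_node(pod):
    """Return the node a pod was placed on, or None if still pending.

    `pod` is a Pod object as returned by the Kubernetes API, represented
    here as a plain dict. Once the scheduler has bound the pod,
    spec.nodeName is populated; the broker can poll until it appears.
    """
    return pod.get("spec", {}).get("nodeName")

# An unscheduled pod has no nodeName yet; a bound one does.
pending = {"spec": {"containers": []}}
bound = {"spec": {"containers": [], "nodeName": "worker-2"}}
```

&lt;p&gt;Everything after this point (disk creation, PV generation) runs outside Kubernetes, keyed on that one field.&lt;/p&gt;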

&lt;h2&gt;The Provisioning Flow&lt;/h2&gt;

&lt;p&gt;At a high level, our workflow looked like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A service broker receives a request for local storage&lt;/li&gt;
&lt;li&gt;The broker submits a temporary “dummy” Kubernetes manifest with:
&lt;ul&gt;
&lt;li&gt;resource requirements&lt;/li&gt;
&lt;li&gt;node affinity&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes schedules the workload to a specific node&lt;/li&gt;
&lt;li&gt;Once the node is known, the broker:
&lt;ul&gt;
&lt;li&gt;remotely creates the local disk&lt;/li&gt;
&lt;li&gt;generates the corresponding Local PV object&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The real workload is deployed and bound to that PV&lt;/li&gt;
&lt;li&gt;When the service is deleted, the local disk is cleaned up&lt;/li&gt;
&lt;/ul&gt;
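
&lt;p&gt;The PV-generation step can be sketched as follows. The helper name and its parameters are hypothetical; the manifest shape follows the standard Kubernetes Local PV schema, with &lt;code&gt;nodeAffinity&lt;/code&gt; pinning the volume to the node the dummy workload landed on:&lt;/p&gt;

```python
# Hypothetical helper: given the node Kubernetes chose for the dummy
# workload, emit a Local PV manifest pinned to that node. Names of the
# storage class and arguments are illustrative, not from the project.
def build_local_pv(name, node, capacity_gi, local_path):
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": name},
        "spec": {
            "capacity": {"storage": f"{capacity_gi}Gi"},
            "accessModes": ["ReadWriteOnce"],
            "persistentVolumeReclaimPolicy": "Retain",
            "storageClassName": "local-storage",
            # "local" is what makes this a Local PV: it points at a path
            # that must already exist on the target node.
            "local": {"path": local_path},
            # Local PVs require nodeAffinity, so the scheduler only binds
            # pods that land on the node actually holding the disk.
            "nodeAffinity": {
                "required": {
                    "nodeSelectorTerms": [{
                        "matchExpressions": [{
                            "key": "kubernetes.io/hostname",
                            "operator": "In",
                            "values": [node],
                        }]
                    }]
                }
            },
        },
    }

pv = build_local_pv("svc-123-data", "worker-2", 20, "/mnt/disks/svc-123")
```

&lt;p&gt;The broker applies this object right after creating the disk, so by the time the real workload is submitted, the PV already exists and binds normally.&lt;/p&gt;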

&lt;p&gt;This gave us something that felt like dynamic provisioning, even though Local PVs remain static under the hood.&lt;/p&gt;

&lt;h2&gt;Why This Worked&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes still decides placement&lt;/li&gt;
&lt;li&gt;Disk creation happens only where needed&lt;/li&gt;
&lt;li&gt;No pre-provisioning of unused capacity&lt;/li&gt;
&lt;li&gt;Storage lifecycle is tied to the service instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s not as elegant as a CSI driver, but for on-prem and hybrid clusters, it proved to be a practical solution.&lt;/p&gt;

&lt;h2&gt;Trade-offs and Lessons Learned&lt;/h2&gt;

&lt;p&gt;There are trade-offs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requires node-level access&lt;/li&gt;
&lt;li&gt;Cleanup must be handled carefully&lt;/li&gt;
&lt;li&gt;Failure paths need extra attention&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But in exchange, we got predictable performance and a much better developer experience for stateful workloads.&lt;/p&gt;

&lt;h2&gt;When This Pattern Makes Sense&lt;/h2&gt;

&lt;p&gt;This approach works best when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you control the cluster&lt;/li&gt;
&lt;li&gt;IO performance matters&lt;/li&gt;
&lt;li&gt;cloud block storage isn’t an option&lt;/li&gt;
&lt;li&gt;service brokers are part of your platform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For many internal platforms, this turned out to be “good enough” — and far better than manual PV management.&lt;/p&gt;

&lt;h2&gt;Open Source Implementation&lt;/h2&gt;

&lt;p&gt;We documented and open-sourced this approach as part of a larger platform project:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/laoshanxi/app-mesh/blob/main/docs/source/success/open_service_broker_support_local_pv_for_K8S.md" rel="noopener noreferrer"&gt;https://github.com/laoshanxi/app-mesh/blob/main/docs/source/success/open_service_broker_support_local_pv_for_K8S.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’ve built dynamic storage workflows around Local PVs (or decided not to), I’d love to hear what worked — and what didn’t.&lt;/p&gt;

</description>
      <category>containers</category>
      <category>kubernetes</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Running Native (Non-Container) Workloads on Kubernetes: A Practical Experiment</title>
      <dc:creator>laoshanxi</dc:creator>
      <pubDate>Fri, 09 Jan 2026 00:51:17 +0000</pubDate>
      <link>https://dev.to/laoshanxi/running-native-non-container-workloads-on-kubernetes-a-practical-experiment-12</link>
      <guid>https://dev.to/laoshanxi/running-native-non-container-workloads-on-kubernetes-a-practical-experiment-12</guid>
      <description>&lt;p&gt;Kubernetes is excellent at orchestrating containers. But every now and then, you run into workloads that simply don’t fit well into the container model.&lt;/p&gt;

&lt;p&gt;In our case, we had several native binaries and host-level tools that needed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;run on specific nodes&lt;/li&gt;
&lt;li&gt;access host resources directly&lt;/li&gt;
&lt;li&gt;integrate with existing CI/CD pipelines&lt;/li&gt;
&lt;li&gt;follow Kubernetes-style retries and lifecycle management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Containerizing them felt forced. Privileged containers introduced security concerns, and tightly coupling containers to the host defeated the purpose of abstraction.&lt;/p&gt;

&lt;p&gt;So we tried a different approach.&lt;/p&gt;

&lt;h2&gt;The Problem with “Just Containerize It”&lt;/h2&gt;

&lt;p&gt;In theory, everything can be containerized. In practice, that often means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;privileged mode&lt;/li&gt;
&lt;li&gt;direct host mounts&lt;/li&gt;
&lt;li&gt;fragile assumptions about the host environment&lt;/li&gt;
&lt;li&gt;unclear ownership when jobs fail or restart&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, Kubernetes is mostly being used as a scheduler and lifecycle tracker, not as an isolation boundary.&lt;/p&gt;

&lt;p&gt;We wanted to keep the good parts of Kubernetes — Jobs, retries, observability — without forcing native workloads into an unnatural container shape.&lt;/p&gt;

&lt;h2&gt;The Core Idea&lt;/h2&gt;

&lt;p&gt;Instead of running the workload inside the container, we flipped the model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes Jobs are still the scheduling primitive&lt;/li&gt;
&lt;li&gt;The container acts as a thin command forwarder&lt;/li&gt;
&lt;li&gt;The actual workload runs as a native OS process on the node&lt;/li&gt;
&lt;li&gt;From Kubernetes’ perspective, nothing unusual is happening:
&lt;ul&gt;
&lt;li&gt;Jobs start&lt;/li&gt;
&lt;li&gt;Jobs finish&lt;/li&gt;
&lt;li&gt;Exit codes are recorded&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Under the hood, the Job lifecycle is mapped to a host-level process.&lt;/p&gt;
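
&lt;p&gt;The essence of that mapping is exit-code mirroring: whatever status the native process ends with becomes the container’s status, which is all the Job controller ever sees. A minimal Python sketch, with &lt;code&gt;subprocess&lt;/code&gt; standing in for the real call to the node-local agent:&lt;/p&gt;

```python
import subprocess
import sys

def run_and_mirror(command):
    """Run the native command and return its exit code unchanged.

    In the real setup the forwarder container would ask a node-local
    agent to run the process; subprocess is a stand-in here so the idea
    is runnable. Passing the code through verbatim is what lets the Job
    controller treat a host process like any container workload.
    """
    result = subprocess.run(command)
    return result.returncode

if __name__ == "__main__" and len(sys.argv) > 1:
    # The forwarder container's entrypoint ends like this, so the pod
    # (and therefore the Job) reports the native process's status.
    sys.exit(run_and_mirror(sys.argv[1:]))
```

&lt;p&gt;Because the exit code is propagated untouched, Job-level retries and failure policies work without any special casing.&lt;/p&gt;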

&lt;h2&gt;How It Works (High-Level)&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A lightweight agent runs on each node, exposing a local control interface&lt;/li&gt;
&lt;li&gt;A Kubernetes Job starts a small container&lt;/li&gt;
&lt;li&gt;That container forwards the command to the local agent&lt;/li&gt;
&lt;li&gt;The agent launches and monitors the native process&lt;/li&gt;
&lt;li&gt;Job success or failure reflects the process exit code&lt;/li&gt;
&lt;/ul&gt;
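
&lt;p&gt;On the Kubernetes side, the steps above need nothing exotic: the Job manifest stays completely ordinary, only the container command changes. A hypothetical sketch of such a Job, built as a plain Python dict (the image name, the &lt;code&gt;agent-cli&lt;/code&gt; command, and its flags are invented for illustration):&lt;/p&gt;

```python
# Hypothetical: a normal Kubernetes Job whose container does nothing
# but forward the real command to a node-local agent.
def forwarder_job(name, node, command):
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "backoffLimit": 2,  # Kubernetes still owns retries
            "template": {
                "spec": {
                    "nodeName": node,  # the native process must run here
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "forwarder",
                        "image": "example/agent-forwarder:latest",
                        # Thin forwarder: hand the command to the local
                        # agent and wait, mirroring its exit code.
                        "command": ["agent-cli", "run", "--wait", "--"] + command,
                    }],
                }
            },
        },
    }

job = forwarder_job("backup-42", "node-a", ["/usr/local/bin/backup.sh", "--full"])
```

&lt;p&gt;Everything the cluster records about this Job (status, retries, logs of the forwarder) is real; only the work itself happens outside the container.&lt;/p&gt;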

&lt;p&gt;This keeps Kubernetes in control of when and where things run, while the host controls how they run.&lt;/p&gt;

&lt;h2&gt;What Worked Well&lt;/h2&gt;

&lt;p&gt;This approach gave us a few practical wins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No privileged containers&lt;/li&gt;
&lt;li&gt;Native tools run exactly as they expect&lt;/li&gt;
&lt;li&gt;Kubernetes still provides retries, logs, and status&lt;/li&gt;
&lt;li&gt;CI/CD pipelines remain unchanged&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For legacy tooling or migration phases, this turned out to be surprisingly effective.&lt;/p&gt;

&lt;h2&gt;What Was Hard&lt;/h2&gt;

&lt;p&gt;The hardest part wasn’t execution — it was lifecycle correctness.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node restarts&lt;/li&gt;
&lt;li&gt;Job retries&lt;/li&gt;
&lt;li&gt;Partial failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these can leave orphaned processes behind if ownership isn’t carefully designed. We ended up treating Kubernetes Jobs as lifecycle signals, while enforcing stricter cleanup logic on the host side.&lt;/p&gt;
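
&lt;p&gt;That host-side cleanup can start as simply as a pidfile sweep. A hypothetical sketch: the agent records each launched process as a pidfile, and a periodic pass drops entries whose process is gone (killing entries whose owning Job has disappeared is the other half, elided here):&lt;/p&gt;

```python
import os

def sweep_pidfiles(pid_dir):
    """Remove pidfiles whose process has exited; return surviving PIDs.

    Hypothetical cleanup pass: the agent writes a "(job name).pid" file
    when it launches a native process. Signal 0 probes liveness without
    actually signalling the process.
    """
    alive = []
    for name in os.listdir(pid_dir):
        if not name.endswith(".pid"):
            continue
        path = os.path.join(pid_dir, name)
        with open(path) as f:
            pid = int(f.read().strip())
        try:
            os.kill(pid, 0)       # raises if the process is gone
            alive.append(pid)
        except ProcessLookupError:
            os.remove(path)       # orphaned record: process already exited
    return alive
```

&lt;p&gt;The important property is that the host, not Kubernetes, is the source of truth for what is actually running.&lt;/p&gt;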

&lt;p&gt;It’s not a perfect abstraction — but it’s an honest one.&lt;/p&gt;

&lt;h2&gt;When This Pattern Makes Sense&lt;/h2&gt;

&lt;p&gt;This isn’t a replacement for containers. It works best when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;workloads are hard to containerize&lt;/li&gt;
&lt;li&gt;host-level access is unavoidable&lt;/li&gt;
&lt;li&gt;you want Kubernetes semantics without container overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For fully cloud-native services, containers are still the right answer. For everything else, this can be a pragmatic bridge.&lt;/p&gt;

&lt;h2&gt;Open Source Implementation&lt;/h2&gt;

&lt;p&gt;We eventually open-sourced the tooling we built around this pattern, since it kept repeating across teams:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/laoshanxi/app-mesh/blob/main/docs/source/success/kubernetes_run_native_application.md" rel="noopener noreferrer"&gt;https://github.com/laoshanxi/app-mesh/blob/main/docs/source/success/kubernetes_run_native_application.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m curious how others approach native workloads in Kubernetes — especially in environments with frequent node churn.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
