Running Native (Non-Container) Workloads on Kubernetes: A Practical Experiment

Kubernetes is excellent at orchestrating containers. But every now and then, you run into workloads that simply don’t fit well into the container model.

In our case, we had several native binaries and host-level tools that needed to:

  • run on specific nodes
  • access host resources directly
  • integrate with existing CI/CD pipelines
  • follow Kubernetes-style retries and lifecycle management

Containerizing them felt forced. Privileged containers introduced security concerns, and tightly coupling containers to the host defeated the purpose of abstraction.

So we tried a different approach.

The Problem with “Just Containerize It”

In theory, everything can be containerized. In practice, that often means:

  • privileged mode
  • direct host mounts
  • fragile assumptions about the host environment
  • unclear ownership when jobs fail or restart

At that point, Kubernetes is mostly being used as a scheduler and lifecycle tracker, not as an isolation boundary.

We wanted to keep the good parts of Kubernetes — Jobs, retries, observability — without forcing native workloads into an unnatural container shape.

The Core Idea

Instead of running the workload inside the container, we flipped the model:

  • Kubernetes Jobs are still the scheduling primitive
  • The container acts as a thin command forwarder
  • The actual workload runs as a native OS process on the node

From Kubernetes’ perspective, nothing unusual is happening:

  • Jobs start
  • Jobs finish
  • Exit codes are recorded

Under the hood, the Job lifecycle is mapped to a host-level process.
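
To make that shape concrete, here is a rough sketch of what such a Job could look like, written with the official Kubernetes Python client. The forwarder image, node label, and wrapped command are placeholders for illustration, not the actual App Mesh tooling.

```python
# Sketch only: a Job whose single container is the thin forwarder.
from kubernetes import client, config


def submit_native_job():
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster

    # The only container in the Pod is the thin forwarder; the real work
    # happens as a host process on the selected node.
    forwarder = client.V1Container(
        name="forwarder",
        image="example/native-forwarder:latest",        # hypothetical forwarder image
        command=["forward"],
        args=["/usr/local/bin/backup-tool", "--full"],   # native command to run on the host
    )

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="native-backup"),
        spec=client.V1JobSpec(
            backoff_limit=2,  # Kubernetes still owns retries
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    node_selector={"native-workloads": "enabled"},  # pin to nodes running the agent
                    containers=[forwarder],
                )
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```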

How It Works (High-Level)

  • A lightweight agent runs on each node, exposing a local control interface
  • A Kubernetes Job starts a small container
  • That container forwards the command to the local agent
  • The agent launches and monitors the native process
  • Job success or failure reflects the process exit code

This keeps Kubernetes in control of when and where things run, while the host controls how they run.
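
For illustration, here is a minimal sketch of both halves of that round trip: a node-local agent that launches and monitors the native process, and the forwarder logic the thin container runs. The /run endpoint, port, and JSON payload are invented for this example rather than App Mesh’s actual API, and a real forwarder would reach the agent over the node address or a host-mounted socket instead of localhost.

```python
# bridge.py - minimal sketch of the agent and the forwarder in one file.
import json
import subprocess
import sys
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

AGENT_ADDR = ("127.0.0.1", 6059)  # hypothetical node-local control port


class AgentHandler(BaseHTTPRequestHandler):
    """Node-local agent: launch the native process and report its exit code."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Run the command as a plain host process: no image, no mounts, no namespaces.
        proc = subprocess.run(payload["command"])
        body = json.dumps({"exit_code": proc.returncode}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def forward(command):
    """What the thin container does: hand the command to the agent, mirror its exit code."""
    req = urllib.request.Request(
        f"http://{AGENT_ADDR[0]}:{AGENT_ADDR[1]}/run",
        data=json.dumps({"command": command}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        exit_code = json.loads(resp.read())["exit_code"]
    # The Job's success or failure now reflects the native process result.
    sys.exit(exit_code)


if __name__ == "__main__":
    if sys.argv[1:2] == ["agent"]:
        HTTPServer(AGENT_ADDR, AgentHandler).serve_forever()
    else:
        # e.g. `python bridge.py forward /usr/local/bin/backup-tool --full`
        forward(sys.argv[2:])
```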

What Worked Well

This approach gave us a few practical wins:

  • No privileged containers
  • Native tools run exactly as they expect
  • Kubernetes still provides retries, logs, and status
  • CI/CD pipelines remain unchanged

For legacy tooling or migration phases, this turned out to be surprisingly effective.

What Was Hard

The hardest part wasn’t execution — it was lifecycle correctness.

  • Node restarts
  • Job retries
  • Partial failures

All of these can leave orphaned processes behind if ownership isn’t carefully designed. We ended up treating Kubernetes Jobs as lifecycle signals, while enforcing stricter cleanup logic on the host side.
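
As one concrete sketch of that host-side cleanup, assuming the agent (or a small reaper running next to it) can reach the API server: tag every launched process with the Job that owns it, and periodically terminate anything whose Job no longer exists. The registry shape and polling interval below are invented for illustration, not the actual implementation.

```python
# reaper.py - sketch of a host-side cleanup loop for orphaned native processes.
import os
import signal
import time

from kubernetes import client, config
from kubernetes.client.rest import ApiException

# pid -> (namespace, job_name), recorded by the agent when it launches a process.
OWNED = {}


def job_exists(batch, namespace, name):
    try:
        batch.read_namespaced_job(name=name, namespace=namespace)
        return True
    except ApiException as exc:
        if exc.status == 404:
            return False
        raise  # transient API errors must not trigger a kill


def reap_orphans(batch):
    for pid, (namespace, job_name) in list(OWNED.items()):
        if job_exists(batch, namespace, job_name):
            continue
        try:
            # The owning Job is gone (deleted or TTL-expired): stop the host
            # process so it cannot linger as an orphan.
            os.kill(pid, signal.SIGTERM)
        except ProcessLookupError:
            pass  # already exited
        OWNED.pop(pid, None)


if __name__ == "__main__":
    config.load_kube_config()  # on a node: a dedicated kubeconfig or in-cluster config
    batch = client.BatchV1Api()
    while True:
        reap_orphans(batch)
        time.sleep(30)
```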

It’s not a perfect abstraction — but it’s an honest one.

When This Pattern Makes Sense

This isn’t a replacement for containers. It works best when:

  • workloads are hard to containerize
  • host-level access is unavoidable
  • you want Kubernetes semantics without container overhead

For fully cloud-native services, containers are still the right answer. For everything else, this can be a pragmatic bridge.

Open Source Implementation

We eventually open-sourced the tooling we built around this pattern, since the same need kept coming up across teams:

👉 https://github.com/laoshanxi/app-mesh/blob/main/docs/source/success/kubernetes_run_native_application.md

I’m curious how others approach native workloads in Kubernetes — especially in environments with frequent node churn.
