February 2026 · kindling-sh/kindling
We just shipped v0.6.0 of kindling, and the headline feature is kindling sync — live hot reload for any language running in a Kubernetes pod. This post walks through how it works, what it replaces, and where the project is headed.
The Problem
If you're building on Kubernetes, your development loop probably looks like this:
- Edit code
- Build a container image
- Push to a registry
- Wait for a rollout
- Check logs
- Repeat
Even with local tools, steps 2–4 take 30–90 seconds. Multiply that by the number of iterations in a session and you lose hours per week to build latency. Docker Compose sidesteps Kubernetes entirely, but then you're not testing against real Services, Ingress, RBAC, or any of the infrastructure your app actually runs on.
Kindling takes a different approach: keep the full Kubernetes environment, but make the feedback loop sub-second.
How kindling sync Works
The sync command watches your local source directory using fsnotify, debounces changes (default 500ms), and copies modified files into the running container via kubectl cp. That part is straightforward. The interesting part is what happens next.
Runtime Detection
When you run kindling sync -d my-api --restart, the CLI reads /proc/1/cmdline from the target container to identify the running process. It matches the process name against a table of 30+ known runtimes and selects a restart strategy:
$ kindling sync -d my-api --restart
✓ Detected runtime: Python (uvicorn)
✓ Strategy: signal reload (SIGHUP)
⚡ Watching /Users/you/src/my-api → /app in pod my-api-6f8b9c4d7-x2k9p
The detection is based on the actual PID 1 binary, not filenames or heuristics. If the container runs uvicorn, kindling sends SIGHUP. If it runs node server.js, kindling injects a restart-loop wrapper. If it runs a compiled Go binary, kindling cross-compiles locally and syncs the binary.
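The lookup can be pictured like this. The sketch below parses the NUL-separated contents of /proc/1/cmdline and consults a small table; the table here is a tiny illustrative subset of kindling's 30+ runtime profiles, and the strategy names are ours, not the project's.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// strategyFor picks a restart strategy from the raw contents of
// /proc/1/cmdline, which separates argv entries with NUL bytes.
func strategyFor(cmdline []byte) string {
	argv := strings.Split(strings.TrimRight(string(cmdline), "\x00"), "\x00")
	if len(argv) == 0 || argv[0] == "" {
		return "unknown"
	}
	table := map[string]string{
		"uvicorn":  "signal-reload", // SIGHUP, no wrapper
		"gunicorn": "signal-reload",
		"node":     "wrapper-kill", // restart-loop wrapper
		"ruby":     "wrapper-kill",
		"php-fpm":  "auto-reload", // runtime watches files itself
	}
	name := filepath.Base(argv[0]) // "/usr/local/bin/uvicorn" -> "uvicorn"
	if s, ok := table[name]; ok {
		return s
	}
	return "local-build" // assume a compiled binary otherwise
}

func main() {
	fmt.Println(strategyFor([]byte("/usr/local/bin/uvicorn\x00app:app\x00")))
	fmt.Println(strategyFor([]byte("node\x00server.js\x00")))
}
```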
Four Restart Modes
The sync engine implements four distinct restart strategies:
Signal reload — For servers that support graceful reload (uvicorn, gunicorn, Puma, nginx, Apache). Kindling sends SIGHUP to PID 1. No wrapper injection, no downtime, no container restart. This is the lightest path.
Wrapper + kill — For interpreted runtimes (Node.js, Python, Ruby, Deno, Bun, Elixir, Perl). Kindling patches the deployment to wrap the original command in a restart loop (while true; do <original cmd>; sleep 1; done), then kills the child process after each sync. The loop respawns it with the updated code. The deployment spec records the pre-patch revision for rollback.
Local build + binary sync — For compiled languages (Go, Rust, Java/Gradle, C#, C/C++, Zig). Kindling queries the Kind node's CPU architecture, cross-compiles on your host machine, and syncs the resulting binary into the container. For Go, the default command is:
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o /tmp/kindling-build/<binary> .
The architecture is detected automatically — no manual GOARCH flag needed.
Auto-reload — For runtimes that already watch for file changes (PHP with mod_php/php-fpm, nodemon). Files are synced and the runtime picks them up. No restart signal needed.
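The wrapper + kill mode's command rewrite can be sketched directly from the loop quoted above. This is an illustration of the shape of the patch, assuming the original command is joined naively; kindling's real implementation presumably handles quoting and argument escaping more carefully.

```go
package main

import (
	"fmt"
	"strings"
)

// wrapCommand rewrites a container's original command into the restart
// loop described above: each sync kills the child process, and the loop
// respawns it with the freshly synced code.
func wrapCommand(original []string) []string {
	loop := fmt.Sprintf("while true; do %s; sleep 1; done", strings.Join(original, " "))
	return []string{"/bin/sh", "-c", loop}
}

func main() {
	fmt.Println(wrapCommand([]string{"node", "server.js"}))
	// → [/bin/sh -c while true; do node server.js; sleep 1; done]
}
```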
Automatic Rollback
When you stop a sync session — Ctrl+C from the CLI or the Stop button in the dashboard — kindling restores the deployment to its pre-sync state. If a wrapper was injected, it performs a kubectl rollout undo to the saved revision. If only files were synced (signal-reload servers), it does a rollout restart to get a fresh pod from the original image. Either way, the container goes back to exactly where it was.
This matters because sync is a dev-time overlay, not a deployment mechanism. You shouldn't have to remember to clean up patched deployments.
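The two rollback cases reduce to a simple decision. The sketch below builds the equivalent kubectl invocations (both are standard kubectl commands); the function signature and session fields are illustrative, not kindling's internals.

```go
package main

import "fmt"

// rollbackArgs picks the cleanup action for a finished sync session,
// following the two cases described above.
func rollbackArgs(deployment string, wrapperInjected bool, savedRevision int) []string {
	if wrapperInjected {
		// Undo the command patch by rolling back to the recorded revision.
		return []string{"kubectl", "rollout", "undo",
			fmt.Sprintf("deployment/%s", deployment),
			fmt.Sprintf("--to-revision=%d", savedRevision)}
	}
	// Only files were synced: a restart gives a fresh pod from the original image.
	return []string{"kubectl", "rollout", "restart",
		fmt.Sprintf("deployment/%s", deployment)}
}

func main() {
	fmt.Println(rollbackArgs("my-api", true, 4))
}
```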
Load: Full Rebuild Without the CI Round-Trip
Sync covers most iteration, but sometimes you need a real container build — you changed a dependency, updated a Dockerfile, or you're working with a language the sync engine doesn't have a runtime profile for yet. That's what Load does.
The pipeline is three steps, all local:
- docker build — builds the image on your machine using the project's Dockerfile, with an optional --platform for cross-arch (e.g., linux/arm64 on an Apple Silicon host)
- kind load docker-image — loads the built image directly into the Kind cluster's containerd store, no registry push needed
- Patch + rollout — updates the DevStagingEnvironment (DSE) or Deployment image reference and waits for the rolling update to complete
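The three steps above can be sketched as the commands they boil down to. The docker and kind invocations are standard; the final step uses kubectl set image as a stand-in for the operator's patch, and the service and image names are placeholders.

```go
package main

import "fmt"

// loadSteps returns the three local commands behind Load, in order.
func loadSteps(image, dir, platform, cluster string) [][]string {
	build := []string{"docker", "build", "-t", image}
	if platform != "" {
		build = append(build, "--platform", platform) // e.g. linux/arm64
	}
	build = append(build, dir)
	return [][]string{
		build,
		// Straight into the Kind node's containerd store, no registry push.
		{"kind", "load", "docker-image", image, "--name", cluster},
		// Stand-in for the operator's image patch + rollout.
		{"kubectl", "set", "image", "deployment/my-api", "my-api=" + image},
	}
}

func main() {
	for _, step := range loadSteps("my-api:dev", "./services/my-api", "linux/arm64", "kindling") {
		fmt.Println(step)
	}
}
```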
From the CLI, this is kindling push (which wraps git push and triggers the full CI pipeline) or, for a faster local-only path, the Load button in the dashboard. The dashboard's Load modal auto-discovers your project's service directories and Dockerfiles, so you pick a service and click build.
The key difference from sync: Load produces a real container image. The deployment runs your updated Dockerfile, installs new dependencies, and goes through the normal Kubernetes rollout. It takes 15–60 seconds instead of sub-second, but it's a complete build — and it's still entirely local, no CI minutes consumed.
For languages where sync doesn't yet have a runtime profile, Load is the inner loop. Edit code, click Load, wait for the rollout. It's slower than sync but still faster than pushing to a remote CI system.
The Dashboard
v0.6.0 also ships a built-in web dashboard (kindling dashboard) — a single-binary React app embedded in the CLI via go:embed. It shows your cluster state and provides one-click access to the inner loop:
- Each deployed service shows its detected runtime as a badge (Node.js, Python, Go, nginx, etc.)
- Sync button opens a modal pre-filled with the detected runtime and source directory, and starts a sync session via the API
- Load button triggers a full rebuild: docker build → kind load → rollout restart
- Live sync status (file count, duration) is visible on each card while a session is active
The dashboard talks to the same APIs as the CLI. Everything you can do in the terminal, you can do from the browser.
kindling dashboard --port 9090
What Kindling Actually Is
Kindling is a Kubernetes operator and CLI that gives you a complete dev environment on a local Kind cluster. The full system has two loops:
Outer loop (CI): git push triggers a real GitHub Actions workflow. The job runs on a self-hosted runner inside your Kind cluster. Kaniko builds the image (no Docker daemon), pushes to an in-cluster registry, and the kindling operator reconciles a DevStagingEnvironment CR into a running Deployment with Service, Ingress, and auto-provisioned dependencies (Postgres, Redis, MongoDB, Kafka, RabbitMQ, and others — 15 dependency types).
Inner loop (dev): kindling sync watches local files, syncs them into the running container, and restarts the process. Sub-second feedback without rebuilding an image. When you're done iterating, stop sync and the deployment rolls back.
The inner loop runs inside the outer loop. You push code to get a real staging environment, then use sync to iterate on it without waiting for builds. When you're satisfied, push again and the CI pipeline produces the real artifact.
Technical Details
A few implementation notes for anyone reading the source:
- File watching uses fsnotify with configurable debounce. Changes are batched and synced as a group to avoid thrashing the container with rapid saves.
- Runtime detection parses /proc/1/cmdline via kubectl exec. The process name is matched against a lookup table, with fallback heuristics for bare interpreters (e.g., python3 running a gunicorn module gets detected as gunicorn by scanning the full command line for known framework entry points).
- Architecture detection for compiled-language cross-compilation reads the Kind node's architecture via kubectl get node -o jsonpath='{.status.nodeInfo.architecture}'.
- Distroless containers — containers without a shell — are handled by falling back to the Kubernetes API's tar-stream endpoint instead of kubectl cp, which requires tar in the container.
- Wrapper injection patches the deployment spec's container command. The original command and revision number are recorded so rollback is exact, not just "undo the last change."
- Default excludes skip .git, node_modules, __pycache__, vendor, dist, .next, build outputs, and other artifacts. Additional patterns can be added with --exclude.
- The dashboard frontend is a TypeScript/React SPA built with Vite, compiled to static assets, and embedded into the Go binary at build time. No separate process, no npm at runtime.
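Since Kubernetes already reports GOARCH-style architecture names in nodeInfo, the mapping from the detected node architecture to a Go cross-compile environment is mostly pass-through. A sketch, with a function name and supported-arch list of our own choosing:

```go
package main

import "fmt"

// goEnvForNode maps a Kind node's reported architecture (the value of
// .status.nodeInfo.architecture) to the env vars for a Go cross-compile,
// matching the build command shown earlier.
func goEnvForNode(nodeArch string) ([]string, error) {
	switch nodeArch {
	case "amd64", "arm64", "arm", "ppc64le", "s390x":
		return []string{"CGO_ENABLED=0", "GOOS=linux", "GOARCH=" + nodeArch}, nil
	default:
		return nil, fmt.Errorf("unsupported node architecture: %q", nodeArch)
	}
}

func main() {
	env, _ := goEnvForNode("arm64")
	fmt.Println(env)
	// → [CGO_ENABLED=0 GOOS=linux GOARCH=arm64]
}
```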
Getting Started
brew install kindling-sh/tap/kindling
kindling init
kindling deploy -f dev-environment.yaml
kindling sync -d my-service --restart
Or open the dashboard and click Sync.
The getting started guide walks through the full setup, from cluster bootstrap to inner-loop iteration.
Contributing
Kindling is Apache 2.0 licensed and actively looking for contributors and early adopters. The codebase is Go (operator + CLI) and TypeScript (dashboard), built with Kubebuilder v4. Current areas where contributions would have the most impact:
- Runtime profiles — adding detection and restart strategies for languages/frameworks not yet in the table (Scala, Haskell, OCaml, Swift, etc.)
- Sync engine hardening — edge cases around symlinks, large binary files, permission preservation across different container base images
- Dashboard features — log streaming, resource usage visualization, multi-environment views
- Testing — expanding e2e coverage across different project types and cluster configurations
- Documentation — tutorials, example projects, integration guides for specific frameworks
If you're building microservices on Kubernetes and tired of waiting for builds, try it out. File issues, open PRs, or just tell us what breaks.
GitHub: github.com/kindling-sh/kindling
Docs: kindling-sh.github.io/kindling
License: Apache 2.0