In the previous article we set up a Kubernetes home lab accessible over the internet with k3s. In this article, we’ll build a fast inner development loop on top of that environment.
Prerequisites
- Working Kubernetes environment (see previous setup)
- SSH access to the home server
- Basic kubectl and Helm knowledge
Container registry for development
First, let’s set up our environment to build and push container images. Assume a k3s cluster is running on a home server with a private container registry inside the cluster.
When you’re outside your home network, Cloudflare Tunnel can expose most services, but it’s a poor fit for a container registry: Cloudflare Tunnel limits upload sizes, so you can’t push large image layers into your local network through it. For public distribution, we have to use a hosted registry. For development, though, we can still push to the private in-cluster registry by setting up a remote Docker context.
In the previous article, we set up SSH access to the home network. We’ll use it to talk to the Docker daemon on the home server. Example `~/.ssh/config` (replace `buun.dev` with your hostname):

```
Host buun.dev
  Hostname ssh.buun.dev
  ProxyCommand /opt/homebrew/bin/cloudflared access ssh --hostname %h
```
Set `DOCKER_HOST=ssh://buun.dev` so the Docker CLI talks to the remote engine over SSH instead of your local daemon. Behind the scenes, the CLI opens an SSH session to `buun.dev` and tunnels the Docker Engine API through it. From the CLI’s point of view, `docker build`, `docker push`, and `docker run` all hit the remote engine. Bonus: offloading builds saves laptop battery.
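If you switch between local and remote engines often, a named Docker context is more convenient than exporting the variable in every shell. A sketch (`home` is an arbitrary context name; `buun.dev` matches the SSH config above):

```shell
# Point a named context at the remote engine over SSH
REMOTE_HOST=ssh://buun.dev
docker context create home --docker "host=$REMOTE_HOST"

# Activate it; docker build/push/run now hit the remote daemon
docker context use home

# Switch back to the local engine when done
docker context use default
```

Contexts persist across shells, so there is no variable to remember to export.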
Tag images with the prefix `localhost:30500/`. That address works on the home server both inside and outside the cluster. Pushes happen from the remote host’s network perspective; if that host can reach the registry, `docker push` succeeds even when your laptop can’t reach it directly.
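Since the full reference appears in several commands, it can help to assemble it once in the shell. The names here match the sample app used below:

```shell
# Build the full image reference from its parts
REGISTRY=localhost:30500
REPO=buun-ch/sample-web-app
TAG=latest
IMAGE="$REGISTRY/$REPO:$TAG"
echo "$IMAGE"   # localhost:30500/buun-ch/sample-web-app:latest
```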
Build, Push, and Deploy a Sample App
Let’s try it with sample-web-app, a simple Next.js app that uses Drizzle ORM to connect to PostgreSQL.
```shell
git clone https://github.com/buun-ch/sample-web-app
cd sample-web-app

export DOCKER_HOST=ssh://buun.dev
docker build -t localhost:30500/buun-ch/sample-web-app:latest .
docker push localhost:30500/buun-ch/sample-web-app:latest
```
These commands build the Docker image and push it to the private registry. As mentioned earlier, the `localhost:30500/...` tag is reachable inside and outside the cluster.
Next, create the database and user. If you’re using buun‑stack:
```shell
cd /path/to/buun-stack
just postgres::create-user-and-db
```
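If you’re not on buun-stack, the equivalent can be done by hand with psql. A sketch: it assumes network access to the in-cluster PostgreSQL (for example via Telepresence, covered later) and a superuser role; the names match the `DATABASE_URL` used in this article.

```shell
# Create the app user and database; names match DATABASE_URL below
DB_USER=todo
DB_PASS=todopass
DB_NAME=todo

psql -h postgres-cluster-rw.postgres -U postgres <<SQL
CREATE USER ${DB_USER} WITH PASSWORD '${DB_PASS}';
CREATE DATABASE ${DB_NAME} OWNER ${DB_USER};
SQL
```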
Create `values.yaml` for development:
```yaml
image:
  imageRegistry: localhost:30500/buun-ch
  repository: sample-web-app
  tag: latest
  pullPolicy: Always
env:
  - name: DATABASE_URL
    value: postgresql://todo:todopass@postgres-cluster-rw.postgres:5432/todo
migration:
  enabled: true
  env:
    - name: DATABASE_URL
      value: postgresql://todo:todopass@postgres-cluster-rw.postgres:5432/todo
```
Then deploy the Helm chart:
```shell
kubectl create namespace sample
helm upgrade --install sample-web-app ./charts/sample-web-app -n sample --wait -f values.yaml
```
Check the release status:
```shell
kubectl get all -n sample
```
Telepresence: in‑cluster DNS and routing without port‑forwarding
You could do a quick `kubectl port-forward` and open the app, but redoing that after every deploy is tedious, and it doesn’t integrate DNS. Telepresence lets a local process join the cluster network as if it were running inside Kubernetes. Once connected, your laptop resolves service names and routes traffic through Telepresence, so you can hit services by DNS name directly.
First-time setup installs the Traffic Manager; after that, connecting is a single command:

```shell
telepresence helm install   # one-time setup
telepresence connect
```
Now you can reach services by DNS (e.g., `<service>.<namespace>`) directly from your laptop, and GUI tools like DbGate work out of the box. When you’re done, disconnect:

```shell
telepresence quit
```
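To see the effect, hit the sample app deployed earlier by its in-cluster DNS name while connected. The port is an assumption here (3000, the app’s container port); adjust it to your chart’s service port:

```shell
# In-cluster DNS name follows <service>.<namespace>
SERVICE_URL=http://sample-web-app.sample:3000
curl -s "$SERVICE_URL" >/dev/null && echo "reachable via cluster DNS"
```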
Dev orchestrators: Skaffold, Tilt, DevSpace
Now that Telepresence takes care of networking and DNS, the next bottleneck is the inner dev loop: build, tag, push, and roll out changes—over and over again across multiple services.
What we want in an inner dev loop:
- Fast incremental rebuilds: watch source and rebuild only what changed
- Instant live sync: edit locally and update containers immediately
- Unified feedback: logs and (optional) forwards in one place
Skaffold, Tilt, and DevSpace all deliver this. Here are minimal (illustrative) configs:
Skaffold
```yaml
apiVersion: skaffold/v4beta6
kind: Config
metadata:
  name: sample-web-app
build:
  artifacts:
    - image: localhost:30500/sample-web-app
      context: ./apps/sample-web-app
deploy:
  helm:
    releases:
      - name: sample-web-app
        chartPath: charts/sample-web-app
        valuesFiles:
          - values.dev.yaml
```
Tilt
```python
# Tiltfile (Starlark)
docker_build(
    'localhost:30500/sample-web-app',
    './apps/sample-web-app',
    # Optional but fast: sync edits and run commands in the container
    live_update=[
        sync('./apps/sample-web-app', '/app'),
        run('npm install', trigger=['package.json', 'package-lock.json']),
    ],
)
```
DevSpace
```yaml
version: v2beta1
name: sample-web-app
images:
  app:
    image: localhost:30500/sample-web-app
    context: ./apps/sample-web-app
deployments:
  app:
    helm:
      chart:
        path: charts/sample-web-app
      values:
        image:
          repository: localhost:30500/sample-web-app
          tag: dev
dev:
  app:
    imageSelector: localhost:30500/sample-web-app
    sync:
      - path: ./apps/sample-web-app:/app
    ports:
      - port: 3000
        forward: 3000
```
Note: These snippets are examples only — not runnable as‑is.
Here is the GitHub star history of these tools:
Each tool has its own strengths and suits different workflows and preferences. I recommend Tilt for its flexibility and ease of use.
Tiltfile overview (sample web app)
Here’s the Tiltfile for sample-web-app:
```python
allow_k8s_contexts(k8s_context())

config.define_string('registry')
config.define_bool('port-forward')
config.define_string('extra-values-file')
config.define_bool('enable-health-logs')
cfg = config.parse()

registry = cfg.get('registry', 'localhost:30500')
default_registry(registry)

docker_build(
    'sample-web-app-dev',
    '.',
    dockerfile='Dockerfile.dev',
    live_update=[
        sync('.', '/app'),
        run('pnpm install', trigger=['./package.json', './pnpm-lock.yaml']),
    ]
)

values_files = ['./charts/sample-web-app/values-dev.yaml']
extra_values_file = cfg.get('extra-values-file', '')
if extra_values_file:
    values_files.append(extra_values_file)
    print("📝 Using extra values file: " + extra_values_file)

helm_set_values = []
enable_health_logs = cfg.get('enable-health-logs', False)
if enable_health_logs:
    helm_set_values.append('logging.health_request=true')
    print("📵 Health check request logs enabled")

helm_release = helm(
    './charts/sample-web-app',
    name='sample-web-app',
    values=values_files,
    set=helm_set_values,
)
k8s_yaml(helm_release)

enable_port_forwards = cfg.get('port-forward', False)
k8s_resource(
    'sample-web-app',
    port_forwards='13000:3000' if enable_port_forwards else [],
)
if enable_port_forwards:
    print("🚀 Access your application at: http://localhost:13000")
```
In short, the Tiltfile:

- Defines a few custom command-line args (e.g., default container registry).
- Builds the image.
- Deploys with Helm. Values differ by environment and are driven by the args defined above.
Tilt in action
Let’s run `tilt up` and see it in action. The CLI prompts you to press space to open Tilt’s web UI in the browser; it shows the status of Kubernetes resources.
- Select the `sample-web-app` resource. The UI shows logs for `docker build`, `docker push`, and the Helm release.
- In this demo, `docker build` completes instantly because no code changed since the last build.
- The app deploys. Open it in the browser to verify it’s reachable.
- Live Update: change the title to “ToDo App Test” and save. The browser updates immediately—no manual steps.
- Modify the Dockerfile. Tilt detects the change and rebuilds only the invalidated layer. When the push finishes, Tilt redeploys automatically.
- Update Helm values. Tilt detects the change, re‑renders the manifests, and reapplies them. When the rollout finishes, the new settings are live.
Kubernetes CLI productivity tips
These tips aim to reduce keystrokes, make output easier to scan, and centralize feedback while you iterate.
Autocompletion
Tab completion speeds up navigation and prevents typos for resource kinds and names. If you are using zsh, add the following to your shell init so it’s always available:
```shell
autoload -Uz compinit
compinit
eval "$(kubectl completion zsh)"
```
Usage: start typing a command and press Tab to complete names, e.g., `kubectl get pod <TAB>`.
kubecolor + aliases
Colorized output makes table scanning faster. Alias `kubectl` to `kubecolor`, and remap completion so Tab still works.
```shell
alias kubectl=kubecolor
compdef kubecolor=kubectl
alias k=kubecolor
compdef k=kubectl
```
The short alias `k` reduces keystrokes for frequent use.
zsh Global aliases for output
zsh global aliases expand anywhere on the line and improve readability of kubectl output:
```shell
# Pretty YAML via bat
alias -g Y='-o yaml | bat -l yaml'
# Wide output
alias -g W='-o wide'
# YAML into a read-only Neovim buffer
alias -g YE='-o yaml | nvim -c ":set ft=yaml" -R'
```
```shell
# Normal (without alias)
kubectl get deploy sample-web-app -o yaml | bat -l yaml
# Print manifest YAML with syntax highlight
kubectl get deploy sample-web-app Y
# Wide columns
kubectl get pods W
```
Watch with viddy
Dashboards like KDash and k9s are great, but I generally stick to kubectl plus viddy for quick loops. `viddy` re-runs a command, highlights changes in the output, and lets you choose a past timestamp to view that run’s output.

I use this alias to pair `viddy` with kubectl:

```shell
alias vk='viddy -dtw kubectl'
```
Context/namespace helpers
Use kubectx and kubens to switch contexts and namespaces quickly.
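Both are one-word commands; typical usage looks like this (`home-lab` is an example context name):

```shell
NS=sample
kubectx           # list available contexts
kubectx home-lab  # switch to the "home-lab" context
kubectx -         # jump back to the previous context
kubens "$NS"      # make "sample" the default namespace
kubens -          # back to the previous namespace
```

The `-` shorthand for "previous" makes bouncing between two environments painless.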
Multi‑pod logs with stern
stern is a Kubernetes log tailer that streams logs from multiple pods (and containers) at once. It supports label selectors and regex filters, works across namespaces, and color-codes each stream so you can follow deployments and incidents in real time.
```shell
# Tail by label in a namespace
stern -n default -l app=sample-web-app
# Tail multiple apps with a regex and include timestamps
stern -n default 'web|api' -t
# Tail the last 5 minutes, raw lines
stern -n default -l app=sample-web-app --since 5m -o raw
# Focus on a specific container within each pod
stern -n default -l app=sample-web-app -c web
```
Wrap‑up
We built a smooth inner dev loop for a Kubernetes development environment:
- Remote Docker over SSH to push to the private registry
- Telepresence for DNS and routing without port‑forwarding
- Tilt for watching, incremental rebuilds, and live sync with clear feedback
These practices make Kubernetes development faster and less error‑prone.