This is a follow-up to How I Keep a Kubernetes CLI Lean. You don't need to read that first, but it gives context for what k3d-manager is.
The Part I Left Out
In the first article I described k3d-manager as having a provider abstraction — where the same commands work whether you're running k3d on macOS or k3s on Linux. I made it sound like a design decision.
It wasn't. It was a reaction.
Here's what actually happened.
How It Started: k3d on macOS
I started building k3d-manager entirely on macOS, using k3d — Kubernetes in Docker. Fast feedback loop, no VM overhead, everything running locally. That's where the dispatcher pattern, the lazy-loading plugin system, and the _run_command wrapper all came together.
At that point the code had no abstraction for cluster providers. There was one cluster type: k3d. It was baked in everywhere.
That worked fine — until it didn't.
The Wall: k3s Only Runs on Linux
k3s is a production-grade Kubernetes distribution. It's lightweight, single-binary, great for bare metal and cloud instances. It's also Linux-only. It requires systemd. It doesn't run on macOS, not even in a container.
As k3d-manager matured, I wanted to validate that it would work in environments beyond a local Mac. Real deployments — cloud instances, on-premise servers, edge devices — mostly run Linux. If the tool only worked on macOS with Docker, it wasn't actually portable.
So I set up an Ubuntu VM in Parallels Desktop on my M2 Mac and tried to run k3d-manager there with k3s instead of k3d.
It didn't work. Not because the logic was wrong, but because the cluster provider behavior was hardcoded throughout. Every function that touched cluster lifecycle — create_cluster, destroy_cluster, get_kubeconfig — did it the k3d way. There was no concept of "how do you do this on k3s."
The Constraint Forced the Question
At this point I had two obvious options:
- Copy-paste the cluster functions and add `if [[ $PROVIDER == "k3s" ]]` branches everywhere
- Find a cleaner way to handle it
Option 1 is how most shell projects end up unmaintainable. I'd already been through that refactoring once early on — pulling responsibilities out of the main file into a proper lib/ directory. I didn't want to undo that discipline.
So I asked Claude.
What Claude Actually Contributed
This is the honest version of "AI-assisted development" that I think is worth documenting.
I described the problem: I have a codebase built around k3d. I need to support k3s. The two providers behave differently at the cluster level but the rest of the tooling — Vault, Jenkins, LDAP, Istio — should be the same regardless. What's the right pattern?
Claude suggested a provider abstraction: a defined interface that each provider implements, selected at runtime by an environment variable. Each provider lives in its own file — scripts/lib/providers/k3d.sh, scripts/lib/providers/k3s.sh — and implements the same set of functions. The dispatcher sources the right file based on CLUSTER_PROVIDER. Consumer code calls the interface, never the provider directly.
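The dispatch itself is simple enough to sketch. This is an illustrative reconstruction, not the repo's exact code (the real interface lives in scripts/lib/cluster_provider.sh); the function names inside the provider files are assumptions based on the lifecycle functions named above.

```shell
#!/usr/bin/env bash
# Sketch of the provider dispatch: each provider file implements the
# same set of functions (create_cluster, destroy_cluster,
# get_kubeconfig), and the dispatcher sources exactly one of them
# based on CLUSTER_PROVIDER. Consumers only ever call the interface.

PROVIDER_DIR="${PROVIDER_DIR:-scripts/lib/providers}"

load_cluster_provider() {
  local provider="${CLUSTER_PROVIDER:-k3d}"
  local impl="${PROVIDER_DIR}/${provider}.sh"

  if [[ ! -f "$impl" ]]; then
    echo "unknown cluster provider: ${provider}" >&2
    return 1
  fi

  # Sourcing the file replaces any need for per-call branching:
  # after this, create_cluster *is* the k3d or k3s implementation.
  # shellcheck source=/dev/null
  source "$impl"
}
```

After `load_cluster_provider` runs, calling `create_cluster` anywhere in the codebase hits the selected provider, with no `if k3d / if k3s` branches in consumer code.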
Not a new idea. This is a standard strategy pattern. But I hadn't named it that, and I wasn't going to arrive at it on my own while staring at a shell script that needed fixing.
That's the contribution: Claude supplied the vocabulary and the structure for something I needed but hadn't articulated. I had the domain problem. Claude had the pattern vocabulary.
What I Contributed
The pattern was a suggestion. Making it work was still my job.
Implementing it meant:
- Deciding which functions belonged in the interface (cluster lifecycle only — not secrets, not services)
- Working out how k3s's systemd-based lifecycle differs from k3d's Docker-based one
- Handling kubeconfig differently on k3s (lives at
/etc/rancher/k3s/k3s.yaml, needs sudo) - Making
_run_commandhandle the elevated permissions k3s requires without leaking into the rest of the codebase - Testing it on real Ubuntu — not mocked, not assumed, actually run on the VM
None of that came from Claude. It came from knowing what k3s actually does on a Linux system, what systemd expects, what breaks when you get it wrong.
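The elevation point is worth one concrete sketch. This is an assumed shape, not the repo's actual _run_command, which is more involved; it shows how sudo can stay behind the wrapper so the rest of the codebase never mentions it.

```shell
#!/usr/bin/env bash
# Illustrative sketch: the wrapper elevates only when the active
# provider has asked for it (NEED_SUDO is a hypothetical flag a
# provider like k3s would set), and only when not already root.
# Consumer code just calls _run_command and stays sudo-free.

_run_command() {
  if [[ "${NEED_SUDO:-0}" == "1" && "$(id -u)" -ne 0 ]]; then
    sudo "$@"
  else
    "$@"
  fi
}
```

With this shape, the k3s provider can set its flag once at load time, and a call like `_run_command cat /etc/rancher/k3s/k3s.yaml` gets elevation without the caller knowing or caring.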
The Same Pattern Appeared Again
Once the provider abstraction existed for cluster providers, the same problem appeared for directory services.
Jenkins can authenticate against standard OpenLDAP, against OpenLDAP with an AD-compatible schema (for local AD testing), or against a real enterprise Active Directory. Each behaves differently enough that sharing code is error-prone. But from Jenkins's perspective, they're all "a directory service."
Same solution: DIRECTORY_SERVICE_PROVIDER environment variable, interface defined in scripts/lib/dirservices/, each provider in its own file. Same pattern, different domain.
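The template transfers almost verbatim. A minimal sketch, with the default provider name and the interface function invented for illustration (only the environment variable and the scripts/lib/dirservices/ path come from the article):

```shell
#!/usr/bin/env bash
# Same dispatch shape, different domain: one file per directory
# service, all implementing the same interface, selected by
# DIRECTORY_SERVICE_PROVIDER. From Jenkins's point of view they
# are all just "a directory service".

load_directory_service() {
  local dir="${DIRSERVICE_DIR:-scripts/lib/dirservices}"
  local provider="${DIRECTORY_SERVICE_PROVIDER:-openldap}"

  # shellcheck source=/dev/null
  source "${dir}/${provider}.sh"   # defines the shared interface
}
```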
I didn't ask Claude again. I already had the template.
That's how patterns actually transfer — not by reading about them, but by having solved the problem once and recognising the shape when it appears again.
OrbStack: A Third Provider, Mid-Flight
The pattern appeared a third time when OrbStack entered the picture.
OrbStack is a macOS-native Docker and Linux VM runtime — faster than Docker Desktop, lighter on resources, and better integrated with macOS networking. When I started validating k3d-manager on an M4 Mac, OrbStack was the obvious runtime choice over Docker.
The problem was familiar: OrbStack uses its own Docker context. The existing k3d provider assumed the default Docker context. Running k3d against OrbStack required switching contexts — something that needed to be handled transparently, not manually every time.
Same solution: CLUSTER_PROVIDER=orbstack in scripts/lib/providers/orbstack.sh. It wraps the k3d provider with OrbStack's Docker context, auto-detected via orb status. From the consumer's perspective — Vault, Jenkins, Istio, everything above — nothing changes.
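The wrapping can be sketched in a few lines. The `orb status` check is the detection mechanism the provider uses; the "orbstack" Docker context name and the delegated function names are assumptions here, not quotes from the repo:

```shell
#!/usr/bin/env bash
# Illustrative sketch: the OrbStack provider is mostly the k3d
# provider plus a context switch. Detect that OrbStack is present
# and running, point Docker at its context, then delegate.

orbstack_ensure_context() {
  if ! command -v orb >/dev/null 2>&1 || ! orb status >/dev/null 2>&1; then
    echo "OrbStack is not installed or not running" >&2
    return 1
  fi
  docker context use orbstack >/dev/null
}

orbstack_create_cluster() {
  # Everything above the context switch is plain k3d.
  orbstack_ensure_context && k3d_create_cluster "$@"
}
```

Because the delegation happens inside the provider, the consumer-facing interface is unchanged: selecting `CLUSTER_PROVIDER=orbstack` is the only difference a user sees.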
As of v0.2.0, OrbStack support is validated on both M4 and M2 Macs — full stack: cluster creation, Vault, Jenkins, Istio, smoke tests all green. It's also the runtime for the Stage 2 CI job that runs on every PR. Phase 3 — using OrbStack's native Kubernetes instead of k3d entirely — is still future work, but the current provider is solid.
That's a normal state for a tool built this way. The abstraction makes it possible to ship validated support incrementally without breaking anything else.
What This Says About How AI Contributes
I've been thinking about how to describe what Claude actually did on this project, because the common framings are both wrong.
"AI wrote my code" — no. The code came from domain knowledge I built over years. Claude doesn't know why k3s kubeconfig needs sudo, or what systemd expects, or why the LDAP bind DN has to match the bootstrap LDIF exactly. I know those things because I've broken them.
"AI just helped with boilerplate" — also no. The provider pattern suggestion wasn't boilerplate. It was the structural insight that prevented the codebase from turning into a mess of conditionals.
The accurate framing is something like: AI contributed the right abstraction at the right moment, because I had articulated the right problem.
That last part matters. If I had gone to Claude with "make this work for k3s," I'd have gotten something that worked for k3s, badly. I went with "here are two providers that behave differently at the cluster level, I need to support both without branching everywhere" — and that question had a clean answer.
Knowing what question to ask is not something the AI provided. That came from the same place as the domain knowledge: experience watching codebases go wrong.
The Practical Upshot
k3d-manager now runs on:
- macOS with k3d (Docker-based, original development environment)
- macOS with OrbStack (faster, lighter — validated on M4 and M2, Stage 2 CI runs here)
- Linux with k3s (systemd-based, validated on Ubuntu in Parallels)
- Any Linux VM or cloud instance where k3s runs
Same commands. Same plugins. Same Vault, Jenkins, LDAP, Istio setup. The provider abstraction is invisible to everything above it.
That's not a small thing for a local dev tool. It means the same automation that runs on your laptop also covers a cheap cloud VPS, an on-premise bare metal cluster, or a developer VM. The gap between local and production-like is much smaller than it looks.
And it came from hitting a wall, asking the right question, and having someone — or something — supply the pattern that fit.
Repo
wilddog64/k3d-manager — open, Apache 2.0. The provider abstraction is in scripts/lib/cluster_provider.sh and scripts/lib/providers/ if you want to see what it looks like in practice.
Top comments (2)
The honesty about the provider pattern being a reaction to a constraint rather than upfront design resonates — most of my cleanest abstractions came from hitting a wall, not from planning sessions. The shell provider dispatch approach is surprisingly elegant for bash.
Exactly this. The wall is the design session — you just don't know it at the time. Glad the shell provider approach read as elegant rather than "why is this in bash."