AI coding agents will write anything you ask them to. The hard part isn't getting them to generate code; it's getting them to generate the same kind of code your team would actually ship. Every time. Without supervision.
That gap closes when you stop tuning prompts and start engineering a harness: the rules, checks, agents, and procedures that keep the agent on the rails. The model alone won't get you there. The harness is what makes the agent dependable.
I built bridle (a Claude Code plugin) to scaffold and maintain that harness in any codebase.
Install
First, install the Claude plugin:

```
/plugin marketplace add tacoda/tacoda-marketplace
/plugin install bridle@tacoda-marketplace
```
Then in any project:

```
/bridle:generate-harness
```
That writes CLAUDE.md and .claude/ (rules, agents, skills, commands), inspects your codebase, and proposes values for every placeholder. Existing files are never silently overwritten: you see a diff and decide.
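For orientation, the scaffold looks something like this (an illustrative layout, not bridle's exact output; the comments describe how each directory maps to the pillars below):

```
CLAUDE.md        # project-level guidance the agent reads first
.claude/
  rules/         # one markdown rule per convention
  agents/        # specialized sub-agent definitions
  skills/        # reusable procedures the agent can invoke
  commands/      # slash commands for the harness lifecycle
```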
The five pillars
Bridle organizes everything around the parts of a working harness:
| Pillar | What it does |
|---|---|
| guidance | CLAUDE.md and rules in .claude/rules/ — what to write before writing |
| guardrails | Lint, tests, type-checks the agent runs and won't bypass |
| flywheel | Reviews update rules; rules shape the next conversation |
| workflows | Agents, commands, skills — institutional knowledge as runnable procedures |
| discipline | Practices that keep the others fresh — debt maps, boy-scout rule, harness health |
Twenty-one slash commands cover the lifecycle. /bridle:learn turns a review comment into a rule. /bridle:audit walks the entire codebase looking for rule violations. /bridle:harness-health shows which rules are stale, where the gaps are, and when you last fed the flywheel.
The flywheel is the part most teams skip. Reviews catch issues; nobody updates the rules; the next conversation makes the same mistakes. /bridle:learn is one command that closes that loop — feedback in, rule out, harness sharper.
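To make that loop concrete, here is the kind of rule a review comment might become. This is a hypothetical file: the path, title, and format are illustrative, not bridle's actual output.

```markdown
<!-- .claude/rules/error-handling.md (hypothetical example) -->
# Wrap external calls in domain errors

Source: review feedback on a PR where raw client exceptions leaked
into request handlers.

- Never let HTTP client exceptions cross a service boundary.
- Wrap them in a domain error that names the operation that failed,
  so callers catch one type instead of a client library's hierarchy.
```

Once a rule like this exists in .claude/rules/, it is loaded into every subsequent conversation, which is what keeps the same mistake from recurring.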
Why a harness, and why now
The case for harness engineering is showing up everywhere.
Chris Parsons has the clearest practitioner writing on actually coding with AI well, not just turning it on.
Martin Fowler's site has been tracking the same pattern under the name harness engineering.
And there's a recent talk on building harnesses with sensors, what I've been calling guardrails, that catch the agent before it ships something wrong.
If you've read any of those and wondered, "OK, but how do I actually set this up in a real codebase?", that's what bridle does.
How it grew
Bridle replaces sellier, my Python CLI predecessor. The deliverable is the same — a markdown harness — but the install changed:
- sellier: `pip install sellier && sellier init`
- bridle: one `/plugin install` command, no language runtime, no version skew.
The harness templates live inside the plugin and get copied into your project on demand. I've been writing about the idea as I go, including an earlier entry that maps most directly to bridle.
Try it
```
/plugin marketplace add tacoda/tacoda-marketplace
/plugin install bridle@tacoda-marketplace
/bridle:generate-harness
```
Five minutes from zero to a project-aware harness. From there you build the flywheel: every review becomes a rule, every rule shapes the next conversation, and the agent gets sharper with each pass.
