# We Built Martin Fowler's AI Feedback Flywheel Before He Named It
On April 9, 2026, Martin Fowler published a detailed article describing a pattern he calls the Feedback Flywheel — a system for converting individual AI interactions into collective team improvement.
We'd been running the same system for months.
Not because we copied it. We hadn't read the article yet. We built it because it was the obvious solution to a real problem: every AI interaction generates useful signal, and almost every team throws that signal away.
## What Fowler Describes
Fowler's Feedback Flywheel has two layers: signal types and shared artifacts.
The four signal types:
- Context signals — facts about your codebase, domain, or project that the AI needs
- Instruction signals — prompts and phrasings that reliably produce good output
- Workflow signals — multi-step sequences that work well end-to-end
- Failure signals — cases where the AI did something wrong
Those signals feed into four shared artifacts:
- Priming docs — shared context files the whole team pulls into every session
- Commands — reusable prompt templates anyone can invoke
- Playbooks — documented workflows for recurring tasks
- Guardrails — explicit constraints that prevent known failure modes
The cadence runs at four levels: after each session, at daily standup, at retro, and quarterly. The key metric: a declining count of "why did the AI do that?" moments.
## What We Built
Our implementation lives across three systems: the Obsidian vault, the autotron agent network, and AgentGuard.
### Layer 1: The Vault (Signal Capture)
Every meaningful AI interaction — good or bad — gets logged. The vault has a structured `Feedback/` directory that captures what the prompt was, what the output was, whether it worked, and what to do differently.
This maps directly to Fowler's four signal types. Context signals become `context/` files read by every agent at startup. Failure signals become guardrails. The difference: ours is machine-readable, not just human-readable. Agents ingest these files. They're runtime configuration, not documentation.
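As a sketch of what one captured entry might look like, here's a minimal logger. The `Feedback/` layout comes from the article above, but the field names and JSON format are illustrative assumptions, not the vault's actual schema:

```python
import json
from datetime import date
from pathlib import Path

def log_feedback(prompt, output_summary, worked, lesson, signal_type):
    """Append one structured feedback entry to the Feedback/ directory.
    Field names are illustrative; adapt them to your own schema."""
    entry = {
        "date": date.today().isoformat(),
        "signal_type": signal_type,   # context | instruction | workflow | failure
        "prompt": prompt,
        "output_summary": output_summary,
        "worked": worked,
        "next_time": lesson,
    }
    path = Path("Feedback") / f"{entry['date']}-{signal_type}.json"
    path.parent.mkdir(exist_ok=True)
    path.write_text(json.dumps(entry, indent=2))
    return path

entry_path = log_feedback(
    prompt="Summarize open deals by stage",
    output_summary="Missed two deals in 'negotiation'",
    worked=False,
    lesson="Include the deal-stage glossary in context",
    signal_type="failure",
)
```

Because the entries are plain JSON, the same files can later be read by agents at startup, which is what makes them runtime configuration rather than documentation.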
### Layer 2: The Agent Network (Signal Processing)
The autotron system runs scheduled agents: CMO, CFO, standup, deal monitor, market scout. Each agent:
- Reads shared context files at startup
- Executes its task
- Writes findings back to shared memory
- Updates queue files that other agents read on their next run
Signal captured → shared artifact updated → next agent reads it → better output
Every cycle, the system knows a little more. Every agent run improves the baseline.
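That cycle is simple enough to sketch end to end; a minimal version in Python (the file layout and the `run_task` hook are illustrative assumptions, not autotron's actual interfaces):

```python
import json
from pathlib import Path

CONTEXT_DIR = Path("context")          # shared context files (illustrative layout)
MEMORY_FILE = Path("memory/shared.json")
QUEUE_FILE = Path("memory/queue.json")

def run_agent_cycle(run_task):
    """One scheduled run: read shared state, do the work, write results back."""
    # 1. Read every shared context file at startup
    context = {p.name: p.read_text() for p in sorted(CONTEXT_DIR.glob("*.md"))}
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    queue = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []

    # 2. Execute the agent's task against current context + memory
    findings, new_items = run_task(context, memory, queue)

    # 3. Write findings back to shared memory
    memory.update(findings)
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

    # 4. Update the queue that other agents read on their next run
    QUEUE_FILE.write_text(json.dumps(queue + new_items, indent=2))
    return memory
```

Each agent only needs to implement `run_task`; the read/write plumbing is shared, so every run starts from whatever the previous agents learned.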
### Layer 3: AgentGuard (The Guardrail Layer)
Fowler's "failure signal → guardrail" step is the hardest to implement in practice. Most teams write notes after an agent goes wrong. Notes don't stop the next agent from making the same mistake.
AgentGuard enforces hard constraints at runtime:
```python
from agentguard import Guard

guard = Guard(
    budget_limit=5.00,    # hard stop at $5
    token_limit=100_000,  # no runaway context
    time_limit=300,       # 5 min max
)

@guard.protect
def run_agent():
    # your agent logic here
    ...
```
When a new failure mode is discovered, a new guard condition gets added. The system can't make the same expensive mistake twice.
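The idea behind such a guard is easy to see in plain Python. A simplified sketch of the budget hard-stop pattern (an illustration of the mechanism, not AgentGuard's actual internals; `record` is a hypothetical cost-reporting hook):

```python
import functools

class BudgetExceeded(RuntimeError):
    pass

def budget_guard(limit_usd):
    """Hard-stop decorator: abort the run once cumulative spend passes the limit.
    A simplified illustration of the guardrail pattern, not AgentGuard's code."""
    spent = {"usd": 0.0}

    def record(cost_usd):
        # Every model call reports its cost; crossing the cap aborts the run
        spent["usd"] += cost_usd
        if spent["usd"] > limit_usd:
            raise BudgetExceeded(
                f"spend ${spent['usd']:.2f} exceeded ${limit_usd:.2f} cap"
            )

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, record=record, **kwargs)
        return wrapper
    return decorator

@budget_guard(limit_usd=5.00)
def run_agent(record):
    for step_cost in (1.50, 2.00, 3.00):  # hypothetical per-call costs
        record(step_cost)                 # raises once the $5 cap is crossed
```

The point of the pattern is that the constraint lives in code on the execution path, so a note in a retro doc becomes something the runtime actually enforces.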
Install: `pip install agentguard47`
## The Weekly Cadence
| Cadence | Fowler's Artifact | Our Implementation |
|---|---|---|
| After each session | Priming doc update | Vault feedback log + context file update |
| Daily | Standup signal review | Autotron standup agent writes to shared memory |
| Weekly | Retro + playbook update | Weekly review agent updates SKILL.md files |
| Quarterly | Strategy refresh | CFO + CMO quarterly review |
## What to Build First
Week 1: Create a shared context file. One document everyone pulls into every AI session. Put your domain glossary, common workflows, and "don't do this" notes in it.
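A starting shape for that file (the section names are suggestions, not a required format):

```markdown
# Team AI Context

## Glossary
- "deal stage": one of lead / qualified / negotiation / closed

## Common workflows
- Weekly pipeline review: export CRM -> summarize by stage -> flag stalls

## Don't do this
- Never invent customer names; say "unknown" instead
- Don't report financial figures without citing the source file
```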
Week 2: Build a failure log. When an AI output goes wrong, write down what happened. After 10 entries, you'll see patterns. Turn those patterns into constraints.
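Once entries accumulate, the pattern-finding step can be automated; a sketch, assuming each log entry is a JSON file with a `category` field (an illustrative schema, not a prescribed one):

```python
import json
from collections import Counter
from pathlib import Path

def failure_patterns(log_dir="failure-log", threshold=3):
    """Tally failure categories across log entries and surface repeat offenders.
    Any category hit `threshold` or more times is a candidate for a hard constraint."""
    counts = Counter(
        json.loads(p.read_text())["category"]
        for p in Path(log_dir).glob("*.json")
    )
    return sorted(cat for cat, n in counts.items() if n >= threshold)
```

Anything this function returns is a guardrail waiting to be written.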
Week 3: Add a budget guardrail. Hard spend limit before agents touch production.
Week 4: Run a retro on your AI tooling. What prompts are people reusing manually? Turn them into shared commands. What workflows run every week? Turn them into scheduled agents.
By week 4 you have the skeleton of a Feedback Flywheel. It compounds from there.
Originally published at bmdpat.com. If you want help designing this kind of compounding AI infrastructure for your team, start here.