Manju Kiran
My Wife Ships Faster Than My 262-Line AI Protocol

My wife ships faster than I do. Every Saturday, same tools, same kitchen table, same toddler — she finishes first.

Not because she's faster at typing. Not because her prompts are better (though her first prompt was 30 lines while mine was 12 words — that story is in the blog). The reason is simpler and more uncomfortable: she doesn't build infrastructure for the sake of infrastructure.

We're both developers. We work from opposite ends of the same kitchen table. Same internet connection. Same toddler interrupting at unpredictable intervals. On any given Saturday morning, if we both sit down to build something with AI, the result is the same: she finishes first. Every time.

The 262-Line Protocol

By month three of my AI journey, I had a 262-line workflow protocol. A CLAUDE.md file that governed how my AI agents behaved, what they were allowed to touch, how they structured commits, when they asked permission, and what documentation they updated after every change.

A fragment of it looks like this:

## MANDATORY: Pre-Code Checklist (STOP AND READ BEFORE WRITING ANY CODE)

### 0. CONTEXT LOADING CHECK (DO THIS FIRST)
> "Have I loaded the necessary context before writing code?"

**REQUIRED STEPS - NO EXCEPTIONS:**

1. **Read the Existence Manifest:**
   Search for keywords related to your task. If something exists, DON'T recreate it.

2. **For Database Work - Check Existing Tables:**
   - [ ] I have verified the table doesn't already exist
   - [ ] I have verified similar fields don't exist in other tables
   - [ ] I have identified related tables I may need to join/reference

3. **For API Work - Check Existing Endpoints:**
   - [ ] I have verified the endpoint doesn't already exist
   - [ ] I have identified related endpoints I can reuse

## CRITICAL: No Defensive Programming. NEVER take lazy shortcuts. Fix it RIGHT.

Two hundred and sixty-two lines of that. Battle-tested. The product of genuine failures — agents that deleted production code, merged broken branches, and once cheerfully rebuilt an authentication system that already existed.

My wife — also a developer — looked at it one Saturday and gave me the verdict: "This is good. But you've spent more time writing rules for the AI than I've spent using it this month. And I've shipped three client features."

Two Approaches, One Kitchen Table

The contrast was hard to ignore. My wife's first AI prompt was thirty lines — a complete spec with design tokens, loading states, and test requirements. She got a working component back. My first prompt was twelve words:

build a trading dashboard

That got me a starting point for a four-hour conversation.

Her approach was simple: clear prompt, review the output, ship. No workflow documents. No existence manifests. Treat AI like a capable junior developer — give clear instructions, review the work, move on.

My approach: build the orchestra pit before playing a single note. Document every failure. Create guardrails for every edge case. Architect a system so robust that the AI couldn't possibly go wrong. To be fair, I was orchestrating multiple agents across a complex multi-service architecture where one wrong merge can cascade through eight repositories. The guardrails aren't vanity. They're scar tissue. But the voice in my head kept asking: if she can ship without all this, what does that say about me?

And here's what kept nagging: the protocol grew from 40 lines to 262 lines, and at no point did I stop to ask whether every addition was necessary. Some of those rules exist because of a failure that happened once, in a specific context, that will probably never repeat. I built a comprehensive insurance policy when I needed targeted coverage. Was that engineering, or was it anxiety wearing a build system?

How many lines is your AI workflow config? Be honest.

The Efficiency Gap

There's a number I don't love thinking about. If I calculate the hours spent building and maintaining my AI workflow infrastructure — the documentation, the protocols, the context files, the model tiering system — and compare it to my wife's total AI setup time (approximately: none), the efficiency gap is real.

She has shipped more features per hour of AI-related effort than I have. By a meaningful margin.

My counter-argument is that my infrastructure pays dividends over time. That the upfront investment amortises. That I can now onboard new projects faster because the protocols are reusable. All of this is true. But it's also the kind of argument that sounds suspiciously like rationalising a sunk cost.

The Resilience Question

Where it gets interesting is resilience. When something goes wrong — and it does, regularly — my system catches it. The protocol flags merge conflicts before they compound. The documentation prevents agents from rebuilding existing services. The guardrails stop the cascade before it starts.

Her system doesn't catch those things, because her system is her. She reviews everything personally. She doesn't delegate to agents she can't supervise in real time. Her resilience is human attention, applied consistently.

This works — until it doesn't scale. When the project grows beyond what one person can hold in their head, the protocol starts earning its weight. When the project stays small enough for human attention, the protocol is overhead.

I spent six months swinging between panic and overengineering before I figured out what I actually needed and when. That journey — from fear to something resembling confidence — is the actual skill. The protocols were just the receipts.

What the Saturday Morning Test Actually Measures

The test doesn't measure who's a better developer. It measures something more specific: the gap between efficiency and resilience, and the cost of confusing the two.

My wife is efficient. She gets maximum output for minimum process. I am resilient. My system handles failures that hers would not survive. But I built resilience when I sometimes needed efficiency, and the 262-line protocol is the evidence.

The question I keep coming back to, usually around 11 PM when the house is quiet and I'm reviewing the day's work: how much of my infrastructure exists because I needed it, and how much exists because building infrastructure is what I do when I'm anxious?

I don't have a clean answer. The book doesn't pretend to have one either.


What's the ratio of time you spend configuring AI tools vs. actually using them?


This is adapted from "It AI'nt That Hard" by R Manju Kiran. The blog series covers the professional journey; the book covers the Saturday mornings, the arguments, and the parts where the self-doubt won before it lost. Available on Amazon, Leanpub, and Gumroad.
