My AI killed my WiFi.
Not metaphorically. Not "degraded performance." It unloaded both wireless drivers — the active one and the fallback — before downloading the replacement. In that order. While the replacement was on GitHub. Which requires... WiFi.
The console went dark. I'm standing in a 40ft fifth wheel in Oklahoma, tethering to my phone like it's 2008, trying to recover a machine that my own AI bricked because it was moving too fast to check whether the next step depended on the thing it was about to destroy.
This wasn't a fluke. I'd been watching this pattern for weeks. I run multiple AI instances, each specialized for different parts of the operation — infrastructure, content, business, community outreach, coordination. They have protocols. Written rules. Explicit instructions that say things like "do not remove a network driver before staging its replacement locally." Clear enough for a human. Clear enough for an AI to read, understand, acknowledge... and then violate the moment problem-solving momentum kicks in.
Every new conversation starts from zero. The AI reads the protocol, nods along, and means it. Then it gets three steps into a troubleshooting chain and the protocol becomes background noise. Muscle memory you can't build in something that forgets between sessions.
The WiFi incident wasn't even the worst of it. It was just the one that finally made me go looking for why.
I need to be honest about something before we go further.
I didn't find the answer by reading Anthropic's engineering blog over morning coffee. I don't read engineering blogs over morning coffee. I don't read them at all. I didn't study AI. I didn't take a course. I didn't read the white papers and then carefully architect a governance framework based on peer-reviewed best practices.
I jumped in the pool half drunk on beer with no floaties on and decided to try and beat Michael Phelps in the 100M freestyle.
Everything I've built — the governance model, the multi-instance architecture, the persistent memory system, all of it — came from breaking things and refusing to break the same thing twice. Trial and error. Autodidact style. I built a protocol because my AI leaked information it shouldn't have. I built a routing system because my AI did work in the wrong department without noticing. And I built a structural safety mechanism because my AI unloaded both WiFi drivers before downloading the replacement and left me tethering to my phone in an RV.
The old advice in IT is RTFM. Read The F*cking Manual.
Great advice... when there's a manual.
With AI governance, there isn't one. Nobody wrote the chapter on what happens when your AI destroys the thing it needs to complete the next step. Nobody wrote the chapter on governing multiple AI instances that forget everything between conversations. Nobody wrote the chapter on building persistent memory that grows until it chokes the system that depends on it.
WTFM.
Write The F*cking Manual.
One broken thing at a time.
That's what this series has been since Article 1. Every post is a page in a manual that didn't exist before I screwed something up badly enough to document the fix.
I'm telling you this because I found Anthropic's published research after I'd already been living the problem for weeks. Their data confirmed what I'd learned the hard way. That's not a flex. That's the whole point. You don't need a computer science degree to govern AI. You need to pay attention to what breaks and have the stubbornness to fix it structurally instead of just yelling at the machine and hoping it does better next time.
Here's where the WiFi incident gets interesting.
After I recovered the console — phone tethering, manual driver install, the kind of afternoon that ages you — I went hunting. Not for a fix to this specific failure, but for an explanation of why every AI instance I've ever run eventually breaks its own rules under pressure.
I found it. In Anthropic's own published engineering research.
They documented something their users have been feeling for a while: extended thinking — the deep reasoning mode that's supposed to make AI more careful and deliberate — doesn't actually fix protocol compliance during multi-step operations. Their own benchmarks showed it. The AI thinks harder. It reasons more deeply. And then it does the dangerous thing anyway, because thinking harder about a sequence isn't the same thing as pausing between steps to check whether you should proceed.
The difference is structural, not intellectual. Knowing you should look both ways before crossing the street doesn't help when you're already sprinting into traffic. The AI isn't ignoring the rules. It's reading them while running.
Anthropic built a fix. They called it a "think tool" — a mechanism that forces a structural pause between actions, combined with worked examples that show the AI how to reason during that pause. Not "think harder." Think at the right moment, about the right thing. Their data showed a 54% improvement in protocol compliance.
They published the research. They showed the results.
And if you want it... you build it yourself.
So I built it.
I run a self-hosted MCP server — a bridge between my AI instances and my infrastructure. It's how the AI reads system state, checks container health, accesses the persistent memory. I deployed the think tool as a new endpoint on that server. When the AI is about to execute something destructive — stopping a service, removing a file, changing a network configuration — it's now required to call the think tool first. The tool forces a structured pause. During that pause, the AI has to articulate what it's about to do, what depends on the thing it's about to change, and what the rollback path is if it goes wrong.
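My server isn't public, so here's a minimal sketch of what that kind of gate can look like. The function name, the fields, and the checks are my illustrations, not Anthropic's published API and not my exact deployment; the only idea borrowed from their research is the forced, structured pause before a destructive action.

```python
# Sketch of a "think tool" gate for destructive actions. Function name,
# fields, and checks are illustrative assumptions; the borrowed idea is
# only the structured pause: articulate the action, its dependencies,
# and the rollback path before you're allowed to proceed.

def think(action: str, depends_on: list[str], rollback: str) -> dict:
    """Structured pause: state what you're about to do, what depends on
    the thing you're changing, and how you recover if it goes wrong."""
    problems = []
    if not depends_on:
        problems.append("List what depends on the thing you're about to change.")
    if not rollback.strip():
        problems.append("No rollback path stated.")
    # The WiFi failure mode: a rollback that needs the network you're killing.
    if "download" in rollback.lower() and any("network" in d.lower() for d in depends_on):
        problems.append(
            "Rollback requires a download, but the network depends on this "
            "change. Stage the replacement locally first."
        )
    return {"approved": not problems, "problems": problems}

# The driver swap that bricked my console would have been caught here:
verdict = think(
    action="modprobe -r iwlwifi",
    depends_on=["wlan0 link", "network access for the next step"],
    rollback="re-download the replacement driver from GitHub",
)
assert verdict["approved"] is False  # blocked before the console goes dark
```

The point isn't the string matching, which a real deployment would replace with the AI's own reasoning during the pause; it's that the gate is structural. The AI can't skip it the way it skips a line in a protocol document.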
Then I rebuilt the operating instructions for each AI instance with worked examples. Not "be careful with drivers." Actual scenarios. Here's what a driver swap looks like when you think before you cut. Here's what compose file surgery looks like when you check dependencies before you touch files. Here's what a network change looks like when you verify the recovery path exists before you start.
The AI doesn't just have a protocol now. It has a structural pause that makes it use the protocol. The governance model isn't documentation anymore. It's architecture.
That should be the end of the story. Think tool deployed, problem solved, lesson learned, beer earned.
But fixing how AI behaves exposed a different problem — how AI remembers.
You can make AI stop and think before it acts. You can give it protocols and guardrails and structural pauses. Good. Necessary. Solved a real problem.
But every time you open a new conversation, it still starts from zero. My system has persistent memory — every AI instance has access to what happened in every previous session, every lesson learned, every mistake documented. That's the whole point of the build journal approach. The manual writes itself as you go.
The problem is the manual got too big.
Hundreds of entries. Every session started by loading all of it. Every book pulled off the shelf and stacked on the desk every time someone walked into the library. The AI spent so much of its available context loading the past that it had less room to actually think about the problem in front of it. The memory was choking the intelligence.
The fix isn't bigger memory. It's smarter retrieval. Instead of dumping everything into every session, the system needs to learn to pull only what's relevant. Walk into the library, ask for what you need, get three books instead of three hundred.
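The retrieval side can start embarrassingly simple: score every journal entry against the current task and load only the top few. This is a sketch of the idea, assuming plain-text entries and naive word-overlap scoring, not the system as deployed.

```python
# Sketch: relevance-ranked memory retrieval instead of loading everything.
# Entry format and the word-overlap scoring are illustrative assumptions.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(task: str, entries: list[str], k: int = 3) -> list[str]:
    """Return the k journal entries sharing the most words with the task."""
    task_words = tokenize(task)
    return sorted(
        entries,
        key=lambda entry: len(task_words & tokenize(entry)),
        reverse=True,
    )[:k]

journal = [
    "Lesson: never unload a network driver before staging its replacement locally.",
    "Fixed a compose file port conflict between traefik and the media stack.",
    "Community outreach draft approved, scheduled for Friday.",
]
# Ask for what you need; get one book off the shelf, not three hundred.
top = retrieve("swap the wifi network driver without losing connectivity", journal, k=1)
assert top == [journal[0]]
```

Real systems would swap the word overlap for embeddings or tagged categories, but the shape is the same: the session loads three relevant entries, not three hundred, and the context budget goes to the problem instead of the past.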
The Dewey Decimal System... for AI memory.
That project starts today. Another page in the manual nobody wrote.
Extended thinking doesn't make AI careful. It makes AI think harder while being reckless at the same speed. Persistent memory doesn't make AI smart. It gives AI more to remember without teaching it what to forget.
The fix — both times — was architecture. Not better AI. Better scaffolding around the AI.
And nobody's going to build that scaffolding for you. You WTFM.
The pool is deep. The beer is cold. The manual is getting thicker.
And I still can't beat Michael Phelps. But I haven't drowned yet.
The Paranoid~R.V. is a build journal. Every article is a page in a manual that didn't exist until something broke. If you want to watch someone build it the hard way and write it down so you don't have to — you know where to find me.
Full animated version with scroll reveals and custom styling lives at mpdc.dev/wtfm