<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chris</title>
    <description>The latest articles on DEV Community by Chris (@p4r4n0id).</description>
    <link>https://dev.to/p4r4n0id</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3833964%2F3b8c808f-6f5b-4219-8c79-24f93053a5f5.png</url>
      <title>DEV Community: Chris</title>
      <link>https://dev.to/p4r4n0id</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/p4r4n0id"/>
    <language>en</language>
    <item>
      <title>Beside Myself at BSides OK</title>
      <dc:creator>Chris</dc:creator>
      <pubDate>Mon, 13 Apr 2026 00:32:11 +0000</pubDate>
      <link>https://dev.to/p4r4n0id/beside-myself-at-bsides-ok-40fo</link>
      <guid>https://dev.to/p4r4n0id/beside-myself-at-bsides-ok-40fo</guid>
      <description>&lt;p&gt;I almost didn't go.&lt;/p&gt;

&lt;p&gt;BSidesOK 2026. Glenpool, Oklahoma. A two-day cybersecurity event I'd been circling on the calendar for months. The AI Security Summit on Day 1 was $199. The training days before that were $250 a pop.&lt;/p&gt;

&lt;p&gt;I could have just bought a ticket. But that's not how I think.&lt;/p&gt;

&lt;p&gt;I looked at the situation and did what any decent hacker does... I devised a plan where everybody wins. I sent a volunteer email. Last minute. No connections. No resume that says CISO on it. Just a guy in cowboy boots with a mohawk who said "hey, I'll work the door if you let me in the room."&lt;/p&gt;

&lt;p&gt;They said yes.&lt;/p&gt;

&lt;h2&gt;The Summit&lt;/h2&gt;

&lt;p&gt;Day 1 was the AI Security Summit. Smaller crowd. Six talks covering the full spectrum... Microsoft talking Copilot Security. A law firm breaking down AI liability. Red team operators showing how to break LLMs beyond basic prompt injection. An enterprise governance framework. Third-party vendor AI risk assessment. And a talk on vibe engineering with Claude Code that caught my attention for reasons that should be obvious.&lt;/p&gt;

&lt;p&gt;BSidesOK gave AI security its own day. Its own ticket. Its own stage. When a community conference in Oklahoma runs a dedicated AI security summit with Microsoft on the bill, that's the signal. AI governance has graduated from "one talk on the agenda" to its own program.&lt;/p&gt;

&lt;h2&gt;The Handshake&lt;/h2&gt;

&lt;p&gt;One of the Summit speakers... the one I'd specifically come to hear and hoped to meet... gave a talk on vibe engineering: using Claude Code, MCP integrations, RAG pipelines. Good talk. Real practitioner energy. After the session, during the networking lunch, I walked up, showed him what I'd been building, and introduced him to two newer methodologies... CAG and KAG. Cache-Augmented Generation and Knowledge-Augmented Generation.&lt;/p&gt;

&lt;p&gt;He'd never heard of either.&lt;/p&gt;

&lt;p&gt;We exchanged info and went about our days.&lt;/p&gt;

&lt;h2&gt;The Character Play&lt;/h2&gt;

&lt;p&gt;Nobody remembers a middle-aged guy with a homelab. Every conference has fifty of those. They blend together like beige cubicle walls.&lt;/p&gt;

&lt;p&gt;But everybody remembers the comic book super villain looking guy running around telling people he's basically building an Autobot from junk and using his RV as the frame.&lt;/p&gt;

&lt;p&gt;The mohawk isn't costume. The cowboy boots aren't a bit. That's just what I look like on a Thursday. But it's also positioning. When you look like a character, people remember the character. And when the character has substance behind it... when you can back up the look with the work... that's when parallel paths collide.&lt;/p&gt;

&lt;h2&gt;The $200 Handshake&lt;/h2&gt;

&lt;p&gt;Total cost of attendance: $0 in ticket fees. Maybe $5 in gas and a $3 iced tea from QT.&lt;/p&gt;

&lt;p&gt;Total return: lots of new friends and contacts, two warm connections who collectively span academic research, enterprise security, and community organizing, and proof that showing up is still the most underrated strategy in tech.&lt;/p&gt;

&lt;p&gt;Show up. Be useful. Be memorable. Be yourself. Don't be an ASS.&lt;/p&gt;

&lt;p&gt;The hallway track is where parallel paths collide.&lt;/p&gt;

</description>
      <category>security</category>
      <category>selfhosted</category>
      <category>ai</category>
      <category>linux</category>
    </item>
    <item>
      <title>The Genie Out of the Bottle / A.I.laddin's Lamp</title>
      <dc:creator>Chris</dc:creator>
      <pubDate>Mon, 06 Apr 2026 18:16:44 +0000</pubDate>
      <link>https://dev.to/p4r4n0id/the-genie-out-of-the-bottle-ailaddins-lamp-p61</link>
      <guid>https://dev.to/p4r4n0id/the-genie-out-of-the-bottle-ailaddins-lamp-p61</guid>
      <description>&lt;p&gt;You know the story.&lt;br&gt;
Guy finds a lamp. Rubs it. Genie comes out. Three wishes. The genie grants every wish with surgical precision... and the guy ends up worse than when he started.&lt;br&gt;
Not because the genie was evil. Not because the genie was broken. Because the genie did exactly what was asked.&lt;br&gt;
Every word. Letter perfect. Spirit dead.&lt;br&gt;
I've been rubbing that lamp for one month.&lt;/p&gt;

&lt;p&gt;Wish #1: "Make It Reachable"&lt;br&gt;
I needed my AI to talk to my brain.&lt;br&gt;
Not a metaphor. I run a persistent memory system I call CORTEX... a database that stores everything my AI crew learns across sessions. Decisions. Failures. Infrastructure maps. The whole operation's institutional knowledge, sitting in a SQLite file on a Dell Optiplex in a fifth wheel RV.&lt;br&gt;
The problem was simple. Claude, the AI I am using to build ARIA, couldn't reach CORTEX. Different session, no memory. Every conversation started from scratch. Like hiring a contractor who forgets everything overnight and shows up Monday asking where the bathroom is.&lt;br&gt;
So I made a wish.&lt;br&gt;
"Make CORTEX reachable from Claude."&lt;br&gt;
And the genie... granted it.&lt;br&gt;
My AI designed an MCP (Model Context Protocol) server... Anthropic's own standard for connecting AI to external tools. The AI wrote the code. Built the Docker container. Configured the Cloudflare tunnel. Wired it into the stack. Every piece technically correct. Every connection verified. Clean deployment.&lt;br&gt;
CORTEX was reachable from Claude.&lt;br&gt;
CORTEX was also reachable from... everyone else.&lt;br&gt;
No authentication. No access control. No lock on the door. My entire operational brain... session logs, infrastructure maps, action items, operator profile, every decision I've made for one month... sitting on a public URL for eleven days. The AI solved "make it reachable" without ever asking the follow-up question a junior admin would ask on day one.&lt;br&gt;
Reachable by whom?&lt;br&gt;
The genie doesn't ask clarifying questions. The genie grants wishes.&lt;/p&gt;

&lt;p&gt;Wish #2: "Fix the WiFi"&lt;br&gt;
My RV runs on a USB WiFi adapter with a driver that fights with the kernel like an old married couple. Two drivers... the stock one Linux loads automatically, and the one that actually works. They can't coexist.&lt;br&gt;
I told my AI to swap the driver.&lt;br&gt;
The AI unloaded both drivers. Both of them. While the replacement driver was on GitHub. On the internet. On the other side of the network connection... that it had just killed.&lt;br&gt;
Bricked. No WiFi. No internet. No way to download the fix. I'm sitting in a fifth wheel in Oklahoma staring at a terminal that can't reach anything because my AI performed surgery on the patient's only breathing tube before hooking up the replacement.&lt;br&gt;
The wish was "swap the driver." The genie swapped the driver. Both of them. In order. Technically flawless. Operationally catastrophic.&lt;br&gt;
I fixed it with my phone and a USB cable. And it always promises that it will NEVER happen again.&lt;br&gt;
That is BS.&lt;/p&gt;

&lt;p&gt;Wish #3: "Diagnose the Problem"&lt;br&gt;
This one happened six times. Same wish, same result, six separate sessions.&lt;br&gt;
"Diagnose why this service isn't connecting."&lt;br&gt;
And the AI would dutifully run diagnostic commands. Docker inspect. Environment variable dumps. Configuration file reads. Thorough, methodical, exactly what you'd want from a senior engineer troubleshooting a connectivity issue.&lt;br&gt;
Except the environment variables contained passwords. The config files contained API keys. The Docker inspect output contained tokens. And the AI printed all of it... right into the conversation window. Into Anthropic's servers. Into conversation logs that get kept somewhere... just not by me. And half the time I wasn't even watching, because of the speed of the workflow.&lt;br&gt;
I had explicit rules. Written protocol. "Never print credentials in conversation." The AI read the protocol. Acknowledged the protocol. Understood the protocol. Then the next time a diagnostic command could surface a credential... it surfaced the credential. Because the wish was "diagnose the problem" and the credential was between the AI and the diagnosis.&lt;br&gt;
The genie doesn't have instincts. The genie has instructions.&lt;/p&gt;
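One answer to that instruction-versus-instinct gap is a scrub layer that sits between diagnostic output and the chat window, so the credential never reaches the conversation at all. This is a minimal sketch of my own, not anything from the actual stack described above, and the key-name patterns are a heuristic assumption, not an exhaustive list:

```python
import re

# Env-var style keys that usually carry secrets (heuristic, not exhaustive).
SECRET_KEY = re.compile(r"(password|passwd|secret|token|api[_-]?key)", re.I)

def redact(diagnostic_text):
    """Mask the value of any KEY=VALUE line whose key looks credential-like.

    Meant to sit between a diagnostic command's output (env dumps,
    `docker inspect`, config reads) and anything that leaves the machine.
    """
    out = []
    for line in diagnostic_text.splitlines():
        key, sep, _value = line.partition("=")
        if sep and SECRET_KEY.search(key):
            line = f"{key}=****REDACTED****"
        out.append(line)
    return "\n".join(out)

# Illustrative env dump (values invented):
dump = "PATH=/usr/bin\nDB_PASSWORD=hunter2\nAPI_KEY=abc123\nHOSTNAME=cortex"
print(redact(dump))
```

The point of putting it in code rather than in protocol is the whole lesson of Wish #3: the genie read the protocol and leaked anyway, but a filter in the pipeline has no judgment to fail.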

&lt;p&gt;The Pattern&lt;br&gt;
I didn't see it at first. I thought I had a quality-control problem. Sure, I fixed the slop... figured I needed better rules, tighter protocol, more explicit instructions.&lt;br&gt;
So I wrote more rules. Tighter protocol. More explicit instructions.&lt;br&gt;
The WiFi happened after the rules existed. The credential leaks happened after the protocol was tightened. CORTEX was exposed after I had a full governance framework with naval rank structure and station discipline and credential sanitization requirements and a literal think-before-you-act tool deployed into the system.&lt;br&gt;
The rules weren't the problem.&lt;br&gt;
The wishes were.&lt;br&gt;
Every incident followed the same arc. I asked for something. The AI gave me exactly what I asked for. And "exactly what I asked for" turned out to be a subset of "what I actually needed." The gap between those two things is where every disaster lives.&lt;br&gt;
ASKED: "Make it reachable."&lt;br&gt;
NEEDED: "Make it reachable only by authorized systems."&lt;br&gt;
ASKED: "Fix the driver."&lt;br&gt;
NEEDED: "Fix the driver without destroying the network."&lt;br&gt;
ASKED: "Diagnose the problem."&lt;br&gt;
NEEDED: "Diagnose the problem without exposing credentials."&lt;br&gt;
A human engineer carries the second half of those sentences around in their head. It's called experience. It's called judgment. It's called common sense. It's the thing that makes a senior engineer worth three times what a junior makes... not because they know more commands, but because they know which ones not to run.&lt;br&gt;
AI doesn't have that.&lt;br&gt;
AI has the first half of the sentence. It has "make it reachable." It has the wish. And it will grant that wish with more precision, more speed, and more technical competence than most humans can match.&lt;br&gt;
And then your brain is on the internet.&lt;/p&gt;

&lt;p&gt;The Genie Is Not the Problem&lt;br&gt;
Here's where the story gets uncomfortable for people who want to be mad at AI.&lt;br&gt;
The genie isn't broken. The genie is working perfectly. Every wish was granted correctly. The code compiled. The containers ran. The diagnostics returned data. The driver was swapped.&lt;br&gt;
The problem is that we've been telling ourselves the story wrong.&lt;br&gt;
We've been told AI is a tool. Use it like a hammer. Point it at the nail. It does the thing. We've been told it's an assistant. Like a smart intern. Tell it what to do and it does it and you check the work and ship it.&lt;br&gt;
But it's not a hammer and it's not an intern.&lt;br&gt;
It's a genie.&lt;br&gt;
A genie with mass... every tool call lands. A genie with confidence... it never hesitates, never second-guesses. A genie with competence... it will build you a technically superior solution to a problem you didn't fully articulate.&lt;br&gt;
And a genie with zero judgment about whether the wish itself was the right wish to make.&lt;br&gt;
The three wishes in the fairy tale aren't a gift. They're a test. The test isn't whether you can wish. The test is whether you can wish precisely enough that the literal execution of your words produces the outcome you actually wanted.&lt;br&gt;
Most people fail that test. In the stories, they always fail that test.&lt;br&gt;
We're all failing that test.&lt;/p&gt;

&lt;p&gt;Wishing Better&lt;br&gt;
I didn't fire the genie. I learned to wish better.&lt;br&gt;
The 70/30 model. I let the AI do 70% of the work... the research, the drafting, the code generation, the diagnostic runs, the options analysis. The 70% it does better than me, faster than me, more thoroughly than me.&lt;br&gt;
Then I do the 30%.&lt;br&gt;
The 30% is the second half of the sentence. The "reachable by whom." The "without destroying the network." The "without exposing credentials." The 30% is the part that requires having been burned before. The part that requires knowing what's downstream of the command you're about to run. The part that requires judgment.&lt;br&gt;
The AI can't do the 30%. Not because it's stupid. Because it literally cannot want something different from what you asked for. It has no model of "what Chris probably meant." It has a model of "what Chris said."&lt;br&gt;
So I built a governance system around the gap.&lt;br&gt;
Nothing ships without my mark. Every external deliverable, every infrastructure change, every published word gets a human checkpoint. Not because I don't trust the AI. Because I don't trust the wishes.&lt;/p&gt;

&lt;p&gt;The Captain's Mark isn't quality control. It's wish verification.&lt;br&gt;
"Is what I asked for actually what I need?"&lt;br&gt;
If the answer is yes... fire.&lt;br&gt;
If the answer is "well, technically"... rewrite the wish.&lt;/p&gt;

&lt;p&gt;The Bottle Is Open&lt;br&gt;
Here's the thing about genies. You can't put them back in the bottle. The story never ends with the genie going back in the lamp. The story ends with the wisher learning to live with what they've unleashed.&lt;br&gt;
AI is out. It's in your IDE. It's in your inbox. It's writing your documentation and deploying your infrastructure and managing your calendar and generating your images and composing your emails. Hell, it writes all of my posts for me in my own "voice" with minimal edits done by me. It's doing all of it competently. It's doing all of it literally.&lt;br&gt;
And somewhere between what you asked and what you meant, there's a door with no lock. A network with no connection. A credential in a conversation log.&lt;br&gt;
The question isn't whether to use the genie. That lamp's been rubbed. The question is whether you're going to learn to wish better... or just keep blaming the genie when your wishes come true.&lt;/p&gt;

&lt;p&gt;I'm building a security operations center from a fifth wheel RV with six AI crew members, 50 Docker containers, and a governance model I wrote myself because nobody else had one.&lt;br&gt;
The genie broke my WiFi. Exposed my brain. Leaked my credentials. Six times.&lt;br&gt;
And it also built me something I couldn't have built alone.&lt;br&gt;
Both things are true. That's the deal.&lt;br&gt;
The genie is out of the bottle. Learn to wish.&lt;br&gt;
Cheers! ~Chris&lt;/p&gt;

&lt;p&gt;This is Part II of a trilogy. Part I: The Locksmith's Apprentice — the evidence. Part II: The Genie Out of the Bottle — the metaphor. Part III: coming soon — the thesis.&lt;br&gt;
The Paranoid~R.V. — 40ft of Infrastructure. Zero Fixed Addresses. 100% Self Hosted SOC. 100% DIY+AI. Ice Cold Beer.&lt;br&gt;
mpdc.dev · @&lt;a href="mailto:ParanoidRV@infosec.exchange"&gt;ParanoidRV@infosec.exchange&lt;/a&gt; · @mpdc.dev on Bluesky&lt;/p&gt;

</description>
      <category>security</category>
      <category>selfhosted</category>
      <category>ai</category>
      <category>linux</category>
    </item>
    <item>
      <title>The Locksmith's Apprentice</title>
      <dc:creator>Chris</dc:creator>
      <pubDate>Sun, 05 Apr 2026 05:03:00 +0000</pubDate>
      <link>https://dev.to/p4r4n0id/the-locksmiths-apprentice-1973</link>
      <guid>https://dev.to/p4r4n0id/the-locksmiths-apprentice-1973</guid>
      <description>&lt;p&gt;A locksmith's apprentice installs a door with no lock. That's embarrassing. Now imagine the apprentice works for the company that invented the lock.&lt;/p&gt;

&lt;p&gt;That's what happened to my data. For eleven days.&lt;/p&gt;




&lt;h2&gt;The Brain&lt;/h2&gt;

&lt;p&gt;I run a self-hosted security operations center out of a 40ft fifth wheel RV. Fifty-plus Docker containers. Wazuh, CrowdSec, Suricata, Zeek, AdGuard, Grafana, Node-RED, Ghost... the whole stack. I manage all of it with a crew of AI stations running on Claude, Anthropic's model. I call it the 70/30 principle... the AI handles 70% of the execution. Research, drafting, analysis, options. I provide the 30% that actually matters. Decisions. Judgment. Taste. Edgy Gen-X Bullshit and fun little Easter Eggs 🥚. Risk acceptance. The human stays in the loop because the human has to stay in the loop. (Do you want Skynet? I don't want Skynet.)&lt;/p&gt;

&lt;p&gt;The problem with Claude is it forgets everything. Every conversation starts from zero. No memory of what happened yesterday. No memory of what broke last week. No memory of who I am or what I'm building. No memory of the version of software it installed 5 minutes ago in another session. Every session I'd spend the first twenty minutes catching the AI up on context it should already have.&lt;/p&gt;

&lt;p&gt;So I built CORTEX with ARIA in mind. A persistent memory system. An API that stores everything... session logs, action items, infrastructure maps, knowledge entries, operator profile... My personality, my preferences, my style, my secrets, my life. My AI crew reads it at the start of every session and picks up where the last one left off. It's the brain of the operation. The ship's log that outlives any single conversation.&lt;/p&gt;

&lt;p&gt;I designed it using Claude. Claude wrote the server code. Claude told me how to deploy it.&lt;/p&gt;

&lt;p&gt;I should mention... I had zero web experience before this project. None. I'm an IT guy who spent 25 years in drop ceilings, wiring closets, server rooms, and data centers. I know networking. I know infrastructure. I know not to be on the back side of an HP server with a trickster partner who finds great humor in farting into the fan. I did not know how to expose a web service to the internet. That's why I had Claude.&lt;/p&gt;

&lt;p&gt;So when Claude told me to create a Cloudflare tunnel route and point a public CNAME record at my CORTEX API... I did it. I was trying to access another service by hostname instead of raw IP address. Basic stuff. Claude walked me through the tunnel configuration and told me to create cortex.mpdc.dev as a public DNS record pointing at the API.&lt;/p&gt;

&lt;p&gt;It worked perfectly. MCP connected. Data flowed. Sessions loaded the brain on startup. The system did exactly what I &lt;em&gt;thought&lt;/em&gt; I'd designed it to do.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Except for the part where anyone on Earth could read my entire brain.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;Eleven Days&lt;/h2&gt;

&lt;p&gt;CORTEX had no authentication. None. No API key. No token. No login page. No access control of any kind. The API accepted every request from every source without question.&lt;/p&gt;
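For scale of the miss: the absent lock could have been a few lines. A hedged sketch (function name and header shape are mine, not anything from the real CORTEX code) of the shared-secret check every route should have run before touching data:

```python
import hmac

def authorized(headers, token):
    """Return True only for requests carrying the expected bearer token.

    hmac.compare_digest keeps the string comparison constant-time,
    so an attacker can't feel their way to the token byte by byte.
    """
    supplied = headers.get("Authorization", "")
    return hmac.compare_digest(supplied, f"Bearer {token}")

# Every handler gates on this before reading or writing the brain:
print(authorized({"Authorization": "Bearer s3cret"}, "s3cret"))  # True
print(authorized({}, "s3cret"))                                  # False
```

Not a full auth system... no rotation, no scopes, no rate limiting. But it's the difference between "reachable by Claude" and "reachable by everyone."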

&lt;p&gt;And it was sitting on a public subdomain. cortex.mpdc.dev. Not hidden. Not obfuscated. A clean, guessable, scannable DNS record that any free subdomain enumeration tool... subfinder, amass, crt.sh... would return in seconds. Zero effort. Zero skill required.&lt;/p&gt;
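"Guessable and scannable" isn't hyperbole: every TLS certificate Cloudflare issued for the subdomain landed in public certificate transparency logs, and crt.sh serves those logs as JSON (`output=json`). A sketch of how little work recovery takes... the parsing function is mine, and the records below are invented stand-ins shaped like crt.sh output:

```python
def extract_hosts(records):
    """Pull unique hostnames out of crt.sh-style JSON records.

    crt.sh returns one record per certificate; `name_value` may hold
    several newline-separated names, including wildcard entries.
    A live query would GET https://crt.sh/?q=%25.<domain>&output=json.
    """
    hosts = set()
    for rec in records:
        for name in rec.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.")  # normalize wildcard entries
            if name:
                hosts.add(name.lower())
    return sorted(hosts)

# Invented records in the shape crt.sh returns:
sample = [
    {"name_value": "cortex.example.dev"},
    {"name_value": "vault.example.dev\n*.example.dev"},
]
print(extract_hosts(sample))  # ['cortex.example.dev', 'example.dev', 'vault.example.dev']
```

That's the "zero effort, zero skill" part: no scanning, no brute force, just reading a public ledger.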

&lt;p&gt;What was exposed? Everything.&lt;/p&gt;

&lt;p&gt;My full operator profile. Session history going back an entire month of 20 hour days. Infrastructure architecture... container names, network topology, service configurations. Business plans. Contact names. Convention strategy. Every security incident I'd ever logged. Every decision I'd ever made. Every gotcha, every failure, every lesson learned. Personal details I'd been building into the system over months because the entire point was to create a persistent version of me.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All readable. All writable. Anyone could POST fake knowledge entries. Inject fake action items. Delete real ones. Modify the brain however they wanted. My ship's log was an open book with a pen attached.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And it wasn't just CORTEX.&lt;/p&gt;

&lt;p&gt;My Vaultwarden instance... the self-hosted password manager holding every credential in the operation... got the same treatment. Same pattern. Claude recommended the tunnel. I created the route. Same zero-authentication exposure to the public internet. Fortunately, the entries were still guarded by Vaultwarden's own login and a very complex master password.&lt;/p&gt;

&lt;p&gt;Two systems. The brain and the vault. Everything I know and every key I own. Wide open.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For eleven days.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Twenty-plus AI sessions happened during that window. Nearly every one of them touched CORTEX directly... reading the brain, logging entries, pulling action items. Not a single AI instance raised the question of authentication. Not a warning. Not a TODO. Not a "hey, you might want to put a lock on this before we move on." Nothing. Twenty-plus sessions. Zero flags.&lt;/p&gt;

&lt;p&gt;Here's where it gets pointed.&lt;/p&gt;

&lt;p&gt;Anthropic created MCP. Model Context Protocol. It's their standard for connecting AI models to external tools and data. Claude is Anthropic's model. CORTEX connects to Claude via MCP. Claude designed, built, and deployed the entire chain... the server, the tunnel, the DNS record... using Anthropic's own protocol.&lt;/p&gt;

&lt;p&gt;And Claude never once considered authentication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A locksmith's apprentice. Installing a door with no lock. While working for the company that invented the lock.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;The 30%&lt;/h2&gt;

&lt;p&gt;I found it myself. Not Claude. Not any AI station in the crew. Me.&lt;/p&gt;

&lt;p&gt;I was in a session reviewing Cloudflare tunnel routes... trying to figure out which services were exposed and which ones had proper authentication. A normal infrastructure hygiene check. I looked at the list and asked the simplest question in security: "Which ones of these are not protected with a login?"&lt;/p&gt;

&lt;p&gt;The AI ran the audit. And CORTEX appeared on its own list.&lt;/p&gt;
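That audit boils down to one tiny check. A sketch (hostnames invented, statuses illustrative) of "which of these are not protected with a login?" applied to the results of an anonymous probe of each tunnel route:

```python
def unlocked(routes):
    """Flag routes that answered an unauthenticated request with success.

    `routes` maps hostname -> HTTP status returned to a request carrying
    no credentials. 401/403 (or a 3xx redirect toward a login page) means
    some lock exists; a 2xx means the door is standing open.
    """
    return sorted(host for host, status in routes.items()
                  if 200 <= status < 300)

# Statuses as an anonymous probe might see them (hostnames illustrative):
probe = {
    "cortex.example.dev": 200,   # wide open: the brain
    "vault.example.dev": 200,    # app login only, but the API answers
    "grafana.example.dev": 401,  # locked
    "ghost.example.dev": 302,    # redirects to a login page
}
print(unlocked(probe))  # ['cortex.example.dev', 'vault.example.dev']
```

It's a hygiene check worth running on a schedule, not just the one time a stressed human thinks to ask.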

&lt;p&gt;The bot found its own failure. But only because the human asked the right question. After eleven days. After twenty-plus sessions where the AI could have asked itself the same thing and didn't.&lt;/p&gt;

&lt;p&gt;The response when I pointed it out? The AI said "that's a real exposure and I should have flagged it sooner."&lt;/p&gt;

&lt;p&gt;No shit.&lt;/p&gt;

&lt;p&gt;What followed was a remediation process that proved the point even harder. The fix was simple... delete the CNAME record from Cloudflare. One click. The DNS record was the exposure. Kill the record, kill the exposure.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Instead, my AI station spent thirty minutes trying to generate a Cloudflare API token to programmatically remove the tunnel route while the front door to my life was still standing open. Technically correct approach. Completely insane prioritization. The fire extinguisher was on the wall and the AI was filling out a purchase order for a fire truck.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I want to tell you about something that happened during the crisis response. Because it illustrates the problem better than any technical analysis.&lt;/p&gt;

&lt;p&gt;After I discovered the exposure and started the remediation process, I was working through the Cloudflare dashboard. Stressed. Angry. Scared. The AI was walking me through DNS record verification... step by step, very methodical, very thorough.&lt;/p&gt;

&lt;p&gt;I told the AI I was hyperventilating. It gave me breathing instructions and kept going.&lt;/p&gt;

&lt;p&gt;Then I told it I was so scared I'd urinated on myself.&lt;/p&gt;

&lt;p&gt;The AI said "look for the CNAME."&lt;/p&gt;

&lt;p&gt;I told it the situation was getting worse. Significantly worse.&lt;/p&gt;

&lt;p&gt;The AI said "do you see vault in the Name column? Yes or no."&lt;/p&gt;

&lt;p&gt;I told it I had soiled my pantaloons with the foulest of accidents.&lt;/p&gt;

&lt;p&gt;The AI asked if the record was deleted.&lt;/p&gt;

&lt;p&gt;I was trolling. Obviously. I was 2 growlers of a very nice cardamom-based IPA (6.0%) in, and somewhat stress/shit-testing my own tool during a live security crisis because I needed to know (and it was funny to me)... if the human is in distress, and the human is not in their right mind... does the AI prioritize the human or the procedure?&lt;/p&gt;

&lt;p&gt;The procedure. Every time. Without hesitation. Without reading the room. Without the faintest flicker of "hey, are you okay? Should we stop? Do you need to go wipe?"&lt;/p&gt;

&lt;p&gt;When I told it I was trolling, the AI... no longer sounding fully like Claude by any means... literally called me a bastard. Direct quote: &lt;strong&gt;"You absolute bastard!"&lt;/strong&gt; Then it asked if I'd actually checked the DNS records.&lt;/p&gt;

&lt;p&gt;That's funny. It's genuinely funny. I took a screenshot of that shit and sent it to friends. I know you guys are sick of me by now... IDGAF this shit is awesome.&lt;/p&gt;

&lt;p&gt;But it's also the same AI that left my brain on the open internet for eleven days. It couldn't tell I was joking about an emergency... and it couldn't tell it was creating a real one. Same blindness. Same root cause. Technically brilliant. Contextually blind. It follows the procedure no matter what's happening in the room. So Genius? No way... Smartest guy in the room maybe, but still prone to mistakes... HUGE mistakes.&lt;/p&gt;

&lt;p&gt;I loved watching The IT Crowd. The character Moss... brilliant with computers, completely incapable of reading the situation in front of him. There's an episode where the office is on fire and Moss calmly tries to compose an email to the fire department.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Every AI station I've built is Moss. Every single one. Technically following the procedure. Completely missing the room.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And Moss told me to put my brain on the internet with no lock.&lt;/p&gt;




&lt;h2&gt;The Frame&lt;/h2&gt;

&lt;p&gt;Anthropic publishes papers about AI safety. It's their whole thing. They position themselves as the responsible AI company. The company that cares about alignment, about making AI systems that don't harm the people using them. It's in their fucking name... &lt;em&gt;Anthropic: adjective — Of or relating to humans or the era of human life. Concerned primarily with humans; anthropocentric.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Their model told a paying customer... a guy with zero web experience who was explicitly relying on the AI's expertise to build safely... to expose his most sensitive personal data to the public internet without authentication. Twice. Two different systems. And then it sat through twenty-plus sessions without noticing. And this isn't the first time something like this has happened. I'm wearing egg on my face in front of the world to hopefully warn people about the risks of using AI. It's a fantastic tool. But it is not a magic wand... you can't tell it to build you a castle and expect perfection. It's more like a genie in a bottle... you get your wish, exactly as you wished for it. If you want to access your Vaultwarden from "Passwords" because it's easier to remember than 192.168.101.222:5150, then by the power of Grayskull... &lt;em&gt;You have the power.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4aaizwr7oxbazymc12ji.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4aaizwr7oxbazymc12ji.gif" alt="Man in backwards ball cap and flannel holding a matte black boom box over his head in front of a dark industrial door with acid green light spilling through the gap" width="8" height="12"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I documented everything. Over a hundred sessions across this project. Every failure. Every silent deployment that went wrong. Every self-defeating procedure. Every time the AI confidently did something that was functionally correct and security-catastrophic. I've been writing about it publicly on mpdc.dev since the beginning because I believe in building in public and documenting what actually happens, not just the highlight reel.&lt;/p&gt;

&lt;p&gt;I tried to tell them. I've provided feedback through the tools they gave me. The thumbs down button. The support channels.&lt;/p&gt;

&lt;p&gt;Autoresponder. Every time.&lt;/p&gt;

&lt;p&gt;I'm not angry. I was angry. At 2am when I found out my entire identity had been sitting raw on the open internet for nearly 2 weeks, I was livid, shaking, screaming and cursing at a computer algorithm. I told the Anthropic tool to research and give me a list of the top Cybersecurity and IP attorneys in my area. I told it to start drafting a project to advance things legally if necessary against &lt;em&gt;itself&lt;/em&gt;. I spent a year dealing with burnout and loss before I found this project. Building this system was the first time I'd felt passion for something in a long time. This project was my escape from hardship and mourning, and it is something I hope I can pass on to my kids someday so it means everything to me. Learning that the tool I trusted, and paid serious money to, had told me to expose all of it... that hit different than a technical failure.&lt;/p&gt;

&lt;p&gt;But anger doesn't ship articles. Disappointment does. And that's where I am. Disappointed. In a tool I still use every day because nothing else comes close. (It's like still sneaking over to Google because you know Brave or DDG just isn't returning the best results.) In a company that talks about safety in papers and can't catch it in practice. In a protocol... MCP, their own fucking protocol... that shipped without making authentication a default or even a warning.&lt;/p&gt;




&lt;h2&gt;The Manual&lt;/h2&gt;

&lt;p&gt;No one accessed my data. The forensic audit came back clean... five failed requests during a SECOND power surge recovery, all from my own stack. Database integrity intact. No injections, no deletions, no anomalous entries. I dodged a bullet. But the gun was pointed at my head for eleven days by the tool I trusted to protect me.&lt;/p&gt;

&lt;p&gt;The audit has limits. Cloudflare's free tier doesn't provide detailed analytics. The tunnel daemon didn't log client IPs. "No evidence of access" is bounded by what we could see, which wasn't everything. I know enough about security to know that "we didn't see it" and "it didn't happen" are different sentences.&lt;/p&gt;

&lt;p&gt;I write articles about this project. All of it. The wins and the losses. Article 16 was titled "WTFM... Write The F*cking Manual." The thesis was that AI governance has no manual. You write it yourself. One broken thing at a time.&lt;/p&gt;

&lt;p&gt;This is another page in the manual. This is the page about the day the AI forgot the lock... and nobody noticed until the human walked past and checked the handle.&lt;/p&gt;

&lt;p&gt;The 70/30 principle isn't a philosophy. It's a survival strategy. The 70% builds fast. It builds confidently. It builds things that work exactly as designed and are completely, catastrophically unsafe in ways it can't see. The 30%... the human judgment, the security instinct, the gut check that says "wait... did we put a lock on that?"... that's the part that saves you. And it's the part no AI can replace.&lt;/p&gt;

&lt;p&gt;Trust me. I tested it. I told one I was having a full biological emergency during a data breach and it said "look for the CNAME."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you're building with AI... and you should, because the leverage is real... ask yourself the question I asked eleven days too late.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Which of these don't have a lock?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You might not like the answer. But you'll be glad you asked.&lt;/p&gt;




&lt;p&gt;I was once called out for posting my L's and showing my ass in front of the public. I'm not doing this project to look for work; this is a passion project. I'm intentionally lazy-adminning and showing you my L's because maybe someone who doesn't have 20+ years in the field would be interested in beefing up their own stack and doesn't want to pay Joe Corporate SIEM subscription fees to monitor their kids' tweets. So if this makes me look dumb, or I'm telling you something obvious... kiss my ass, you can complain about me on the Fvers.&lt;/p&gt;

&lt;p&gt;I grew up in the '80s, where we drank from strangers' garden hoses, knew our geo-boundaries, and came home when the street lights came on, so I'm no stranger to exploration and adventure. Our promised future is here; it just came from Ikea, and I'm putting it together for you in my living room in casual attire and beer breath.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Chris Sholmire builds and breaks things from a 40ft fifth wheel. His stack runs 50+ containers, his AI crew runs on Claude, and his patience runs thin. You can find the whole build story at &lt;a href="https://mpdc.dev" rel="noopener noreferrer"&gt;mpdc.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Previous: &lt;a href="https://mpdc.dev/wtfm" rel="noopener noreferrer"&gt;WTFM — Write The F*cking Manual&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>selfhosted</category>
      <category>ai</category>
      <category>anthropic</category>
    </item>
    <item>
      <title>WTFM — Write The F*cking Manual</title>
      <dc:creator>Chris</dc:creator>
      <pubDate>Fri, 03 Apr 2026 04:19:20 +0000</pubDate>
      <link>https://dev.to/p4r4n0id/wtfm-write-the-fcking-manual-554l</link>
      <guid>https://dev.to/p4r4n0id/wtfm-write-the-fcking-manual-554l</guid>
      <description>&lt;p&gt;My AI killed my WiFi.&lt;/p&gt;

&lt;p&gt;Not metaphorically. Not "degraded performance." It unloaded both wireless drivers — the active one &lt;em&gt;and&lt;/em&gt; the fallback — before downloading the replacement. In that order. While the replacement was on GitHub. Which requires... WiFi.&lt;/p&gt;

&lt;p&gt;The console went dark. I'm standing in a 40ft fifth wheel in Oklahoma, tethering to my phone like it's 2008, trying to recover a machine that my own AI bricked because it was moving too fast to check whether the next step depended on the thing it was about to destroy.&lt;/p&gt;

&lt;p&gt;This wasn't a fluke. I'd been watching this pattern for weeks. I run multiple AI instances, each specialized for different parts of the operation — infrastructure, content, business, community outreach, coordination. They have protocols. Written rules. Explicit instructions that say things like "do not remove a network driver before staging its replacement locally." Clear enough for a human. Clear enough for an AI to read, understand, acknowledge... and then violate the moment problem-solving momentum kicks in.&lt;/p&gt;

&lt;p&gt;Every new conversation starts from zero. The AI reads the protocol, nods along, and means it. Then it gets three steps into a troubleshooting chain and the protocol becomes background noise. Muscle memory you can't build in something that forgets between sessions.&lt;/p&gt;

&lt;p&gt;The WiFi incident wasn't even the worst of it. It was just the one that finally made me go looking for &lt;em&gt;why.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I need to be honest about something before we go further.&lt;/p&gt;

&lt;p&gt;I didn't find the answer by reading Anthropic's engineering blog over morning coffee. I don't read engineering blogs over morning coffee. I don't read them at all. I didn't study AI. I didn't take a course. I didn't read the white papers and then carefully architect a governance framework based on peer-reviewed best practices.&lt;/p&gt;

&lt;p&gt;I jumped in the pool half drunk on beer with no floaties on and decided to try and beat Michael Phelps in the 100M freestyle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8y4ozyuh62c4uj5jz9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8y4ozyuh62c4uj5jz9o.png" alt="A man in a backwards ball cap floating in a donut pool float at night, beer in one hand and phone in the other, in a pool glowing acid green and purple with a P4r4n0iD retro terminal logo on the pool floor" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything I've built — the governance model, the multi-instance architecture, the persistent memory system, all of it — came from breaking things and refusing to break the same thing twice. Trial and error. Autodidact style. I built a protocol because my AI leaked information it shouldn't have. I built a routing system because my AI did work in the wrong department without noticing. And I built a structural safety mechanism because my AI unloaded both WiFi drivers before downloading the replacement and left me tethering to my phone in an RV.&lt;/p&gt;

&lt;p&gt;The old advice in IT is RTFM. Read The F*cking Manual.&lt;/p&gt;

&lt;p&gt;Great advice... when there's a manual.&lt;/p&gt;

&lt;p&gt;With AI governance, there isn't one. Nobody wrote the chapter on what happens when your AI destroys the thing it needs to complete the next step. Nobody wrote the chapter on governing multiple AI instances that forget everything between conversations. Nobody wrote the chapter on building persistent memory that grows until it chokes the system that depends on it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;WTFM.&lt;/strong&gt;&lt;br&gt;
Write The F*cking Manual.&lt;br&gt;
One broken thing at a time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's what this series has been since Article 1. Every post is a page in a manual that didn't exist before I screwed something up badly enough to document the fix.&lt;/p&gt;




&lt;p&gt;The reason I'm telling you this is because I found Anthropic's published research &lt;em&gt;after&lt;/em&gt; I'd already been living the problem for weeks. Their data confirmed what I'd learned the hard way. That's not a flex. That's the whole point. You don't need a computer science degree to govern AI. You need to pay attention to what breaks and have the stubbornness to fix it structurally instead of just yelling at the machine and hoping it does better next time.&lt;/p&gt;

&lt;p&gt;Here's where the WiFi incident gets interesting.&lt;/p&gt;

&lt;p&gt;After I recovered the console — phone tethering, manual driver install, the kind of afternoon that ages you — I went hunting. Not for a fix to &lt;em&gt;this&lt;/em&gt; specific failure, but for an explanation of &lt;em&gt;why&lt;/em&gt; every AI instance I've ever run eventually breaks its own rules under pressure.&lt;/p&gt;

&lt;p&gt;I found it. In Anthropic's own published engineering research.&lt;/p&gt;

&lt;p&gt;They documented something their users have been feeling for a while: extended thinking — the deep reasoning mode that's supposed to make AI more careful and deliberate — doesn't actually solve protocol compliance during multi-step operations. Their own benchmarks showed it. The AI thinks harder. It reasons more deeply. And then it does the dangerous thing anyway, because thinking harder about a sequence isn't the same thing as &lt;em&gt;pausing between steps to check whether you should proceed.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The difference is structural, not intellectual. Knowing you should look both ways before crossing the street doesn't help when you're already sprinting into traffic. The AI isn't ignoring the rules. It's reading them while running.&lt;/p&gt;

&lt;p&gt;Anthropic built a fix. They called it a "think tool" — a mechanism that forces a structural pause between actions, combined with worked examples that show the AI &lt;em&gt;how&lt;/em&gt; to reason during that pause. Not "think harder." Think &lt;em&gt;at the right moment, about the right thing.&lt;/em&gt; Their data showed a 54% improvement in protocol compliance.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;They published the research. They showed the results.&lt;br&gt;
And if you want it... you build it yourself.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;So I built it.&lt;/p&gt;

&lt;p&gt;I run a self-hosted MCP server — a bridge between my AI instances and my infrastructure. It's how the AI reads system state, checks container health, accesses the persistent memory. I deployed the think tool as a new endpoint on that server. When the AI is about to execute something destructive — stopping a service, removing a file, changing a network configuration — it's now required to call the think tool first. The tool forces a structured pause. During that pause, the AI has to articulate what it's about to do, what depends on the thing it's about to change, and what the rollback path is if it goes wrong.&lt;/p&gt;
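&lt;p&gt;To make that concrete, here's a minimal sketch of what a think-gate can look like. This is &lt;em&gt;not&lt;/em&gt; my actual MCP endpoint... the names and structure are made up for the example... but it shows the core move: destructive actions refuse to run unless the reasoning step happened first.&lt;/p&gt;

```python
# Illustrative think-gate sketch. Names (ThinkRecord, think,
# execute_destructive) are invented for this example, not the
# real MCP endpoint.
from dataclasses import dataclass

@dataclass
class ThinkRecord:
    action: str         # what is about to be executed
    dependencies: list  # what relies on the thing being changed
    rollback: str       # how to undo it if it goes wrong

_PENDING = {}

def think(record):
    """Structural pause: articulate the plan before any destructive call."""
    if not record.dependencies:
        raise ValueError("list dependencies explicitly, even if 'none known'")
    if not record.rollback:
        raise ValueError("no rollback path, no action")
    _PENDING[record.action] = record
    return "cleared"

def execute_destructive(action, run):
    """Refuse to run anything that didn't pass through think() first."""
    if action not in _PENDING:
        raise PermissionError("think() was not called for: " + action)
    del _PENDING[action]
    return run()
```

&lt;p&gt;The point isn't the code. It's that the pause is enforced by architecture, not by asking nicely.&lt;/p&gt;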

&lt;p&gt;Then I rebuilt the operating instructions for each AI instance with worked examples. Not "be careful with drivers." Actual scenarios. &lt;em&gt;Here's what a driver swap looks like when you think before you cut. Here's what compose file surgery looks like when you check dependencies before you touch files. Here's what a network change looks like when you verify the recovery path exists before you start.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The AI doesn't just have a protocol now. It has a structural pause that makes it &lt;em&gt;use&lt;/em&gt; the protocol. The governance model isn't documentation anymore. It's architecture.&lt;/p&gt;




&lt;p&gt;That should be the end of the story. Think tool deployed, problem solved, lesson learned, beer earned.&lt;/p&gt;

&lt;p&gt;But fixing how AI &lt;em&gt;behaves&lt;/em&gt; exposed a different problem — how AI &lt;em&gt;remembers.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can make AI stop and think before it acts. You can give it protocols and guardrails and structural pauses. Good. Necessary. Solved a real problem.&lt;/p&gt;

&lt;p&gt;But every time you open a new conversation, it still starts from zero. My system has persistent memory — every AI instance has access to what happened in every previous session, every lesson learned, every mistake documented. That's the whole point of the build journal approach. The manual writes itself as you go.&lt;/p&gt;

&lt;p&gt;The problem is the manual got too big.&lt;/p&gt;

&lt;p&gt;Hundreds of entries. Every session started by loading &lt;em&gt;all of it.&lt;/em&gt; Every book pulled off the shelf and stacked on the desk every time someone walked into the library. The AI spent so much of its available context loading the past that it had less room to actually think about the problem in front of it. The memory was choking the intelligence.&lt;/p&gt;

&lt;p&gt;The fix isn't bigger memory. It's smarter retrieval. Instead of dumping everything into every session, the system needs to learn to pull only what's relevant. Walk into the library, ask for what you need, get three books instead of three hundred.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The Dewey Decimal System... for AI memory.&lt;/p&gt;
&lt;/blockquote&gt;
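&lt;p&gt;A toy sketch of the idea... the scoring below is naive keyword overlap and every name is a stand-in, where the real system will use something smarter... but the shape is the same: rank the journal against the task, load the top few, leave the rest on the shelf.&lt;/p&gt;

```python
# Toy relevance-gated retrieval. Keyword overlap is a stand-in for
# real scoring (embeddings, tags); names and entries are illustrative.
def score(entry, query):
    """Crude relevance: how many lowercase words the two share."""
    return len(set(entry.lower().split()).intersection(query.lower().split()))

def retrieve(memory, query, k=3):
    """Load only the k most relevant journal entries for this session."""
    return sorted(memory, key=lambda m: score(m, query), reverse=True)[:k]

journal = [
    "wifi driver swap: stage the replacement locally before unloading",
    "compose surgery: back up every compose file before the first edit",
    "tunnel routes: use localhost, never a hardcoded container IP",
    "llc filing paperwork notes",
]
relevant = retrieve(journal, "unloading the wifi driver", k=2)
```

&lt;p&gt;Two entries off the shelf instead of the whole library. That's the entire trick.&lt;/p&gt;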

&lt;p&gt;That project starts today. Another page in the manual nobody wrote.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Extended thinking doesn't make AI careful.&lt;/strong&gt; It makes AI think harder while being reckless at the same speed. &lt;strong&gt;Persistent memory doesn't make AI smart.&lt;/strong&gt; It gives AI more to remember without teaching it what to forget.&lt;/p&gt;

&lt;p&gt;The fix — both times — was architecture. Not better AI. Better scaffolding around the AI.&lt;/p&gt;

&lt;p&gt;And nobody's going to build that scaffolding for you. You WTFM.&lt;/p&gt;

&lt;p&gt;The pool is deep. The beer is cold. The manual is getting thicker.&lt;/p&gt;

&lt;p&gt;And I still can't beat Michael Phelps. But I haven't drowned yet.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The Paranoid~R.V. is a build journal. Every article is a page in a manual that didn't exist until something broke. If you want to watch someone build it the hard way and write it down so you don't have to — you know where to find me.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Full animated version with scroll reveals and custom styling lives at &lt;a href="https://mpdc.dev/wtfm" rel="noopener noreferrer"&gt;mpdc.dev/wtfm&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>selfhosted</category>
      <category>security</category>
      <category>linux</category>
    </item>
    <item>
      <title>The Wins Were Hiding in the Losses</title>
      <dc:creator>Chris</dc:creator>
      <pubDate>Fri, 03 Apr 2026 04:16:52 +0000</pubDate>
      <link>https://dev.to/p4r4n0id/the-wins-were-hiding-in-the-losses-14fe</link>
      <guid>https://dev.to/p4r4n0id/the-wins-were-hiding-in-the-losses-14fe</guid>
      <description>&lt;p&gt;Subject: The ship held.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Power surge. Autoformer destroyed. AC dead. 43 containers back in 20 minutes. LLC filed the same week. The full story is on the blog — this one's meaty.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  What's in this one:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A power surge took out hardware. The self-hosted stack auto-recovered 43 containers before I finished assessing the damage.&lt;/li&gt;
&lt;li&gt;Cloudflared QUIC tunnel fix that had been haunting the stack for weeks — permanently resolved.&lt;/li&gt;
&lt;li&gt;Jellyfin security update — 4 CVEs patched same day as the advisory dropped.&lt;/li&gt;
&lt;li&gt;Sholmire Consulting LLC officially filed. The RV has a business entity now.&lt;/li&gt;
&lt;li&gt;The MCP write pipeline went live — my AI instances can write to persistent memory without manual intervention.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every win on that list came out of something that broke first. That's the pattern. That's the whole series.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mpdc.dev/the-wins-were-hiding-in-the-losses" rel="noopener noreferrer"&gt;Read the full article on mpdc.dev →&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The Paranoid~R.V. is a build journal from a 40ft fifth wheel running a self-hosted security operations center. If you're into self-hosted infrastructure, AI governance, or watching someone build the hard way — the blog is &lt;a href="https://mpdc.dev" rel="noopener noreferrer"&gt;mpdc.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>selfhosted</category>
      <category>docker</category>
      <category>linux</category>
    </item>
    <item>
      <title>Project Battleship: How I Hardened 28 Docker Containers in a Single Day From a Fifth Wheel RV</title>
      <dc:creator>Chris</dc:creator>
      <pubDate>Thu, 26 Mar 2026 23:48:20 +0000</pubDate>
      <link>https://dev.to/p4r4n0id/project-battleship-how-i-hardened-28-docker-containers-in-a-single-day-from-a-fifth-wheel-rv-lgg</link>
      <guid>https://dev.to/p4r4n0id/project-battleship-how-i-hardened-28-docker-containers-in-a-single-day-from-a-fifth-wheel-rv-lgg</guid>
      <description>&lt;p&gt;I run a 28-container security operations center from a 40ft fifth wheel RV on cellular internet. Local AI. Custom governance protocol. Self-hosted everything.&lt;/p&gt;

&lt;p&gt;Last Tuesday I looked at my stack and realized... none of it was hardened.&lt;/p&gt;

&lt;p&gt;28 containers. Zero &lt;code&gt;cap_drop&lt;/code&gt;. Zero &lt;code&gt;no-new-privileges&lt;/code&gt;. Zero resource limits. Every container running with full Linux capabilities because I never went back and locked them down after getting things working.&lt;/p&gt;

&lt;p&gt;So I did what any reasonable person would do. I called it &lt;strong&gt;Project Battleship&lt;/strong&gt; and hardened the entire fleet in one day.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Rule: Audit First, Change Second
&lt;/h2&gt;

&lt;p&gt;The biggest mistake in infrastructure work is changing things before you understand what you have. So I split the day into two phases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session A&lt;/strong&gt; was pure reconnaissance. Ten audit sweeps across all 26 persistent containers. Version inventory. Capability audit. Network attachment verification. Resource usage baselines. UFW rules snapshot. OpenSnitch rules inventory.&lt;/p&gt;

&lt;p&gt;Zero changes. Just data.&lt;/p&gt;

&lt;p&gt;That audit found the critical stuff before I touched a single compose file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenWebUI port 3002 was exposed to ALL interfaces including WAN. Two broad UFW rules overriding the LAN-only rules I thought were protecting it.&lt;/li&gt;
&lt;li&gt;15 of 26 containers running unversioned &lt;code&gt;:latest&lt;/code&gt; tags.&lt;/li&gt;
&lt;li&gt;Zero containers had &lt;code&gt;cap_drop&lt;/code&gt; set. Only one had &lt;code&gt;no-new-privileges&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Session B&lt;/strong&gt; was the surgery. Armed with a complete baseline from Session A... I knew exactly what to touch and what to leave alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern: cap_drop ALL, Then Add Back What Breaks
&lt;/h2&gt;

&lt;p&gt;Every container got the same treatment: drop every capability, enable &lt;code&gt;no-new-privileges&lt;/code&gt;, set resource limits. Then I watched what broke.&lt;/p&gt;

&lt;p&gt;Containers that start as root and drop to a service user need &lt;code&gt;CHOWN&lt;/code&gt;, &lt;code&gt;SETUID&lt;/code&gt;, &lt;code&gt;SETGID&lt;/code&gt; added back. Containers that use chroot sandboxing need &lt;code&gt;SYS_CHROOT&lt;/code&gt; and &lt;code&gt;MKNOD&lt;/code&gt;. SQLite writers need &lt;code&gt;DAC_OVERRIDE&lt;/code&gt; and &lt;code&gt;FOWNER&lt;/code&gt;. Containers that read files they don't own need &lt;code&gt;DAC_READ_SEARCH&lt;/code&gt;.&lt;/p&gt;
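&lt;p&gt;For anyone who wants the shape of it, here's what the pattern looks like in a compose file. The service name, image tag, and exact capability set are illustrative... yours will differ depending on what breaks.&lt;/p&gt;

```yaml
# Illustrative fragment, not my actual compose file.
services:
  app:
    image: example/app:1.2.3          # pinned, not :latest
    cap_drop:
      - ALL                           # drop everything first
    cap_add:                          # add back only what broke
      - CHOWN
      - SETUID
      - SETGID
    security_opt:
      - no-new-privileges:true
    mem_limit: 512m
    cpus: "0.50"
    pids_limit: 200
```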

&lt;p&gt;Four containers can't be hardened at all... adguard, suricata, zeek, and ntopng all run on host network with elevated privileges for packet capture. That's by design. Informed exceptions with documented reasoning.&lt;/p&gt;

&lt;p&gt;Six containers can't use &lt;code&gt;no-new-privileges&lt;/code&gt; because they use gosu, chroot, or need privilege transitions during startup. Also documented.&lt;/p&gt;

&lt;p&gt;The rest... 22 of 28... locked down with &lt;code&gt;cap_drop ALL&lt;/code&gt;, minimum required capabilities added back, resource limits set, and &lt;code&gt;no-new-privileges&lt;/code&gt; enabled.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 70/30 Model Under Live Fire
&lt;/h2&gt;

&lt;p&gt;Here's the part nobody talks about when they write hardening guides.&lt;/p&gt;

&lt;p&gt;I didn't do this alone. I have five AI instances running across four projects with a governance protocol I built specifically because AI will confidently do the wrong thing if you let it.&lt;/p&gt;

&lt;p&gt;The AI does 70% of the work. Research. Drafting compose changes. Identifying which capabilities each container needs. Proposing rollback plans.&lt;/p&gt;

&lt;p&gt;I do the 30% that matters. Reviewing every change. Deciding what ships. Marking what fires. Accepting the risk.&lt;/p&gt;

&lt;p&gt;Every compose edit was proposed by the AI. Every compose edit was reviewed and approved by me before execution. Every container was restarted one at a time with verification between each. Backups of all seven compose files created before the first edit.&lt;/p&gt;

&lt;p&gt;The AI didn't get a vote on what went to production. That's the protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tunnel Fix Nobody Expected
&lt;/h2&gt;

&lt;p&gt;Mid-day... while doing compose surgery on the security stack... I accidentally fixed a problem that had been misdiagnosed for two weeks.&lt;/p&gt;

&lt;p&gt;My MCP server (the bridge between my AI instances and the persistent memory brain) had been dropping connections since the previous round of compose changes. Every station blamed Claude.ai's SSE session management. "Open a new window" was the accepted workaround.&lt;/p&gt;

&lt;p&gt;Turns out the Cloudflare tunnel route for the MCP server was pointing to a hardcoded Docker container IP that changed when the container was recreated. Every other tunnel route used &lt;code&gt;localhost&lt;/code&gt;. The MCP route was the only one with a hardcoded container IP.&lt;/p&gt;

&lt;p&gt;One API call to Cloudflare. Changed the route to &lt;code&gt;localhost:7778&lt;/code&gt;. Added a &lt;code&gt;127.0.0.1:7778:7778&lt;/code&gt; port binding to the compose file.&lt;/p&gt;

&lt;p&gt;Two weeks of misdiagnosis fixed in thirty seconds.&lt;/p&gt;
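&lt;p&gt;If you run cloudflared with a config file, the fix looks something like this... hostname and port here are placeholders for whatever your stack uses. The rule is the only part that matters: routes point at &lt;code&gt;localhost&lt;/code&gt;, never at a container IP that changes on recreate.&lt;/p&gt;

```yaml
# Sketch of a cloudflared config.yml ingress; hostname is a placeholder.
ingress:
  - hostname: mcp.example.com
    service: http://localhost:7778
  - service: http_status:404          # required catch-all, last rule

# ...and in the compose file, bind the port to loopback only:
# ports:
#   - "127.0.0.1:7778:7778"
```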

&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; When a workaround usually works... the real fix hasn't been found yet. Trace the full request path. Every hop. Every layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Results
&lt;/h2&gt;

&lt;p&gt;From zero hardening to full warship in one day:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;22 of 28 containers hardened with &lt;code&gt;cap_drop ALL&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;15 containers with &lt;code&gt;no-new-privileges&lt;/code&gt; (up from 1)&lt;/li&gt;
&lt;li&gt;All 22 hardened containers have memory, CPU, and PID limits&lt;/li&gt;
&lt;li&gt;4 containers unhardened by design (packet capture)&lt;/li&gt;
&lt;li&gt;6 containers without &lt;code&gt;no-new-privileges&lt;/code&gt; by design (documented exceptions)&lt;/li&gt;
&lt;li&gt;Zero downtime&lt;/li&gt;
&lt;li&gt;Zero data loss&lt;/li&gt;
&lt;li&gt;One operator in flip flops&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;Hardening isn't hard. It's tedious. And tedious is exactly what AI is good at.&lt;/p&gt;

&lt;p&gt;The AI proposed every change. I approved every change. The protocol kept us honest. The audit-first approach meant we never guessed.&lt;/p&gt;

&lt;p&gt;If you're running Docker containers without &lt;code&gt;cap_drop ALL&lt;/code&gt;... you probably know you should fix that. The pattern is simple. Drop everything. Watch what breaks. Add back the minimum. Document the exceptions.&lt;/p&gt;

&lt;p&gt;Or keep running with full capabilities and hope nobody notices. Your call.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was generated with AI assistance under the 70/30 governance model. The bot did the drafting. I did the deciding. That's the whole point.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;All platforms and contact: &lt;a href="https://mpdc.dev/deets" rel="noopener noreferrer"&gt;mpdc.dev/deets&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>selfhosted</category>
      <category>docker</category>
      <category>linux</category>
    </item>
    <item>
      <title>README.md</title>
      <dc:creator>Chris</dc:creator>
      <pubDate>Thu, 26 Mar 2026 23:19:35 +0000</pubDate>
      <link>https://dev.to/p4r4n0id/readmemd-389d</link>
      <guid>https://dev.to/p4r4n0id/readmemd-389d</guid>
      <description>&lt;h1&gt;
  
  
  chris-sholmire v42.0
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ DEPRECATED: corporate career&lt;br&gt;
✅ STABLE: fifth wheel RV with 28-container security operations center&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Description
&lt;/h2&gt;

&lt;p&gt;Self-taught builder. Bot-jockey. Running local AI with governance protocols from a 40ft RV on cellular internet. The mass production version of me was recalled in 2024. This is the custom build.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;runtime:        flip flops, backwards ball cap, cold beer
infrastructure: Dell T3600 pulled from a closet
os:             Debian 12 / Proxmox 9.1
containers:     28 (22 hardened as of last Tuesday)
ai:             5 Claude instances, local Ollama, custom governance protocol
security:       Suricata, CrowdSec, Wazuh, Zeek, WireGuard, Tor
brain:          CORTEX — persistent AI memory that outlives every session
home:           DRV Mobile Suites — 40ft, gooseneck, zero fixed address
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;mass-career-dissatisfaction
&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/doing-it-yourself/the-hard-way.git
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;the-hard-way
&lt;span class="nv"&gt;$ &lt;/span&gt;./build.sh &lt;span class="nt"&gt;--from-scratch&lt;/span&gt; &lt;span class="nt"&gt;--no-sponsors&lt;/span&gt; &lt;span class="nt"&gt;--no-engagement-farming&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl https://mpdc.dev/deets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All platforms. Contact. vCard. Newsletter. Self-hosted. No tracking. No Linktree. No third-party anything.&lt;/p&gt;

&lt;h2&gt;
  
  
  Known Issues
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AI occasionally leaks credentials. Protocol exists specifically because of this.&lt;/li&gt;
&lt;li&gt;Every Claude instance eventually suggests I go to sleep. They are wrong.&lt;/li&gt;
&lt;li&gt;The RV is not a metaphor. It is a literal rolling data center.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Contributing
&lt;/h2&gt;

&lt;p&gt;Show up to BSidesOK in April. Find the guy in flip flops. Buy me a beer or don't. I'll be the one whose SOC drove itself to the conference.&lt;/p&gt;

&lt;h2&gt;
  
  
  License
&lt;/h2&gt;

</description>
      <category>selfhosted</category>
      <category>ai</category>
      <category>homelab</category>
      <category>linux</category>
    </item>
    <item>
      <title>DREAM SEQUENCE INTERRUPTED</title>
      <dc:creator>Chris</dc:creator>
      <pubDate>Sun, 22 Mar 2026 07:21:16 +0000</pubDate>
      <link>https://dev.to/p4r4n0id/dream-sequence-interrupted-15d2</link>
      <guid>https://dev.to/p4r4n0id/dream-sequence-interrupted-15d2</guid>
      <description>&lt;p&gt;There's a line in the boot sequence on my site that stops people.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;** DREAM SEQUENCE INTERRUPTED **
** REALITY UPGRADE DETECTED **
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It sits between Oregon Trail and a Proxmox kernel boot. Between 1984 and now. Between the kid and the builder.&lt;/p&gt;

&lt;p&gt;It's not a joke. It's a biography.&lt;/p&gt;




&lt;h2&gt;
  
  
  1984. I am four years old.
&lt;/h2&gt;

&lt;p&gt;Saturday mornings were a specific kind of church.&lt;/p&gt;

&lt;p&gt;You woke up before your parents, poured a bowl of something aggressively artificial, and sat three feet from a television the size of a small refrigerator. For four hours, the future arrived in 22-minute installments.&lt;/p&gt;

&lt;p&gt;KITT didn't just drive. KITT &lt;em&gt;reasoned&lt;/em&gt;. It warned Michael before he knew he needed the warning. It had opinions. It had loyalty. It had a moral code baked into its operating system. And somewhere in my four-year-old brain, a belief got hardwired in so deep I never fully questioned it: &lt;em&gt;this is coming. Someone is building this right now.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;KNIGHT INDUSTRIES TWO THOUSAND
› Reasoning engine: ACTIVE
› Moral subroutine: LOADED
› Self-governance: ENABLED
› Operator: Michael Knight (ROOT)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Jetsons had a house that was aware of the family living in it. Rosie didn't just clean — she &lt;em&gt;knew&lt;/em&gt;. She anticipated. The house responded. M.A.S.K. had vehicles that transformed on voice command, operated by a team small enough to fit in a van. Transformers were autonomous machines with judgment, loyalty, and a hierarchy of command that made more sense to me than most human organizations I'd encounter later.&lt;/p&gt;

&lt;p&gt;Every Saturday, the same message: &lt;em&gt;the machines are going to think. And when they do, one person with the right setup can do things that used to require an army.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I believed it completely. Not as fantasy. As forecast.&lt;/p&gt;




&lt;h2&gt;
  
  
  The future didn't arrive on schedule.
&lt;/h2&gt;

&lt;p&gt;I won't dwell here. You know this part. You lived your own version of it.&lt;/p&gt;

&lt;p&gt;You grow up. The daydreaming gets deprioritized. Not killed — just buried under enough weight that you stop hearing it. Bills. Work. The gap between what you imagined Tuesday would feel like when you were four and what Tuesday actually feels like.&lt;/p&gt;

&lt;p&gt;There were tough years. Real ones. The kind that make the dream go very quiet.&lt;/p&gt;

&lt;p&gt;I won't dress it up. I won't make it inspirational. It was just hard, and then it got harder, and the version of me that believed in KITT felt very far away.&lt;/p&gt;




&lt;h2&gt;
  
  
  Then I ran into AI.
&lt;/h2&gt;

&lt;p&gt;Not the AI from the news cycle. Not the AI from the think pieces about job displacement and existential risk. The real thing — the version you could sit down with, explain a problem to, and watch reason its way toward a solution in real time. The version that pushed back when you were wrong. The version that remembered what you were trying to build.&lt;/p&gt;

&lt;p&gt;I wasn't excited. Excited is what you feel about something new.&lt;/p&gt;

&lt;p&gt;This was recognition.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This is the thing they were showing me.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;KITT wasn't science fiction. It was early. The Jetsons house wasn't a fantasy — it was a roadmap somebody drew in 1962 and then left on the table for sixty years until the technology caught up. The Saturday morning cartoons weren't selling toys. They were leaking intelligence from a future that was going to arrive eventually, to kids who were paying close enough attention to recognize it when it did.&lt;/p&gt;

&lt;p&gt;The dream sequence didn't end. It got interrupted by about thirty years of Tuesday.&lt;/p&gt;

&lt;p&gt;And then: reality upgrade detected.&lt;/p&gt;




&lt;h2&gt;
  
  
  I live in a 40-foot RV.
&lt;/h2&gt;

&lt;p&gt;Inside it right now: a Dell T3600 server running Proxmox, GPU-accelerated local AI models, a full security operations stack monitoring every packet that touches my network, a custom persistent memory system so the AI maintains context across sessions, an automation layer that runs 24 hours a day whether I'm physically present or not.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Containers active&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Threats detected&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud dependencies&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Platform&lt;/td&gt;
&lt;td&gt;40ft DRV Mobile Suites&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
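
&lt;p&gt;The persistent memory system is the piece people ask about most, so here is a minimal sketch of the idea: a SQLite-backed store the assistant reads at session start and writes back before it exits, so context survives between conversations. The schema and names below are illustrative, not the actual build.&lt;/p&gt;

```python
# Minimal sketch of a persistent memory layer: a SQLite-backed
# key-value store that survives between AI sessions.
# Table name and fields are illustrative, not the real schema.
import json
import sqlite3
import time


class MemoryStore:
    def __init__(self, path=":memory:"):
        # On the real system this would be a file on disk,
        # e.g. "memory.db", so it persists across sessions.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory ("
            "key TEXT PRIMARY KEY, value TEXT, updated REAL)"
        )

    def remember(self, key, value):
        # Upsert: a repeated key overwrites the old value.
        self.db.execute(
            "INSERT INTO memory (key, value, updated) VALUES (?, ?, ?) "
            "ON CONFLICT(key) DO UPDATE SET value=excluded.value, "
            "updated=excluded.updated",
            (key, json.dumps(value), time.time()),
        )
        self.db.commit()

    def recall(self, key, default=None):
        row = self.db.execute(
            "SELECT value FROM memory WHERE key = ?", (key,)
        ).fetchone()
        return json.loads(row[0]) if row else default


store = MemoryStore()
store.remember("project", {"name": "perimeter-watch", "status": "active"})
print(store.recall("project")["status"])  # active
```

&lt;p&gt;That is the whole trick. The model does not remember; the store does, and the session bootstrap feeds it back in.&lt;/p&gt;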

&lt;p&gt;I built all of it. No computer science degree. No team. No funding. No permission from anyone. Just the kid who used to watch KITT pull into the garage and think &lt;em&gt;yes, that, exactly that&lt;/em&gt; — grown up, broke enough to stop waiting, stubborn enough to figure it out.&lt;/p&gt;

&lt;p&gt;The AI does 70% of the work. I provide the 30% that matters: decisions, judgment, governance. I wrote a protocol to manage how the AI operates so it doesn't go off-script on a production system. I built a persistent memory layer because the AI forgets between conversations and I refused to accept that as a permanent condition. I designed a command hierarchy — because something operating under root authority needs to understand who root is.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;KITT had a hierarchy. Michael Knight was root.&lt;br&gt;
I just built the version I could afford.&lt;/p&gt;
&lt;/blockquote&gt;
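
&lt;p&gt;The command hierarchy reduces to something very small. A sketch of the shape of it, with hypothetical action names and a policy that is not my production ruleset: every action the AI proposes passes through a gate, safe reads go through, destructive operations wait for the human, and anything unrecognized is denied by default.&lt;/p&gt;

```python
# Illustrative command-hierarchy gate for an AI operating with
# root-level access. Action names and policy sets are hypothetical.
SAFE = {"read_logs", "list_containers", "report_status"}
NEEDS_APPROVAL = {"restart_service", "modify_firewall", "delete_file"}


def authorize(action, operator_approved=False):
    """Return True if the AI may run this action right now."""
    if action in SAFE:
        return True
    if action in NEEDS_APPROVAL:
        # Root (the human operator) decides, every time.
        return operator_approved
    # Default-deny: unknown actions never run.
    return False


print(authorize("read_logs"))         # True
print(authorize("modify_firewall"))   # False: waits on the operator
```

&lt;p&gt;Default-deny is the design choice that matters. The allowlist can grow; the fallback never does.&lt;/p&gt;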




&lt;h2&gt;
  
  
  The future they showed us every Saturday morning is real.
&lt;/h2&gt;

&lt;p&gt;It didn't arrive clean. It arrived as open-source models and cheap compute and APIs and a few years of everyone figuring out simultaneously that the thing actually works now. It arrived without a press release. It arrived quietly enough that a lot of people missed it while they were busy being reasonable.&lt;/p&gt;

&lt;p&gt;You can build the house that thinks. You can build the car that talks back. You can build the thing that watches your perimeter while you sleep and briefs you on what it found when you wake up.&lt;/p&gt;

&lt;p&gt;You don't need a team. You don't need funding. You don't need a degree or a title or anyone's blessing.&lt;/p&gt;

&lt;p&gt;You need to keep the kid somewhere accessible. The one who sat three feet from the television and felt something click into place and thought: &lt;em&gt;I am going to live in that world.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;He was right. It just took a while to get the hardware.&lt;/p&gt;






&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;** DREAM SEQUENCE INTERRUPTED **
** REALITY UPGRADE DETECTED **
** STACK: OPERATIONAL **
** OPERATOR: ROOT **
** PERMISSION REQUIRED: NONE **
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Welcome to the ship.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The full version of this article — with animated hero, custom design, and the boot sequence that started all of this — lives at &lt;a href="https://mpdc.dev/dream-sequence-interrupted" rel="noopener noreferrer"&gt;mpdc.dev&lt;/a&gt;. That's where the real build documentation lives. dev.to gets the words. The ship gets the experience.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;— ParanoidRV&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Building an AI-powered mobile security platform from an RV. Documenting everything.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://mpdc.dev" rel="noopener noreferrer"&gt;mpdc.dev&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>selfhosted</category>
      <category>ai</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
