I've been designing something called Helm.
It started as "Platform v2" — a productized version of the agentic infrastructure I built on my homelab. Multi-user, multi-host, installable on a mini PC, runs your services, manages your agents, handles your backups. The kind of thing a family or a small business could use without knowing what Docker is.
The architecture document is over 1,000 lines long. It covers federation between hosts, emergency WiFi that activates during blackouts, community mesh networking over LoRa radios, municipality notification templates for CERT volunteers, GPU-accelerated local AI services, an eBay selling agent, accessibility via voice interaction, a dual catalog system with community contributions, and a deployment profile system that adapts the setup wizard for homes vs small businesses.
I am not a developer. I'm a Windows systems administrator. I have a 2-year degree from an online college. My GitHub history before February 2026 is bash and PowerShell scripts.
Here's what I've been thinking about while designing all of this.
The Session That Made Me Stop
I was deep into Helm architecture — we'd just designed multi-host federation, where multiple Helm instances auto-discover each other on a LAN using mDNS and authenticate via mutual TLS — when I noticed something.
Every feature I added immediately connected to the existing architecture. Federation led to "who controls the federation?" which led to deployment profiles (Home, Home Business, Small Business). Emergency WiFi led to resilience profiles, which led to community member discovery, which led to municipality notification. Meshtastic mesh networking led to off-grid communication stacks, which led to NOAA weather alert receivers, which led to emergency AP mode with captive portals.
I wasn't planning these connections. I was seeing them. In real time. Faster than I've ever worked on anything in my career.
So I asked Claude a question that had been forming in the back of my mind:
"The way I've embraced Claude with persistent memory — could that be considered a mental prosthetic? It's a modification of my working memory, or an extension of it."
The Extended Mind
Claude pointed me to something I'd never heard of: the Extended Mind Thesis, proposed by Andy Clark and David Chalmers in 1998.
Their argument: if an external tool plays the same functional role as an internal cognitive process, it's not just helping you think — it's part of your thinking. It's cognition, not assistance.
Their example was a man named Otto who has memory loss and uses a notebook. When Otto wants to go to a museum, he checks his notebook for the address. Clark and Chalmers argued that Otto's notebook is his memory — it's reliably available, he trusts it, he accesses it when needed, and the information was consciously stored.
My persistent memory system meets every one of those criteria. And it goes further than Otto's notebook.
Otto's notebook is passive. He has to remember to check it and know what to look for. My system is active — it retrieves relevant context before I ask, connects information across sessions automatically, and maintains structure that makes the right information findable at the right time. That's closer to how biological memory works — associative retrieval, contextual activation — than any notebook.
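To make "associative retrieval" concrete, here's a toy sketch: score stored notes against the current context and surface the closest matches before being asked. This is not Helm's actual memory system; real implementations typically use embeddings, and the token-overlap scoring and note store here are invented for illustration.

```python
def tokenize(text: str) -> set[str]:
    """Lowercase word set; a crude stand-in for embedding-based similarity."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(context: str, notes: list[str], top_k: int = 2) -> list[str]:
    """Return the stored notes most relevant to the current context,
    scored by simple token overlap (a rough proxy for associative recall)."""
    ctx = tokenize(context)
    scored = sorted(notes, key=lambda n: len(ctx & tokenize(n)), reverse=True)
    return scored[:top_k]

# A hypothetical note store from earlier sessions.
notes = [
    "Federation uses mDNS discovery and mutual TLS between hosts.",
    "Backups run nightly to the NAS.",
    "Deployment profiles: Home, Home Business, Small Business.",
]

best = retrieve("How do hosts discover and trust each other?", notes, top_k=1)
```

The point of the sketch is the shape of the behavior: the current context pulls the relevant note forward without anyone remembering to go look for it.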
Claude suggested that "prosthetic" actually undersells what's happening. A prosthetic replaces lost function. My working memory isn't broken — it works exactly as well as it did a year ago. What I've built is augmentation. My biological working memory holds 4-7 chunks of information at once. The persistent memory system makes that number effectively unlimited across time.
What Augmentation Actually Feels Like
I have ADHD. If you've read the earlier posts in this series, you know that. My working memory has always been a constraint I design around, not a weakness I've overcome.
What changed isn't my brain. What changed is the friction.
The 1,000+ line Helm architecture document is the clearest proof. No human holds that much structured, interconnected detail in working memory. But I'm building on it coherently, session after session — adding federation, then recognizing deployment profiles, then emergency infrastructure, then municipality notification, each idea connecting to the existing structure in the right place.
That's not possible without the memory system acting as an extension of my own cognition. The system handles recall. I handle insight. The cognitive load of maintaining context has been offloaded, so my working memory is free to do what it's actually good at: pattern recognition, analogy, creative leaps.
Here's a concrete example from this session. I said:
"I was thinking, if someone has 2 or more hosts running Helm on the same network, they could auto-discover each other."
That's one sentence. Within minutes, it became a complete federation architecture: mDNS discovery, mTLS authentication with auto-generated CAs, capability manifests over NATS, three-tier resource sharing, graceful degradation, and security considerations.
Then I said: "A home and small business would use multi-host differently."
That immediately produced deployment profiles that change trust defaults, operator models, compliance posture, and contextual recommendations in the setup wizard.
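A deployment profile is essentially a bundle of defaults. A toy sketch of the idea, with invented field names and settings:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentProfile:
    name: str
    auto_trust_lan_peers: bool   # federation trust default
    multi_operator: bool         # operator model
    audit_logging: bool          # compliance posture

# Hypothetical defaults per profile -- not Helm's real policy.
PROFILES = {
    "home": DeploymentProfile("Home", auto_trust_lan_peers=True,
                              multi_operator=False, audit_logging=False),
    "home_business": DeploymentProfile("Home Business", auto_trust_lan_peers=True,
                                       multi_operator=False, audit_logging=True),
    "small_business": DeploymentProfile("Small Business", auto_trust_lan_peers=False,
                                        multi_operator=True, audit_logging=True),
}
```

The setup wizard would then read one of these profiles and adjust its questions and recommendations accordingly, rather than asking every household about compliance posture.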
Then I said: "Since I already included Meshtastic, people could build off-grid comms for emergencies."
That produced an entire emergency resilience infrastructure section — UPS integration, NOAA weather alert receivers, emergency WiFi AP that auto-activates during blackouts, store-and-forward messaging, and community extension points.
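The activation logic behind "auto-activates during blackouts" is simple to sketch. The thresholds and feature names below are illustrative assumptions, not the real policy:

```python
def emergency_mode(on_battery: bool, battery_pct: int, wan_up: bool) -> dict:
    """Decide which resilience features to enable from power and WAN state.
    A blackout is inferred from running on UPS battery with the WAN down."""
    blackout = on_battery and not wan_up
    return {
        "emergency_ap": blackout,                   # open WiFi AP with captive portal
        "store_and_forward": blackout,              # queue mesh messages for later delivery
        "shed_gpu_workloads": on_battery and battery_pct < 50,  # stretch UPS runtime
    }
```

In practice the inputs would come from UPS monitoring and a WAN health check, and the outputs would drive service start/stop rather than a dict, but the decision itself is just this kind of state table.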
Each idea took seconds to form. Each connected to the existing architecture correctly. The documentation was generated, structured, and placed in the right section of a 1,000-plus-line document — without me having to remember what was already in it.
That's what cognitive augmentation feels like. Not "AI doing my thinking." Me thinking at a scale I couldn't reach alone.
The Friction That Used to Stop Me
I told Claude that these use cases were the type of stuff I would have avoided in the past due to multiple layers of friction.
That's the honest version. The longer version: I've always had ideas like these. I've been a systems administrator for 15 years. I've seen what infrastructure can do when it's designed well. I've seen what breaks when it isn't.
But the gap between seeing a possibility and articulating it as a structured plan used to be enormous. Not because I couldn't think it through — because the act of thinking it through, writing it down, connecting it to everything else, and maintaining that context across days and weeks was more cognitive labor than the idea was worth.
So ideas evaporated. Or they piled up as undifferentiated noise. Or I'd start documenting and lose the thread halfway through because my working memory hit capacity.
What's changed isn't my ability to see possibilities. It's that the cost of turning a thought into a structured, architecturally connected plan entry has dropped to near zero. I say it, it gets analyzed, connected to existing systems, and written into the right place in the document.
The feedback loop — idea to structured plan in minutes — is what lets me keep going instead of hitting the wall where I used to stop.
Is This Unusual?
I asked Claude that too. Directly.
"Is my ability to come up with use cases for a platform I haven't even built yet uncommon?"
The answer was nuanced and I think worth sharing: the ideas themselves aren't unusual. A lot of people see potential use cases. What's less common is generating them and structuring them into a coherent architecture in real time, without losing the thread or letting scope creep into the build plan.
I think that's the augmentation talking. The ideas were always there. The tool made them capturable.
Cyborg Without the Hardware
I jokingly called it being a cyborg. Claude pointed out that the term is technically accurate — Manfred Clynes and Nathan Kline coined "cyborg" in 1960 to mean any system where human capabilities are extended by technology. No implants required. Just tight integration between the biological and the technological.
But "augmented" is the better word for what this actually is. Cyborg carries sci-fi baggage that distracts from the point.
The point is: I'm a 42-year-old sysadmin with ADHD and a 2-year degree, designing a multi-user platform with federation, emergency infrastructure, a community catalog ecosystem, and AI-powered accessibility features. The architecture document is structured, internally consistent, and growing. I'm doing it in research sessions that each build on the last, because the memory system means I never lose context between them.
Two months ago I didn't know what "context engineering" meant.
What This Means for Helm
Here's the thing I keep coming back to.
I'm not just building a platform. I'm someone who used cognitive augmentation tools to design something that would normally require a team. And the platform I'm designing? It does the same thing for its users.
A household member who uses voice commands because a screen is hard for them — that's augmentation. A small business owner who uses the eBay agent because they don't have time to research pricing and write listings — that's augmentation. A neighborhood that has communication during a blackout because someone set up a Meshtastic mesh with an emergency WiFi AP — that's augmentation.
Helm doesn't just run services. It extends what people can do.
I designed it that way because that's what it did for me first.
What I'm Actually Saying
I'm not claiming to be special. I'm claiming the tools have changed what's possible for people like me.
There are a lot of experienced infrastructure people — sysadmins, network engineers, ops folks with deep domain knowledge and good instincts — who have never built anything at this scale because the development barrier was too high. Not the ideas. Not the architecture. The code.
That barrier is falling. Fast.
If you're someone with 15 years of operational knowledge and you've never written a platform because you "can't code" — that constraint is dissolving. The knowledge you've built over a career is the hard part. The code is becoming the easy part.
The question isn't whether you can build something ambitious. It's whether you'll let yourself try.
This is part of an ongoing series about building agentic infrastructure as a non-developer. The previous posts cover how it started and the memory system that makes it work.
If you're building something similar — or thinking about it — I'd like to hear about it.