Let me be upfront about something.
I'm a Computer Science Engineer. I work at a global telecommunications company managing more than 7,000 servers spread across the world. I know infrastructure deeply — HPE, VMware, Linux, Docker, AWS. I know exactly what happens when a system fails at 2am because I'm the one who fixes it.
I understand systems. I understand networks. I understand why things break and how to prevent them from breaking.
What I had never done was build and launch a software product of my own. Not because I didn't understand how things work: I knew perfectly well how to design an architecture, how services need to communicate with each other, what data needs to persist and what doesn't, when to use WebSockets and when not to. That was clear to me from the start.
The gap was something else: time and implementation speed. Converting a clear architecture into working code, component by component, endpoint by endpoint, takes months when you're doing it alone in your spare time.
AI didn't teach me to design systems. It let me build them at the speed my mind designs them.
What I built
NEXUS Ecosystem is a suite of six self-hosted Docker management tools:
- NEXUS — container management (start, stop, deploy, terminal, metrics)
- Watcher — automatic image update detection
- Pulse — uptime monitoring for HTTP, TCP, DNS, APIs
- Security — CVE vulnerability scanning + VirusTotal integration
- Notify — multi-channel alerts (email, Telegram, Discord)
- Hub — central control with SSO, event bus, log center, automations
Stack: React 18, Node.js, Express, Socket.io, Docker. Published on GitHub and Docker Hub. Running 24/7 on my homelab.
A team of seven engineers would have taken close to a year to build this. I built it in weeks of evening sessions, as a side project, while working full time managing global infrastructure.
How it actually works with AI
Everybody talks about AI writing code. That's not what happened here — or at least, that's not the useful frame.
The useful frame is: AI changed what I could execute on at once.
As a systems engineer, I understand distributed systems deeply. I know how Docker networking works. I know what happens when a service can't reach its dependency. I know why you need persistent volumes. I know the difference between a health check and a readiness check. I know what an event bus is and why you'd use one.
What I didn't have was the bandwidth to express all of that understanding in code at the speed I was thinking it. Designing the architecture takes minutes. Implementing it correctly, with all the edge cases, across six interconnected services — that's where the time goes.
With Claude Code, I describe the system I want in terms I already understand deeply, and it produces the implementation. I review it, I understand it, I modify it when it's wrong — and I know when it's wrong because I understand the system. I'm the architect. AI is the contractor.
I know exactly what I want built. I can specify it precisely. I can inspect the work and know when it doesn't meet the spec. I just don't have to write every line myself.
The parts that were still hard
I want to be honest about this, because most AI hype glosses over it.
Debugging distributed systems is still hard. When Security was emitting events that Hub wasn't receiving, tracking down whether the problem was the event bus, the network, the auth middleware, or the Socket.io room configuration took hours. AI helped narrow it down, but it didn't eliminate the work.
Architecture decisions are still yours. AI will implement whatever you ask it to. It won't tell you that your data model is wrong until you've built three features on top of it and discovered the problem yourself. The decisions that matter — how services communicate, where state lives, what the failure modes are — those are entirely on you. This is where domain expertise is irreplaceable.
Context management is a real skill. A large codebase across six tools has more context than fits in any single session. Knowing which files to include, how to describe the current state, when to start fresh versus continue — this is something you have to learn. The CLAUDE.md file I maintain for the project is as important as any piece of code.
You have to know enough to know when it's wrong. This is critical. When Claude Code suggested using nice and ionice to limit Grype's CPU usage inside a Docker container on Windows/WSL2, I knew immediately that was wrong — those Linux process priority tools don't work the way you'd expect in that environment. Someone without infrastructure experience might have shipped that and spent weeks confused. That judgment — knowing when the implementation is subtly broken — comes from years of experience, not from AI.
What this means for the "AI will replace developers" debate
I've been watching this debate with particular interest, because I sit in an unusual position relative to it.
Here's what I actually think:
AI didn't replace a developer to build NEXUS. It enabled someone with deep infrastructure and systems expertise — who already understood exactly what needed to be built — to build it without needing a dedicated development team.
That's a different thing. And I think it's more interesting than the replacement narrative.
The developers I'd worry about, if I were one, aren't the senior people building complex distributed systems. Their judgment, architecture instincts, and debugging skills are more valuable with AI than without: they can move faster without getting sloppier.
The role that's genuinely under pressure is the one that's primarily about translation: taking a specification from a domain expert and converting it into working code. That translation work — from "I need the system to do X" to a working implementation — is exactly what AI is getting good at.
But here's the flip side: if you're a domain expert who already understands what needs to be built, AI is giving you superpowers you didn't have before. Infrastructure engineers, DevOps specialists, data analysts, scientists — people who understand their domain deeply and can design the right solution — are suddenly able to build things at a speed that wasn't previously possible.
That's not replacement. That's expansion.
The uncomfortable part
I built software that works, that solves real problems, that has real users. It has a design system. It has real-time communication. It has security scanning. It has an SSO system. It has an event-driven automation engine.
I designed every piece of it. I made every architecture decision. I knew what needed to be built and why.
AI let me build it at the speed I designed it.
What this tells me is that the gap between "I understand how this should work" and "I can ship this" is closing fast. For people who already have the domain knowledge and the systems thinking — and just needed the implementation bandwidth — that gap is already gone.
The value of deep expertise hasn't decreased. If anything, it's more valuable now — because the people who truly understand what needs to be built can now actually build it.
Where NEXUS is now
NEXUS Ecosystem is open source and running in my homelab right now — six services, unified Hub with SSO, 24/7. The plan is to publish the Hub integration formally once it's been running stably for a few weeks.
If you're an infrastructure engineer, DevOps specialist, or systems person who has the domain knowledge and the architecture vision but has always needed a development team to execute it — that constraint is gone.
- GitHub: github.com/Alvarito1983
- Docker Hub: hub.docker.com/u/afraguas1983