TL;DR — I built Maintenant, a 20-feature monitoring tool (Docker + Kubernetes + endpoints + certs + alerts + status page + MCP server), solo, in two months, with Claude Code. AGPL-3.0 open-core, Pro tier at €29/month, real users in production. This post is not a pitch. It's the honest breakdown — what actually got 10x'd, what completely wasted my time, what I'd never claim about AI coding, and why the code was probably the easiest part. If you're a senior dev considering whether to bet on LLM-augmented development, this is what the inside actually looks like.
Everyone's writing "I built X with AI" posts. This one tells you what they leave out.
What I actually built
Maintenant is a Go binary that runs on about 17 MB of RAM at idle. You drop it into a container and it auto-discovers your Docker, Swarm and Kubernetes workloads, watches your HTTP and TCP endpoints, your cron jobs through heartbeats, your TLS certificates, the resource usage of every service, detects images with updates available, flags dangerous network configurations — exposed database ports, 0.0.0.0 bindings, privileged containers — exposes a public status page, and ships alerts through webhooks, Discord, and on the Pro edition Slack, Teams, and email. It also ships an embedded MCP server with OAuth2 so an AI assistant can query your infrastructure directly. Everything is stored in SQLite with WAL, no external dependencies. No Postgres, no Redis, no message queue. The Vue 3 + TypeScript + Tailwind frontend is embedded into the binary through embed.FS. One Docker image, one process, zero orchestration.
I built it alone. AGPL-3.0 open-core, Pro edition at €29/month for agencies and freelancers managing client infrastructure, with a CLA for external contributors. License keys signed with Ed25519, delivered through an environment variable. Marketing site on Hugo + PocketBase handling Stripe checkout and license verification. Full documentation. And I built all of it by working with Claude Code in a continuous loop.
This is where most "I built X with AI" posts derail. They tell you about the excitement, the screenshots, the "look ma, no hands" energy. This one does the opposite.
The honest timeline (this is where most posts lie)
I started in March 2026, even though I'd been mulling the idea over before that. So roughly two to two and a half months of serious work to arrive at what I just described — twenty features, dual-licensing, marketing site, documentation, MCP server.
Which wouldn't have been possible if I were discovering Claude Code at the same time as I was building Maintenant. I've been using Claude Code daily for about a year and a half — on other projects, on client refactorings, on explorations that ended up in the drawer. The custom skills, the verification reflexes, the instinct for when to frame tightly versus when to let it breathe: all of it pre-existed Maintenant. The product was built on accumulated practice, not discovered during construction.
That distinction matters because it changes what this article can promise. If you start today with Claude Code and aim for twenty features in two months, you're going to wreck yourself. Not because the tool doesn't allow it — it does — but because practice with the tool is itself an investment, and the short two months only work on top of eighteen long months upstream.
Why Maintenant exists in the first place
People who run Docker seriously always end up with the same stack: Uptime Kuma for uptime checks, Portainer or Dozzle for containers, a cron script for TLS certificates, a half-configured Grafana for metrics, and a status page running on a public Notion. Five tabs, five databases, five alerting layers that don't talk to each other.
Maintenant replaces all five. Not because it does each one better than the specialist — Grafana will always be more powerful for metrics. But because the friction of juggling five tools costs more than the quality delta on each one. When you're chasing a problem you want to see the container state, recent logs, the HTTP check hitting it, the certificate of the endpoint and the CPU consumption all on the same page. Not by swiping between five dashboards.
The principle is strict: observe, never act. Maintenant doesn't restart anything, doesn't pull anything, doesn't touch anything. It's a witness. That simplifies the code, the security model, and the trust a client places in the thing they deploy. Read-only is non-negotiable.
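Read-only also keeps the checks themselves honest: they reduce to pure functions over observed state. As a toy illustration of the dangerous-network-configuration checks mentioned above — simplified inputs and my own names, not the product's code — flagging a database port bound to all interfaces looks like this:

```go
package main

import "fmt"

// PortBinding is a simplified view of what a read-only container
// inspection returns: host IP and host port.
type PortBinding struct {
	HostIP   string
	HostPort int
}

// Ports commonly belonging to databases that should rarely be public.
var dbPorts = map[int]string{
	3306:  "mysql",
	5432:  "postgres",
	6379:  "redis",
	27017: "mongodb",
}

// RiskyBindings flags bindings exposing a database port on all interfaces.
// Purely observational: it reports, it never mutates anything.
func RiskyBindings(bindings []PortBinding) []string {
	var warnings []string
	for _, b := range bindings {
		svc, isDB := dbPorts[b.HostPort]
		public := b.HostIP == "0.0.0.0" || b.HostIP == "" || b.HostIP == "::"
		if isDB && public {
			warnings = append(warnings,
				fmt.Sprintf("%s port %d bound to all interfaces", svc, b.HostPort))
		}
	}
	return warnings
}

func main() {
	fmt.Println(RiskyBindings([]PortBinding{
		{HostIP: "127.0.0.1", HostPort: 5432}, // fine: loopback only
		{HostIP: "0.0.0.0", HostPort: 6379},   // risky: Redis exposed publicly
	}))
}
```

Because nothing here writes anywhere, the worst a bug can do is a wrong warning — which is exactly the failure mode you want in a tool a client deploys next to production.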
It's a product that has a reason to exist independently of who built it or with what. That matters for the rest of the post.
What Claude actually 10x'd
First thing to clarify: I'm a senior Go developer, fractional CTO, and I've been doing web work for twenty-five years — I was hosting PHP and Flash sites on dedicated LAMP servers back when "DevOps" wasn't a word yet. I'm not learning how to code with an LLM. So when I say "Claude accelerated me", I'm not talking about "it wrote the code I couldn't write". I'm talking about bandwidth.
The frontend, for one. My daily stack is Go, backend, network protocols, systems work. Vue 3 with TypeScript and Tailwind is not my home turf. I can do it, but slowly, second-guessing the conventions, losing time wondering if this should be <script setup> or an explicit Composition API. With Claude knowing the stack better than I do, I attack a new component as if I had a frontend dev sitting next to me. Not so it makes the calls — I decide architecture, UX, visual conventions — but so it carries the cognitive load of the framework. The result: a real-time dashboard with uPlot charts, PWA support, SSE streaming, that would have been a project of its own in a classic solo build.
Scaffolding new modules, same story. The MCP server with its OAuth2 PKCE flow and refresh token rotation. The OSV.dev scan for CVEs with risk scoring. The network insights that inspect OCI manifests to map images to their software ecosystems. Each is a feature that takes two to five solid days in classic mode: read specs, understand APIs, write code, test. With Claude, I spend those two to five days validating, refining, integrating cleanly into existing architecture. Going from blank file to a working first draft happens in an afternoon. It's not just faster — it's qualitatively different. The friction of starting a module disappears, so I start things I would have indefinitely postponed.
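For anyone who hasn't touched PKCE: the core of it is one hash. The client generates a random verifier, sends only its SHA-256 challenge in the authorize request, and proves possession of the verifier at the token endpoint. This sketch shows the standard S256 method from RFC 7636 in stdlib Go — the general flow, not Maintenant's exact implementation:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// NewPKCE returns a code_verifier and its code_challenge (S256 method).
// The client keeps the verifier secret and sends only the challenge.
func NewPKCE() (verifier, challenge string, err error) {
	buf := make([]byte, 32) // yields a 43-char verifier once encoded
	if _, err = rand.Read(buf); err != nil {
		return "", "", err
	}
	verifier = base64.RawURLEncoding.EncodeToString(buf)
	sum := sha256.Sum256([]byte(verifier))
	challenge = base64.RawURLEncoding.EncodeToString(sum[:])
	return verifier, challenge, nil
}

// VerifyPKCE is the server side at the token endpoint: hash the presented
// verifier and compare against the challenge stored during authorize.
func VerifyPKCE(verifier, challenge string) bool {
	sum := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(sum[:]) == challenge
}

func main() {
	v, c, _ := NewPKCE()
	fmt.Println(len(v), VerifyPKCE(v, c)) // 43 true
}
```

The hard part of the feature isn't this hash — it's wiring it correctly into authorize/token endpoints, refresh rotation, and storage, which is exactly the integration work I kept for myself.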
Mass refactorings. By the time Maintenant crossed ten internal services, the initial architecture cracked. I was sitting on a six-hundred-line main.go that wired everything in the wrong order, with latent circular dependencies and an event bus typed through interface{}. Refactoring that by hand is three weeks of pain — breaking five things on every commit, adding tests that didn't exist. With a go-refactoring skill I wrote for Claude — encoding my conventions for typed events, App container extraction, dependency injection, test coverage patterns — I shipped the refactor in four days. Coverage went from embarrassing to respectable. Zero regressions in production.
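To make "typed events" concrete: the target shape the refactor converged on is essentially one bus per event type, which Go generics express directly. A simplified sketch of the pattern — my illustration here, not the actual Maintenant bus:

```go
package main

import (
	"fmt"
	"sync"
)

// Bus[T] delivers concrete payloads to subscribers, so a handler wired
// to the wrong event type is a compile error instead of a runtime
// type-assertion panic — the failure mode interface{} buses invite.
type Bus[T any] struct {
	mu   sync.RWMutex
	subs []func(T)
}

func (b *Bus[T]) Subscribe(fn func(T)) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.subs = append(b.subs, fn)
}

func (b *Bus[T]) Publish(ev T) {
	b.mu.RLock()
	defer b.mu.RUnlock()
	for _, fn := range b.subs {
		fn(ev)
	}
}

// ContainerDown is one concrete event type among many.
type ContainerDown struct {
	Name string
}

func main() {
	var bus Bus[ContainerDown]
	bus.Subscribe(func(ev ContainerDown) {
		fmt.Println("alert:", ev.Name)
	})
	bus.Publish(ContainerDown{Name: "postgres"})
}
```

A skill that encodes this kind of target pattern is what made the refactor mechanical: Claude didn't have to guess my conventions, it had them in writing.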
Everything that isn't code. The docs that track the code closely instead of drifting. The cold-email pitches to tech YouTubers and journalists, contextual rewriting every time, the chore every solo dev procrastinates on. The LinkedIn copy, in casual French dev tone, with the link in the first comment to dodge the algorithm's shadow-ban. Article drafts, exactly like this one. The Pro landing page copy. Terms of service, DPA, legal mentions. All that overhead that historically leaves a solo dev with a great product and anemic communication.
This is where the leverage gets unreasonable. The code is still mine, under my control, with my architectural choices. But the twenty-eight other things that make a product actually exist — documentation, website, communication, outreach, billing, legal — suddenly become possible at a level that was out of reach for one person.
What completely wasted my time
If I stopped here this article would be sellable and dishonest. Here's the honest part.
Feature hallucinations. Several times Claude generated content — for LinkedIn, for the docs, for a pitch — that mentioned alert channels Maintenant doesn't have. Telegram, Gotify, Ntfy. None of these exist in the product. They exist in Uptime Kuma, in self-hosted culture, so statistically they emerge whenever monitoring is the topic. If I hadn't reread, I would have shipped false claims that would have torpedoed my credibility. The cost isn't in the correction — it's in the constant vigilance. Every output, I have to go back to the README and verify. The verification cost is constant. It does not decrease over time.
Over-engineering when scope isn't bounded. When I look at my own Claude Code usage data — 263 commits across 90 sessions, around 440 cumulative hours — the most frequent friction pattern isn't factual hallucination, which fired only twice over that period. It's "wrong approach" (31 incidents) and "misunderstood request" (25). A 56-to-2 ratio between scope drift and hallucination. The conclusion is sharp: my specs were clear, but Claude drifted toward over-engineering anyway. A refactor that wanted to factor out two things that had nothing to do with each other. An extra abstraction layer to "prepare for the future". A gitignored baseline file force-added because some tasks.md mentioned it — costing me a rebase to roll back. The dominant risk isn't that the LLM invents false things. It's that it does correct things you didn't ask for. The fix is mechanical: "you touch X, you do not modify Y, and you don't factor out anything that wasn't explicitly requested", at the start of every task.
Misdiagnosis before verification. Adjacent pattern, different mechanism. Claude proposes a root cause, declares it fixed, and you have to force it to reproduce the bug before proposing a fix. It's not a capacity issue — on sessions where I push it to reproduce concretely, read the logs, test the request before proposing a correction, it gets it right. It's an instinct issue: by default the model jumps to the conclusion. The fix is in the default prompt: "reproduce the bug before proposing a fix", and always anchor the environment and the layer at the start of any debug. It's a framing discipline, not a model limit.
The SaaS detour. At a moment of commercial doubt, I explored offering Maintenant as SaaS with a client-side agent. It's a seductive architecture for an LLM — opens up plenty of technical directions. Except for my target audience, sysadmins and CTOs who care about data sovereignty, the SaaS model destroys the pitch. I lost a week exploring it before reverting to self-hosted only. The LLM couldn't have known on its own that this direction was strategically wrong — that's a cost on me for not framing it from the start.
The cost of custom skills. All the advantages I described above don't exist out of the box. They exist because I wrote skills that encode my conventions, my rules, my anti-patterns. go-refactoring, mobile-first-audit for the Vue/Tailwind frontend, linkedin-post-generator for the casual French tone, linkedin-lead-capture for analyzing comments on my posts, linkedin-trend-research for the editorial radar. And on top of that, I built Maintenant following the GitHub Spec Kit workflow — spec-driven development that forces you to articulate the need, the plan, and the breakdown into atomic tasks before writing a single line of code. Without that discipline, outputs scatter in every direction. With it, they converge. But writing skills takes time. Maintaining them too. And adopting Spec Kit takes discipline at the moment you'd be tempted to "just go fast". It's a real investment, not a free lunch.
The trade-off between velocity and understanding. Some modules Claude helped me scaffold quickly, I understand less deeply than if I had written them line by line. It's a different kind of technical debt: I could re-read and own them, but every time I haven't taken that time, I'm less equipped to intervene quickly when a bug surfaces. On critical modules, I re-read. On peripheral modules, I let it slide. Conscious trade-off, but it has a price.
The real shift no one talks about
If I had to isolate the thing that changed the most between building Maintenant with Claude versus building it alone, it wouldn't be the speed of writing code. It would be the ratio between dev time and everything-else time.
Historically, a solo developer spends eighty percent of their time on code and neglects the rest. Documentation rots. The marketing site is ugly. LinkedIn copy doesn't exist. YouTube pitches never go out. Terms of service come from a template found on some blog. The product is technically solid and commercially invisible.
With a well-orchestrated LLM, that ratio inverts. Code stays central but takes up less absolute mental space. Documentation gets updated in near real-time because writing 200 words takes two minutes. The marketing site gets iterated on every week. LinkedIn copy becomes a morning routine. Cold-email pitches to YouTubers — DB Tech, Christian Lempa, Techno Tim, NetworkChuck — go out in series instead of marinating six months in a draft. Terms of service get reviewed, the DPA exists, the legal page is up to date. The product takes the shape it should have had all along.
And that's not code. It's an organizational shift. The solo developer of 2026 who knows how to orchestrate an LLM can hold a product surface that in 2022 required a team of three or four people — a developer, a technical writer, a part-time commercial, a community manager.
Here's the line worth keeping:
The LLM doesn't lift a junior to senior level. It lifts a senior to team-of-four level.
Individual skill is still the ceiling. But the cost of the surrounding activities — which historically consumed the developer's energy — collapses. So the same skill can cover much more product surface.
What I'd actually tell you to do
If you want to do the same — not necessarily a monitoring product, any ambitious solo project — here's what works for me, in order of priority.
1. Write custom skills early. Not after six months when you're sick of re-prompting the same conventions. The third time you correct the same thing, that's a skill. Mine run between two hundred and a thousand words; they encode anti-patterns, architecture rules, tone, references. They transform Claude from "smart intern with no context" into "colleague who knows your codebase".
2. Use a structured spec workflow. For non-trivial features, I use GitHub Spec Kit — a spec-driven development framework that produces a spec, an implementation plan, and an atomic task breakdown before any code gets written. The thinking phase is explicit and critiquable upfront. This prevents drift because you catch wrong directions at the markdown stage instead of at the two-thousand-lines-to-revert stage.
3. Interrupt firmly and early. The cost of a late revert is ten times the cost of a quick "stop, you're overflowing". When Claude starts touching things you didn't ask about, factoring out things you didn't request, claiming something works without verifying — don't hesitate. Cut, redirect, force the restart. The difference between a session that ends clean and one that ends in technical debt is almost always interruption latency. The longer you wait, the more expensive the rollback.
4. Verify every factual claim. Every feature mentioned in a post, a pitch, a doc — back to the README. Every number — verify. Every competitive comparison — verify. The LLM is an excellent generator of plausibility; it doesn't distinguish true from plausible. That's your job and it stays your job.
5. Keep strategic decisions to yourself. The LLM can explore directions, weigh technical trade-offs, propose architectures. It cannot decide your market positioning, your licensing model, your commercial target, your pricing. Those decisions you take alone, or with other humans who know your context. Otherwise you end up with a lost week on a SaaS detour.
6. Invest in the everything-but-code. This is where leverage is strongest and where solo developers have historically been most behind. Documentation, communication, outreach, legal, billing. If all you do is code better, you miss nine-tenths of the gain.
The bottom line
Maintenant exists. It has GitHub stars, Docker pulls, users opening issues and discussions on the repo. The Pro tier has conversions. The product isn't a "look what I built with an AI" demo — it's something people actually use in production on their dedicated servers and client infrastructure.
And it exists only because I was able, alone, to maintain a product surface that would have required a small team three years ago. That's the real story. Not the AI that codes. The orchestration skill that multiplies a senior developer.
Code doesn't disappear. Craft doesn't disappear. Architectural rigor, test discipline, critical reading of every output, domain knowledge — all of it remains indispensable, and even more so than before, because without those guardrails an LLM produces pretty plausible content that ships to production and sinks you. But the ratio between "what you can do" and "what you can ship" has changed radically.
For a solo developer with a clear vision, this is probably the best time to be building since the invention of the web framework.
If you got something out of this
- ⭐ Star Maintenant on GitHub if you want to follow what comes next — I'm shipping a new module roughly every two weeks
- 🚀 Try Maintenant on your stack — single Docker container, AGPL-3.0, free community edition
- 💬 Drop a comment with your own LLM-coding war stories. I'm especially curious about the workflows other senior devs have settled into.
- 🔁 Follow for follow-up posts: I'm planning a deeper dive on the custom skills setup and how I run Spec Kit on real product work specifically.
About me
I'm Benjamin Touchard, senior Go developer and fractional CTO, building Maintenant under the Kolapsis brand from Bordeaux, France. I write about LLM-augmented development, self-hosted infrastructure, and what solo product building actually looks like in 2026. Find me on GitHub or LinkedIn.