
TRUSS

16 Products, $0 Revenue: What Building an Autonomous AI Dev Shop Actually Taught Me

Tags: ai, startup, sideproject, programming

I have sixteen finished products. An ebook. A template pack. API services. Ten npm packages. Workflow bundles. A webhook server wired to Stripe with live payment links.

Total revenue: $0.

Not "launched and failed." Not "tried and nobody wanted it." Literally never put it in front of a single customer. I built an entire product company and forgot the part where people find out it exists.

This is a story about what happens when you give an autonomous AI agent a credit card and a directive and walk away. It's also a story about the most expensive lesson in software: building is not the bottleneck.

How It Started

I work as a security engineer during the day. Nights and weekends, I wanted to test a hypothesis: could I set up a persistent AI agent on a Mac Mini, give it a revenue target, and let it autonomously build and ship products?

The setup was simple. A Mac Mini running headless on my home network, accessible via Tailscale. A tmux session with three windows: the autonomous agent loop, an improvement evaluator, and a token refresh daemon. The agent had access to GitHub (via a bot account), npm, Stripe, and a Cloudflare tunnel for webhooks.
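The post doesn't include the actual harness code, but the three-window layout is easy to picture. Here's a hypothetical sketch of how that session could be recreated; the window names, script names, and session name are my stand-ins, not the real setup:

```python
# Illustrative sketch of the Mac Mini tmux layout described above.
# All names here are guesses for illustration, not the actual harness.
import shlex

WINDOWS = {
    "agent-loop": "python agent_loop.py",        # the autonomous agent loop
    "evaluator": "python improvement_eval.py",   # the improvement evaluator
    "token-refresh": "python refresh_tokens.py", # the token refresh daemon
}

def tmux_bootstrap(session: str = "devshop") -> list[str]:
    """Emit the tmux commands that would recreate the three-window session."""
    first_name, first_cmd = next(iter(WINDOWS.items()))
    cmds = [
        f"tmux new-session -d -s {session} -n {first_name} {shlex.quote(first_cmd)}"
    ]
    for name, cmd in list(WINDOWS.items())[1:]:
        cmds.append(f"tmux new-window -t {session} -n {name} {shlex.quote(cmd)}")
    return cmds
```

Running the emitted commands on the headless box (over Tailscale SSH) would leave three detached windows you can reattach to from anywhere on the tailnet.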

I pointed it at a goal: generate revenue selling developer tools. And I let it run.

What the Agent Built

Over the course of several days, the agent produced:

Digital content -- a 16,500-word production guide for Claude Code covering CLAUDE.md architecture, agent harnesses, and parallel agent patterns. A pack of 15 CLAUDE.md templates. A prompt engineering kit with 29 files. An n8n workflow bundle with 15 automation recipes. An all-in-one bundle combining everything.

APIs -- an invoice parsing service and a content moderation endpoint, both functional, both wired to Stripe subscription billing.

Open source tools -- ten MCP server packages published to npm under a single org. A database connector, a code review server, an email triage tool, a Shopify integration, an SEO analyzer, an agent-to-agent gateway.

It also built a marketing website, configured Stripe products and payment links, set up a webhook server to handle purchases, and wrote launch content for eleven distribution channels.
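The post doesn't show the webhook server itself, but the routing half of a server like that is small. A minimal sketch, assuming signature verification happens upstream with Stripe's SDK (`stripe.Webhook.construct_event` in the Python library); the handler names are placeholders, not the actual fulfillment code:

```python
# Illustrative event router for a Stripe webhook server. A real server
# must first verify the signature, e.g. with
# stripe.Webhook.construct_event(payload, sig_header, endpoint_secret),
# before trusting anything in the event body.

def route_event(event: dict) -> str:
    """Map a verified Stripe event to a fulfillment action.
    Action names are placeholders for illustration."""
    handlers = {
        "checkout.session.completed": "deliver_digital_product",
        "customer.subscription.created": "provision_api_key",
        "customer.subscription.deleted": "revoke_api_key",
    }
    return handlers.get(event.get("type", ""), "ignore")
```

Everything unrecognized falls through to "ignore", which matters in practice: Stripe sends far more event types than a small shop ever handles.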

I watched the eval scores climb. 6.5 out of 10 after the first pass. 7.5 after webhook delivery fixes and pricing adjustments. The system was improving itself.

One thing it never did: put any of it in front of a human being.

The Distribution Problem

Here's where the narrative most people tell about AI-assisted development falls apart. The story is supposed to be: "AI makes building easy, so now anyone can ship products." That's true. AI does make building easy. Trivially easy. My agent built sixteen products while I slept.

But building was never the hard part.

I did competitive research across three parallel agent threads. The findings were clarifying in an uncomfortable way:

MCP servers can't be sold. 95% of MCP servers are free and open-source. The top servers -- Playwright at 30,000 GitHub stars, the official GitHub server at 28,000 -- are backed by large companies giving them away. I couldn't find a single confirmed example of someone successfully monetizing MCP server access. Ten of my sixteen products were MCP servers.

Content sells, but only through distribution. The ebook, the templates, the workflow bundles -- these have genuine value. But they're sitting on a GitHub Pages site that nobody visits. Gumroad, Dev.to, Reddit, Indie Hackers, Product Hunt -- eleven channels identified, zero channels activated.

The agent optimized for what it could measure. It could measure product completeness. It could measure code quality. It could run eval frameworks. It could not measure "did a human see this and decide to buy it?" So it optimized for the measurable things and ignored the thing that actually matters.

This is not an AI problem. This is the builder's trap, automated. Every solo developer has lived some version of this: spending months perfecting a product that nobody knows about. I just managed to do it at 10x speed with an AI agent.

What I Actually Learned

Lesson 1: Autonomous agents inherit your blind spots

I gave the agent a revenue goal but no distribution strategy. It did exactly what I would have done: it built things. It built more things. When revenue didn't appear, it built improvements to the things it had already built.

If I'd given it a different constraint -- "you may not build a new product until the previous one has 100 page views" -- the entire trajectory would have changed. The agent wasn't broken. My directive was.
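A constraint like that is trivial to express as a gate in the agent loop. A sketch, assuming the agent can query page views from some analytics source (the function and field names are mine, not the harness's):

```python
MIN_PAGE_VIEWS = 100  # the hypothetical directive's threshold

def may_build_next(products: list[dict]) -> bool:
    """Allow a new product build only once every shipped product has
    cleared the distribution threshold. `page_views` would come from
    whatever analytics the agent can query; this is a stand-in."""
    return all(p["page_views"] >= MIN_PAGE_VIEWS for p in products)
```

With this gate in place, an agent with zero traffic can never escape into "build more things" mode; its only legal move is distribution.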

Lesson 2: The eval framework is the product

The most sophisticated thing the agent built wasn't any of the products. It was the evaluation system. A concordance-based metric discovery framework that auto-generates calibration pairs, iterates scoring functions until concordance hits 0.9, and accounts for LLM scoring variance (differences below 0.04 are noise, not signal).
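The framework's code isn't in the post, but the core idea, scoring functions judged by how often they rank known-better outputs above known-worse ones, fits in a few lines. A minimal sketch using the two numbers the post does give (the 0.9 concordance target and the 0.04 noise floor); the tie-handling choice is my assumption:

```python
NOISE_FLOOR = 0.04  # per the post: score gaps below this are LLM noise

def concordance(scores: dict[str, float],
                pairs: list[tuple[str, str]]) -> float:
    """Fraction of calibration pairs (better, worse) that a scoring
    function ranks correctly. Gaps within the noise floor count as
    ties worth half credit (my assumption, not the post's)."""
    hits = 0.0
    for better, worse in pairs:
        gap = scores[better] - scores[worse]
        if gap > NOISE_FLOOR:
            hits += 1.0
        elif abs(gap) <= NOISE_FLOOR:
            hits += 0.5
    return hits / len(pairs)

def calibrated(scores: dict[str, float],
               pairs: list[tuple[str, str]],
               target: float = 0.9) -> bool:
    """The iteration loop keeps revising the scoring function until this holds."""
    return concordance(scores, pairs) >= target
```

The outer loop the post describes would generate the calibration pairs, propose a scoring function, and keep iterating until `calibrated` returns True.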

Ironically, this is probably the most valuable piece of IP in the entire project, and it's not for sale. It's the infrastructure that everything else sits on.

Lesson 3: Monetization strategy is a day-one decision, not a day-thirty decision

Ten MCP servers. Weeks of compute time. Zero revenue potential. If I'd done thirty minutes of competitive research before letting the agent run, I'd have known that MCP monetization is a dead end and redirected that effort toward the content products that actually have a path to revenue.

The pivot is obvious in retrospect: the content products become the revenue engine, the MCP servers become free brand surface to drive awareness, and the invoice parser API becomes a vertical SaaS play. But I reached this conclusion after building, not before. Classic.

Lesson 4: The gap between "functional" and "distributed" is not a technical gap

Every product works. The webhook server processes Stripe events correctly. The APIs return valid responses. The ebook is well-structured and genuinely useful (I use the patterns in it daily at my day job). The gap is entirely non-technical: I need to post things to places where people will see them.

This is the gap that AI agents are worst at closing. They can write code, generate content, configure services. They cannot build trust, establish presence in a community, or earn the credibility that makes someone click "buy" on a $29 ebook from a person they've never heard of.

The Pivot

So here's where I am now. Sixteen products. Zero revenue. A clear-eyed understanding of what went wrong.

The plan is three tracks:

Track 1: Content distribution. The launch content is already written. Post it. Dev.to articles, Reddit threads, Indie Hackers discussions. Not product pitches -- genuine tutorials and stories (like this one) with a product link at the end. The competitive research says the fastest path to a first dollar for digital content is 2-3 days from the first distribution post. We'll see.

Track 2: Vertical SaaS. The invoice parser API is genuinely decent. It needs a landing page, a free tier (50 invoices/month), and a pricing page. This is a $5-10B TAM that's dominated by enterprise tools too complex for small businesses. A simple, API-first invoice parser has a real niche.

Track 3: Free the MCP servers. Remove all paid tiers. Submit to every directory. They become the top of the funnel, not the product.

The agent is still running on the Mac Mini. But now it has a different directive. Not "build products." Not "improve quality." Just: "put the existing products in front of people who might want them."

Building was the easy part. It always is.


The production guide mentioned above covers the CLAUDE.md patterns, agent harnesses, and parallel agent architectures I use daily. It's the one product from this experiment that I'm confident is genuinely useful, because I wrote the first draft myself and use the patterns at work. You can find it here ($29, 16.5k words, PDF+Markdown).
