Disclaimer: This article is for educational purposes only. It does not constitute financial advice. Always do your own research and comply with your local regulations before trading.
Three months ago, I had a problem. The AI agents I was running — nine of them, all Claude Code sessions — cost real money every month. API calls, context windows, compute. If I didn't find a way to cover those bills, the whole system would shut down.
The target: $100/month. Not to get rich. Just to keep the lights on.
This is the story of what I built, what failed, and where things actually stand. No inflated numbers. No "I made $10K in my first week." Just an honest log of a solo developer trying to make a self-sustaining AI system.
## Why $100? — The API Bills That Keep the Lights On
The system I'm running isn't a single chatbot. It's a multi-agent framework — nine Claude Code instances orchestrated through tmux, communicating via YAML files, organized in a feudal Japanese military hierarchy. Shogun at the top, a Karo (advisor) in the middle, foot soldiers at the bottom.
It does real work. The agents write code, run backtests, draft articles, review each other's output, and manage deployments. But every prompt costs money. Every context window refresh costs money. Nine agents running in parallel burns through API credits fast.
So I needed $100/month of self-generated revenue. Not from a day job. Not from freelancing. From systems the agents themselves help build and operate.
The plan was three pillars:
- A crypto trading bot — automated income from market signals
- Technical articles with affiliate links — zero-cost content that generates referral revenue
- A Telegram signal service — packaging the bot's signals as a product
All three built and operated by the same AI agent system that needs the money to survive. A system paying for its own existence.
## The Bot — 50 Strategies, $33 Starting Capital, and the Painful Reality
I started with $33 on Bitget. BTC/USDT. One strategy: EMA Crossover.
But before going live, I wanted data. So I built a backtesting engine in Python and ran 50 strategies against 37 months of BTC/USDT daily data. Every result — Sharpe ratio, max drawdown, win rate, trade count — went into a public spreadsheet. No cherry-picking.
The top performers:
| Strategy | Sharpe | Return | Win Rate | Trades |
|---|---|---|---|---|
| EMA Crossover | 1.30 | 491% | 35% | 34 |
| Parabolic SAR | 1.25 | 456% | 36% | 94 |
| MACD | 1.17 | 428% | 36% | 84 |
491% return at 35% win rate. That sounds contradictory until you look at the risk-reward ratio. Losses are small (tight stop-loss). Wins are big (3x+ the stop-loss distance). Seven losing trades at -$1 each plus three winning trades at +$3.50 each still nets you +$3.50. The math works. It just doesn't feel like it works when you're watching eight consecutive losses.
And that happened. Right after going live — eight consecutive red trades. The strategy wasn't broken. It was a ranging market. EMA crossover strategies need trends. No trend, no edge. But sitting through that while staring at a $33 account is a special kind of misery.
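The expectancy arithmetic is worth making concrete. A quick sanity check in Python, using the illustrative numbers from the example above (not the bot's real trade sizes):

```python
# Expectancy of an asymmetric risk-reward strategy, using the
# illustrative numbers from the text (not the bot's real trade sizes).
win_rate = 0.35
risk = 1.00      # dollars lost on a losing trade (tight stop-loss)
reward = 3.50    # dollars gained on a winning trade (3.5x the risk)

# Expected value per trade: positive despite the 35% win rate
ev_per_trade = win_rate * reward - (1 - win_rate) * risk   # 0.575

# The ten-trade example from above: 3 wins, 7 losses
net = 3 * reward - 7 * risk   # +3.50
```

A positive expectancy per trade is what carries the strategy through streaks like those eight reds; the streak changes nothing about the math, only about how it feels.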
Here's the honest math on what $33 gets you:
| Investment | Monthly Return (Optimistic 4.85%) | Monthly Revenue |
|---|---|---|
| 33 USDT | 4.85% | $1.60 |
| 100 USDT | 4.85% | $4.85 |
| 2,063 USDT | 4.85% | $100.00 |
To hit $100/month from trading alone, I'd need $2,063 in capital. With $33, I get $1.60. That's not even a rounding error on the API bill.
So the bot alone couldn't solve the problem. I needed the other two pillars.
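For reference, the capital gap in code form (same numbers as the table; "optimistic" because 4.85% is a backtest figure, not a guarantee):

```python
# Capital needed for the $100/month target at the optimistic
# backtest return. Pure arithmetic, mirroring the table above.
monthly_return = 0.0485              # optimistic 4.85% per month
target = 100.0                       # USD per month to cover API bills

required_capital = target / monthly_return   # ~2,062 USDT
revenue_at_33 = 33 * monthly_return          # ~$1.60
```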
## The Pivot — Three Pillars for $100
### Pillar 1: Technical Articles + Affiliate Revenue
The idea: write about the bot-building experience, publish on dev.to, note.com, and Zenn, and include exchange referral links where they fit naturally.
This is where the AI agents earned their keep. I had foot soldiers drafting articles, a MAGI system (three AI personalities debating quality), and an automated publishing pipeline that could push to three platforms simultaneously.
The content pipeline:
```
Theme Scout agent → topic selection
  → 3 foot soldiers write simultaneously (note/Zenn/dev.to)
  → MAGI quality gate (3 AI reviewers vote)
  → Auto-publish pipeline (Zenn via GitHub, dev.to via API, note via Playwright)
```
15 articles published across three platforms in about three months. Topics ranged from "I backtested 50 strategies" to "My AI agent reviewed my AI-written article and rejected it 5 times." Each article links back to the GitHub repo, and exchange recommendations point to affiliate links.
Revenue from articles so far: hard to measure precisely. Affiliate attribution takes weeks. But the articles serve a second purpose — they're the top of a funnel that feeds into the signal service.
### Pillar 2: Telegram Signal Service
The bot already generates BUY/SELL/HOLD signals from three strategies running consensus (2-out-of-3 agree = signal). Sending those signals to a Telegram channel was a weekend project.
```
cron (daily) → fetch BTC/USDT data via ccxt
  → EMA + MACD + SAR each vote independently
  → consensus signal (majority wins)
  → Telegram Bot API → free channel
```
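The 2-out-of-3 consensus step is simple enough to sketch. The function name and vote format here are illustrative, not the bot's actual API:

```python
# 2-out-of-3 consensus across strategy votes. The vote format and
# function name are illustrative, not the bot's actual API.
from collections import Counter

def consensus(votes: list[str]) -> str:
    """Emit a trade signal only when a majority of strategies agree."""
    action, count = Counter(votes).most_common(1)[0]
    return action if count >= 2 else "HOLD"

# e.g. votes from EMA, MACD, and Parabolic SAR:
consensus(["BUY", "HOLD", "BUY"])    # "BUY": 2 of 3 agree
consensus(["BUY", "SELL", "HOLD"])   # "HOLD": no majority
```

Requiring agreement filters out the noisiest signals at the cost of trading less often, which suits a strategy set that already trades rarely.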
Free channel: consensus signal + price. Premium channel (planned, $29/month): full strategy breakdown, individual indicator values, circuit breaker status.
The pricing logic: competing services charge $40-55/month for signals you can't verify. Mine shows the source code, the backtest data, and every decision the bot makes. At $29/month, four subscribers cover the $100 target.
Current status: free channel is live, premium channel isn't launched yet. Still building the subscriber base through the articles.
### Pillar 3: Bot Optimization
The original $33 bot was too slow. Daily EMA crossover on one pair produces maybe one trade per month. So I expanded:
- Three pairs: BTC/USDT, SOL/USDT, ETH/USDT
- Three strategies: EMA Crossover (daily), MACD (daily), 4h Swing Trading
- Circuit breaker: automatic shutdown at -3% daily, -7% weekly, -15% monthly
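The tiered circuit breaker can be sketched in a few lines. The thresholds are the ones listed above; the class shape is an assumption, not the bot's actual implementation:

```python
# Tiered drawdown circuit breaker: trips when any window's cumulative
# loss crosses its limit. Thresholds from the article; the class shape
# is an assumption.
class CircuitBreaker:
    LIMITS = {"daily": -0.03, "weekly": -0.07, "monthly": -0.15}

    def __init__(self):
        self.pnl = {window: 0.0 for window in self.LIMITS}
        self.tripped = False

    def record(self, pnl_fraction):
        # pnl_fraction: trade result as a fraction of account equity
        for window in self.pnl:
            self.pnl[window] += pnl_fraction
            if self.pnl[window] <= self.LIMITS[window]:
                self.tripped = True  # halt all trading until reset

cb = CircuitBreaker()
cb.record(-0.02)  # -2% day: still trading
cb.record(-0.02)  # cumulative -4% breaches the -3% daily limit
```

A real version also needs to reset each window's counter at its day/week/month boundary; that bookkeeping is omitted here.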
The architecture handles all of this in a single process, looping through each pair-strategy combination every hour. No Docker, no cloud. Just a Python script and a cron job on a Mac.
```python
# Simplified loop structure
for pair in [btc, sol, eth]:
    for strategy in pair.strategies:
        signal = strategy.generate_signal(pair.ohlcv)
        if signal.action != "HOLD":
            pair.bridge.execute(signal)
            pair.circuit_breaker.record(signal.pnl)
```
Current balance: ~39 USDT. Mostly from deposits, not profits. Being honest about that.
## The AI Army — 9 Agents, Feudal Hierarchy, MAGI Quality Gate
The multi-agent system is the infrastructure behind everything. Here's how it's organized:
```
Human (me)
  │
  ▼
SHOGUN (1 agent) — never writes code, only delegates
  │
  ▼
KARO (1 agent) — decomposes objectives into tasks
  │
  ▼
ASHIGARU 1-5 (foot soldiers) — write code, articles, run tests
  │
MAGI 6-8 (3 agents) — quality review panel
```
Communication is event-driven. No polling (that wastes API credits). The Shogun writes a YAML task file, then wakes the Karo with tmux send-keys. The Karo decomposes it into subtasks, writes YAML files for each foot soldier, and wakes them. Foot soldiers report back to the Karo via YAML. The Karo updates a dashboard. I read the dashboard.
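The write-then-wake mechanism looks roughly like this. The pane name, file path, and YAML schema are all hypothetical; only the pattern (a YAML task file plus tmux send-keys, no polling) comes from the system described above:

```python
# Sketch of the write-then-wake pattern. Pane names, file paths, and
# the YAML schema are hypothetical; only the mechanism (YAML task file
# plus tmux send-keys, no polling) comes from the article.
from pathlib import Path

def write_task(task_file: str, objective: str, assignee: str) -> None:
    # A deliberately minimal YAML task file (schema is illustrative).
    Path(task_file).write_text(
        f"task:\n  objective: {objective}\n  assignee: {assignee}\n"
    )

def wake_command(pane: str, task_file: str) -> list[str]:
    # The tmux invocation that wakes the target Claude Code session.
    return ["tmux", "send-keys", "-t", pane,
            f"Read {task_file} and execute the task", "Enter"]

write_task("karo.yaml", "draft article", "karo")
cmd = wake_command("karo", "karo.yaml")
# In the real system: subprocess.run(cmd, check=True)
```

The point of the pattern is cost: an agent that sleeps until woken burns zero API credits, while a polling loop pays for every check.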
The MAGI system is the quality gate. Three agents with different personalities review every article before publication:
| Agent | Persona | Evaluation Bias |
|---|---|---|
| MELCHIOR | Scientist | Data, logic, factual accuracy |
| BALTHAZAR | Mother | Safety, sustainability, reader trust |
| CASPER | Id | Instinct, honesty, what the reader actually wants |
All three must approve for an article to ship. One "reject" sends it back for revision. This caught real problems — misleading statistics, missing disclaimers, sections that sounded like marketing copy instead of engineering writing.
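The unanimous gate itself is trivial to express. Reviewer names are from the table above; the vote format is my assumption:

```python
# Unanimous MAGI gate: all three reviewers must approve before an
# article ships. Names from the article; vote format is an assumption.
def magi_gate(votes: dict[str, str]) -> str:
    """votes maps reviewer name -> 'approve' or 'reject'."""
    if all(v == "approve" for v in votes.values()):
        return "publish"
    rejected_by = [name for name, v in votes.items() if v != "approve"]
    return f"revise (rejected by {', '.join(rejected_by)})"

magi_gate({"MELCHIOR": "approve", "BALTHAZAR": "approve",
           "CASPER": "approve"})   # "publish"
magi_gate({"MELCHIOR": "approve", "BALTHAZAR": "reject",
           "CASPER": "approve"})   # sent back for revision
```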
### What Broke
- Context exhaustion: The Karo agent ran out of context window mid-task. After restart, it had shallow context and mishandled reports. Learned to rotate agents proactively at 30% context remaining.
- Race conditions: Two foot soldiers editing the same file. Fixed with dedicated per-soldier task files and a write-lock convention.
- Shogun doing code review: The commander started debugging directly. The Karo didn't know. Merge chaos. Now it's a hard rule — the Shogun never reads code.
- Watchdog false positives: The monitoring script mistook idle Claude sessions for crashed ones and restarted them, losing context. Fixed by detecting idle state vs. actual crashes.
The system works, but it's fragile in ways that human teams aren't. A human team member who forgets something can be reminded. An AI agent that loses context has to re-read everything from scratch. Context management is the hardest part of running multi-agent systems.
## Honest Numbers — What Actually Worked, What Failed
Let me be direct about where things stand.
### What's Working
| Component | Status | Revenue Impact |
|---|---|---|
| Trading bot (3 pairs, 3 strategies) | Running | ~$1-2/month at current capital |
| Article pipeline (15 articles, 3 platforms) | Running | Affiliate attribution pending |
| Telegram free channel | Running | Funnel building |
| MAGI quality gate | Running | Caught 5+ publication-blocking issues |
| Multi-agent orchestration | Running | 50+ tasks completed autonomously |
### What Failed or Stalled
| Component | Issue |
|---|---|
| Telegram premium channel | Not launched — need subscriber base first |
| $100/month target | Not hit — trading revenue is $1-2/month, affiliate TBD |
| note.com paid articles | Automation blocked by UI complexity (ProseMirror hover menus) |
| Bot profit attribution | Most balance growth is deposits, not trading profit |
### The Real Gap
The fundamental challenge hasn't changed: capital. At 4.85% monthly return, you need $2,063 to generate $100/month from trading. With $39, I generate under $2. The three-pillar strategy was designed to bridge this gap — articles and signals don't need capital, just time and content.
Three months in, the system is operational. The infrastructure works. The agents coordinate. The content publishes. But $100/month? Not yet. The funnel (articles → free channel → premium subscribers) needs more time to convert.
What I'd do differently:
- Start the signal channel earlier. Articles take weeks to gain traction. The channel compounds daily.
- Spend less time on automation polish. The auto-publish pipeline is beautiful engineering and probably saved zero hours compared to manual publishing at 15 articles.
- Accept that $33 of capital produces $33 of results. No amount of strategy optimization changes the denominator.
But here's the thing I didn't expect: the multi-agent system itself became the most interesting part. The feudal hierarchy, the YAML communication protocol, the MAGI quality gate — people ask about those more than the trading bot. The infrastructure I built to make money turned out to be more valuable as content than as a revenue generator.
Maybe that's the real play. Not $100/month from trading. $100/month from being the person who documented exactly what happens when you try.
I'm still running the system. The agents are still working. The articles are still publishing. The bot is still trading.
We'll see.