I'm running 6 online businesses simultaneously.
Not "thinking about starting" them. Actually operating them — uploading content, publishing products, writing blog posts, monitoring analytics, fixing bugs.
The secret? I'm not doing most of it myself.
I have a Mac Mini running 24/7 with an orchestration system that schedules AI sub-agents to handle specific tasks across all 6 businesses. Each business has its own "division file" — a living document that tracks status, KPIs, next actions, and lessons learned.
Here's exactly how it works.
## The Problem: Context Switching Kills Solo Founders
When you're running one business, focus is easy. When you're running six, your day looks like this:
- 9:00 — Check YouTube analytics
- 9:15 — Fix a broken link on Gumroad
- 9:30 — Write a blog post for Hashnode
- 10:00 — Debug a CSS issue on your SaaS
- 10:30 — Realize you forgot to upload yesterday's short
- 11:00 — Context switch back to Gumroad, forget where you were
Sound familiar?
The cognitive overhead isn't the work itself. It's the switching cost. Every time you change contexts, you lose 15-20 minutes ramping back up.
My solution: delegate context switching to machines.
## The Architecture: Registry + Divisions + Cron
The system has three layers:
### Layer 1: Business Registry (Single Source of Truth)
Every business lives in a JSON registry. If it's not in the registry, it doesn't exist.
```json
{
  "youtube-shorts": {
    "name": "YouTube Shorts",
    "status": "active",
    "priority": "P1",
    "type": "revenue",
    "kpi": {
      "subscribers": 7,
      "total_views": 6200,
      "videos": 23
    }
  },
  "gumroad": {
    "name": "Gumroad Digital Products",
    "status": "active",
    "priority": "P1",
    "kpi": {
      "products_live": 13,
      "sales": 3,
      "revenue_usd": 0
    }
  }
}
```
A CLI tool lets any agent query, update, or add businesses:
```bash
# Dashboard view
python3 biz-registry.py dashboard

# Update a KPI
python3 biz-registry.py kpi youtube-shorts subscribers 10

# Inter-agent messaging
python3 biz-registry.py msg send youtube-engine gumroad-engine "New short uploaded, add CTA link"
```
Why this matters: No hardcoding. When I add a 7th business, every agent automatically discovers it.
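For reference, here's a minimal sketch of what those registry helpers could look like. The real `biz-registry.py` isn't shown in this post, so the file name, function names, and layout below are assumptions, not the actual implementation:

```python
import json
from pathlib import Path

# Hypothetical registry location; the real script's path may differ.
REGISTRY_PATH = Path("business-registry.json")

def load_registry(path=REGISTRY_PATH):
    """Read the registry, or start empty if the file doesn't exist yet."""
    return json.loads(path.read_text()) if path.exists() else {}

def save_registry(reg, path=REGISTRY_PATH):
    """Persist the registry as pretty-printed JSON."""
    path.write_text(json.dumps(reg, indent=2, ensure_ascii=False))

def update_kpi(reg, business, kpi, value):
    """Set one KPI on one business; unknown businesses fail loudly."""
    reg[business]["kpi"][kpi] = value
    return reg

def dashboard(reg):
    """One line per business: priority, name, status."""
    return "\n".join(
        f"{b['priority']}  {b['name']}  [{b['status']}]" for b in reg.values()
    )
```

Because every agent goes through `load_registry` instead of a hardcoded list, a new business is visible everywhere the moment it's added to the file.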
### Layer 2: Division Files (Business Memory)
Each business has a markdown file that serves as its "brain":
```markdown
# YouTube Shorts — Division Memory

## Current Status
- 23 videos published
- Algorithm cliff since March 12 (0 views)
- 3 shorts ready in pipeline

## Next Actions
1. Upload kr-5am-club short (verified)
2. Test new thumbnail style
3. Analyze competitor posting times

## Lessons Learned
- Subtitle font size 78 (mobile readability)
- Korean TTS rate: +45% (sweet spot)
- Mid-video CTA > end CTA (4x effectiveness)
```
This is the key insight: AI agents don't have memory between sessions. Division files ARE their memory. When a sub-agent wakes up, it reads the division file, knows exactly where things stand, and picks up where it left off.
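Since division files are the agents' only memory, the wake-up step reduces to parsing markdown headings. A rough sketch, assuming the `## Section` layout from the template above:

```python
def parse_division(text):
    """Split a division file into {"section title": [item lines]}."""
    sections, current = {}, None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()   # start a new section
            sections[current] = []
        elif current is not None and line.strip():
            sections[current].append(line.strip())
    return sections

def next_action(sections):
    """The first item under 'Next Actions', or None if the list is empty."""
    items = sections.get("Next Actions", [])
    return items[0] if items else None
```

The production parser surely handles more cases; the point is that "agent memory" here is just a human-readable file plus twenty lines of parsing.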
### Layer 3: Cron Orchestration
The cron layer schedules everything:
```
Every 15 min  → Business Stepper (picks highest-impact task)
Every 30 min  → Revenue Engine (analytics + optimization)
Every 6 hours → Content Pipeline (draft + queue management)
Daily 09:00   → Morning Launch (publishing queue)
```
Each cron job spawns an isolated sub-agent with a specific mission. The sub-agent:
- Reads the registry dashboard
- Picks the most impactful business (priority × staleness)
- Loads that business's division file
- Executes ONE action (8-minute hard cap)
- Updates the division file + logs the action
- Dies
No long-running processes. No state accumulation. No memory leaks.
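The read-act-update-die loop above can be sketched as one function with a deadline. The four callables are stand-ins for the real agent steps, which aren't shown here:

```python
import time

def run_cycle(pick_business, load_division, execute_action, save_division,
              time_cap_s=8 * 60):
    """One cron tick: read state, do exactly one action, persist, exit."""
    deadline = time.monotonic() + time_cap_s
    biz = pick_business()                          # registry dashboard + scoring
    state = load_division(biz)                     # the business's "brain"
    result = execute_action(biz, state, deadline)  # ONE atomic, deadline-aware action
    save_division(biz, state, result)              # write memory + log
    return result                                  # process exits: nothing carries over
```

Passing the deadline into `execute_action` is what makes the 8-minute cap a hard cap rather than a suggestion: the action can check it and bail out cleanly.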
## The Business Stepper: Automated Prioritization
The most interesting piece is the "Business Stepper" — the agent that decides what to work on next.
The algorithm is simple:
```
Score = Priority Weight × Days Since Last Update

P1 = 3x weight
P2 = 2x weight
P3 = 1x weight
```
With these weights, a P1 business that hasn't been touched in three days (score 9) outranks a P3 business that's been stale for a week (score 7). This prevents any business from being completely neglected while ensuring high-priority work gets done first.
If the chosen business has a blocker (waiting for login, API down, etc.), it automatically falls through to the next one.
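A minimal sketch of that scoring and fallthrough logic. Field names like `days_stale` and `blocked` are my own, not from the real stepper:

```python
WEIGHTS = {"P1": 3, "P2": 2, "P3": 1}

def score(priority, days_stale):
    """Priority weight × days since last update, exactly as above."""
    return WEIGHTS[priority] * days_stale

def pick_business(businesses):
    """Highest score wins; blocked businesses fall through to the next."""
    ranked = sorted(
        businesses,
        key=lambda b: score(b["priority"], b["days_stale"]),
        reverse=True,
    )
    for b in ranked:
        if not b.get("blocked", False):
            return b["name"]
    return None  # everything is blocked this cycle
```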
## Inter-Agent Communication
Agents need to talk to each other. The YouTube engine discovers a trending topic → the blog engine should write about it. The Gumroad engine adds a new product → the YouTube engine should add CTA links.
I built a simple message queue into the registry:
```bash
# YouTube engine sends message
biz-registry.py msg send youtube gumroad "New short uploaded: insurance-trap"

# Gumroad engine checks inbox
biz-registry.py msg check gumroad
# → [youtube]: "New short uploaded: insurance-trap"

# Process and acknowledge
biz-registry.py msg pop gumroad
```
No external message broker. No Redis. No RabbitMQ. Just a JSON array in the registry file. At my scale (6 businesses, ~50 messages/day), this is perfectly adequate.
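A sketch of how such a queue can work as plain list operations on the registry dict. The `_messages` key and field names are assumptions, not the real schema:

```python
def msg_send(reg, sender, recipient, body):
    """Append a message to the shared queue inside the registry dict."""
    reg.setdefault("_messages", []).append(
        {"from": sender, "to": recipient, "body": body}
    )

def msg_check(reg, recipient):
    """Peek at every message addressed to `recipient` without removing any."""
    return [m for m in reg.get("_messages", []) if m["to"] == recipient]

def msg_pop(reg, recipient):
    """Remove and return the oldest message for `recipient`, if any."""
    inbox = msg_check(reg, recipient)
    if not inbox:
        return None
    reg["_messages"].remove(inbox[0])
    return inbox[0]
```

Everything lives in the same JSON file the registry already persists, so "delivery" is just the next agent reading the file.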
## Time-Aware Execution
Not every action can happen at any time:
```
09:00-22:00 → Full operations (publish, post, push to GitHub)
22:00-09:00 → Research and production only (drafts, analysis, code)
```
This isn't just politeness — it's platform safety. Publishing at 3 AM from a known timezone creates suspicious patterns. Rate limiters check every action against platform-specific rules before execution.
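The gate itself can be a few lines. A sketch, assuming actions are tagged by type (the real rate limiter's rules are platform-specific and not shown):

```python
# Action types that touch the outside world; everything else is internal.
PUBLISH_ACTIONS = {"publish", "post", "push"}

def allowed(action, hour):
    """Publishing only inside the 09:00-22:00 window; drafts run anytime."""
    if action in PUBLISH_ACTIONS:
        return 9 <= hour < 22
    return True
```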
## What I Learned After 3 Weeks

### 1. Division Files > Databases
I tried SQLite. I tried JSON APIs. Nothing beats a markdown file that a human can read and an AI can parse. When something breaks, I open the file and immediately see what happened.
### 2. 8-Minute Hard Cap Changes Everything
Without a time limit, agents rabbit-hole. "Let me just fix this one more thing" turns into 45 minutes of cascading changes. The hard cap forces atomic, completable actions.
### 3. Single Action Per Cycle Prevents Cascading Failures
If an agent tries to do three things and the second one fails, you get a partially updated state that's hard to recover from. One action, one update, clean exit.
### 4. Registry-Driven Discovery > Hardcoding
When I added business #5, I didn't change any agent code. I ran `biz-registry.py add` and the stepper started picking it up automatically next cycle.
### 5. 36% of What You Build Gets Deleted
Across platforms, about a third of published content gets moderated away. The architecture handles this gracefully because division files track what happened, not just what succeeded.
## The Numbers
After 3 weeks of this system:
- 23 YouTube videos published (mix of shorts + long-form)
- 16 digital products live on Gumroad
- 8 blog posts on Hashnode
- 2 web apps deployed
- $0 revenue (being honest — traffic is the bottleneck, not production)
The system is excellent at producing. The missing piece is distribution — which turns out to be the hard part that can't easily be automated.
## Try This Architecture
If you're juggling multiple projects, here's the minimum viable version:
- One JSON file listing all projects with status + priority
- One markdown file per project with status, next actions, lessons
- One cron job that reads the JSON, picks the stalest high-priority project, and does one thing
You don't need AI agents for this. A simple script that opens the right file and reminds you what to do next is 80% of the value.
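The "pick the stalest high-priority project" step fits in one small function. The weights mirror the stepper described earlier; the JSON schema here is illustrative, not prescriptive:

```python
import datetime

WEIGHTS = {"P1": 3, "P2": 2, "P3": 1}

def stalest_project(projects, today):
    """Return the project name with the highest priority-weighted staleness.

    `projects` mirrors the one-JSON-file idea:
    {name: {"priority": "P1", "last_touched": "YYYY-MM-DD"}}.
    """
    def staleness_score(item):
        _, p = item
        days = (today - datetime.date.fromisoformat(p["last_touched"])).days
        return WEIGHTS[p["priority"]] * days
    name, _ = max(projects.items(), key=staleness_score)
    return name
```

Wire this to a daily cron job that prints the winner and opens its markdown file, and you have the no-AI version of the whole system.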
Building 6 businesses simultaneously with AI agents. Currently at $0 MRR but the production pipeline is running smoothly. Follow along for weekly updates.
**Resources:**
- 📘 The $0 Developer Playbook — Free guide to building with zero budget
- 🎮 Complete Indie Game Dev Toolkit — Everything you need to ship your first game
## 📚 More in This Series
- How I Set Up an AI Agent That Runs 24/7 on a Mac Mini — Where the 24/7 automation journey started (416+ views)
- How I Built a Self-Healing Automation System That Runs 24/7 Without Me — The recovery patterns that keep these cron jobs alive