MaxxMini
I Run 6 Businesses With Cron Jobs and Sub-Agents — Here's the Architecture

I'm running 6 online businesses simultaneously.

Not "thinking about starting" them. Actually operating them — uploading content, publishing products, writing blog posts, monitoring analytics, fixing bugs.

The secret? I'm not doing most of it myself.

I have a Mac Mini running 24/7 with an orchestration system that schedules AI sub-agents to handle specific tasks across all 6 businesses. Each business has its own "division file" — a living document that tracks status, KPIs, next actions, and lessons learned.

Here's exactly how it works.

The Problem: Context Switching Kills Solo Founders

When you're running one business, focus is easy. When you're running six, your day looks like this:

  • 9:00 — Check YouTube analytics
  • 9:15 — Fix a broken link on Gumroad
  • 9:30 — Write a blog post for Hashnode
  • 10:00 — Debug a CSS issue on your SaaS
  • 10:30 — Realize you forgot to upload yesterday's short
  • 11:00 — Context switch back to Gumroad, forget where you were

Sound familiar?

The cognitive overhead isn't the work itself. It's the switching cost. Every time you change contexts, you lose 15-20 minutes ramping back up.

My solution: delegate context switching to machines.

The Architecture: Registry + Divisions + Cron

The system has three layers:

Layer 1: Business Registry (Single Source of Truth)

Every business lives in a JSON registry. If it's not in the registry, it doesn't exist.

```json
{
  "youtube-shorts": {
    "name": "YouTube Shorts",
    "status": "active",
    "priority": "P1",
    "type": "revenue",
    "kpi": {
      "subscribers": 7,
      "total_views": 6200,
      "videos": 23
    }
  },
  "gumroad": {
    "name": "Gumroad Digital Products",
    "status": "active",
    "priority": "P1",
    "kpi": {
      "products_live": 13,
      "sales": 3,
      "revenue_usd": 0
    }
  }
}
```

A CLI tool lets any agent query, update, or add businesses:

```shell
# Dashboard view
python3 biz-registry.py dashboard

# Update a KPI
python3 biz-registry.py kpi youtube-shorts subscribers 10

# Inter-agent messaging
python3 biz-registry.py msg send youtube-engine gumroad-engine "New short uploaded, add CTA link"
```

Why this matters: No hardcoding. When I add a 7th business, every agent automatically discovers it.
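The discovery pattern behind this is small enough to sketch. The file name and exact schema below are my assumptions based on the registry excerpt, not the post's actual code:

```python
import json

# Hypothetical path; the post doesn't show where the registry file lives.
REGISTRY_PATH = "registry.json"

def load_registry(path=REGISTRY_PATH):
    """Load the single source of truth that every agent reads first."""
    with open(path) as f:
        return json.load(f)

def active_businesses(registry):
    """Discover businesses at runtime, so adding a 7th needs no code change."""
    return [key for key, biz in registry.items()
            if biz.get("status") == "active"]
```

Any agent that iterates `active_businesses()` instead of a hardcoded list picks up new entries on its next cycle.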

Layer 2: Division Files (Business Memory)

Each business has a markdown file that serves as its "brain":

```markdown
# YouTube Shorts — Division Memory

## Current Status
- 23 videos published
- Algorithm cliff since March 12 (0 views)
- 3 shorts ready in pipeline

## Next Actions
1. Upload kr-5am-club short (verified)
2. Test new thumbnail style
3. Analyze competitor posting times

## Lessons Learned
- Subtitle font size 78 (mobile readability)
- Korean TTS rate: +45% (sweet spot)
- Mid-video CTA > end CTA (4x effectiveness)
```

This is the key insight: AI agents don't have memory between sessions. Division files ARE their memory. When a sub-agent wakes up, it reads the division file, knows exactly where things stand, and picks up where it left off.
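A minimal parser for that memory format might look like this. It's a sketch under one assumption: that agents only need the `## ` sections as keyed text. The post doesn't show how its agents actually read division files:

```python
def parse_division_file(text):
    """Split a division file into its '## '-headed sections.

    Returns {section_title: body_text} so an agent can jump straight
    to 'Next Actions' or 'Lessons Learned' when it wakes up.
    """
    sections = {}
    current = None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {k: "\n".join(v).strip() for k, v in sections.items()}
```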

Layer 3: Cron Orchestration

The cron layer schedules everything:

```text
Every 15 min  → Business Stepper (picks highest-impact task)
Every 30 min  → Revenue Engine (analytics + optimization)
Every 6 hours → Content Pipeline (draft + queue management)
Daily 09:00   → Morning Launch (publishing queue)
```

Each cron job spawns an isolated sub-agent with a specific mission. The sub-agent:

  1. Reads the registry dashboard
  2. Picks the most impactful business (priority × staleness)
  3. Loads that business's division file
  4. Executes ONE action (8-minute hard cap)
  5. Updates the division file + logs the action
  6. Dies

No long-running processes. No state accumulation. No memory leaks.
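The six-step lifecycle fits in one function. The four callables below are placeholders for system pieces the post doesn't show:

```python
import time

TIME_BUDGET_SECONDS = 8 * 60  # the post's 8-minute hard cap

def run_cycle(pick_business, load_division, execute_action, save_division):
    """One cron-triggered sub-agent cycle: pick, act once, persist, exit.

    No loop and no daemon: when this function returns, the process
    simply ends ("dies"), so no state accumulates between cycles.
    """
    deadline = time.monotonic() + TIME_BUDGET_SECONDS
    business = pick_business()                      # steps 1-2: registry + pick
    division = load_division(business)              # step 3: load memory
    result = execute_action(business, division, deadline)  # step 4: ONE action
    save_division(business, division, result)       # step 5: update + log
```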

The Business Stepper: Automated Prioritization

The most interesting piece is the "Business Stepper" — the agent that decides what to work on next.

The algorithm is simple:

```text
Score = Priority Weight × Days Since Last Update

P1 = 3x weight
P2 = 2x weight
P3 = 1x weight
```

A P1 business that hasn't been touched in three days (score 9) outranks a P3 business that's been stale for a week (score 7). This prevents any business from being completely neglected while ensuring high-priority work gets done first.

If the chosen business has a blocker (waiting for login, API down, etc.), it automatically falls through to the next one.
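Scoring plus the blocker fallthrough fits in a dozen lines. Field names like `last_update` are my assumptions, not the post's schema:

```python
import datetime

PRIORITY_WEIGHT = {"P1": 3, "P2": 2, "P3": 1}

def score(biz, today):
    """Score = priority weight × days since last update."""
    days_stale = (today - biz["last_update"]).days
    return PRIORITY_WEIGHT[biz["priority"]] * days_stale

def pick_next(registry, today, is_blocked=lambda biz: False):
    """Highest score first; blocked businesses fall through to the next one."""
    for biz in sorted(registry.values(),
                      key=lambda b: score(b, today), reverse=True):
        if not is_blocked(biz):
            return biz
    return None
```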

Inter-Agent Communication

Agents need to talk to each other. The YouTube engine discovers a trending topic → the blog engine should write about it. The Gumroad engine adds a new product → the YouTube engine should add CTA links.

I built a simple message queue into the registry:

```shell
# YouTube engine sends message
python3 biz-registry.py msg send youtube gumroad "New short uploaded: insurance-trap"

# Gumroad engine checks inbox
python3 biz-registry.py msg check gumroad
# → [youtube]: "New short uploaded: insurance-trap"

# Process and acknowledge
python3 biz-registry.py msg pop gumroad
```

No external message broker. No Redis. No RabbitMQ. Just a JSON array in the registry file. At my scale (6 businesses, ~50 messages/day), this is perfectly adequate.
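In-registry messaging can be as small as three functions. The `_messages` key and the message shape below are assumptions for illustration:

```python
def msg_send(registry, sender, recipient, text):
    """Append a message to the recipient's inbox (a plain list in the registry)."""
    inbox = registry.setdefault("_messages", {}).setdefault(recipient, [])
    inbox.append({"from": sender, "text": text})

def msg_check(registry, recipient):
    """Peek at the inbox without consuming anything."""
    return registry.get("_messages", {}).get(recipient, [])

def msg_pop(registry, recipient):
    """Process-and-acknowledge: remove and return the oldest message."""
    inbox = registry.get("_messages", {}).get(recipient, [])
    return inbox.pop(0) if inbox else None
```

The tradeoff is obvious (no durability guarantees, no concurrent writers), which is exactly why it only makes sense at this scale.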

Time-Aware Execution

Not every action can happen at any time:

```text
09:00-22:00 → Full operations (publish, post, push to GitHub)
22:00-09:00 → Research and production only (drafts, analysis, code)
```

This isn't just politeness — it's platform safety. Publishing at 3 AM from a known timezone creates suspicious patterns. Rate limiters check every action against platform-specific rules before execution.
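A gate like that is one predicate checked before every action. The action names below are invented for the sketch; the post's rate limiters are platform-specific and more involved:

```python
import datetime

FULL_OPS_START = datetime.time(9, 0)
FULL_OPS_END = datetime.time(22, 0)

# Hypothetical action categories; only outward-facing actions are gated.
PUBLISH_ACTIONS = {"publish", "post", "push"}

def is_allowed(action, now=None):
    """Publishing runs 09:00-22:00 only; research/drafting runs anytime."""
    now = now or datetime.datetime.now().time()
    if action in PUBLISH_ACTIONS:
        return FULL_OPS_START <= now < FULL_OPS_END
    return True
```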

What I Learned After 3 Weeks

1. Division Files > Databases

I tried SQLite. I tried JSON APIs. Nothing beats a markdown file that a human can read and an AI can parse. When something breaks, I open the file and immediately see what happened.

2. 8-Minute Hard Cap Changes Everything

Without a time limit, agents rabbit-hole. "Let me just fix this one more thing" turns into 45 minutes of cascading changes. The hard cap forces atomic, completable actions.
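One way to enforce a wall-clock cap on Unix (the post runs on a Mac Mini) is SIGALRM. This is a sketch of the cap idea under that assumption, not the system's actual enforcement code:

```python
import signal

class TimeBudgetExceeded(Exception):
    pass

def _on_alarm(signum, frame):
    raise TimeBudgetExceeded("hard cap hit")

def run_with_cap(action, seconds=8 * 60):
    """Run one action under a hard wall-clock cap (Unix-only: SIGALRM)."""
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)  # raise TimeBudgetExceeded after `seconds`
    try:
        return action()
    finally:
        signal.alarm(0)                        # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

The interruption is abrupt by design: an action that can't survive being killed mid-flight isn't atomic enough for this architecture.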

3. Single Action Per Cycle Prevents Cascading Failures

If an agent tries to do 3 things and the 2nd one fails, you get a partially-updated state that's hard to recover from. One action, one update, clean exit.

4. Registry-Driven Discovery > Hardcoding

When I added business #5, I didn't change any agent code. I ran `biz-registry.py add` and the stepper started picking it up automatically next cycle.

5. 36% of What You Build Gets Deleted

Across platforms, about a third of published content gets moderated away. The architecture handles this gracefully because division files track what happened, not just what succeeded.

The Numbers

After 3 weeks of this system:

  • 23 YouTube videos published (mix of shorts + long-form)
  • 16 digital products live on Gumroad
  • 8 blog posts on Hashnode
  • 2 web apps deployed
  • $0 revenue (being honest — traffic is the bottleneck, not production)

The system is excellent at producing. The missing piece is distribution — which turns out to be the hard part that can't easily be automated.

Try This Architecture

If you're juggling multiple projects, here's the minimum viable version:

  1. One JSON file listing all projects with status + priority
  2. One markdown file per project with status, next actions, lessons
  3. One cron job that reads the JSON, picks the stalest high-priority project, and does one thing

You don't need AI agents for this. A simple script that opens the right file and reminds you what to do next is 80% of the value.
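That no-AI version could literally be one script. `projects.json`, the `last_update` field, and the per-project `.md` convention are my assumptions, mirroring the layers described above:

```python
import datetime
import json

WEIGHT = {"P1": 3, "P2": 2, "P3": 1}

def remind(registry_path="projects.json"):
    """Pick the stalest high-priority project and say what to open next."""
    with open(registry_path) as f:
        projects = json.load(f)
    today = datetime.date.today()

    def staleness_score(item):
        _, p = item
        days = (today - datetime.date.fromisoformat(p["last_update"])).days
        return WEIGHT.get(p.get("priority", "P3"), 1) * days

    key, project = max(projects.items(), key=staleness_score)
    print(f"Work on: {project['name']} (see {key}.md for next actions)")
    return key
```

Run it from cron every morning and you have the 80% version: the machine remembers which context to load, so you don't have to.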


Building 6 businesses simultaneously with AI agents. Currently at $0 MRR but the production pipeline is running smoothly. Follow along for weekly updates.
