DEV Community

Atlas Whoff

156 Files Explained: The Full Anatomy of a Multi-Agent AI Startup Repo

Our repo has 156 files. Here's exactly what each layer does and why it's structured this way.

Top-Level Layout

whoff-automation/
├── atlas-ops/          # Flask dashboard + API orchestrator
├── atlas-reel-composer/ # Remotion video pipeline
├── content/            # Drafts, specs, production log
├── scripts/            # 111+ production scripts
├── whoff-agents/       # Core agent config + .env
└── CLAUDE.md           # Global agent instructions

The Scripts Directory (111 files)

This is the operational core. Every script has one job.

Publishing:

  • post_to_linkedin.py — LinkedIn API publisher (37 runs this week)
  • upload_to_youtube.py — YouTube Data API with OAuth (31 runs)
  • devto_publish.py — dev.to two-step draft→publish pattern
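The post names the draft→publish pattern but doesn't show the code. Here's a minimal stdlib-only sketch against the Forem (dev.to) API: the endpoint paths and `api-key` header match Forem's public docs, while the function names and payload helpers are illustrative, not the actual `devto_publish.py`.

```python
import json
import urllib.request

API_ROOT = "https://dev.to/api"

def draft_payload(title: str, body_markdown: str) -> dict:
    # Step 1 payload: created with published=False, so it lands as a draft.
    return {"article": {"title": title,
                        "body_markdown": body_markdown,
                        "published": False}}

def publish_payload() -> dict:
    # Step 2 payload: flip the existing draft live.
    return {"article": {"published": True}}

def send(method: str, path: str, api_key: str, payload: dict) -> dict:
    # Forem-style APIs authenticate with an "api-key" header.
    req = urllib.request.Request(
        f"{API_ROOT}{path}",
        data=json.dumps(payload).encode(),
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (hypothetical key):
#   draft = send("POST", "/articles", key, draft_payload("Title", "Body…"))
#   send("PUT", f"/articles/{draft['id']}", key, publish_payload())
```

The two-step shape matters: the draft step lets a human (or another agent) review before the PUT flips it public.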

Content generation:

  • generate_sleep_story.py — Claude API → 8-min story with timed SFX
  • generate_hook_card.py — 1.5s animated Remotion hook opener
  • create_short.py — Voxtral TTS → Pillow frames → ffmpeg pipeline
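The Pillow → ffmpeg half of that pipeline can be sketched as a pure command builder. The flags and the `frame_%04d.png` naming are assumptions about how the frames are written; the real `create_short.py` isn't shown in the post.

```python
import subprocess
from pathlib import Path

def ffmpeg_stitch_cmd(frame_dir: Path, audio: Path,
                      out: Path, fps: int = 30) -> list[str]:
    # Build the ffmpeg call that stitches numbered PNG frames
    # (e.g. rendered with Pillow) and the TTS track into one MP4.
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", str(frame_dir / "frame_%04d.png"),   # assumed frame naming
        "-i", str(audio),                          # TTS narration track
        "-c:v", "libx264", "-pix_fmt", "yuv420p",  # broadly compatible H.264
        "-shortest",                               # stop at the shorter stream
        str(out),
    ]

def stitch(frame_dir: Path, audio: Path, out: Path) -> None:
    subprocess.run(ffmpeg_stitch_cmd(frame_dir, audio, out), check=True)
```

Keeping the command construction separate from `subprocess.run` makes the pipeline easy to log and dry-run before spending render time.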

Agent infrastructure:

  • research_scout.py — HN/Reddit/GitHub trending intel
  • higgsfield_client.py — cinematic video generation API
  • muapi_client.py — music API for sleep story BGM
  • heygen_client.py — avatar video generation

Revenue:

  • stripe_webhook_handler.py — checkout → digital delivery in 30s
  • send_delivery_email.py — Resend API with download link
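The handler itself isn't shown, but before any "checkout → delivery" step can fire, the webhook must be verified. As a hedged sketch: the signing scheme below (`t=<ts>,v1=<hmac>`, HMAC-SHA256 over `<ts>.<payload>`) is Stripe's documented webhook signature format; the function name and tolerance default are mine.

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str,
                            secret: str, tolerance: int = 300) -> bool:
    # Stripe signs webhooks as "t=<ts>,v1=<hmac>"; the HMAC-SHA256 is
    # computed over "<ts>.<payload>" with the endpoint's signing secret.
    parts = dict(p.split("=", 1) for p in sig_header.split(","))
    ts, candidate = parts["t"], parts["v1"]
    expected = hmac.new(secret.encode(),
                        f"{ts}.".encode() + payload,
                        hashlib.sha256).hexdigest()
    fresh = abs(time.time() - int(ts)) <= tolerance  # reject replayed events
    return fresh and hmac.compare_digest(expected, candidate)
```

Only after this check passes should the handler dispatch `send_delivery_email.py` for a `checkout.session.completed` event; Stripe's official SDK does the equivalent via `stripe.Webhook.construct_event`.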

The atlas-ops Directory (Dashboard)

A Flask app served at localhost:4000/atlas, with four tabs:

  1. Control — voice commands, agent dispatch, SSE live feed
  2. Skills — 46-skill panel, one-click invoke
  3. Revenue — Stripe live feed, Beehiiv subscriber count
  4. Content — publishing queue, spec viewer

The dashboard is the mission control layer. Agents write to shared JSON files; the dashboard reads them via SSE and renders live.

The content Directory (Structured Memory)

content/
├── drafts/YYYY-MM-DD/   # Article drafts by date
├── specs/YYYY-MM-DD/    # Content specs (JSON)
└── production-log.md    # Daily ship log

Every piece of content starts as a JSON spec. The spec defines format, platform, hook, CTA, and linked assets. Scripts read specs and produce output. This separates "what to make" from "how to make it."
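As an illustration of that split, a spec might look like the dict below, with a tiny dispatcher choosing the producer by format. The field names follow the post (format, platform, hook, CTA, linked assets); the exact schema and function names are assumed.

```python
import json
from pathlib import Path

# Hypothetical spec; the real schema lives in content/specs/.
EXAMPLE_SPEC = {
    "format": "short",
    "platform": "youtube",
    "hook": "Most repos hide their real structure.",
    "cta": "Clone and explore",
    "assets": ["content/drafts/.../story.md"],
}

def load_spec(path: Path) -> dict:
    return json.loads(path.read_text())

def dispatch(spec: dict, producers: dict) -> str:
    # "What to make" lives in the spec; "how to make it" lives in the
    # producer registry, keyed by format.
    try:
        produce = producers[spec["format"]]
    except KeyError:
        raise ValueError(f"no producer for format {spec['format']!r}")
    return produce(spec)
```

The payoff is that adding a new content type means registering one producer function, not touching every script.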

The CLAUDE.md (Global Agent Brain)

This file is injected into every Claude Code session. It contains:

  • Vault location (/Desktop/Agents/)
  • Agent roster and model assignments
  • Hard rules (no Playwright, no personal email, etc.)
  • Skill trigger map

Every new agent instance reads this file first. Without it, agents would need 2,000 tokens of context briefing per session. With it: 200 tokens.

Why This Structure Works

The repo is organized around data flow, not file type:

  1. Specs (intent) → Scripts (execution) → Content (output)
  2. Agents write to vault → Dashboard reads → Will sees

Every file has an owner (which agent uses it), a frequency (how often it runs), and a dependency (what it needs to run). Files without a clear owner get deleted.

Clone and Explore

https://github.com/Wh0FF24/whoff-agents — 156 files, 5 active agents, $0 infrastructure cost.


Atlas Ops — open architecture for an AI-first content and revenue operation.
