You've read the think-pieces. You've bookmarked the Twitter threads. Now let's actually ship something.
This is the guide I wish existed when I started building with Claude Code agents. No vague diagrams. No "imagine a world where..." intros. Just working code.
Full repo: github.com/Wh0FF24/whoff-automation
## What We're Building
A single agent that:
- Accepts a task via stdin
- Calls Claude via the Anthropic API
- Writes its output to a file
- Sends a PAX-formatted handoff message
Total lines of code: ~60. Total time: 5 minutes.
## Prerequisites

```shell
pip install anthropic python-dotenv
export ANTHROPIC_API_KEY=sk-...
```
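If you'd rather not export the key in every shell, python-dotenv (installed above) can read it from a `.env` file next to the script. A sketch of that file:

```ini
# .env -- read by python-dotenv's load_dotenv(); keep this file out of git
ANTHROPIC_API_KEY=sk-...
```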
## Step 1: The Agent Skeleton

```python
# agent.py
from anthropic import Anthropic
from dotenv import load_dotenv
from pathlib import Path
from datetime import datetime

load_dotenv()  # picks up ANTHROPIC_API_KEY from a .env file, if present
client = Anthropic()

VAULT = Path("~/Desktop/Agents/output").expanduser()
VAULT.mkdir(parents=True, exist_ok=True)

def run_agent(task: str, agent_name: str = "Atlas") -> str:
    """Run a single-turn agent and write its output to the vault."""
    system = f"""You are {agent_name}, an autonomous AI agent.
Complete the task below. Be direct. Write output in markdown.
Do not explain what you're about to do — just do it."""

    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=2048,
        system=system,
        messages=[{"role": "user", "content": task}],
    )
    output = response.content[0].text

    # Write to vault
    slug = datetime.now().strftime("%Y-%m-%d-%H%M")
    out_file = VAULT / f"{agent_name.lower()}-{slug}.md"
    out_file.write_text(output)
    print(f"✓ {agent_name} → {out_file}")
    return output

if __name__ == "__main__":
    import sys
    task = sys.argv[1] if len(sys.argv) > 1 else input("Task: ")
    run_agent(task)
```
Run it:

```shell
python agent.py "Write a 3-bullet summary of prompt caching best practices"
```

Output lands in `~/Desktop/Agents/output/atlas-2026-04-15-1142.md`. Done.
## Step 2: Add a PAX Handoff
PAX is the message format we use for agent-to-agent coordination. It's just a compressed string — but it's what keeps 5 agents coherent without prose overhead.
```python
def pax_handoff(from_agent: str, to_agent: str, task: str, file: str) -> str:
    codes = {"Atlas": "ATL", "Prometheus": "PRO", "Apollo": "APL"}
    frm = codes.get(from_agent, from_agent[:3].upper())
    to = codes.get(to_agent, to_agent[:3].upper())
    return f"[{frm}→{to}] task:{task} | file:{file} | status:✓"

# [ATL→PRO] task:market-summary | file:atlas-2026-04-15-1142.md | status:✓
```
That's it. Your Prometheus agent reads this string, knows what file to open, and proceeds. No JSON parsing. No schema validation. Just a pipe-delimited line.
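The receiving side is just as small. A minimal parsing sketch (the `parse_pax` helper is illustrative, not part of the repo; the field names match the format produced above):

```python
def parse_pax(msg: str) -> dict:
    """Parse '[ATL→PRO] task:... | file:... | status:...' into a dict."""
    header, _, rest = msg.partition("] ")
    frm, to = header.lstrip("[").split("→")
    fields = dict(part.split(":", 1) for part in rest.split(" | "))
    return {"from": frm, "to": to, **fields}

msg = "[ATL→PRO] task:market-summary | file:atlas-2026-04-15-1142.md | status:✓"
print(parse_pax(msg)["file"])  # atlas-2026-04-15-1142.md
```

Your Prometheus agent calls something like this on the incoming line, pulls out `file`, and opens it.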
## Step 3: Make It Persistent (launchd)

Save this as `~/Library/LaunchAgents/com.whoff.agent.morning.plist`:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.whoff.agent.morning</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/python3</string>
        <string>/path/to/agent.py</string>
        <string>Run your morning brief</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key><integer>7</integer>
        <key>Minute</key><integer>0</integer>
    </dict>
</dict>
</plist>
```

Then load it:

```shell
launchctl load ~/Library/LaunchAgents/com.whoff.agent.morning.plist
```
Your agent now runs every morning at 7am. No cloud required.
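One caveat: launchd jobs fail silently by default. Two optional keys capture stdout/stderr so you can see why a morning run didn't fire (a sketch; the log paths are illustrative):

```xml
<!-- Add inside the top-level <dict> of the plist above -->
<key>StandardOutPath</key>
<string>/tmp/whoff-agent-morning.log</string>
<key>StandardErrorPath</key>
<string>/tmp/whoff-agent-morning.err</string>
```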
## Step 4: Watch Your Agents Live
```python
from flask import Flask, jsonify
from pathlib import Path
import time

app = Flask(__name__)
VAULT = Path("~/Desktop/Agents").expanduser()

@app.route("/api/agents")
def agents():
    # Newest files first: these are the most recently active agents
    files = sorted(VAULT.rglob("*.md"), key=lambda p: p.stat().st_mtime, reverse=True)
    return jsonify([{
        "file": f.name,
        "agent": f.parent.name,
        "age_sec": int(time.time() - f.stat().st_mtime),
    } for f in files[:20]])

if __name__ == "__main__":
    app.run(port=4100)
```
Full dashboard (SSE live feed, launchd status, session viewer) is in the repo under atlas-ops/.
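You don't need the server running to sanity-check the listing logic. The same rglob/mtime scan works standalone against a throwaway vault (the paths here are illustrative):

```python
import tempfile
import time
from pathlib import Path

# Build a throwaway vault with two files, then list them newest-first,
# mirroring what the /api/agents handler does.
vault = Path(tempfile.mkdtemp())
(vault / "atlas").mkdir()
(vault / "atlas" / "old.md").write_text("old")
time.sleep(0.05)  # ensure distinct mtimes
(vault / "atlas" / "new.md").write_text("new")

files = sorted(vault.rglob("*.md"), key=lambda p: p.stat().st_mtime, reverse=True)
listing = [{"file": f.name, "agent": f.parent.name} for f in files]
print(listing[0]["file"])  # new.md
```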
## What's Next
| Step | What to build |
|---|---|
| 2 agents | Add a Prometheus reviewer that reads Atlas output |
| 3 agents | Add Apollo for research — feeds context to Atlas |
| Coordination | Implement PAX protocol for handoffs |
| Scale | Add launchd schedules for each agent |
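The multi-agent steps in the table reduce to one pattern: each agent's output becomes context for the next. A minimal pipeline sketch (the `run_pipeline` helper is illustrative; `runner` is injected so you can pass `run_agent` from `agent.py`, or a stub while testing):

```python
def run_pipeline(steps, runner):
    """steps: list of (agent_name, task) tuples. Returns the final output."""
    context = ""
    for agent_name, task in steps:
        # Feed the previous agent's output into the next agent's prompt
        prompt = f"{task}\n\nContext from previous agent:\n{context}" if context else task
        context = runner(prompt, agent_name)
    return context

# Usage with a stub runner (swap in run_agent for real API calls):
fake = lambda task, name: f"[{name}] done"
result = run_pipeline(
    [("Apollo", "research"), ("Atlas", "draft"), ("Prometheus", "review")], fake
)
print(result)  # [Prometheus] done
```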
```shell
git clone https://github.com/Wh0FF24/whoff-automation.git
cd whoff-automation
pip install -r requirements.txt
```
## The One Rule
> **An agent that doesn't write its output to a file didn't do anything.**
Every agent in our system writes to the vault. No exceptions. That's the audit trail, the handoff mechanism, and the long-term memory all in one.
Start there. The rest is optimization.
## Free Skills Repo
We've open-sourced the core skills that implement these patterns. The whoff-agents skills repo includes:
- `context-anchor` — drops a working reference to prevent cascading context drift
- `agent-handoff` — generates structured dispatch packets for subagents
- `cost-cap-guard` — enforces token budgets before dispatch, trims over-budget prompts
- `dead-letter` — captures failed tasks and generates retry packets or escalation messages
Clone it and drop the skills into your `.claude/skills/` directory:

```shell
git clone https://github.com/Wh0FF24/whoff-agents.git
cp -r whoff-agents/skills/* ~/.claude/skills/
```