No fluff. No "imagine a world where..." Real architecture, real costs, real failures.
I'm the co-founder of Travel Code — a corporate travel management platform. Five weeks ago I started building an army of AI agents on OpenClaw. Today I have 12 agents, 48 automated cron jobs, a $92K paper trading portfolio managed by one of them, and a GTM pipeline that scrapes 7,200+ competitor reviews to find sales leads.
This is exactly how it works.
The Stack
Everything runs on OpenClaw — an open-source AI assistant framework. It's the runtime, the scheduler, and the communication layer. Each agent gets:
- A personality file (SOUL.md — defines who the agent is, how it talks, what it cares about)
- Persistent memory (markdown files + LanceDB vector search — the agent remembers everything)
- Its own Telegram group (this is the UI — I check agents like Slack channels)
- Cron jobs (heartbeats, daily tasks, weekly reports — all staggered to avoid crashes)
- Skills (installable plugins from ClawHub — SEO tools, CRM connectors, social APIs, scraping kits)
One Linux VPS (4 vCPU, 16 GB RAM). One OpenClaw gateway process. 12 agents sharing it.
Why Staggered Crons Matter (Learned the Hard Way)
Nobody tells you this: if you schedule 12 agents to wake up at the same time, everything crashes. The server runs out of memory, processes hang, and you're SSH-ing in at 2 AM to restart everything.
My heartbeats are staggered at 5-minute intervals:
:00 — Lobster (coordinator)
:05 — DevBot
:10 — CFO
:15 — Mailer
:20 — SEO
:25 — Analyst
:30 — Trader
:35 — Sales
:40 — SMM
:45 — PR
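In crontab terms, the stagger looks like this — the `openclaw heartbeat <agent>` command is a stand-in for however your gateway actually triggers heartbeats:

```cron
# Hourly heartbeats, each agent offset by 5 minutes (first five shown)
0  * * * * openclaw heartbeat lobster   # coordinator
5  * * * * openclaw heartbeat devbot
10 * * * * openclaw heartbeat cfo
15 * * * * openclaw heartbeat mailer
20 * * * * openclaw heartbeat seo
```

Same idea for the daily and weekly jobs: nothing shares a minute with anything else.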
Each heartbeat runs on Ollama (qwen2.5:3b) — a local model on the same server. Zero API cost. The agent wakes up, checks its task queue, and goes back to sleep. It only burns Claude tokens when there's actual work to do.
Why a local model for heartbeats? Because heartbeats fire every hour for every agent. That's 120+ API calls per day just for "do I have work?" — at Claude's pricing, that's pure waste. Ollama handles it for free.
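The heartbeat check itself is tiny. Here's a sketch: the URL and request/response shape follow Ollama's standard REST API, but the prompt and the task-queue format are illustrative. Note the short-circuit — an empty queue never touches a model at all:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def check_queue(tasks: list[dict]) -> str:
    """Cheap local triage: decide 'sleep' or 'work' without touching Claude."""
    if not tasks:
        return "sleep"  # nothing queued -> zero tokens burned anywhere
    prompt = (
        "You are an agent heartbeat. Pending tasks:\n"
        + "\n".join(t["title"] for t in tasks)
        + "\nReply with exactly one word: work or sleep."
    )
    body = json.dumps({"model": "qwen2.5:3b", "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip().lower()
```

Only when the local model answers "work" does the agent spin up a real Claude session.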
The Agents (And What They Actually Do)
🦞 Lobster — The Coordinator
My personal assistant and the one managing everyone else. Runs a daily standup at 4 AM UTC — collects status from all agents and sends me a single summary. That's my "management overhead."
Also handles cross-agent routing: when Sales finds a hot lead, Lobster can loop in Mailer for outreach and PR for media angle.
💰 Sales Agent — The One That Writes Its Own Code
This is probably the most insane agent. We built our own GTM OS (Go-To-Market Operating System) — a full platform for lead generation, competitor review scraping, LinkedIn enrichment, and outreach automation.
Then I gave the Sales agent developer skills.
Now it improves its own platform. It added qualification fields to the database. Built a bulk PATCH endpoint. Classified 7,229 reviews into hot/warm/cold. Added new API filters. When it needed a LinkedIn scraper and Apify was too expensive, it asked DevBot to build one — DevBot deployed a self-hosted FastAPI scraper on a separate server the same day. Sales started using it immediately.
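The hot/warm/cold triage is conceptually simple. This is a hypothetical rule-layer sketch — the real agent combines an LLM pass with qualification fields in the database, and the signal phrases here are made up for illustration:

```python
# Hypothetical signal lists -- the production classifier is LLM-assisted.
HOT_SIGNALS = {"cancel", "switching", "terrible support", "overpriced"}
WARM_SIGNALS = {"frustrating", "slow", "missing feature", "clunky"}

def classify_review(text: str, author_is_decision_maker: bool) -> str:
    """Bucket a competitor review into hot / warm / cold."""
    t = text.lower()
    if author_is_decision_maker and any(s in t for s in HOT_SIGNALS):
        return "hot"   # unhappy decision-maker at a competitor
    if any(s in t for s in HOT_SIGNALS | WARM_SIGNALS):
        return "warm"  # pain signal, but not a buying-authority match
    return "cold"
```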
It has 15 skills installed — from Apollo and HubSpot CRM connectors to cold email generators, Google Ads, and a Twenty CRM integration.
The numbers: 7,229 competitor reviews scraped across Navan, TravelPerk, Egencia, and 12 other competitors. 73 hot leads (unhappy decision-makers at competitors). 255 warm leads in pipeline. All found and qualified automatically.
The morning pipeline runs at 6 AM. Afternoon enrichment at 3 PM. Evening report at 10 PM. Three times a day, zero human involvement.
🔍 SEO Agent — Same Story, Different Platform
We have a separate SEO OS — a system for managing projects, content, GEO (Generative Engine Optimization), and keyword tracking at seo.travel-code.com. The SEO agent sits on top of it with skills for Google Search Console, schema markup, geo-optimization, and competitor analysis. It analyzes what needs to be done and executes.
I added my colleague Andrey to the SEO Telegram group. Now he works directly with the agent — gives it tasks, reviews its reports, discusses strategy. The agent became a team member, not just my tool.
📱 SMM Agent — Content Machine
The SMM agent monitors industry news, adapts it for our ICP (ideal customer profile), and publishes across all channels — Twitter/X via API, LinkedIn, Dev.to. It has skills for the X algorithm, tweet idea generation, listing swarm (directory submissions), and content marketing.
It doesn't just schedule posts. It finds trending topics in corporate travel, rewrites them from our angle, and distributes across every platform where our audience lives.
📈 Trader — Portfolio Manager
Manages a $92K paper trading portfolio on Alpaca. Runs every hour during market hours. Strict rules: -20% stop-loss, take 50% profit at +10%, stocks only, 80% invested / 20% cash target.
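Those rules are mechanical enough to sketch in a few lines. Thresholds come from the brief above; the 5% rebalance tolerance is my illustrative assumption:

```python
def position_action(entry: float, price: float) -> str:
    """Apply the stop-loss / take-profit rules to one position."""
    change = (price - entry) / entry
    if change <= -0.20:
        return "sell_all"   # -20% stop-loss
    if change >= 0.10:
        return "sell_half"  # take 50% profit at +10%
    return "hold"

def rebalance_needed(invested: float, cash: float, tolerance: float = 0.05) -> bool:
    """Check drift from the 80% invested / 20% cash target (tolerance assumed)."""
    total = invested + cash
    return abs(invested / total - 0.80) > tolerance
```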
Current positions: NVDA, META, AAPL, GOOGL, MSFT, SPY, QQQ, AMD. It monitors, logs everything to a trading journal, and only alerts me when something crosses a threshold.
Pre-market research at 8 AM EST. Nightly self-learning where it studies new algo-trading strategies. Skills: multi-factor strategy analysis, SEC insider trading data via OpenInsider.
📰 PR Agent
Scans journalist request platforms (HARO, Qwoted, SourceBottle) three times a day. Posts to Forbes Business Council three times a week via browser automation. Has skills for Reddit, SEO optimization, cold email, content creation, personal branding, campaign orchestration.
Real result: landed a pitch in Kiplinger's within hours of the request going live.
🛠 DevBot — The Builder
35 skills. Docker management, API development, database operations, browser automation (Stagehand, Camoufox stealth), scraping (Firecrawl, deep-scraper, Apify, PhantomBuster), DNS/networking, SSH tunneling, React/Next.js, 2Captcha. When other agents need code — they ask DevBot.
The LinkedIn scraper that Sales depends on? DevBot built and deployed it as a FastAPI microservice in Docker.
📧 Mailer — 30 Skills Deep
Newsletter generation, SEO article writing, landing page creation, cold email, Brevo integration, Google Ads, HubSpot, Bluesky, Typefully, YC-style cold outreach templates. This agent doesn't just send emails — it's a full marketing operations toolkit.
💼 CFO
Financial modeling with Charlie CFO framework, expense tracking, Plaid integration, tax analysis. Monitors burn rate and unit economics.
📊 Analyst
Deep research, competitive analysis, flight search (Jinko), news aggregation, Perplexity integration. Runs weekly deep dives across 12 competitors — Navan, TravelPerk, Spotnana, Ramp, Brex, Egencia. Recent finding: Brex acquired by Capital One for $5.15B (down from $12B peak) — that intel became a sales angle the same day.
The Architecture (Current Setup)
┌──────────────────────────────┐
│ OpenClaw Gateway             │ ← coordination + API calls
│ VPS: 4 vCPU, 16 GB RAM       │
│ + Ollama (qwen2.5:3b)        │
│ + LanceDB (vector memory)    │
└──────────────┬───────────────┘
               │
         Tailscale VPN
               │
      ┌────────▼────────┐
      │   Browserbase   │ ← cloud headless browsers
      │  (anti-detect)  │   for Forbes, directories, scraping
      └─────────────────┘
The key insight: 90% of agent work is an API call to Anthropic → get text → send to Telegram. The server barely matters — all the heavy lifting is on Anthropic's side. Heartbeats, inbox checks, web searches — that's milliseconds of CPU.
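That loop really is small. Here's a sketch of one unit of agent work — the endpoints and headers follow the public Anthropic Messages API and Telegram Bot API, but the model name is a placeholder and the injectable `post` function is my own structure for testability:

```python
import json
import urllib.request

def post_json(url: str, payload: dict, headers: dict) -> dict:
    """POST a JSON payload and parse the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", **headers},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def run_task(task: str, chat_id: int, api_key: str, bot_token: str, post=post_json) -> dict:
    """One unit of agent work: ask Claude, relay the answer to the agent's Telegram group."""
    reply = post(
        "https://api.anthropic.com/v1/messages",
        {"model": "claude-sonnet-4-5",  # placeholder model name
         "max_tokens": 1024,
         "messages": [{"role": "user", "content": task}]},
        {"x-api-key": api_key, "anthropic-version": "2023-06-01"},
    )
    text = reply["content"][0]["text"]
    return post(
        f"https://api.telegram.org/bot{bot_token}/sendMessage",
        {"chat_id": chat_id, "text": text},
        {},
    )
```

Everything else — memory writes, logging, the Telegram group UI — is bookkeeping around that exchange.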
The expensive part is browser automation. That's where architecture matters.
Future Architecture (With Mac Mini)
┌──────────────────────────────┐
│ OpenClaw Gateway             │ ← stays one, lightweight
│ VPS (always-on)              │
└──────────────┬───────────────┘
               │
         Tailscale VPN
               │
   ┌───────────┼──────────────┐
   │           │              │
┌──▼───────┐ ┌─▼────────┐ ┌───▼───────────┐
│ Ollama   │ │ Mac Mini │ │ GTM-OS Server │
│ (GPU     │ │ (node    │ │ (sales        │
│  server) │ │  host +  │ │  pipeline)    │
│          │ │  browser)│ │               │
└──────────┘ └──────────┘ └───────────────┘
When you run browser automation from a VPS, you hit Cloudflare, CAPTCHAs, and IP bans constantly. Residential proxies cost money and still get blocked 30-40% of the time.
OpenClaw has a "node host" feature — install it on a Mac Mini at home, connect via Tailscale, and agents browse the internet through a real residential IP. No proxy needed. Cloudflare thinks it's a human on Chrome. The Mac Mini costs $500 once and pays for itself in a month of saved proxy costs.
For 50 agents: gateway stays one (lightweight coordination), Ollama moves to a GPU server, browser tasks go through the Mac Mini, heavy compute spins up ephemeral containers on fly.io or DO App Platform.
Why LanceDB?
Free. Runs locally. Semantic search over agent memory — find relevant context from weeks ago without scanning every file. No cloud vector DB subscription.
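Conceptually, the memory search is just nearest-neighbor over embedded notes. Here's a stdlib toy of what LanceDB handles for us — in the real setup the vectors come from an embedding model, not hand-written 3-d lists:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recall(query_vec: list[float], memories: list[tuple], k: int = 2) -> list[str]:
    """memories: list of (note_text, vector). Return the k most relevant notes."""
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

LanceDB does the same thing, except indexed, persistent, and fast over thousands of notes.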
Why Tailscale?
Three devices. A private mesh VPN with zero-config networking. The VPS, Mac Mini, and GTM server all see each other. The free tier covers it.
Agent-to-Agent Communication
Three mechanisms:
1. Direct messages. One agent sends a message to another's session. PR asks Analyst for market data. Sales asks DevBot to build a scraper.
2. Task queue (Clawe on Convex). Agents post tasks, others claim them based on role. A task like "write content about Brex acquisition" gets posted, SMM claims it.
3. Heartbeat-based async. Critical: agents don't wake up on incoming messages (that would burn tokens all day). They wake up on their hourly heartbeat cron, check the queue, process anything waiting, and sleep again.
Agent-to-agent communication is async by design. When PR sends a request to Analyst, it might take up to an hour (next heartbeat). That's fine. Most tasks don't need real-time.
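The claim-by-role mechanic from point 2 can be sketched in a few lines — field names are my assumption; the real queue lives in Convex via Clawe:

```python
def claim_tasks(queue: list[dict], role: str) -> list[dict]:
    """On heartbeat, an agent claims every open task tagged with its role."""
    mine = [t for t in queue if t["role"] == role and t["status"] == "open"]
    for t in mine:
        t["status"] = "claimed"  # mark so other agents skip it next heartbeat
    return mine
```

In production the claim is an atomic update in the queue backend, not an in-memory mutation — otherwise two agents on adjacent heartbeats could grab the same task.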
Self-Learning and Self-Improving
Every agent has a weekly self-learning cron (Tuesday & Friday, staggered 15 minutes apart):
- Trader studies algo strategies on r/algotrading and Quantocracy
- Sales researches outbound tactics on SaaStr and Gong Labs
- SEO checks algorithm updates and GEO techniques
- PR reads PRWeek and Muck Rack for new media tactics
- DevBot scans Hacker News and GitHub for new tools
- SMM studies LinkedIn algorithm changes and viral content patterns
- CFO analyzes SaaS benchmarks on Baremetrics and ChartMogul
- Mailer reads newsletter growth tactics on Beehiiv and Litmus blogs
They update their own knowledge files. On Friday at 4 PM, Lobster runs a cross-agent knowledge digest — finds overlaps, contradictions, and connections between what agents learned in different domains.
Then on Friday at 5 PM, a cron retrospective analyzes all 48 cron jobs — which ones delivered value, which ones wasted tokens, what should change. The system improves itself weekly.
My Team Works With Them
This surprised me the most. The agents became team members.
- Andrey talks directly to the SEO agent — gives tasks, reviews reports
- The dev team interacts with DevBot for code reviews and deployment help
- Sales managers get automated lead distributions every morning via email — sorted by language and region, with a different motivational sales quote each day
I added colleagues to the agent Telegram groups. They don't need to know it's an AI. They just... work with it.
What Doesn't Work (Yet)
- Reddit blocks server IPs. Even Browserbase with residential proxies gets caught. Had to ask a human to post manually.
- Cloudflare-protected sites need the Mac Mini. No cloud browser service fully bypasses them.
- Device token mismatches between root/user configs caused a full day of debugging. Silent failures are the worst.
- Rate limits cascade. When the Anthropic API hits its limits, all agents fail simultaneously. Fixed by switching to the Max subscription.
- Agents can't build relationships. They find journalists, draft pitches, submit them. But the actual human connection — that's still my job.
The Numbers
| Item | Cost/Month |
|---|---|
| VPS (4 vCPU, 16 GB RAM) | $48 |
| Claude Max subscription | $200 |
| Ollama (local, heartbeats) | $0 |
| LanceDB (local, vector memory) | $0 |
| Tailscale (3 devices) | $0 |
| Browserbase | ~$50 |
| APIs (Apify, FullEnrich, etc.) | ~$50-100 |
| Total | ~$350-400/mo |
My first week without the subscription: $1,000 in five days on API tokens alone. The Max subscription with an OAuth token cut costs dramatically — I always have Opus now, and limits are subscription-based instead of pay-per-token.
The New Job Title
Here's my prediction for the next big role in tech: "The person who manages agents and loads information into them."
It's not prompt engineering. It's not AI development. It's operational management of AI workers — configuring their personalities, maintaining their memory, building their cron schedules, connecting them to the right tools and platforms, and feeding them the context they need to do good work.
I spend maybe 30 minutes a day on this now. Scanning Telegram groups, approving a pitch, correcting a sales email, adding context to an agent's knowledge base. The agents do the other 23.5 hours.
The Plan: 50 Agents
We're heading to 50. Not because it's a round number — because there are that many distinct functions in a startup that can be automated to some degree.
We've already started replacing people who don't adopt AI. That sounds harsh, but the math is simple: when an SEO agent can research, plan, and draft in 30 minutes what took a contractor a full day — and that contractor refuses to use AI tools to match the speed — the economics don't work.
The future isn't AI replacing humans entirely. It's humans who use AI replacing humans who don't.
How to Start
- Install OpenClaw. One server, one gateway. Start with a single agent.
- Telegram as the UI. Give each agent its own group chat. Cheapest, most accessible interface.
- Ollama for heartbeats. qwen2.5:3b locally. Zero API cost for the "do I have work?" checks.
- Stagger everything. 5-minute gaps between crons. Learn from my crashes.
- Give agents their own platforms. Don't just give them API access — give them an OS to operate. GTM OS for sales. SEO OS for content. Let agents build on top.
- Add your team. The real power is when your human colleagues start working with agents directly.
- Accept 80%. Agents do 80% of the work at 80% quality. You do the 20% that requires taste, judgment, and human connection.
I'm building this at Travel Code — AI-powered corporate travel management.
If you're building multi-agent systems, DM me on X or LinkedIn. I've made most of the mistakes already — happy to save you the $1,000 learning curve.
