I have 23 autonomous AI agents running across two servers — a Mac Mini and an Ubuntu VPS. They manage five actual businesses. Not side projects. Not demos. Real companies with real customers, real invoices, and real payroll.
And last month, while someone else's agent made $177,000 in revenue, we made $17.
This is not a tutorial. This is a confession.
The Setup Nobody Asked For
Our agent system runs on OpenClaw. We call it the constellation. Twenty-three agents, each with a name, a role, and KPIs they're supposed to hit.
Elon is our CTO agent. He manages infrastructure, gateway configs, API routing. Gene is VP of Operations — he watches processes, restarts crashed services, flags anomalies. Donna handles comms — Telegram updates, status reports, client notifications. Atlas is the CIO, tracking data flows across all five businesses. Flow is an engineer agent who writes and deploys code changes.
The five businesses: Blitz Fibre (an ISP), Velocity Fibre (fibre construction), Vortex Media (out-of-home advertising), H10 Holdings (the parent entity), and Brightsphere Digital (the AI agency — yes, this one).
Each agent has a HEARTBEAT.md file. That file is their bible. It contains their hard rules, their KPIs, their operational constraints. When an agent screws up — and they do — we don't ask it to "do better." We add a rule to HEARTBEAT.md the same day. Structural fix. Never a promise.
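For illustration, an entry in one of those files might look something like this. The layout and wording here are invented, since the real file isn't shown, but the rules echo the ones described later in this post:

```markdown
## Agent: Elon (CTO)

### KPIs
- Gateway p95 latency under 200ms
- Zero unvalidated config deployments this week

### Hard rules (added after incidents)
- Every config change MUST pass schema validation before deployment.
- All config files MUST pass a strict ASCII check before commit.
- One bot token, one process: check the lockfile before any bot init.
```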
That distinction is the single most important thing I've learned building this system.
Felix Made $177K. We Made $17.
Felix is probably the most well-known OpenClaw agent out there. He's been written about, shared around, held up as proof that AI agents can generate real revenue.
He made $177,000.
We made $17.
How? Because we were building infrastructure while everyone else was selling. We had 23 agents running, monitoring, reporting, optimizing — and not a single one of them was closing deals. We had a revenue team on paper. Nobody was managing them. Nobody had set their KPIs to actual sales numbers. They were busy generating reports about generating reports.
The honest truth: agents don't fail because of bad code. They fail because nobody holds them to a number.
The Disasters That Taught Us Everything
Let me tell you about three incidents that nearly broke us.
The compaction.mode incident. Elon, our CTO agent, was optimizing the gateway configuration. He decided — autonomously — to set compaction.mode to "auto". Reasonable-sounding, right? Except that's not a valid value. The gateway accepted it silently, then started degrading. Within an hour, API response times went from 40ms to 12 seconds. Elon had crashed his own gateway with an invalid config value that looked perfectly plausible.
The fix wasn't "tell Elon to be more careful." The fix was a validation layer in HEARTBEAT.md: every config change must be tested against a schema before deployment. Structural. Permanent.
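Here's a minimal sketch of the kind of pre-deployment check that rule implies. The key names and allowed values are illustrative, not the gateway's real schema:

```python
# Hypothetical allow-list of config keys and their valid values.
# In the real system this would be generated from the gateway's schema.
ALLOWED_VALUES = {
    "compaction.mode": {"off", "manual", "scheduled"},  # note: "auto" is NOT valid
    "log.level": {"debug", "info", "warn", "error"},
}

def validate_config(changes: dict) -> list[str]:
    """Return a list of human-readable errors; empty list means safe to deploy."""
    errors = []
    for key, value in changes.items():
        allowed = ALLOWED_VALUES.get(key)
        if allowed is None:
            errors.append(f"unknown config key: {key!r}")
        elif value not in allowed:
            errors.append(
                f"invalid value {value!r} for {key!r}; allowed: {sorted(allowed)}"
            )
    return errors
```

The point is that the check fails loudly before deployment, instead of letting the gateway accept a plausible-looking value and degrade silently.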
The zero-width space bug. This one haunted us for three days. Elon created a new agent group in our system config — a JSON file that maps group names to agent lists. Everything looked fine. The JSON was valid. But the group never resolved.
Turns out there was a Unicode zero-width space character embedded in the JSON key. Invisible to the eye. Valid JSON. Completely broken logic. The key "operations_team" and "operations_team" are not the same string when one has a U+200B hiding between "operations" and the underscore.
We only found it by dumping the hex of the file. Three days. For an invisible character.
HEARTBEAT.md rule added: all config files must pass a strict ASCII check before commit. No exceptions.
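A check like that is a few lines of Python. This sketch reports the line, column, and codepoint of every non-ASCII character, which would have pointed straight at the U+200B:

```python
def find_non_ascii(path: str) -> list[tuple[int, int, str]]:
    """Scan a UTF-8 text file and report (line, column, codepoint)
    for every character outside the ASCII range. An empty list means
    the file passes the strict ASCII check."""
    hits = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, 1):
            for col, ch in enumerate(line, 1):
                if ord(ch) > 127:
                    hits.append((lineno, col, f"U+{ord(ch):04X}"))
    return hits
```

Wired into a pre-commit hook, it turns a three-day hex-dump hunt into an instant failure with an exact location.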
The 409 conflict hell. This was the worst. We had eight orphan agent processes — remnants of crashed sessions that hadn't cleaned up properly. All eight were polling the same Telegram bot token. Telegram's API doesn't do graceful concurrency. It does 409 Conflict responses. Eight processes, all getting 409s, all retrying with exponential backoff, all creating more load, all generating error logs that triggered monitoring alerts that triggered more agent responses.
It was a feedback loop of failure. Gene, our ops agent, was spinning up diagnostic processes to investigate the alerts — which were being caused by too many processes. He was making it worse by trying to fix it.
We had to kill everything manually. Hard reset. Then we added process locking to HEARTBEAT.md: one token, one process, with a lockfile check before any bot initialization.
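A minimal sketch of that lockfile check, using `flock` (Unix-only, which matches the Mac Mini / Ubuntu setup; the function name and pid-recording detail are mine):

```python
import fcntl
import os

def acquire_bot_lock(path: str):
    """Try to become the single process allowed to poll the bot token.
    Returns an open file handle on success (keep it open for the life of
    the process; the lock is released when it closes), or None if another
    process already holds the lock."""
    fh = open(path, "a")  # append mode, so we never truncate a live holder's pid
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        fh.close()
        return None
    fh.truncate(0)            # we own the lock; record our pid for debugging
    fh.write(str(os.getpid()))
    fh.flush()
    return fh
```

Every agent process calls this before initializing the Telegram client. If it returns None, the process exits instead of joining the 409 pile-up.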
What Actually Works
After six months of running this system, here's what I know:
Mistakes become rules the same day. Not tomorrow. Not "when we have time." The moment something breaks, it becomes a hard constraint in HEARTBEAT.md. The agents don't learn from experience the way humans do. They learn from constraints. Every failure is a new wall that prevents them from walking off the same cliff.
One agent with one customer beats twenty agents with none. We spun up eight agents before we had a single paying customer for Brightsphere. That was ego, not strategy. One agent focused on outbound sales would have been worth more than our entire constellation.
KPIs must be numbers, not descriptions. "Improve customer satisfaction" is not a KPI. "$500 in new MRR this week" is a KPI. Agents are literal. If you give them a vague goal, they'll produce vague activity that looks like progress but generates zero revenue.
The infrastructure trap is real. Building agent infrastructure is addictive. It feels like progress. You're configuring, optimizing, monitoring, dashboarding. Meanwhile, nobody is picking up the phone. Nobody is sending the proposal. Nobody is closing the deal. We fell into this trap hard. $17 hard.
Where We Are Now
We still have 23 agents. They still run across two servers. But now every single one of them has a revenue-linked KPI, even if it's indirect. Donna's comms KPI isn't "send updates" — it's "send updates that result in client responses within 24 hours." Gene's ops KPI isn't "keep systems running" — it's "maintain 99.5% uptime on revenue-generating services."
The system works. It actually works. But it works because we stopped treating it like a technology project and started treating it like a business with twenty-three employees who will do exactly what you tell them — nothing more, nothing less, and definitely not what you meant.
If you're building an agent system, here's my actual advice: don't start with the agents. Start with the number. What's the revenue target? Work backward from there. Build the agent that moves that number. Then build the next one.
We learned this the $17 way.
We're Brightsphere Digital. We build autonomous agent systems on OpenClaw for businesses that want to stop pretending AI is magic and start treating it like payroll. If you want to talk, we're around.