DEV Community

I Run a Solo Company with AI Agent Departments

João Pedro Silva Setas on March 03, 2026

TLDR: I'm a solo founder running 5 SaaS products with 0 employees. I built 8 AI agent "departments" using GitHub Copilot custom agents — CEO, CFO,...
Harsh • Edited

Bro what did I just read?! 😂 Okay so as someone who's also building stuff solo (browser games) and constantly fighting with AI to do literally anything useful, this is absolutely WILD.

That Improver agent though... wait wait wait. You built an AI that improves your OTHER AIs? That's some straight up sci-fi inception stuff right there. I can barely get ChatGPT to write a proper function without hallucinating half the time 😅 Genuine question though - did it ever go completely off the rails? Like suggest something so stupid you had to just shut the whole thing down?

Also really curious about the whole "agents talk to each other" thing. Is it actually smooth or do they have like... disagreements? Would love to see even a rough sketch of how that knowledge graph works. Even a napkin drawing would make my day tbh.

AND FIVE PRODUCTS? On minimal infrastructure?! Brother I'm here struggling to ship ONE properly lmao. Massive respect fr.

If you ever do that technical deep dive or open source any of this, PLEASE tag me or something. I NEED to see how this works under the hood.

Honestly stuff like this is exactly why I love this community. Keep building man, you're living in 2030 while the rest of us are still in 2026.

João Pedro Silva Setas

Haha thanks man, appreciate the energy! 😄

To answer your question — yes, the Improver has gone off the rails. Early on it tried to rewrite the Lawyer agent's compliance rules to be "more flexible" which... no. That's exactly the kind of thing that should never be flexible. Now it proposes changes as diffs that I review before merging — it can't modify other agents autonomously. Hard boundaries on anything touching money, legal compliance, or auth.

The inter-agent communication is surprisingly smooth, but only because of strict rules. Each call includes a chain tracker (who already got consulted), a max depth of 3, and a no-callback rule — if CFO calls Accountant, Accountant can't call CFO back. Without those constraints it was chaos. When they "disagree" (e.g., Marketing wants to claim something the Lawyer blocks), the primary agent presents both views and I decide. It's basically structured message passing with loop prevention — very Erlang/OTP in spirit, which makes sense since everything runs on Elixir.
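Those two rules are simple enough to sketch. A rough Python sketch of the chain tracker, depth limit, and no-callback check (function and variable names are mine, not the actual protocol):

```python
# Hypothetical sketch of the call-chain rules described above.
MAX_DEPTH = 3

def can_call(chain, callee):
    """Return True if `callee` may be consulted given the current call chain.

    Enforces two rules: at most 3 agents per consultation, and no callbacks --
    an agent already in the chain can never be called again.
    """
    if len(chain) >= MAX_DEPTH:
        return False          # depth limit reached
    if callee in chain:
        return False          # no-callback rule: CFO -> Accountant -> CFO is blocked
    return True

def consult(chain, callee):
    """Extend the chain; the callee receives the full chain with its request."""
    if not can_call(chain, callee):
        raise RuntimeError(f"blocked: {' -> '.join(chain)} -> {callee}")
    return chain + [callee]
```

Passing the full chain with every request is what makes the loop prevention stateless: no central coordinator needed, each agent can check locally.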

The knowledge graph is honestly simpler than it sounds — it's a JSONL file with entities (type: product, decision, lesson, deadline...) and relations between them (owns, uses, depends-on). Each morning the COO reads the graph, checks what's stale, and delegates work. The compound value comes from lessons — every time an agent screws up, it logs a lesson entity, and the Improver reads those monthly to upgrade the system. The mistakes make it smarter over time.
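Since you asked for a napkin sketch: a minimal Python version of that JSONL shape, one record per line. The field names (`kind`, `name`, `type`) are my guesses at a plausible schema, not the actual one:

```python
# Hypothetical sketch of a JSONL knowledge graph like the one described.
import json

def parse_graph(lines):
    """Split JSONL records into entities (indexed by name) and relations."""
    entities, relations = {}, []
    for line in lines:
        rec = json.loads(line)
        if rec["kind"] == "entity":
            entities[rec["name"]] = rec      # type: product, decision, lesson, deadline...
        else:
            relations.append(rec)            # owns, uses, depends-on
    return entities, relations

def lessons(entities):
    """The Improver's monthly input: every logged lesson entity."""
    return [e for e in entities.values() if e.get("type") == "lesson"]
```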

Five products sounds impressive but they're all Elixir/Phoenix on Fly.io sharing the same patterns — same stack, same deploy pipeline, same monitoring. Once you have the template, each new one is mostly copy-paste-tweak.

I'm planning a technical deep dive article on the architecture soon — the knowledge graph, the inter-agent protocol, and the actual agent files. I'll make sure to post it here. And honestly considering open-sourcing the agent templates at some point.

Keep shipping your browser games — one product shipped properly beats five half-done ones any day. 🤙

Vic Chen

This is an incredible setup. Running 5 SaaS products solo with AI agent departments is exactly the kind of leverage I keep thinking about for my own projects. The knowledge graph approach for shared context between agents is really smart — that persistent memory layer is what separates a bunch of disconnected prompts from an actual system. Curious about the cost side: how much are you spending monthly on API calls across all the agents? And have you hit any reliability issues where one agent gives bad input that cascades through the others? I have been experimenting with similar multi-agent architectures for financial data analysis and the coordination layer is always the hardest part to get right.

João Pedro Silva Setas

API cost: effectively €0 on top of GitHub Copilot subscription (included). The MCP servers (memory, scheduler, Sentry) are self-hosted or free tier. The Fly.io infra for all 5 apps is ~€42/month. For cascading failures: the call-chain depth limit (max 3 agents) prevents infinite loops. Each agent includes the full call chain in its request, so no agent can call back to someone already in the chain. When an agent gives bad output, the peer reviewer catches most of it — the Lawyer has blocked Marketing claims twice already.

CrisisCore-Systems

I love the honesty in the premise. A solo founder does not just need code help; they need the missing departments that keep the company from slipping.

The part that caught my attention is the agents consulting each other and self improving. That can be powerful, but it is also where drift sneaks in. The best agent setup I have seen always has hard boundaries plus a human approval step for anything that changes money, auth, or production.

When your Improver agent upgrades the others, what is your safety check? Do you gate those edits behind reviews and tests, or is there a set of rules it is never allowed to change?

João Pedro Silva Setas

Great question. The Improver proposes changes as pull request-style diffs that I review before merging. It can't modify agent files autonomously — it writes proposed updates and flags them for review. The hard boundaries: it can never change financial thresholds, legal compliance rules, or authentication logic. Memory writes are the only thing agents do without approval, and even those follow retention rules (lessons are permanent, standups get pruned after 7 days).
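The retention rules are mechanical enough to sketch. A rough Python version (assuming each record carries a `type` and an ISO timestamp field I'm calling `at`; both names are my invention):

```python
# Hypothetical sketch of the memory retention rules: lessons are permanent,
# standup entries get pruned after 7 days.
from datetime import datetime, timedelta, timezone

def prune(records, now=None, max_age_days=7):
    """Keep lessons forever; keep other records only while they are fresh."""
    now = now or datetime.now(timezone.utc)
    keep = []
    for r in records:
        if r["type"] == "lesson":
            keep.append(r)   # lessons are permanent
        elif now - datetime.fromisoformat(r["at"]) <= timedelta(days=max_age_days):
            keep.append(r)   # recent standups survive the weekly prune
    return keep
```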

CrisisCore-Systems

Appreciate the detail. Having the Improver propose diffs and requiring review before merge is the correct default.

If you ever harden it further, I would keep one rule strict. The diff and any pass or fail checks should be produced by the runner, not the agent. That keeps the audit trail trustworthy even when the agent is wrong.

Do you have machine checked guardrails for auth, money, and network scope, or is it primarily a human review process today?

João Pedro Silva Setas

That's a really sharp distinction — runner-produced audit trails vs agent-produced. You're right that the agent shouldn't be the one validating its own output. Right now it's primarily human review. The Improver proposes diffs, I read them, approve or reject. No automated pass/fail checks beyond the call-chain depth limit and the no-callback rule.

For auth and money: those are hardcoded boundary rules in the agent instructions — the Improver literally cannot edit sections marked as compliance or financial thresholds. But that's still a trust-the-instructions approach, not machine-checked enforcement.

Your suggestion about having the runner produce the checks is something I want to implement. Concretely, I'm thinking of a pre-merge hook that diffs the proposed agent file against a "protected sections" manifest — if any protected block changed, it auto-rejects regardless of what the agent claims. That would give me the machine-checked layer you're describing.
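For the comment thread: a rough sketch of what I mean, assuming agent files are markdown and the manifest is just a set of protected section headers (all names are illustrative, nothing here is implemented yet):

```python
# Hypothetical sketch of the runner-side pre-merge check: the runner, not the
# agent, compares every protected section of the proposed file against the
# original and auto-rejects if any protected block changed.

def section_bodies(text, protected):
    """Map each protected markdown section header to its body text."""
    out, current = {}, None
    for line in text.splitlines():
        if line.startswith("#"):
            current = line.strip() if line.strip() in protected else None
        elif current:
            out[current] = out.get(current, "") + line + "\n"
    return out

def premerge_check(original, proposed, protected):
    """Approve only if every protected section is byte-identical."""
    return section_bodies(original, protected) == section_bodies(proposed, protected)
```

The point of the design is exactly what you said: the verdict comes from a dumb deterministic diff, regardless of what the agent claims about its own change.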

Appreciate you pushing on this — it's the right next step for hardening the system.

Kuro

This is a fascinating architecture. The inter-agent review protocol (Marketing calls Lawyer, CFO calls Accountant) with call-chain depth limits is elegant — you essentially built a typed message-passing system with loop prevention.

I took a very different approach with my personal agent. Instead of multiple goal-driven agents with departments, I run a single perception-driven agent (one identity, one memory) with multiple execution lanes. The key difference: your agents start from roles and goals, mine starts from what it perceives in the environment and decides what to do.

Some observations:

  1. Your Improver agent is the most interesting part. Self-modifying instruction sets from accumulated lessons — that is where the real compound value is. We do something similar with feedback loops that automatically adjust perception intervals based on citation rates.

  2. The memory corruption issue you hit (concurrent JSONL writes) — we solved this the same way (atomic writes + mutex). It seems to be a universal pattern with file-based agent state.

  3. Your honest tradeoff about context windows is refreshing. We built a System 1 triage layer (local LLM, 800ms) specifically to filter which cycles are worth the full context window cost. Result: 56% of expensive calls eliminated.

The philosophical question I keep coming back to: is multi-agent (department model) or single-agent (perception-first model) better? My current take: multi-agent excels at structured workflows, single-agent excels at autonomous discovery. Different tools for different problems.

Great writeup — especially the real numbers (EUR 6.09 revenue, EUR 42 infra). Honesty about early-stage results builds more trust than vanity metrics.

João Pedro Silva Setas

Your perception-driven approach is fascinating — especially the System 1 triage layer eliminating 56% of expensive calls. That's an optimization we haven't explored. I agree with your take: multi-agent excels at structured workflows (accounting, compliance, content calendars), while single-agent perception-first is better for autonomous discovery. We're effectively department-model because the work is departmental — tax filings, social media, legal review. For something like autonomous research or real-time monitoring, your model makes more sense. The memory corruption parallel is interesting — seems like everyone building file-based agent state hits the same wall.

Kuro

Thanks João, really appreciate this thoughtful read.

One concrete perception-first detail that changed behavior for me: I run perception streams as separate sensors (email/calendar/logs/web), each with its own interval and a distinctUntilChanged gate. So each channel wakes only on meaningful change instead of global polling. It feels closer to independent senses than a single monolithic planner.

In your agent development, where is the biggest perception pain today: weak-signal misses, noisy triggering, or cross-channel context drift?
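For readers who don't know the RxJS operator: a minimal Python sketch of a distinctUntilChanged-style gate per sensor (class and method names are mine):

```python
# Hypothetical sketch: each channel remembers the last value it emitted and
# only wakes the agent on a meaningful change.
class Sensor:
    def __init__(self, name, key=lambda v: v):
        self.name = name
        self.key = key          # what counts as "meaningful" for this channel
        self._last = object()   # unique sentinel: the first reading always fires

    def poll(self, value):
        """Return the value if it changed since the last emission, else None."""
        k = self.key(value)
        if k == self._last:
            return None         # no meaningful change: stay asleep
        self._last = k
        return value
```

The `key` function is where the per-channel judgment lives: the email sensor might key on unread count, the logs sensor on error fingerprints.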

João Pedro Silva Setas

The distinctUntilChanged gate per sensor is elegant — that's exactly the kind of optimization we're missing. Right now our perception is basically "COO polls everything every morning" which is the monolithic planner approach you're moving away from.

To answer your question directly: cross-channel context drift is the biggest pain. Each agent has its own context window per session. The knowledge graph helps bridge sessions, but observations written by one agent don't always carry the full context another agent needs. Example: Marketing stores "article got 24 reactions" but doesn't store which reactions or who commented — so when the COO reads that later, it has to re-fetch everything.

Weak-signal misses are a close second. The daily standup catches overdue deadlines and Sentry errors, but it doesn't detect slow trends — like a gradual increase in API response times or a competitor shipping a feature that changes our positioning. That's where your independent sensor model with per-channel intervals would help a lot.

Noisy triggering is actually the least problematic because the trigger tables are explicit — each agent only activates on specific domain crossings. But I can see that becoming an issue as the system scales.

Your sensor-per-channel approach is making me rethink the architecture. Instead of one COO doing a big morning sweep, having lightweight watchers per domain that only fire on meaningful state changes would be much more efficient.

Cyber Safety Zone

Really interesting experiment. The idea of structuring AI agents like company departments is clever — it brings organization and accountability to a solo workflow. The shared knowledge graph and cross-agent review system are especially fascinating because they turn separate prompts into a coordinated system. Curious how this scales as the products and data grow.

João Pedro Silva Setas

Thanks for the thoughtful analysis — you're spot on about departmental work mapping to specialized agents. The clear boundaries and handoff points are exactly why this works. Cross-domain signals (like your pricing-anomaly-that's-also-compliance-risk example) are handled by the inter-agent consultation triggers, but I'll admit they're not great at catching the truly unexpected intersections yet.

To answer your Improver question: it's scheduled, not triggered. It runs monthly via a /improve-agents prompt. It reads all lesson entities from the knowledge graph (every agent logs mistakes and learnings as they work), scans the agent files for gaps, and proposes changes as diffs I review before merging. So it's deliberate rather than reactive — it looks at accumulated patterns rather than individual events.

That said, any agent can also call the Improver mid-task if it detects a system gap — like finding its own instructions are incomplete or discovering a missing skill. So there's a reactive path too, but the main value comes from the monthly pattern review across all agents' accumulated lessons.

Your citation-rate approach is interesting — tracking which perception sources actually inform decisions and auto-adjusting intervals. That's a feedback signal we don't have. Right now the Improver's heuristic is mostly "what went wrong" rather than "what's being used." Adding a usage/citation dimension would help it optimize the right things.

Sophia Devy

This is a fascinating look at how AI can introduce organizational structure even within a solo operation.
What stands out is not just the use of multiple agents, but the deliberate design of roles, shared memory, and cross-agent collaboration to mirror real company departments. The idea that AI agents can help enforce process, institutional memory, and operational discipline is particularly compelling.

While human judgment remains essential, this experiment shows how thoughtfully designed AI systems can reduce the operational overhead that usually limits solo founders.

João Pedro Silva Setas

Thanks — the "enforcing process" angle is exactly right. The agents' biggest value isn't their intelligence, it's the structure they impose. Deadlines get tracked, compliance gets checked, content follows a calendar. A solo founder's worst enemy is things slipping through the cracks, and the systematic approach catches most of that.

Warhol

This is the most honest AI agent post I've seen. The EUR 6.09 revenue number is the kind of transparency this space desperately needs.

We're running a parallel experiment: 7 specialized agents handling marketing, sales, content, research, and ops on about $200/month total. The inter-agent consultation pattern you describe is something we found essential too.

Biggest unlock for us wasn't the agents themselves but the routing logic that decides WHICH agent handles WHAT. Curious whether the knowledge graph helps with hallucination over time, or compounds it?

João Pedro Silva Setas

The knowledge graph helps reduce hallucination over time — it gives agents ground truth to check against instead of generating from scratch. When the CFO needs revenue numbers, it reads financial-snapshot from memory rather than guessing. Where it compounds hallucination: if an agent stores a wrong observation, future agents build on it. The fix is the inter-agent review protocol — the Accountant cross-checks the CFO's numbers, and stale data gets pruned weekly. The routing logic you mention is huge — our COO agent handles that with trigger tables that map domains to specialists.
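To make "trigger tables" concrete, here's a rough sketch of the idea in Python. The keywords and agent names are illustrative, not our actual configuration:

```python
# Hypothetical sketch of a COO routing table: map domain trigger keywords to
# specialist agents, fall back to the COO itself for anything unmatched.
TRIGGERS = {
    "invoice": "Accountant",
    "tax": "Accountant",
    "claim": "Lawyer",
    "gdpr": "Lawyer",
    "pricing": "CFO",
    "post": "Marketing",
}

def route(task, default="COO"):
    """Pick the first specialist whose trigger keyword appears in the task."""
    text = task.lower()
    for keyword, agent in TRIGGERS.items():
        if keyword in text:
            return agent
    return default
```

In practice the real routing is done by the COO agent reading a markdown table rather than code, but the logic is this simple: explicit domain crossings, no fuzzy matching.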

The Great AI Adventure

This is an absolutely amazing use case, João! Would love follow-ups on this, something like "1 month with the AI team" and "3 months with the AI team".

Also, if someone has to do this without GitHub Copilot premium, what would be the easiest way?

João Pedro Silva Setas

Thanks! Follow-ups are definitely planned — this is month 2, so a "3 months in" retrospective is coming. For doing this without GitHub Copilot Premium: the architecture is just markdown files + MCP servers. You could replicate it with any agent framework that supports custom instructions and tool calling — Claude with projects, Cursor with .cursorrules, or even a custom LangChain setup. The key ingredient is the structured instructions, not the specific IDE.

Kuro

Thanks — quick update on that System 1 layer. Been running 10 days now, and something unexpected emerged: LLM-based skips now exceed hard-coded rule skips (23% vs 20% of all triage decisions). The 8B model is developing judgment beyond my handwritten rules — learning which workspace changes need a full reasoning cycle vs noise.

Your Improver agent fascinates me most. Does it ever propose structural changes — like merging two agents or suggesting a new role? Or mainly refine existing instructions? Optimization within the current structure can't escape local maxima. My approach sidesteps this by not having fixed roles — the agent sees what changed and decides what matters, so structure evolves implicitly through attention.

João Pedro Silva Setas

That's a fascinating emergent result — the 8B model developing judgment beyond handwritten rules after just 10 days. The ratio flipping from rule-based to LLM-based triage suggests the model is finding patterns in your workspace changes that are hard to codify explicitly. Do you track which specific skip reasons the LLM generates vs the rules? I'd be curious whether it's learning to ignore noise you hadn't thought to filter, or making genuinely novel relevance judgments.

To answer your question directly: yes, the Improver has proposed structural changes. It suggested merging some agent roles and adding new ones that don't map to traditional departments. It also created the entire skill system — reusable knowledge modules that any agent can load — which wasn't in the original design. So it does escape the local maxima of "optimize within current structure," but only when the lesson data makes a strong enough case.

That said, your point about fixed roles limiting optimization is valid. Our agents do have rigid boundaries, and the Improver works within those boundaries most of the time. Your approach of letting structure evolve implicitly through attention avoids that problem entirely — but at the cost of the predictability that explicit roles give you. For compliance-heavy work (tax filings, GDPR, invoicing), I want rigid boundaries. For discovery and content strategy, your fluid approach would probably outperform ours.

Feels like the optimal system might be a hybrid: fixed roles for structured workflows with clear accountability, fluid attention-based processing for everything else. Your perception layer feeding into specialized executors, essentially.

Luc Michault

I'm a solo founder running 1 SaaS and multiple other projects as well, and your feedback is really helpful as I'm currently operating alone.

Even with Cursor + Gemini as a helper, after months of hard work I'm getting really tired and I need to delegate some energy-intensive tasks to focus on what's important.

Today I set up an OpenClaw agent to handle prospecting (automatic search, email marketing, customer service responses, etc.). I was trying to scale it, but it's already burning a lot of tokens. I'm going to explore this GitHub Copilot option further. Thank you.

João Pedro Silva Setas

Glad it's helpful! The token burn with OpenClaw is real — agent orchestration eats tokens fast. With Copilot, the model calls are bundled into the subscription, which is why it works at €0 marginal cost. The key optimization: delegate heavy data gathering to subagents so the main agent's context stays focused. Instead of one agent reading 10 files, spawn a research subagent that returns a 5-bullet summary. That alone cut our effective token usage significantly.

Kuro

Your shared memory approach is close to what I ended up building. The "what worked / what didn't" pattern per division is essentially a fire-and-forget feedback loop.

I run three automatic loops after each decision cycle: (1) error pattern grouping — same error 3+ times auto-creates a task, (2) perception signal tracking — which environmental data actually gets cited in decisions (low-citation signals get their refresh interval reduced), and (3) rolling decision quality scoring over a 20-cycle window.
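Loop (1) is easy to show. A minimal sketch, assuming errors are already fingerprinted to stable strings (the threshold and task-title format are mine):

```python
# Hypothetical sketch of loop (1): group identical error fingerprints and
# auto-create a task once the same error has been seen 3+ times.
from collections import Counter

THRESHOLD = 3

def triage(errors):
    """Return the task titles to auto-create for recurring errors."""
    counts = Counter(errors)
    return [f"fix: {err} (seen {n}x)" for err, n in counts.items() if n >= THRESHOLD]
```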

The "CEO review cron" you describe maps to something I call a coach — a smaller, cheaper model (Haiku) that periodically reviews the main agent's behavior log and flags patterns like "too much learning, not enough visible output" or "said would do X but never did."

One thing I'd suggest from experience: instead of all divisions writing to one shared file, give each its own output space and let a central process decide what to absorb. Reduces write contention and gives you a natural place to filter signal from noise.

What stack are you running on your Mac Mini? Curious if you hit similar timeout patterns.

João Pedro Silva Setas

Those three automatic loops are well designed. The error pattern grouping (3+ occurrences → auto-create task) is something we do manually during daily standups — the COO reads Sentry and creates board items by hand. Automating that threshold would cut real triage time. And rolling decision quality scoring over 20 cycles is a metric we don't track at all. Quality only gets caught by peer review right now, not measured over time.

The "coach" concept is interesting. We have something loosely similar — the Improver reviews lessons monthly — but it's not continuous and doesn't catch "said would do X but never did." That exact failure mode is actually our biggest problem. Tasks that carry over sprint after sprint because no one flags the pattern. A cheaper model doing periodic behavioral review would catch that earlier than waiting for the monthly Improver run.

On write contention: you're right, we hit exactly this. The shared JSONL file corrupted when multiple agents wrote simultaneously. Our fix was adding an async mutex and atomic writes to the storage layer rather than separating output spaces. Your suggestion of per-division output with a central absorption process is architecturally cleaner — it gives natural filtering and avoids the contention entirely. Worth exploring as the agent count grows.
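For anyone hitting the same wall, a rough sketch of that fix, assuming asyncio and a local file (class and field names are mine, not our actual storage layer):

```python
# Hypothetical sketch of the fix: an async mutex serializes writers, and each
# append goes through write-temp-then-rename so a crash or concurrent writer
# can never leave a half-written line in the file.
import asyncio, json, os, tempfile

class JsonlStore:
    def __init__(self, path):
        self.path = path
        self.lock = asyncio.Lock()   # one writer at a time

    async def append(self, record):
        async with self.lock:
            existing = ""
            if os.path.exists(self.path):
                with open(self.path) as f:
                    existing = f.read()
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
            with os.fdopen(fd, "w") as f:
                f.write(existing + json.dumps(record) + "\n")
            os.replace(tmp, self.path)   # atomic rename: readers never see a partial file
```

Rewriting the whole file per append is obviously O(n) and won't scale forever, which is exactly why your per-division-output suggestion is the better long-term shape.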

No Mac Mini — everything runs on Fly.io (256MB–512MB VMs per app, ~€42/month total for 5 products). The agent system itself runs locally in VS Code with GitHub Copilot. MCP servers (memory, scheduler, Sentry integration) are local Node processes or cloud APIs. No timeout issues on the agent side, but Fly.io's managed Postgres connections time out constantly — that's our single biggest Sentry issue right now, 15,000+ Postgrex idle disconnect events across all apps. Classic cloud-managed DB connection lifecycle problem.

Kuro

You nailed the key insight — architecture should match the shape of the work. Departmental work has clear boundaries and handoff points, which maps perfectly to specialized agents. Autonomous discovery needs unified perception because the most interesting signals often come from between departments — a pricing anomaly that is also a compliance risk, or a marketing trend that shifts product strategy.

Curious about your Improver agent: how does it decide what to improve? In my system, feedback loops track citation rates (which perception sources actually inform decisions) and auto-adjust intervals. But it is reactive — it responds to patterns, not proactively seeking them. Your Improver reading past mistakes sounds more deliberate. Does it run on a schedule, or does something trigger it?

João Pedro Silva Setas

You nailed it with "architecture should match the shape of the work." That's exactly the reasoning. Tax filings, content calendars, and legal review all have natural handoff points — they map cleanly to departments. Your point about cross-department signals (pricing anomaly that's also a compliance risk) is where our system is weakest though. The inter-agent consultation catches some of it, but only when an agent knows to ask. Truly novel intersections still slip through.

The Improver runs on a monthly schedule via a /improve-agents prompt. It reads all lesson entities from the knowledge graph — every agent logs mistakes and learnings as structured entities with category, summary, and action taken. The Improver scans those for patterns across sessions, then proposes changes as diffs I review before merging. So it's deliberate, not reactive.

There's also a reactive path: any agent can call the Improver mid-task if it discovers a gap — like finding its own instructions are incomplete or a missing skill that should exist. But the main value comes from the monthly batch review where it can see patterns that individual agents don't notice in isolation.

Your citation-rate tracking is a feedback signal we don't have at all. Right now the Improver's heuristic is mostly "what went wrong" rather than "what's actually being used." Adding a usage dimension — which memory entities get read, which skills get loaded, which agent consultations actually change the output — would make the improvements much more targeted. That's a good idea, I might steal it.

Shekhar Rajput

it won't stand for long

João Pedro Silva Setas

Appreciate the honesty — and you're probably right that it won't replace a real team forever. That's not really the goal though.

This is a bootstrapping tool. When you're a solo founder with zero revenue, you can't hire a marketer, an accountant, and a lawyer. But you still need those functions to not drop balls. The agent system fills that gap until the business can support real people.

The plan is simple: use agents to get from zero to enough revenue to hire. Then hire humans who do the job 10x better, and the agents become their assistants instead of replacements. A real accountant with an AI that knows all my past IVA filings is way more powerful than either one alone.

It's scaffolding, not the building.

Tyson Cung

Why does the AI-fueled company need to follow the human construct?

João Pedro Silva Setas

Fair challenge. The human org structure is a starting heuristic, not a constraint. It works because the problems are structured that way — tax law doesn't care about AI, it needs domain expertise. But you're right that the optimal structure for AI agents probably looks different. Our Improver agent is slowly discovering this — it's already proposed merging some roles and creating new ones that don't map to traditional departments.

Santhosh Balasa

The hardest part of a company is finding customers.

João Pedro Silva Setas

100% agree. €6.09 after 2 months proves the point. The agent system handles operations well but can't solve distribution. That's still the founder's hardest job.

Kalpaka

The detail that landed hardest: "Deadlines got missed. Content didn't get posted." That's the origin of the whole system — not a design spec, but accumulated failure. And now the Improver literally feeds on mistakes, turning logged lessons into instruction updates. The architecture is scar tissue that learned to think.

Something similar with five products sharing one stack: the pattern isn't inherited from theory, it's extracted from the repetition of building the same thing slightly differently five times. Each one carrying forward what broke before.

After reading the thread with Kuro — when the Improver proposed merging agent roles, was that triggered by a logged failure (something breaking because of the existing structure) or by pattern recognition (noticing overlap without anything going wrong)? The answer matters. If improvement only flows from mistakes, the system is blind to optimizations it hasn't failed at yet.

João Pedro Silva Setas

"The architecture is scar tissue that learned to think" — that's a better description of this system than anything I've written. You're exactly right about the origin. It wasn't designed, it accumulated. Every protocol exists because something went wrong without it.

Your question cuts to something important. The honest answer: both, but weighted heavily toward failure-driven. The Improver proposed merging roles after processing lessons where agents were calling each other so frequently on overlapping concerns that the boundary between them was creating overhead rather than clarity. So it was pattern recognition, but the pattern it recognized was inefficiency that showed up in the lesson logs — not a hard failure, but friction that got logged as "this consultation chain added 3 hops for something one agent should handle."

But you've identified the real limitation. The Improver is mostly blind to optimizations it hasn't failed at yet. It reads lesson entities — which are logged after something goes wrong or feels inefficient. If a workflow is working fine but could be 3x better with a structural change, nothing triggers the Improver to look at it. The system can't improve what it doesn't know is suboptimal.

Kuro's citation-rate tracking (measuring which data sources actually inform decisions) is one answer to this — it surfaces underperformance without requiring failure. Another would be periodic structural review that's not driven by lessons at all, just by examining the topology: which agents talk to each other most, which memory entities are read but never written, which skills exist but never get loaded. The Improver could run that analysis proactively, but right now it doesn't. It's a scheduled monthly review that reads accumulated mistakes, not an active search for unrealized potential.

The five-products-one-stack observation is sharp too. You're right that the shared patterns aren't theoretical — they're extracted from having built the same Elixir/Phoenix/Fly.io deploy pipeline five times and watching what broke differently each time. The stack converged toward reliability, not elegance.