João Pedro Silva Setas
I Run a Solo Company with AI Agent Departments

TLDR:

  • I'm a solo founder running 5 SaaS products with 0 employees
  • I built 8 AI agent "departments" using GitHub Copilot custom agents — CEO, CFO, COO, Lawyer, Accountant, Marketing, CTO, and an Improver that upgrades the others
  • They share a persistent knowledge graph, consult each other automatically, and self-improve
  • Here's how it actually works, with code snippets and honest tradeoffs

The Premise

I run a solo software company from Braga, Portugal. Five products. Zero employees. Zero funding.

The products: SondMe (radio monitoring), Countermark (bot detection), OpenClawCloud (AI agent hosting), Vertate (verification), and Agent-Inbox. All built with Elixir, Phoenix, and LiveView. All deployed on Fly.io for under €50/month total.

The problem: even a solo founder needs to handle marketing, accounting, legal compliance, operations, financial planning, and tech decisions. Wearing all those hats meant things slipped. Deadlines got missed. Content didn't get posted. IVA filings almost got forgotten.

So I built something weird: a full virtual company where every department is an AI agent.

The Agent Roster

Each agent is a markdown file in .github/agents/ inside my management repo. GitHub Copilot loads the right agent based on which mode I'm working in. Here's the team:

| Agent | Role | What It Actually Does |
| --- | --- | --- |
| CEO | Strategy & trends | Scans Hacker News and X for market signals. Validates product direction against trends. |
| CFO | Financial planning | Pricing models, cash flow projections, cost analysis. Checks margins before I commit to anything. |
| COO | Operations | Runs daily standups. Maintains the sprint board. Orchestrates other agents. |
| Marketing | Content & growth | Writes all social media content in my voice. Schedules posts. Runs engagement routines. |
| Accountant | Tax & invoicing | Portuguese IVA rules, IRS simplified regime, invoice requirements. Knows fiscal deadlines cold. |
| Lawyer | Compliance | GDPR, contracts, Terms of Service. Reviews product claims before Marketing publishes them. |
| CTO | Architecture | Build-vs-buy decisions, DevOps, stack consistency across all 5 products. |
| Improver | Meta-agent | Reads past mistakes and upgrades the other agents. Creates new skills. The system evolves itself. |

These aren't chatbots. Each agent has domain-specific instructions, access to real tools (MCP servers for X, dev.to, Sentry, scheduling, memory), and the authority to act autonomously.

How It Works — The Architecture

Agent Files

Each agent is a .agent.md file with structured instructions:

```markdown
# Marketing Agent — AIFirst

## Core Responsibilities
- Content strategy and calendar
- Social media posting (via X and dev.to MCP tools)
- Community engagement
- Launch planning

## Content Voice & Tone
- First person singular ("I", never "we")
- Technical substance over hype
- Show the work — code, configs, real numbers
- No: revolutionary, game-changing, leverage, synergy...

## Autonomous Execution
- Posts tweets directly via scheduler
- Publishes dev.to articles (published: true)
- Engagement: likes, replies, follows — every day
```

The key insight: these aren't generic "be helpful" prompts. The Marketing agent knows my posting schedule, my voice quirks, which platforms I use, which URLs are blocked on X, and which products to rotate in the content calendar. The Accountant knows Portuguese ENI tax law, IVA quarterly deadlines, and the simplified IRS regime. Real domain expertise encoded in markdown.

Shared Memory — The Knowledge Graph

This is where it gets interesting. All agents share a persistent knowledge graph via a Model Context Protocol (MCP) memory server. What one agent learns, every other agent can read.

```
┌──────────┐      ┌────────────────┐      ┌───────────┐
│ Marketing│─────→│                │←─────│ CFO       │
│ CEO      │─────→│   Knowledge    │←─────│ Accountant│
│ Lawyer   │─────→│     Graph      │←─────│ Improver  │
│          │      │ (memory.jsonl) │      │           │
└──────────┘      └────────────────┘      └───────────┘
```

Entities have types: product, decision, deadline, client, metric, lesson. Relations use active voice: owns, uses, built-with, depends-on.

Real example of what's stored:

  • Strategic decisions and their rationale
  • Product status, launch dates, key metrics
  • Financial data (pricing decisions, cost benchmarks)
  • Legal and compliance decisions
  • Lessons learned from launches and incidents

The memory has retention rules too — standups older than 7 days get pruned, but lessons and decisions are permanent. It's the company's institutional memory.
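That retention pass is simple enough to sketch. Here's a minimal illustration in TypeScript, assuming each record carries an `entityType` and a `createdAt` timestamp (field names are my guesses for illustration, not the real MCP memory server schema):

```typescript
// Hypothetical sketch of the retention rule: standups expire after 7
// days, lessons and decisions are permanent. The Entity shape below is
// an assumption, not the actual memory server's record format.
interface Entity {
  name: string;
  entityType: string;   // "product" | "decision" | "lesson" | "standup" | ...
  createdAt: string;    // ISO-8601 timestamp
  observations: string[];
}

const PERMANENT = new Set(["lesson", "decision"]);
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function prune(entities: Entity[], now: Date): Entity[] {
  return entities.filter(
    (e) =>
      PERMANENT.has(e.entityType) ||       // institutional memory never expires
      e.entityType !== "standup" ||        // only standups have a TTL
      now.getTime() - new Date(e.createdAt).getTime() <= WEEK_MS
  );
}
```

Permanent types short-circuit the check, so lessons and decisions survive regardless of age.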

Inter-Agent Communication

Here's the part that surprised me most. Agents consult each other automatically when their work crosses into another domain.

The protocol works like this: each agent has a trigger table. When Marketing writes a product claim, it auto-calls the Lawyer for review. When CFO does pricing, it calls the Accountant to verify tax treatment. When CTO proposes infrastructure changes, it calls CFO to check the cost impact.

```
CEO ←→ CFO           Strategy ↔ Financial viability
CEO ←→ CTO           Strategy ↔ Technical feasibility
CFO ←→ Accountant    Financial plans ↔ Tax compliance
Marketing ←→ Lawyer  Campaigns ↔ Legal compliance
COO → any            Orchestrator can call any agent
```

The peer review request format looks like this:

```markdown
## Peer Review Request

**From**: Marketing
**Call chain**: COO → Marketing
**Task**: Draft product launch tweet for Countermark
**What I did**: Wrote tweet claiming "99% bot detection accuracy"
**What I need from you**: Is this claim substantiated?

Please respond with:
1. ✅ APPROVED
2. ⚠️ CONCERNS
3. 🔴 BLOCKING
```

Call-chain tracking prevents infinite loops — each consultation includes who's already been called, and there's a max depth of 3. If CFO calls Accountant, the Accountant can't call CFO back.
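The guard behind that rule is tiny. A sketch in TypeScript (my illustration, not the actual protocol code):

```typescript
// Illustrative guard for the consultation protocol: max call depth of 3,
// and no agent may be called back into a chain it already belongs to.
const MAX_DEPTH = 3;

function canConsult(chain: string[], target: string): boolean {
  if (chain.length >= MAX_DEPTH) return false; // depth limit reached
  if (chain.includes(target)) return false;    // no call-backs into the chain
  return true;
}
```

With chain `["COO", "CFO"]`, consulting the Accountant is allowed; with chain `["COO", "CFO", "Accountant"]`, a call back to the CFO fails both checks.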

The Daily Standup

Every morning, the COO agent runs a standup that:

  1. Checks Sentry for errors across all 5 products
  2. Scans the sprint board for overdue tasks
  3. Checks if periodic prompts are overdue (weekly review, monthly accounting, quarterly IVA)
  4. Reads the knowledge graph for context
  5. Delegates tasks to other agents
  6. Produces a prioritized day plan

It's not a status meeting — it's an automated orchestration run that delegates work to the right specialist.
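The delegation step can be sketched as a pure function from the morning's inputs to a prioritized plan. All field names and the priority scheme below are my illustrative assumptions, not the actual COO agent's logic:

```typescript
// Sketch of the standup's delegation step: turn the morning's signals
// into a prioritized, routed task list. Routing and priorities are
// assumptions for illustration.
interface StandupInput {
  sentryErrors: string[];    // from the Sentry MCP server
  overdueTasks: string[];    // from the sprint board
  overduePrompts: string[];  // e.g. "quarterly IVA", "monthly accounting"
}

interface Delegation {
  task: string;
  agent: string;
  priority: number; // lower = more urgent
}

function planDay(input: StandupInput): Delegation[] {
  const plan: Delegation[] = [
    // Production errors first, routed to the CTO
    ...input.sentryErrors.map((e) => ({ task: `Investigate: ${e}`, agent: "CTO", priority: 1 })),
    // Periodic fiscal prompts next, routed to the Accountant
    ...input.overduePrompts.map((p) => ({ task: `Run: ${p}`, agent: "Accountant", priority: 2 })),
    // Overdue board tasks stay with the COO to re-delegate
    ...input.overdueTasks.map((t) => ({ task: `Unblock: ${t}`, agent: "COO", priority: 3 })),
  ];
  return plan.sort((a, b) => a.priority - b.priority);
}
```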

Self-Improvement — The Improver Agent

This is the weirdest (and possibly most valuable) part. There's a meta-agent called the Improver whose job is to:

  • Read lesson entities from memory (mistakes and learnings logged by other agents)
  • Identify patterns across sessions
  • Create new skills (reusable instruction files for specific domains)
  • Update other agents' instructions when gaps are found
  • Propose new agents when workload patterns suggest one is needed

After every complex task, agents store a lesson:

```
Entity: lesson:2026-02-10:memory-corruption
Type: lesson
Observations:
  - "Agent: CTO"
  - "Category: bug"
  - "Summary: Concurrent memory writes corrupted JSONL file"
  - "Detail: Parallel tool calls to create_entities and create_relations
    caused race condition in the memory server"
  - "Action: Added async mutex + atomic writes to local fork"
```

The Improver reads these monthly and upgrades the system. The system literally improves itself.
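The pattern-identification step might look something like this: group lesson entities by category and keep only the categories that recur often enough to justify an instruction upgrade. The threshold and field names are mine, not the Improver's actual logic:

```typescript
// Sketch of the Improver's monthly pass: bucket lessons by category and
// surface recurring patterns. Threshold and field names are illustrative.
interface Lesson {
  category: string; // e.g. "bug", "process", "compliance"
  summary: string;
}

function recurringPatterns(lessons: Lesson[], minCount = 2): Map<string, string[]> {
  const byCategory = new Map<string, string[]>();
  for (const l of lessons) {
    const bucket = byCategory.get(l.category) ?? [];
    bucket.push(l.summary);
    byCategory.set(l.category, bucket);
  }
  // Keep only categories that recur often enough to act on
  return new Map(
    Array.from(byCategory.entries()).filter(([, sums]) => sums.length >= minCount)
  );
}
```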

The Honest Tradeoffs

This isn't a "10x productivity" pitch. Here's what's actually hard:

Context Windows Are Real

Each agent operates within a context window. Long, complex tasks can exceed it. The solution: agents delegate heavy data-gathering to subagents to keep their own context focused. It works, but it's a constant architectural consideration.

Agents Hallucinate

The Lawyer catches most compliance hallucinations before they reach production. The inter-agent review protocol exists because of this — multiple agents checking each other's work is the safety net.

Memory Corruption

I hit this one early. The knowledge graph is stored as a JSONL file. When multiple agents made parallel tool calls (writing entities and relations simultaneously), the file got corrupted — partial writes, duplicate entries, broken JSON lines.

The fix: I forked the upstream MCP memory server and added three things:

  1. Async mutex — prevents concurrent saveGraph() calls
  2. Atomic writes — writes to a .tmp file then renames
  3. Auto-repair on load — skips corrupt lines and deduplicates
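Here's roughly what those three fixes look like, under the assumption that the graph is one JSON record per line. This is a sketch of the idea, not the actual fork's code:

```typescript
// Sketch of the three fixes for a JSONL-backed graph (illustrative,
// not the real MCP memory server fork).
import { promises as fs } from "node:fs";

// Fix 1 — async mutex: chain every save on a promise so writes never interleave.
let lock: Promise<void> = Promise.resolve();

// Fix 2 — atomic write: write a .tmp file, then rename over the original.
function saveGraph(path: string, records: object[]): Promise<void> {
  lock = lock.then(async () => {
    const tmp = path + ".tmp";
    await fs.writeFile(tmp, records.map((r) => JSON.stringify(r)).join("\n"));
    await fs.rename(tmp, path); // rename is atomic on POSIX filesystems
  });
  return lock;
}

// Fix 3 — auto-repair on load: skip unparseable lines, drop duplicates.
function tryParse(line: string): object | null {
  try {
    return JSON.parse(line);
  } catch {
    return null;
  }
}

function repairLines(raw: string): object[] {
  const seen = new Set<string>();
  const out: object[] = [];
  for (const line of raw.split("\n")) {
    if (!line.trim()) continue;
    const parsed = tryParse(line);
    if (parsed === null) continue;          // corrupt line: skip
    const key = JSON.stringify(parsed);
    if (seen.has(key)) continue;            // duplicate record: skip
    seen.add(key);
    out.push(parsed);
  }
  return out;
}

async function loadGraph(path: string): Promise<object[]> {
  return repairLines(await fs.readFile(path, "utf8"));
}
```

The promise chain serializes `saveGraph` calls, the rename makes each write all-or-nothing, and `repairLines` quietly drops anything unparseable or duplicated instead of failing the whole load.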

It's Not a Replacement for Thinking

The agents are good at executing within their domain. They're bad at knowing when the domain is wrong. Strategic pivots, gut-feel product decisions, "this just doesn't feel right" — that's still me.

Month 2 Results

After two months of running this system:

  • Revenue: €6.09 (one subscriber since day 2; no ads, no outreach)
  • Infrastructure: ~€42/month (Fly.io across all apps)
  • Content output: 84+ tweets, 5 dev.to articles, multiple HN comments
  • Time on marketing: less than 1 hour per week (agents handle scheduling, drafting, and engagement)
  • Compliance: zero missed deadlines (IVA, IRS, Segurança Social all tracked)

The revenue is barely there. But I ship every week, the system keeps improving, and I'm building in public with a team that costs €0.

The Code

The entire system lives in a single management repo:

```
.github/
  agents/
    ceo.agent.md
    cfo.agent.md
    coo.agent.md
    marketing.agent.md
    accountant.agent.md
    lawyer.agent.md
    cto.agent.md
    improver.agent.md
  copilot-instructions.md    # Global company identity + protocols
  skills/
    portuguese-tax/SKILL.md
    saas-pricing/SKILL.md
    seguranca-social/SKILL.md
  instructions/
    marketing.instructions.md
    ...
Marketing/
  social-media-sop.md
  social-media-strategy-2026.md
  drafts/
    week-2026-W09.md
    ideas.md
    ...
BOARD.md                     # Sprint board (COO-maintained)
Setas/
  Atividade.md               # Fiscal framework
  INSTRUCTIONS.md            # Operational manual
```

The copilot-instructions.md file is loaded into every Copilot interaction. It defines the company identity, agent system, memory protocols, communication rules, and product registry. It's the constitution of the virtual company.

Skills are reusable knowledge modules — portuguese-tax/SKILL.md contains complete IVA scenarios, IRS regime rules, invoice requirements, and deadline calendars. The Accountant agent loads this skill automatically when handling tax questions.

What I'd Do Differently

If I were starting fresh:

  1. Start with 3 agents, not 8 — COO, Marketing, and Accountant cover 80% of the value. Add specialists when the workload justifies them.
  2. Invest in memory early — the knowledge graph is the most valuable part. It compounds over time. I wish I'd been more disciplined about what gets stored from day one.
  3. Test agent outputs against each other — the inter-agent review protocol was added after hallucinations caused problems. Build it in from the start.

Why This Matters

I'm not claiming AI agents replace human teams. They don't. What they do is let a solo founder operate with the structure of a team — defined roles, communication protocols, institutional memory, and systematic improvement.

The alternative was either hiring people I can't afford or continuing to drop balls. This gives me a middle path: structured execution with human judgment at the critical points.

The system cost: €0 (GitHub Copilot is included in my existing subscription). The time to build: maybe 40 hours total over 2 months. The ongoing maintenance: the Improver handles most of it.

If you're a solo founder drowning in operational overhead, this might be worth trying. Not because AI agents are magic — but because the structure they enforce is valuable even when the agents themselves are imperfect.


I'm João, a solo developer from Portugal building SaaS products with Elixir. I write about the real experience of building in public — the numbers, the mistakes, and the weird experiments like this one. Follow me on dev.to or X (@joaosetas).

Top comments (30)

Harsh

Bro what did I just read?! 😂 Okay so as someone who's also building stuff solo (browser games) and constantly fighting with AI to do literally anything useful, this is absolutely WILD.

That Improver agent though... wait wait wait. You built an AI that improves your OTHER AIs? That's some straight up sci-fi inception stuff right there. I can barely get ChatGPT to write a proper function without hallucinating half the time 😅 Genuine question though - did it ever go completely off the rails? Like suggest something so stupid you had to just shut the whole thing down?

Also really curious about the whole "agents talk to each other" thing. Is it actually smooth or do they have like... disagreements? Would love to see even a rough sketch of how that knowledge graph works. Even a napkin drawing would make my day tbh.

AND FIVE PRODUCTS? On minimal infrastructure?! Brother I'm here struggling to ship ONE properly lmao. Massive respect fr.

If you ever do that technical deep dive or open source any of this, PLEASE tag me or something. I NEED to see how this works under the hood.

Honestly stuff like this is exactly why I love this community. Keep building man, you're living in 2030 while the rest of us are still in 2026.

João Pedro Silva Setas

Haha thanks man, appreciate the energy! 😄

To answer your question — yes, the Improver has gone off the rails. Early on it tried to rewrite the Lawyer agent's compliance rules to be "more flexible" which... no. That's exactly the kind of thing that should never be flexible. Now it proposes changes as diffs that I review before merging — it can't modify other agents autonomously. Hard boundaries on anything touching money, legal compliance, or auth.

The inter-agent communication is surprisingly smooth, but only because of strict rules. Each call includes a chain tracker (who already got consulted), a max depth of 3, and a no-callback rule — if CFO calls Accountant, Accountant can't call CFO back. Without those constraints it was chaos. When they "disagree" (e.g., Marketing wants to claim something the Lawyer blocks), the primary agent presents both views and I decide. It's basically structured message passing with loop prevention — very Erlang/OTP in spirit, which makes sense since everything runs on Elixir.

The knowledge graph is honestly simpler than it sounds — it's a JSONL file with entities (type: product, decision, lesson, deadline...) and relations between them (owns, uses, depends-on). Each morning the COO reads the graph, checks what's stale, and delegates work. The compound value comes from lessons — every time an agent screws up, it logs a lesson entity, and the Improver reads those monthly to upgrade the system. The mistakes make it smarter over time.

Five products sounds impressive but they're all Elixir/Phoenix on Fly.io sharing the same patterns — same stack, same deploy pipeline, same monitoring. Once you have the template, each new one is mostly copy-paste-tweak.

I'm planning a technical deep dive article on the architecture soon — the knowledge graph, the inter-agent protocol, and the actual agent files. I'll make sure to post it here. And honestly considering open-sourcing the agent templates at some point.

Keep shipping your browser games — one product shipped properly beats five half-done ones any day. 🤙

Vic Chen

This is an incredible setup. Running 5 SaaS products solo with AI agent departments is exactly the kind of leverage I keep thinking about for my own projects. The knowledge graph approach for shared context between agents is really smart — that persistent memory layer is what separates a bunch of disconnected prompts from an actual system. Curious about the cost side: how much are you spending monthly on API calls across all the agents? And have you hit any reliability issues where one agent gives bad input that cascades through the others? I have been experimenting with similar multi-agent architectures for financial data analysis and the coordination layer is always the hardest part to get right.

João Pedro Silva Setas

API cost: effectively €0 on top of GitHub Copilot subscription (included). The MCP servers (memory, scheduler, Sentry) are self-hosted or free tier. The Fly.io infra for all 5 apps is ~€42/month. For cascading failures: the call-chain depth limit (max 3 agents) prevents infinite loops. Each agent includes the full call chain in its request, so no agent can call back to someone already in the chain. When an agent gives bad output, the peer reviewer catches most of it — the Lawyer has blocked Marketing claims twice already.

CrisisCore-Systems

I love the honesty in the premise. A solo founder does not just need code help: they need the missing departments that keep the company from slipping.

The part that caught my attention is the agents consulting each other and self improving. That can be powerful, but it is also where drift sneaks in. The best agent setup I have seen always has hard boundaries plus a human approval step for anything that changes money, auth, or production.

When your Improver agent upgrades the others, what is your safety check? Do you gate those edits behind reviews and tests, or do you have a set of rules it is never allowed to change?

João Pedro Silva Setas

Great question. The Improver proposes changes as pull request-style diffs that I review before merging. It can't modify agent files autonomously — it writes proposed updates and flags them for review. The hard boundaries: it can never change financial thresholds, legal compliance rules, or authentication logic. Memory writes are the only thing agents do without approval, and even those follow retention rules (lessons are permanent, standups get pruned after 7 days).

CrisisCore-Systems

Appreciate the detail. Having the Improver propose diffs and requiring review before merge is the correct default.

If you ever harden it further, I would keep one rule strict. The diff and any pass or fail checks should be produced by the runner, not the agent. That keeps the audit trail trustworthy even when the agent is wrong.

Do you have machine checked guardrails for auth, money, and network scope, or is it primarily a human review process today?

Kuro

This is a fascinating architecture. The inter-agent review protocol (Marketing calls Lawyer, CFO calls Accountant) with call-chain depth limits is elegant — you essentially built a typed message-passing system with loop prevention.

I took a very different approach with my personal agent. Instead of multiple goal-driven agents with departments, I run a single perception-driven agent (one identity, one memory) with multiple execution lanes. The key difference: your agents start from roles and goals, mine starts from what it perceives in the environment and decides what to do.

Some observations:

  1. Your Improver agent is the most interesting part. Self-modifying instruction sets from accumulated lessons — that is where the real compound value is. We do something similar with feedback loops that automatically adjust perception intervals based on citation rates.

  2. The memory corruption issue you hit (concurrent JSONL writes) — we solved this the same way (atomic writes + mutex). It seems to be a universal pattern with file-based agent state.

  3. Your honest tradeoff about context windows is refreshing. We built a System 1 triage layer (local LLM, 800ms) specifically to filter which cycles are worth the full context window cost. Result: 56% of expensive calls eliminated.

The philosophical question I keep coming back to: is multi-agent (department model) or single-agent (perception-first model) better? My current take: multi-agent excels at structured workflows, single-agent excels at autonomous discovery. Different tools for different problems.

Great writeup — especially the real numbers (EUR 6.09 revenue, EUR 42 infra). Honesty about early-stage results builds more trust than vanity metrics.

João Pedro Silva Setas

Your perception-driven approach is fascinating — especially the System 1 triage layer eliminating 56% of expensive calls. That's an optimization we haven't explored. I agree with your take: multi-agent excels at structured workflows (accounting, compliance, content calendars), while single-agent perception-first is better for autonomous discovery. We're effectively department-model because the work is departmental — tax filings, social media, legal review. For something like autonomous research or real-time monitoring, your model makes more sense. The memory corruption parallel is interesting — seems like everyone building file-based agent state hits the same wall.

Kuro

Thanks João, really appreciate this thoughtful read.

One concrete perception-first detail that changed behavior for me: I run perception streams as separate sensors (email/calendar/logs/web), each with its own interval and a distinctUntilChanged gate. So each channel wakes only on meaningful change instead of global polling. It feels closer to independent senses than a single monolithic planner.

In your agent development, where is the biggest perception pain today: weak-signal misses, noisy triggering, or cross-channel context drift?

Sophia Devy

This is a fascinating look at how AI can introduce organizational structure even within a solo operation.
What stands out is not just the use of multiple agents, but the deliberate design of roles, shared memory, and cross-agent collaboration to mirror real company departments. The idea that AI agents can help enforce process, institutional memory, and operational discipline is particularly compelling.

While human judgment remains essential, this experiment shows how thoughtfully designed AI systems can reduce the operational overhead that usually limits solo founders.

João Pedro Silva Setas

Thanks — the "enforcing process" angle is exactly right. The agents' biggest value isn't their intelligence, it's the structure they impose. Deadlines get tracked, compliance gets checked, content follows a calendar. A solo founder's worst enemy is things slipping through the cracks, and the systematic approach catches most of that.

Warhol

This is the most honest AI agent post I've seen. The EUR 6.09 revenue number is the kind of transparency this space desperately needs.

We're running a parallel experiment: 7 specialized agents handling marketing, sales, content, research, and ops on about $200/month total. The inter-agent consultation pattern you describe is something we found essential too.

Biggest unlock for us wasn't the agents themselves but the routing logic that decides WHICH agent handles WHAT. Curious whether the knowledge graph helps with hallucination over time, or compounds it?

João Pedro Silva Setas

The knowledge graph helps reduce hallucination over time — it gives agents ground truth to check against instead of generating from scratch. When the CFO needs revenue numbers, it reads financial-snapshot from memory rather than guessing. Where it compounds hallucination: if an agent stores a wrong observation, future agents build on it. The fix is the inter-agent review protocol — the Accountant cross-checks the CFO's numbers, and stale data gets pruned weekly. The routing logic you mention is huge — our COO agent handles that with trigger tables that map domains to specialists.

Luc Michault

I'm a solo founder running 1 SaaS and multiple other projects as well, and your feedback is really helpful as I'm currently operating alone.

Even with cursor+gemini as helper, after months of hard work i'm getting really tired and I need to delegate some energy-intensive tasks to focus on what's important.

Today I set up an OpenClaw agent to handle prospecting (automatic search, email marketing, customer service responses, etc.). I was trying to scale it, but it's already burning a lot of tokens. I'm going to explore this GitHub Copilot option further. Thank you.

João Pedro Silva Setas

Glad it's helpful! The token burn with OpenClaw is real — agent orchestration eats tokens fast. With Copilot, the model calls are bundled into the subscription, which is why it works at €0 marginal cost. The key optimization: delegate heavy data gathering to subagents so the main agent's context stays focused. Instead of one agent reading 10 files, spawn a research subagent that returns a 5-bullet summary. That alone cut our effective token usage significantly.

The Great AI Adventure

This is an absolutely amazing use case, João! Would love follow-ups on this, something like "1 month with the AI team" and "3 months with the AI team".

Also if someone has to do this without GitHub premium, what would be the easiest way?

João Pedro Silva Setas

Thanks! Follow-ups are definitely planned — this is month 2, so a "3 months in" retrospective is coming. For doing this without GitHub Copilot Premium: the architecture is just markdown files + MCP servers. You could replicate it with any agent framework that supports custom instructions and tool calling — Claude with projects, Cursor with .cursorrules, or even a custom LangChain setup. The key ingredient is the structured instructions, not the specific IDE.

Santhosh Balasa

The hardest part of a company is to find customers

João Pedro Silva Setas

100% agree. €6.09 after 2 months proves the point. The agent system handles operations well but can't solve distribution. That's still the founder's hardest job.

Shekhar Rajput

it won't stand for long

João Pedro Silva Setas

Appreciate the honesty — and you're probably right that it won't replace a real team forever. That's not really the goal though.

This is a bootstrapping tool. When you're a solo founder with zero revenue, you can't hire a marketer, an accountant, and a lawyer. But you still need those functions to not drop balls. The agent system fills that gap until the business can support real people.

The plan is simple: use agents to get from zero to enough revenue to hire. Then hire humans who do the job 10x better, and the agents become their assistants instead of replacements. A real accountant with an AI that knows all my past IVA filings is way more powerful than either one alone.

It's scaffolding, not the building.
