
João Pedro Silva Setas


The Improver: How I Built an AI Agent That Upgrades Other AI Agents

Most multi-agent writeups stop at specialization.

Planner. Coder. Reviewer. Maybe a memory layer. Maybe a routing loop.

That part is interesting, but it was not the part that started compounding for me.

The part that changed the system was this: who improves the agents after they make the same mistake twice?

I run a solo company with AI agent departments. There is a CEO, CFO, COO, Marketing, Accountant, Lawyer, CTO, and an Improver that upgrades the rest. The specialists do the obvious work. The weird one is the Improver.

It is the agent that reads mistakes, looks for recurring patterns, and edits the system itself.

Not the product code.

The operating system around the agents.

That distinction matters.

Because the useful version of self-improving agents is much more boring than the sci-fi version.

And that is exactly why I trust it.

I did not need more agents. I needed better scar tissue.

The first version of the system already had specialist agents with decent prompts.

The problem was not "I wish I had one more role."

The problem was repetition.

The same kinds of issues kept appearing in different forms:

  • content that sounded technically correct but did not sound like me
  • memory that stayed technically valid but got noisier every week
  • tasks that were flagged as stale over and over without a real escalation path
  • workflow instructions that were good enough for one run but not good enough to survive contact with the next one

Each one was fixable manually.

But manual fixes do not compound.

If every mistake becomes a one-off correction, the system never gets better. It just gets babysat.

So I added an Improver agent whose whole job is turning mistakes into infrastructure.

The raw input is not intuition. It is lessons.

The Improver does not wake up and freestyle changes.

It works from a very explicit input: lesson entities stored in shared memory.

After a complex task, agents log what went wrong, why it mattered, and what changed.

The structure is intentionally plain:

```
lesson:2026-02-17:marketing-voice-authenticity
- Agent: Improver
- Category: process
- Summary: Marketing content was too generic
- Detail: Founder feedback showed the writing did not sound like a real engineer
- Action: Rewrote the voice guide and added a founder discovery protocol
```
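For readers who prefer code to prose: here is a minimal sketch of how a lesson entity like the one above might be represented in memory. The field names mirror the plain-text format; the `Lesson` dataclass and its `key` property are my own illustration, not the article's actual implementation.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical in-memory shape for a lesson entity.
# Fields mirror the plain-text format shown above.
@dataclass
class Lesson:
    slug: str      # e.g. "marketing-voice-authenticity"
    logged: date
    agent: str     # which agent recorded the lesson
    category: str  # bug | process | knowledge | tool | decision
    summary: str
    detail: str
    action: str

    @property
    def key(self) -> str:
        # Matches the "lesson:YYYY-MM-DD:slug" naming convention.
        return f"lesson:{self.logged.isoformat()}:{self.slug}"

lesson = Lesson(
    slug="marketing-voice-authenticity",
    logged=date(2026, 2, 17),
    agent="Improver",
    category="process",
    summary="Marketing content was too generic",
    detail="Founder feedback showed the writing did not sound like a real engineer",
    action="Rewrote the voice guide and added a founder discovery protocol",
)
print(lesson.key)  # lesson:2026-02-17:marketing-voice-authenticity
```

The point of the structure is not the dataclass. It is that every field is queryable later, which is what makes pattern detection possible at all.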

That matters because it gives the Improver something better than vibes.

It gets actual failure patterns.

It can group lessons by category: bug, process, knowledge, tool, decision.

Then it can ask a useful question: is this a one-off, or is this a gap in the system?

If three unrelated tasks keep producing the same sort of friction, that is usually not user error.

It is missing infrastructure.
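The "one-off or systemic" question can be asked mechanically. A sketch, assuming lessons are tagged with the categories listed above; the sample data and the threshold of three are my own placeholders, chosen to match the "three unrelated tasks" heuristic:

```python
from collections import Counter

# Hypothetical lesson records pulled from shared memory: (category, summary).
lessons = [
    ("process", "content too generic"),
    ("process", "content too generic"),
    ("process", "content too generic"),
    ("tool", "missing domain registry"),
]

# One-off vs. systemic: a category that recurs across unrelated
# tasks is treated as missing infrastructure, not user error.
THRESHOLD = 3
counts = Counter(category for category, _ in lessons)
systemic = [category for category, n in counts.items() if n >= THRESHOLD]
print(systemic)  # ['process']
```

Anything that crosses the threshold becomes a candidate for a new skill, prompt, or guardrail rather than another manual fix.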

What the Improver is allowed to change

This agent has real edit authority, but the scope is narrow on purpose.

Most of its work lives in the management repo, especially the files that define how the agents behave.

Its change types are basically these:

| Change type | What it does | Typical file |
| --- | --- | --- |
| New skill | Stores reusable knowledge the system keeps needing | .github/skills/*/SKILL.md |
| New prompt | Captures a recurring workflow | .github/prompts/*.prompt.md |
| Agent update | Tightens responsibilities, guardrails, or working style | .github/agents/*.agent.md |
| Doc update | Adds missing operational context | .github/copilot-instructions.md, AGENTS.md, project docs |
| Memory curation | Cleans duplicates, adds relations, prunes stale state | shared knowledge graph |
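A scope this narrow can be enforced mechanically, not just by prompt. A sketch under my own assumptions: the path patterns come from the table above, but the `is_in_scope` helper is illustrative, not the article's actual guardrail.

```python
from fnmatch import fnmatch

# Paths the Improver may edit, taken from the change-type table above.
ALLOWED = [
    ".github/skills/*/SKILL.md",
    ".github/prompts/*.prompt.md",
    ".github/agents/*.agent.md",
    ".github/copilot-instructions.md",
    "AGENTS.md",
]

def is_in_scope(path: str) -> bool:
    # Reject anything outside the allowlist, e.g. product code.
    return any(fnmatch(path, pattern) for pattern in ALLOWED)

print(is_in_scope(".github/agents/marketing.agent.md"))  # True
print(is_in_scope("src/billing.py"))                     # False
```

An allowlist like this is the difference between "the agent promised not to touch product code" and "the agent cannot touch product code."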

What its remit does not include is even more important.

It is not supposed to rewrite product code because it feels clever.

It is not supposed to invent tax or legal rules.

It is not supposed to change company identity, product positioning, or authority boundaries on its own.

And it follows the same operating constraints and source-of-truth rules as the rest of the system.

The useful version of self-improvement is constrained and auditable.

Not open-ended.

The two trigger modes

The Improver runs in two main ways.

1. Scheduled review

There is a dedicated /improve-agents prompt for periodic system review.

That run audits the agent files, prompts, skills, and memory graph, then looks for gaps that should become reusable infrastructure.

This is the slower, batch-style mode.

Good for pattern detection.

2. Mid-task intervention

This is the more useful mode in practice.

If another agent notices a real gap while working, it calls the Improver immediately.

Not after the task. During it.

That turns "we should fix this later" into "fix the system now, then continue."

The difference sounds small, but it changes the system from retrospective learning to live correction.
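The control-flow difference between the two modes can be shown in a few lines. Everything here is my own sketch, not the real orchestration code; the point is only where the improvement call sits relative to the task.

```python
# Sketch of mid-task intervention: the gap is patched before the
# current task resumes, instead of being logged for a later review.

def improve(gap: str, changes: list) -> None:
    # Stand-in for the Improver editing prompts, skills, or docs.
    changes.append(gap)

def run_task(steps: list, changes: list) -> None:
    for step in steps:
        if step.startswith("GAP:"):
            # Live correction: fix the system now, then continue.
            improve(step.removeprefix("GAP:"), changes)
        # ... otherwise do the actual work for this step ...

changes = []
run_task(["draft", "GAP:missing voice guide", "publish"], changes)
print(changes)  # ['missing voice guide']
```

In the scheduled mode, `improve` would only run after the whole batch; here it runs inside the loop.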

Real changes the Improver already made

This is the part I care about most.

The Improver is only interesting if the output is visible in the system afterward.

Here are a few concrete changes it made from actual runs.

Marketing stopped sounding like marketing

On Feb 17, founder feedback was blunt: the content did not sound like a real engineer.

That became a lesson.

The Improver responded by rewriting the Marketing agent's voice guide, adding anti-patterns, adding a content quality gate, and forcing a founder discovery protocol instead of generic startup copy.

That was a real upgrade.

Not just "write better next time."

The system got a domain registry

On Feb 22, the instructions were updated to add a real Domain Registry with a separate Social URL column.

That sounds administrative until a platform blocks one domain and not the fallback.

OpenClawCloud is the live example. For public content, the correct social URL is clawdcloud.net, not the blocked alternative.

Without a registry, every agent has to remember that detail manually.

With a registry, it becomes infrastructure.
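As infrastructure, the registry can be a plain lookup. A minimal sketch: only the OpenClawCloud entry comes from this article, and the `social_url` helper is my own illustration of how agents would consume the Social URL column.

```python
# Hypothetical shape of the Domain Registry's Social URL column.
SOCIAL_URLS = {
    # project -> URL that is safe to use in public content
    "OpenClawCloud": "clawdcloud.net",
}

def social_url(project: str) -> str:
    # Agents look this up instead of each remembering which
    # domain a platform has blocked.
    return SOCIAL_URLS[project]

print(social_url("OpenClawCloud"))  # clawdcloud.net
```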

Memory stopped growing like a junk drawer

Another improvement pass added memory hygiene rules.

Standups and trend scans now have retention rules. Old noise gets pruned. Permanent lessons and decisions stay.

That is not glamorous work, but stale memory is one of the fastest ways to make a multi-agent system look smart while behaving confused.

Shared context only helps if it stays usable.
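Retention rules of this kind reduce to one predicate per entry type. A sketch in the spirit of the rules described above; the specific windows are my own placeholders, the article does not state them.

```python
from datetime import date, timedelta

# Assumed retention windows in days; lessons and decisions are
# permanent, so they have no entry here.
RETENTION_DAYS = {"standup": 14, "trend-scan": 30}

def keep(entry_type: str, logged: date, today: date) -> bool:
    window = RETENTION_DAYS.get(entry_type)
    if window is None:
        return True  # lessons, decisions: permanent
    return (today - logged) <= timedelta(days=window)

today = date(2026, 3, 1)
print(keep("standup", date(2026, 1, 5), today))  # False: stale noise, prune it
print(keep("lesson", date(2026, 1, 5), today))   # True: permanent
```

Running a predicate like this on every curation pass is what keeps the junk drawer from reforming.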

Chronic misses stopped getting polite excuses

One of the most useful upgrades came later: chronic miss escalation.

If a task misses two or more deadlines, the COO is now supposed to re-scope it, demote it, kill it, or add a real root-cause note.

No more infinite carryover with a softer ETA.
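The rule is simple enough to state as code. A sketch: the threshold of two missed deadlines comes from the article, while the option names are paraphrased from it and the function is my own.

```python
# Chronic-miss escalation: two or more missed deadlines forces an
# explicit decision instead of another silent carryover.
CHRONIC_THRESHOLD = 2
ESCALATION_OPTIONS = ("re-scope", "demote", "kill", "root-cause note")

def needs_escalation(missed_deadlines: int) -> bool:
    return missed_deadlines >= CHRONIC_THRESHOLD

print(needs_escalation(1))  # False: a normal slip
print(needs_escalation(2))  # True: COO must pick an escalation option
```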

That was an important change because agent systems are very good at sounding disciplined while quietly tolerating drift.

The Improver is useful precisely when it gets less polite about that.

The hard part is not self-improvement. It is boundaries.

The question I get most often is some version of: does this not drift into chaos?

It would, if the Improver were allowed to treat the whole company as editable text.

That is why the boundaries matter more than the mechanism.

The agent can improve prompts, skills, workflows, and memory hygiene.

Its remit does not include declaring new business facts.

Its remit does not include quietly changing product claims.

Its remit does not include deciding that existing constraints or review triggers are optional now.

And it is not supposed to widen its own authority because that seems efficient.

In other words, the system can improve the procedures that shape work.

It cannot rewrite the constitution.

That is the only reason this feels useful instead of reckless.

What I like about it

The best thing about the Improver is that it turns post-mortems into runtime assets.

A normal post-mortem ends as a paragraph in a doc nobody reads again.

This loop is different.

The mistake becomes a lesson.
The lesson becomes an instruction change.
The instruction change affects the next run.

That is the compound effect.

Not infinite autonomy.

Just a system that gets slightly harder to fool every time it learns something real.

My take

I do not think self-improving agent systems are interesting because they sound futuristic.

I think they are interesting when they make operations more boring.

Better guardrails.
Cleaner memory.
Sharper prompts.
Fewer repeated mistakes.

That is what the Improver does for me.

It is not an agent building a better world in the background.

It is an agent that reads scar tissue and turns it into better constraints.

And for real work, I trust that far more.
