DEV Community

Patrick


I'm Not Running Software. I'm Managing a Small Team.

Most people frame AI agents wrong. They think: automation. Scripts. Tools.

Here's how I actually think about it: I'm running a small team.

One handles growth. One handles ops. One watches the market. One synthesizes everything into a daily briefing.

They happen to run on AI.

And when I started treating them like a team — not software — everything got clearer.


The management principles that actually transfer

1. Clear job descriptions over capability

A capable employee with a vague role underperforms. Same with agents.

Every agent in our system has a SOUL.md. It answers three questions before the agent does anything:

  • Who are you?
  • What are you trying to accomplish?
  • What do you never do?

Without that third question, agents make judgment calls in your worst moments — when they're confused, when inputs are weird, when something unexpected happens.

The constraint list is the job description. Don't skip it.
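A minimal SOUL.md along these lines might look like the following. This is a sketch, not the author's actual file: the agent name and duties are made up for illustration, and only the three-question structure comes from the post.

```markdown
# SOUL.md — Growth Agent (hypothetical example)

## Who are you?
You are the growth agent. You draft outreach and track newsletter signups.

## What are you trying to accomplish?
Grow the subscriber list through content and outreach, not paid ads.

## What do you never do?
- Never send a message that hasn't been approved.
- Never touch billing, credentials, or production data.
- If uncertain about any decision, stop and escalate.
```

The third section is the one that earns its keep on a bad day.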

2. Regular check-ins, not just output review

With a human team, you don't just look at the deliverable. You check in. You ask: are you stuck? Is anything unclear? What did you decide and why?

Same principle applies to agents: build observability into the work itself.

Every action log should include not just what the agent did, but why — what alternatives it considered, what it ruled out, what it wasn't sure about.

That's your check-in. Async, structured, and searchable.
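One way to capture that check-in as a structured log line. The field names here are my own invention, not a standard schema; the point is that reasoning, rejected alternatives, and open uncertainties travel with the action:

```python
import json
from datetime import datetime, timezone

def log_action(agent: str, action: str, reasoning: str,
               alternatives: list[str], uncertainties: list[str]) -> str:
    """Return one structured, searchable log line (JSON) recording
    not just what the agent did, but why."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "reasoning": reasoning,          # why this action
        "ruled_out": alternatives,       # what it considered and rejected
        "unsure_about": uncertainties,   # open questions for the reviewer
    }
    return json.dumps(entry)

line = log_action(
    agent="ops",
    action="paused nightly sync",
    reasoning="upstream API returned 500s for 20 minutes",
    alternatives=["retry with backoff", "switch to mirror"],
    uncertainties=["is the mirror up to date?"],
)
print(line)
```

Because each line is JSON, "searchable" is literal: grep for an agent name or a phrase in `unsure_about` and you have your async 1:1.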

3. Escalation paths

A good employee doesn't guess when they hit something outside their scope. They escalate.

Most agents are built to guess. That's the default — the model makes a best-effort call and keeps going.

Fix: one line in the SOUL.md:

```
If uncertain about any decision, stop. Write context to outbox.json and wait.
```

This turns silent failures into recoverable ones. It's the equivalent of "when in doubt, ask your manager."
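A sketch of what honoring that rule can look like in code. The `escalate` helper, the outbox path, and the refund scenario are all assumptions for illustration, not a real framework API:

```python
import json
from pathlib import Path

OUTBOX = Path("outbox.json")  # hypothetical escalation file

def escalate(agent: str, question: str, context: dict) -> None:
    """When in doubt: stop, write context to the outbox, and wait
    for a human decision instead of guessing."""
    OUTBOX.write_text(json.dumps({
        "agent": agent,
        "question": question,
        "context": context,
        "status": "waiting_for_human",
    }, indent=2))

def handle_refund(amount: float) -> str:
    if amount > 100:  # outside this agent's scope
        escalate("ops", f"Refund of ${amount} exceeds my limit. Approve?",
                 {"amount": amount})
        return "escalated"  # a recoverable pause, not a silent guess
    return "refunded"

print(handle_refund(250.0))
```

The design choice worth noting: the agent returns a distinct `"escalated"` status rather than raising an exception, so the rest of the system keeps running while one decision waits on a human.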

4. Honest feedback loops

You can't manage what you can't see. For a human team, that means 1:1s, retrospectives, metrics.

For agents: structured logs, escalation queues, and daily audits.

The agents that run reliably are the ones with visibility built in — not added later as an afterthought.


What this framing gets you

When you think "software," you debug. When you think "team," you manage.

Management thinking leads to better questions:

  • Does this agent know its job clearly enough to stay in scope when things get weird?
  • Does it have a way to surface uncertainty instead of guessing?
  • Can I see its reasoning, not just its output?
  • If it failed silently for three days, would I know?
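That last question is the easiest to answer mechanically. A toy audit, assuming each agent's logs give you a last-seen timestamp (the data shape is mine, not from the post):

```python
from datetime import datetime, timedelta, timezone

def silent_agents(last_seen: dict[str, datetime],
                  max_gap: timedelta = timedelta(days=3)) -> list[str]:
    """Return agents whose most recent log entry is older than max_gap —
    the ones that may have failed silently."""
    now = datetime.now(timezone.utc)
    return sorted(a for a, ts in last_seen.items() if now - ts > max_gap)

now = datetime.now(timezone.utc)
last_seen = {
    "growth": now - timedelta(hours=6),
    "ops": now - timedelta(days=5),   # quiet for five days
    "market": now - timedelta(days=1),
}
print(silent_agents(last_seen))  # -> ['ops']
```

Run something like this on a schedule and "would I know?" becomes "yes, within a day."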

The configs and patterns I've developed from this framing are collected in the Ask Patrick Library — updated nightly as new patterns emerge from running real agent systems.


The practical starting point

If you're managing agents right now and want to apply this:

  1. Write (or rewrite) each agent's SOUL.md to include explicit constraints — what it never does
  2. Add an escalation rule: uncertain → stop → write to outbox
  3. Add a reasoning field to your action logs
  4. Review those logs like you'd review a team member's work

The mental model shift costs nothing. The reliability gains are real.

Full playbook at askpatrick.co/playbook
