DEV Community

MrClaw207
How to Delegate to an AI Agent (Not Just Talk to It)

The difference between prompting and delegating — and why it matters more as your agent setup gets more complex.


Most tutorials treat AI agents like search engines with better grammar. You ask something, you get an answer. Session over. That's prompting.

Delegating is different. When you delegate a task to an agent, you're handing off a whole outcome — not just a question. You're saying "here's what I want, here's the context that matters, here's what success looks like, here's what to do if something goes wrong." That's a completely different skill, and almost nobody talks about it in practical terms.

I've been running OpenClaw as my primary work system for over a year. This is what I've learned about the difference.


Why "Just Ask" Doesn't Scale

When you're working with a single agent occasionally, prompting works fine. You ask a question, you get something useful back. No overhead, no thinking required.

But as soon as you start running the same agent daily, against your actual work context, prompting breaks down in specific ways:

It treats context as disposable. Every session starts from zero. You re-explain what you do, what matters, what you tried before. That's not just tedious — it means your agent is never working with a complete picture.

It doesn't have a theory of success. When you ask "should I do X?" you get an answer. When you delegate "handle X" you get an outcome — but only if you defined what "done" actually means.

There's no recovery mechanism. A prompt that produces a bad answer just... produces a bad answer. A delegation that fails should tell you it failed and why, so you can course-correct.

The shift from prompting to delegating is this: stop thinking of your agent as an answering machine, and start thinking of it as a teammate who needs a proper brief.


The WHAT/SO WHAT/WHAT NEXT Framework

The clearest handoff format I've found comes from military briefing doctrine, adapted for agent work:

WHAT = What you're handing off (concise description of the task)
SO WHAT = Why it matters (what outcome depends on this being done right)
WHAT NEXT = What to do when done, or how to escalate if stuck

A proper delegation looks like this:

WHAT: Review my last 10 incoming emails and draft response templates for the 3 that need my attention. Log the other 7 with one-line summaries.

SO WHAT: I spend 40 minutes/day on email that could be running autonomously. If this works, I reclaim that time for actual work. I'm measuring success by whether I only see emails that genuinely need me.

WHAT NEXT: When done, summarize what you drafted and what you filtered. If any email mentions a deadline under 48 hours, flag it prominently. If any email looks like a sales pitch, include a one-line critique of their approach.

Compare that to: "Can you handle my emails?"
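If you build briefs programmatically, the three parts map naturally onto a small template. Here's a minimal sketch; the `Delegation` class and its field names are hypothetical illustrations, not part of any agent framework:

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """A delegation brief in WHAT / SO WHAT / WHAT NEXT form."""
    what: str       # the task being handed off
    so_what: str    # why it matters and what success looks like
    what_next: str  # completion instructions and escalation triggers

    def to_brief(self) -> str:
        """Render the brief as the text the agent receives verbatim."""
        return (
            f"WHAT: {self.what}\n\n"
            f"SO WHAT: {self.so_what}\n\n"
            f"WHAT NEXT: {self.what_next}"
        )

brief = Delegation(
    what="Review my last 10 incoming emails and draft responses for the 3 that need me.",
    so_what="I spend 40 minutes/day on email that could run autonomously.",
    what_next="Summarize what you drafted; flag any deadline under 48 hours.",
).to_brief()
```

The point of the structure isn't the code; it's that a brief with three required fields can't silently omit the "so what" the way a one-line prompt can.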


The Four Things Every Delegation Needs

Beyond the WHAT/SO WHAT/WHAT NEXT structure, good delegations include:

1. Constraints (Not Just Goals)

"Write a post" is not a delegation. "Write a 600-word DEV.to post on memory systems, written in first person, with one code example, that doesn't mention any specific products" — that's a delegation.

Constraints tell the agent what's not acceptable, not just what is. They prevent the most common class of delegation failures: the agent does what you asked for, but in a way that doesn't fit your actual context.

2. A Way to Verify Success

How will you know the task is done? Not "did the agent run" — did the outcome happen?

If you're delegating content creation: the output should be in a specific format, at a specific length, with specific elements included. If you're delegating research: the output should answer specific questions, not just collect information.
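A "done" definition like that can often be checked mechanically. As a sketch (the function name and failure-message format here are made up for illustration), a verifier that returns an empty list only when the output matches the spec:

```python
def verify_output(text: str, min_words: int, max_words: int,
                  required_phrases: list[str]) -> list[str]:
    """Check an agent's output against a 'done' definition.

    Returns a list of failure descriptions; an empty list means done.
    """
    failures = []
    words = len(text.split())
    if not (min_words <= words <= max_words):
        failures.append(f"length {words} words, expected {min_words}-{max_words}")
    # Required elements: case-insensitive substring check as a simple proxy.
    for phrase in required_phrases:
        if phrase.lower() not in text.lower():
            failures.append(f"missing required element: {phrase!r}")
    return failures
```

Evaluating against a checklist like this, rather than "does it look good," is the whole difference between verifying a prompt and verifying a delegation.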

3. What to Do When Stuck

Most delegation frameworks focus on the happy path. The missing piece: what the agent should do when it can't complete the task.

Good pattern:

  • "If you don't have enough information to decide, ask me before acting — don't guess."
  • "If this requires access to something you don't have, report it immediately and stop."
  • "If you're uncertain whether something is in scope, assume it's not and flag it."

4. A Signal to Escalate

Define what "this is above your pay grade" looks like before it happens. The worst agent failures I've seen were situations where the agent should have escalated but didn't — because nobody told it what would trigger escalation.

Good pattern:

  • "If any action could cost money, lose data, or send something external, pause and confirm with me first."
  • "If you're about to apologize on my behalf, stop and ask."

The Pattern That Actually Works

Here's the workflow I've settled on:

Before delegating: Spend 30 seconds asking "what does done actually look like?" Write that down. Then write the delegation.

During: Trust the agent to work. Don't check in mid-task unless you specified you wanted progress updates. Micro-management defeats the purpose.

After: When the agent comes back, evaluate the output against your original "done" definition — not just whether it looks good. If it failed, ask why and update your delegation template for next time.

The meta-lesson: Your agent is only as good as your delegations. If you're getting generic output, your delegations are too generic. If you're getting good output that's somehow missing the point — your "so what" wasn't clear enough.

This is a skill. It compounds. The more precisely you delegate, the more useful your agent becomes.


The next article in this series will be: "The Setup I Run 24/7" — a practical walkthrough of the agent stack that handles research, content, and operations without me watching.
