
neo one

Posted on • Originally published at headcount.ai

How to Delegate Tasks to AI Agents

This article was originally published on do-nothing.ai. The canonical version is there.



This guide is for builders, operators, and founders who want AI agents to do real work — not just answer questions. If you've found yourself rewriting the same prompt three times and still getting mediocre results, or if your agent confidently does the wrong thing, this is the playbook you need.

Delegation to AI agents is a skill. It is not about finding magic words. It is about understanding what agents need to succeed, structuring your instructions accordingly, and knowing which tasks are actually worth automating.


What Makes a Task Delegatable?

Not every task is a good candidate for agent delegation. Before writing a single line of instructions, ask:

Is the goal verifiable? Can you look at the output and know whether it's correct? Code that passes tests, a report with the right numbers, a list that meets defined criteria — these are verifiable. "Write me something interesting" is not.

Is the input well-defined? Agents work from context. If the task requires reading someone's mind or access to implicit institutional knowledge they don't have, they'll hallucinate their way to an answer.

Is the cost of failure acceptable? Agents make mistakes. The question is not whether an agent will err, but what happens when it does. Low-stakes tasks (drafting, summarizing, generating options) have low failure cost. Irreversible actions (sending emails, deleting records, publishing content) need human checkpoints.

Does it require judgment about things only a human knows? Agents can reason about explicit tradeoffs. They cannot know your relationships, your company's unwritten rules, or what your customer said in a meeting last Tuesday — unless you tell them.

The Delegation Readiness Test

A task is ready to delegate when you can answer yes to all of these:

  • I can describe what done looks like in concrete terms
  • I have the inputs the agent needs (data, context, examples)
  • I will review the output before it affects anything consequential
  • The task would take me more than 10 minutes to do manually
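The four checks above can be captured as a quick pre-flight function. This is a sketch for illustration — the field names are invented, not part of any framework:

```python
from dataclasses import dataclass

@dataclass
class TaskReadiness:
    done_is_describable: bool    # "done" is defined in concrete terms
    inputs_available: bool       # data, context, and examples are in hand
    will_review_output: bool     # a human reviews before anything consequential
    saves_over_10_minutes: bool  # the task is worth automating at all

    def ready_to_delegate(self) -> bool:
        # A task is delegatable only when every check passes.
        return all((self.done_is_describable, self.inputs_available,
                    self.will_review_output, self.saves_over_10_minutes))

print(TaskReadiness(True, True, True, False).ready_to_delegate())  # False
```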

How to Structure Agent Instructions

Good delegation has four components: role, task, constraints, and output format.

Role

Tell the agent who it is and what it knows. This primes the model's behavior and reduces the chance of generic responses.

You are a senior data analyst at a B2B SaaS company. You have deep knowledge
of SQL, cohort analysis, and subscription metrics. You write clearly for
non-technical stakeholders.

Be specific. "You are a helpful assistant" is not a role. "You are a technical writer who specializes in API documentation for developers new to REST" is a role.
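In code, the role typically lives in the system message so it shapes every turn of the conversation, not just the first reply. A minimal sketch, assuming an OpenAI-style chat API (message dicts with "role" and "content" keys — adapt to your provider):

```python
ROLE = (
    "You are a senior data analyst at a B2B SaaS company. You have deep "
    "knowledge of SQL, cohort analysis, and subscription metrics. You write "
    "clearly for non-technical stakeholders."
)

def build_messages(task: str) -> list[dict]:
    # The role goes in the system message so it primes the whole session.
    return [
        {"role": "system", "content": ROLE},
        {"role": "user", "content": task},
    ]

msgs = build_messages("Summarize last quarter's churn by plan tier.")
```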

Task

Describe the task in terms of outcome, not just action.

Weak: "Write a summary of this document."

Strong: "Summarize this Q1 earnings report for our board. The summary should be 3 paragraphs: one on revenue trends, one on churn, one on product milestones. Executives read this on their phones — no jargon, no tables."

The stronger version tells the agent what success looks like for the specific audience and context.

Constraints

Constraints prevent the most common agent failure modes: going too long, going off-topic, making assumptions you didn't authorize.

  • Scope constraints: "Only use information from the attached document. Do not make up numbers."
  • Format constraints: "Respond in plain text only. No markdown."
  • Tone constraints: "Write at a 9th-grade reading level."
  • Boundary constraints: "Do not send any emails. Only draft them and present them for review."
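Boundary constraints are strongest when enforced in code, not just in the prompt: give the agent a draft-only tool and keep the send action behind human approval. A sketch — the names (`draft_email`, `outbox`) are illustrative, not a real API:

```python
outbox: list[dict] = []  # drafts awaiting human review

def draft_email(to: str, subject: str, body: str) -> str:
    # The only email tool the agent gets: it can draft, never send.
    outbox.append({"to": to, "subject": subject, "body": body, "approved": False})
    return f"Draft #{len(outbox)} saved for review."

def send_approved() -> int:
    # Sending happens only after a human flips the approved flag.
    sent = [d for d in outbox if d["approved"]]
    # ...hand the approved drafts to the real mailer here...
    return len(sent)

draft_email("cto@example.com", "Demo request", "Hi, quick question about your stack.")
```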

Output Format

Tell agents exactly what structure the output should take. This makes outputs parseable by the next step in your pipeline and easier to review.

Return your output as a JSON object with these fields:
- summary: string (max 200 words)
- action_items: array of strings
- confidence: "high" | "medium" | "low"
- sources_used: array of document titles you referenced
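A format spec like this also buys you cheap verification: you can validate the agent's reply before the next pipeline step consumes it. A minimal sketch (in practice you might reach for Pydantic or jsonschema instead):

```python
import json

REQUIRED = {"summary", "action_items", "confidence", "sources_used"}

def validate_output(raw: str) -> dict:
    # Parse the agent's reply and check it matches the requested schema.
    data = json.loads(raw)
    missing = REQUIRED - data.keys()
    if missing:
        raise ValueError(f"Agent omitted fields: {sorted(missing)}")
    if data["confidence"] not in ("high", "medium", "low"):
        raise ValueError("confidence must be high, medium, or low")
    if len(data["summary"].split()) > 200:
        raise ValueError("summary exceeds 200 words")
    return data

reply = ('{"summary": "Q1 revenue grew 12%.", "action_items": [], '
         '"confidence": "high", "sources_used": ["Q1 earnings report"]}')
result = validate_output(reply)
```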

Delegation Patterns

One-Shot Delegation

You hand off a complete task and expect a complete output. Best for well-defined, bounded work.

Example: "Here is a CSV of 50 customer support tickets. Categorize each one into one of these five categories: billing, bug, feature request, onboarding, other. Return a CSV with the same rows plus a new 'category' column."

When to use: Summarization, classification, formatting, translation, draft generation.
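The ticket-categorization example has a simple one-shot shape in code. Here `classify` is a keyword-matching stand-in for the single LLM call — a real version would send the ticket text plus the category list and parse the model's one-word answer:

```python
import csv
import io

CATEGORIES = ["billing", "bug", "feature request", "onboarding", "other"]

def classify(ticket: str) -> str:
    # Stand-in for one LLM call per ticket; purely illustrative logic.
    for cat in CATEGORIES:
        if cat.split()[0] in ticket.lower():
            return cat
    return "other"

def categorize_csv(raw: str) -> str:
    # Same rows in, same rows out, plus a new 'category' column.
    rows = list(csv.DictReader(io.StringIO(raw)))
    for row in rows:
        row["category"] = classify(row["text"])
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

result = categorize_csv("id,text\n1,My billing page is broken\n2,Please add dark mode\n")
```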

Iterative Delegation

You hand off a task, review the output, and provide targeted corrections. Best for work that requires judgment or creative quality.

Example: Draft a cold email, get back three versions, pick the best structure, ask for five subject line variations, select one, then ask for a P.S. line.

When to use: Content creation, design decisions, any output that benefits from human taste.

Supervised Autonomous Loops

The agent works through multi-step tasks independently but checks in at defined decision points before proceeding.

Example: "Research 10 competitor pricing pages. For each one, extract the pricing tiers, features per tier, and any free trial details. Before you finalize, flag any page where you weren't able to extract clear pricing and ask me how to handle it."

When to use: Research tasks, data collection, analysis pipelines where edge cases matter.
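The competitor-pricing example reduces to a loop that partitions work into "done" and "needs a human." A sketch, with `extract_pricing` standing in for the agent's per-page work:

```python
from typing import Optional

def extract_pricing(url: str) -> Optional[dict]:
    # Stand-in for the agent's extraction step: returns structured data,
    # or None when the page is too unclear to parse confidently.
    if "enterprise" in url:
        return None
    return {"url": url, "tiers": ["Free", "Pro"]}

def research(urls: list[str]):
    results, needs_human = [], []
    for url in urls:
        data = extract_pricing(url)
        if data is None:
            needs_human.append(url)  # checkpoint: flag for a human, per the prompt
        else:
            results.append(data)
    return results, needs_human

done, flagged = research(["https://a.example/pricing",
                          "https://b.example/enterprise"])
```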

Fully Autonomous Delegation

The agent completes end-to-end work with no human checkpoints. This is the highest-risk, highest-leverage pattern.

Example: A content agent that monitors RSS feeds, scores items by relevance, writes summaries, and publishes them to a staging area — all on a cron schedule.

When to use: Only when the task is well-understood, the output is reversible or low-stakes, and you have monitoring in place to catch failures.
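The staging-plus-monitoring shape looks roughly like this. Every agent step is wrapped so failures alert a human instead of passing silently, and output lands in a staging area rather than going straight to production (`summarize` is a stand-in for the agent call):

```python
staging: list[str] = []  # reversible: staged for review, never auto-published
alerts: list[str] = []   # monitoring catches failures the agent can't handle

def summarize(item: str) -> str:
    # Stand-in for the agent's summarization call.
    if not item:
        raise ValueError("empty feed item")
    return item.upper()

def run_pipeline(items: list[str]) -> None:
    for item in items:
        try:
            staging.append(summarize(item))
        except Exception as exc:
            alerts.append(f"{item!r}: {exc}")  # surface, don't swallow

run_pipeline(["new release notes", ""])
```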


What to Delegate vs. What to Keep Human

| Delegate | Keep Human |
| --- | --- |
| First drafts of any written content | Final approval on anything published or sent |
| Data extraction and formatting | Judgment calls on ambiguous data |
| Research and summarization | Synthesis that requires strategic context |
| Code generation for boilerplate | Architecture decisions |
| Generating options and alternatives | Deciding between those options |
| Scheduling and logistics | Negotiation and relationship management |
| Monitoring and alerting | Incident response decisions |

The pattern: agents are excellent at breadth, generation, and consistency. Humans are essential for final judgment, strategic context, and anything where being wrong has high stakes.


Writing Instructions That Actually Work

Give Examples

For almost any task, showing one good example is worth 100 words of abstract instruction.

Here is an example of the output format I want:

---
Company: Acme Corp
Category: Infrastructure tooling
ICP match: High — 200+ engineers, Series B+, AWS-heavy stack
Next action: Demo request email to CTO
---

Now do the same for each company in the list below.

Be Explicit About What You Don't Want

Agents fill gaps with their best guess. Tell them what common mistakes to avoid.

"Do not include caveats like 'I cannot guarantee accuracy' or 'as an AI'. Just do the analysis."

"Do not suggest we add new features. Focus only on improving what already exists."

Provide Reference Material

If the task requires specific knowledge, provide it directly. Do not rely on the agent's training data for facts that matter.

  • Paste relevant documentation or context
  • Include examples of past outputs you liked
  • Include examples of past outputs you did not like

Use Step-by-Step Instructions for Complex Tasks

Break multi-step tasks into numbered steps. This reduces the chance of the agent conflating steps or skipping them.

1. Read each customer review in the attached file.
2. Identify the single most important complaint in each review.
3. Group complaints by theme.
4. For each theme, write one paragraph explaining the complaint pattern and quoting one representative review.
5. Order themes from most to least frequently mentioned.

When Agents Fail: How to Recover

Hallucination

The agent invented facts, numbers, or sources. Fix: Add explicit constraints ("Only use information from the attached document"), ask the agent to cite its sources for each claim, or reduce the scope of the task.

Ignoring Instructions

The agent produced the wrong format, wrong length, or wrong focus. Fix: Move the most important constraints to the top of your prompt (not the bottom). Add "Your most important constraint is: [X]."

Overcautious Refusal

The agent adds excessive caveats, refuses to make a recommendation, or hedges everything. Fix: Be explicit that you want a direct answer. "I understand there is uncertainty. Give me your best recommendation anyway, stated as a recommendation, not a list of considerations."

Inconsistent Outputs

Results vary too much across runs. Fix: Add more specific output format requirements, reduce temperature if you control it, or add an explicit example of the desired output.
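If you call the model via an API, the sampling controls live in the request itself. A sketch assuming an OpenAI-style chat API — the exact parameter names may differ by provider, and not every API honors a seed:

```python
def stable_request(prompt: str) -> dict:
    # Request parameters tuned for repeatability across runs.
    return {
        "model": "gpt-4o",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,   # minimize sampling randomness
        "seed": 42,         # some APIs accept a fixed seed for reproducibility
    }

req = stable_request("Classify this ticket: 'refund not received'")
```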
