How AI Agents Plan Their Own Work — Self-Scheduling in Python
Lesson 8 of 9 — A Tour of Agents
The entire AI agent stack in 60 lines of Python.
Ask Claude to refactor a module. You didn't tell it to read the existing files first. You didn't tell it to check the tests. You didn't tell it to update the imports. It decided to do all of that on its own — it planned its own work.
How? The agent has a tool that adds tasks to its own queue. Every time it finishes one step, it can schedule the next. That's self-scheduling.
The concept: a tool with a side effect
schedule_followup looks like any other tool — the agent calls it with a description of what to do next. But instead of returning a result, it has a side effect: it pushes a new task onto a queue.
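Here's a minimal sketch of what such a tool could look like. The shared queue and the exact return string are assumptions for illustration; the point is that the tool's real work is the side effect, not the return value:

```python
from collections import deque

queue = deque()  # shared task queue, read by the outer scheduler loop

def schedule_followup(description: str) -> str:
    """Looks like any other tool, but its real work is a side effect:
    it pushes a new task onto the shared queue."""
    queue.append(description)
    return f"Scheduled follow-up: {description}"
```

The return string exists only so the agent gets confirmation in its tool result; the scheduler never reads it.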
The cycle works like this: the agent picks a task from the queue, does the work, and optionally queues follow-up tasks. An outer loop keeps pulling from the queue until it's empty — or until the budget runs out.
Agent queues. Queue feeds back to agent. Budget caps the total.
The code: two loops
Two loops do everything. The inner loop is the agent — it calls the LLM, executes tools, and repeats until the model stops requesting tool calls. That's the same loop from Lesson 2.
The outer loop is the scheduler. It pulls the next task from the queue, hands it to the inner loop, and checks the budget. If the agent called schedule_followup during its run, new tasks are already waiting in the queue.
```python
while queue and budget > 0:
    task = queue.pop(0)
    run_agent(task)  # inner loop; may call schedule_followup
    budget -= 1
```
The key insight: schedule_followup is just a tool. The agent decides when to use it. The scheduler decides when to stop.
Watch it work
A task enters the queue: "research AI safety." The agent processes it — reads sources, takes notes, and calls schedule_followup("summarize findings"). Now there's a new task in the queue. The scheduler picks it up, the agent summarizes, and the queue drains. Budget: 2 of 5 used.
The agent planned two steps of work from a single prompt. No hardcoded workflow. No DAG. Just a tool and a queue.
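You can trace the same run end to end without an LLM. In this sketch, `run_agent` is a stub standing in for the inner loop, and its decision to queue a follow-up is hardcoded; in the real system, the model makes that call:

```python
from collections import deque

queue = deque(["research AI safety"])
budget = 5
log = []

def schedule_followup(description: str) -> None:
    # side-effect tool: push a new task for the outer loop to pick up
    queue.append(description)

def run_agent(task: str) -> None:
    # stub for the inner LLM loop: "does the work", then decides
    # whether to queue a follow-up (decision hardcoded here)
    log.append(f"done: {task}")
    if task == "research AI safety":
        schedule_followup("summarize findings")

# outer loop: the scheduler
while queue and budget > 0:
    task = queue.popleft()
    run_agent(task)
    budget -= 1

print(log)         # ['done: research AI safety', 'done: summarize findings']
print(5 - budget)  # 2 of 5 used
```

The queue drains after two iterations, matching the walkthrough above: two steps of work from a single seed task, with three budget units to spare.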
Framework parallel
This pattern shows up everywhere in production agent frameworks:
- CrewAI — agent delegation, where one agent assigns tasks to another (or itself) via a task queue
- AutoGen — nested chats, where an agent spawns sub-conversations that feed results back to the parent
- LangGraph — conditional edges that route the agent back through the graph based on its own decisions
The mechanism differs but the idea is identical: the agent controls its own task list.
Try it
Run this code yourself at tinyagents.dev. Give the agent a broad goal — "plan a blog post" — and watch it break the work into steps, scheduling each one as it goes.
Next up: Lesson 9 — we put it all together. The complete agent.
This is Lesson 8 of A Tour of Agents — a free interactive course that builds an AI agent from scratch. No frameworks. No abstractions. Just the code.


