DEV Community

Arun Purushothaman

Agents in 60 Lines of Python: Part 3

The Agent Loop

The entire AI agent stack in 60 lines of Python.

You've seen Claude search files, read them, then search again. ChatGPT with Code Interpreter writes code, runs it, sees an error, fixes it, runs again. That back-and-forth isn't magic. It's a loop.

Lesson 2's agent made one tool call and stopped. That's fine for "what's 2 + 2?" but useless for anything real. Multi-step tasks — analyzing a codebase, debugging a function, researching a topic — need the agent to keep going until the job is done.

The agent loop is what makes that possible. And it's shockingly simple.


The concept

Real agents loop: call a tool, see the result, decide what's next, repeat until done. The LLM decides when to stop.

The flow is:

  1. Build messages — system prompt + user task
  2. Ask LLM — send everything so far
  3. tool_calls? — the LLM either returns an answer (done) or asks for tools
  4. Execute tool — run it, append the result
  5. Back to Ask LLM — with the growing messages array

That loop-back arrow from "Execute tool" to "Ask LLM" is the entire difference between a toy and a useful agent.
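The five steps above can be sketched as a skeleton. Here `ask_llm` and `run_tool` are placeholder parameters, not the real implementations (those come later in the post) — this is just the control flow, stood up on its own:

```python
# Skeleton of the agent loop. ask_llm and run_tool are injected
# placeholders so the control flow stands alone.
def agent_skeleton(task, ask_llm, run_tool, max_turns=5):
    messages = [
        {"role": "system", "content": "Use tools."},   # 1. build messages
        {"role": "user", "content": task},
    ]
    for _ in range(max_turns):
        msg = ask_llm(messages)                        # 2. ask LLM
        if not msg.get("tool_calls"):                  # 3. tool_calls?
            return msg.get("content", "")              #    no -> final answer
        messages.append(msg)
        for tc in msg["tool_calls"]:                   # 4. execute tool(s)
            messages.append(run_tool(tc))
        # 5. loop back: ask again with the grown messages array
    return "Max turns reached"
```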


The code: a for loop with a safety limit

async def agent(task, max_turns=5):
    messages = [
        {"role": "system", "content": "Use tools."},
        {"role": "user", "content": task},
    ]
    for turn in range(max_turns):
        msg = await ask_llm(messages)
        if not msg.get("tool_calls"):
            return msg.get("content", "")

for turn in range(max_turns) — that's the whole loop. Each iteration is one "turn": ask the LLM, check if it wants a tool. If not, return the answer. The max_turns safety limit prevents runaway loops (an LLM that keeps calling tools forever).


The dispatch: executing tools

When the LLM does ask for a tool, we run it and feed the result back:

        messages.append(msg)
        for tc in msg["tool_calls"]:
            name = tc["function"]["name"]
            args = json.loads(tc["function"]["arguments"])
            result = tools[name](**args)
            messages.append({
                "role": "tool",
                "tool_call_id": tc["id"],
                "content": str(result),
            })

The tool_call_id links each result to its request. When the LLM asks for two tools at once, this is how it knows which result belongs to which call.

The messages array grows with every turn. That's the agent's memory within a single task.
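To make the linkage concrete, here's a hypothetical assistant message requesting two tools at once — the ids are invented, but the dict shapes mirror the API's JSON:

```python
import json

tools = {"add": lambda a, b: a + b, "upper": lambda text: text.upper()}

# Hypothetical assistant message with two parallel tool calls (ids made up).
assistant_msg = {
    "role": "assistant",
    "tool_calls": [
        {"id": "call_1", "function": {"name": "add",
                                      "arguments": '{"a": 3, "b": 4}'}},
        {"id": "call_2", "function": {"name": "upper",
                                      "arguments": '{"text": "hello"}'}},
    ],
}

results = []
for tc in assistant_msg["tool_calls"]:
    args = json.loads(tc["function"]["arguments"])
    results.append({
        "role": "tool",
        "tool_call_id": tc["id"],  # pairs this result with its request
        "content": str(tools[tc["function"]["name"]](**args)),
    })
```

Even if the results came back in a different order, the tool_call_id is what lets the LLM pair "7" with call_1 and "HELLO" with call_2.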


Watch the cycle

Message in, LLM thinks, tool runs, result feeds back. Around and around until the LLM decides it has enough to answer.

Try "add 3 and 4, then uppercase hello" — the LLM chains two tool calls across multiple turns. Watch the messages array grow.
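That run can also be simulated offline with a scripted stand-in for the LLM — the replies below are made up, but use the same dict shapes as the real API, so you can watch the array grow without an API key:

```python
import json

tools = {"add": lambda a, b: a + b, "upper": lambda text: text.upper()}

# Scripted replies standing in for the LLM (shapes match the real API).
script = iter([
    {"role": "assistant", "tool_calls": [{"id": "c1", "function":
        {"name": "add", "arguments": '{"a": 3, "b": 4}'}}]},
    {"role": "assistant", "tool_calls": [{"id": "c2", "function":
        {"name": "upper", "arguments": '{"text": "hello"}'}}]},
    {"role": "assistant", "content": "7 and HELLO"},
])

messages = [{"role": "user", "content": "add 3 and 4, then uppercase hello"}]
sizes = []
for turn in range(5):
    msg = next(script)                      # stand-in for await ask_llm(messages)
    if not msg.get("tool_calls"):
        answer = msg["content"]
        break
    messages.append(msg)
    for tc in msg["tool_calls"]:
        args = json.loads(tc["function"]["arguments"])
        messages.append({"role": "tool", "tool_call_id": tc["id"],
                         "content": str(tools[tc["function"]["name"]](**args))})
    sizes.append(len(messages))

print(answer, sizes)  # the array grows by two messages per tool turn
```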


What changed from Lesson 2

Lesson 2 was a single LLM call: ask once, maybe use a tool, done. That can't handle "search for files, then read the interesting ones, then summarize." The loop is what makes agents actually useful.

This is the entire runtime of LangChain's AgentExecutor, OpenAI's Agents SDK, AutoGen — at its core, a bounded loop over a growing messages array. Every agent framework wraps this pattern. Now you've seen it bare.


The full code

Here's the complete agent, imports included, in about 40 lines:

import json

from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

tools = {"add": lambda a, b: a + b, "upper": lambda text: text.upper()}
TOOL_DEFS = [
    {"type": "function", "function": {"name": "add", "description": "Add two numbers",
        "parameters": {"type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
            "required": ["a", "b"]}}},
    {"type": "function", "function": {"name": "upper", "description": "Uppercase text",
        "parameters": {"type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"]}}},
]

async def ask_llm(messages):
    resp = await client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=TOOL_DEFS
    )
    # The SDK returns a message object, not a dict. Dump it to a plain
    # dict so the loop can use .get() / [] access and append it back.
    return resp.choices[0].message.model_dump(exclude_none=True)

async def agent(task, max_turns=5):
    messages = [
        {"role": "system", "content": "Use tools to answer. Be concise."},
        {"role": "user", "content": task},
    ]
    for turn in range(max_turns):
        msg = await ask_llm(messages)
        if not msg.get("tool_calls"):
            return msg.get("content", "")
        messages.append(msg)
        for tc in msg["tool_calls"]:
            name = tc["function"]["name"]
            args = json.loads(tc["function"]["arguments"])
            result = tools[name](**args)
            messages.append({
                "role": "tool",
                "tool_call_id": tc["id"],
                "content": str(result),
            })
    return "Max turns reached"

Try it

Run this code yourself at tinyagents.dev. Lesson 3 has a live sandbox — type a task, watch the loop cycle through the graph in real time.

Next up: Lesson 4 — Conversation. The agent loop handles a single task. But what about back-and-forth dialogue? That's where conversation memory comes in.


This is Lesson 3 of A Tour of Agents — a free interactive course that builds an AI agent from scratch. No frameworks. No abstractions. Just the code.
