
Arun Purushothaman

Agents in 60 lines of Python: Part 2

Tools = Dict

When ChatGPT says "Used browser" or Claude says "Running search" — what's actually happening? The LLM can't run code. But it can say "call add with a=10, b=5" — a structured request. Your code executes it.

That's the whole trick. Tools are functions in a dictionary. The LLM picks which one to call.


The concept

The LLM decides. Your code executes.

A message goes in. The LLM either returns a tool call (a structured request with a function name and arguments) or plain text. If it's a tool call, your code looks it up and runs it. If not, you return the text directly.

One dict. One lookup. One call.


The registry

Step one: build a dictionary of callables. Every key is a tool name. Every value is the function that runs it.

```python
tools = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}
```

Lambda, function, class method — anything that takes arguments and returns a value. That's your tool registry.
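Executing an entry is just a lookup and a call with keyword arguments:

```python
tools = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

# Look up by name, call with keyword arguments
print(tools["add"](a=10, b=5))       # 15
print(tools["upper"](text="hello"))  # HELLO
```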

You also describe these tools for the LLM using JSON Schema, so it knows what exists and what arguments each tool accepts:

```python
TOOL_DEFS = [
    {"type": "function", "function": {
        "name": "add", "description": "Add two numbers",
        "parameters": {"type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}}}}},
    {"type": "function", "function": {
        "name": "upper", "description": "Uppercase a string",
        "parameters": {"type": "object",
            "properties": {"text": {"type": "string"}}}}},
]
```

This is the wire format OpenAI and Groq expect in the tools field. When ChatGPT shows that little plugin icon before calling a tool, a schema like this is how it knew the tool existed.
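As a sketch of where these definitions travel: an OpenAI-compatible chat request carries them in a top-level tools field alongside the messages. The model name below is a placeholder, not something from the post:

```python
import json

# Abbreviated to one tool for brevity
TOOL_DEFS = [
    {"type": "function", "function": {
        "name": "add", "description": "Add two numbers",
        "parameters": {"type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}}}}},
]

# The request body an OpenAI-compatible chat endpoint expects;
# "gpt-4o-mini" is a placeholder model name
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "add 10 and 5"}],
    "tools": TOOL_DEFS,
}
print(json.dumps(payload, indent=2))
```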


The dispatch

When the LLM wants a tool, your agent looks it up by name and calls it with the arguments. One line does the work:

```python
async def agent(task):
    d = await ask_llm(task)
    if d.get("tool") and d["tool"] in tools:
        result = tools[d["tool"]](**d["args"])
        return f"{d['tool']}({d['args']}) = {result}"
    return d.get("text", "No tool needed")
```

The key line: tools[d["tool"]](**d["args"]). Dictionary lookup, then call with keyword arguments. Same pattern as an Express router or Redux reducer.

tools[name](**args) — that's the whole pattern.
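In practice, the dict d comes from parsing the model's response: OpenAI-style responses carry tool requests in a tool_calls field, with arguments as a JSON string. A minimal sketch of that parsing step, using a canned message so it runs without a network call (parse_response is a hypothetical helper, not from the post):

```python
import json

def parse_response(message: dict) -> dict:
    """Convert an OpenAI-style assistant message into the
    {"tool": ..., "args": ...} shape the dispatcher expects."""
    calls = message.get("tool_calls")
    if calls:
        fn = calls[0]["function"]
        # Arguments arrive as a JSON string, not a dict
        return {"tool": fn["name"], "args": json.loads(fn["arguments"])}
    return {"text": message.get("content", "")}

# Canned message, as if the LLM asked to call add(a=10, b=5)
message = {"tool_calls": [{"function": {
    "name": "add", "arguments": '{"a": 10, "b": 5}'}}]}
print(parse_response(message))  # {'tool': 'add', 'args': {'a': 10, 'b': 5}}
```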


The data flow

Message goes in. LLM decides — tool call, or plain text. If it's a tool, look up, execute, return. No magic.
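Putting registry and dispatch together, here is a self-contained sketch with a stubbed ask_llm standing in for the real LLM call. The stub always requests add, purely so the example runs offline:

```python
import asyncio

tools = {"add": lambda a, b: a + b}

async def ask_llm(task):
    # Stub standing in for a real LLM call:
    # always returns a tool request for "add"
    return {"tool": "add", "args": {"a": 10, "b": 5}}

async def agent(task):
    d = await ask_llm(task)
    if d.get("tool") and d["tool"] in tools:
        result = tools[d["tool"]](**d["args"])
        return f"{d['tool']}({d['args']}) = {result}"
    return d.get("text", "No tool needed")

print(asyncio.run(agent("add 10 and 5")))  # add({'a': 10, 'b': 5}) = 15
```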


Framework parallel

LangChain's @tool decorator, CrewAI's tool registration — they build this dict for you. Here you've seen what's inside.
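A sketch of what such a decorator does under the hood; this minimal tool decorator is hypothetical, not LangChain's actual implementation:

```python
tools = {}

def tool(fn):
    # Register the function under its own name, then return it unchanged
    tools[fn.__name__] = fn
    return fn

@tool
def add(a, b):
    return a + b

print(tools["add"](a=2, b=3))  # 5
```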


Try it

Run it yourself at tinyagents.dev. Try "add 10 and 5" — the LLM returns a tool call, your code executes it. Try "what is Python?" — no tool needed, the LLM answers directly. The LLM decides.

Next up: Lesson 3 — The Agent Loop.


This is Lesson 2 of A Tour of Agents — a free interactive course that builds an AI agent from scratch. No frameworks. No abstractions. Just the code.
