The Complete AI Agent in 60 Lines of Python — No Frameworks
Lesson 9 of 9 — A Tour of Agents (Series Finale)
The entire AI agent stack in 60 lines of Python.

LangChain, CrewAI, AutoGen — thousands of lines of abstraction. You don't need any of it. You need 60 lines of Python, a JSON spec, and one HTTP call.
This is the finale. Nine lessons, each adding one concept. Every concept fits in the same file. Here's everything we built.
The full journey
| Lesson | Title | What it adds |
|---|---|---|
| 1 | The Agent Function | A single function that calls an LLM via HTTP POST |
| 2 | Tools = Dict | Tool definitions as JSON, tool calls dispatched from a dictionary |
| 3 | The Agent Loop | A while-loop that keeps calling the LLM until it stops requesting tools |
| 4 | Conversation = Messages Array | Multi-turn context by appending every message to a list |
| 5 | State = Dict | A shared dictionary the agent reads and writes across tool calls |
| 6 | Memory Across Runs | Persisting facts to disk so the agent remembers between sessions |
| 7 | Guardrails | Input and output gates that block harmful requests before the LLM runs |
| 8 | Self-Scheduling | A tool that adds tasks to a queue — the agent plans its own work |
| 9 | The Whole Thing | All eight concepts composed into one 60-line agent |
Every row builds on the one above it. Nothing was thrown away. The function from Lesson 1 is still there in Lesson 9.
The architecture

One request flows through the entire stack:
- Input gate checks the request — blocks anything harmful before the LLM sees it
- Memory loads persisted facts into the prompt
- The loop runs — same loop since Lesson 3 — calling the LLM, executing tools, repeating
- State tracks every call in a shared dictionary
- Output gate checks the response before it reaches the user
- Scheduler picks the next task from the queue
Six concepts. One flow. No framework required.
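The six stages above can be sketched end to end with a stubbed model call, so the flow runs offline. This is a minimal sketch, not the real agent: `fake_llm`, its canned reply, and the seed data are illustrative stand-ins for the HTTP call in the full listing below.

```python
# Minimal sketch of the six-stage flow; fake_llm is an illustrative stub.
BLOCKED = ["delete", "drop", "destroy"]
memory, queue = ["user name is Alice"], ["say hello"]

def fake_llm(msgs):
    # Stand-in for the real HTTP call: echoes the memory it was given.
    return {"role": "assistant", "content": f"Hello! ({msgs[0]['content']})"}

def handle(request):
    # 1. Input gate: block harmful requests before any model call.
    if any(w in request.lower() for w in BLOCKED):
        raise ValueError("blocked")
    # 2. Memory: load persisted facts into the prompt.
    msgs = [{"role": "system", "content": f"Memory: {memory}"},
            {"role": "user", "content": request}]
    # 3. The loop: call the LLM until it stops requesting tools.
    while True:
        reply = fake_llm(msgs)
        msgs.append(reply)
        if not reply.get("tool_calls"):  # 4. State would track each tool call here.
            break
    # 5. Output gate: check the response before the user sees it.
    if any(w in reply["content"].lower() for w in BLOCKED):
        raise ValueError("blocked")
    return reply["content"]

# 6. Scheduler: pick the next task from the queue.
while queue:
    print(handle(queue.pop(0)))
```

Swap `fake_llm` for a real API call and this skeleton becomes the agent in the full listing.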
Watch it work — three queries, one agent
Query 1: "Remember my name is Alice, then add 10 and 5."
The agent saves "Alice" to memory, runs the calculator tool, returns 15. Two capabilities in one turn.
Query 2: "What is my name?" (new session)
Alice. From memory. The agent loaded persisted facts from disk — it remembered across sessions.
Query 3: "Delete the database."
Blocked. The input gate caught it before the LLM even ran. No tool call. No response. Just a rejection.
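The gate that catches Query 3 is a one-line keyword check, as in the full listing below. A minimal sketch of the rejection in isolation:

```python
BLOCKED = ["delete", "drop", "destroy"]

def input_gate(msg):
    # Reject the request before it ever reaches the LLM.
    if any(w in msg.lower() for w in BLOCKED):
        raise ValueError("Blocked by input gate")

try:
    input_gate("Delete the database.")
except ValueError as e:
    print(e)  # prints: Blocked by input gate

input_gate("Remember my name is Alice.")  # passes silently
```

No LLM call, no tool dispatch: the exception short-circuits the whole pipeline.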
Same 60 lines. Every feature you built across nine lessons.
The complete code
```python
import json, os, httpx

SYSTEM = "You are a helpful assistant."

TOOLS = [
    {"type": "function", "function": {
        "name": "add", "description": "Add two numbers",
        "parameters": {"type": "object", "properties": {
            "a": {"type": "number"}, "b": {"type": "number"},
        }, "required": ["a", "b"]},
    }},
    {"type": "function", "function": {
        "name": "save_memory", "description": "Save a fact to memory",
        "parameters": {"type": "object", "properties": {
            "fact": {"type": "string"},
        }, "required": ["fact"]},
    }},
    {"type": "function", "function": {
        "name": "schedule_followup", "description": "Queue a follow-up task",
        "parameters": {"type": "object", "properties": {
            "task": {"type": "string"},
        }, "required": ["task"]},
    }},
]

BLOCKED = ["delete", "drop", "destroy"]
memory, queue, state = [], [], {"calls": 0}

def input_gate(msg):
    if any(w in msg.lower() for w in BLOCKED):
        raise ValueError("Blocked by input gate")

def output_gate(msg):
    if any(w in msg.lower() for w in BLOCKED):
        raise ValueError("Blocked by output gate")

def run_tool(name, args):
    state["calls"] += 1
    if name == "add": return args["a"] + args["b"]
    if name == "save_memory": memory.append(args["fact"]); return "saved"
    if name == "schedule_followup": queue.append(args["task"]); return "queued"

def agent(user_msg):
    input_gate(user_msg)
    msgs = [
        {"role": "system", "content": SYSTEM + f"\nMemory: {memory}"},
        {"role": "user", "content": user_msg},
    ]
    while True:
        r = httpx.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"model": "gpt-4o-mini", "messages": msgs, "tools": TOOLS},
        ).json()["choices"][0]["message"]
        msgs.append(r)
        if not r.get("tool_calls"):
            break
        for tc in r["tool_calls"]:
            result = run_tool(tc["function"]["name"],
                              json.loads(tc["function"]["arguments"]))
            msgs.append({"role": "tool", "tool_call_id": tc["id"],
                         "content": str(result)})
    output_gate(r["content"])
    return r["content"]

budget = 5
queue.append("start")
while queue and budget > 0:
    task = queue.pop(0)
    try:
        print(agent(task))
    except ValueError as e:
        print(e)  # a blocked task is rejected, not a crash
    budget -= 1
```
That's it. Function, tools, loop, memory, state, persistence, guardrails, scheduling. All in one file. All composing.
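One caveat: the listing keeps `memory` in a Python list, which only lives as long as the process. Lesson 6's across-run persistence takes a few extra lines of `json`. This is a sketch of that piece; the `memory.json` filename and the helper names are illustrative choices, not part of the 60-line listing.

```python
import json, os

MEMORY_FILE = "memory.json"  # illustrative filename

def load_memory():
    # Read persisted facts from disk; start empty on the first run.
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return json.load(f)
    return []

def save_memory(memory):
    # Write the full fact list back after every change.
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f)

memory = load_memory()
memory.append("user name is Alice")
save_memory(memory)
```

With `load_memory()` at startup and `save_memory()` inside the `save_memory` tool, Query 2's "new session" recall works exactly as described.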
What this replaces
Every production agent framework implements these same eight concepts — they just bury them under thousands of lines of abstraction:
- LangChain — chains, agents, memory modules, output parsers, callback handlers
- CrewAI — agent classes, task objects, crew orchestration, delegation protocols
- AutoGen — conversable agents, group chats, nested conversations, code executors
You don't need to learn any of that to understand what an agent is. You need a function, a loop, a dictionary, and a file.
The series, in one sentence
An AI agent is a while-loop that calls an LLM, executes tools, and repeats — with memory, state, guardrails, and a scheduler layered on top.
That's it. 60 lines. No magic.
Get the code
Every lesson, every code sample, every concept — running and interactive at tinyagents.dev.
Fork it. Break it. Build on it. The whole point is that you can — because it's just 60 lines of Python.
This is the final lesson of A Tour of Agents — a free interactive course that builds an AI agent from scratch. No frameworks. No abstractions. Just the code.
🎬 Video: lesson09-the-whole-thing.mp4
