The Agent Function
Lesson 1 of 9 — A Tour of Agents
Every time you send a message to ChatGPT, Claude, or any LLM — your app makes one HTTP POST request and gets a response back. That's it. No magic. No framework. One function.
This is where agents start.
What is an agent, really?
Strip away the buzzwords and an AI agent is a pipeline with four steps:
Your message → agent() → POST /completions → Response
Your message goes in. A function wraps it into the right format. An HTTP call goes out. A response comes back.
That flow diagram is the entire architecture of Lesson 1. There's no orchestration engine. No agent framework. Just a function that talks to an API.
The function
Here's the core of it — a function called ask_llm:
```python
async def ask_llm(messages):
    resp = await pyfetch(
        "https://api.groq.com/openai/v1/chat/completions",
        method="POST",
        headers={
            "Authorization": f"Bearer {KEY}",
            "Content-Type": "application/json",
        },
        body=json.dumps({
            "model": "llama-3.3-70b-versatile",
            "messages": messages,
        }),
    )
    data = await resp.json()
    return data["choices"][0]["message"]["content"]
```
This function takes a list of messages, sends them to an LLM API, and returns the response content. That's the entire "AI" part of an agent.
The messages array is the conversation format every LLM uses — a list of objects with a role ("system", "user", "assistant") and content. The API doesn't remember previous messages. You send the full conversation every time.
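Because the API is stateless, a multi-turn conversation is just a growing list that you re-send in full on every request. A minimal sketch:

```python
# The conversation format: a list of role/content dicts.
# The API doesn't remember anything, so every request
# carries the entire history up to that point.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# After the model replies, append its answer, then the next question.
history.append({"role": "assistant", "content": "Paris."})
history.append({"role": "user", "content": "And its population?"})

# The second request sends all four messages, not just the new one.
print(len(history))  # 4
```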
The wrapper
Now wrap it:
```python
async def agent(prompt):
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]
    return await ask_llm(messages)
```

(Note the `async`/`await`: `ask_llm` is a coroutine, so the wrapper has to await it.)
One function. One API call.
The system message sets the behavior. The user message is whatever you type. The function passes both to the LLM and returns what it says.
This is the simplest possible agent. It can't use tools. It can't loop. It can't remember anything between calls. But it's the foundation everything else builds on.
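You can see the wrapper's logic without touching the network by swapping in a stub for `ask_llm`. The stub below is purely illustrative, not part of the lesson's code:

```python
import asyncio

async def ask_llm(messages):
    # Stand-in for the real API call: just echo the user's last message.
    return f"You asked: {messages[-1]['content']}"

async def agent(prompt):
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]
    return await ask_llm(messages)

answer = asyncio.run(agent("What is the capital of France?"))
print(answer)  # You asked: What is the capital of France?
```

Replace the stub with the real `ask_llm` and nothing else changes: the wrapper's job is only to build the messages array.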
Watch the data flow
When you call agent("What is the capital of France?"), here's what happens:
- Your string gets wrapped into a messages array
- ask_llm sends that array as an HTTP POST to /chat/completions
- The API returns a JSON response
- You extract choices[0].message.content — that's your answer
No state. No memory. No side effects. Data flows in one direction: message in, response out.
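The JSON coming back looks roughly like this, trimmed to the one field the function reads (real responses carry more metadata, such as usage counts and a finish reason):

```python
import json

# A trimmed example of a chat-completions response body.
raw = """
{
  "choices": [
    {"message": {"role": "assistant", "content": "Paris."}}
  ]
}
"""

data = json.loads(raw)

# The same extraction ask_llm performs on the real response.
answer = data["choices"][0]["message"]["content"]
print(answer)  # Paris.
```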
Try it yourself
This lesson runs entirely in your browser at tinyagents.dev. Type a message, watch it flow through the diagram, see the HTTP request, read the response.
The next 8 lessons build on this foundation:
- Lesson 2 adds tools (a Python dictionary, literally)
- Lesson 3 adds the agent loop (a for loop with a safety limit)
- By Lesson 9, you'll have a complete agent with tools, memory, streaming, and multi-agent orchestration
All in about 60 lines of Python.
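As a taste of what's coming, the "tools" and "loop" those lessons add really are that simple: a dictionary of plain functions and a bounded for loop. A hypothetical sketch, not the course's exact code:

```python
# Hypothetical preview: tools are just a dict mapping names to functions.
tools = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

# The agent loop is a for loop with a safety limit on iterations.
MAX_STEPS = 5
for step in range(MAX_STEPS):
    # In the real lessons the LLM picks the tool and its arguments;
    # here we hard-code one call to show the shape.
    result = tools["add"](2, 3)
    break  # stop once the task is done

print(result)  # 5
```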
This is Lesson 1 of A Tour of Agents — a free interactive course that builds an AI agent from scratch. No frameworks. No abstractions. Just the code.



