Smart people keep asking me the wrong questions about AI agents. Not because they're not smart — because they're missing the vocabulary.
───
I've been building AI agents that trade real money on prediction markets. And every time I explain what I'm doing to someone outside the AI bubble, I hit the same wall: they don't have the words to ask the right questions.
"So it's like a bot?" Kind of. "Is it safe?" Depends what you mean by safe. "Who's in control?" That's actually the most important question — and most people can't even frame it yet.
Here are the terms that changed how I think about all of this.
───
🧠 What an AI Agent Actually Is
LLM (Large Language Model)
The brain. Trained on enormous amounts of text, it predicts what comes next — whether that's a word, a sentence, or a decision. GPT, Claude, Gemini: all LLMs. An LLM alone just talks. It can't do.
Agent
An LLM plus tools plus a loop. The defining difference: an agent can affect the world outside the conversation. It can read your email, place a trade, send a message, run code. If an LLM is a consultant giving advice, an agent is the assistant who actually books the flight.
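"LLM plus tools plus a loop" can be sketched in a few lines. This is purely illustrative: `fake_llm` stands in for a real model API, and the `CALL` convention is a made-up stand-in for real tool-calling formats.

```python
def get_balance():
    """A tool: in a real agent this would hit a wallet API."""
    return "$120.00"

TOOLS = {"get_balance": get_balance}

def fake_llm(history):
    """Stand-in model: reaches for a tool once, then answers."""
    if not any(m.startswith("tool:") for m in history):
        return "CALL get_balance"          # the model decides to use a tool
    return "Your balance is $120.00."      # the model answers using the result

def run_agent(task):
    history = [f"user: {task}"]
    while True:                            # the loop
        reply = fake_llm(history)
        if reply.startswith("CALL "):      # model asked for a tool
            name = reply.split()[1]
            result = TOOLS[name]()         # the scaffolding executes it
            history.append(f"tool: {name} -> {result}")
        else:
            return reply                   # final answer, loop ends

print(run_agent("How much money do I have?"))
```

The consultant/assistant distinction lives in that `TOOLS[name]()` line: that's the moment the model's words become an action in the world.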
Skill / Tool
What the agent can reach for. Search the web. Check a wallet balance. Place a Polymarket order. Each capability is a "tool" or "skill" — discrete actions the agent can choose to invoke. The agent decides when to use them. You decide which to give it.
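Concretely, a tool is usually declared to the model as a schema it can choose to invoke. The field names below are illustrative, loosely modeled on common function-calling APIs; the Polymarket order is a hypothetical example, not a real endpoint.

```python
# A tool declaration: what the model sees when deciding whether to use it.
place_order_tool = {
    "name": "place_polymarket_order",
    "description": "Place a limit order on a Polymarket market.",
    "parameters": {
        "type": "object",
        "properties": {
            "market_id": {"type": "string"},
            "side": {"type": "string", "enum": ["BUY", "SELL"]},
            "price": {"type": "number", "description": "Probability price, 0-1"},
            "size_usd": {"type": "number"},
        },
        "required": ["market_id", "side", "price", "size_usd"],
    },
}
```

Note the division of labor: the schema tells the model *how* to call the tool; whether the tool exists at all is your call.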
───
⚙️ How It Thinks
Token
LLMs don't read words — they read tokens. Chunks of text: sometimes a full word, sometimes a syllable, sometimes punctuation. Everything about capacity and cost is measured in tokens. When people say "this model has a 200k context window" — that's 200,000 tokens, roughly 150,000 words.
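The 200k-tokens-to-150k-words conversion comes from a rule of thumb: English text averages roughly 4 characters, or 0.75 words, per token. This is a back-of-envelope estimate, not a real tokenizer.

```python
def estimate_tokens(text):
    """Rough heuristic: ~4 characters per token for English prose."""
    return max(1, round(len(text) / 4))

def tokens_to_words(tokens):
    """Rough heuristic: ~0.75 words per token."""
    return round(tokens * 0.75)

print(tokens_to_words(200_000))  # a 200k context window ≈ 150,000 words
```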
Context Window
The agent's working memory. Everything it can "see" at once: the conversation history, the documents you fed it, the tools it has, the task you gave it. When the window fills, old content gets dropped. This is why long-running agents sometimes "forget" earlier instructions — they literally ran out of room.
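That "forgetting" is mechanical, and a sketch makes it obvious. The window size and message format below are illustrative; real harnesses use smarter strategies (summarization, pinning the system prompt), but the failure mode is the same.

```python
MAX_TOKENS = 100  # toy window size

def estimate_tokens(msg):
    return len(msg) // 4 + 1  # rough heuristic, not a real tokenizer

def trim_history(history):
    """Drop the oldest messages until the history fits the window."""
    while sum(estimate_tokens(m) for m in history) > MAX_TOKENS and len(history) > 1:
        history.pop(0)  # the earliest instruction silently disappears
    return history
```

Run a long conversation through this and your first instruction is simply gone; the agent isn't being disobedient, it never saw it.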
Hallucination
When the model generates something confident, fluent, and wrong. Not lying — it has no concept of truth. It's pattern-matching on what a plausible response looks like. For a writing assistant, hallucinations are annoying. For an agent managing money, they're dangerous. This is why agent design matters: you constrain the agent so a hallucination can't cause real-world damage.
───
🏗️ How Agents Are Built
System Prompt
The instructions baked in before the conversation starts. This is where you define the agent's personality, constraints, and goals. "Never delete messages." "Only trade markets with >$50k liquidity." "Always ask before spending more than $100." The system prompt is how you control what the agent does with the power you've given it.
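Those quoted constraints, written out as an actual system prompt. The message structure mirrors common chat APIs; the rules themselves are the ones from the paragraph above.

```python
SYSTEM_PROMPT = """You are a trading agent for prediction markets.

Hard rules:
- Never delete messages.
- Only trade markets with more than $50,000 in liquidity.
- Always ask the user before spending more than $100.
"""

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},  # baked in before the chat starts
    {"role": "user", "content": "Find me a good trade."},
]
```

One caveat worth knowing: a system prompt is an instruction, not an enforcement mechanism. For rules that must never break, you enforce them in code outside the model too.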
Harness
The scaffolding around the LLM. System prompt, tool definitions, memory, error handling, retry logic — everything that turns a raw model into a useful agent. The model is the engine. The harness is the car. A powerful engine in the wrong chassis will still crash.
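A sliver of what a harness does, as code: retry logic and error handling wrapped around a raw model call. `call_model` is a placeholder that always fails, standing in for any real LLM API having a bad day.

```python
import time

def call_model(prompt):
    """Placeholder for a real LLM API call; simulates a transient failure."""
    raise TimeoutError("transient network blip")

def harness(prompt, retries=3, model=call_model):
    """Retry transient failures; fail closed instead of crashing mid-task."""
    for attempt in range(retries):
        try:
            return model(prompt)
        except TimeoutError:
            time.sleep(0)  # real code would back off, e.g. 2 ** attempt seconds
    return "ERROR: model unavailable, no action taken"  # safe default

print(harness("Should I buy?"))
```

"Fail closed" is the chassis doing its job: when the engine sputters, the car stops rather than steering itself into a wall.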
MCP (Model Context Protocol)
A standard for connecting AI agents to external data — files, calendars, databases, APIs. Think of it as USB-C for AI: one common connector instead of a custom integration for every source.
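In practice, wiring an agent to an MCP server is often just a config entry telling the harness how to launch it. The shape below is loosely based on common MCP client configs (shown here as a Python dict); the directory path is a placeholder.

```python
# One entry per server; the harness launches the command and speaks
# the protocol over its stdin/stdout.
mcp_config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": [
                "-y",
                "@modelcontextprotocol/server-filesystem",
                "/home/me/docs",  # placeholder: the folder the agent may read
            ],
        }
    }
}
```

The USB-C analogy holds: the agent doesn't need custom code for each data source, it just needs the plug.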