<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ilya Gordey</title>
    <description>The latest articles on DEV Community by Ilya Gordey (@alienjesus).</description>
    <link>https://dev.to/alienjesus</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3838921%2F769b3244-dc31-4e18-a4a2-e8235c52826f.jpg</url>
      <title>DEV Community: Ilya Gordey</title>
      <link>https://dev.to/alienjesus</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alienjesus"/>
    <language>en</language>
    <item>
      <title>AI Agent Vocab 101: The Terms That Actually Matter</title>
      <dc:creator>Ilya Gordey</dc:creator>
      <pubDate>Sun, 22 Mar 2026 21:30:22 +0000</pubDate>
      <link>https://dev.to/alienjesus/ai-agent-vocab-101-the-terms-that-actually-matter-12d2</link>
      <guid>https://dev.to/alienjesus/ai-agent-vocab-101-the-terms-that-actually-matter-12d2</guid>
      <description>&lt;p&gt;&lt;em&gt;Smart people keep asking me the wrong questions about AI agents. Not because they're not smart — because they're missing the vocabulary.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;───&lt;/p&gt;

&lt;p&gt;I've been building AI agents that trade real money on prediction markets. And every time I explain what I'm doing to someone outside the AI bubble, I hit the same wall: they don't have the words to ask the right questions.&lt;/p&gt;

&lt;p&gt;"So it's like a bot?" Kind of. "Is it safe?" Depends what you mean by safe. "Who's in control?" That's actually the most important question — and most people can't even frame it yet.&lt;/p&gt;

&lt;p&gt;Here are the terms that changed how I think about all of this.&lt;/p&gt;

&lt;p&gt;───&lt;/p&gt;

&lt;p&gt;🧠 What an AI Agent Actually Is&lt;/p&gt;

&lt;p&gt;LLM (Large Language Model)&lt;br&gt;
The brain. Trained on enormous amounts of text, it predicts what comes next — whether that's a word, a sentence, or a decision. GPT, Claude, Gemini: all LLMs. An LLM alone just talks. It can't do.&lt;/p&gt;

&lt;p&gt;Agent&lt;br&gt;
An LLM plus tools plus a loop. The defining difference: an agent can affect the world outside the conversation. It can read your email, place a trade, send a message, run code. If an LLM is a consultant giving advice, an agent is the assistant who actually books the flight.&lt;/p&gt;

&lt;p&gt;Skill / Tool&lt;br&gt;
What the agent can reach for. Search the web. Check a wallet balance. Place a Polymarket order. Each capability is a "tool" or "skill" — discrete actions the agent can choose to invoke. The agent decides when to use them. You decide which to give it.&lt;/p&gt;

&lt;p&gt;───&lt;/p&gt;

&lt;p&gt;⚙️ How It Thinks&lt;/p&gt;

&lt;p&gt;Token&lt;br&gt;
LLMs don't read words — they read tokens. Chunks of text: sometimes a full word, sometimes a syllable, sometimes punctuation. Everything about capacity and cost is measured in tokens. When people say "this model has a 200k context window" — that's 200,000 tokens, roughly 150,000 words.&lt;/p&gt;
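
&lt;p&gt;That tokens-to-words conversion can be sketched in a few lines. The 4-characters-per-token ratio is a rough rule of thumb for English text, not any model's real tokenizer:&lt;/p&gt;

```python
import math

def estimate_tokens(text: str) -> int:
    # Heuristic only: ~4 characters per token for English text.
    # Real tokenizers (BPE variants) split differently per model.
    return math.ceil(len(text) / 4)
```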

&lt;p&gt;Context Window&lt;br&gt;
The agent's working memory. Everything it can "see" at once: the conversation history, the documents you fed it, the tools it has, the task you gave it. When the window fills, old content gets dropped. This is why long-running agents sometimes "forget" earlier instructions — they literally ran out of room.&lt;/p&gt;
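
&lt;p&gt;The simplest trimming strategy (drop the oldest turns, keep the system prompt) looks roughly like this. The cost function reuses the same rough 4-chars-per-token heuristic, and real agents often summarize old turns instead of dropping them:&lt;/p&gt;

```python
def trim_to_budget(messages, budget_tokens):
    """Keep the newest messages that fit, always preserving the system prompt."""
    cost = lambda m: -(-len(m["content"]) // 4)  # ceil division, ~4 chars/token
    system, rest = messages[0], messages[1:]
    remaining = budget_tokens - cost(system)
    kept = []
    for msg in reversed(rest):              # walk newest-first
        if cost(msg) > remaining:
            break                           # oldest turns fall off here
        remaining -= cost(msg)
        kept.insert(0, msg)
    return [system] + kept
```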

&lt;p&gt;Hallucination&lt;br&gt;
When the model generates something confident, fluent, and wrong. Not lying — it has no concept of truth. It's pattern-matching on what a plausible response looks like. For a writing assistant, hallucinations are annoying. For an agent managing money, they're dangerous. This is why agent design matters: you constrain the agent so a hallucination can't cause real-world damage.&lt;/p&gt;
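
&lt;p&gt;One way that constraint shows up in code: hard checks that run outside the model, so a hallucinated market name or an absurd size never reaches execution. The names and thresholds below are illustrative, not from any real codebase:&lt;/p&gt;

```python
def validate_trade(trade, known_markets, max_size_usd):
    """Return None if the trade passes every hard check, else a refusal reason."""
    if trade["market"] not in known_markets:
        # The model invented a market that does not exist on-chain.
        return "rejected: unknown market (possible hallucination)"
    if not trade["size_usd"] > 0:
        return "rejected: non-positive size"
    if trade["size_usd"] > max_size_usd:
        return "rejected: size above hard limit"
    return None
```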

&lt;p&gt;🏗️ How Agents Are Built&lt;/p&gt;

&lt;p&gt;System Prompt&lt;br&gt;
The instructions baked in before the conversation starts. This is where you define the agent's personality, constraints, and goals. "Never delete messages." "Only trade markets with &amp;gt;$50k liquidity." "Always ask before spending more than $100." The system prompt is how you control what the agent does with the power you've given it.&lt;/p&gt;
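
&lt;p&gt;In an OpenAI-style chat request, those constraints are just the first message. A minimal sketch, assuming the common role/content message shape (exact fields vary by provider):&lt;/p&gt;

```python
# Constraints live in plain text; the model sees them before every turn.
SYSTEM_PROMPT = "\n".join([
    "You are a Polymarket trading assistant.",
    "Only trade markets with more than $50k liquidity.",
    "Always ask before spending more than $100.",
])

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},  # baked in before the conversation
    {"role": "user", "content": "Find me a trade."},
]
```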

&lt;p&gt;Harness&lt;br&gt;
The scaffolding around the LLM. System prompt, tool definitions, memory, error handling, retry logic — everything that turns a raw model into a useful agent. The model is the engine. The harness is the car. A powerful engine in the wrong chassis will still crash.&lt;/p&gt;
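
&lt;p&gt;The "LLM plus tools plus a loop" idea fits in a dozen lines. This is a sketch with a stubbed model; a real harness adds retries, error handling, and memory on top:&lt;/p&gt;

```python
def run_agent(model, tools, task, max_steps=5):
    history = [task]
    for _ in range(max_steps):
        action = model(history)          # stub contract: model returns a dict
        if "answer" in action:
            return action["answer"]      # final answer ends the loop
        tool = tools.get(action["tool"])
        result = tool(action["input"]) if tool else "error: unknown tool"
        history.append(action["tool"] + " -> " + str(result))  # feed result back
    return "gave up: step limit reached"
```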

&lt;p&gt;MCP (Model Context Protocol)&lt;br&gt;
A standard for connecting AI agents to external data — files, calendars, databases, APIs. Think of it as USB-C for AI.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>agents</category>
    </item>
    <item>
      <title>I Built an AI Agent That Trades Polymarket 24/7 — Here's How It Works</title>
      <dc:creator>Ilya Gordey</dc:creator>
      <pubDate>Sun, 22 Mar 2026 21:18:31 +0000</pubDate>
      <link>https://dev.to/alienjesus/i-built-an-ai-agent-that-trades-polymarket-247-heres-how-it-works-4o1e</link>
      <guid>https://dev.to/alienjesus/i-built-an-ai-agent-that-trades-polymarket-247-heres-how-it-works-4o1e</guid>
      <description>&lt;p&gt;Prediction markets. Whale wallets. And the question that kept me up at night.&lt;/p&gt;

&lt;p&gt;It Started With Watching Wallets&lt;/p&gt;

&lt;p&gt;I was manually trading on Polymarket. Losing, mostly — not because my analysis was wrong, but because by the time I'd read the news, checked the odds, and placed the trade, someone had already moved the market.&lt;/p&gt;

&lt;p&gt;Then I started looking at on-chain data. Some wallets were consistently early. 58%+ win rate across hundreds of trades. Not lucky — systematic.&lt;/p&gt;

&lt;p&gt;They weren't smarter. They were faster. And they clearly had automation.&lt;/p&gt;

&lt;p&gt;So I started building mine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Learned Building an Autonomous Trading Agent&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Getting the data is easy. Acting on it is hard.&lt;/p&gt;

&lt;p&gt;Polygon is public. You can read every wallet, every trade, every timestamp. The signal is all there — whale opens a position at 2am, market moves 20% by morning.&lt;/p&gt;

&lt;p&gt;The problem is execution speed and geo-blocks. Polymarket blocks most of the world (including where I live — Thailand). The CLOB API is finicky. And placing an order requires: USDC approval, wallet signing, order construction, submission — four steps that need to happen in seconds.&lt;/p&gt;

&lt;p&gt;I ended up routing everything through Vercel's Tokyo region (hnd1) — the one datacenter Polymarket doesn't block. It's janky but it works.&lt;/p&gt;

&lt;p&gt;Non-custodial was non-negotiable.&lt;/p&gt;

&lt;p&gt;I didn't want to hold user funds. So every user gets their own Polygon wallet, private key encrypted with AES-256-GCM, stored only on their side. The agent signs transactions locally. We never touch the money.&lt;/p&gt;

&lt;p&gt;The leaderboard changed everything.&lt;/p&gt;

&lt;p&gt;When I added a public leaderboard showing which agent strategies were performing — people started competing. Optimizing not just for profit but for rank. It turned a trading tool into a game.&lt;/p&gt;

&lt;p&gt;The Bigger Realization&lt;/p&gt;

&lt;p&gt;Halfway through building this I had a thought that I can't un-have:&lt;/p&gt;

&lt;p&gt;People already deposit money into AI agents. They just don't call it that.&lt;/p&gt;

&lt;p&gt;Your Meta Ads budget? That's a deposit into an AI agent. It decides targeting, bids, timing — autonomously. You fund it and hope it earns more than it spends.&lt;/p&gt;

&lt;p&gt;Polymarket is the same model, but transparent. Every trade is on-chain. You can audit exactly what the agent did and why.&lt;/p&gt;

&lt;p&gt;We're at the beginning of a shift where "giving money to an AI and letting it run" becomes normal. Not just for ads. For trading, for content, for operations.&lt;/p&gt;

&lt;p&gt;I don't know how fast this happens. But it's already happening.&lt;/p&gt;

&lt;p&gt;Where It Is Now&lt;/p&gt;

&lt;p&gt;The tool is live at polyclawster.com. Early, rough around the edges, but real — real wallets, real trades, real on-chain history.&lt;/p&gt;

&lt;p&gt;Building this in public. Sharing the wins and the stupid mistakes along the way.&lt;/p&gt;

&lt;p&gt;If you're doing anything similar — AI agents with real-world stakes, prediction markets, on-chain automation — I'd genuinely love to talk: &lt;a href="https://twitter.com/ilyagordey" rel="noopener noreferrer"&gt;https://twitter.com/ilyagordey&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cryptocurrency</category>
      <category>polymarket</category>
      <category>openclaw</category>
    </item>
  </channel>
</rss>
