klement Gunndu

Build Your First AI Agent in Python — No ML Degree Required

Most "build an AI agent" tutorials assume you already understand embeddings, vector stores, and prompt engineering. You don't need any of that to build your first one.

An AI agent is a program that uses a language model to decide what to do next. It reads your request, picks the right tool, runs it, reads the result, and decides whether to use another tool or answer you. That loop — reason, act, observe — is the entire concept.

This tutorial builds a working AI agent in Python. It searches the web, does math, and answers questions by combining both. You will have it running in 30 minutes.

What You Need Before Starting

Python 3.10 or higher. Check with python --version. If you're below 3.10, upgrade first — the libraries we use require it.

An Anthropic API key. Sign up at console.anthropic.com. New accounts typically include a small amount of free credit, enough to follow this tutorial. You need the key to call Claude, which powers the agent's reasoning.

A terminal and a text editor. VS Code, PyCharm, or even a plain terminal with nano. Nothing fancy required.

That's it. No machine learning background. No math. No prior AI experience.

Step 1: Set Up Your Project

Create a project folder and a virtual environment. Virtual environments keep your project's packages separate from your system Python — a habit worth building from day one.

mkdir my-first-agent && cd my-first-agent
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

Install the packages:

pip install langchain-anthropic langgraph langchain-community duckduckgo-search python-dotenv

Here is what each package does:

  • langchain-anthropic — connects to Claude (the language model)
  • langgraph — provides the agent framework (the reasoning loop)
  • langchain-community — community-built tools (like web search)
  • duckduckgo-search — the search engine backend (no API key needed)
  • python-dotenv — loads your API key from a .env file so you don't hardcode secrets

Create a .env file to store your API key:

echo "ANTHROPIC_API_KEY=your-key-here" > .env

Replace your-key-here with your actual key from the Anthropic console. Never commit this file to git. Add it to .gitignore immediately:

echo ".env" >> .gitignore

Step 2: Build a Custom Tool

An AI agent without tools is just a chatbot. Tools give the agent abilities — searching the web, doing calculations, reading files, calling APIs. The agent decides which tool to use based on your question.

Create a file called agent.py and add your first custom tool:

from langchain_core.tools import tool

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression and return the result.

    Args:
        expression: A math expression like '2 + 2' or '100 * 0.15'
    """
    allowed_chars = set("0123456789+-*/(). ")
    if not all(c in allowed_chars for c in expression):
        return "Error: Only basic math operations are allowed."

    try:
        result = eval(expression)  # Whitelist above blocks names and function calls
        return str(result)
    except Exception as e:
        return f"Error: {e}"

Three things to notice:

  1. The @tool decorator turns any Python function into something the agent can call. It reads the function name, type hints, and docstring to understand what the tool does.

  2. The docstring matters. The agent reads it to decide when to use this tool. Write it like you are explaining the tool to a coworker: what it does, what inputs it expects.

  3. Input validation. The allowed_chars check prevents the agent from running arbitrary code. Always validate tool inputs, even in tutorials.
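To see what the whitelist actually buys you, here is the tool's validation logic as a standalone function you can poke at in a REPL (safe_calculate is just an illustrative name; the body mirrors the calculate tool above, minus the decorator):

```python
def safe_calculate(expression: str) -> str:
    """Same validation logic as the calculate tool, for quick testing."""
    allowed_chars = set("0123456789+-*/(). ")
    if not all(c in allowed_chars for c in expression):
        return "Error: Only basic math operations are allowed."
    try:
        # No letters pass the whitelist, so eval can't reference
        # names like __import__ or call any function.
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

print(safe_calculate("(1 + 2) * 4"))        # 12
print(safe_calculate("__import__('os')"))   # rejected by the whitelist
print(safe_calculate("2 / 0"))              # caught by the except clause
```

The second call is the important one: without the character check, a model-generated input like that would execute arbitrary code.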

Step 3: Add Web Search

DuckDuckGo search requires no API key, which makes it the simplest web search tool for getting started. Add it below your calculate tool:

from langchain_community.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()

That's it. One line. The DuckDuckGoSearchRun tool is pre-built — it already has a name ("duckduckgo_search"), a description, and handles the HTTP requests internally. The agent will use it whenever it needs current information from the web.

Step 4: Create the Agent

Now connect the language model, the tools, and the reasoning loop:

from dotenv import load_dotenv
from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent

load_dotenv()  # Reads ANTHROPIC_API_KEY from .env

# The language model — Claude handles the reasoning
model = ChatAnthropic(
    model="claude-sonnet-4-6",
    temperature=0,
)

# The tools — what the agent can do
tools = [search, calculate]

# The agent — combines model + tools into a reasoning loop
agent = create_react_agent(model, tools)

create_react_agent builds a ReAct agent. ReAct stands for Reason + Act — a pattern introduced in the 2022 research paper "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al.), where the model:

  1. Thinks about what to do next
  2. Acts by calling a tool
  3. Observes the tool's output
  4. Repeats until it has enough information to answer

You don't need to implement this loop yourself. create_react_agent handles it.

Step 5: Run Your Agent

Add the main execution block at the bottom of agent.py:

def main():
    print("AI Agent ready. Type 'quit' to exit.\n")

    while True:
        user_input = input("You: ")
        if user_input.lower() in ("quit", "exit", "q"):
            print("Goodbye!")
            break

        result = agent.invoke(
            {"messages": [{"role": "user", "content": user_input}]}
        )

        # The last message in the result contains the agent's answer
        final_message = result["messages"][-1]
        print(f"\nAgent: {final_message.content}\n")


if __name__ == "__main__":
    main()

Run it:

python agent.py

Try these prompts to see the agent use different tools:

  • "What is 15% of 847?" — The agent calls calculate with 847 * 0.15
  • "What is the current weather in Tokyo?" — The agent calls duckduckgo_search
  • "What is 5 kg in pounds, and who invented the metric system?" — The agent calls both tools: calculate for the conversion and search for the history

Watch the terminal. You will see the agent reason through each step, pick a tool, use it, and combine the results into a single answer.

The Complete Code

Here is the full agent.py in one block. Copy it, add your .env file, and run it:

from dotenv import load_dotenv
from langchain_core.tools import tool
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent

load_dotenv()

# --- Tools ---

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression and return the result.

    Args:
        expression: A math expression like '2 + 2' or '100 * 0.15'
    """
    allowed_chars = set("0123456789+-*/(). ")
    if not all(c in allowed_chars for c in expression):
        return "Error: Only basic math operations are allowed."
    try:
        result = eval(expression)
        return str(result)
    except Exception as e:
        return f"Error: {e}"

search = DuckDuckGoSearchRun()

# --- Agent ---

model = ChatAnthropic(
    model="claude-sonnet-4-6",
    temperature=0,
)

agent = create_react_agent(model, [search, calculate])

# --- Run ---

def main():
    print("AI Agent ready. Type 'quit' to exit.\n")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ("quit", "exit", "q"):
            print("Goodbye!")
            break
        result = agent.invoke(
            {"messages": [{"role": "user", "content": user_input}]}
        )
        final_message = result["messages"][-1]
        print(f"\nAgent: {final_message.content}\n")

if __name__ == "__main__":
    main()

Under 50 lines of code. That is your first AI agent.

What Just Happened — The ReAct Loop Explained

When you type "How much is 15% of 847, and what country uses that tax rate?", the agent runs this loop internally:

Thought 1: "I need to calculate 15% of 847. I'll use the calculate tool."
Action: calculate("847 * 0.15")
Observation: "127.05"

Thought 2: "Now I need to find which country uses a 15% tax rate. I'll search the web."
Action: duckduckgo_search("countries with 15% tax rate")
Observation: Search results about tax rates in various countries.

Thought 3: "I have both pieces of information. I can answer the user."
Final answer: A combined response with the calculation result and the search findings.

This is the core of every AI agent, from simple assistants to complex multi-agent systems. The difference between a beginner agent and a production agent is not the loop — it is the tools, the error handling, and the guardrails around it.
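For intuition, that loop can be hand-rolled in a few lines. The sketch below uses a scripted fake_model in place of Claude; fake_model, react_loop, and the plain-dict messages are illustrative stand-ins, not LangGraph internals:

```python
def calculate(expression: str) -> str:
    return str(eval(expression))  # toy tool; assume pre-validated input

TOOLS = {"calculate": calculate}

def fake_model(history):
    """Scripted stand-in for the LLM: request a tool once, then answer."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "calculate", "input": "847 * 15 / 100"}
    return {"answer": f"15% of 847 is {history[-1]['content']}."}

def react_loop(question: str) -> str:
    history = [{"role": "user", "content": question}]
    while True:
        decision = fake_model(history)             # 1. reason
        if "answer" in decision:                   # 4. enough info? stop
            return decision["answer"]
        tool = TOOLS[decision["tool"]]             # 2. act
        observation = tool(decision["input"])      # 3. observe
        history.append({"role": "tool", "content": observation})

print(react_loop("What is 15% of 847?"))  # 15% of 847 is 127.05.
```

Swap fake_model for a real LLM that emits tool calls and you have, in essence, what create_react_agent gives you for free.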

Where to Go From Here

You have a working agent. Here are three concrete next steps, ordered by difficulty:

1. Add More Tools (10 minutes)

Every Python function can become a tool. Here is a file reader:

@tool
def read_file(file_path: str) -> str:
    """Read the contents of a text file.

    Args:
        file_path: Path to the file to read. Must be a .txt or .md file.
    """
    if not file_path.endswith((".txt", ".md")):
        return "Error: Only .txt and .md files are supported."
    try:
        with open(file_path, "r") as f:
            content = f.read()
        return content[:2000]  # Limit output length
    except FileNotFoundError:
        return f"Error: File '{file_path}' not found."

Add it to your tools list: tools = [search, calculate, read_file]. The agent will automatically know when to use it.

2. Add Memory (30 minutes)

Right now, the agent forgets everything between questions. Each invoke call starts fresh. To add conversation memory, pass the full message history:

messages = []

while True:
    user_input = input("You: ")
    if user_input.lower() in ("quit", "exit", "q"):
        break

    messages.append({"role": "user", "content": user_input})
    result = agent.invoke({"messages": messages})

    # Add all new messages (including tool calls) to history
    messages = result["messages"]

    final_message = messages[-1]
    print(f"\nAgent: {final_message.content}\n")

Now the agent remembers previous questions. Ask "What is 15% of 847?" followed by "Double that" — it will know what "that" refers to.
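You can watch this pattern work without spending API calls by running it against a scripted stub. Here stub_agent is a test double I made up that only mimics the input/output shape of agent.invoke:

```python
def stub_agent(payload):
    """Mimics agent.invoke: reads the full history, appends a reply."""
    messages = payload["messages"]
    question = messages[-1]["content"].lower()
    if "15% of 847" in question:
        reply = "127.05"
    elif "double that" in question:
        # "that" only means something if earlier turns are in the history
        earlier = [m["content"] for m in messages if m["role"] == "assistant"]
        reply = str(float(earlier[-1]) * 2) if earlier else "Double what?"
    else:
        reply = "I don't know."
    return {"messages": messages + [{"role": "assistant", "content": reply}]}

messages = []
for question in ("What is 15% of 847?", "Double that"):
    messages.append({"role": "user", "content": question})
    messages = stub_agent({"messages": messages})["messages"]
    print(messages[-1]["content"])  # 127.05, then 254.1
```

Send "Double that" as a fresh, history-free call and the stub has nothing to resolve "that" against — the same failure mode the real agent has without memory.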

3. Add Error Handling (1 hour)

Production agents need retry logic and timeouts. The search tool can fail if the network is down. The LLM can return an unexpected format. Wrap your agent call:

import time

def ask_agent(agent, messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            result = agent.invoke({"messages": messages})
            return result
        except Exception as e:
            if attempt < max_retries - 1:
                wait = 2 ** attempt  # 1s, then 2s
                print(f"Retrying in {wait}s... ({e})")
                time.sleep(wait)
            else:
                print(f"Failed after {max_retries} attempts: {e}")
                return None

This exponential backoff pattern is the same one used in production systems at scale. Start simple, add resilience as you go.
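You can check the retry behavior without an API key by pointing the wrapper at a test double that fails twice before succeeding. FlakyAgent below is made up for the demo; only its invoke signature matches the real agent:

```python
import time

def ask_agent(agent, messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return agent.invoke({"messages": messages})
        except Exception as e:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # 1s, then 2s
            else:
                print(f"Failed after {max_retries} attempts: {e}")
                return None

class FlakyAgent:
    """Test double: raises on the first two calls, then answers."""
    def __init__(self):
        self.calls = 0

    def invoke(self, payload):
        self.calls += 1
        if self.calls < 3:
            raise ConnectionError("network down")
        reply = {"role": "assistant", "content": "ok"}
        return {"messages": payload["messages"] + [reply]}

flaky = FlakyAgent()
result = ask_agent(flaky, [{"role": "user", "content": "hi"}])
print(flaky.calls)  # 3: two failures, one success
```

Because the stub and the real agent share the invoke contract, the same ask_agent wrapper works unchanged against both.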

Common Mistakes Beginners Make

1. Forgetting the docstring. The @tool decorator reads the function's docstring to tell the agent what the tool does. No docstring = the agent won't know when to use it.

2. Hardcoding the API key. Never put ANTHROPIC_API_KEY="sk-..." directly in your code. Use .env files and python-dotenv. One accidental git push and your key is public.

3. No input validation on tools. The language model generates the tool inputs. It can produce unexpected values. Always validate and handle errors inside your tool functions.

4. Expecting perfection. AI agents are probabilistic. The same question might get slightly different responses. This is normal. A tool like calculate is deterministic (math is math), but the model's reasoning path (and live search results) can vary.

Key Takeaways

  • An AI agent = language model + tools + a reasoning loop
  • create_react_agent from LangGraph builds the loop for you
  • Custom tools are Python functions with the @tool decorator
  • DuckDuckGo search works without an API key — good for prototyping
  • Start with 2-3 tools, add more as you need them
  • Always validate tool inputs and handle errors

The gap between "I have never built an AI agent" and "I have a working one" is under 50 lines of Python. The gap between that and a production system is larger — but you cannot cross the second gap without crossing the first one.


Packages used in this tutorial (as of March 2026):

  • langchain-anthropic — ChatAnthropic integration
  • langgraph — Agent framework with create_react_agent
  • langchain-community — DuckDuckGoSearchRun tool
  • duckduckgo-search — Search backend
  • python-dotenv (v1.2.2) — Environment variable loading

Follow @klement_gunndu for more AI engineering content. We're building in public.
