Build Your First AI Agent with Python and LangChain in 2026
AI agents are the hottest topic in software development right now — and for good reason. Unlike simple chatbots, AI agents can reason, plan, and take actions. They can search the web, run code, call APIs, and chain multiple steps together.
The good news? You can build a functional AI agent in under 100 lines of Python. Let me show you how.
What Is an AI Agent?
An AI agent is a program that:
- Receives a goal (not just a question)
- Reasons about how to achieve it (using an LLM)
- Takes actions (calls tools, APIs, or functions)
- Observes the results and adjusts
- Repeats until the goal is reached
This is fundamentally different from a chatbot that just answers questions.
Why LangChain?
LangChain is the most popular Python framework for building LLM-powered applications. In 2026, the ecosystem has matured significantly:
- LangGraph for stateful, multi-step agents
- Tool calling is now native in most LLMs (OpenAI, Anthropic, Gemini)
- Streaming support built-in
- Excellent docs and community
Prerequisites
pip install langchain langchain-openai langchain-community duckduckgo-search
You'll need an OpenAI API key (or use any compatible LLM).
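The ChatOpenAI client reads the key from the OPENAI_API_KEY environment variable, so one option is to set it at the top of your script (the key below is a placeholder — use your own):

```python
import os

# Placeholder key: export your real key instead of hard-coding it.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")
```

In a real project, keep the key out of source control (for example in a `.env` file) rather than in the script itself.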
Step 1: Your First Agent — A Research Assistant
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_core.prompts import ChatPromptTemplate
# 1. Define the LLM
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# 2. Define tools the agent can use
tools = [DuckDuckGoSearchRun()]
# 3. Create the prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful research assistant. Use the search tool to find accurate, current information."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
# 4. Create the agent
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# 5. Run it
result = executor.invoke({"input": "What are the top Python frameworks for AI in 2026?"})
print(result["output"])
The verbose=True flag shows you exactly what the agent is thinking — great for learning.
Step 2: Adding Custom Tools
The real power comes from custom tools. Let's add a tool that reads files:
from langchain_core.tools import tool
import os
@tool
def read_file(filename: str) -> str:
    """Read the contents of a file. Use this when you need to analyze a file."""
    try:
        with open(filename, 'r') as f:
            return f.read()
    except FileNotFoundError:
        return f"Error: File '{filename}' not found"
@tool
def list_files(directory: str = ".") -> str:
    """List all files in a directory."""
    try:
        files = os.listdir(directory)
        return "\n".join(files)
    except Exception as e:
        return f"Error: {str(e)}"
# Add to tools list
tools = [DuckDuckGoSearchRun(), read_file, list_files]
Now your agent can explore your filesystem. Combine this with a code-writing tool and you have a mini code agent.
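For instance, a hypothetical write_file helper could look like the sketch below (shown as a plain function; wrap it with the same @tool decorator to expose it to the agent):

```python
def write_file(filename: str, content: str) -> str:
    """Write content to a file and report what happened.
    In the agent, decorate this with @tool so the LLM can call it."""
    try:
        with open(filename, "w") as f:
            f.write(content)
        return f"Wrote {len(content)} characters to {filename}"
    except OSError as e:
        return f"Error: {e}"
```

Returning an error string instead of raising matters here: the agent reads tool output as text, so a readable error lets it recover and try something else.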
Step 3: A Practical Example — Freelance Invoice Analyzer
Here's a real-world agent that analyzes freelance invoices:
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
import json
from datetime import datetime
@tool
def calculate_tax(amount: float, rate: float = 0.20) -> str:
    """Calculate tax on an amount. Default rate is 20% (standard VAT)."""
    tax = amount * rate
    total = amount + tax
    return f"Amount: €{amount:.2f}, Tax ({rate*100:.0f}%): €{tax:.2f}, Total: €{total:.2f}"
@tool
def days_overdue(invoice_date: str) -> str:
    """Calculate how many days overdue an invoice is. Date format: YYYY-MM-DD"""
    try:
        invoice_dt = datetime.strptime(invoice_date, "%Y-%m-%d")
        days = (datetime.now() - invoice_dt).days
        if days > 30:
            return f"⚠️ OVERDUE by {days - 30} days (issued {days} days ago)"
        else:
            return f"✅ Not overdue — issued {days} days ago, due in {30 - days} days"
    except ValueError:
        return "Invalid date format. Use YYYY-MM-DD"
@tool
def format_payment_reminder(client_name: str, amount: float, days: int) -> str:
    """Generate a professional payment reminder email."""
    return f"""Subject: Payment Reminder - Outstanding Invoice

Dear {client_name},

I hope this message finds you well. I'm following up on the outstanding invoice
of €{amount:.2f}, which was due {days} days ago.

Please let me know if you have any questions or if there's an issue with the invoice.

Thank you for your attention to this matter.

Best regards"""
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [calculate_tax, days_overdue, format_payment_reminder]
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a freelance business assistant. Help analyze invoices, calculate taxes, and draft client communications."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=False)
# Test it
result = executor.invoke({
    "input": "My client TechCorp owes me €2,500 from an invoice issued on 2026-02-01. Calculate the tax, check if it's overdue, and draft a reminder."
})
print(result["output"])
Step 4: Multi-Step Agents with LangGraph
For more complex workflows, use LangGraph (the stateful evolution of LangChain):
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from typing import TypedDict, Annotated
import operator

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]

def should_continue(state):
    """Decide whether to continue using tools or finish."""
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return END

# The agent node calls the LLM (with tools bound) on the accumulated messages
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
llm_with_tools = llm.bind_tools(tools)  # reuse the tools list from earlier

def agent_node(state):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

# Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("agent", agent_node)
workflow.add_node("tools", ToolNode(tools))
workflow.set_entry_point("agent")
workflow.add_conditional_edges("agent", should_continue)
workflow.add_edge("tools", "agent")
app = workflow.compile()
LangGraph gives you:
- Persistent state between steps
- Human-in-the-loop checkpoints
- Streaming of intermediate steps
- Parallel tool execution
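The persistent state comes from the Annotated[list, operator.add] annotation: LangGraph uses the annotated reducer to merge each node's return value into the existing state instead of overwriting it. Conceptually (plain Python, no LangGraph required):

```python
import operator

# Each node returns a partial state update; the reducer (operator.add
# for lists) appends it to the running state rather than replacing it.
state = {"messages": ["user: What are the top Python frameworks?"]}
update = {"messages": ["assistant: Searching..."]}
state = {"messages": operator.add(state["messages"], update["messages"])}
```

This is why the agent and tool nodes above only return their new messages: the graph takes care of accumulating the full conversation.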
Common Patterns in 2026
ReAct (Reason + Act)
The classic pattern: Think → Act → Observe → Repeat
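Stripped of framework code, the loop looks roughly like this toy sketch, where llm and tools are stand-ins for a real model call and tool registry:

```python
def react_loop(llm, tools, goal, max_steps=5):
    """Toy ReAct skeleton: Think -> Act -> Observe until done."""
    observations = []
    for _ in range(max_steps):
        decision = llm(goal, observations)                   # Think
        if decision["type"] == "final":
            return decision["answer"]
        result = tools[decision["tool"]](decision["input"])  # Act
        observations.append(result)                          # Observe
    return "Stopped: hit max_steps"
```

AgentExecutor runs essentially this loop for you; verbose=True prints each Think/Act/Observe round.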
Plan-and-Execute
- First, generate a complete plan
- Then execute each step
- Great for complex, multi-step tasks
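As a rough sketch, with planner and execute_step standing in for separate LLM calls:

```python
def plan_and_execute(planner, execute_step, goal):
    """Plan-and-Execute sketch: plan everything first, then run each step."""
    plan = planner(goal)            # e.g. ["research topic", "draft summary"]
    results = []
    for step in plan:
        # Each step sees the results of the steps before it.
        results.append(execute_step(step, results))
    return results
```

The trade-off versus ReAct: the plan is fixed up front, so it is cheaper and more predictable, but it can't adapt mid-task unless you re-plan.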
Reflection
After completing a task, the agent critiques its own output and improves it. Surprisingly effective.
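A hypothetical sketch of that loop, with generate, critique, and revise standing in for separate LLM calls:

```python
def reflect(generate, critique, revise, task, max_rounds=2):
    """Reflection sketch: draft, self-critique, revise, repeat."""
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(task, draft)
        if not feedback:        # the critic found nothing to fix
            return draft
        draft = revise(draft, feedback)
    return draft
```

Capping the rounds matters: without max_rounds, a never-satisfied critic turns reflection into an expensive infinite loop.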
Performance Tips
- Use smaller models for tool routing (gpt-4o-mini) and larger ones for complex reasoning (gpt-4o)
- Cache tool results — avoid calling the same API twice
- Set max_iterations to avoid infinite loops
- Stream responses for better UX in production
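Caching is the easiest of these wins: for deterministic tools, functools.lru_cache from the standard library is often enough (the counter below is just to show the cache working):

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=256)
def cached_search(query: str) -> str:
    """Identical queries are served from the cache, not the API."""
    calls["count"] += 1        # stands in for an expensive API call
    return f"results for {query!r}"
```

For loop protection, AgentExecutor accepts a max_iterations argument that caps how many tool-calling rounds the agent may take before giving up.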
What to Build Next
AI agents shine when tasks are:
- Sequential — one step depends on the previous
- Information-gathering — search + synthesize
- Decision-making — if/then logic based on data
Good starter projects:
- Job application assistant — research company, tailor CV, draft cover letter
- Content research agent — find sources, fact-check, summarize
- Personal finance agent — parse bank statements, categorize expenses, flag anomalies
- Customer support agent — answer from FAQ, escalate complex cases
AI agents are moving from "interesting demo" to "production tool" in 2026. The freelancers and developers who learn this now will have a massive head start.
Ready to ship your own AI-powered products? Check out my Freelancer OS Notion Template to manage your freelance business efficiently as you grow.
Happy building! 🤖