Staring at the vast landscape of AI, wondering where to begin? AI agents offer a concrete starting point. They encapsulate core AI concepts—reasoning, tool use, and memory—into an actionable, observable system. For career switchers and beginners, building an agent provides practical experience with large language models (LLMs) and a tangible project for your portfolio. This guide walks you through creating your first Python-based AI agent, detailing environment setup, implementation, testing, and portfolio presentation.
Why AI Agents for Your First Project?
AI agents are ideal for beginners because they provide a structured approach to applied AI. Unlike training complex models from scratch, agent development focuses on orchestrating existing AI capabilities. You learn to integrate LLMs, define decision-making processes, and empower models with external tools. This process builds intuition for prompt engineering, system design, and error handling—skills directly transferable to real-world AI roles. Agents demonstrate a clear input-output loop, making their behavior easier to understand, debug, and improve.
Setting Up Your Python Environment
A clean Python environment prevents dependency conflicts and ensures project portability. Use venv for isolation and pip for package management.
First, create a project directory and a virtual environment:

```shell
mkdir ai_agent_project
cd ai_agent_project
python -m venv venv
```
Activate the virtual environment. On macOS/Linux:

```shell
source venv/bin/activate
```

On Windows:

```shell
.\venv\Scripts\activate
```
Install the necessary libraries. `langchain` and `langchain-community` provide the core agent components, `langchain-openai` enables interaction with OpenAI models, and `python-dotenv` manages API keys securely.

```shell
pip install langchain langchain-community langchain-openai python-dotenv
```
Store your OpenAI API key securely. Create a file named `.env` in your project root:

```
OPENAI_API_KEY="your_openai_api_key_here"
```

Replace `"your_openai_api_key_here"` with your actual key. Remember to add `.env` to your `.gitignore` file to prevent accidental exposure.
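Matching that advice, a typical `.gitignore` for this setup would contain at least:

```
venv/
.env
__pycache__/
```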
Building a Basic Functional AI Agent
This agent will answer questions and perform simple arithmetic using a tool. We leverage LangChain's AgentExecutor and create_openai_functions_agent for a streamlined implementation.
Agents combine an LLM with tools to perform tasks. They reason about which tool to use, execute it, and integrate the results.
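Conceptually, that reason-act-observe loop can be sketched in a few lines of plain Python. This is a simplified illustration of the pattern, not LangChain's actual implementation; `llm_decide` is a hypothetical stand-in for the model's decision step:

```python
def agent_loop(llm_decide, tools, user_input, max_steps=5):
    """Simplified agent loop: on each step the model either picks a tool
    or produces a final answer. Real agents get the decision from an LLM."""
    observations = []
    for _ in range(max_steps):
        decision = llm_decide(user_input, observations)
        if decision["action"] == "final_answer":
            return decision["content"]
        tool = tools[decision["action"]]           # look up the chosen tool
        result = tool(decision["content"])         # execute it
        observations.append((decision["action"], result))  # feed result back in
    return "Stopped: too many steps."

# A scripted stand-in for the LLM, just to exercise the loop:
def fake_decide(user_input, observations):
    if not observations:
        return {"action": "calculate", "content": "123 * 45"}
    return {"action": "final_answer", "content": observations[-1][1]}

tools = {"calculate": lambda expr: str(eval(expr, {"__builtins__": {}}, {}))}
print(agent_loop(fake_decide, tools, "What is 123 * 45?"))  # → 5535
```

LangChain's `AgentExecutor`, used below, manages exactly this kind of loop for you, with the LLM making the decisions.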
Create a file named `simple_agent.py`:
```python
import os

from dotenv import load_dotenv
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

# 1. Load environment variables
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY not found. Please set it in your .env file.")

# 2. Define Tools
# We'll use a simple calculator tool for this example.
# For more complex agents, integrate tools like Wikipedia, Arxiv, or custom APIs.
def calculator_tool(expression: str) -> str:
    """Evaluates a mathematical expression."""
    try:
        # Note: eval() runs arbitrary Python. Disabling builtins limits the
        # damage, but for production use a proper expression parser instead.
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception as e:
        return f"Error evaluating expression: {e}"

# LangChain's `tool` decorator turns a plain function into an agent tool.
@tool
def calculate(expression: str) -> str:
    """Evaluates a mathematical expression. Input must be a string representing the expression."""
    return calculator_tool(expression)

# You can also integrate pre-built tools:
# from langchain_community.tools import ArxivQueryRun, WikipediaQueryRun
# from langchain_community.utilities import ArxivAPIWrapper, WikipediaAPIWrapper
# wikipedia_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
# arxiv_tool = ArxivQueryRun(api_wrapper=ArxivAPIWrapper())

tools = [calculate]  # Add other tools here if you define them

# 3. Initialize the Language Model
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, api_key=openai_api_key)

# 4. Define the Prompt Template
# The prompt guides the agent's reasoning and tool selection.
# MessagesPlaceholder is crucial for history and tool interaction.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful AI assistant. Use the available tools to answer questions."),
        MessagesPlaceholder("chat_history", optional=True),  # For future memory
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),  # For the agent's thought process and tool calls
    ]
)

# 5. Create the Agent
# create_openai_functions_agent handles tool calling for OpenAI function-calling models.
agent = create_openai_functions_agent(llm, tools, prompt)

# 6. Create the Agent Executor
# The AgentExecutor is the runtime for the agent; it manages the execution loop.
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# 7. Run the Agent
print("Welcome to your first AI Agent! Type 'exit' to quit.")
while True:
    user_input = input("\nYou: ")
    if user_input.lower() == "exit":
        break
    try:
        result = agent_executor.invoke({"input": user_input})
        print(f"Agent: {result['output']}")
    except Exception as e:
        print(f"An error occurred: {e}")
```
Run this script from your terminal:

```shell
python simple_agent.py
```
The `verbose=True` flag in `AgentExecutor` shows the agent's internal thought process, including tool calls and observations. This is invaluable for debugging and understanding agent behavior.
Testing and Iterating on Your Agent
Initial agent performance is rarely perfect. Iteration is key. Observe the verbose output to identify where the agent struggles.
Test Cases:

- Direct Question: "What is the capital of France?" (should answer directly without tools)
- Tool Use: "What is 123 * 45?" (should use the `calculate` tool)
- Complex Tool Use: "What is the square root of 64?" (might struggle without a dedicated square-root tool, though it could reason its way to `8*8` or similar)
- Ambiguous Input: "Tell me about math." (too broad; the agent might ask for clarification or provide general information)
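Tool logic is worth testing in isolation before blaming the LLM: failures in the verbose trace are much easier to diagnose once you know the tool itself is sound. A minimal sketch, reproducing the calculator logic from `simple_agent.py` so the snippet is self-contained:

```python
def calculator_tool(expression: str) -> str:
    """Evaluates a mathematical expression (builtins disabled for safety)."""
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception as e:
        return f"Error evaluating expression: {e}"

# Exercise the happy path and the edge cases the agent might hit:
assert calculator_tool("123 * 45") == "5535"
assert calculator_tool("2 ** 10") == "1024"
assert calculator_tool("1 / 0").startswith("Error")      # division by zero
assert calculator_tool("import os").startswith("Error")  # a statement, not an expression
print("All tool tests passed.")
```

If a test case fails here, fix the tool; if it passes here but the agent still gets it wrong, the problem lies in the prompt or the model's tool selection.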
Iteration Strategies:

- Prompt Engineering: Refine the system message in `ChatPromptTemplate`. Make instructions clearer. Specify expected output formats or constraints.
  - Example: "You are a helpful AI assistant. Always use the `calculate` tool for any mathematical operations. If a calculation is requested, extract the full expression for the tool."
- Tool Refinement: Ensure tools are robust and handle edge cases. Add more specific tools for common tasks the agent needs to perform.
  - Example: If the agent frequently fails on square roots, create a `square_root` tool.
- Model Selection: Experiment with different LLMs (e.g., `gpt-4`). More capable models often exhibit better reasoning and tool use, but at a higher cost.
- Error Handling: Implement robust error handling within your tools and agent execution loop. This prevents crashes and provides better feedback.

Consistent observation of agent traces (via `verbose=True`) reveals patterns in the agent's decision-making, guiding your improvements.
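As an illustration of the tool-refinement strategy, here is a sketch of such a `square_root` tool as a plain function. In `simple_agent.py` you would wrap it with the same `@tool` decorator and append it to the `tools` list; the function name and docstring are what the LLM uses to decide when to call it:

```python
import math

def square_root(number: str) -> str:
    """Returns the square root of a non-negative number.
    Input must be a string containing a single number, e.g. "64"."""
    try:
        value = float(number)
    except ValueError:
        return f"Error: could not parse '{number}' as a number."
    if value < 0:
        return "Error: cannot take the square root of a negative number."
    return str(math.sqrt(value))

print(square_root("64"))   # → "8.0"
print(square_root("-4"))   # → an error message, not a crash
```

Returning error strings rather than raising keeps the agent loop alive: the model sees the error as an observation and can recover or rephrase.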
Showcasing This Project for AI Roles
This basic agent project, while simple, demonstrates crucial skills for entry-level AI roles. Focus on what you learned and how you applied it.
Portfolio Elements:

- Clear GitHub Repository:
  - `README.md`: Explain the project's purpose, the technologies used (Python, LangChain, OpenAI), how to set it up, and how to run it.
  - Code: Clean, well-commented code.
  - `.env.example`: A template for the `.env` file, showing the required environment variables.
  - `requirements.txt`: Generated with `pip freeze > requirements.txt`.
- Project Description:
  - Problem: Briefly state the problem your agent solves (e.g., "answering questions and performing basic calculations using an LLM and external tools").
  - Solution: Describe how your agent works, highlighting its components (LLM, tools, prompt).
  - Key Learnings: Emphasize your understanding of:
    - LLM Integration: How you connected to an LLM.
    - Tooling: How you empowered the LLM with external capabilities.
    - Prompt Engineering: How you guided the LLM's behavior.
    - Agent Architecture: The basic loop of thought, action, and observation.
    - Debugging/Iteration: How you improved the agent's performance.
  - Future Work: Suggest potential enhancements (e.g., adding memory, more tools, connecting to external APIs).
- Demo (optional but powerful): A short video or GIF demonstrating the agent in action. Show it answering questions and using its tools.
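The "adding memory" enhancement builds directly on the optional `chat_history` placeholder already in the prompt. A minimal sketch of the idea: accumulate each exchange and pass it back on the next turn. The agent call is stubbed out here so the loop can be run without an API key; with the real agent you would pass `agent_executor.invoke` instead:

```python
def run_turn(invoke, history, user_input):
    """Invoke the agent with accumulated history, then record the exchange.
    `invoke` stands in for agent_executor.invoke from simple_agent.py."""
    result = invoke({"input": user_input, "chat_history": history})
    # (role, content) tuples are one message format ChatPromptTemplate accepts.
    history.append(("human", user_input))
    history.append(("ai", result["output"]))
    return result["output"]

# Stub standing in for the real agent during a dry run:
def fake_invoke(payload):
    return {"output": f"(saw {len(payload['chat_history'])} prior messages)"}

history = []
print(run_turn(fake_invoke, history, "What is 12 * 12?"))  # saw 0 prior messages
print(run_turn(fake_invoke, history, "And double that?"))  # saw 2 prior messages
```

Because `MessagesPlaceholder("chat_history", optional=True)` is already in the prompt, the real agent will incorporate this history with no further changes to `simple_agent.py`.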
This project showcases your ability to move from theoretical understanding to practical implementation, a key differentiator for career switchers.
Common Beginner Pitfalls and How to Avoid Them
- Hardcoding API Keys: Storing API keys directly in code is a security risk. Always use environment variables (e.g.,
python-dotenv).- Solution: Use
.envfiles andos.getenv(). Add.envto.gitignore.
- Solution: Use
- Ignoring Verbose Output: The
verbose=Trueflag is not just for debugging; it's for understanding. Skipping it means missing crucial insights into agent reasoning.- Solution: Always run agents with
verbose=Trueduring development. Analyze the "thought" process.
- Solution: Always run agents with
- Over-complicating Tools: Start with simple, well-defined tools. Complex tools with many parameters make it harder for the LLM to use them correctly.
- Solution: Each tool should ideally perform one specific, atomic action.
- Vague Prompts: An LLM is only as good as its instructions. Ambiguous or overly general prompts lead to unpredictable agent behavior.
- Solution: Be specific in your system prompt. Define the agent's persona, goals, and constraints. Provide examples if necessary.
- Not Iterating: Expecting perfect performance on the first try is unrealistic. Agent development is an iterative process of testing, observing, and refining.
- Solution: Embrace the loop. Create test cases, analyze failures, adjust prompts or tools, and re-test.
Building your first AI agent is a significant step. It moves you from passive learning to active creation, providing a foundation for more advanced AI projects and a compelling story for your career transition. Focus on understanding the core concepts and the iterative development process.
Follow @klement_gunndu for daily deep dives on AI agents, Claude Code, Python patterns, and developer productivity. New article every day.