DEV Community

klement Gunndu

Build Your First AI Agent: Ace Your Dev Job Search

The entry-level developer job market is fiercely competitive. Resumes often blend into a sea of similar projects – todo apps, e-commerce clones, personal portfolios. To stand out, you need to showcase not just foundational coding skills, but also an understanding of emerging technologies and a proactive approach to problem-solving. Building an AI agent for your job search demonstrates exactly this.

An AI agent is more than just a script; it's a program that reasons, plans, and uses tools to achieve a goal. By creating an agent that helps you navigate the job market, you build a powerful portfolio piece. This project not only automates crucial steps in your application process but also proves your ability to leverage cutting-edge AI for real-world applications. Recruiters and hiring managers notice candidates who build tools to solve their own problems.

This project showcases your ability to apply AI, solve real-world problems, and stand out from other candidates.
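Concretely, "reasons, plans, and uses tools" boils down to a loop: look at the task, choose a tool, call it, and collect the result. Here is a deliberately tiny, framework-free sketch of that loop; the keyword check stands in for an LLM's reasoning, and every name in it is illustrative rather than taken from any library.

```python
# Toy agent loop: "reason" about which tool fits a task, then act.
# In a real agent an LLM does the reasoning; a keyword check stands in here.

def summarize_jd(task: str) -> str:
    """Stand-in for an LLM-backed job-description parser."""
    return f"summary: {task}"

def research_company(task: str) -> str:
    """Stand-in for a web-research tool."""
    return f"research notes: {task}"

TOOLS = {"summarize_jd": summarize_jd, "research_company": research_company}

def pick_tool(task: str) -> str:
    # A real agent asks the LLM to choose a tool based on tool descriptions.
    return "research_company" if "company" in task.lower() else "summarize_jd"

def run_agent(tasks):
    """Observe each task, choose a tool, act, and collect the results."""
    return [TOOLS[pick_tool(t)](t) for t in tasks]
```

The rest of this article replaces the keyword heuristic with a real LLM, orchestrated by LangChain.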

Why AI Agents are a Game-Changer for New Grad Portfolios

Traditional projects often highlight specific frameworks or languages. While essential, these projects rarely demonstrate an ability to orchestrate complex tasks or integrate multiple systems. An AI agent, however, directly addresses these advanced capabilities. You move beyond merely coding a feature to designing an autonomous system.

Building an AI agent for your job search proves several high-demand skills. You apply principles of prompt engineering, API integration, and tool creation. It demonstrates practical problem-solving by automating a tedious process. You also showcase an understanding of modern AI paradigms, which is invaluable in today's tech landscape. This project acts as a tangible example of your ability to learn and adapt to new technologies, a critical trait for any developer.

Setting Up Your Environment: Python, Libraries, and Your First Agent

You need a robust Python environment to build your AI agent. Python 3.9 or newer is ideal. We use a virtual environment to manage dependencies cleanly, preventing conflicts with other projects. Key libraries include langchain for agent orchestration, langchain-openai for accessing OpenAI's large language models, python-dotenv for secure API key management, requests for HTTP requests, and beautifulsoup4 for web scraping.

First, create a project directory and a virtual environment. Activate it, then install the necessary libraries.

mkdir job_search_agent
cd job_search_agent
python -m venv venv
# On Windows:
# .\venv\Scripts\activate
# On macOS/Linux:
# source venv/bin/activate
pip install langchain langchain-openai python-dotenv requests beautifulsoup4

Next, set up your OpenAI API key securely. Create a .env file in your project root and add your API key. Replace "your_openai_api_key_here" with your actual key.

# .env
OPENAI_API_KEY="your_openai_api_key_here"

Now, write your first Python script, main.py, to test your environment and make a basic LLM call. This confirms your setup is correct and your API key is loaded.

# main.py
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Load environment variables from .env file
load_dotenv()

# Initialize the Large Language Model (LLM)
# We use gpt-3.5-turbo for cost-effectiveness and good performance.
# temperature=0 keeps responses as consistent (near-deterministic) as possible.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

def test_llm_connection():
    """Tests the connection to the LLM and prints a response."""
    print("Testing LLM connection...")
    try:
        response = llm.invoke([HumanMessage(content="Hello, LLM!")])
        print(f"LLM Response: {response.content}")
    except Exception as e:
        print(f"Error connecting to LLM: {e}")
        print("Please ensure your OPENAI_API_KEY is correctly set in the .env file.")

if __name__ == "__main__":
    test_llm_connection()

Run python main.py. You should see a friendly greeting from the LLM, confirming your setup. This foundational step is crucial before building more complex agent logic.

Building Core Agent Skills: Job Description Parsing & Custom Research Tools

An effective AI agent needs specialized tools to interact with the world and process information. For a job search agent, two critical tools are job description parsing and web research. These tools allow the agent to extract structured information from unstructured text and gather external data.

Tool 1: Job Description Parsing

Manually reading and extracting key details from every job description is time-consuming. An LLM excels at this task. We create a tool that takes raw job description text and returns a structured summary. This allows the agent to quickly identify required skills, responsibilities, and qualifications.

The @tool decorator from LangChain registers a function as an available tool for the agent. The docstring of the tool function is vital; it tells the agent when and how to use the tool.

# main.py (add to existing main.py or create a new file and import)
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage
# Assuming llm is already initialized from the previous section

@tool
def parse_job_description(jd_text: str) -> str:
    """
    Parses a job description text to extract key information like required skills,
    responsibilities, and qualifications. Returns a formatted string summary.
    The agent uses this tool to understand job requirements.
    """
    prompt = f"""
    Analyze the following job description and extract the following information:
    - Job Title
    - Company Name
    - Required Skills (list 3-5 most important)
    - Responsibilities (list 3-5 key duties)
    - Qualifications (e.g., Bachelor's degree, years of experience)
    - Nice-to-haves (optional skills/experience)

    Format the output as a clear, concise bulleted list. If a piece of information
    is not explicitly mentioned, state "Not specified".

    Job Description:
    ---
    {jd_text}
    ---
    """
    response = llm.invoke([HumanMessage(content=prompt)])
    return response.content

# Example usage for testing (remove or comment out when integrating with agent)
# if __name__ == "__main__":
#     load_dotenv()
#     llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
#     sample_jd = """
#     Software Engineer I at Netanel Systems
#     We are looking for a talented Software Engineer to join our team.
#     Responsibilities include developing and maintaining backend services, participating in code reviews, and collaborating with cross-functional teams.
#     Required Skills: Python, SQL, Git, REST APIs.
#     Qualifications: Bachelor's degree in Computer Science or related field.
#     Nice-to-haves: AWS experience, Docker, Kubernetes.
#     """
#     print("--- Testing parse_job_description tool ---")
#     print(parse_job_description.invoke(sample_jd))

Tool 2: Custom Web Research

Job descriptions often mention specific company names, products, or niche technologies. To provide tailored advice, the agent needs to perform quick web research. This tool fetches the content of a given URL, allowing the agent to gather context. We use requests for fetching web pages and BeautifulSoup for parsing HTML and extracting clean text.

Web scraping can be fragile. Websites change, and some block automated requests. Including a User-Agent header and robust error handling improves reliability. We also truncate the output to avoid exceeding LLM token limits, a common trade-off in agent design.
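The flat 2000-character cutoff used below is a blunt instrument. If you want a slightly smarter budget, a common rule of thumb is roughly four characters per token for English text; the helper below (an illustrative sketch, not part of LangChain) truncates to an approximate token budget on a word boundary. For exact counts you would use a tokenizer such as tiktoken instead.

```python
def truncate_for_llm(text: str, max_tokens: int = 500, chars_per_token: int = 4) -> str:
    """Truncate text to a rough token budget, cutting at a word boundary.

    Uses the ~4 characters/token heuristic for English text.
    """
    limit = max_tokens * chars_per_token
    if len(text) <= limit:
        return text
    cut = text[:limit]
    # Back up to the last whitespace so we don't split a word mid-way.
    if " " in cut:
        cut = cut.rsplit(" ", 1)[0]
    return cut + " [truncated]"
```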

# main.py (add to existing main.py)
import requests
from bs4 import BeautifulSoup
# Assuming llm is already initialized

@tool
def search_web_page(url: str) -> str:
    """
    Fetches the content of a given URL and returns its text.
    Useful for researching companies or technologies mentioned in job descriptions.
    Returns the first 2000 characters of the cleaned text to manage token limits.
    """
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
        }
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()  # Raise an exception for HTTP errors (4xx or 5xx)

        soup = BeautifulSoup(response.text, 'html.parser')
        # Remove script and style elements to get cleaner text
        for script_or_style in soup(['script', 'style']):
            script_or_style.extract()

        text = soup.get_text()
        # Clean up excessive whitespace and newlines
        clean_text = "\n".join(line.strip() for line in text.splitlines() if line.strip())
        return clean_text[:2000] # Return first 2000 characters to avoid token limits

    except requests.exceptions.RequestException as e:
        return f"Error fetching URL {url}: {e}"
    except Exception as e:
        return f"An unexpected error occurred: {e}"

# Example usage for testing (remove or comment out when integrating with agent)
# if __name__ == "__main__":
#     load_dotenv()
#     llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
#     print("\n--- Testing search_web_page tool ---")
#     # Using a placeholder URL, replace with an actual one for testing
#     print(search_web_page.invoke("https://www.netanel.systems"))

Integrating Tools into an Agent

With our tools defined, we now create the agent itself. LangChain's create_openai_tools_agent function simplifies this. We provide the LLM, our custom tools, and a prompt that defines the agent's persona and instructions. The agent then decides which tool to use, when, and with what input, based on the user's query and its internal reasoning.

The agent's prompt is crucial. It guides the agent's behavior and output format. We instruct it to act as an assistant for new graduates, specifically tailored for job search advice. The AgentExecutor orchestrates the agent's decisions and tool calls. verbose=True outputs the agent's thought process, which is invaluable for debugging and understanding its behavior.

# main.py (add to existing main.py)
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Define all tools available to the agent
tools = [parse_job_description, search_web_page]

# Define the agent's prompt template
# This prompt instructs the agent on its role and how to interact.
# {input} is where the user's query goes.
# agent_scratchpad is a placeholder where the agent accumulates its
# intermediate tool calls and their results.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful AI assistant designed to analyze job descriptions and provide tailored, actionable advice for new graduates applying for software development roles. Always try to use the provided tools to gather information before responding."),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

# Create the agent
# This agent uses the LLM to decide which tools to use based on the prompt and input.
agent = create_openai_tools_agent(llm, tools, prompt)

# Create the AgentExecutor
# This executor runs the agent, managing its tool calls and thought process.
# verbose=True shows the agent's internal reasoning steps.
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

if __name__ == "__main__":
    # llm was already initialized at module level after load_dotenv()

    print("\n--- Running the AI Agent for Job Search ---")

    # Define a complex input for the agent
    job_description_with_company = """
    Software Engineer I at Netanel Systems (https://www.netanel.systems)
    We are looking for a talented Software Engineer to join our team.
    Responsibilities include developing and maintaining backend services, participating in code reviews, and collaborating with cross-functional teams.
    Required Skills: Python, SQL, Git, REST APIs.
    Qualifications: Bachelor's degree in Computer Science or related field.
    Nice-to-haves: AWS experience, Docker, Kubernetes.
    Experience with financial technology is a plus.
    """

    user_input = f"""
    I am a new graduate looking for my first software engineering role.
    Analyze the following job description and provide:
    1. A summary of the key requirements and responsibilities.
    2. A list of 3-5 specific projects or experiences I should highlight on my resume to match these requirements,
       assuming I have a basic Python/web development background.
    3. Based on the company's website (if provided), suggest one specific aspect of the company's mission or
       products I should mention in my cover letter to show genuine interest.

    Job Description:
    ---
    {job_description_with_company}
    ---
    """

    # Invoke the agent with the user's input
    try:
        response = agent_executor.invoke({"input": user_input})
        print("\n--- Agent's Final Output ---")
        print(response["output"])
    except Exception as e:
        print(f"An error occurred during agent execution: {e}")

When you run this main.py, you will see the agent's reasoning and tool invocations logged in the console. It typically calls parse_job_description first, then search_web_page if it spots a URL, and finally synthesizes everything to answer your multi-part query. This is agentic behavior in action.

Showcasing Your AI Agent: Tips for Your Resume & Interviews

Building the agent is only half the battle; effectively showcasing it is equally important. This project differentiates you, so ensure it gets proper visibility on your resume and in interviews.

Resume Strategies

Treat your AI agent as a flagship project. Create a dedicated section for "AI/Machine Learning Projects" or "Advanced Software Projects." Use action verbs and quantify impact where possible.

  • Project Title: "AI-Powered Job Search Assistant" or "Automated Career Advisor Agent."
  • Bullet Points:
    • "Developed an AI agent using Python and LangChain to automate job description analysis and personalized application advice."
    • "Implemented custom tools for intelligent text extraction from job descriptions and web scraping for company research (BeautifulSoup, Requests)."
    • "Utilized prompt engineering and LLM APIs (OpenAI) to enable autonomous reasoning and task execution."
    • "Demonstrated proficiency in agentic workflows, API integration, and full-stack Python development."
    • "Reduced manual job application preparation time by X% (if you track usage)."
  • Include a Link: Always provide a link to a GitHub repository with clean, well-documented code. A README.md explaining the project, its architecture, and how to run it is essential. Consider a short video demo.
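The README does not need to be long. One possible outline (the `requirements.txt` and `.env.example` files are suggestions you would add to the repo, not something created earlier in this article):

```markdown
# AI-Powered Job Search Assistant

An AI agent that parses job descriptions and researches companies to
generate tailored application advice for new-grad developers.

## Architecture
LLM (OpenAI via LangChain) plus two custom tools:
- `parse_job_description` — structured summary of a raw job description
- `search_web_page` — fetch and clean company pages (requests, BeautifulSoup)

## Setup
1. `python -m venv venv && source venv/bin/activate`
2. `pip install -r requirements.txt`
3. Copy `.env.example` to `.env` and add your `OPENAI_API_KEY`

## Usage
`python main.py`
```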

Interview Strategies

During interviews, be ready to discuss your agent in detail. This project provides excellent talking points for technical and behavioral questions.

  • Explain the "Why": Start by explaining the problem you aimed to solve (the tedious job search process for new grads) and how an AI agent was the ideal solution. This shows problem-solving skills and business acumen.
  • Technical Deep Dive: Be prepared to discuss the architecture (LLM, tools, agent executor), specific libraries used, and design choices. Explain how langchain orchestrates the process and why you chose certain tools.
  • Challenges and Trade-offs: Discuss the difficulties you faced, such as prompt engineering for optimal results, handling web scraping failures, or managing LLM token limits. Explain how you iterated and improved the agent. This demonstrates critical thinking and resilience.
  • Live Demo (Optional but Powerful): If feasible, offer a quick live demonstration. Seeing the agent in action, especially its verbose thought process, is highly impactful. If a live demo isn't practical, have a well-rehearsed walkthrough of the code and its output.
  • Connect to Company: Research the company's tech stack or challenges. If they use Python, AI, or deal with data processing, draw parallels to your agent project. Show how your skills are transferable.

Next Steps: Expanding Your Agent's Capabilities & Learning More

Your first AI agent is a strong foundation. The field of agentic AI is rapidly evolving, offering numerous avenues for expansion. Continuously improving your agent keeps your skills sharp and your portfolio current.

Expanding Your Agent's Capabilities

  • Automated Cover Letter Generation: Integrate a tool that generates a personalized cover letter draft based on the parsed job description and company research. Always emphasize human review for quality assurance.
  • Resume Tailoring: Create a tool that takes your base resume and suggests specific keywords or phrasing to adapt it for a given job description.
  • Application Tracking: Build a tool to log applied jobs, company details, and application status into a simple database or spreadsheet.
  • Interview Preparation: Develop a tool that generates common interview questions based on the job description's skill requirements.
  • Dynamic Content Scraping: For websites that load content with JavaScript, explore tools like Selenium or Playwright instead of just requests and BeautifulSoup. This handles more complex web pages.
  • Integrate Other LLMs: Experiment with different LLMs, such as those from Anthropic (Claude) or open-source models like Llama 3 via Hugging Face. This broadens your experience with various API paradigms and model capabilities.
  • Agent Memory: Implement a memory component (e.g., using LangChain's memory modules) so the agent can retain context across multiple interactions, making it more conversational and efficient.

Learning More

The AI agent landscape is dynamic. Continuous learning is essential.

  • LangChain Documentation: Dive deeper into the official LangChain documentation. Explore different agent types, memory modules, and advanced prompting techniques.
  • OpenAI/Anthropic API Documentation: Understand the nuances of the LLM APIs you use. Learn about fine-tuning, embedding models, and advanced API features.
  • Prompt Engineering Guides: Master the art of crafting effective prompts. Resources like OpenAI's prompt engineering guide or various online courses provide valuable insights.
  • Agentic AI Frameworks and Patterns: Research broader concepts in agentic AI, such as multi-agent systems, self-reflection, and tool-use patterns.
  • Online Courses and Communities: Platforms like Coursera, Udacity, and deeplearning.ai offer courses on LLMs, prompt engineering, and agent development. Engage with communities on platforms like Reddit (r/LangChain, r/LocalLLaMA) or Discord.

By building and continuously improving your AI agent, you not only gain a powerful tool for your job search but also develop highly sought-after skills that will make you an indispensable asset in the rapidly evolving tech industry. This project demonstrates your initiative, technical prowess, and ability to leverage the future of software development.


Follow @klement_gunndu for daily deep dives on AI agents, Claude Code, Python patterns, and developer productivity. New article every day.
