DEV Community

Tanzil Ahmed

I Built an Autonomous Job Application Agent with Claude AI — Here's How It Works

Job hunting today is broken.

You’re expected to:

  • Search across multiple job boards
  • Tailor resumes for every role
  • Research each company
  • Write custom cover letters
  • Track applications manually

It’s repetitive, time-consuming, and mentally draining — especially when you’re applying to dozens (or hundreds) of roles.

So I built Job Hunter AI — an autonomous job application agent that does this end-to-end:

Find jobs → Research companies → Generate tailored applications → Track everything
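Conceptually, that flow is a chain of stages, each feeding the next. Here is a minimal sketch of the pipeline shape; every function below is an illustrative stub (the names and return values are assumptions, not the project's real implementation):

```python
# Sketch of the agent pipeline as composed stages.
# All functions here are illustrative stubs, not the real implementation.

def search_jobs(query):
    # Stub: would call a job-search API
    return [{"title": "Backend Engineer", "company": "Acme"}]

def research_company(company):
    # Stub: would call Tavily/Exa for enrichment
    return {"company": company, "summary": f"{company} builds developer tools"}

def generate_application(job, research):
    # Stub: would call Claude with the job posting + research context
    return {"job": job["title"], "cover_letter": f"Dear {job['company']} team..."}

def run_pipeline(query):
    applications = []
    for job in search_jobs(query):
        research = research_company(job["company"])
        applications.append(generate_application(job, research))
    return applications

print(run_pipeline("backend roles"))
```

The real system streams each stage to the client instead of returning everything at the end, but the data flow is the same.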

This post breaks down exactly how I built it using:

  • Claude API (tool use)
  • FastAPI
  • WebSockets
  • Tavily + Exa (search + data enrichment)
  • PostgreSQL

The Problem: Job Hunting is High Friction

The core issue isn’t lack of jobs — it’s friction.

Every application requires:

  • Context switching (LinkedIn → company site → resume editor)
  • Repeated research
  • Manual writing
  • No feedback loop

Even worse:

  • You lose track of where you applied
  • You don’t know which strategy works
  • You burn out before you optimize

I didn’t want a “job tracker.”

I wanted an agent that behaves like a smart assistant:

“Find relevant roles, understand the company, and apply better than I would manually.”


System Architecture (High-Level)

Here’s how the system is structured:

                ┌──────────────────────┐
                │   Frontend (HTML)    │
                │  + WebSocket Client  │
                └─────────┬────────────┘
                          │
                          ▼
                ┌──────────────────────┐
                │      FastAPI         │
                │  (API + WebSockets)  │
                └─────────┬────────────┘
                          │
        ┌─────────────────┼─────────────────┐
        ▼                 ▼                 ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│ Claude API   │  │ Tavily API   │  │ Exa API      │
│ (Agent Brain)│  │ (Search)     │  │ (Deep Data)  │
└──────────────┘  └──────────────┘  └──────────────┘
                          │
                          ▼
                ┌──────────────────────┐
                │   PostgreSQL DB      │
                │ (Jobs + Tracking)    │
                └──────────────────────┘

Key idea:

Claude is not just generating text — it is orchestrating actions via tools.


Claude Tool Use: Turning LLM into an Agent

This is where things get interesting.

Instead of prompting Claude to “write a cover letter,” I gave it tools like:

  • search_jobs
  • research_company
  • generate_application

Claude decides when to call which tool.

Example: Company Research Tool

tools = [
    {
        "name": "research_company",
        "description": "Fetch detailed company insights",
        "input_schema": {
            "type": "object",
            "properties": {
                "company_name": {"type": "string"}
            },
            "required": ["company_name"]
        }
    }
]

Claude Call

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "Apply to backend roles at Stripe"}
    ]
)

Tool Execution Layer

def research_company(company_name):
    # tavily and exa are pre-initialized API client wrappers
    tavily_data = tavily.search(company_name)
    exa_data = exa.get_company_info(company_name)

    return {
        "summary": tavily_data["summary"],
        "culture": exa_data["culture"],
        "tech_stack": exa_data["tech"]
    }

What’s powerful here?

Claude:

  • Decides when to research
  • Uses the data to adapt the application
  • Maintains context across steps

This turns it from a chatbot into a decision-making system.
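The glue that makes this work is a dispatch layer: when Claude's response contains a tool_use block, the backend runs the matching local function and sends the result back as a tool_result. Here is a hedged sketch of that dispatch side; the registry and stub tools are illustrative, not the project's actual code:

```python
# Sketch of the tool-dispatch layer: map Claude's tool_use requests
# to local Python functions. The registry and stub tools are illustrative.

def search_jobs(query):
    return [{"title": "Backend Engineer", "company": "Stripe"}]

def research_company(company_name):
    return {"company": company_name, "summary": "stub research"}

TOOL_REGISTRY = {
    "search_jobs": lambda args: search_jobs(args["query"]),
    "research_company": lambda args: research_company(args["company_name"]),
}

def execute_tool(name, args):
    """Run the requested tool; return a result Claude can consume."""
    if name not in TOOL_REGISTRY:
        return {"error": f"unknown tool: {name}"}
    return TOOL_REGISTRY[name](args)

# In the real loop, `name` and `args` come from a tool_use content block
# in Claude's response, and the return value goes back as a tool_result.
print(execute_tool("research_company", {"company_name": "Stripe"}))
```

Returning an error payload for unknown tools (instead of raising) lets the model see the failure and recover on the next turn.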


FastAPI + WebSocket Pipeline

I didn’t want a “click → wait → response” UX.

This is a streaming system.

Why WebSockets?

Because the agent:

  • Finds jobs
  • Researches companies
  • Generates applications

…and I want the user to see that live.


Backend Flow

from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/ws")
async def websocket_endpoint(ws: WebSocket):
    await ws.accept()

    try:
        while True:
            query = await ws.receive_text()

            # Step 1: Find jobs
            jobs = search_jobs(query)
            await ws.send_json({"stage": "jobs_found", "data": jobs})

            for job in jobs:
                # Step 2: Research
                company_data = research_company(job["company"])
                await ws.send_json({
                    "stage": "research_done",
                    "data": company_data
                })

                # Step 3: Generate application
                application = generate_application(job, company_data)

                await ws.send_json({
                    "stage": "application_ready",
                    "data": application
                })
    except WebSocketDisconnect:
        pass  # client closed the connection

Frontend Behavior

  • Connects via WebSocket
  • Listens for stage updates
  • Renders results progressively

This creates a real-time agent experience.


What Surprised Me While Building This

1. Tool Use > Prompt Engineering

Initially, I tried:

“Write better prompts”

That plateaued quickly.

The real unlock was:

Giving the model structured tools and letting it act


2. Data Quality > Model Quality

Even with Claude:

  • Bad company data = bad applications
  • Weak search results = irrelevant jobs

Your system is only as good as:

The data pipeline feeding the model
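One cheap defense is a quality gate between enrichment and generation: refuse to generate an application from enrichment data that is too thin. A minimal sketch; the field names and threshold are assumptions, not the project's actual schema:

```python
# Sketch of a data-quality gate: reject enrichment results that are too
# thin to support a good application. Field names and the threshold are
# illustrative, not the project's actual schema.

REQUIRED_FIELDS = ("summary", "culture", "tech_stack")

def is_usable(company_data, min_summary_chars=80):
    """Return True only if the enrichment looks rich enough to use."""
    if any(not company_data.get(field) for field in REQUIRED_FIELDS):
        return False
    return len(company_data["summary"]) >= min_summary_chars

good = {"summary": "x" * 120, "culture": "remote-first", "tech_stack": ["Python"]}
thin = {"summary": "Too short.", "culture": "", "tech_stack": []}

print(is_usable(good), is_usable(thin))  # → True False
```

When the gate fails, the agent can re-run research with a different query instead of producing a generic application.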


3. State Management is Hard

When your agent:

  • Searches
  • Branches
  • Calls tools

You need to track:

  • Context
  • Progress
  • Failures

This becomes a mini orchestration engine, not just an API call.
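The core of that orchestration is a per-job state record you can advance, fail, and persist. A minimal sketch; the stage names and fields are assumptions (the real system persists this in PostgreSQL):

```python
# Sketch of per-job pipeline state tracking. Stage names and fields are
# illustrative; the real system would persist this in PostgreSQL.
from dataclasses import dataclass, field

STAGES = ("queued", "researching", "generating", "done", "failed")

@dataclass
class JobState:
    company: str
    stage: str = "queued"
    errors: list = field(default_factory=list)

    def advance(self, stage):
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.stage = stage

    def fail(self, reason):
        self.errors.append(reason)
        self.stage = "failed"

state = JobState(company="Stripe")
state.advance("researching")
state.fail("Tavily timeout")
print(state.stage, state.errors)  # → failed ['Tavily timeout']
```

Keeping failures as data (rather than exceptions that kill the loop) is what lets the agent retry one job without losing progress on the others.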


4. Real-Time UX Changes Everything

Without WebSockets:

  • It feels slow
  • Feels like a black box

With streaming:

  • Feels alive
  • Feels intelligent

How to Run It Yourself

  1. Clone the repo:

git clone https://github.com/Tanzil-Ahmed/job-hunter-agent

  2. Set environment variables:

ANTHROPIC_API_KEY=
TAVILY_API_KEY=
EXA_API_KEY=
DATABASE_URL=postgresql://...

  3. Install dependencies:

pip install -r requirements.txt

  4. Run the FastAPI server:

uvicorn api:app --reload

  5. Open the frontend in a browser:

index.html

What’s Next

Planned improvements:

  • Resume auto-optimization per job
  • Feedback loop (track response rates)
  • Auto-apply integrations
  • Multi-agent workflow (research agent + writing agent)

Final Thoughts

This project changed how I think about building with AI.

The shift is:

From “generate text” → to “build systems that act”

If you’re building AI apps today:

  • Don’t just prompt
  • Design agents
  • Build pipelines

Try It Yourself

If this was useful or interesting:

👉 Star the repo
👉 Run it locally
👉 Break it, improve it, build on top of it

This is just the beginning of what autonomous systems can do.

Let’s build smarter tools.
