
Nathaniel Hamlett

Originally published at nathanhamlett.com

I Built an AI Agent That Applies to Jobs While I Sleep

Job hunting is a full-time job that pays nothing. You spend hours scrolling boards, tailoring resumes, writing cover letters, and filling out forms — and 90% of it goes into a void. I decided to automate the entire pipeline.

I built an autonomous agent that runs 24/7 on a cron schedule. It discovers opportunities, researches companies, tailors resumes, writes cover letters, submits applications, and tracks follow-ups. Here's how it works and what I learned building it.

The Architecture

The system has three layers:

  1. Discovery — scans 9+ job APIs and specialized boards every few hours
  2. Research & Qualification — deep-dives each opportunity: company research, role analysis, network mapping, strategy development
  3. Conversion — builds tailored application packets and submits them

Everything runs through a SQLite database that tracks every opportunity through a pipeline: discovered → researched → strategy_ready → applied → interviewing → closed.
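That pipeline can be sketched as a single SQLite table with a state column. This is an illustrative schema, not the author's actual code; the table and column names are assumptions.

```python
import sqlite3

# Pipeline states in order, as described above.
STATES = ["discovered", "researched", "strategy_ready",
          "applied", "interviewing", "closed"]

def init_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS opportunities (
            id INTEGER PRIMARY KEY,
            company TEXT NOT NULL,
            role TEXT NOT NULL,
            apply_url TEXT,
            fit_score REAL,
            state TEXT NOT NULL DEFAULT 'discovered'
        )
    """)
    return conn

def advance(conn, opp_id):
    """Move an opportunity to the next pipeline state."""
    (state,) = conn.execute(
        "SELECT state FROM opportunities WHERE id = ?", (opp_id,)
    ).fetchone()
    i = STATES.index(state)
    if i < len(STATES) - 1:
        conn.execute("UPDATE opportunities SET state = ? WHERE id = ?",
                     (STATES[i + 1], opp_id))
        conn.commit()
```

Keeping state in one table means every cron job can query "what's ready for me?" with a single SELECT on the state column.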

Discovery (cron: 7 AM, 2 PM)
    → Scan job APIs (Adzuna, Jooble, HN Who's Hiring, The Muse, USAJOBS...)
    → Scan specialized boards (crypto, tech, local)
    → Hard-reject filter (auto-reject cybersecurity, medical, clearance-required)
    → Fit scoring (0-10 based on skills, experience, culture match)
    → Insert qualified opportunities to DB

Research (cron: 8 AM)
    → Pick top 5 unresearched opportunities
    → Company research (funding, stage, culture, red flags)
    → Role analysis (actual day-to-day, salary range, competition level)
    → Network mapping (who do I know there? mutual connections?)
    → Strategy development (best angle, strongest opening move)

Conversion (cron: 9 AM, 11 AM)
    → Pick top 5 strategy-ready opportunities
    → Generate tailored resume (keywords from JD, narrative variant rotation)
    → Write cover letter (role-specific, no templates)
    → ATS pre-score (keyword overlap, format compliance)
    → Submit via browser automation
    → Update DB, schedule follow-ups
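The hard-reject filter and fit scoring in the discovery step might look like the sketch below. The keyword lists and the overlap-based scoring are illustrative assumptions, not the author's actual heuristics.

```python
# Terms that trigger an automatic rejection (illustrative list).
HARD_REJECT = ("cybersecurity", "medical", "clearance")

# Skills to match against the job description (illustrative list).
MY_SKILLS = {"community", "operations", "discord", "telegram"}

def hard_reject(listing: dict) -> bool:
    """True if the listing contains any auto-reject term."""
    text = (listing["title"] + " " + listing["description"]).lower()
    return any(term in text for term in HARD_REJECT)

def fit_score(listing: dict) -> float:
    """Score 0-10 from skill overlap with the job description."""
    words = set(listing["description"].lower().split())
    overlap = len(MY_SKILLS & words) / len(MY_SKILLS)
    return round(overlap * 10, 1)
```

A real version would weight skills, experience, and culture signals separately, but even a crude overlap score is enough to rank a batch of listings.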

The Skill System

The agent has 40+ "skills" — modular instruction sets that tell it how to handle specific tasks. Each skill is a markdown file with a defined interface:

  • opportunity-scanner — knows which boards to scan, how to score fit, what to reject
  • deep-researcher — produces structured "intel cards" with apply URL, ATS type, warm paths, strategy
  • resume-cv-builder — generates tailored resumes from a locked facts file (never fabricates)
  • outreach-composer — writes personalized messages using conversation stage frameworks
  • follow-up-tracker — monitors application timelines and drafts follow-ups at 7/14/21 days

Skills are loaded contextually. A scan session only loads discovery skills. A conversion session only loads resume + submission skills. This prevents context pollution — a key lesson I learned the hard way.
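Contextual loading can be as simple as a session-to-skill map. The mapping and directory layout below are assumptions for illustration.

```python
from pathlib import Path

# Which skills each session type is allowed to load (illustrative mapping).
SESSION_SKILLS = {
    "discovery": ["opportunity-scanner"],
    "research": ["deep-researcher"],
    "conversion": ["resume-cv-builder", "outreach-composer"],
}

def load_skills(session: str, skills_dir: str = "skills") -> str:
    """Concatenate only the markdown skill files this session needs."""
    parts = []
    for name in SESSION_SKILLS.get(session, []):
        path = Path(skills_dir) / f"{name}.md"
        if path.exists():
            parts.append(path.read_text())
    return "\n\n".join(parts)
```

A discovery session never sees resume-building instructions, so its context stays small and on-task.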

Resume Facts: The Anti-Hallucination Constraint

LLMs will fabricate impressive-sounding experience if you let them. My solution: a resume_facts.json file that locks every claim:

{
  "companies": [
    {
      "name": "Corn",
      "dates": "March 2024 - December 2025",
      "title": "Head of Community & Operations",
      "metrics": {
        "community_size": "50,000+",
        "fundraise_supported": "$16.5M"
      }
    }
  ],
  "skills_verified": ["Discord", "Telegram", "Twitter/X", "Active Directory"],
  "education": [...]
}

The resume builder can reorganize, emphasize, and reframe — but it can NEVER fabricate companies, metrics, or skills not in this file. A programmatic validator checks every generated resume against the facts before submission.

This single constraint eliminated the "impressive but fictional" resume problem.
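One way to implement that validator is to flatten every string in the facts file into an allowed set, then flag any numeric claim in the generated resume that isn't grounded in it. This is a minimal sketch under that assumption, not the author's actual checker.

```python
import re

def allowed_claims(facts: dict) -> set[str]:
    """Flatten every string value in the facts file into a lowercase set."""
    out = set()
    def walk(node):
        if isinstance(node, dict):
            for v in node.values():
                walk(v)
        elif isinstance(node, list):
            for v in node:
                walk(v)
        elif isinstance(node, str):
            out.add(node.lower())
    walk(facts)
    return out

def fabricated_metrics(resume_text: str, facts: dict) -> list[str]:
    """Return numeric claims in the resume not present in the facts file."""
    allowed = " ".join(allowed_claims(facts))
    claims = re.findall(r"\$[\d.,]+[MKB]?|\d[\d,]*\+", resume_text)
    return [c for c in claims if c.lower() not in allowed]
```

If the returned list is non-empty, the resume fails validation and gets regenerated before it can be submitted.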

Narrative Variant Rotation

If you submit 20 applications with the same cover letter structure, recruiters notice. Template DNA kills response rates.

I define 5 narrative variants — different "story spines" for the same experience:

  • Variant A (Builder): Leads with building from zero, emphasizing creation
  • Variant B (Operator): Leads with managing complexity under pressure
  • Variant C (Connector): Leads with the relationship network and ecosystem navigation
  • Variant D (Technical Creative): Leads with technical projects as credibility signals
  • Variant E (Pivot): Leads with cross-domain pattern matching

Each application draws from a different variant. The DB tracks which one was used so they never repeat within a batch.
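The rotation logic amounts to "pick the first variant not yet used in this batch." A sketch, with hypothetical table and column names:

```python
import sqlite3

VARIANTS = ["A", "B", "C", "D", "E"]

def pick_variant(conn: sqlite3.Connection, batch_id: int) -> str:
    """Return the first variant not yet used within this batch."""
    used = {row[0] for row in conn.execute(
        "SELECT variant FROM applications WHERE batch_id = ?", (batch_id,))}
    for v in VARIANTS:
        if v not in used:
            return v
    return VARIANTS[0]  # all five used: start a new rotation
```

Because batches are capped at five applications, each batch exhausts the variant list exactly once before wrapping around.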

Browser Automation for Submission

The hardest part isn't writing applications — it's submitting them. Every ATS (Greenhouse, Lever, Ashby, Workable, iCIMS) has different forms, different required fields, different upload workflows.

I use browser-use — an LLM-driven browser automation library. Instead of writing brittle Playwright selectors for each ATS, I describe the task in natural language:

from browser_use import Agent

# llm: any chat model browser-use supports (e.g. an OpenAI client)
agent = Agent(
    task=f"""Go to {apply_url}. Fill application:
    Name: Nathan Hamlett
    Email: hello@nathanhamlett.com
    Upload resume from: {resume_path}
    Paste cover letter: {cover_letter_text}
    Submit.""",
    llm=llm,
)
result = await agent.run()

This handles 80% of ATS portals without custom code. When it fails, I fall back to targeted Playwright scripts for specific platforms.
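The routing between the generic agent and the platform-specific fallbacks can be a simple domain lookup on the apply URL. The handler names below are hypothetical:

```python
# Known ATS domains routed to dedicated Playwright scripts (illustrative).
ATS_HANDLERS = {
    "greenhouse.io": "submit_greenhouse",
    "lever.co": "submit_lever",
}

def pick_handler(apply_url: str) -> str:
    """Route known ATS platforms to custom scripts; default to the LLM agent."""
    for domain, handler in ATS_HANDLERS.items():
        if domain in apply_url:
            return handler
    return "submit_via_browser_agent"
```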

What I Learned

1. Discovery is cheap, conversion is expensive.

My first version had 8 discovery cron jobs and 0 conversion jobs. It found 400+ opportunities and submitted 0 applications. The ratio should be inverted — spend 20% of agent time discovering and 80% converting.

2. 70% of aggregator listings are stale.

Job boards serve cached listings that are already closed. I built a URL verification step that checks if the apply link is still live before building a packet. This saved hours of work on dead opportunities.
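A minimal liveness check just issues a HEAD request and treats 404/410 or a connection failure as a dead listing. This is a heuristic sketch, not the author's exact implementation:

```python
import urllib.request
import urllib.error

def link_is_live(url: str, timeout: float = 10.0) -> bool:
    """True if the apply link still responds with a non-error status."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "job-agent/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except urllib.error.HTTPError as e:
        # Some ATS portals reject HEAD; only treat "gone" codes as dead.
        return e.code not in (404, 410)
    except OSError:
        # DNS failure, refused connection, timeout: assume stale.
        return False
```

Running this before the research step means no intel card is ever built for a listing that can't be applied to.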

3. Context discipline matters.

Loading 40 tools into an LLM context degrades everything. Research shows performance drops significantly beyond 5-10 tools per session. Each cron job now loads ONLY the skills it needs.

4. ATS keyword matching is the first gate.

Before submitting, I run each resume through an ATS scoring check: keyword overlap with the JD, section heading compliance, format validation. If the score is below 70%, the resume gets rewritten before submission.
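The keyword-overlap part of that check can be approximated as the percentage of distinctive JD terms that also appear in the resume. Illustrative scoring, not a real ATS algorithm:

```python
import re

def ats_keyword_score(resume: str, job_description: str) -> float:
    """Percent of distinctive JD terms found in the resume (0-100)."""
    def terms(text: str) -> set[str]:
        words = re.findall(r"[a-z][a-z+/#.-]{2,}", text.lower())
        stop = {"the", "and", "for", "with", "you", "our", "are", "will"}
        return {w for w in words if w not in stop}
    jd = terms(job_description)
    if not jd:
        return 100.0
    return round(100 * len(jd & terms(resume)) / len(jd), 1)
```

Anything under the 70% threshold triggers a rewrite pass that works missing JD terms into the resume (only where the facts file supports them).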

5. Follow-ups are where responses come from.

Cold applications have a 0.5-2% response rate. A well-timed follow-up at day 7 can 3-4x that. The agent schedules follow-ups automatically and drafts personalized messages that reference something current (recent company news, product launch, etc).
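The 7/14/21-day cadence is trivial to schedule at submission time. A sketch:

```python
from datetime import date, timedelta

# Follow-up cadence in days after the application date.
FOLLOW_UP_DAYS = (7, 14, 21)

def follow_up_dates(applied_on: date) -> list[date]:
    """Return the dates on which follow-ups should be drafted."""
    return [applied_on + timedelta(days=d) for d in FOLLOW_UP_DAYS]
```

Each date goes into the DB alongside the application, and a daily cron job drafts the message when a due date arrives.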

The Numbers So Far

  • 500+ opportunities discovered
  • 25+ applications submitted (fully automated)
  • 5 narrative variants rotating
  • 9 job API sources integrated
  • 22 follow-up emails drafted and scheduled
  • Total human time: ~2 hours of initial setup, then autonomous

The agent runs on a $0/month infrastructure stack: SQLite for the database, cron for scheduling, free-tier API keys for job sources, and commodity LLM calls for generation.

Should You Build This?

If you're applying to more than 5 jobs per week, yes. The ROI is immediate. The first version took me a weekend to build, and it's saved dozens of hours since.

Start simple: a script that scans one job board, scores fit against your resume, and generates a tailored cover letter. Add submission automation later. Add follow-up tracking after that.

The full pipeline is ~2,000 lines of meaningful Python. Most of it is plumbing (API calls, database operations, file management). The actual intelligence is in the prompts — which is why the skill system works. You iterate on the prompts, not the code.


Nathaniel Hamlett builds autonomous AI systems. More at nathanhamlett.com.
