Rajaram Yadav
I Built an AI Resume Optimizer While Job Hunting — Here’s What Actually Worked

Try it: https://shortlisted-one.vercel.app/
I got tired of rewriting my resume for every job application.

Different companies, different keywords, slightly different expectations—and somehow your resume needs to match all of them.

So I built Shortlisted: a tool that takes your resume + a job description and rewrites it into a tailored version.

What I didn’t expect?
I’d spend more time debugging async pipelines than building the AI itself.

**🚀 What the product does**

The flow is simple:

  • Upload your resume (PDF/DOCX)
  • Paste a job description (or URL)
  • Get a tailored resume ready to export

Under the hood, it:

  • Parses your resume
  • Extracts and analyzes the job description
  • Matches and rewrites content
  • Exports a polished PDF/DOCX

**🧠 Tech Stack (kept simple)**

  • Frontend: Next.js (App Router), TypeScript, Tailwind
  • Backend: FastAPI + PostgreSQL
  • Async processing: Redis + Celery
  • Auth & billing: Clerk + Stripe (test mode)
  • Analytics: PostHog

**⚙️ Architecture (high-level)**

    Client → API → Queue → Worker → Storage
      ↑                       │
      └────── Status API ←────┘
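That loop can be modeled end-to-end in a few lines. This is a plain-Python stand-in for illustration only; the real app uses FastAPI routes, a Redis-backed Celery queue, and Postgres, and the function and field names here are hypothetical:

```python
# In-memory sketch of the Client → API → Queue → Worker → Status loop.
# (Illustrative stand-ins: a dict for the Postgres job table, a Queue for Redis.)
import queue
import uuid

jobs: dict[str, dict] = {}                       # stands in for the job table
task_queue: "queue.Queue[str]" = queue.Queue()   # stands in for the broker

def start_optimize(resume_id: str, jd_id: str) -> str:
    """API: create a job record, enqueue it, return an id for polling."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "queued", "step": None,
                    "resume_id": resume_id, "jd_id": jd_id}
    task_queue.put(job_id)
    return job_id

def run_worker_once() -> None:
    """Worker: pull one job and run the pipeline stages in order."""
    job_id = task_queue.get()
    for step in ("parse", "match", "rewrite", "export"):
        jobs[job_id].update(status="processing", step=step)
    jobs[job_id]["status"] = "done"

def get_status(job_id: str) -> dict:
    """Status API: the granular state the UI polls."""
    return jobs[job_id]
```

The key design point is that status lives in storage written by the worker, not in the API process, so polling works no matter which server answers the request.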

  • Upload route → handles resume ingestion + parsing
  • Job route → processes the job description
  • Optimize route → kicks off async pipeline
  • Worker → runs rewrite, match, export stages
  • Status endpoint → returns granular progress for UI polling

**🔥 What was harder than expected**

**1. Async pipelines are easy to start, hard to trust**

Celery works great… until it doesn’t.

I ran into:

  • Jobs getting stuck in “pending”
  • Duplicate executions on retries
  • Partial failures mid-pipeline

Fixes involved:

  • Adding explicit job state tracking in the DB
  • Making tasks idempotent
  • Breaking the pipeline into smaller, traceable steps

Lesson: “it works locally” means nothing for async systems.
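The idempotency fix can be sketched like this. A dict stands in for the Postgres job-state table, and the step names are illustrative; the point is that completed steps are recorded, so a broker redelivery or Celery retry replays the pipeline as a no-op:

```python
# Sketch of the idempotency fix: each step checks recorded state before
# running, so a duplicate execution cannot repeat work already done.
job_state: dict[str, set[str]] = {}   # job_id -> steps already completed
run_log: list[str] = []               # side effects, to show dedup works

def run_step(job_id: str, step: str) -> None:
    done = job_state.setdefault(job_id, set())
    if step in done:                  # already ran: retry becomes a no-op
        return
    run_log.append(f"{job_id}:{step}")  # the actual work would happen here
    done.add(step)                    # record completion *after* success

def run_pipeline(job_id: str) -> None:
    for step in ("parse", "match", "rewrite", "export"):
        run_step(job_id, step)
```

Recording completion only after the work succeeds means a crash mid-step re-runs that step, which is exactly why each step must also be safe to repeat.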

**2. Resume parsing is messier than you think**

Resumes are wildly inconsistent:

  • Different formats (PDF vs DOCX)
  • Layout-heavy designs
  • Broken text extraction

I tried:

  • Regex → fast but brittle
  • LLM parsing → flexible but expensive
  • Hybrid → best balance

Final approach:

  • Structured extraction where possible
  • LLM fallback for messy sections
**3. UX for async systems is underrated**

Users don’t care that you’re using Redis or Celery.

They care that:

  • It doesn’t feel stuck
  • They know what’s happening
  • Progress feels real

So I added:

  • Step-level progress updates (not just “loading…”)
  • Backend-driven status tracking
  • Polling with meaningful states (not fake progress bars)

**4. LLM reliability is a system design problem**

It’s not just “call OpenAI and done.”

Real issues:

  • API failures
  • Rate limits
  • Cost control
  • Output inconsistency

What helped:

  • Provider fallback logic
  • Key validation before execution
  • Structured prompts + validation
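The fallback logic is roughly this shape. The provider callables here are hypothetical stand-ins, not real SDK calls; each one would wrap a specific vendor client:

```python
# Provider fallback sketch: try each LLM provider in order, treat
# failures and rate limits as retryable, and fail loudly at the end.
from collections.abc import Callable

class AllProvidersFailed(Exception):
    pass

def complete_with_fallback(prompt: str,
                           providers: list[Callable[[str], str]]) -> str:
    errors: list[str] = []
    for call in providers:
        try:
            return call(prompt)
        except Exception as exc:      # API error, rate limit, timeout
            errors.append(f"{call.__name__}: {exc}")
    raise AllProvidersFailed("; ".join(errors))
```

Collecting every provider's error before raising makes the "all providers down" case debuggable instead of silently surfacing only the last failure.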
**📈 What I’d improve next**

**Product-side**

  • Template system for multiple resume styles
  • Section-level editing (human-in-the-loop)
  • Better analytics: upload → export conversion
**Engineering-side**

  • Stronger retry + idempotency guarantees
  • Job deduplication (same resume + JD = no re-run)
  • Smarter caching for repeated job descriptions
  • Cost tracking per pipeline run
**💡 What I learned**

  • Async systems are where most real complexity lives
  • LLMs are the easy part—reliability is the hard part
  • Good UX matters more than clever backend design
  • “AI product” really means “distributed systems + AI”
**🤔 Open questions**

If you’re building something similar:

  • How are you handling async job reliability?
  • Are you using Celery, or something like Temporal?
  • How do you manage LLM fallbacks and cost control?

Would love to compare approaches.

If you’ve built anything in this space, drop a link—I’m curious how others are solving this.
