Last year I spent 3 months job hunting. Applied to 200+ positions manually. Got 4 interviews. Terrible ratio.
So I built a Python automation that changed everything. Within 2 weeks of using it, I had 12 interviews lined up.
Here is the exact system.
## The Problem With Manual Job Hunting
Every day looked the same:
- Open 5 job boards
- Scroll through hundreds of listings
- Copy-paste my resume details
- Write a custom cover letter
- Click submit
- Repeat 20 times
- Get zero responses
I was spending 4-5 hours daily on applications. Most went into a black hole.
## The Automation Architecture
Here is what I built:
```
┌──────────────┐      ┌──────────────┐      ┌──────────────┐
│  Job Scraper │ ───▶ │   Matcher    │ ───▶ │   Tracker    │
│  (3 sources) │      │ (AI scoring) │      │ (SQLite DB)  │
└──────────────┘      └──────────────┘      └──────────────┘
```
Stage 1: Scrape jobs from multiple sources
Stage 2: Score each job against my skills
Stage 3: Track everything in a dashboard
## Stage 1: The Job Scraper
```python
import requests
from bs4 import BeautifulSoup  # used for the HTML-based boards; the API below returns JSON
from datetime import datetime

def scrape_jobs(keywords, location="remote"):
    jobs = []
    # Example: querying a job board's JSON API.
    # (The GitHub Jobs API shown here was retired in 2021 --
    # swap in any board that exposes a similar endpoint.)
    params = {
        "description": keywords,
        "location": location,
        "full_time": "true",
    }
    response = requests.get(
        "https://jobs.github.com/positions.json",
        params=params,
        headers={"User-Agent": "JobSearchBot/1.0"},
        timeout=10,
    )
    response.raise_for_status()
    for job in response.json():
        jobs.append({
            "title": job["title"],
            "company": job["company"],
            "location": job["location"],
            "url": job["url"],
            "description": job["description"][:500],
            "posted": job["created_at"],
            "source": "github_jobs",
            "scraped_at": datetime.now().isoformat(),
        })
    return jobs

# Run for multiple keywords
all_jobs = []
for keyword in ["python developer", "data engineer", "backend engineer"]:
    all_jobs.extend(scrape_jobs(keyword))

print(f"Found {len(all_jobs)} jobs")
```
Key insight: I scraped 3 different sources and deduplicated by company + title. This alone found jobs I would have missed.
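The deduplication step can be as simple as a set keyed on normalized (company, title) pairs. A minimal sketch of that idea (the `deduplicate` helper name and the sample data are mine, not from the original pipeline):

```python
def deduplicate(jobs):
    """Keep the first occurrence of each (company, title) pair."""
    seen = set()
    unique = []
    for job in jobs:
        # Normalize so "ACME " and "Acme" collapse to the same key
        key = (job["company"].strip().lower(), job["title"].strip().lower())
        if key not in seen:
            seen.add(key)
            unique.append(job)
    return unique

jobs = [
    {"company": "Acme", "title": "Python Developer", "source": "board_a"},
    {"company": "ACME ", "title": "python developer", "source": "board_b"},
    {"company": "Globex", "title": "Data Engineer", "source": "board_a"},
]
print(len(deduplicate(jobs)))  # 2 -- the duplicate Acme listing is dropped
```

Keeping the first occurrence means the listing from the first-scraped board wins, which is usually fine since the duplicates point at the same role.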
## Stage 2: AI-Powered Matching
This was the game-changer. Instead of reading every listing, I scored them:
```python
def calculate_match_score(job_description, my_skills):
    """Simple keyword matching - no AI API needed."""
    score = 0
    description_lower = job_description.lower()

    # Skill matching: each skill found adds its weight
    for skill, weight in my_skills.items():
        if skill.lower() in description_lower:
            score += weight

    # Red flag detection: heavy penalty for dealbreakers
    red_flags = ["10+ years", "15+ years", "unpaid", "equity only"]
    for flag in red_flags:
        if flag in description_lower:
            score -= 50

    # Green flag detection: small bonus for perks
    green_flags = ["remote", "flexible", "learning budget", "4-day week"]
    for flag in green_flags:
        if flag in description_lower:
            score += 10

    # Clamp to the 0-100 range
    return max(0, min(100, score))

my_skills = {
    "python": 20, "javascript": 15, "react": 15,
    "postgresql": 10, "docker": 10, "aws": 10,
    "fastapi": 15, "django": 10, "rest api": 10,
}

for job in all_jobs:
    job["match_score"] = calculate_match_score(
        job["description"], my_skills
    )

# Only apply to 70%+ matches
high_matches = [j for j in all_jobs if j["match_score"] >= 70]
print(f"{len(high_matches)} high-match jobs out of {len(all_jobs)}")
```
Result: Instead of applying to everything, I only applied to jobs scoring 70+. My interview rate went from 2% to 15%.
## Stage 3: Automated Tracking
```python
import sqlite3

def save_to_db(jobs):
    conn = sqlite3.connect("job_search.db")
    cursor = conn.cursor()
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS jobs (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            title TEXT, company TEXT, location TEXT,
            url TEXT UNIQUE, match_score INTEGER,
            status TEXT DEFAULT 'new',
            applied_at TEXT, response TEXT,
            source TEXT, scraped_at TEXT
        )
    """)
    for job in jobs:
        # INSERT OR IGNORE silently skips urls we already track,
        # so no try/except is needed around duplicates
        cursor.execute("""
            INSERT OR IGNORE INTO jobs
            (title, company, location, url, match_score, source, scraped_at)
            VALUES (?, ?, ?, ?, ?, ?, ?)
        """, (job["title"], job["company"], job["location"],
              job["url"], job["match_score"], job["source"],
              job["scraped_at"]))
    conn.commit()

    # Quick stats
    cursor.execute("SELECT COUNT(*), AVG(match_score) FROM jobs")
    total, avg = cursor.fetchone()
    print(f"Total tracked: {total} | Avg score: {avg:.1f}")
    conn.close()

save_to_db(high_matches)
```

(Note the quoting fix: SQL string literals take single quotes, so the default status is `'new'`; double-quoted `"new"` would be read as an identifier by strict SQL, even though SQLite tolerates it.)
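The `status` and `applied_at` columns are what make follow-up possible. A sketch of how one might flip a tracked job to "applied" against the schema above (the `mark_applied` helper name is mine):

```python
import sqlite3
from datetime import datetime

def mark_applied(url, db_path="job_search.db"):
    """Set a tracked job's status to 'applied' and record the timestamp."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "UPDATE jobs SET status = 'applied', applied_at = ? WHERE url = ?",
        (datetime.now().isoformat(), url),
    )
    conn.commit()
    conn.close()
```

With the status history in place, the conversion-rate numbers in the next section come straight out of a `GROUP BY status` query instead of a spreadsheet.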
## The Results After 2 Weeks
| Metric | Manual | Automated |
|---|---|---|
| Jobs found/day | 20 | 150+ |
| Applications/day | 15 | 8 (targeted) |
| Interview rate | 2% | 15% |
| Hours spent/day | 4-5h | 30min |
| Interviews in 2 weeks | 2 | 12 |
The key was not applying to MORE jobs — it was applying to the RIGHT jobs.
## Lessons Learned
- Quality over quantity — 8 targeted applications beat 20 spray-and-pray
- Scrape multiple sources — each board has different listings
- Red flag detection saved time — instantly filtered out bad postings
- Track everything — knowing my conversion rate helped me improve my resume
- Run it as a cron job — new jobs appeared before most humans checked
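For the cron point above, an illustrative crontab entry might look like this (the script path and log path are placeholders, not from the original setup):

```
# Run the pipeline every morning at 07:00, before most people check the boards
0 7 * * * /usr/bin/python3 /home/me/job_search/main.py >> /home/me/job_search/cron.log 2>&1
```

Redirecting both stdout and stderr to a log file matters here: cron runs silently, so the log is the only place scraper failures show up.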
## What is your job search hack?
Did you automate any part of your search? Or do you have a manual trick that works well? I am curious what approaches others take.
I write about Python automation, web scraping, and developer productivity. Follow for more practical tutorials.
Building something similar? Check out my 130+ web scraping tools list for more scraping libraries.
More from me: 10 Dev Tools I Use Daily | 77 Scrapers on a Schedule | 150+ Free APIs