I've been job hunting as a full-stack dev. I usually do freelance work, but I'm not the best businessman, and honestly it would be nice if somebody just told me what to build and I could get paid. But so many of the jobs on LinkedIn are ghost jobs: they're just accumulating resumes into a black hole, and none of them ever actually get checked. You write a whole cover letter for a job that doesn't exist.
I checked out a bunch of the newer tools. HiringCafe looked like the backend wasn't even hooked up, which is kinda ridiculous. They have money. StackJobs wouldn't load. Scoutify wanted money, and I was like nah, I'll just build this myself with Claude. Jobright kinda works, but the matching wasn't what I wanted: I wanted to search for specific engineering jobs that match my exact tech stack. I build with Django, React, and TypeScript; I don't want to read through every JD guessing if they use what I use.
So I built this thing called RepoRadar. I wanted it to be click click results, because my attention span is pretty short.

how it works
Google SSO > upload your resume > Claude API parses it and auto-selects your tech stack > hit search > see matching jobs sorted by most recent. This way you avoid the ghost jobs, because everything's pulled fresh. You click a job and it takes you to the actual company page to apply. minimal bs.
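The only Claude call in that flow is the resume parse. A minimal sketch of the response handling, assuming we prompt Claude to answer with a bare JSON array of tech names (that prompt shape is my assumption, not necessarily what RepoRadar does):

```python
import json

def parse_stack(raw_reply: str) -> list[str]:
    """Pull a JSON array of tech names out of the model's reply,
    tolerating any prose around it. Sketch: assumes we asked for
    a JSON array like ["django", "react"]."""
    start, end = raw_reply.find("["), raw_reply.rfind("]")
    if start == -1 or end == -1:
        return []
    return json.loads(raw_reply[start:end + 1])

# parse_stack('Sure! ["django", "react"]') -> ["django", "react"]
```

The detected names can then pre-check the tech-stack boxes in the UI so the user only has to hit search.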
where the jobs come from
After doing some research, it turns out most tech companies use one of like four ATS platforms, and they all have public APIs. No auth needed. You can hit them as much as you want, which is pretty sweet. So what I did is I mapped over 6,000 companies to their ATS platform: Greenhouse, Lever, Ashby, or Workable. I created a Celery beat task that hits all of them every morning around 6am and pulls fresh listings.
GET https://boards-api.greenhouse.io/v1/boards/stripe/jobs
That gives you every open role at Stripe in JSON. All four platforms work like this.
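A fetch sketch using only the standard library. The Greenhouse and Lever URL shapes below are the public ones; Ashby and Workable have similar unauthenticated endpoints but I've left them out here, and the exact response keys are worth double-checking against each platform's docs:

```python
import json
import urllib.request

# Public, unauthenticated job-board endpoints (sketch)
BOARD_URLS = {
    "greenhouse": "https://boards-api.greenhouse.io/v1/boards/{token}/jobs",
    "lever": "https://api.lever.co/v0/postings/{token}?mode=json",
}

def board_url(platform: str, token: str) -> str:
    return BOARD_URLS[platform].format(token=token)

def fetch_jobs(platform: str, token: str) -> list[dict]:
    with urllib.request.urlopen(board_url(platform, token)) as resp:
        data = json.load(resp)
    # Greenhouse wraps postings in {"jobs": [...]}; Lever returns a bare list
    return data["jobs"] if isinstance(data, dict) else data
```

So `fetch_jobs("greenhouse", "stripe")` hits the same URL as the example above.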
I also pull from RemoteOK, Remotive, We Work Remotely, and the monthly HN Who's Hiring thread. But it's mostly the ATS boards; a ridiculous number of jobs show up on RepoRadar, like 185,000 right now.
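The daily refresh is just a Celery beat entry. A config sketch, not RepoRadar's actual code (app and task names are made up; I'm assuming the Redis broker mentioned in the stack section):

```python
from celery import Celery
from celery.schedules import crontab

app = Celery("reporadar", broker="redis://localhost:6379/0")

# run the board scrape every morning at 06:00
app.conf.beat_schedule = {
    "refresh-ats-boards": {
        "task": "jobs.tasks.refresh_all_boards",
        "schedule": crontab(hour=6, minute=0),
    },
}

@app.task(name="jobs.tasks.refresh_all_boards")
def refresh_all_boards():
    # loop over the ~6,000 mapped companies and upsert fresh listings
    ...
```

Beat fires the task on schedule; the worker process picks it up and does the actual pulling.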
The mapping was honestly the hardest part of the whole project. Took way longer than expected, and I kept having to reprocess them and losing my SSH connection to Railway in the middle of it.
how the matching works
Every job description gets run through a regex extractor with ~170 keyword patterns: Django, React, PostgreSQL, TypeScript, Next.js, etc. It tags each job with what it detected. When you search it just filters where your stack overlaps with what's in the JD.
```python
import re

TECH_PATTERNS = {
    'django': r'\bdjango\b',
    'react': r'\breact(?:\.js|js)?\b',
    'postgresql': r'\b(?:postgres(?:ql)?|psql)\b',
    'typescript': r'\btypescript\b',
    'next_js': r'\bnext\.?js\b',
    # ~170 more
}

def extract_techs(description_text):
    detected = []
    text_lower = description_text.lower()
    for tech_name, pattern in TECH_PATTERNS.items():
        if re.search(pattern, text_lower):
            detected.append(tech_name)
    return detected
```
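The search side is then just a set intersection over those tags. A minimal sketch (in Postgres you could do the same thing with an ArrayField overlap filter, but the names here are made up):

```python
def matches(user_stack: set[str], job_techs: list[str]) -> bool:
    # any overlap between the user's stack and the job's detected tags counts
    return bool(user_stack & set(job_techs))

# matches({"django", "react"}, ["react", "aws"]) -> True
```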
No ML, no embeddings, no vector search. If I get some traffic maybe I'll add that stuff, who knows. Is it perfect? No. But is it tailored for engineering jobs? Absolutely.
I also wanted to make it so you could search by what companies are actually building with. Like I build with Claude, so I want to find companies that match my exact workflow. That's kind of the vision.
the stack
Django 5 / DRF on Railway, React 19 / TypeScript / Vite / Tailwind on Netlify, PostgreSQL, Redis + Celery for background jobs, Claude API for resume parsing, Google OAuth via django-allauth. Railway runs a single container that starts gunicorn and a Celery worker from a start.sh script. Frontend proxies API requests through netlify.toml. OAuth goes straight to Railway because you can't proxy OAuth redirects — learned that the hard way.
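The single-container setup comes down to a tiny launcher script. A sketch of what that start.sh might look like (the "reporadar" module names are assumptions):

```shell
#!/bin/sh
# start.sh sketch: Celery worker in the background,
# gunicorn in the foreground so the container tracks it
celery -A reporadar worker --loglevel=info &
exec gunicorn reporadar.wsgi:application --bind "0.0.0.0:${PORT:-8000}"
```

`exec` makes gunicorn the main process, so Railway's health checks and restarts follow the web server rather than the shell.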
what I learned
ATS public APIs are a goldmine. You can hit them way more than you can hit the GitHub API (which wasn't that useful anyway since most company repos are private). There's so many jobs out there it's actually crazy once you start pulling from these endpoints.
The company-to-ATS mapping took way longer than writing the application. Like, significantly. OK, maybe an exaggeration, but it was annoying either way.
Freshness matters more than volume. The older a posting is, the less likely you are to hear back from anybody. So everything on RepoRadar is sorted recent-first and refreshed daily.
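The recent-first sort is trivial if you keep the posting timestamps in ISO 8601, since those compare correctly as plain strings (the `posted_at` field name is an assumption):

```python
def recent_first(jobs: list[dict]) -> list[dict]:
    # ISO 8601 timestamps sort correctly as strings, newest last,
    # so reverse=True puts the freshest postings first
    return sorted(jobs, key=lambda j: j["posted_at"], reverse=True)
```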
Originally I had set up a GitHub integration to search repos and find organizations by tech stack then map that way, but it turns out most company repos are private so that didn't work the way I wanted. I pivoted to focusing on the ATS boards and remote platforms instead.
what's not great
6,200 companies is a lot, but it's not everything: no Workday, iCIMS, or Taleo coverage yet. Regex misses some edge cases. And if I get concurrent user load, I'm gonna have to split Celery out into its own Railway service instead of running it in the same container, but that's all doable.
Right now I've got it hooked up to Sentry for monitoring. I'd love to add OpenTelemetry if I can get some real users on it. I also set up a bunch of MCP servers (Railway, Chrome MCP), but the Chrome one isn't that helpful because you can't get past the Google SSO login screen with it.
try it
It's free. You can try it and complain to me and tell me what I did wrong — in fact I would love that. It'd be really cool if I can get some users on this thing.
Note: I made it so you can filter for US-remote only.
Email me: Mnraynor90@gmail.com - direct complaint line. You can also tell me it's cool.