Remote job boards are everywhere. Most of them are aggregators scraping the same stale listings from Indeed and LinkedIn, padding results with "remote-friendly" roles that turn out to be hybrid. If you've ever built a job alert system or tried to do market research on remote hiring trends, you know the pain: garbage in, garbage out.
We Work Remotely is different, and that difference matters when you're working with the data programmatically.
## Why WWR data is worth scraping
We Work Remotely has been around since 2011. It's curated — companies pay to post, and the board only accepts fully remote positions. No "2 days in office" surprises. No recruiter spam. No ghost listings.
This means the signal-to-noise ratio is unusually high. When you pull data from WWR, you're getting:
- Real companies with real budgets (they paid to post)
- Genuinely remote roles (not hybrid bait-and-switch)
- Current openings (paid listings expire, so stale jobs get removed)
For developers who want to automate job searching or analyze the remote job market, this is the cleanest dataset you'll find.
## Three things developers actually do with this data
### 1. Build your own job alert pipeline
The simplest and most useful automation: scrape WWR daily, filter by your stack, and push matches to Slack or email.
```python
# Daily job filter: wwr_jobs is the list of scraped listings,
# send_slack_notification is defined elsewhere in your pipeline
for job in wwr_jobs:
    if any(tag in job["title"].lower() for tag in ["python", "fastapi", "backend"]):
        send_slack_notification(
            f"{job['title']} at {job['company']} — {job['applyUrl']}"
        )
```
Why not just use WWR's own email alerts? Because you can't filter by tech stack, salary range, or company size. With raw data, you build exactly the filter you need. Combine it with a cron job and you've got a personalized job radar that runs while you sleep.
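For the `send_slack_notification` call in the filter above, a minimal sketch using Slack's incoming webhooks will do. The webhook URL below is a placeholder; you'd create a real one in your own workspace:

```python
import json
import urllib.request

# Placeholder: create an incoming webhook in your Slack workspace and paste it here
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_payload(message: str) -> bytes:
    # Slack incoming webhooks accept a JSON body with a "text" field
    return json.dumps({"text": message}).encode("utf-8")

def send_slack_notification(message: str) -> None:
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=build_payload(message),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Wire this into a cron entry (or an Apify schedule) and new matches land in your channel every morning.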
### 2. Remote hiring market research
Which tech stacks are remote companies actually hiring for right now? Are Go roles growing? Is the React demand plateau real?
With structured job data, you can answer these questions with code instead of vibes:
- Track category trends week over week
- Count mentions of specific technologies in job titles
- Compare hiring volume across programming, devops, and design categories
This is useful whether you're deciding what to learn next, writing a blog post with real data, or advising a team on where the market is heading.
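The title-counting idea can be sketched in a few lines. The sample listings below are made up; real data comes from the scraper's JSON output. Matching on whole words (rather than substrings) keeps "go" from matching "django":

```python
import re
from collections import Counter

TECHS = {"python", "go", "react", "rust"}

def tech_counts(jobs):
    """Count how many job titles mention each tracked technology."""
    counts = Counter()
    for job in jobs:
        # Tokenize the title into lowercase words, then intersect with TECHS
        words = set(re.findall(r"[a-z+#.]+", job["title"].lower()))
        counts.update(TECHS & words)
    return counts

# Hypothetical sample; substitute the scraper's output here
jobs = [
    {"title": "Senior Backend Engineer (Python)", "category": "Programming"},
    {"title": "Go Platform Engineer", "category": "Programming"},
    {"title": "React Frontend Developer", "category": "Programming"},
    {"title": "Python Data Engineer", "category": "Programming"},
]
```

Run `tech_counts` on each week's scrape and diff the results to see which stacks are gaining.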
### 3. Competitor intelligence
If you're running a remote-first company, WWR data tells you which competitors are scaling up. A company posting 5+ roles in a month is clearly in growth mode. You can:
- Monitor specific companies for new postings
- Track which roles they're hiring for (engineering-heavy? sales-heavy?)
- Spot trends before they show up in press releases
This isn't shady — it's public data. Companies post on WWR specifically to be seen.
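The "5+ roles in a month" heuristic is easy to code against the scraper's `company` and `datePosted` fields. A sketch, with hypothetical company names:

```python
from collections import Counter
from datetime import date

def growth_mode_companies(jobs, since, min_roles=5):
    """Return companies with at least `min_roles` postings dated on/after `since`."""
    recent = Counter(
        job["company"]
        for job in jobs
        if date.fromisoformat(job["datePosted"]) >= since
    )
    return {company for company, n in recent.items() if n >= min_roles}

# Hypothetical sample data; real listings come from the scraper's output
sample = (
    [{"company": "Acme", "datePosted": f"2026-04-{d:02d}"} for d in range(1, 6)]
    + [{"company": "Globex", "datePosted": "2026-04-03"}]
    + [{"company": "Acme", "datePosted": "2025-12-01"}]  # outside the window
)
```

Lower `min_roles` to 1 and the same function becomes a plain "who posted anything this month" monitor.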
## What the data looks like
Each listing gives you structured fields that are easy to work with:
| Field | Example |
|---|---|
| `title` | Senior Backend Engineer (Python) |
| `company` | Basecamp |
| `applyUrl` | https://weworkremotely.com/remote-jobs/... |
| `datePosted` | 2026-04-10 |
| `category` | Programming |
Clean, consistent, and ready to pipe into a database, spreadsheet, or dashboard.
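Those fields map directly onto a CSV. A minimal sketch that converts the scraper's JSON output into a spreadsheet-ready file (the file paths are placeholders, and any extra fields in the JSON are simply ignored):

```python
import csv
import json

FIELDS = ["title", "company", "applyUrl", "datePosted", "category"]

def listings_to_csv(json_path, csv_path):
    """Convert a JSON array of listings into a CSV with the fields above."""
    with open(json_path, encoding="utf-8") as f:
        listings = json.load(f)
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(listings)
```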
## How to get the data
You can write your own scraper, but WWR's markup changes periodically and maintaining it gets old fast.
We built a WWR scraper on Apify that handles the extraction and outputs structured JSON. It runs on Apify's infrastructure, so you don't need to manage proxies or worry about rate limiting. Free tier gets you started.
The scraper supports filtering by category, and you can schedule it to run daily — which pairs nicely with the job alert pipeline described above.
## Try it out
If you're building anything with remote job data, give it a spin. And if it saves you time, a quick review on the Apify store page helps other developers find it.
Happy scraping.