DEV Community

agenthustler
Scraping LinkedIn Job Listings in 2026: Public Data Without Login

LinkedIn is the world's largest professional network, with millions of job postings updated daily. What most people don't know is that LinkedIn exposes a public jobs API that requires no login, no API key, and no OAuth — just a simple HTTP GET request.

In this guide, I'll show you how to access LinkedIn's public job listings endpoint, extract job details, and build useful job market research tools.

The Hidden Public Endpoint

LinkedIn's job search has a guest-facing API designed for search engines and non-logged-in visitors. This endpoint returns job listings as HTML fragments that you can parse:

https://www.linkedin.com/jobs-guest/jobs/api/seeMoreJobPostings/search?keywords=python&location=United%20States&start=0

Key parameters:

  • keywords — job title, skills, or company name
  • location — city, state, country, or remote
  • start — pagination offset (increments of 25)
  • f_TPR — time posted filter (r86400 = 24h, r604800 = past week)
  • f_E — experience level (1=Intern, 2=Entry, 3=Associate, 4=Mid-Senior, 5=Director, 6=Executive)
  • f_JT — job type (F=Full-time, P=Part-time, C=Contract, T=Temporary)

No authentication required. Let's build with it.
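As a quick sanity check before writing any parsing code, you can assemble a filtered search URL yourself with nothing but the standard library. The parameter names below come straight from the list above; `build_search_url` is just an illustrative helper, not part of any LinkedIn API:

```python
from urllib.parse import urlencode

BASE = "https://www.linkedin.com/jobs-guest/jobs/api/seeMoreJobPostings/search"

def build_search_url(keywords, location, **filters):
    """Build a guest-API search URL; extra filters (f_TPR, f_E, f_JT) pass through."""
    params = {"keywords": keywords, "location": location, "start": 0}
    params.update(filters)  # e.g. f_TPR="r86400" for jobs posted in the last 24h
    return f"{BASE}?{urlencode(params)}"

url = build_search_url("python", "United States", f_TPR="r86400", f_E=4)
print(url)
```

Opening the printed URL in a browser (or fetching it with curl) should return an HTML fragment of job cards, which is what the parsing code below consumes.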

1. Searching for Jobs

Here's how to search for jobs and extract listing data:

import requests
import time
from bs4 import BeautifulSoup

def search_linkedin_jobs(keywords, location="United States", max_results=50):
    jobs = []
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
    }

    # The endpoint returns 25 results per page; `start` is the offset
    for start in range(0, max_results, 25):
        url = "https://www.linkedin.com/jobs-guest/jobs/api/seeMoreJobPostings/search"
        params = {
            "keywords": keywords,
            "location": location,
            "start": start
        }

        response = requests.get(url, params=params, headers=headers, timeout=10)
        if response.status_code != 200:
            break

        soup = BeautifulSoup(response.text, "html.parser")
        cards = soup.select("div.base-card")

        if not cards:
            break  # no more results

        for card in cards:
            title_el = card.select_one("h3.base-search-card__title")
            company_el = card.select_one("h4.base-search-card__subtitle")
            location_el = card.select_one("span.job-search-card__location")
            link_el = card.select_one("a.base-card__full-link")
            date_el = card.select_one("time")

            jobs.append({
                "title": title_el.get_text(strip=True) if title_el else None,
                "company": company_el.get_text(strip=True) if company_el else None,
                "location": location_el.get_text(strip=True) if location_el else None,
                "url": link_el["href"].split("?")[0] if link_el else None,
                "posted": date_el.get("datetime") if date_el else None
            })

        time.sleep(2)  # pause between pages to stay under the radar

    return jobs

jobs = search_linkedin_jobs("python developer", "Remote")
for job in jobs[:10]:
    print(f"{job['title']} at {job['company']} ({job['location']})")

2. Getting Job Details

Each job listing URL has a public view with full details:

import requests
from bs4 import BeautifulSoup
import re

def get_job_details(job_url):
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
    }

    response = requests.get(job_url, headers=headers, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")

    title = soup.select_one("h1.top-card-layout__title")
    company = soup.select_one("a.topcard__org-name-link")
    location = soup.select_one("span.topcard__flavor--bullet")
    description = soup.select_one("div.description__text")
    criteria = soup.select("li.description__job-criteria-item")

    details = {
        "title": title.get_text(strip=True) if title else None,
        "company": company.get_text(strip=True) if company else None,
        "location": location.get_text(strip=True) if location else None,
        "description": description.get_text(strip=True)[:500] if description else None,
    }

    # Extract seniority, employment type, industry
    for item in criteria:
        header = item.select_one("h3")
        value = item.select_one("span")
        if header and value:
            key = header.get_text(strip=True).lower().replace(" ", "_")
            details[key] = value.get_text(strip=True)

    return details

# Use a URL from the search results
job = get_job_details("https://www.linkedin.com/jobs/view/1234567890")
for key, value in job.items():
    if key != "description":
        print(f"{key}: {value}")
print(f"Description preview: {job.get('description', '')[:200]}...")
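To combine the two steps (search first, then pull details for each hit), keep a per-request delay between detail fetches. The sketch below takes the detail-fetching function as a parameter, so you can pass in get_job_details from above or a stub while testing; the `limit` default is an arbitrary choice:

```python
import time

def enrich_jobs(jobs, fetch_details, delay=3.0, limit=10):
    """Fetch full details for the first `limit` search results, pausing between requests."""
    enriched = []
    for job in jobs[:limit]:
        if not job.get("url"):
            continue  # skip cards where no link could be extracted
        details = fetch_details(job["url"])
        # merge the detail fields over the search-card fields
        enriched.append({**job, **details})
        time.sleep(delay)
    return enriched
```

Usage would look like `enrich_jobs(search_linkedin_jobs("python developer"), get_job_details)`.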

3. Building a Job Market Research Tool

Here's a practical script that tracks job trends:

import requests
from bs4 import BeautifulSoup
import time
from collections import Counter

def analyze_job_market(keywords_list, location="United States"):
    market_data = {}

    for keyword in keywords_list:
        print(f"Researching: {keyword}")
        jobs = search_linkedin_jobs(keyword, location, max_results=100)

        companies = Counter()
        locations = Counter()
        for job in jobs:
            if job["company"]:
                companies[job["company"]] += 1
            if job["location"]:
                locations[job["location"]] += 1

        market_data[keyword] = {
            "total_listings": len(jobs),
            "top_companies": companies.most_common(5),
            "top_locations": locations.most_common(5)
        }

        time.sleep(3)  # Be respectful between searches

    return market_data

skills = ["python developer", "rust developer", "golang developer"]
market = analyze_job_market(skills)

for skill, data in market.items():
    print(f"\n=== {skill.upper()} ===")
    print(f"Total listings: {data['total_listings']}")
    print("Top companies:")
    for company, count in data["top_companies"]:
        print(f"  {company}: {count} openings")
    print("Top locations:")
    for loc, count in data["top_locations"]:
        print(f"  {loc}: {count} jobs")

Advanced Filtering

LinkedIn's guest API supports several useful filters you can combine:

import requests
from bs4 import BeautifulSoup

def search_filtered(keywords, location, experience=None, job_type=None, posted_within=None):
    params = {
        "keywords": keywords,
        "location": location,
        "start": 0
    }

    # Experience level: 1=Intern, 2=Entry, 3=Associate, 4=Mid-Senior, 5=Director
    if experience:
        params["f_E"] = experience

    # Job type: F=Full-time, P=Part-time, C=Contract
    if job_type:
        params["f_JT"] = job_type

    # Time filter: r86400=24h, r604800=week, r2592000=month
    if posted_within:
        params["f_TPR"] = posted_within

    url = "https://www.linkedin.com/jobs-guest/jobs/api/seeMoreJobPostings/search"
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
    }

    response = requests.get(url, params=params, headers=headers, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    return soup.select("div.base-card")

# Senior full-time Python jobs posted in the last 24 hours
cards = search_filtered(
    keywords="python",
    location="San Francisco",
    experience="4",        # Mid-Senior
    job_type="F",          # Full-time
    posted_within="r86400" # Last 24 hours
)
print(f"Found {len(cards)} matching jobs")

Rate Limits and Best Practices

LinkedIn is more protective than most sites. Follow these rules:

  1. Keep delays at 3-5 seconds between requests
  2. Rotate User-Agent strings for longer sessions
  3. Don't hammer pagination — limit to 200-300 results per query
  4. Cache results locally — don't re-fetch the same listings
  5. Use the filters — narrower queries mean fewer requests
  6. Monitor for 429 responses — back off immediately if you get rate limited
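One way to honor rule 6 is a small retry wrapper with exponential backoff. In the sketch below, `fetch` is a stand-in for whatever request function you use (for example, a `lambda u: requests.get(u, headers=headers, timeout=10)`); it is an assumption for illustration, not a library API:

```python
import time

def get_with_backoff(fetch, url, max_retries=4, base_delay=3.0):
    """Call fetch(url); on HTTP 429, wait with doubling delays and retry.

    fetch must return an object with a .status_code attribute,
    like a requests.Response.
    """
    response = None
    for attempt in range(max_retries):
        response = fetch(url)
        if response.status_code != 429:
            return response
        if attempt < max_retries - 1:
            # 3s, 6s, 12s, ... before the next attempt
            time.sleep(base_delay * (2 ** attempt))
    return response  # still 429 after max_retries; caller decides what to do
```

Dropping this around the `requests.get` calls in the earlier functions keeps a single rate-limit policy in one place.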

Scaling Up

For production use cases — daily job market reports, recruiter dashboards, salary research across thousands of listings — building and maintaining your own scraper becomes a full-time job (pun intended).

LinkedIn Jobs Scraper on Apify handles proxy rotation, rate limiting, and structured data output, so you can focus on analysis instead of infrastructure.

What You Can Build

With LinkedIn job data, you can build genuinely useful tools:

  • Job market dashboards — track demand for skills over time
  • Salary research tools — compare compensation across locations
  • Recruiting automation — alert on new postings matching criteria
  • Career planning tools — see which skills lead to which roles
  • Competitive intelligence — monitor competitor hiring patterns

LinkedIn's public jobs endpoint is one of the most useful hidden APIs on the web. No login, no API key, just job data one GET request and a bit of parsing away. Start building your job market research tools today.
