import csv

import requests
from bs4 import BeautifulSoup

url = 'https://news.ycombinator.com/jobs'
response = requests.get(url, timeout=10)
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')

# Each listing is a table row with class "athing", and the job link lives
# inside the "titleline" span. HN does not mark the company up as a separate
# element, so we pull it from the title text ("Acme is hiring ...") when we can.
jobs = []
for card in soup.select('tr.athing'):
    title_link = card.select_one('.titleline a')
    if title_link is None:
        continue
    title = title_link.get_text(strip=True)
    link = title_link['href']
    lowered = title.lower()
    company = title[:lowered.find(' is hiring')] if ' is hiring' in lowered else ''
    jobs.append({"title": title, "link": link, "company": company})

with open('jobs.csv', 'w', newline='', encoding='utf-8') as file:
    writer = csv.DictWriter(file, fieldnames=["title", "link", "company"])
    writer.writeheader()
    writer.writerows(jobs)
HN Job Scraper Pro is a Python tool that automates the process of fetching and exporting job data from Hacker News. This script fetches the latest job listings, parses the HTML content, and exports the results to a CSV file. It eliminates the need for manual scraping, which can be time-consuming and error-prone.
The tool is designed for remote developers and recruiters who need real-time access to job listings on Hacker News. By automating the scraping process, users can save hours of manual work each week. The script is ready to run out of the box, with clear setup instructions and all dependencies managed via a requirements.txt file.
One of the key features of HN Job Scraper Pro is its ability to filter jobs by location, type, and date range. This makes it easier to focus on relevant opportunities. The tool also provides daily job summaries, which can be useful for tracking hiring trends and staying ahead of the competition.
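As a rough illustration of how such filtering can work, here is a minimal sketch of a keyword filter over the scraped records. The `filter_jobs` helper and the sample data are illustrative assumptions, not part of the shipped tool; a location or date filter would follow the same pattern.

```python
def filter_jobs(jobs, keyword=None):
    """Return only the jobs whose title contains the keyword (case-insensitive)."""
    if keyword is None:
        return list(jobs)
    kw = keyword.lower()
    return [job for job in jobs if kw in job["title"].lower()]

# Illustrative sample records in the same shape the scraper produces.
sample = [
    {"title": "Acme (YC W24) is hiring remote backend engineers",
     "link": "https://example.com/1", "company": "Acme"},
    {"title": "Globex is hiring an onsite designer in NYC",
     "link": "https://example.com/2", "company": "Globex"},
]

remote_jobs = filter_jobs(sample, keyword="remote")
```

Because each job is a plain dict, adding another criterion is just another predicate in the list comprehension.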
To use the tool, you simply need Python 3.7 or later. The script is written in a way that makes it easy to modify and extend. For example, you can add custom filters or integrate it with other tools for data analysis. The README.txt file included in the download provides step-by-step instructions for installation and usage, ensuring that even beginners can get started quickly.
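One possible extension along those lines is a small summary step over the exported CSV. The `summarize_by_company` helper below is a sketch of that idea, assuming the three-column `jobs.csv` produced by the scraper; it is not part of the download.

```python
import csv
from collections import Counter

def summarize_by_company(path):
    """Count how many listings each company has in the exported CSV."""
    with open(path, newline='', encoding='utf-8') as f:
        reader = csv.DictReader(f)
        return Counter(row["company"] for row in reader if row["company"])
```

Calling `summarize_by_company('jobs.csv').most_common(5)` would then show the most active posters in the current batch.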
Here's an example of how to read back the data the script exports:
import csv

with open('jobs.csv', 'r', newline='', encoding='utf-8') as file:
    reader = csv.DictReader(file)
    for row in reader:
        print(f"{row['title']} - {row['company']} - {row['link']}")
This code reads the CSV file and prints out each job listing, making it easy to verify that the data is being exported correctly. The output is clean and structured, with title, link, and company information clearly separated.
HN Job Scraper Pro is the perfect tool for anyone looking to automate the process of tracking Hacker News job listings. Whether you're a remote developer searching for new opportunities or a recruiter looking to stay ahead of the curve, this tool can help you save time and increase productivity. For more information and to download the tool, visit HN Job Scraper Pro.