import csv

import requests
from bs4 import BeautifulSoup

# Fetch the Hacker News job listings page
url = 'https://news.ycombinator.com/jobs'
response = requests.get(url, timeout=10)
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
job_listings = soup.find_all('tr', class_='athing')

# Extract job details
jobs = []
for job in job_listings:
    titleline = job.find('span', class_='titleline')
    if titleline is None:
        continue  # skip rows that aren't job listings
    title = titleline.text.strip()
    link = titleline.find('a')['href']
    # Not every listing has a separate company element,
    # so fall back to an empty string when it is absent
    company_tag = job.find('span', class_='company')
    company = company_tag.text.strip() if company_tag else ''
    jobs.append({'title': title, 'company': company, 'link': link})

# Save to CSV
with open('hn_jobs.csv', 'w', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=['title', 'company', 'link'])
    writer.writeheader()
    writer.writerows(jobs)
When you're looking for remote or freelance opportunities on Hacker News, manually copying job details is time-consuming and error-prone. The HackerNews Job Exporter automates the process by fetching job listings directly from Hacker News and exporting them to a CSV file in a single run. The tool is designed for Python developers who want to streamline their job-tracking workflow without the hassle of manual copying.
The script uses requests to fetch the job listings page and BeautifulSoup to parse the HTML and extract relevant information. Each job listing is saved with its title, company, and link, and then exported to a CSV file. This approach ensures that you don't lose any job listings due to copy-paste errors or missed entries.
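Because Hacker News can change its markup, it's worth guarding each lookup instead of assuming every tag exists. Below is a minimal sketch of that defensive-parsing idea, run against a hardcoded HTML fragment so it works offline (the fragment is illustrative, not a copy of the live page):

```python
from bs4 import BeautifulSoup

# A tiny, hardcoded fragment standing in for one listings row;
# the real page would be fetched with requests as shown above
html = """
<tr class="athing">
  <td class="title"><span class="titleline">
    <a href="https://example.com/job">Acme Corp is hiring engineers</a>
  </span></td>
</tr>
"""

soup = BeautifulSoup(html, 'html.parser')
rows = soup.find_all('tr', class_='athing')

jobs = []
for row in rows:
    titleline = row.find('span', class_='titleline')
    if titleline is None:
        continue  # skip rows that don't look like listings
    link_tag = titleline.find('a')
    jobs.append({
        'title': titleline.get_text(strip=True),
        'link': link_tag['href'] if link_tag else '',
    })

print(jobs)
```

The `if titleline is None` and `if link_tag` checks are what keep a markup change from crashing the whole export mid-run.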
# Example of how to handle pagination
base_url = 'https://news.ycombinator.com/jobs'
page = 1
all_jobs = []
while True:
    url = f'{base_url}?p={page}'
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, 'html.parser')
    job_listings = soup.find_all('tr', class_='athing')
    if not job_listings:
        break  # an empty page means there are no more listings
    for job in job_listings:
        titleline = job.find('span', class_='titleline')
        if titleline is None:
            continue
        all_jobs.append({
            'title': titleline.text.strip(),
            'link': titleline.find('a')['href'],
        })
    page += 1
The HackerNews Job Exporter handles pagination seamlessly, allowing you to export job listings from multiple pages without missing any data. This is especially useful when you're looking for opportunities across a large number of job listings. The script continues fetching jobs until there are no more entries, ensuring that you get a complete export of all available listings.
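When looping over many pages, it's also considerate to pause between requests. The sketch below shows the stop-and-sleep pattern in isolation, using a stand-in fetch_page function with made-up data (hypothetical, defined here only so the loop runs without network access):

```python
import time

# Stand-in pages of job dicts; an empty list marks the end (made-up data)
PAGES = [
    [{'title': 'Job A'}, {'title': 'Job B'}],
    [{'title': 'Job C'}],
    [],
]

def fetch_page(page):
    """Stand-in for the real requests/BeautifulSoup fetch."""
    return PAGES[page - 1] if page <= len(PAGES) else []

all_jobs = []
page = 1
while True:
    listings = fetch_page(page)
    if not listings:
        break  # an empty page means we've walked past the last listing
    all_jobs.extend(listings)
    page += 1
    time.sleep(0.1)  # in the real script, ~1s between requests is polite

print(len(all_jobs))
```

Note that results are appended per page inside the loop; accumulating anything outside the loop body risks duplicating the first page's entries on every iteration.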
One of the key benefits of using the HackerNews Job Exporter is that it eliminates the need for manual copying. Instead of spending time copying and pasting job details, you can let the script do the work for you. This not only saves time but also reduces the risk of errors that can occur during manual data entry.
The tool relies on just two third-party dependencies, and you can install both with a single pip install requests beautifulsoup4 command. The script is ready to run out of the box, and the included README.txt provides step-by-step instructions for setting up and running the tool. Additionally, a sample_output.txt file shows exactly what the script produces, making it easy to understand the output format.
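If you want to sanity-check an export yourself, the standard library's csv module can read the file straight back into dictionaries. A small self-contained sketch (the file name matches the script above; the two rows are made up for illustration):

```python
import csv

# Write a couple of made-up rows in the same format the exporter uses
rows = [
    {'title': 'Acme is hiring', 'company': 'Acme',
     'link': 'https://example.com/a'},
    {'title': 'Widgets Inc is hiring', 'company': 'Widgets Inc',
     'link': 'https://example.com/b'},
]
with open('hn_jobs.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['title', 'company', 'link'])
    writer.writeheader()
    writer.writerows(rows)

# Read the file back; DictReader maps the header row onto each record
with open('hn_jobs.csv', newline='') as f:
    loaded = list(csv.DictReader(f))

print(loaded[0]['company'])
```

Because DictReader keys each row by the header line, downstream scripts can consume the export by column name rather than by position.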
If you're a Python developer looking to automate your job tracking process, the HackerNews Job Exporter is the perfect tool for you. It allows you to focus on what matters most—finding the right remote or freelance opportunities—without the hassle of manual data entry. You can find the tool at https://intellitools.gumroad.com/l/hackernews-job-exporter.