eBay remains one of the largest e-commerce marketplaces with over 1.3 billion live listings. Whether you're tracking prices, monitoring competitors, or building a deal-finding tool, scraping eBay programmatically gives you a serious edge.
In this guide, I'll show you how to extract eBay product listings, prices, seller information, and auction data using Python.
## What Data Can You Scrape from eBay?

eBay exposes a rich set of data on its public pages:
- Product listings — title, description, images, condition, item specifics
- Pricing data — current price, Buy It Now price, shipping cost, best offer status
- Auction data — bid count, time remaining, bid history
- Seller info — username, feedback score, feedback percentage, location
- Category data — breadcrumbs, item specifics, product identifiers (UPC, MPN)
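Concretely, a single scraped listing can be modeled as a small record. The field names below are illustrative choices of ours, not anything eBay defines:

```python
from dataclasses import asdict, dataclass, field
from typing import Optional

@dataclass
class EbayListing:
    """Illustrative shape for one scraped listing (field names are ours, not eBay's)."""
    title: str
    price: Optional[str] = None          # e.g. "$79.99"
    shipping: Optional[str] = None       # e.g. "Free shipping"
    condition: Optional[str] = None      # e.g. "Brand New"
    seller: Optional[str] = None
    bid_count: Optional[int] = None      # auctions only
    item_specifics: dict = field(default_factory=dict)  # e.g. {"Brand": "Keychron"}

listing = EbayListing(title="Keychron K2", price="$79.99")
print(asdict(listing)["price"])  # -> $79.99
```

Keeping the shape explicit like this makes it easier to validate scraped rows before writing them to a file or database.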
## Setting Up Your Environment

```bash
pip install requests beautifulsoup4 lxml
```
## Scraping eBay Search Results
Let's start by scraping search results for a given keyword:
```python
import requests
from bs4 import BeautifulSoup
import time

def scrape_ebay_search(keyword, max_pages=3):
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/120.0.0.0 Safari/537.36",
        "Accept-Language": "en-US,en;q=0.9",
    }
    all_items = []
    for page in range(1, max_pages + 1):
        # Let requests URL-encode the keyword and page number
        params = {"_nkw": keyword, "_pgn": page}
        response = requests.get(
            "https://www.ebay.com/sch/i.html",
            headers=headers, params=params, timeout=30,
        )
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "lxml")

        for item in soup.select("div.s-item__wrapper"):
            title_el = item.select_one(".s-item__title span")
            price_el = item.select_one(".s-item__price")
            link_el = item.select_one("a.s-item__link")
            shipping_el = item.select_one(".s-item__shipping")
            condition_el = item.select_one(".SECONDARY_INFO")

            # Skip eBay's "Shop on eBay" placeholder card
            if not title_el or "Shop on eBay" in title_el.text:
                continue

            all_items.append({
                "title": title_el.text.strip(),
                "price": price_el.text.strip() if price_el else None,
                "url": link_el["href"].split("?")[0] if link_el else None,
                "shipping": shipping_el.text.strip() if shipping_el else "N/A",
                "condition": condition_el.text.strip() if condition_el else "N/A",
            })

        time.sleep(2)  # Be respectful with delays
    return all_items

results = scrape_ebay_search("mechanical keyboard", max_pages=2)
for item in results[:5]:
    print(f"{item['title']} — {item['price']}")
```
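Listing URLs follow the pattern `https://www.ebay.com/itm/<numeric id>`, so you can pull out a stable item ID for deduplication. This small helper is our own addition, not part of the scraper above:

```python
import re

def extract_item_id(url):
    """Pull the numeric item ID out of an eBay listing URL.

    Works for URLs like https://www.ebay.com/itm/123456789012,
    with or without a trailing query string.
    """
    match = re.search(r"/itm/(\d+)", url or "")
    return match.group(1) if match else None

print(extract_item_id("https://www.ebay.com/itm/123456789012?hash=abc"))  # 123456789012
```

Keying your records on the item ID instead of the full URL means the same listing won't be stored twice just because tracking parameters differ.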
## Scraping Individual Product Pages
Once you have listing URLs, you can extract detailed data from each product page:
```python
def scrape_ebay_product(url):
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/120.0.0.0 Safari/537.36",
    }
    response = requests.get(url, headers=headers, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "lxml")
    product = {}

    # Title
    title = soup.select_one("h1.x-item-title__mainTitle span")
    product["title"] = title.text.strip() if title else None

    # Price
    price = soup.select_one("div.x-price-primary span.ux-textspans")
    product["price"] = price.text.strip() if price else None

    # Condition
    condition = soup.select_one("span.ux-icon-text__text span.clipped")
    product["condition"] = condition.text.strip() if condition else None

    # Seller info
    seller = soup.select_one("div.x-sellercard-atf__info span.ux-textspans--BOLD")
    product["seller"] = seller.text.strip() if seller else None
    feedback = soup.select_one("div.x-sellercard-atf__info span.ux-textspans--SECONDARY")
    product["seller_feedback"] = feedback.text.strip() if feedback else None

    # Item specifics
    specifics = {}
    for row in soup.select("div.ux-layout-section-evo__col"):
        label = row.select_one("dt .ux-textspans--BOLD")
        value = row.select_one("dd .ux-textspans")
        if label and value:
            specifics[label.text.strip()] = value.text.strip()
    product["item_specifics"] = specifics

    # Shipping
    shipping = soup.select_one("div.ux-labels-values--shipping span.ux-textspans--BOLD")
    product["shipping"] = shipping.text.strip() if shipping else None

    return product
```
## Monitoring Auction Data
For auctions, you'll want to track bid counts and time remaining:
```python
def scrape_auction_data(url):
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/120.0.0.0 Safari/537.36",
    }
    response = requests.get(url, headers=headers, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "lxml")
    auction = {}

    # Current bid
    bid_price = soup.select_one("span[itemprop='price']")
    auction["current_bid"] = bid_price.text.strip() if bid_price else None

    # Number of bids
    bid_count = soup.select_one("a[data-testid='x-bid-count']")
    auction["bid_count"] = bid_count.text.strip() if bid_count else "0 bids"

    # Time remaining
    timer = soup.select_one("span.ux-timer__text")
    auction["time_remaining"] = timer.text.strip() if timer else None

    return auction
```
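The timer text usually reads like "2d 3h" or "4h 15m". If you want seconds for scheduling a re-check, a helper like this converts it (the format is an assumption based on how eBay typically renders countdowns):

```python
import re

def parse_time_remaining(text):
    """Convert a countdown like '2d 3h' or '4h 15m' into total seconds.

    Unrecognized or empty input yields 0.
    """
    units = {"d": 86400, "h": 3600, "m": 60, "s": 1}
    total = 0
    for value, unit in re.findall(r"(\d+)\s*([dhms])", text or ""):
        total += int(value) * units[unit]
    return total

print(parse_time_remaining("2d 3h"))  # 183600
```

That lets a monitor poll slowly for auctions ending in days and tighten the interval in the final minutes.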
## Handling eBay's Anti-Scraping Measures
eBay uses several defenses against automated scraping:
- Rate limiting — they'll block IPs that make too many requests
- CAPTCHAs — triggered by suspicious patterns
- Dynamic rendering — some content loads via JavaScript
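A common mitigation for rate limiting is retrying with exponential backoff. Here's a minimal sketch; the fetch callable is injected (e.g. `requests.get`, or a lambda with your headers baked in) so the retry logic itself stays easy to test:

```python
import time

def fetch_with_backoff(fetch, url, max_retries=4, base_delay=2.0):
    """Retry `fetch(url)` with exponential backoff on 429/503 responses.

    `fetch` is any callable returning an object with a `status_code`
    attribute, such as requests.get.
    """
    response = None
    for attempt in range(max_retries):
        response = fetch(url)
        if response.status_code not in (429, 503):
            return response
        # Wait 2s, 4s, 8s, ... before the next attempt
        time.sleep(base_delay * (2 ** attempt))
    return response  # Last response, still throttled after all retries
```

In practice you'd call it as `fetch_with_backoff(lambda u: requests.get(u, headers=headers, timeout=30), url)`, leaving the rest of the scraper unchanged.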
### Using Proxies to Avoid Blocks
For any serious scraping project, rotating proxies are essential. ThorData provides residential proxies that work well with e-commerce sites:
```python
proxies = {
    "http": "http://user:pass@proxy.thordata.com:9000",
    "https": "http://user:pass@proxy.thordata.com:9000",
}
response = requests.get(url, headers=headers, proxies=proxies)
```
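To spread requests across several endpoints, you can round-robin through a proxy list. The hostnames below are placeholders for whatever your provider gives you:

```python
from itertools import cycle

# Placeholder endpoints; substitute your provider's credentials
PROXY_URLS = [
    "http://user:pass@proxy1.example.com:9000",
    "http://user:pass@proxy2.example.com:9000",
    "http://user:pass@proxy3.example.com:9000",
]
_pool = cycle(PROXY_URLS)

def next_proxy():
    """Return the next proxy, round-robin, as a requests-style proxies dict."""
    proxy = next(_pool)
    return {"http": proxy, "https": proxy}
```

Each `requests.get(url, proxies=next_proxy())` then goes out through the next endpoint in the list, wrapping back to the first when the list is exhausted.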
For a managed solution that handles proxy rotation and CAPTCHA solving automatically, ScraperAPI wraps your request in a single API call:
```python
import requests

API_KEY = "YOUR_SCRAPERAPI_KEY"
target_url = "https://www.ebay.com/sch/i.html?_nkw=laptop"

# Pass the target as a parameter so its own query string gets URL-encoded
response = requests.get(
    "http://api.scraperapi.com",
    params={"api_key": API_KEY, "url": target_url},
)
```
## The Easy Way: Use a Pre-Built eBay Scraper
If you don't want to maintain your own scraping infrastructure, there's a ready-to-use eBay Scraper on Apify that handles all the complexity for you — proxy rotation, anti-bot bypassing, and structured JSON output.
You just provide a search keyword or listing URL, and it returns clean, structured data:
```json
{
  "title": "Keychron K2 Wireless Mechanical Keyboard",
  "price": "$79.99",
  "condition": "Brand New",
  "seller": "keychron_official",
  "seller_feedback": "99.8% positive",
  "shipping": "Free shipping",
  "bids": null,
  "url": "https://www.ebay.com/itm/..."
}
```
No infrastructure to manage, no proxies to configure, and it scales to thousands of listings automatically.
## Building a Price Tracker
Here's a practical example — a price tracker that monitors listings and alerts you to drops:
```python
import json
import time
from datetime import datetime

def track_prices(keywords, check_interval=3600):
    price_history = {}
    while True:  # Runs until interrupted
        for keyword in keywords:
            results = scrape_ebay_search(keyword, max_pages=1)
            timestamp = datetime.now().isoformat()

            for item in results[:10]:
                item_id = item["url"]
                if not item_id:
                    continue  # Skip items without a usable URL
                price_str = item["price"]

                if item_id not in price_history:
                    price_history[item_id] = []
                price_history[item_id].append({
                    "price": price_str,
                    "timestamp": timestamp,
                })

                # Check for price changes
                if len(price_history[item_id]) > 1:
                    prev = price_history[item_id][-2]["price"]
                    if price_str != prev:
                        print(f"Price change: {item['title']}")
                        print(f"  {prev} -> {price_str}")

        with open("price_history.json", "w") as f:
            json.dump(price_history, f, indent=2)
        time.sleep(check_interval)

track_prices(["rtx 4090", "ps5 console", "iphone 15 pro"])
```
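The tracker above compares raw price strings, which treats values like "$79.99" as opaque text. If you want numeric comparisons (for example, alerting only on drops rather than any change), a small parsing helper (our own addition, not part of eBay's markup) extracts the first amount:

```python
import re

def parse_price(price_str):
    """Extract the first numeric amount from an eBay price string.

    Returns None when no number is found (e.g. 'See price in cart').
    Range prices like '$25.00 to $40.00' yield the lower bound.
    """
    if not price_str:
        return None
    match = re.search(r"[\d,]+\.?\d*", price_str)
    if not match:
        return None
    return float(match.group().replace(",", ""))

print(parse_price("$1,299.99"))        # 1299.99
print(parse_price("$25.00 to $40.00")) # 25.0
```

With numeric prices you can alert when `new < prev` instead of on every textual difference.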
## Best Practices

- Respect robots.txt — check ebay.com/robots.txt before scraping
- Add delays — 2-5 seconds between requests minimum
- Use proxies — rotate IPs with ThorData to avoid bans
- Cache responses — don't re-scrape data you already have
- Handle errors gracefully — eBay pages change frequently, so build in fallbacks
- Consider the API — eBay has an official API for some use cases, though it's more limited
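The "cache responses" advice can be as simple as a file cache keyed by a hash of the URL. This sketch injects the fetch callable so it works with plain `requests` or any wrapper you already have:

```python
import hashlib
from pathlib import Path

def cached_get(url, fetch, cache_dir="cache"):
    """Return the page text for `url`, fetching only on a cache miss.

    `fetch` is a callable like lambda u: requests.get(u, timeout=30).text;
    responses are stored as files named by the SHA-256 of the URL.
    """
    cache = Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    path = cache / (hashlib.sha256(url.encode()).hexdigest() + ".html")
    if path.exists():
        return path.read_text()
    text = fetch(url)
    path.write_text(text)
    return text
```

During development this saves you from re-hitting eBay every time you tweak a CSS selector; just delete the cache directory when you want fresh data.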
## Wrapping Up
eBay scraping is straightforward once you handle the anti-bot measures. For small projects, a simple requests + BeautifulSoup setup works fine. For production workloads, use a managed tool like the eBay Scraper on Apify or pair your custom code with ScraperAPI for reliability.
The code examples above should get you started — adapt them to your specific use case and always scrape responsibly.