DEV Community

Nico Reyes

I automated competitor price tracking with Python (saved 4 hours per week)

Was spending every Monday morning checking 23 competitor product pages. Copy URL, open tab, scroll to price, write it down. Repeat. 3 hours 47 minutes gone on average.

Decided to automate it.

The manual process was killing me

Running a small e-commerce thing on the side. Needed to stay competitive on pricing. But manually checking prices across Amazon, eBay, and niche sites? Tedious as hell.

Spreadsheet had columns for:

  • Product name
  • Competitor URL
  • Current price
  • Last updated

Every. Single. Week. Manually.

First attempt: just scrape it

Thought I'd write a quick script. Grab HTML, parse price, done.

```python
import requests
from bs4 import BeautifulSoup

url = "https://example.com/product/123"
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
price = soup.find('span', class_='price').text
```

Worked for maybe 3 sites. Then:

  • Amazon blocked me (User-Agent issue)
  • JavaScript-rendered prices didn't show up
  • Some sites had weird HTML structures

Back to manual checking. Annoying.
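The "weird HTML structures" part is the real killer: every site marks up its price differently, so one universal selector never works. A minimal illustration, with static HTML standing in for two live pages (the class names here are made up):

```python
from bs4 import BeautifulSoup

# Two sites, two different markups for the same thing --
# you end up needing one selector per site
site_a = '<div><span class="price">$19.99</span></div>'
site_b = '<p class="product-cost">USD 24.50</p>'

def extract(html, selector):
    """Pull the raw price text for a given CSS selector, or None."""
    tag = BeautifulSoup(html, 'html.parser').select_one(selector)
    return tag.get_text(strip=True) if tag else None

print(extract(site_a, '.price'))         # $19.99
print(extract(site_b, '.product-cost'))  # USD 24.50
print(extract(site_a, '.product-cost'))  # None -- wrong selector, no crash
```

That per-site selector ends up living in the product list config later, which keeps the scraping code itself generic.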

Ended up fixing it in a couple of ways

Split the problem:

Amazon/eBay (big sites): Used existing scraper APIs instead of fighting detection myself. I thought I could beat Amazon's bot detection. I couldn't. ParseForge has Amazon product scrapers that handle that stuff already, which saved me from spending a week on proxy rotation.

Small sites: Basic requests + BeautifulSoup worked fine. These sites don't have serious bot detection.

Storage: Just appended to CSV. Thought about Postgres or something fancier. Then I realized weekly price checks = maybe 1,200 rows per year. CSV opens in Excel. Done.

Script looks something like:

```python
import csv
import os
import requests
from bs4 import BeautifulSoup
from datetime import datetime

# Small site scraping
def get_basic_price(url, selector):
    try:
        response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}, timeout=10)
        soup = BeautifulSoup(response.text, 'html.parser')
        price_text = soup.select_one(selector).text
        # Clean: "$19.99" -> 19.99
        return float(price_text.replace('$', '').replace(',', '').strip())
    except Exception:
        return None

# Amazon/big sites: use API
def get_amazon_price(product_id):
    # Call scraper API here
    # Returns structured data (price, title, rating, etc.)
    pass

# Weekly run
products = [
    {'name': 'Widget A', 'url': 'https://smallsite.com/widget-a', 'selector': '.price'},
    {'name': 'Widget B', 'asin': 'B08XYZ123', 'platform': 'amazon'},
]

results = []
for product in products:
    if 'asin' in product:
        price = get_amazon_price(product['asin'])
    else:
        price = get_basic_price(product['url'], product['selector'])

    results.append({
        'product': product['name'],
        'price': price,
        'date': datetime.now().strftime('%Y-%m-%d'),
    })

# Save to CSV (write the header only once, when the file is new)
new_file = not os.path.exists('competitor_prices.csv')
with open('competitor_prices.csv', 'a', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['product', 'price', 'date'])
    if new_file:
        writer.writeheader()
    writer.writerows(results)
```

Now runs every Monday via cron. Takes 47 seconds instead of 4+ hours.

Couple things that made it actually work

Don't fight big sites

  • Amazon/eBay have serious anti-bot stuff
  • Using existing tools (APIs, scrapers) beats debugging proxies for weeks
  • Small sites? Basic requests works fine

Error handling matters more than I thought

  • If one site fails, script continues with the rest
  • Logs failures to separate file
  • I check errors once a month (most are just temporary site changes)
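That keep-going-and-log pattern looks roughly like this (the `safe_check` helper and the `scrape_errors.log` filename are my own naming for illustration, not from the real script):

```python
import logging

# Failures go to their own file so the price CSV stays clean
logging.basicConfig(filename='scrape_errors.log', level=logging.WARNING,
                    format='%(asctime)s %(message)s')

def safe_check(name, fetch):
    """Run one price check; log the failure and move on if it blows up."""
    try:
        return fetch()
    except Exception as exc:
        logging.warning("%s failed: %s", name, exc)
        return None

# One broken site doesn't stop the whole run
prices = [safe_check('Widget A', lambda: 19.99),
          safe_check('Broken site', lambda: 1 / 0)]
print(prices)  # [19.99, None]
```

The `None`s still land in the CSV, which is actually useful: a product whose price is blank for three weeks straight usually means the selector broke.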

CSV is good enough

  • Opens in Excel
  • Fast enough for weekly checks
  • No database maintenance

Stuff I'd change

Honestly would add:

  • Price change alerts (email when competitor drops >10%)
  • Chart generation (see trends)
  • More product categories

But current version does the job. 4 hours back per week.
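The alert piece would be straightforward since the CSV already holds the history. A hedged sketch of the >10% drop check (the function name and dict shapes are assumptions, not existing code):

```python
def price_drop_alerts(previous, current, threshold=0.10):
    """Return (name, old, new) for products that dropped more than threshold."""
    alerts = []
    for name, old in previous.items():
        new = current.get(name)
        # Skip missing/None prices from failed scrapes
        if old and new and (old - new) / old > threshold:
            alerts.append((name, old, new))
    return alerts

last_week = {'Widget A': 20.00, 'Widget B': 50.00}
this_week = {'Widget A': 17.50, 'Widget B': 49.00}

# Widget A dropped 12.5%, Widget B only 2%
print(price_drop_alerts(last_week, this_week))  # [('Widget A', 20.0, 17.5)]
```

Wiring that to `smtplib` or a Slack webhook is the easy part; the comparison logic is all there is to it.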

Try it yourself

Basic approach:

  1. List your competitor URLs
  2. Figure out price selectors (browser inspector)
  3. Use requests for simple sites
  4. Use APIs/tools for complex sites (Amazon, eBay)
  5. Save to CSV
  6. Cron it
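Step 2 is the fiddly part, but once the browser inspector gives you a selector, the cleanup is the same everywhere. A minimal version of that clean step (`clean_price` is a hypothetical helper, mirroring what the script does inline):

```python
def clean_price(text):
    """'$1,299.99' -> 1299.99; None if the text isn't a price at all."""
    try:
        return float(text.replace('$', '').replace(',', '').strip())
    except ValueError:
        return None

print(clean_price('$1,299.99'))     # 1299.99
print(clean_price(' $5.00 '))       # 5.0
print(clean_price('Out of stock'))  # None
```

Returning `None` instead of raising means an out-of-stock page shows up as a blank cell in the CSV rather than a crashed run.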

ParseForge has scrapers for Amazon, eBay, and Walmart if you're tracking those. It handles the annoying stuff, though you still have to clean the data yourself.

Went from manual Monday drudgery to automated. Worth the weekend it took to build.
