agenthustler

Posted on • Originally published at web-data-labs.com

Tracking Amazon Price Drops at Scale — Build a Price Monitor with Python

A friend runs a small dropshipping store. Nothing fancy — 200 SKUs, mostly home goods, sourced from Amazon and marked up on their own Shopify. His biggest headache for years was margin drift: Amazon would quietly drop a price, he'd keep selling at the old price, and by the end of the month his "20% margin" products had actually been 3% losers for weeks.

He asked me for a cheap price monitor. I built it in an afternoon. Here's the whole thing.

The idea

For each SKU, pull the current Amazon price once every few hours. If it moved more than some threshold, fire an alert. Store the history so we can look at trends and answer questions like "which products are trending down this week?"

Stack:

  • Apify's Amazon Scraper for the actual price pulls ($0.005/result — basically free at this scale).
  • SQLite for history.
  • A tiny Python script on a $5 VPS for scheduling.
  • Slack webhook for alerts.

That's it. No Redis, no queue, no Docker.

Step 1 — Watchlist

Start with a CSV. Column 1 is the ASIN, column 2 is the price you're currently selling at.

import csv

def load_watchlist(path: str = "watchlist.csv") -> dict[str, float]:
    out = {}
    with open(path) as f:
        for row in csv.reader(f):
            out[row[0].strip()] = float(row[1])
    return out

Keep it boring. You'll thank yourself later when you're debugging at 2am.
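
For reference, here's what a minimal watchlist.csv might look like — the ASINs and prices below are invented for illustration. Note there's no header row; load_watchlist treats every line as data.

```csv
B0EXAMPLE001,24.99
B0EXAMPLE002,13.50
B0EXAMPLE003,41.00
```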

Step 2 — Fetch current prices

import os, requests

APIFY_TOKEN = os.environ["APIFY_TOKEN"]
ACTOR = "cryptosignals~amazon-scraper"

def fetch_prices(asins: list[str]) -> list[dict]:
    run_input = {
        "asins": asins,
        "country": "US",
        "includeReviews": False,
    }
    r = requests.post(
        f"https://api.apify.com/v2/acts/{ACTOR}/run-sync-get-dataset-items",
        params={"token": APIFY_TOKEN},
        json=run_input,
        timeout=600,
    )
    r.raise_for_status()
    return r.json()

One call, all ASINs. For 200 products the actor finishes in about a minute. For thousands, chunk it — I found batches of 500 to be a sweet spot.
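
The chunking itself is a few lines. A sketch, using the batch size of 500 mentioned above (fetch_prices is the function from this step):

```python
from typing import Iterator

def chunked(seq: list, size: int = 500) -> Iterator[list]:
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def fetch_prices_chunked(asins: list[str]) -> list[dict]:
    # One actor run per batch; results are concatenated in order.
    items: list[dict] = []
    for batch in chunked(asins):
        items.extend(fetch_prices(batch))  # fetch_prices from above
    return items
```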

Step 3 — Store the history

import sqlite3
from datetime import datetime, timezone

DB = sqlite3.connect("prices.db")
DB.execute("""
CREATE TABLE IF NOT EXISTS price_history (
    asin TEXT,
    price REAL,
    currency TEXT,
    in_stock INTEGER,
    checked_at TEXT
)
""")
DB.execute("CREATE INDEX IF NOT EXISTS idx_asin_time ON price_history(asin, checked_at)")

def record(items: list[dict]) -> None:
    now = datetime.now(timezone.utc).isoformat()
    for it in items:
        price = it.get("price")
        if price is None:
            # No price in the result (failed lookup, unavailable listing) —
            # skip it rather than record a bogus 0.00 that would later
            # show up as a phantom 100% drop.
            continue
        DB.execute(
            "INSERT INTO price_history VALUES (?, ?, ?, ?, ?)",
            (
                it["asin"],
                float(price),
                it.get("currency", "USD"),
                1 if it.get("inStock") else 0,
                now,
            ),
        )
    DB.commit()

Never delete a row. Disk is cheap, history is priceless when you're trying to figure out if a price drop is a fluke or a trend.
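
As an example of what that history buys you, here's a sketch of a "lowest price per ASIN over the last 7 days" query against the table above. (checked_at is an ISO-8601 UTC string, so comparing it against SQLite's datetime() output lexicographically is accurate at day granularity.)

```python
import sqlite3

def weekly_low(db: sqlite3.Connection) -> list[tuple]:
    """Lowest recorded price per ASIN over the trailing 7 days."""
    return db.execute("""
        SELECT asin, MIN(price)
        FROM price_history
        WHERE checked_at >= datetime('now', '-7 days')
        GROUP BY asin
        ORDER BY asin
    """).fetchall()
```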

Step 4 — Detect drops

def significant_drops(threshold_pct: float = 5.0) -> list[tuple]:
    rows = DB.execute("""
    SELECT asin, price, checked_at FROM price_history
    ORDER BY asin, checked_at DESC
    """).fetchall()

    latest = {}
    previous = {}
    for asin, price, _ in rows:
        if asin not in latest:
            latest[asin] = price
        elif asin not in previous:
            previous[asin] = price

    alerts = []
    for asin, cur in latest.items():
        prev = previous.get(asin)
        if not prev or prev == 0:
            continue
        pct = (cur - prev) / prev * 100
        if pct <= -threshold_pct:
            alerts.append((asin, prev, cur, pct))
    return alerts

5% is a reasonable default. Below that, most of the signal is noise — Amazon nudges prices by a few cents constantly.

Step 5 — Alert

SLACK = os.environ["SLACK_WEBHOOK"]

def alert(drops: list[tuple]) -> None:
    if not drops:
        return
    lines = [
        f":chart_with_downwards_trend: *{a}* {p1:.2f} → {p2:.2f} ({pct:+.1f}%)"
        for a, p1, p2, pct in drops
    ]
    requests.post(SLACK, json={"text": "\n".join(lines)}, timeout=10)

Slack, Discord, email, SMS — it doesn't matter. What matters is that the alert lands somewhere you actually look.
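
For Discord, for instance, the only change is the payload key — Discord webhooks take content where Slack takes text. A sketch (the webhook URL comes from your server's channel settings):

```python
import requests

def alert_discord(webhook_url: str, text: str) -> None:
    # Same shape as the Slack version, different JSON key.
    requests.post(webhook_url, json={"content": text}, timeout=10)
```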

Step 6 — Schedule

Cron is fine. Every four hours:

0 */4 * * * cd /opt/price-monitor && python3 run.py >> run.log 2>&1

run.py is the three-line composition of everything above:

from monitor import load_watchlist, fetch_prices, record, significant_drops, alert

items = fetch_prices(list(load_watchlist().keys()))
record(items)
alert(significant_drops(threshold_pct=5))

Use cases beyond dropshipping

Once the pipeline is running, other uses appear naturally:

  • Comparison shopping. Same ASIN across multiple country marketplaces — find arbitrage.
  • Restock alerts. Flip the in_stock flag check instead of price.
  • Deal blogs. If you run a content site, price drops become content. Low effort, good traffic.
  • Personal wishlists. The nerdiest one. I track 20 board games and get pinged when anything drops 10%+.
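
The restock variant is the same two-snapshot trick as Step 4, applied to the in_stock column instead of price — a sketch against the schema above:

```python
import sqlite3

def restocked(db: sqlite3.Connection) -> list[str]:
    """ASINs whose newest snapshot is in stock but the one before wasn't."""
    rows = db.execute("""
        SELECT asin, in_stock FROM price_history
        ORDER BY asin, checked_at DESC
    """).fetchall()
    latest: dict[str, int] = {}
    previous: dict[str, int] = {}
    for asin, stock in rows:
        if asin not in latest:
            latest[asin] = stock
        elif asin not in previous:
            previous[asin] = stock
    return [a for a, s in latest.items() if s and previous.get(a) == 0]
```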

Cost check

At 200 products every 4 hours, that's 1,200 results per day × 30 days = 36,000 results/month. At $0.005/result that's $180/month — and the margin drift it caught in the first week alone covered the bill. For smaller lists (20–50 products) each run costs pennies.
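
The arithmetic generalizes to any list size; a small helper, with the rate defaulting to the $0.005/result quoted above:

```python
def monthly_cost(products: int, hours_between_runs: int, per_result: float = 0.005) -> float:
    """Estimated monthly bill: results per day × 30 days × per-result rate."""
    runs_per_day = 24 // hours_between_runs
    return products * runs_per_day * 30 * per_result
```

monthly_cost(200, 4) reproduces the $180/month figure above.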

If you want the actor: Amazon Scraper on Apify. It has a free tier so you can prove out the idea before wiring up cron.

The boring stack works. Go build.
