How to Monitor Yiwugo Product Prices Automatically
If you're sourcing products from Yiwugo.com, you already know prices aren't static. Suppliers adjust pricing based on raw material costs, order volume, seasonal demand, and competition. A product that costs ¥2.50 today might be ¥1.80 next month — or ¥3.20.
The sellers who win aren't the ones who check prices manually every week. They're the ones who automate it.
In this tutorial, I'll walk you through building an automated price monitoring system for Yiwugo products using the Yiwugo Scraper on Apify, a bit of Python, and a scheduling tool. By the end, you'll have a pipeline that tracks prices daily and alerts you when something interesting happens.
Why Monitor Prices?
Before we build anything, let's talk about why this matters:
1. Buy at the right time. Yiwugo prices for seasonal products (Christmas decorations, back-to-school supplies) can swing 20–40% depending on when you order. Monitoring lets you spot the dip.
2. Catch supplier price wars. When multiple suppliers compete on the same product, prices drop. If you're watching, you can jump in at the bottom.
3. Detect supply issues early. A sudden price spike across multiple suppliers usually means raw material costs went up or supply got tight. That's your signal to stock up before it gets worse.
4. Negotiate better. When you walk into a supplier conversation with 3 months of price history, you negotiate from strength. "Your competitor has been at ¥1.60 for the past 6 weeks" hits different than "I think this is too expensive."
Architecture Overview
Here's what we're building:
┌─────────────┐     ┌──────────────┐     ┌─────────────┐
│  Scheduler  │────▶│    Yiwugo    │────▶│  Price DB   │
│ (cron/Apify)│     │   Scraper    │     │ (CSV/JSON)  │
└─────────────┘     └──────────────┘     └──────┬──────┘
                                                │
                                         ┌──────▼──────┐
                                         │  Analysis   │
                                         │   Script    │
                                         └──────┬──────┘
                                                │
                                         ┌──────▼──────┐
                                         │   Alerts    │
                                         │   (Email/   │
                                         │   Webhook)  │
                                         └─────────────┘
Simple. Scrape → Store → Analyze → Alert.
Step 1: Set Up the Scraper
First, you need an Apify account and API token. The Yiwugo Scraper handles all the complexity of navigating Yiwugo's Chinese-language site, dealing with pagination, and extracting structured data.
Install the Apify client:
pip install apify-client
Here's a basic scrape for a specific product keyword:
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

run_input = {
    "keyword": "硅胶铲",  # Silicone spatula
    "maxItems": 100,
    "language": "en"
}

run = client.actor("jungle_intertwining/yiwugo-scraper").call(run_input=run_input)
items = list(client.dataset(run["defaultDatasetId"]).iterate_items())
print(f"Scraped {len(items)} products")
Each item includes the product title, price, supplier name, MOQ, and other details you'll need for monitoring.
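One caveat before storing anything: depending on the listing, the price field may come back as a plain number, a string with a currency symbol, or a range like "2.50-3.20". Here's a small helper that normalizes those cases to a float, taking the low end of a range. The input formats (and the `parse_price` name) are my own assumptions, not guarantees from the scraper — inspect a few of your own results first:

```python
import re

def parse_price(raw):
    """Normalize a price field to a float, or None if unparseable.

    Handles plain numbers (2.5), currency strings ("¥2.50"),
    and ranges ("2.50-3.20" -> low end). The formats handled
    here are assumptions; check your actual scrape output.
    """
    if isinstance(raw, (int, float)):
        return float(raw)
    if not raw:
        return None
    # Pull out all numeric tokens, e.g. "¥2.50-3.20" -> ["2.50", "3.20"]
    numbers = re.findall(r"\d+(?:\.\d+)?", str(raw))
    if not numbers:
        return None
    return float(numbers[0])  # low end of a range

print(parse_price("¥2.50"))      # 2.5
print(parse_price("2.50-3.20"))  # 2.5
print(parse_price("N/A"))        # None
```

Running prices through a normalizer like this keeps the analysis step from silently skipping rows it can't convert to float.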
Step 2: Store Price History
A single scrape is a snapshot. Price monitoring needs history. Here's a script that appends each scrape's results to a CSV file with timestamps:
import csv
import os
from datetime import datetime

from apify_client import ApifyClient


def scrape_and_store(keyword, output_file="price_history.csv"):
    client = ApifyClient("YOUR_APIFY_TOKEN")
    run = client.actor("jungle_intertwining/yiwugo-scraper").call(
        run_input={
            "keyword": keyword,
            "maxItems": 100,
            "language": "en"
        }
    )
    items = list(client.dataset(run["defaultDatasetId"]).iterate_items())

    timestamp = datetime.now().isoformat()
    file_exists = os.path.exists(output_file)

    with open(output_file, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if not file_exists:
            writer.writerow([
                "timestamp", "product_id", "title", "price",
                "min_order", "supplier", "supplier_location"
            ])
        for item in items:
            writer.writerow([
                timestamp,
                item.get("productId", ""),
                item.get("title", ""),
                item.get("price", ""),
                item.get("minOrder", ""),
                item.get("supplierName", ""),
                item.get("location", "")
            ])

    print(f"[{timestamp}] Stored {len(items)} records for '{keyword}'")
    return items

# Run it
scrape_and_store("硅胶铲")
After a week of daily scrapes, your CSV will look something like this:
timestamp,product_id,title,price,min_order,supplier,supplier_location
2026-02-01T09:00:00,12345,Silicone Spatula Set,2.50,100,Yiwu Kitchen Co.,Yiwu
2026-02-02T09:00:00,12345,Silicone Spatula Set,2.50,100,Yiwu Kitchen Co.,Yiwu
2026-02-03T09:00:00,12345,Silicone Spatula Set,2.30,100,Yiwu Kitchen Co.,Yiwu
That ¥0.20 drop on day 3? That's the kind of signal you want to catch.
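The arithmetic behind that signal is just (new − old) / old. A quick sketch on the three snapshots above:

```python
def pct_change(old, new):
    """Signed percentage change from old price to new price."""
    return (new - old) / old * 100

# The three snapshots above: ¥2.50 -> ¥2.50 -> ¥2.30
prices = [2.50, 2.50, 2.30]
for old, new in zip(prices, prices[1:]):
    print(f"{old:.2f} -> {new:.2f}: {pct_change(old, new):+.1f}%")
# 2.50 -> 2.50: +0.0%
# 2.50 -> 2.30: -8.0%
```

An 8% move wouldn't trip a 10% threshold on its own, which is exactly why the next step also tracks longer-run signals like all-time lows.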
Step 3: Analyze Price Changes
Now the interesting part. This script reads your price history and identifies significant changes:
import csv
from collections import defaultdict


def analyze_prices(csv_file="price_history.csv", threshold=0.10):
    """
    Analyze price history and flag products with significant changes.
    threshold: minimum percentage change to flag (0.10 = 10%)
    """
    # Group prices by product
    products = defaultdict(list)
    with open(csv_file, "r", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        for row in reader:
            try:
                price = float(row["price"])
                products[row["product_id"]].append({
                    "date": row["timestamp"],
                    "price": price,
                    "title": row["title"],
                    "supplier": row["supplier"]
                })
            except (ValueError, KeyError):
                continue

    alerts = []
    for product_id, history in products.items():
        if len(history) < 2:
            continue
        # Sort by date
        history.sort(key=lambda x: x["date"])
        latest = history[-1]
        previous = history[-2]
        # Calculate change
        if previous["price"] == 0:
            continue
        change = (latest["price"] - previous["price"]) / previous["price"]
        if abs(change) >= threshold:
            alerts.append({
                "product_id": product_id,
                "title": latest["title"],
                "supplier": latest["supplier"],
                "old_price": previous["price"],
                "new_price": latest["price"],
                "change_pct": change * 100,
                "direction": "📉 DROP" if change < 0 else "📈 SPIKE"
            })

    # Also find all-time lows
    for product_id, history in products.items():
        if len(history) < 7:  # Need at least a week of data
            continue
        prices = [h["price"] for h in history]
        current = history[-1]["price"]
        if current == min(prices) and current < sum(prices) / len(prices) * 0.9:
            alerts.append({
                "product_id": product_id,
                "title": history[-1]["title"],
                "supplier": history[-1]["supplier"],
                "old_price": max(prices),
                "new_price": current,
                "change_pct": ((current - max(prices)) / max(prices)) * 100,
                "direction": "⭐ ALL-TIME LOW"
            })

    return alerts

# Run analysis
alerts = analyze_prices(threshold=0.10)
for alert in alerts:
    print(f"{alert['direction']}: {alert['title']}")
    print(f"  Supplier: {alert['supplier']}")
    print(f"  Price: ¥{alert['old_price']:.2f} → ¥{alert['new_price']:.2f} ({alert['change_pct']:+.1f}%)")
    print()
Output looks like:
📉 DROP: Silicone Spatula Set (3-piece)
Supplier: Yiwu Kitchen Co.
Price: ¥2.50 → ¥1.80 (-28.0%)
📈 SPIKE: LED String Lights 10m
Supplier: Zhejiang Lighting Factory
Price: ¥3.20 → ¥4.10 (+28.1%)
⭐ ALL-TIME LOW: Bamboo Phone Stand
Supplier: Yiwu Craft Trading
Price: ¥5.00 → ¥2.80 (-44.0%)
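One practical wrinkle: if a price drops and stays there, the all-time-low check above will re-fire every single day. A small cooldown filter fixes that by remembering which product/direction pairs already alerted recently. This is a sketch; the state-file format and the `filter_new_alerts` name are my own conventions:

```python
import json
import os
from datetime import datetime, timedelta

STATE_FILE = "alert_state.json"
COOLDOWN_DAYS = 3

def filter_new_alerts(alerts, now=None, state_file=STATE_FILE):
    """Drop alerts whose (product_id, direction) pair fired within the cooldown."""
    now = now or datetime.now()
    state = {}
    if os.path.exists(state_file):
        with open(state_file, "r", encoding="utf-8") as f:
            state = json.load(f)

    fresh = []
    for alert in alerts:
        key = f"{alert['product_id']}:{alert['direction']}"
        last = state.get(key)
        if last and now - datetime.fromisoformat(last) < timedelta(days=COOLDOWN_DAYS):
            continue  # alerted recently, suppress the repeat
        state[key] = now.isoformat()
        fresh.append(alert)

    with open(state_file, "w", encoding="utf-8") as f:
        json.dump(state, f)
    return fresh
```

Run your alerts through this before sending, and a sustained low becomes one notification instead of a daily nag.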
Step 4: Set Up Alerts
Price data is useless if you don't act on it. Here's how to send alerts via webhook (works with Slack, Discord, or any webhook-compatible service):
import requests


def send_alert(alerts, webhook_url):
    """Send price alerts to a webhook (Slack, Discord, etc.)"""
    if not alerts:
        return
    message_lines = ["🔔 **Yiwugo Price Alert**\n"]
    for alert in alerts:
        message_lines.append(
            f"{alert['direction']}: {alert['title']}\n"
            f"  ¥{alert['old_price']:.2f} → ¥{alert['new_price']:.2f} "
            f"({alert['change_pct']:+.1f}%)\n"
            f"  Supplier: {alert['supplier']}\n"
        )
    payload = {"content": "\n".join(message_lines)}
    # For Slack, use {"text": ...} instead of {"content": ...}
    response = requests.post(webhook_url, json=payload)
    print(f"Alert sent: {response.status_code}")

# Example usage
alerts = analyze_prices(threshold=0.10)
if alerts:
    send_alert(alerts, "https://discord.com/api/webhooks/YOUR_WEBHOOK_URL")
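The Slack-vs-Discord note can be factored into a tiny helper so the rest of the pipeline doesn't care which service you point it at. `webhook_payload` is my own name, not a library function; Discord webhooks read a "content" field and Slack incoming webhooks read "text":

```python
def webhook_payload(message, service="discord"):
    """Wrap an alert message in the JSON field each webhook service expects.

    Discord webhooks expect {"content": ...}; Slack incoming
    webhooks expect {"text": ...}.
    """
    key = "content" if service == "discord" else "text"
    return {key: message}

print(webhook_payload("Price drop!", "slack"))
# {'text': 'Price drop!'}
```

Then `requests.post(webhook_url, json=webhook_payload(text, service))` works for either service with one config value.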
Step 5: Automate with Scheduling
The final piece: run this automatically. You have two options.
Option A: Apify Scheduler (Recommended)
Apify has built-in scheduling. You can set the scraper to run daily without managing any infrastructure:
# Create a scheduled task on Apify
schedule = client.schedules().create(
    name="yiwugo-daily-price-check",
    cron_expression="0 9 * * *",  # Every day at 9 AM UTC
    actions=[{
        "type": "RUN_ACTOR",
        "actorId": "jungle_intertwining/yiwugo-scraper",
        "runInput": {
            "keyword": "硅胶铲",
            "maxItems": 100,
            "language": "en"
        }
    }]
)
Option B: System Cron + Python Script
If you prefer running everything locally:
# Add to crontab (crontab -e)
0 9 * * * cd /path/to/project && python3 monitor.py >> monitor.log 2>&1
Where monitor.py combines all the steps:
#!/usr/bin/env python3
"""
Yiwugo Price Monitor — runs daily via cron.
Scrapes products, stores history, analyzes changes, sends alerts.
"""
import csv
import os
from datetime import datetime

from apify_client import ApifyClient

# analyze_prices() from Step 3 and send_alert() from Step 4
# should be defined here or imported (omitted for brevity).

# --- Config ---
APIFY_TOKEN = os.environ.get("APIFY_TOKEN", "YOUR_TOKEN")
WEBHOOK_URL = os.environ.get("WEBHOOK_URL", "")
KEYWORDS = ["硅胶铲", "发夹", "手机壳"]  # Add your keywords
DATA_DIR = "price_data"
ALERT_THRESHOLD = 0.10  # 10% change


def main():
    os.makedirs(DATA_DIR, exist_ok=True)
    client = ApifyClient(APIFY_TOKEN)
    all_alerts = []

    for keyword in KEYWORDS:
        print(f"Scraping: {keyword}")
        csv_file = os.path.join(DATA_DIR, f"{keyword}.csv")

        # Scrape
        run = client.actor("jungle_intertwining/yiwugo-scraper").call(
            run_input={"keyword": keyword, "maxItems": 100, "language": "en"}
        )
        items = list(client.dataset(run["defaultDatasetId"]).iterate_items())

        # Store
        timestamp = datetime.now().isoformat()
        file_exists = os.path.exists(csv_file)
        with open(csv_file, "a", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            if not file_exists:
                writer.writerow([
                    "timestamp", "product_id", "title", "price",
                    "min_order", "supplier", "supplier_location"
                ])
            for item in items:
                writer.writerow([
                    timestamp,
                    item.get("productId", ""),
                    item.get("title", ""),
                    item.get("price", ""),
                    item.get("minOrder", ""),
                    item.get("supplierName", ""),
                    item.get("location", "")
                ])
        print(f"  Stored {len(items)} records")

        # Analyze
        alerts = analyze_prices(csv_file, ALERT_THRESHOLD)
        all_alerts.extend(alerts)

    # Alert
    if all_alerts and WEBHOOK_URL:
        send_alert(all_alerts, WEBHOOK_URL)

    print(f"Done. {len(all_alerts)} alerts generated.")


if __name__ == "__main__":
    main()
Monitoring Multiple Product Categories
For serious sourcing operations, you'll want to monitor across categories. Here's a config-driven approach:
# monitor_config.json
{
  "categories": [
    {
      "name": "Kitchen Supplies",
      "keywords": ["硅胶铲", "不锈钢锅", "厨房收纳"],
      "frequency": "daily",
      "threshold": 0.10
    },
    {
      "name": "Fashion Accessories",
      "keywords": ["发夹", "耳环", "项链"],
      "frequency": "daily",
      "threshold": 0.15
    },
    {
      "name": "Holiday Decorations",
      "keywords": ["圣诞装饰", "LED灯串"],
      "frequency": "weekly",
      "threshold": 0.20
    }
  ]
}
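Wiring the config into monitor.py is a matter of loading the JSON and deciding which categories are due on a given day. Here's a sketch, assuming "weekly" categories run on Mondays (that rule is my own choice, not part of the config format above):

```python
import json
from datetime import date

def categories_due(config_json, today=None):
    """Return the categories whose schedule fires today."""
    today = today or date.today()
    config = json.loads(config_json)
    due = []
    for cat in config["categories"]:
        if cat["frequency"] == "daily":
            due.append(cat)
        elif cat["frequency"] == "weekly" and today.weekday() == 0:  # Monday
            due.append(cat)
    return due

config_json = """
{
  "categories": [
    {"name": "Kitchen Supplies", "keywords": ["硅胶铲"], "frequency": "daily", "threshold": 0.10},
    {"name": "Holiday Decorations", "keywords": ["圣诞装饰"], "frequency": "weekly", "threshold": 0.20}
  ]
}
"""
# On a Tuesday, only the daily category runs
print([c["name"] for c in categories_due(config_json, date(2026, 2, 3))])
# ['Kitchen Supplies']
```

The per-category threshold then gets passed straight into analyze_prices instead of the global ALERT_THRESHOLD.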
Different categories get different thresholds because price volatility varies. Fashion accessories move faster than hardware tools, so you want a wider threshold to avoid alert fatigue.
Real-World Tips
Start small. Monitor 3–5 keywords for 2 weeks before scaling up. You need to understand the baseline price patterns before alerts become meaningful.
Use Chinese keywords. Searching "硅胶铲" (silicone spatula) returns 3–5x more results than the English equivalent on Yiwugo. More data = better price signals.
Watch supplier count, not just prices. If the number of suppliers listing a product suddenly drops, that's a supply signal even if prices haven't moved yet. Fewer suppliers usually means prices will rise soon.
Combine with demand data. Price monitoring is most powerful when paired with demand signals. Cross-reference Yiwugo price drops with Google Trends or Amazon Best Sellers to find products where supply is cheap AND demand is growing.
Set different alert thresholds for buying vs. selling. If you're buying, you care about drops (set a -10% threshold). If you're already holding inventory, you care about spikes in competitor pricing (set a +15% threshold for opportunity alerts).
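In code, that asymmetry is just a direction-aware filter over the signed change percentage that analyze_prices already computes. A sketch (the buy/sell threshold names are mine):

```python
def filter_by_direction(alerts, buy_threshold=-10.0, sell_threshold=15.0):
    """Keep drops past buy_threshold and spikes past sell_threshold.

    change_pct is the signed percentage produced by analyze_prices,
    e.g. -28.0 for a 28% drop.
    """
    kept = []
    for alert in alerts:
        pct = alert["change_pct"]
        if pct <= buy_threshold or pct >= sell_threshold:
            kept.append(alert)
    return kept

alerts = [
    {"title": "Spatula", "change_pct": -28.0},  # big drop: buy signal
    {"title": "Lights", "change_pct": 12.0},    # spike, below sell threshold
    {"title": "Stand", "change_pct": 18.5},     # spike: opportunity alert
]
print([a["title"] for a in filter_by_direction(alerts)])
# ['Spatula', 'Stand']
```

Running this between analysis and alerting means a moderate spike never pages you while you're in buying mode.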
What This Looks Like After 30 Days
After running this system for a month, you'll have:
- A CSV file per keyword with daily price snapshots
- Clear visibility into which products have stable vs. volatile pricing
- Automatic alerts when prices hit buying opportunities
- Data to back up your supplier negotiations
The total cost? Apify's free tier gives you enough compute for monitoring ~10 keywords daily. The Python scripts run on any machine. The only real investment is the 30 minutes it takes to set this up.
Next Steps
- Clone the scripts above and configure your keywords
- Get an Apify API token (free tier works for monitoring)
- Run manually for a few days to verify the data
- Set up cron or Apify scheduling for daily automation
- Connect alerts to your preferred notification channel
Price monitoring turns sourcing from a guessing game into a data-driven operation. The suppliers who adjust prices most aggressively are also the ones most open to negotiation — and now you'll know exactly who they are.
📚 Related: Before building a monitoring pipeline, make sure you understand the unique challenges of Scraping Chinese E-commerce Sites — anti-bot systems, encoding, and rate limits all apply here.
Have questions about setting up price monitoring for your specific products? Drop a comment — I'll help you configure the right keywords and thresholds.
📦 Also check out:
- DHgate Scraper — Extract DHgate product data for dropshipping research.
- Made-in-China Scraper — Extract B2B product data, supplier info, and MOQ from Made-in-China.com