How to Use Yiwugo Data to Find Trending Products Before They Go Viral

Finding the next trending product before your competitors do is the holy grail of e-commerce. While most sellers rely on Amazon Best Sellers or Google Trends, there's a massive untapped data source sitting right at the origin: Yiwugo.com, China's largest wholesale marketplace.

In this guide, I'll show you how to extract and analyze Yiwugo product data to identify trending products early — before they flood AliExpress and Amazon.

Why Yiwugo Data Gives You an Edge

Most product research tools look at retail data (Amazon reviews, Google search volume). But trends start at the supply side:

  • New products appear on Yiwugo weeks to months before they hit retail platforms
  • Supplier activity (new listings, price changes) signals rising demand
  • MOQ (Minimum Order Quantity) drops indicate suppliers scaling up for expected demand

By monitoring wholesale data, you're essentially looking at the leading indicator instead of the lagging one.

Step 1: Collect Product Data from Yiwugo

First, you need structured data. The Yiwugo Scraper on Apify extracts product listings with all the fields you need:

{
  "title": "LED Sunset Projection Lamp USB Night Light",
  "price": "¥8.50 - ¥15.00",
  "minOrder": "60 pieces",
  "shopName": "Yiwu Bright Electronics Co., Ltd",
  "categoryUrl": "https://www.yiwugo.com/product/c1/",
  "totalResults": 2847
}
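Note that `price` comes back as a range string, so you'll want a small helper to normalize it before doing any numeric analysis. A minimal sketch, assuming the field format shown in the sample above:

```python
def parse_price_range(price_str):
    """Parse a Yiwugo price string like '¥8.50 - ¥15.00' into (min, max) floats."""
    parts = price_str.replace('¥', '').split('-')
    low = float(parts[0].strip())
    high = float(parts[-1].strip())  # single-price listings: min == max
    return low, high

print(parse_price_range("¥8.50 - ¥15.00"))  # (8.5, 15.0)
```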

Set up a weekly scrape across multiple categories to build your dataset:

// Apify Actor input — scrape 5 trending-prone categories
const input = {
  categoryUrls: [
    "https://www.yiwugo.com/product/c1/",   // Electronics & Gadgets
    "https://www.yiwugo.com/product/c15/",  // Home & Garden
    "https://www.yiwugo.com/product/c8/",   // Fashion Accessories
    "https://www.yiwugo.com/product/c10/",  // Toys & Hobbies
    "https://www.yiwugo.com/product/c5/"    // Beauty & Health
  ],
  maxItems: 500
};

Step 2: Build a Trend Detection Pipeline

Raw data isn't useful until you analyze it. Here's a Python pipeline that identifies potential trending products:

import json
from collections import Counter

def load_snapshots(current_file, previous_file):
    """Load two weekly snapshots for comparison."""
    with open(current_file) as f:
        current = json.load(f)
    with open(previous_file) as f:
        previous = json.load(f)
    return current, previous

def detect_new_products(current, previous):
    """Find products that appeared this week but not last week."""
    prev_titles = {p['title'].lower() for p in previous}
    new_products = [
        p for p in current
        if p['title'].lower() not in prev_titles
    ]
    return new_products

def detect_keyword_surges(current, previous):
    """Find keywords that appear significantly more this week."""
    def extract_keywords(products):
        words = Counter()
        for p in products:
            for word in p['title'].lower().split():
                if len(word) > 3:
                    words[word] += 1
        return words

    curr_kw = extract_keywords(current)
    prev_kw = extract_keywords(previous)

    surges = {}
    for word, count in curr_kw.items():
        prev_count = prev_kw.get(word, 0)
        if prev_count > 0:
            growth = (count - prev_count) / prev_count
            if growth > 0.5:  # 50%+ increase
                surges[word] = {
                    'current': count,
                    'previous': prev_count,
                    'growth': f"{growth:.0%}"
                }

    return dict(sorted(surges.items(),
                       key=lambda x: x[1]['current'], reverse=True))

def detect_price_drops(current, previous):
    """Find products with significant price decreases (scaling signal)."""
    prev_prices = {}
    for p in previous:
        try:
            price = float(p['price'].replace('¥','').split('-')[0].strip())
            prev_prices[p['title'].lower()] = price
        except (ValueError, AttributeError):
            continue

    drops = []
    for p in current:
        try:
            price = float(p['price'].replace('¥','').split('-')[0].strip())
            prev = prev_prices.get(p['title'].lower())
            if prev and price < prev * 0.8:  # 20%+ drop
                drops.append({
                    'title': p['title'],
                    'old_price': f"¥{prev:.2f}",
                    'new_price': f"¥{price:.2f}",
                    'drop': f"{((prev-price)/prev)*100:.0f}%"
                })
        except (ValueError, AttributeError):
            continue

    return drops
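To sanity-check the surge detector before pointing it at real snapshots, you can run the same keyword logic on two tiny in-memory datasets (the listings below are fabricated for illustration):

```python
from collections import Counter

def extract_keywords(products):
    """Count title keywords longer than 3 characters (same logic as the pipeline)."""
    words = Counter()
    for p in products:
        for word in p['title'].lower().split():
            if len(word) > 3:
                words[word] += 1
    return words

# 4 "magnetic" listings last week, 7 this week -> 75% growth
previous = [{'title': 'Magnetic Phone Mount'}] * 4
current = [{'title': 'Magnetic Phone Mount'}] * 7

curr_kw, prev_kw = extract_keywords(current), extract_keywords(previous)
growth = (curr_kw['magnetic'] - prev_kw['magnetic']) / prev_kw['magnetic']
print(f"'magnetic': {prev_kw['magnetic']} -> {curr_kw['magnetic']} ({growth:.0%})")
# 'magnetic': 4 -> 7 (75%)
```

Since 75% clears the 50% threshold in `detect_keyword_surges`, this keyword would be flagged as surging.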

Step 3: Score and Rank Potential Trends

Not every new product is a trend. Use a scoring system to filter signal from noise:

def calculate_trend_score(product, keyword_surges, new_products):
    """
    Score a product's trend potential (0-100).

    Factors:
    - Is it new this week? (+30)
    - Does its title contain surging keywords? (+20 per keyword)
    - Low MOQ? (+15, signals supplier confidence)
    - Competitive price point? (+10)
    """
    score = 0
    title_lower = product['title'].lower()

    # New product bonus
    if any(p['title'] == product['title'] for p in new_products):
        score += 30

    # Surging keyword bonus
    for keyword in keyword_surges:
        if keyword in title_lower:
            score += 20

    # Low MOQ bonus (supplier expects high demand)
    try:
        moq = int(''.join(filter(str.isdigit,
                  product.get('minOrder', '999'))))
        if moq <= 24:
            score += 15
        elif moq <= 60:
            score += 10
    except ValueError:
        pass

    # Sweet-spot price range for impulse buys (~$1.50-$15 USD)
    try:
        price = float(product['price'].replace('¥','')
                      .split('-')[0].strip())
        if 10 <= price <= 100:  # ~$1.50-$15 USD
            score += 10
    except (ValueError, AttributeError):
        pass

    return min(score, 100)

# Run the full pipeline
current, previous = load_snapshots('week_12.json', 'week_11.json')
new_prods = detect_new_products(current, previous)
kw_surges = detect_keyword_surges(current, previous)
price_drops = detect_price_drops(current, previous)

# Score all current products
scored = []
for product in current:
    score = calculate_trend_score(product, kw_surges, new_prods)
    if score >= 40:  # Only show high-potential items
        scored.append({**product, 'trend_score': score})

scored.sort(key=lambda x: x['trend_score'], reverse=True)

print(f"\n🔥 Top Trending Products ({len(scored)} found):\n")
for p in scored[:10]:
    print(f"  [{p['trend_score']}] {p['title']}")
    print(f"       Price: {p['price']} | MOQ: {p.get('minOrder', 'N/A')}")

Step 4: Validate with Cross-Platform Data

Before committing to a product, validate the trend signal:

# Cross-reference with Google Trends (conceptual)
validation_checklist = {
    "google_trends": "Search the product keyword — is it rising?",
    "amazon": "Check if similar products exist — low competition = opportunity",
    "tiktok": "Search the product on TikTok — viral videos = demand signal",
    "aliexpress": "Check AliExpress — if NOT there yet, you're early",
}

The ideal scenario: a product shows up as new on Yiwugo, has surging keywords, appears on TikTok with growing views, but has few Amazon listings. That's your window.
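That "window" logic can be sketched as a simple gate over the signals you gather during validation. The field names here are my own, not part of any API:

```python
def is_opportunity_window(signals):
    """Return True when all four cross-platform signals line up."""
    return (
        signals["new_on_yiwugo"]            # appeared in this week's scrape
        and signals["keyword_surging"]       # flagged by detect_keyword_surges
        and signals["tiktok_views_rising"]   # manual TikTok check
        and signals["amazon_listings"] < 10  # low retail competition
    )

signals = {
    "new_on_yiwugo": True,
    "keyword_surging": True,
    "tiktok_views_rising": True,
    "amazon_listings": 4,
}
print(is_opportunity_window(signals))  # True
```

The `< 10` listings threshold is an arbitrary placeholder; tune it to your niche.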

Step 5: Automate the Whole Thing

Set up a weekly automated pipeline:

  1. Monday: Apify scraper runs across 5 categories
  2. Tuesday: Python script compares with last week's data, generates trend report
  3. Wednesday: You review the top 10 scored products and validate manually
# Schedule weekly scrape on Apify (cron: every Monday 6 AM UTC)
# Then download results via API:
curl "https://api.apify.com/v2/datasets/{DATASET_ID}/items?format=json" \
  -o "data/week_$(date +%V).json"

# Run trend analysis
# (note: `date -v-7d` is BSD/macOS syntax; on Linux use `date -d '7 days ago'`)
python analyze_trends.py \
  --current "data/week_$(date +%V).json" \
  --previous "data/week_$(date -v-7d +%V).json" \
  --output "reports/trends_week_$(date +%V).md"
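If the shell date arithmetic feels brittle (week numbers reset every January), the same filename bookkeeping can be done in Python with ISO weeks. A sketch, assuming the `data/week_NN.json` naming convention from the snippet above:

```python
from datetime import date, timedelta

def snapshot_filenames(today=None):
    """Return (current, previous) snapshot paths using ISO week numbers."""
    today = today or date.today()
    last_week = today - timedelta(days=7)
    curr = f"data/week_{today.isocalendar()[1]:02d}.json"
    prev = f"data/week_{last_week.isocalendar()[1]:02d}.json"
    return curr, prev

print(snapshot_filenames(date(2026, 3, 18)))
# ('data/week_12.json', 'data/week_11.json')
```

For year boundaries you'd also want the ISO year (`isocalendar()[0]`) in the filename, but this covers the common case.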

Real-World Example: What This Looks Like

Here's what a trend detection report might look like:

🔥 Trend Report — Week 12, 2026

📈 Surging Keywords:
  "magnetic" — 127 products (+68% vs last week)
  "silicone"  — 89 products (+52%)
  "solar"     — 64 products (+45%)

🆕 New Product Clusters:
  - Magnetic phone mounts (23 new listings)
  - Silicone kitchen utensil sets (18 new listings)
  - Solar-powered garden lights (15 new listings)

💰 Price Drops (scaling signals):
  - LED strip light controllers: ¥12 → ¥8 (-33%)
  - Reusable silicone bags: ¥5 → ¥3.5 (-30%)

🏆 Top Scored Products:
  [85] Magnetic Wireless Car Charger Mount
  [75] Silicone Collapsible Water Bottle
  [70] Solar LED Garden Path Lights (Set of 6)
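A report like the one above can be generated straight from the `scored` list produced in Step 3. A minimal sketch that only covers the "Top Scored Products" section:

```python
def render_report(scored, week, top_n=10):
    """Render the top scored products as a plain-text trend report."""
    lines = [f"🔥 Trend Report, Week {week}", "", "🏆 Top Scored Products:"]
    for p in scored[:top_n]:
        lines.append(f"  [{p['trend_score']}] {p['title']}")
    return "\n".join(lines)

scored = [
    {"trend_score": 85, "title": "Magnetic Wireless Car Charger Mount"},
    {"trend_score": 75, "title": "Silicone Collapsible Water Bottle"},
]
print(render_report(scored, week=12))
```

Writing the result to `reports/trends_week_NN.md` slots it into the weekly automation from Step 5.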

Key Takeaways

  • Wholesale data is a leading indicator — trends appear on Yiwugo before retail platforms
  • Track week-over-week changes — new listings, keyword surges, and price drops all signal demand
  • Score and filter — not every new product is a trend; use multiple signals
  • Validate before committing — cross-reference with Google Trends, TikTok, and Amazon
  • Automate the pipeline — consistency beats one-time analysis

The sellers who win aren't the ones with the best products — they're the ones who find them first.


Need structured Yiwugo data for your own product research? Check out the Yiwugo Scraper on Apify Store — it handles pagination, anti-bot protection, and outputs clean JSON ready for analysis.

📚 Related: How to Extract Product Images from Yiwugo.com | How to Monitor Yiwugo Product Prices Automatically | Scraping Chinese E-commerce Sites

📦 Also check out: DHgate Scraper — Extract DHgate product data for dropshipping research.
