Jonathan D. Fisher
Tracking Search Rankings & SEO on Depop

Visibility drives sales on Depop. For high-volume sellers and fashion brands, slipping from the first row of search results to the tenth is the difference between a quick sale and a stale listing. Because the Depop algorithm prioritizes fresh, relevant content, your search position changes constantly.

Monitoring these positions manually is tedious, especially if you manage dozens of items across multiple keywords. This guide demonstrates how to build an automated Depop SEO tool using Python and Selenium. We will use the open-source Depop.com-Scrapers repository to extract search data and implement logic to track exactly where your products rank over time.

Understanding Depop’s Search Structure

Before writing code, we need to look at the technical layout of a Depop search page. When you search for "vintage nike sweatshirt," Depop returns a grid of products.

Technically, these results are an ordered list of product objects. A product's rank is its index in that list, plus one to make it human-readable. For example, the first item in the results array has an index of 0 and a rank of 1.

To track rankings reliably, use a unique identifier. Tracking by title is unreliable because sellers often use similar titles or update them for SEO. Instead, use the productId, a unique string assigned by Depop that never changes. The logic follows these steps:

  1. Send a search query to Depop.
  2. Extract the list of product IDs from the results.
  3. Find the index of your TARGET_PRODUCT_ID.
  4. Log the rank.
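Stripped of the scraping details, the core of this logic is just an index lookup in a list. A minimal sketch, using made-up product IDs:

```python
def rank_of(product_id, results):
    """Return the 1-based rank of product_id, or 0 if it is not listed."""
    try:
        return results.index(product_id) + 1  # index 0 -> rank 1
    except ValueError:
        return 0

# Product IDs in search-result order (made up for illustration)
search_results = ["98211", "55210", "12345678", "70423"]

print(rank_of("12345678", search_results))  # -> 3
print(rank_of("00000000", search_results))  # -> 0
```

Everything else in this guide is about reliably producing that `results` list from a live Depop search page.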

Step 1: Setting Up the Search Scraper

We’ll use the Selenium implementation from the Depop.com-Scrapers repository, as it handles Depop’s dynamic content effectively.

First, clone the repository and install the dependencies:

git clone https://github.com/scraper-bank/Depop.com-Scrapers.git
cd Depop.com-Scrapers/python/selenium
pip install -r requirements.txt

Configuring the ScrapeOps API Key

Depop uses anti-bot measures on their search pages. To avoid blocks or CAPTCHAs, you need proxy rotation. The repository is pre-configured to work with ScrapeOps.

Open product_search/scraper/depop_scraper_product_search_v1.py and find the API_KEY variable. Replace it with your key from the ScrapeOps Dashboard.

# python/selenium/product_search/scraper/depop_scraper_product_search_v1.py
API_KEY = "YOUR_SCRAPEOPS_API_KEY"

This routes your Selenium requests through a residential proxy network, rotating your IP address with every request.

Step 2: Extracting Search Results

The base scraper uses the extract_data function to parse search results into a structured ScrapedData object. This object contains a list of products, each with its own productId, name, and price.

The scraper identifies individual items using CSS selectors:

# Snippet from extract_data in the repository
items = driver.find_elements(By.CSS_SELECTOR, "li.styles_listItem__Uv9lb")

for item in items:
    # Logic to extract href, price, and image
    p_id = href.strip("/").split("/")[-1] if href else ""
    product["productId"] = p_id
    products.append(product)

This provides a clean list of every product visible on the search page.
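The ID extraction in that snippet is worth a closer look: it takes the last path segment of each listing's URL. A quick standalone check of that expression (the URL here is a made-up example of the shape the scraper expects):

```python
# Made-up listing URL for illustration; the real href comes from the
# anchor element of each search-result card
href = "https://www.depop.com/products/12345678/"

# Same expression the scraper uses: strip trailing slashes, then take
# the last path segment as the product ID
p_id = href.strip("/").split("/")[-1] if href else ""
print(p_id)  # -> 12345678
```

Because the ID comes from the URL rather than the title text, it stays stable even when the seller rewrites the listing title.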

Step 3: Implementing the Rank Finder Logic

Next, create a wrapper script to import the scraper, perform a search, and locate your item. Create a new file named rank_tracker.py:

from urllib.parse import quote_plus

from scraper.depop_scraper_product_search_v1 import get_driver, extract_data

# Configuration
TARGET_PRODUCT_ID = "12345678"  # Replace with your Depop Product ID
KEYWORD = "vintage 90s windbreaker"

def get_product_rank(keyword, target_id):
    driver = get_driver()
    # quote_plus handles spaces and special characters in the keyword
    search_url = f"https://www.depop.com/search/?q={quote_plus(keyword)}"

    try:
        driver.get(search_url)
        scraped_result = extract_data(driver, search_url)

        if not scraped_result or not scraped_result.products:
            return -1 # Search failed or no results

        for index, product in enumerate(scraped_result.products):
            if product['productId'] == target_id:
                return index + 1  # Ranks are 1-based

        return 0  # Item not found in the current results
    finally:
        driver.quit()

rank = get_product_rank(KEYWORD, TARGET_PRODUCT_ID)
if rank > 0:
    print(f"Your item is currently ranked: {rank}")
elif rank == 0:
    print("Your item was not found in the results")
else:
    print("Search failed or returned no results")

How it works

  • get_driver(): Initializes the undetected-chromedriver with ScrapeOps proxy settings.
  • extract_data(): Scrapes the page and returns the product list.
  • enumerate(): Loops through the list to find the matching productId.

Step 4: Handling Pagination and Depth

Depop uses infinite scrolling. If your item isn't in the first 30 results, a basic scrape will miss it. You need to tell Selenium to scroll down to increase the search depth.

Modify the logic to include a scroll loop:

import time

from selenium.webdriver.common.by import By

def scroll_to_depth(driver, max_items=100):
    last_height = driver.execute_script("return document.body.scrollHeight")

    while True:
        items = driver.find_elements(By.CSS_SELECTOR, "li.styles_listItem__Uv9lb")
        if len(items) >= max_items:
            break

        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(2)  # Wait for products to load

        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break # Reached the end of results
        last_height = new_height

Integrating this before calling extract_data ensures you check the top 100 items. Checking beyond 200 items is rarely necessary, as click-through rates drop significantly after the first few pages.

Step 5: Automating History

A single rank check is just a snapshot. To see if SEO efforts like refreshing listings or changing tags work, you need historical data. You can store findings in a CSV file:

import csv
import os
from datetime import datetime

def log_rank(keyword, product_id, rank):
    file_exists = os.path.exists('rank_history.csv')

    with open('rank_history.csv', 'a', newline='') as f:
        writer = csv.writer(f)
        if not file_exists:
            writer.writerow(['Date', 'Keyword', 'ProductID', 'Rank'])

        writer.writerow([
            datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
            keyword,
            product_id,
            rank
        ])

# Usage
current_rank = get_product_rank(KEYWORD, TARGET_PRODUCT_ID)
log_rank(KEYWORD, TARGET_PRODUCT_ID, current_rank)

Running this script daily via a cron job creates a dataset that reveals ranking volatility. If a rank drops from 5 to 50 overnight, it’s a clear signal to update the listing or check for new competitors.
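For example, a crontab entry that runs the tracker once a day at 08:00 (the paths here are illustrative and should match your clone of the repository):

```shell
# Run the rank tracker daily at 08:00; adjust paths to your setup
0 8 * * * cd /home/you/Depop.com-Scrapers/python/selenium && python3 rank_tracker.py >> rank_tracker.log 2>&1
```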

Recommended Approaches to Avoid Bans

When building a rank tracker, the main risk is getting your IP flagged for excessive search requests.

  1. Use Proxy Rotation: Search pages are more heavily guarded than product pages. Use ScrapeOps proxy rotation to distribute the load.
  2. Control Frequency: Don't check your rank every 10 minutes. Depop's search index doesn't update that fast. Once or twice a day is sufficient.
  3. Randomize Delays: If you are checking multiple keywords, add time.sleep(random.uniform(5, 15)) between queries to mimic human browsing.
  4. Headless Mode: The repository uses --headless=new by default. This is faster and uses fewer resources. Ensure your User-Agent is set correctly to avoid detection.

To Wrap Up

A custom Depop SEO tool replaces guesswork with data. By combining ScrapeOps scrapers with a rank-finding script, you can detect ranking drops before they impact sales, test which keywords perform best, and monitor competitor movements.

You can expand this by turning your TARGET_PRODUCT_ID into a dictionary to loop through all your top items. You could even integrate a messaging service like Slack or Discord to send an alert whenever an item drops out of the top 10.
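A rough sketch of that alert for Slack, using an incoming webhook. The webhook URL is a placeholder you would create in your own Slack workspace, and the payload uses Slack's standard JSON body with a "text" field:

```python
import json
import urllib.request

# Placeholder: create a real incoming webhook in your Slack workspace
WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def build_alert(keyword, product_id, old_rank, new_rank):
    """Compose the alert text for a ranking drop."""
    return (f"Depop rank alert: product {product_id} dropped from "
            f"{old_rank} to {new_rank} for '{keyword}'")

def send_alert(message, webhook_url=WEBHOOK_URL):
    # Slack incoming webhooks accept a JSON body with a "text" field
    payload = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(
        webhook_url, data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# Example: alert only when the item falls out of the top 10
# if new_rank > 10 >= old_rank:
#     send_alert(build_alert(KEYWORD, TARGET_PRODUCT_ID, old_rank, new_rank))
```

Discord works the same way with its own webhook URL format, so the send_alert function is easy to swap out.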

For the full source code or alternative implementations using Playwright or Node.js, visit the Depop.com-Scrapers repository.
