Robert N. Gutierrez

Dynamic Selector Fallbacks: How to Scrape E-commerce Sites That Change Frequently

The most frustrating moment for any web scraping developer is waking up to a broken scraper. You might spend hours mapping out the perfect selectors for a site like Ulta.com, only for their front-end team to push a minor UI update that renames .Price-v1 to .Price-v2. Suddenly, your database fills with null values and your price alerts go silent.

In e-commerce, layouts change constantly. Relying on a single, rigid CSS selector leads to high maintenance costs and data gaps. To build professional data pipelines, we need to move away from fragile scraping and toward a Dynamic Selector Fallback strategy.

This guide explores how to implement robust fallbacks and proactive monitoring, using the Ulta.com-Scrapers repository as a real-world case study.

The Problem: Fragility in Generated Code

Most scrapers fail silently. They don't crash with a loud error; they simply return empty strings or zeros because a specific class name no longer exists in the DOM.

Consider this common pattern found in the initial version of node/cheerio-axios/product_data/scraper/ulta_scraper_product_data_v1.js:

```javascript
// Rigid selector example
const valText = $(".ReviewStars__Content").first().text().trim();
if (valText) aggRating.ratingValue = parseFloat(valText) || 0.0;
```

This code is vulnerable. If Ulta’s developers rename .ReviewStars__Content to .ReviewStars__Value or move the rating into a data attribute, this line returns 0.0. You won't know it's broken until you realize your average rating report has plummeted. This cycle of fixing and breaking is exhausting and prevents you from scaling.

Strategy 1: The Priority List

The first step toward reliability is acknowledging that there is often more than one way to find a piece of data. We can define a Priority List, which is an array of potential selectors for each field.

We start by creating a helper function, extractWithFallback, which iterates through candidates until it finds a valid result.

```javascript
/**
 * Attempts to extract text using a list of selectors.
 * Returns the text and the index of the selector that worked.
 */
function extractWithFallback($, selectors) {
    for (let i = 0; i < selectors.length; i++) {
        const value = $(selectors[i]).first().text().trim();
        if (value && value.length > 0) {
            return { value, index: i };
        }
    }
    return { value: null, index: -1 };
}
```

Why this works

Using this approach for critical fields like price or product name creates a safety net:

  1. Primary Selector: The most specific, current class name (e.g., .ProductPricing__price).
  2. Secondary Selector: Older known classes or broader containers (e.g., .price, .pal-c-Price).
  3. Metadata: E-commerce sites often include data in <meta> tags or JSON-LD scripts that change less frequently than the visual UI.
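Combined, these tiers can be expressed as plain data plus a tiny resolver. This is a sketch: the selector names and the meta property are illustrative assumptions, not confirmed Ulta markup, and `lookup` stands in for whatever query function your scraper uses (a Cheerio wrapper, for instance):

```javascript
// Illustrative priority list for a price field. These selector names are
// assumptions for the example, not verified against Ulta's current markup.
const PRICE_SELECTORS = [
    ".ProductPricing__price",               // 1. primary: current, most specific
    ".price",                               // 2. secondary: older / broader
    ".pal-c-Price",
    "meta[property='product:price:amount']" // 3. metadata: changes least often
];

// Generic resolver: `lookup` maps a selector string to extracted text
// (e.g. a Cheerio query reading .text(), or a meta tag's content attribute).
function resolveFirst(selectors, lookup) {
    for (const sel of selectors) {
        const value = (lookup(sel) || "").trim();
        if (value) return { value, selector: sel };
    }
    return { value: null, selector: null };
}
```

Because `resolveFirst` only depends on the `lookup` callback, the same priority lists work whether you scrape with Cheerio, a headless browser, or plain regex.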

Strategy 2: The "Hail Mary" (Text-Based Matching)

When specific class names fail entirely, you need a backup plan. This involves looking for data based on its structure or content rather than its CSS class.

For example, if you can't find the price via a class, search for elements containing a currency symbol or use regular expressions on the entire page body.

```javascript
// A "Hail Mary" approach using text-based matching in Cheerio
const hailMaryPrice = $("span:contains('$')").first().text();

// Or using Regex on a broader text block
const bodyText = $("body").text();
const priceMatch = bodyText.match(/\$\s?(\d+[.,]\d{2})/);
const price = priceMatch ? priceMatch[1] : null;
```

Note: These methods can produce false positives, such as accidentally grabbing the price of a related product. They should always be the last item in your fallback array.
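One way to soften those false positives is to validate the shape of whatever the Hail Mary finds before trusting it. A minimal sketch, assuming US-dollar formatting:

```javascript
// Guard for "Hail Mary" results: accept only text shaped like a standalone
// price (e.g. "$12.99" or "$1,299.00"), rejecting sentences that merely
// mention a dollar amount.
function looksLikePrice(text) {
    return /^\s*\$\s?\d{1,3}(,\d{3})*\.\d{2}\s*$/.test(text);
}
```

With this guard, "$12.99" passes while "Save $5 on your next order!" is rejected, so a promotional banner is less likely to pollute your price column.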

Strategy 3: The Deprecation Warning System

The secret to low-maintenance scraping is knowing when a scraper is starting to fail. You can turn your fallback system into a proactive monitoring tool.

If the primary selector fails but a secondary selector succeeds, the scraper is technically still working, but it’s on life support. Log this as a warning to stay ahead of the curve.

```javascript
const priceResult = extractWithFallback($, [
    ".pal-c-Price--salePrice", // Primary
    ".Price",                  // Secondary
    ".product-price"           // Legacy
]);

if (priceResult.index > 0) {
    console.warn(`DEPRECATION WARNING: Primary selector failed for Price. Fallback index ${priceResult.index} used at ${url}`);
    // This could also be sent to a monitoring service like Sentry or Slack
}
```

This transforms maintenance from an emergency into a routine task. You can update the selectors during normal business hours rather than scrambling when the data stops flowing.
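Routing the warning somewhere more visible than stdout is a small step. As a sketch (the webhook URL is a placeholder; Slack incoming webhooks accept a JSON body with a `text` field):

```javascript
// Build a structured alert from a fallback event. Pure function, easy to test.
function buildFallbackAlert(field, selectors, usedIndex, url) {
    return {
        text: `DEPRECATION WARNING: ${field} primary selector "${selectors[0]}" failed; ` +
              `fallback "${selectors[usedIndex]}" (index ${usedIndex}) used at ${url}`
    };
}

// Deliver the alert to a monitoring webhook. Node 18+ ships a global fetch;
// older versions need a package like node-fetch.
async function reportFallback(alert, webhookUrl) {
    await fetch(webhookUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(alert)
    });
}
```

Keeping the payload builder separate from the delivery call means the interesting part stays testable without any network access.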

Tutorial: Refactoring the Ulta Scraper

Let's apply these concepts to the extractData function in the Ulta repository. We will refactor the rating and price logic to be significantly more robust.

Step 1: Define the Helper

Add a safe extraction helper that handles the logic of checking multiple selectors and logging warnings.

```javascript
function safeExtract($, selectors, fieldName) {
    const result = extractWithFallback($, selectors);
    if (result.index > 0) {
        console.warn(`[Monitoring] ${fieldName} used fallback selector: ${selectors[result.index]}`);
    }
    return result.value;
}
```

Step 2: Refactor the Rating Logic

The original code relies heavily on .ReviewStars__Content. We will expand it to check JSON-LD first, as that is the most stable source, then fall back through multiple CSS classes.

```javascript
function extractRating($, jsonData) {
    const ratingObj = { ratingValue: 0.0, reviewCount: 0 };

    // 1. Try JSON-LD (Most stable)
    if (jsonData?.aggregateRating) {
        ratingObj.ratingValue = parseFloat(jsonData.aggregateRating.ratingValue) || 0;
        ratingObj.reviewCount = parseInt(jsonData.aggregateRating.reviewCount) || 0;
        if (ratingObj.ratingValue > 0) return ratingObj;
    }

    // 2. Fallback to CSS Selectors
    const valText = safeExtract($, [
        ".ReviewStars__Content",
        ".pal-c-Ratings__numericalRatingAfter",
        ".Text-ds--body-3"
    ], "RatingValue");

    ratingObj.ratingValue = parseFloat(valText) || 0.0;
    return ratingObj;
}
```
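Both refactored functions take a jsonData argument. In a Cheerio scraper you would typically read it from $('script[type="application/ld+json"]'); here is a dependency-free sketch that pulls the same data out of raw HTML with a regex:

```javascript
// Sketch: extract Product JSON-LD from raw HTML without a DOM library.
// In a Cheerio context you could instead read
// $('script[type="application/ld+json"]').html() and JSON.parse it.
function extractJsonLd(html) {
    const blocks = [];
    const re = /<script[^>]*type=["']application\/ld\+json["'][^>]*>([\s\S]*?)<\/script>/gi;
    let match;
    while ((match = re.exec(html)) !== null) {
        try {
            blocks.push(JSON.parse(match[1]));
        } catch (err) {
            // Malformed JSON-LD happens in the wild; skip it rather than crash.
        }
    }
    // Prefer the block that actually describes the product.
    return blocks.find(b => b["@type"] === "Product") || null;
}
```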

Step 3: Refactor the Price Logic

Price is the most critical field. We’ll combine JSON-LD, specific classes, and a regex match as the final fallback.

```javascript
function extractPrice($, jsonData) {
    // Priority 1: Schema.org JSON-LD
    if (jsonData?.offers?.price) {
        return parseFloat(jsonData.offers.price);
    }

    // Priority 2: Known CSS Classes
    const priceText = safeExtract($, [
        ".pal-c-Price--PDP span",
        ".Price",
        ".ProductPricing"
    ], "Price");

    if (priceText) {
        const match = priceText.match(/[\d,.]+/);
        if (match) return parseFloat(match[0].replace(/,/g, ''));
    }

    // Priority 3: Hail Mary Regex
    const bodyMatch = $.html().match(/"price"\s*:\s*"?(\d+\.\d{2})"?/);
    return bodyMatch ? parseFloat(bodyMatch[1]) : 0.0;
}
```

To Wrap Up

By implementing dynamic selector fallbacks, you trade a few milliseconds of execution time for a massive increase in scraper longevity. Instead of your pipeline breaking the moment a developer changes a class name, your system adapts and alerts you to the change.

Key Takeaways:

  • Use Priority Lists: Always have a Plan B and Plan C for critical data points.
  • Use JSON-LD: Metadata scripts are often more stable than the visual DOM.
  • Log Fallbacks: Treat a successful fallback as a warning to schedule maintenance before a total failure occurs.
  • Stay Defensive: Use helper functions like safeExtract to keep your main scraping logic clean.

For the full suite of Ulta scraping tools, visit the Ulta.com-Scrapers GitHub repository. To scale your operations, use a ScrapeOps API key to manage proxy rotation and avoid blocks.
