Mohammad Waseem

Mitigating IP Bans During High Traffic Web Scraping with TypeScript

Web scraping at scale, especially during high traffic events, poses significant challenges related to IP banning. Many websites implement anti-scraping measures, including IP rate limiting and banning, which can derail data collection efforts. In this post, we explore a technical approach using TypeScript to systematically avoid IP bans by employing strategies like IP rotation, request throttling, and adaptive behavior.

Understanding the Problem

During high traffic events, websites ramp up their defenses to block or limit scrapers. IP bans are common when the server detects unusual activity, such as too many requests from a single IP address. To maintain continuous access, scrapers need to mimic human browsing patterns and distribute requests across multiple IP addresses.

Strategies to Avoid IP Banning

1. IP Rotation

Using a pool of proxies or VPN endpoints, requests can be distributed across different IP addresses. This disguises the source of traffic and reduces the likelihood of bans.

2. Request Throttling and Random Delays

Adding random delays between requests mimics human browsing speed, preventing the server from flagging rapid request patterns.
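For example, a small helper can encapsulate the randomized wait so every request path throttles the same way (the bounds below are purely illustrative):

// Pause for a random duration between minMs and maxMs to avoid a fixed request cadence.
function randomDelay(minMs: number, maxMs: number): Promise<void> {
  const delay = minMs + Math.random() * (maxMs - minMs);
  return new Promise(resolve => setTimeout(resolve, delay));
}

// Usage: await randomDelay(1000, 3000); between consecutive requests.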

3. Adaptive Request Patterns

Monitoring responses and adjusting request frequency based on server feedback helps avoid detection. For example, if a '429 Too Many Requests' status is received, the scraper should slow down.
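One common way to implement this (sketched below with axios; the retry count and base delay are arbitrary, not recommendations) is to back off exponentially each time a 429 is received:

import axios from 'axios';

// Fetch a URL, doubling the wait each time the server responds with 429 Too Many Requests.
async function fetchWithBackoff(url: string, maxRetries = 5, baseDelayMs = 2000): Promise<string | null> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      // validateStatus: () => true keeps axios from throwing on non-2xx, so we can inspect the status ourselves.
      const response = await axios.get(url, { validateStatus: () => true });
      if (response.status === 429) {
        const delay = baseDelayMs * Math.pow(2, attempt);
        console.warn(`429 received, waiting ${delay} ms before retrying`);
        await new Promise(res => setTimeout(res, delay));
        continue;
      }
      return response.data;
    } catch (error) {
      console.error(`Attempt ${attempt + 1} failed:`, error instanceof Error ? error.message : error);
    }
  }
  return null; // give up after maxRetries attempts
}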

4. Using Headless Browsers with Human-like Behavior

In some cases, employing headless browsers with behavior that emulates real users adds an extra layer of disguise.
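A minimal sketch with Puppeteer (assuming it is installed; the timings and scroll distance below are placeholders) might pause and scroll the way a real visitor would before reading the page content:

import puppeteer from 'puppeteer';

// Load a page in a headless browser, pausing and scrolling like a human before extracting the HTML.
async function scrapeLikeAHuman(url: string): Promise<string> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.setUserAgent('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36');

  await page.goto(url, { waitUntil: 'networkidle2' });

  // Pause as a reader would, then scroll partway down the page.
  await new Promise(res => setTimeout(res, 1500 + Math.random() * 2000));
  await page.evaluate(() => window.scrollBy(0, window.innerHeight * 0.7));
  await new Promise(res => setTimeout(res, 1000 + Math.random() * 1500));

  const html = await page.content();
  await browser.close();
  return html;
}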

Implementation in TypeScript

Below is a simplified example demonstrating some of these strategies in TypeScript.

import axios, { AxiosRequestConfig } from 'axios';
import { HttpsProxyAgent } from 'https-proxy-agent';

// List of proxy URLs
const proxies = [
  'http://proxy1.example.com:8080',
  'http://proxy2.example.com:8080',
  'http://proxy3.example.com:8080'
];

// Function to select a random proxy
function getRandomProxy(): string {
  const index = Math.floor(Math.random() * proxies.length);
  return proxies[index];
}

// Function to perform a request with IP rotation and delays
async function scrapeWithRotation(url: string): Promise<void> {
  for (let i = 0; i < 100; i++) { // example iteration
    const proxyUrl = getRandomProxy();
    const agent = new HttpsProxyAgent(proxyUrl);

    const config: AxiosRequestConfig = {
      url,
      method: 'GET',
      httpsAgent: agent,
      headers: {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36'
      },
      timeout: 10000
    };

    try {
      const response = await axios(config);
      console.log(`Request ${i + 1} succeeded (status ${response.status}) via proxy ${proxyUrl}`);
    } catch (error) {
      // Under strict TypeScript, `error` is typed `unknown`, so narrow it before reading `.message`
      console.error(`Request ${i + 1} failed:`, error instanceof Error ? error.message : error);
    }

    // Random delay between 1-3 seconds
    const delay = Math.random() * 2000 + 1000;
    await new Promise(res => setTimeout(res, delay));
  }
}

// Usage
scrapeWithRotation('https://targetwebsite.com/data');

This script demonstrates IP rotation by selecting a random proxy from a pool for each request, incorporates random delays, and sets a User-Agent to emulate a genuine browser.

Further Enhancements

  • Implement response-based throttling, increasing delay after server signals (e.g., 429 responses).
  • Incorporate headless browser automation with Puppeteer or Playwright for higher stealth.
  • Use a proxy management service that automatically provides new IPs when current ones are blocked (a minimal sketch of the bookkeeping involved follows this list).
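A full proxy management service is best bought rather than built, but its core bookkeeping can be sketched in-process: count failures per proxy and stop handing out the ones that look blocked (the failure threshold below is arbitrary):

// Minimal in-process proxy pool: proxies that fail repeatedly are retired from rotation.
class ProxyPool {
  private failures = new Map<string, number>();

  constructor(private proxies: string[], private maxFailures = 3) {}

  // Pick a random proxy that has not yet exceeded the failure threshold.
  pick(): string | undefined {
    const healthy = this.proxies.filter(
      p => (this.failures.get(p) ?? 0) < this.maxFailures
    );
    if (healthy.length === 0) return undefined; // pool exhausted
    return healthy[Math.floor(Math.random() * healthy.length)];
  }

  // Record a failed request (e.g. a ban or timeout) against a proxy.
  reportFailure(proxy: string): void {
    this.failures.set(proxy, (this.failures.get(proxy) ?? 0) + 1);
  }
}

The scrapeWithRotation loop above could call pool.reportFailure(proxyUrl) in its catch block instead of only logging the error, so banned proxies drop out of rotation automatically.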

Final Thoughts

Scraping successfully during high-traffic events requires combining these techniques and adapting to server behavior. Emulating human browsing patterns, rotating IP addresses, and respecting server responses are all vital to minimizing bans. TypeScript, with its type safety and rich ecosystem, provides a solid foundation for building resilient and scalable scraping tools.

Ensure your scraping activities comply with legal and ethical considerations, and always respect robots.txt and website terms of service.


🛠️ QA Tip

To test this safely without using real user data, I use TempoMail USA.
