
Mohammad Waseem


Overcoming IP Bans When Scraping with TypeScript on a Zero Budget

Web scraping is a common technique for collecting data from websites, but many sites actively implement measures like IP banning to thwart aggressive scraping. For a security researcher working on a zero budget, it's crucial to adopt strategies that are both effective and free. In this guide, we'll explore how to mitigate IP bans while scraping with TypeScript, focusing on practical techniques that require no financial investment.

Understanding the Challenge

IP bans typically occur when a server detects unusual or excessive requests from a single IP address. To avoid this, we need to mimic human-like browsing behavior, distribute requests intelligently, and employ techniques that do not rely on paid proxies or services.

Key Strategies

1. Rotate User Agents

Many websites block requests based on unusual or missing user-agent headers. Randomizing user agents can help emulate diverse browsers and devices.

function getRandomUserAgent(): string {
  // Use complete, realistic user-agent strings; a truncated UA (missing the
  // browser token) can itself look suspicious to anti-bot checks.
  const userAgents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15',
    'Mozilla/5.0 (Linux; Android 10; SM-G975F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36',
    // Add more user agents as needed
  ];
  const index = Math.floor(Math.random() * userAgents.length);
  return userAgents[index];
}

// Usage in fetch:
const response = await fetch('https://example.com', {
  headers: {
    'User-Agent': getRandomUserAgent(),
  },
});

2. Implement Request Throttling

To appear more natural, introduce delays between requests. A simple randomized delay can avoid patterns that trigger bans.

function delay(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function scrapeWithDelay(urls: string[]) {
  for (const url of urls) {
    const waitTime = Math.random() * 3000 + 2000; // 2-5 seconds
    await delay(waitTime);
    const response = await fetch(url, {
      headers: { 'User-Agent': getRandomUserAgent() },
    });
    const data = await response.text();
    // Process data
  }
}
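If a site starts answering with HTTP 429 (Too Many Requests), backing off and retrying later is usually better than pressing on. Here is a minimal sketch that builds on the delay and getRandomUserAgent helpers above; the retry count and base delay are arbitrary starting points, not required values.

async function fetchWithBackoff(url: string, maxRetries = 3): Promise<string> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, {
      headers: { 'User-Agent': getRandomUserAgent() },
    });
    if (response.status !== 429) {
      return response.text();
    }
    // Wait longer on each retry: 2s, 4s, 8s, ...
    await delay(2000 * 2 ** attempt);
  }
  throw new Error(`Still rate-limited after ${maxRetries} retries: ${url}`);
}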

3. Change IPs Using Free Network Variations

Without paid proxies, you can still change your public IP by cycling through the networks you already have access to: toggling a mobile hotspot, reconnecting to a different Wi-Fi network, or using a free VPN tier. Automating this in code is complex, but even manually switching networks gives you a fresh IP address when a ban hits. A quick way to confirm that a switch actually changed your public IP is sketched below.
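This sketch simply checks your public IP before and after a network change so you can verify the switch worked. It uses api.ipify.org, a free IP-echo service; any similar endpoint would do.

async function getPublicIp(): Promise<string> {
  // Ask a free IP-echo service which address our requests appear to come from
  const response = await fetch('https://api.ipify.org?format=json');
  const { ip } = (await response.json()) as { ip: string };
  return ip;
}

// Usage: compare the IP before and after switching networks
const before = await getPublicIp();
// ... toggle hotspot / reconnect Wi-Fi here ...
const after = await getPublicIp();
console.log(before === after ? 'IP unchanged - try another network' : `New IP: ${after}`);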

4. Leverage Free Public Proxies

Free public proxies are unreliable, slow, and often insecure, so never send credentials or sensitive traffic through them. Used sparingly and rotated, though, they can help distribute your requests across different source IPs.

import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';

async function fetchViaProxy(url: string, proxy: string): Promise<string> {
  const response = await fetch(url, {
    headers: { 'User-Agent': getRandomUserAgent() },
    // Route the request through the proxy. Node's built-in fetch does not
    // accept an agent option, so node-fetch is used here.
    agent: new HttpsProxyAgent(proxy),
  });
  return response.text();
}

Note: The snippet above relies on node-fetch together with https-proxy-agent, because Node's built-in fetch does not accept an agent option:

npm install node-fetch https-proxy-agent
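To put fetchViaProxy to work, a simple pattern is to keep a small list of free proxies and fall back to the next one when a request fails. The addresses below are placeholders from documentation IP ranges, not real proxies; swap in entries from whatever free proxy list you use.

const freeProxies = [
  'http://203.0.113.10:8080',  // placeholder - replace with a real free proxy
  'http://198.51.100.23:3128', // placeholder - replace with a real free proxy
];

async function fetchWithProxyRotation(url: string): Promise<string> {
  for (const proxy of freeProxies) {
    try {
      // Reuse fetchViaProxy from above; move on if this proxy is dead or blocked
      return await fetchViaProxy(url, proxy);
    } catch (err) {
      console.warn(`Proxy ${proxy} failed, trying the next one`, err);
    }
  }
  throw new Error(`All proxies failed for ${url}`);
}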

5. Mimic Browser Behavior

A headless browser such as Puppeteer adds realism by executing JavaScript and emulating real user interactions. It needs more CPU and memory than plain HTTP requests, but it is free and open source.

import puppeteer from 'puppeteer';

async function scrapeWithPuppeteer(url: string) {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.setUserAgent(getRandomUserAgent());
  await page.goto(url, { waitUntil: 'networkidle2' });
  const content = await page.content();
  await browser.close();
  return content;
}
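One way to combine Puppeteer with the throttling from earlier is to reuse a single browser instance across a list of URLs and pause between page loads. This is a sketch that assumes the delay and getRandomUserAgent helpers defined above; reusing one browser keeps memory usage lower than launching a fresh one per page.

async function scrapeManyWithPuppeteer(urls: string[]): Promise<string[]> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  const results: string[] = [];
  for (const url of urls) {
    await page.setUserAgent(getRandomUserAgent());
    await page.goto(url, { waitUntil: 'networkidle2' });
    results.push(await page.content());
    await delay(Math.random() * 3000 + 2000); // 2-5 second pause, as before
  }
  await browser.close();
  return results;
}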

Final Thoughts

Combining user-agent rotation, request throttling, network variation, limited proxy use, and browser emulation offers a robust zero-budget approach to avoid IP bans while scraping. Remember to respect website terms of service and robots.txt. Over-aggressive scraping, even with these techniques, can still lead to bans or legal issues. Use these strategies responsibly and ethically.
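For the robots.txt part, a rough pre-check is easy to add. The sketch below only looks for a blanket Disallow: / under User-agent: *, which is far from a full robots.txt parser, but it catches the clearest "please don't crawl this site" signal.

async function isScrapingDiscouraged(origin: string): Promise<boolean> {
  const response = await fetch(`${origin}/robots.txt`);
  if (!response.ok) return false; // no robots.txt means no explicit signal
  const body = await response.text();
  // Very rough check: a wildcard group that disallows everything
  return /User-agent:\s*\*[\s\S]*?Disallow:\s*\/\s*$/im.test(body);
}

// Usage:
if (await isScrapingDiscouraged('https://example.com')) {
  console.log('robots.txt disallows crawling; skipping this site.');
}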

By employing these methods judiciously, security researchers can gather valuable data without incurring costs, maintaining both effectiveness and integrity in their scraping operations.




