Web Scraping Google Without Getting Blocked

Introduction

Google provides a wealth of information that has become the backbone of many businesses; directly or indirectly, nearly every business depends on Google. Access to this enormous amount of data can give a company leverage over competitors in its field and an upper hand in its market.

But who gathers this data? Web scrapers do. Web scrapers extract large amounts of information from websites, usually in HTML format. So, whether you are building your own Google scraper for a large company or for a personal project, you can follow these 10 tips to avoid getting blocked while scraping Google. They apply equally well to scraping other websites.

Table Of Contents

  1. IP Rotation
  2. User Agents
  3. HTTP Header Referrer
  4. Make scraping slower
  5. Headless Browser
  6. Scraping Data From Google Cache
  7. Changing your scraping pattern
  8. Avoid Scraping Images
  9. Change in HTML Tags
  10. Captcha Solving

IP Rotation

Using the same IP address for every request puts you at risk of being caught: it is one of the easiest ways to get blocked by Google's anti-scraping mechanism. You can solve this problem by rotating your IPs, i.e. using a new IP for every request. If you want to scrape millions of Google pages, you will need a large pool of proxies to rotate through. Instead of buying proxies, you can try this Google Search API, which uses a proxy cluster of millions of IPs and can keep your IP from being blocked by Google.

Google uses an advanced anti-bot mechanism to block bots from scraping its pages, so you have to use residential or mobile proxies to bypass it. Data center proxies get blocked far more easily than residential ones, so they are not a good long-term option. Serpdog’s Google Search API solves all these problems for developers and lets them scrape millions of Google pages without hindrance.
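
As an illustration, here is a minimal round-robin proxy-rotation sketch in Node.js using axios. The proxy hosts are placeholders; substitute the proxies you actually own or rent.

```javascript
// A minimal proxy-rotation sketch using axios.
// The proxy hosts below are placeholders, not real servers.
const axios = require("axios");

const proxies = [
  { protocol: "http", host: "proxy1.example.com", port: 8080 },
  { protocol: "http", host: "proxy2.example.com", port: 8080 },
  { protocol: "http", host: "proxy3.example.com", port: 8080 },
];

let counter = 0;

async function fetchWithRotatingProxy(url) {
  // Pick the next proxy in round-robin order so no single IP
  // carries every request.
  const proxy = proxies[counter++ % proxies.length];
  const response = await axios.get(url, { proxy });
  return response.data;
}

fetchWithRotatingProxy("https://www.google.com/search?q=coffee")
  .then((html) => console.log(html.length))
  .catch((err) => console.error(err.message));
```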

User Agents

The User-Agent is a request header that identifies the application, operating system, vendor, and version of the requesting client. If your User-Agent does not belong to a major browser, or is not set at all, Google might block you or restrict you from seeing its content.

Using the same User-Agent for every request can get your IP blocked in no time. The solution is simple: before scraping Google, collect a legitimate set of User-Agent strings so that your web crawler or bot looks like a real user. You can also get a list of User-Agents from the NPM library fake-useragent.

You can check your user agent on Google by typing “What is my User-Agent”, or you can check it on: http://www.whatsmyuseragent.com/.
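
Below is a small sketch of rotating User-Agents with axios (assumed here as the HTTP client; any client works). The User-Agent strings are just examples; keep your own list fresh or generate one with a library.

```javascript
// Rotate the User-Agent header on each request.
// The strings below are example browser User-Agents; maintain your own list.
const axios = require("axios");

const userAgents = [
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
  "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
];

function randomUserAgent() {
  return userAgents[Math.floor(Math.random() * userAgents.length)];
}

async function fetchWithRandomUA(url) {
  const response = await axios.get(url, {
    headers: { "User-Agent": randomUserAgent() },
  });
  return response.data;
}
```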

HTTP Header Referrer

The Referer header (note the standard misspelling of “referrer” in the HTTP specification) is an HTTP request header that tells a website which site you came from. It can be a great header to send if you want to bypass anti-scraping mechanisms.

“Referer”: “https://www.google.com/”

It makes the bot look more like a natural user. If you are scraping google.com and set the referer to google.com, you are effectively telling the website, “Hey, I just came from visiting google.com,” which portrays your bot as an organic user in front of Google.
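
For example, with axios (an assumed choice of HTTP client), setting the Referer alongside a browser-like User-Agent looks like this:

```javascript
// Send a Referer header so the request looks like it came from a normal
// Google visit, together with a browser-like User-Agent.
const axios = require("axios");

async function fetchWithReferer(url) {
  const response = await axios.get(url, {
    headers: {
      Referer: "https://www.google.com/",
      "User-Agent":
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    },
  });
  return response.data;
}
```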

Make scraping slower

Google's anti-bot mechanism can easily detect the speed at which your bot scrapes the website, since bots work at an inhuman pace. Overloading a site with requests might even crash it, which benefits no one.

So, adding random delays by making your bot sleep for a short period (for example, between 2 and 6 seconds) helps your scraper look like an organic user. Capping your request rate within a defined limit ensures that the website is not crashed by mass requests and that you can keep scraping.
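
As a simple sketch, a random 2-6 second delay between requests in Node.js might look like this (the URL list is a placeholder for whatever pages you need):

```javascript
// Sleep a random 2-6 seconds between requests.
const axios = require("axios");

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function scrapeSlowly(urls) {
  const results = [];
  for (const url of urls) {
    const response = await axios.get(url);
    results.push(response.data);
    // Random delay between 2000 and 6000 milliseconds.
    const delay = 2000 + Math.random() * 4000;
    await sleep(delay);
  }
  return results;
}
```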

Another way to scrape in an organized manner is to put the extraction on a schedule, but vary it slightly. For example, if you have been starting your scrape at 1:00 a.m. for several days, shift the start time 30 minutes earlier or later, and then make requests at a balanced pace. This variation can help you bypass Google's anti-bot mechanism.

Headless Browser

Google displays content based on your User-Agent, and serves advanced featured snippets to newer browser versions. These featured snippets depend heavily on JavaScript, and a simple HTTP request cannot return content rendered by JS. Google can also inspect web fonts, extensions, and browser cookies to check whether a visitor is organic. This is where headless browsers come into play.

To scrape content rendered by JavaScript, you can use tools like Puppeteer or Selenium, which can run in headless mode and extract the dynamic content of a page. Their main disadvantage is that they are very CPU-intensive and can sometimes crash. Alternatively, you can use a service like Browsercloud, which runs the browser on its own servers instead of adding load to yours.
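
As a rough sketch, rendering a page with Puppeteer in headless mode and grabbing the fully rendered HTML might look like this (the query URL is only an example):

```javascript
// Launch a headless browser, let the page run its JavaScript,
// then read the rendered HTML.
const puppeteer = require("puppeteer");

async function renderPage(url) {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    // Wait until network activity settles so JS-rendered content is present.
    await page.goto(url, { waitUntil: "networkidle2" });
    return await page.content();
  } finally {
    await browser.close();
  }
}

renderPage("https://www.google.com/search?q=coffee")
  .then((html) => console.log(html.length))
  .catch((err) => console.error(err.message));
```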

Scraping Data From Google Cache

Google also keeps a cached copy of websites, so you can try scraping the cached pages instead. In that case you make requests to the cached copy rather than the live website, which can help you avoid getting blocked. Just prepend “http://webcache.googleusercontent.com/search?q=cache:” to the Google URL you want, e.g. “https://www.google.com/search?q=footabll&gl=us&tbm=isch”, giving “https://webcache.googleusercontent.com/search?q=cache:https://www.google.com/search?q=footabll&gl=us&tbm=isch”.

This is not a complete solution: the cache returns only part of the data and does not give you direct access to the live website. Keep in mind that this technique is only useful for non-sensitive data that does not change frequently.
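
Building the cache URL is just string concatenation; for example:

```javascript
// Prefix a URL with the Google cache endpoint, as described above.
const CACHE_PREFIX = "https://webcache.googleusercontent.com/search?q=cache:";

function toCacheUrl(url) {
  return CACHE_PREFIX + url;
}

console.log(toCacheUrl("https://www.example.com/pricing"));
// -> https://webcache.googleusercontent.com/search?q=cache:https://www.example.com/pricing
```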

Change Your Scraping Pattern

You should vary the way you scrape data, because sticking to a single pattern will eventually get your scraper blocked. Bots are generally designed to crawl data in one fixed pattern, which stops looking like an organic user after a while, as the anti-bot mechanism notices the repetition. Humans, on the other hand, do not perform the same task over and over in exactly the same way; think of your own browsing habits.

A solution to this problem is to perform random clicks on HTML elements, random scrolling, and other random activities that make your bot look like an organic user.
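
Here is a rough Puppeteer sketch of adding such random activity; the scroll amounts and counts are arbitrary placeholders, not tuned values.

```javascript
// Add some "human" randomness: random mouse movement and random
// scrolling before the page content is read.
const puppeteer = require("puppeteer");

async function browseLikeAHuman(url) {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle2" });

  // Move the mouse to a random point on the page.
  await page.mouse.move(Math.random() * 800, Math.random() * 600);

  // Scroll down by a random amount, a few times.
  for (let i = 0; i < 3; i++) {
    await page.evaluate(() => window.scrollBy(0, 200 + Math.random() * 400));
  }

  const html = await page.content();
  await browser.close();
  return html;
}
```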

Avoid Scraping Images

Images are heavy objects, so downloading them consumes extra bandwidth compared to other elements. They are also frequently loaded via JavaScript, which is one reason they load slowly. Scraping images therefore slows down your whole data extraction process.
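
With Puppeteer, one common way to skip images is request interception; a minimal sketch:

```javascript
// Block image requests so the scraper does not waste bandwidth
// downloading pictures it will never parse.
const puppeteer = require("puppeteer");

async function fetchWithoutImages(url) {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.setRequestInterception(true);
  page.on("request", (request) => {
    if (request.resourceType() === "image") {
      request.abort(); // skip images entirely
    } else {
      request.continue();
    }
  });

  await page.goto(url, { waitUntil: "networkidle2" });
  const html = await page.content();
  await browser.close();
  return html;
}
```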

Change in HTML Tags

One of the main things to watch while scraping Google is its changing HTML structure. Google keeps updating its website to improve the user experience and the quality of its search results, which can break your scraper so that it no longer delivers the data you expect.

One solution is to run a test every 24 hours to check whether your parser is still returning the data you expect. If it is, fine; if the result comes back empty, your server should send you an alert email. Even better, run a test against your API and print the extracted results, so you can inspect the data yourself and figure out which tags are no longer returning the expected values.
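
As a sketch, a daily health check with axios and cheerio might look like the following; the h3 selector is a placeholder for whatever selector your parser actually relies on, and the alerting is left as a hook.

```javascript
// Daily parser health check: fetch a known query, parse it,
// and warn if the selector no longer matches anything.
const axios = require("axios");
const cheerio = require("cheerio");

async function checkParser() {
  const { data: html } = await axios.get(
    "https://www.google.com/search?q=coffee",
    { headers: { "User-Agent": "Mozilla/5.0" } }
  );

  const $ = cheerio.load(html);
  const results = $("h3"); // placeholder selector for result titles

  if (results.length === 0) {
    // Hook your own alerting here (email, Slack, etc.).
    console.error("Parser check failed: selector returned no results");
  } else {
    console.log(`Parser check passed: ${results.length} results found`);
  }
}

checkParser();
```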

However, maintaining your scraper day after day can become heavy work, since you have to keep the parser in sync with changing CSS selectors. To avoid this, you can use Serpdog’s Google Search API: it handles parser maintenance for you, so you don’t have to dig through complex HTML for the right tags, and you get ready-made structured JSON data.

Captcha Solving

Captchas are designed to distinguish organic users from automated ones, i.e. bots. They present challenges such as image-selection or puzzle tests that a human can pass but a computer cannot. Scraping websites at a large scale can get your scraper blocked, so your bot sees captchas instead of search results, forcing you to rely on captcha-solving services; these are not ideal for scraping, as they are slow and expensive. A better approach is to spread out your requests: there is then less chance of your scraper being blocked, and your bot will not have to face captchas at all.
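
As a rough, unverified sketch, you can detect a likely block page and back off instead of burning through proxies. The markers used here (an HTTP 429 status and the phrase "unusual traffic" in the body) are assumptions based on commonly reported block pages; confirm them against the responses you actually receive.

```javascript
// Detect a probable captcha/block page and back off.
// NOTE: the block markers below are assumptions, not a documented contract.
const axios = require("axios");

async function fetchOrBackOff(url) {
  const response = await axios.get(url, {
    validateStatus: () => true, // don't throw on 429/503
  });

  const body = String(response.data);
  const blocked =
    response.status === 429 || /unusual traffic/i.test(body);

  if (blocked) {
    console.warn("Looks like a captcha/block page - backing off");
    return null;
  }
  return body;
}
```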

Summary

In this tutorial, we covered ten tips for scraping Google. I hope that after reading this article, you will feel comfortable implementing some of these advanced scraping techniques in your own scraper.
Feel free to message me if I missed something.

Please share it on social media if you like the blog.

Follow me on Twitter. Thanks for reading!
