octoparse2019
How to Build a Web Crawler from Scratch – A Guide for Beginners

Living in a digital world has made our lives easier in many ways, as the internet has become the go-to source for almost everything we need. At the same time, this digital transformation has created new challenges in how data is accessed, collected, stored, and analyzed.

The number of internet users worldwide has passed 4 billion, up 7% from 2017, according to the 2018 Global Digital suite of reports from We Are Social and Hootsuite. People are turning to online options at an unprecedented rate, and everything we do on the internet generates a massive amount of user data: a review, a hotel booking, a purchase record, countless other examples. Not surprisingly, the internet is now one of the best places to analyze market trends, keep an eye on competitors, or gather the lead data you need to drive sales. The ability to access, aggregate, and analyze data from the web has become a critical skill for making sound, data-driven business decisions.

Building a web crawler, sometimes also referred to as a spider or spider bot, is a smart approach to aggregating big data sets. In this article, I will address the following questions:

1) What is a web crawler?

2) What can a web crawler do?

3) How to build a web crawler as a beginner?

1) What is a web crawler?

A web crawler is an internet bot that indexes the content of websites. It is a program or script, written in a programming language, that automatically scrapes information or data from the internet. The bot scans and scrapes certain information on each page it visits until all qualified pages are processed.


Depending on the application scenario, web crawlers fall roughly into four types: General Purpose Web Crawler, Focused Web Crawler, Incremental Web Crawler, and Deep Web Crawler.

General Purpose Web Crawler

A general purpose web crawler gathers as many pages as it can, starting from a particular set of seed URLs, in order to collect large-scale data and information. Running one requires high internet speed and large storage space. It is built primarily to scrape massive amounts of data for search engines and web service providers.

Focused Web Crawler

A focused web crawler selectively crawls pages related to pre-defined topics. Because it only needs to crawl pages relevant to those topics, rather than the whole web, it can run well with smaller storage space and a slower internet connection.

Generally speaking, this kind of web crawler is an important component of search engines such as Google, Yahoo, and Baidu.
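To make the idea concrete, here is a minimal sketch of the filtering step a focused crawler adds: it only follows links that appear related to its pre-defined topics. The keyword list and the is_relevant helper below are illustrative names, not a standard API, and a real focused crawler would score the full page content rather than just the URL.

# Pre-defined topics; any link mentioning one of these is worth following.
TOPIC_KEYWORDS = {"climate", "weather", "emissions"}

def is_relevant(url, anchor_text=""):
    # The simplest possible relevance test: keyword match on the URL and
    # anchor text. A real focused crawler would score the page content.
    text = (url + " " + anchor_text).lower()
    return any(keyword in text for keyword in TOPIC_KEYWORDS)

print(is_relevant("https://example.com/climate-report"))  # True
print(is_relevant("https://example.com/sports-scores"))   # False

Inside a crawl loop, this check runs just before a link is added to the queue, so off-topic pages are never downloaded at all.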

Incremental Web Crawler

An incremental web crawler crawls only newly generated or updated information on web pages. Because it does not re-download information that has not changed, it effectively saves crawling time and storage space.
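One common way to implement this, sketched below, is a conditional HTTP request: the crawler remembers the Last-Modified timestamp the server sent for each page and asks the server to return the page only if it has changed since then (otherwise the server replies 304 Not Modified). The in-memory last_modified cache and the fetch_if_updated helper are illustrative names, and the sketch assumes the server supports these standard HTTP headers.

import urllib.error
import urllib.request

# Caches the Last-Modified header each server sent on the previous
# crawl (an in-memory stand-in for a real crawl database).
last_modified = {}

def fetch_if_updated(url):
    # Ask the server to send the page only if it changed since our last visit.
    request = urllib.request.Request(url)
    if url in last_modified:
        request.add_header("If-Modified-Since", last_modified[url])
    try:
        response = urllib.request.urlopen(request, timeout=10)
    except urllib.error.HTTPError as err:
        if err.code == 304:      # 304 Not Modified: skip re-downloading
            return None
        raise
    stamp = response.headers.get("Last-Modified")
    if stamp:
        last_modified[url] = stamp
    return response.read()       # new or updated content to re-process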

Deep Web Crawler

Web pages can be divided into the surface web and the deep web (also known as invisible or hidden web pages). A surface page is one that can be indexed by a traditional search engine, or a static page reachable through a hyperlink. A deep web page is one whose content mostly cannot be reached through static links; it is hidden behind a search form, and users cannot see it without submitting certain keywords. For example, some pages are visible to users only after they register. A deep web crawler helps us crawl the information from such invisible web pages.
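Reaching a deep web page therefore usually means programmatically submitting the search form, as in the minimal sketch below using Python's standard library. The URL and the "q" field name are placeholders, not a real endpoint; a real site's form fields would have to be inspected first.

import urllib.parse
import urllib.request

# Hypothetical search form: the URL and field name are placeholders.
SEARCH_URL = "https://example.com/search"

def submit_search(keyword):
    # POST the keyword to the form, the way a user submitting it would.
    form_data = urllib.parse.urlencode({"q": keyword}).encode("utf-8")
    request = urllib.request.Request(SEARCH_URL, data=form_data)
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.read().decode("utf-8", errors="ignore")

# Each submitted keyword surfaces a different set of otherwise
# "invisible" result pages that the crawler can then parse:
# html = submit_search("web crawler")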

2) What can a web crawler do?

With the boom of the internet and the IoT, humans and networks interact constantly. Every time we search the internet, a web crawler helps us reach the information we want; and when a large amount of unstructured data is needed from the web, we can use a web crawler to scrape it.

Web Crawler as an Important Component of Search Engines

Search engines, and the search functions on portal sites, are built on focused web crawlers. The crawler helps the search engine locate the web pages most relevant to the searched topics.

In the case of a search engine, a web crawler helps:

Provide users with relevant and valid content
Create a copy of all visited pages for subsequent processing

Aggregating Datasets

Another good use of web crawlers is to aggregate datasets for study, business, and other purposes.

Understand and analyze the online behavior of users for a company or organization
Collect marketing information to make better short-term marketing decisions
Collect information from the internet and analyze it for academic research
Collect data to analyze an industry's long-term development trends
Monitor competitors' changes in real time

3) How to build a web crawler as a beginner?

Using a Programming Language (Example: Python)

For anyone without a coding background who wishes to build a web scraper with a programming language, Python is probably the easiest one to start with, compared to PHP, Java, or C/C++. Python's syntax is simple and readable for anyone who reads English.

Here is a simple example of a web crawler written in Python.

import queue
import re
import urllib.request

initial_page = "http://www.renminribao.com"

def store(url):
    # Placeholder for real processing: save the page, index it, etc.
    print("Crawled:", url)

def extract_urls(url):
    # Download the page and pull out absolute links with a simple regex.
    # (A real crawler would use an HTML parser instead.)
    try:
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    except OSError:
        return []  # skip pages that fail to download
    return re.findall(r'href="(https?://[^"]+)"', html)

url_queue = queue.Queue()  # FIFO queue of URLs waiting to be crawled
seen = set()               # every URL already queued, so each is crawled once

seen.add(initial_page)
url_queue.put(initial_page)

while not url_queue.empty():
    current_url = url_queue.get()            # take the next URL off the queue
    store(current_url)                       # process the page
    for next_url in extract_urls(current_url):
        if next_url not in seen:             # enqueue each discovered link once
            seen.add(next_url)
            url_queue.put(next_url)
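A few notes on this design: the FIFO queue makes the crawl breadth-first, visiting pages closest to the seed URL first, and the seen set ensures each URL is queued only once, which keeps the crawler from looping forever on pages that link to each other. A production crawler would also respect robots.txt, throttle its request rate, and persist the queue so a crash doesn't lose progress.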

As beginners who don't know how to program, we would have to spend significant time and energy learning Python before writing a web crawler ourselves. The whole learning process might take several months.

Using a Web Scraping Tool (Example: Octoparse)

For a beginner who wants to build a web crawler within a reasonable time, visual web scraping software like Octoparse is a good option to consider. It is a coding-free web scraping tool that comes with a free version. Compared with other web scraping tools, Octoparse can be a cost-efficient solution for anyone looking to quickly scrape some data off a website. [Top 5 Web Scraping Tools Comparison]

How to "build a web crawler" in Octoparse

1. Template Mode for one-touch scraping

Template Mode is the easiest way to scrape the web: all we need to do is select a template and click a few buttons.


2. Wizard Mode for easy scraping

Wizard Mode guides users step by step through scraping data in Octoparse and provides three pre-built templates: "List or Table", "List and Detail", and "Single Page". Provided the pre-built templates satisfy our needs, we can build a "web crawler" in Octoparse within a few clicks after downloading it.


3. Advanced Mode for complex web scraping

Since some websites have complex structures, Wizard Mode cannot always scrape all the data we want. In those cases, we are better off using Advanced Mode, which is more powerful and flexible for scraping data.

Here is an example: https://www.youtube.com/watch?v=b7eX7xInIVc&t=

4) Conclusion

In the big data era, staying competitive means keeping on top of the new technologies. Web crawling is an efficient way to reach the data you need, and it can be done either with programming languages like Python or with web scraping software like Octoparse, among many others.

It's always exciting to learn new things and empower ourselves with data intelligence. To end this post, I'll leave a few further readings for anyone who wishes to learn more about web crawling or data scraping.
