
Jen-Hsuan Hsieh

Day 45 of #100DaysOfCode: Python Web Crawler for Beginners: Parse Data from the Static Website

Introduction

A web crawler is an efficient way to get data when there are no REST APIs or libraries available to retrieve it.

What we most want from a web crawler is to retrieve data in real time. A web crawler program usually sends a request to the target website, such as an airline site, an e-commerce site, or a product gallery. It then parses the response from the website and extracts the information we expect.
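The request-parse-extract flow above can be sketched with only the Python standard library. The URL and the choice to collect `<h2>` headings are illustrative assumptions, not part of the original article, which may use other libraries such as requests or BeautifulSoup:

```python
# Minimal static-site crawler sketch: fetch a page, parse the HTML,
# and extract data (here, the text of every <h2> element).
from html.parser import HTMLParser
from urllib.request import urlopen


class TitleExtractor(HTMLParser):
    """Collect the text content of every <h2> element on a page."""

    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_h2 = False

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_h2 = False

    def handle_data(self, data):
        if self._in_h2 and data.strip():
            self.titles.append(data.strip())


def extract_titles(html):
    """Parse an HTML string and return the list of <h2> texts."""
    parser = TitleExtractor()
    parser.feed(html)
    return parser.titles


def crawl(url):
    """Send a request to the target website and extract the data we expect."""
    with urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    return extract_titles(html)


if __name__ == "__main__":
    # Hypothetical target; replace with a real static page.
    print(crawl("https://example.com"))
```

For more complex pages, a dedicated parser such as BeautifulSoup is usually more convenient, but the stdlib version shows that the crawler itself is just "request, then parse".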

We can present the data in different ways, such as web pages, APIs, or an executable file. Here are some cases I have solved with web crawling.

This article introduces the following topics.

  1. Introduction
  2. Implement a web crawler program for static websites

Details

Please refer to my article

Articles

Here are some of my articles and released projects. Feel free to check them out if you like!
