Scrapy is one of the most accessible tools you can use to scrape, and also to spider, a website with ease.
One of the significant issues with web crawlers is that they break so easily. Using a framework like Scrapy, together with its Contracts module to check the returned data, is one of the best ways to write crawlers you can trust.
First, let's write a simple scraper, and then see how we can use Contracts to verify that nothing breaks when you change your code.
Our Simple Scraper
Today, let's see how we can scrape Amazon to get reviews for, say, the Apple AirPods.
Here is the URL we are going to scrape https://www.amazon.com/Apple-AirPods-Charging-Latest-Model/dp/B07PXGQC1Q?pf_rd_p=35490539-d10f-5014-aa29-827668c75392&pf_rd_r=6T723HNZTK1TCR53MSAG&pd_rd_wg=U5S97&ref_=pd_gw_ri&pd_rd_w=rFFcu&pd_rd_r=9ea5f81c-8328-4375-bf2c-874e4f045991#customerReviews
Specifically, the review area.
First, we need to install Scrapy if you haven't already.
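A standard pip install is usually all it takes:

```bash
pip install scrapy
```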
Once installed, go ahead and create a project by invoking the startproject command.
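From your working directory, the command looks like this (we name the project scrapingproject to match the folder names used below):

```bash
scrapy startproject scrapingproject
```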
This will output something like the following.
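The exact wording depends on your Scrapy version and install paths, but roughly:

```
New Scrapy project 'scrapingproject', using template directory '...', created in:
    .../scrapingproject

You can start your first spider with:
    cd scrapingproject
    scrapy genspider example example.com
```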
And create a folder structure for the project.
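The generated layout for a new project typically looks like this:

```
scrapingproject/
    scrapy.cfg
    scrapingproject/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
```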
Now cd into the project. Because startproject creates a scrapingproject folder with another scrapingproject package inside it, you will need to do it twice.
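In other words:

```bash
cd scrapingproject
cd scrapingproject
```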
Now we need a spider to crawl through the Amazon reviews page, so we use genspider to tell Scrapy to create one for us. We call the spider ourfirstbot and pass it the URL of the Amazon page.
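The invocation looks something like this; if your Scrapy version complains about a full URL as the second argument, pass just the domain (amazon.com) instead and put the full review-page URL into start_urls by hand in the next step:

```bash
scrapy genspider ourfirstbot https://www.amazon.com/Apple-AirPods-Charging-Latest-Model/dp/B07PXGQC1Q
```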
This should return successfully
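The confirmation message looks roughly like this:

```
Created spider 'ourfirstbot' using template 'basic' in module:
  scrapingproject.spiders.ourfirstbot
```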
Great. Now open the file ourfirstbot.py in the spiders folder.
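It should look roughly like the sketch below; depending on exactly what you passed to genspider, you may need to edit allowed_domains and start_urls yourself so they point at amazon.com and the full review-page URL:

```python
import scrapy


class OurfirstbotSpider(scrapy.Spider):
    name = 'ourfirstbot'
    allowed_domains = ['amazon.com']
    start_urls = [
        'https://www.amazon.com/Apple-AirPods-Charging-Latest-Model/dp/B07PXGQC1Q?pf_rd_p=35490539-d10f-5014-aa29-827668c75392&pf_rd_r=6T723HNZTK1TCR53MSAG&pd_rd_wg=U5S97&ref_=pd_gw_ri&pd_rd_w=rFFcu&pd_rd_r=9ea5f81c-8328-4375-bf2c-874e4f045991#customerReviews',
    ]

    def parse(self, response):
        pass
```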
Let's examine this code before we proceed.
The allowed_domains list restricts all further crawling to the domains specified here.
start_urls is the list of URLs to crawl. For us, in this example, we only need one URL.
The def parse(self, response): function is called by Scrapy after every successful URL fetch. This is where we write our code to extract the data we want.
We now need to find the CSS selectors of the elements we want to extract data from. Go to the URL https://www.amazon.com/Apple-AirPods-Charging-Latest-Model/dp/B07PXGQC1Q?pf_rd_p=35490539-d10f-5014-aa29-827668c75392&pf_rd_r=6T723HNZTK1TCR53MSAG&pd_rd_wg=U5S97&ref_=pd_gw_ri&pd_rd_w=rFFcu&pd_rd_r=9ea5f81c-8328-4375-bf2c-874e4f045991#customerReviews
and right-click on the title of one of the reviews, then click Inspect. This will open the Chrome inspector.
You can see that the CSS class name of the title element is review-title, so we are going to ask Scrapy to get us the contents of this class.
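Assuming the class name is still review-title, a selector along these lines grabs the raw HTML of every title element on the page:

```python
review_titles = response.css('.review-title').extract()
```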
Similarly, we find the class name of the review rating element (note that these class names might have changed by the time you run this code).
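At the time of writing the rating element used the class review-rating, so an analogous selector would be:

```python
review_ratings = response.css('.review-rating').extract()
```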
If you are unfamiliar with CSS selectors, you can refer to this page by Scrapy https://docs.scrapy.org/en/latest/topics/selectors.html
We now use the zip function to pair up the elements at the same index in the two lists, so that each review title is matched with its rating.
We use BeautifulSoup to remove the HTML tags and get plain text.
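Putting it all together, a sketch of the spider might look like this; the selector class names are the assumptions from above, the item keys title and rating are our own choice, and BeautifulSoup (pip install beautifulsoup4) is only used to strip the tags from each extracted element:

```python
from bs4 import BeautifulSoup
import scrapy


class OurfirstbotSpider(scrapy.Spider):
    name = 'ourfirstbot'
    allowed_domains = ['amazon.com']
    start_urls = [
        'https://www.amazon.com/Apple-AirPods-Charging-Latest-Model/dp/B07PXGQC1Q?pf_rd_p=35490539-d10f-5014-aa29-827668c75392&pf_rd_r=6T723HNZTK1TCR53MSAG&pd_rd_wg=U5S97&ref_=pd_gw_ri&pd_rd_w=rFFcu&pd_rd_r=9ea5f81c-8328-4375-bf2c-874e4f045991#customerReviews',
    ]

    def parse(self, response):
        # Raw HTML of every review title and rating element on the page
        # (assumed class names; check them in the inspector before running)
        review_titles = response.css('.review-title').extract()
        review_ratings = response.css('.review-rating').extract()

        # zip pairs the title and rating that sit at the same index
        for title, rating in zip(review_titles, review_ratings):
            yield {
                # BeautifulSoup strips the tags, leaving just the text
                'title': BeautifulSoup(title, 'html.parser').get_text().strip(),
                'rating': BeautifulSoup(rating, 'html.parser').get_text().strip(),
            }
```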
And now let's run this with the following command (notice we are turning off obeying robots.txt).
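We override the ROBOTSTXT_OBEY setting on the command line so the spider does not skip the page because of robots.txt rules:

```bash
scrapy crawl ourfirstbot -s ROBOTSTXT_OBEY=False
```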
And bingo, you will get the results.
Now, let's export the extracted data to a CSV file. All you have to do is provide an export file name.
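Scrapy's feed export picks the format from the file extension, so something like this writes every yielded item to a CSV (data.csv is just an example name):

```bash
scrapy crawl ourfirstbot -s ROBOTSTXT_OBEY=False -o data.csv
```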
Or you may want the data in JSON format instead.
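Just swap the extension:

```bash
scrapy crawl ourfirstbot -s ROBOTSTXT_OBEY=False -o data.json
```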
Using Contracts to detect code breakages
As we know, the parse function needs to return the review title and review rating. Let's pretend this is super important and sacrosanct for our process. To make sure these basics are not broken as the implementation grows, we specify contracts in the function's multi-line docstring. The contract syntax is @contract_name, and you can even create your own contracts, which is pretty neat.
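With Scrapy's built-in contracts, for example, the docstring of our parse function could look like this: @url names the page to fetch during the check, @returns bounds the number of items, and @scrapes lists the fields every item must contain (here we use the base product URL, without the tracking parameters):

```python
# Inside OurfirstbotSpider
def parse(self, response):
    """ Parse the AirPods review page.

    @url https://www.amazon.com/Apple-AirPods-Charging-Latest-Model/dp/B07PXGQC1Q
    @returns items 1 10
    @scrapes title rating
    """
```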
This tells the Contracts module to check that the URL is the right one, that the listed fields are definitely present in each item, and that the number of items returned falls between a minimum of one and a maximum of ten.
Let's modify the code to add support for the Contracts module.
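Here is the full spider sketch again with the contracts added; everything else stays the same:

```python
from bs4 import BeautifulSoup
import scrapy


class OurfirstbotSpider(scrapy.Spider):
    name = 'ourfirstbot'
    allowed_domains = ['amazon.com']
    start_urls = [
        'https://www.amazon.com/Apple-AirPods-Charging-Latest-Model/dp/B07PXGQC1Q?pf_rd_p=35490539-d10f-5014-aa29-827668c75392&pf_rd_r=6T723HNZTK1TCR53MSAG&pd_rd_wg=U5S97&ref_=pd_gw_ri&pd_rd_w=rFFcu&pd_rd_r=9ea5f81c-8328-4375-bf2c-874e4f045991#customerReviews',
    ]

    def parse(self, response):
        """ Parse the AirPods review page.

        @url https://www.amazon.com/Apple-AirPods-Charging-Latest-Model/dp/B07PXGQC1Q
        @returns items 1 10
        @scrapes title rating
        """
        review_titles = response.css('.review-title').extract()
        review_ratings = response.css('.review-rating').extract()

        for title, rating in zip(review_titles, review_ratings):
            yield {
                'title': BeautifulSoup(title, 'html.parser').get_text().strip(),
                'rating': BeautifulSoup(rating, 'html.parser').get_text().strip(),
            }
```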
Now, when you change some part of the code, you can check the integrity of the spider by running the check command.
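The check command runs the contracts for the named spider:

```bash
scrapy check ourfirstbot
```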
This should return something like the following if all went well.
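The exact count and timing will differ, but a passing run is reported in the familiar unittest style:

```
...
----------------------------------------------------------------------
Ran 3 contracts in 1.42s

OK
```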
If you really want to build trustworthy web crawlers, you can consider using a rotating proxy service like Proxies API, which prevents IP blocks, or look at a cloud-based web crawler like TeraCrawler.io, which can handle all these subtleties behind the scenes.