Eztosin

Posted on • Originally published at Medium

Python Web Scraping: Best Libraries and Practices

The word data should no longer sound strange to you; it's a four-letter word that holds enormous importance in today's world. Access to the right information is essential for startups, individuals, developers, and website owners alike. Finding the right content to drive traffic to your site can feel out of reach, but it isn't once you have Python web scraping at your beck and call.

In this article, we’re going to delve into the most important and widely used Python libraries, implementing the best practices involved in scraping the web, dissecting what web scraping means in its entirety and so much more. So join me as we explore the world of Python Web Scraping together!

Introduction to Web Scraping

In simple terms, web scraping means extracting content from a webpage, whether or not the site authorizes it. The extraction is usually done for a specific purpose; for example, a user might need information about weather forecasts, or the latest technology news for a blog. There's a wide range of needs for scraping the web, from gathering data for research to automating data collection for business insights.

A notable application of web scraping is extracting names, addresses, and phone numbers of individuals from a website. While this might seem beneficial for profiling and targeting new customers as a business owner, it's crucial to consider data privacy and ethics when using such scraped data.

It's important to note that scraping the contents of websites is not universally accepted. Many companies and website owners strongly frown upon scraping of their websites; some have built-in security measures that protect their content from unauthorized access. Therefore, if you need to scrape particular content without getting blocked, you can turn to companies like ScrapingBee, which offer practices and pricing to suit your needs.

Getting Started with Python for Web Scraping

Python is a high-level programming language known for its readable syntax, which makes it easier to learn and master. It boasts a vast ecosystem of libraries and is open source, allowing its users to contribute to its already amazing features.
To begin using Python for web scraping, you'll need Python 3 and pip3 installed. If you're on a Linux operating system such as Ubuntu 20.04, you can do that with the commands below in your terminal:

sudo apt update
sudo apt install python3

# Install the Python package installer
sudo apt install python3-pip

Now that you have Python and pip3 installed, let's look at the common libraries used for web scraping in Python. In this article, we're going to focus on the most important and frequently used ones, which include:

  • Requests: In Python web scraping, the Requests library is considered a top choice, allowing the user to send HyperText Transfer Protocol (HTTP) requests. These requests can be made with various methods such as GET, POST, PUT, and DELETE, and a response is then received from the web server the request was made to. As an essential tool for extracting web content, it is a foundation that other scraping libraries often rely on.

pip3 install requests

Example usage:

     import requests

     # Send a GET request and print the response content
     response = requests.get('https://example.com')
     print(response.text)
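Since Requests supports more than GET, here is a minimal sketch of a POST request; the endpoint and form payload below are placeholders invented for illustration, not part of the original example:

     import requests

     # Send a POST request with a placeholder form payload
     payload = {'username': 'demo', 'password': 'secret'}
     response = requests.post('https://example.com/login', data=payload)

     # Print the HTTP status code of the response
     print(response.status_code)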
  • Beautiful Soup: Who doesn't like a good meal, eh? As its name implies, it performs its task beautifully. This remains my favorite Python module for scraping. Beautiful Soup is quite a popular tool in Python web scraping; with it, you can target specific tags and attributes in the HTML of a requested webpage. It works with the HTML parser included in the Python standard library as well as third-party parsers such as lxml. It can be installed using the command below:

pip3 install beautifulsoup4

Example usage:

     from bs4 import BeautifulSoup

     # Parse HTML content and print the title
     soup = BeautifulSoup(html_content, 'html.parser')
     print(soup.title)
  • Pyquery: Pyquery offers jQuery-like syntax for querying documents, using lxml under the hood to manipulate HTML and XML. First, you need to install Pyquery:

pip3 install pyquery

Example usage:

     from pyquery import PyQuery as pq

     # Create a PyQuery object and print text content
     doc = pq(html_content)
     print(doc('p').text())
  • Scrapy: Scrapy is a powerful Python framework for scraping data at scale. In Scrapy, requests are scheduled and processed asynchronously, which means Scrapy doesn't need to wait for one request to finish before sending another, allowing the user to perform more requests and save time. The term "Spider" or "Scrapy Spider" is often used in this context: a Spider is a Python class that a programmer defines, and Scrapy uses its instructions to extract content from a website.

pip3 install scrapy

Example usage:

     import scrapy

     # Define a Scrapy spider and parse the response
     class MySpider(scrapy.Spider):
         name = 'example_spider'
         start_urls = ['https://example.com']

         def parse(self, response):
             # Scraping logic goes here; yield extracted items
             yield {'title': response.css('title::text').get()}
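Because Scrapy schedules requests asynchronously, you can tune how many requests it keeps in flight at once. Here is a minimal sketch using Scrapy's per-spider custom_settings; the spider name and the value 32 are just illustrative:

     import scrapy

     # A spider that raises the number of concurrent requests Scrapy sends
     class FastSpider(scrapy.Spider):
         name = 'fast_spider'
         start_urls = ['https://example.com']

         # CONCURRENT_REQUESTS is a standard Scrapy setting (default is 16)
         custom_settings = {'CONCURRENT_REQUESTS': 32}

         def parse(self, response):
             # Yield a simple item so the spider produces output
             yield {'url': response.url}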
  • Selenium: Selenium gives the user more control over requests made to websites, allowing richer interaction such as filling in forms, clicking links, and navigating through the available pages. Because it drives a real browser, it can scrape sites that render their content with JavaScript, which also makes it useful for automation. It's often used alongside other modules such as Beautiful Soup. Selenium supports headless browsing, meaning you can browse websites without the user interface; a headless sketch follows the basic example below. To use Selenium, install it along with the driver for the browser you intend to use:

pip3 install selenium

Example usage:

     from selenium import webdriver

     # Launch a Chrome WebDriver and print page title
     driver = webdriver.Chrome()
     driver.get('https://example.com')
     print(driver.title)
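Since Selenium supports headless browsing, here is a minimal sketch of launching Chrome without a visible window; it assumes Selenium 4, where ChromeOptions and the --headless=new flag are available:

     from selenium import webdriver

     # Configure Chrome to run without opening a visible browser window
     options = webdriver.ChromeOptions()
     options.add_argument('--headless=new')

     driver = webdriver.Chrome(options=options)
     driver.get('https://example.com')
     print(driver.title)
     driver.quit()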

Key Features and Considerations:

  • Each library has its strengths and is suited for different scraping tasks.
  • The Requests library is efficient for basic HTTP requests.
  • Beautiful Soup simplifies HTML parsing and extraction.
  • PyQuery provides jQuery-style querying of parsed HTML and XML.
  • Scrapy is ideal for scraping large datasets.
  • Selenium enables automated web interaction.

In sum, each library discussed has its strengths and suits different scraping tasks. However, successful web scraping also depends on a basic understanding of HTML and CSS, since most of the requests you make and responses you receive will be in these formats. HTML is like the human skeleton, but for a website; CSS adds the visual appeal and design elements that make HTML more beautiful and appealing. A good grasp of HTML tags and CSS selectors is crucial to navigating and extracting content from any website effectively. By combining the capabilities of these Python libraries with a solid understanding of HTML and CSS, developers can unlock the full potential of Python web scraping and achieve their data extraction goals.

Understanding HTML Structure:

HTML is the structural backbone of web content. It comprises elements and tags that organize and present text, images, and videos, arranging and structuring them into sections. Each section is defined by specific elements or tags, such as a title, heading, or sub-heading; it can even be an image tag or any other multimedia element, as the case may be.

In web scraping, understanding the structure of a website is crucial to successful data extraction. Each HTML tag often has a class name or ID assigned for easy identification and extraction of relevant information. For instance, <div class="article-content"> is a div tag with the class name "article-content"; it may contain the main content of a web page, facilitating targeted extraction.

In addition, CSS (Cascading Style Sheets) affects how a web page is displayed to users. CSS selectors such as class selectors (.class-name), ID selectors (#id-name), and element selectors (element-name) allow developers to pinpoint and style specific elements on a webpage.
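To make these selector types concrete, here is a small sketch using Beautiful Soup's select() and select_one() methods on a hypothetical HTML snippet (the class and ID names are invented for illustration):

from bs4 import BeautifulSoup

# A hypothetical snippet with a class, an ID, and plain elements
html = '<div class="article-content"><h1 id="headline">Hello</h1><p>First paragraph.</p></div>'
soup = BeautifulSoup(html, 'html.parser')

# Class selector: matches elements with class="article-content"
print(soup.select('.article-content'))

# ID selector: matches the single element with id="headline"
print(soup.select_one('#headline').text)

# Element selector: matches every <p> tag
print(soup.select('p'))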

In the context of web scraping, a strong grasp of CSS selectors aids in identifying such structured data and navigating through styled content. Furthermore, CSS can affect how JavaScript-rendered content is loaded and displayed; when dealing with dynamic content or interactions driven by JavaScript, accounting for CSS effects is vital for accurate scraping results.

By understanding HTML structure, CSS selectors, and how they interact with JavaScript, developers can effectively navigate web pages and extract valuable data for various applications.

Basic Web Scraping Techniques

In this section, we’ll explore some basic techniques of Python Web Scraping using popular libraries such as Requests, Beautiful Soup, and Pyquery.

Example of using Requests

Create a file named request.py and type in the following lines of code, then save and exit your text editor. Then run the command python3 request.py; your output should be similar to mine in the image below.

# Import the requests module
import requests

# Specify the URL of the webpage to scrape
url = 'https://dev-tools.alx-tools.com/'

# Send a GET request to the URL and store the response in a variable
response = requests.get(url)

# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Print the HTML content of the webpage
    print(response.text)
else:
    # Print an error message if the request failed
    print('Error:', response.status_code)

Output:

Expected output after running request.py

Example of using Beautiful Soup:

Create a file named soup.py and type in the following lines of code, then save and exit your text editor. Then run the command python3 soup.py; your output should be similar to mine in the image below.

# Import the requests and Beautiful Soup modules
import requests
from bs4 import BeautifulSoup

# Specify the URL of the webpage to scrape
url = 'https://dev-tools.alx-tools.com/'

# Send a GET request to the URL and store the response in a variable
response = requests.get(url)

# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Create a BeautifulSoup object with the HTML content of the webpage
    soup = BeautifulSoup(response.text, 'html.parser')

    # Print the title of the webpage
    print('Title:', soup.title.text)

    # Print a specific element from the webpage (e.g., the first paragraph)
    print('First Paragraph:', soup.p.text)
else:
    # Print an error message if the request failed
    print('Error:', response.status_code)

Output:

Expected output after running soup.py

Example code using Pyquery:

In the case of Pyquery, you need to be careful when naming your file to avoid circular import errors when running your code. In my case, I created a file named pyquery_scraping.py with the following lines of code, after which I ran the command python3 pyquery_scraping.py

# Import the requests and PyQuery modules
import requests
from pyquery import PyQuery as pq

# Specify the URL of the webpage to scrape
url = 'https://dev-tools.alx-tools.com/'

# Send a GET request to the URL and store the response in a variable
response = requests.get(url)

# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Create a PyQuery object with the HTML content of the webpage
    doc = pq(response.text)

    # Print the title of the webpage
    print('Title:', doc('title').text())

    # Print a specific element from the webpage (e.g., the first paragraph)
    print('First Paragraph:', doc('p').eq(0).text())
else:
    # Print an error message if the request failed
    print('Error:', response.status_code)


Output:

Expected output after running pyquery_scraping.py

Advanced Web Scraping Techniques

In this section, we are going to see some examples of web scraping in Python using popular libraries like Scrapy and Selenium.
Example of Python web scraping using Scrapy:

First, ensure you have Scrapy installed. If not, you can install it using pip:
pip3 install scrapy

Next, create a new Scrapy project and navigate to the project directory:
scrapy startproject dev_tools_scraper
cd dev_tools_scraper/dev_tools_scraper/spiders
Now, create a new spider inside the spiders directory (dev_tools_scraper/dev_tools_scraper/spiders). Let's name it dev_tools_spider.py:

import scrapy


class DevToolsSpider(scrapy.Spider):
    """
    A Scrapy spider to scrape data from the specified URL
    """

    # Name of the spider
    name = 'dev_tools'

    # URL to start scraping from
    start_urls = [
        'https://dev-tools.alx-tools.com/',
    ]

    def parse(self, response):
        """
        Method to parse the response from each URL
        """

        # Extracting data from the HTML structure
        title = response.css('title::text').get()
        navbar_links = response.css('.navbar-nav a.nav-link::text').getall()
        about_content = response.css('#about .lead::text').get()

        # Yielding the extracted data as a dictionary
        yield {
            'title': title,
            'navbar_links': navbar_links,
            'about_content': about_content,
            }


Save this spider file in the spiders directory of your Scrapy project.

Now, you can run the spider using the following command:
scrapy crawl dev_tools -o output.json
Replace output.json with the desired output file name. This command will execute the spider, scrape the specified URL, and save the extracted data into a JSON file.

You can customize the spider to extract more specific data based on your requirements by using CSS selectors or XPath expressions to target specific elements in the HTML structure.
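For illustration, here is a hypothetical variant of the spider above that extracts the same title and navbar links with XPath expressions instead of CSS selectors; the //nav//a path is an approximation of the CSS version:

import scrapy


class XPathSpider(scrapy.Spider):
    """
    A variant of the spider above using XPath instead of CSS selectors
    """

    name = 'dev_tools_xpath'
    start_urls = ['https://dev-tools.alx-tools.com/']

    def parse(self, response):
        # //title/text() selects the text node inside the <title> tag
        yield {
            'title': response.xpath('//title/text()').get(),
            'navbar_links': response.xpath('//nav//a/text()').getall(),
        }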

Output:

Expected output after running scrapy crawl dev_tools -o output.json or python -m scrapy crawl dev_tools -o output.json

Example of Python web scraping using Selenium

Again, ensure you have Selenium installed. You can install it using pip:
pip3 install selenium
Also, download a WebDriver based on the web browser of your choice:

  • For Chrome: visit the ChromeDriver download page and download the appropriate version for your operating system.
  • For Firefox: you can get GeckoDriver from the GeckoDriver releases page.
  • For other browsers like Edge, Safari, etc., you can find their respective WebDriver downloads from the official sources.

Note that recent Selenium releases (4.6 and later) also ship with Selenium Manager, which can download a matching driver for you automatically.
# Import the web driver
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Initialize the WebDriver (assuming you have the appropriate driver
# installed, e.g., chromedriver for Chrome)
service = Service(executable_path="chromedriver.exe")  # Adjust to the path of your chromedriver binary (e.g., "./chromedriver" on Linux)
driver = webdriver.Chrome(service=service)

# Navigate to a website
driver.get('https://dev-tools.alx-tools.com/')

# Get and print the title of the webpage
print("Title of the webpage:", driver.title)

# Close the browser window
driver.quit()

Save this file in your project directory; a name like selenium_scraping.py is safest, since calling it selenium.py would shadow the Selenium package and cause an import error.
After running the script, you should have a similar output to mine.

Output:

Expected output after running selenium_scraping.py

Best Practices and Tips

You've just learned a skill that can save you a lot of time and resources: web scraping with Python libraries. In this article, I used a basic website, Holberton School - Developer tools, because it's free to use for learning purposes. While learning is an important part of our journey as developers, it's important to learn the right way and to approach web scraping responsibly and ethically.
Here's a list of best practices and tips to consider when engaging in web scraping:

Tips:

  • Constant practice ensures mastery. Consider utilizing websites like GeeksforGeeks, which offer free Python tutorials on web scraping.
  • When using Scrapy, always ensure that your .py spider file is located in the spiders directory to avoid errors such as KeyError(f"Spider not found: {spider_name}").
  • Download a web driver with the same version as the browser you intend to use, whether it’s Chrome or Firefox. Refer to their official websites for the correct version.

Common Pitfalls:

  • Not organizing your scraping files properly can lead to errors and confusion, especially in larger projects.
  • Not commenting your code can lead to confusion; employ proper coding practices.
  • Ignoring website terms of service and scraping without authorization can result in legal issues.

Real-World Applications:

Python libraries like pandas and Matplotlib are often used by data analysts for processing and analyzing scraped data (see the sketch after this list).
Web scraping can be used in various real-world applications, such as:

  • Extracting the latest news from a blog site for content aggregation.
  • Gathering product information and prices from e-commerce sites for competitive analysis.
  • Monitoring changes in stock prices or financial data from financial websites.
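As a quick illustration of the pandas hand-off mentioned above, here is a minimal sketch that loads the output.json file produced by the earlier Scrapy example into a DataFrame (assuming pandas is installed via pip3 install pandas):

import pandas as pd

# Load the JSON produced by `scrapy crawl dev_tools -o output.json`
df = pd.read_json('output.json')

# Show the first few scraped records
print(df.head())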

By following these best practices, avoiding common pitfalls, and exploring real-world applications, you can harness the power of web scraping effectively and responsibly in your projects.

Conclusion:

In this article, we've explored powerful Python libraries for web scraping and provided practical examples of how to install and use them. We've discussed each tool's strengths and when to use it, based on project size and complexity.

However, this is just an introduction to real problems in Python Web Scraping. To deepen your understanding and gain more practical insights, I encourage you to explore additional resources provided below. These resources include tutorials, documentation, and community forums where you can learn from others and share your experiences.

By continuing to learn and practice Python Web Scraping techniques, you can unlock a world of possibilities in data extraction and analysis. Whether you are a developer, data analyst, or website owner, Python opens doors to valuable tools and libraries for your usage.

Now, it’s time to dive deeper and apply what you’ve learned. Happy Scraping!

Additional Resources:

Scrapy Tutorial.
freeCodeCamp Web Scraping Tutorial.
How to Setup Selenium with Chromedriver on Ubuntu.
Python Selenium Tutorial
How to Bypass CAPTCHA with Selenium.
Best Practices for Ethical Web Scraping.
