DEV Community

Bohdan Harabadzhyu

Image Scraping with HtmlAgilityPack: A Practical Guide Using ConsoleWebScraper


Web scraping is a valuable tool for automating the collection of information from websites. My simple, open-source ConsoleWebScraper application, available on GitHub and SourceForge, demonstrates how to use the HtmlAgilityPack library to scrape images from web pages. This guide will focus on the image scraping capabilities of the application and provide an overview of its core functionality.

Introduction to HtmlAgilityPack

HtmlAgilityPack is a .NET library that simplifies HTML parsing, making it a favorite among developers for web scraping tasks. It provides a robust way to traverse and manipulate HTML documents. With HtmlAgilityPack, extracting elements like images and text from web pages becomes straightforward and efficient.
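As a minimal illustration (using a hardcoded HTML snippet rather than a live page), loading a document and querying it with HtmlAgilityPack looks like this:

```csharp
using System;
using System.Linq;
using HtmlAgilityPack; // NuGet package: HtmlAgilityPack

var doc = new HtmlDocument();
doc.LoadHtml("<html><body><img src=\"/logo.png\"><p>Hello</p></body></html>");

// Collect the src attribute of every <img> element.
var srcs = doc.DocumentNode.Descendants("img")
    .Select(img => img.GetAttributeValue("src", string.Empty))
    .Where(src => src.Length > 0)
    .ToList();

Console.WriteLine(string.Join(", ", srcs)); // /logo.png
```

The same `Descendants`/`GetAttributeValue` pattern is what ConsoleWebScraper uses below, just pointed at real page content.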

Why Scrape Images?

Images are a significant part of web content, used for purposes such as visual data representation, marketing, and documentation. Scraping images can be useful for archiving content, building datasets, or monitoring website changes. The ConsoleWebScraper application serves as a simple example of how to automate this process with HtmlAgilityPack.

Key Features of ConsoleWebScraper

ConsoleWebScraper offers a few essential functionalities:
  • URL Input: Prompts the user to enter a URL and retrieves the HTML content.
  • HTML Parsing: Extracts inner URLs and images from the HTML content.
  • File Saving: Saves scraped URLs, images, and HTML content (with tags removed) to separate files for easy access and further analysis.
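The URL-input step boils down to a single HttpClient call; a minimal sketch (the URL here is just a placeholder for whatever the user enters):

```csharp
using System;
using System.Net.Http;

using var client = new HttpClient();
// Fetch the raw HTML of the page the user entered.
string html = await client.GetStringAsync("https://example.com");
Console.WriteLine($"Downloaded {html.Length} characters of HTML.");
```

The resulting string is what gets handed to the parsing and saving steps described below.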

Additional Functionalities

Before diving into the core image scraping method, it's helpful to understand the broader functionality of the ConsoleWebScraper application, which includes several supporting methods and classes.
Supporting classes:
  • The Client class manages interaction with the user and controls the application's flow, listening for user commands and executing the appropriate actions.
  • The Printer class provides simple methods to display the application's start page and main menu.
  • The Controller class orchestrates the scraping process, managing user input, folder creation, and invoking the web scraper service methods.
  • The HtmlTags class provides a method to remove HTML tags from the content, leaving only the text.
  • The IWebScraperService interface defines methods for saving URLs, content, and images, which are implemented in the WebScraperService class.
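The tag-stripping step performed by HtmlTags can be sketched with a single regular expression (the class's actual implementation may differ; this only shows the idea):

```csharp
using System;
using System.Text.RegularExpressions;

// Strip anything that looks like an HTML tag, leaving only the text.
static string StripTags(string html) =>
    Regex.Replace(html, "<[^>]+>", string.Empty);

Console.WriteLine(StripTags("<p>Hello <b>world</b></p>")); // Hello world
```

A regex like this is fine for a simple console tool, though a parser-based approach (e.g. HtmlAgilityPack's InnerText) handles malformed markup more robustly.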

The Core Method: SaveImagesToDoc

The heart of the image scraping functionality is encapsulated in the SaveImagesToDoc method. Let's dive deeper into this method to understand how it works.

public async Task SaveImagesToDoc(string fileName, string htmlContent, string baseUrl)
{
    // Create directory to save images
    Directory.CreateDirectory(fileName);

    // Load HTML content into HtmlDocument
    var doc = new HtmlAgilityPack.HtmlDocument();
    doc.LoadHtml(htmlContent);

    // Extract image URLs
    var images = doc.DocumentNode.Descendants("img")
        .Select(e => e.GetAttributeValue("src", null))
        .Where(src => !string.IsNullOrEmpty(src))
        .Select(src => new Uri(new Uri(baseUrl), src).AbsoluteUri)
        .ToList();

    // Initialize HttpClient
    using (HttpClient client = new HttpClient())
    {
        int pictureNumber = 1;
        foreach (var img in images)
        {
            try
            {
                // Download image as byte array
                var imageBytes = await client.GetByteArrayAsync(img);

                // Get image file extension
                var extension = Path.GetExtension(new Uri(img).AbsolutePath);

                // Save image to file (Path.Combine keeps the path cross-platform)
                await File.WriteAllBytesAsync(Path.Combine(fileName, $"Image{pictureNumber}{extension}"), imageBytes);
                pictureNumber++;
            }
            catch (Exception ex)
            {
                // Log any errors
                Console.WriteLine($"Failed to download or save image {img}: {ex.Message}");
            }
        }
    }
}


Step-by-Step Breakdown

  • Create Directory: The method starts by creating a directory to store the downloaded images.
Directory.CreateDirectory(fileName);
  • Load HTML Content: It then loads the provided HTML content into an HtmlDocument object from the HtmlAgilityPack library.
var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(htmlContent);
  • Extract Image URLs: The method identifies all img elements in the HTML and extracts their src attributes. It converts these relative URLs to absolute URLs using the base URL of the web page.
var images = doc.DocumentNode.Descendants("img")
    .Select(e => e.GetAttributeValue("src", null))
    .Where(src => !string.IsNullOrEmpty(src))
    .Select(src => new Uri(new Uri(baseUrl), src).AbsoluteUri)
    .ToList();
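The Uri composition is what handles both relative and already-absolute src values; for example:

```csharp
using System;

// A root-relative src resolved against the page's base URL.
var baseUri = new Uri("https://example.com/articles/page.html");
var absolute = new Uri(baseUri, "/images/pic.png").AbsoluteUri;
Console.WriteLine(absolute); // https://example.com/images/pic.png

// An already-absolute src passes through unchanged.
Console.WriteLine(new Uri(baseUri, "https://cdn.example.com/a.jpg").AbsoluteUri);
```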
  • Download and Save Images: Using HttpClient, the method iterates through the list of image URLs, downloads each image as a byte array, and saves it to the designated directory with an appropriate file extension. Errors during the download or save process are caught and logged.
using (HttpClient client = new HttpClient())
{
    int pictureNumber = 1;
    foreach (var img in images)
    {
        try
        {
            var imageBytes = await client.GetByteArrayAsync(img);
            var extension = Path.GetExtension(new Uri(img).AbsolutePath);
            await File.WriteAllBytesAsync(Path.Combine(fileName, $"Image{pictureNumber}{extension}"), imageBytes);
            pictureNumber++;
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Failed to download or save image {img}: {ex.Message}");
        }
    }
}
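One detail worth noting in this step: Path.GetExtension is applied to Uri.AbsolutePath rather than the raw URL, so query strings don't leak into the file name. For example:

```csharp
using System;
using System.IO;

var url = "https://example.com/images/pic.png?size=large";
// AbsolutePath is "/images/pic.png" — the query string is already stripped.
var extension = Path.GetExtension(new Uri(url).AbsolutePath);
Console.WriteLine(extension); // .png
```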

Conclusion

The ConsoleWebScraper application demonstrates the fundamental use of the HtmlAgilityPack library to scrape images from web pages. Despite its simplicity, the tool covers the essentials, making it a good starting point for entry-level scraping tasks. By automating image extraction and storage, you can streamline your data collection efforts. Happy scraping!
