Nishkarsh Pandey
🕸️ The Final Boss of Web Scraping: A Streamlit-Powered, Multi-Page Ethical Scraper

"Most scrapers stop at the first page. But this one doesn't know when to quit."
– Probably you, after building this.

🚀 Introduction
Web scraping is one of the most powerful tools in any developer's toolkit, from collecting product prices and news articles to monitoring SEO tags or academic citations. But most beginner tutorials stop at scraping a single page.

Today, we go full Final Boss mode.
You'll learn how to build a smart, ethical, and multi-page web scraper wrapped in a beautiful Streamlit app.

💡 What this project does:

✅ Scrapes headings, paragraphs, images, and links.
🔁 Crawls multiple internal pages recursively.
🔍 Allows keyword filtering.
🤖 Respects robots.txt.
💾 Saves everything to CSV.
📊 Features progress bars and live feedback.
🎛️ Has an intuitive Streamlit UI.

Let's dive in.

📦 Tech Stack
Python 🐍
Requests – for HTTP requests
BeautifulSoup – for parsing HTML
Streamlit – for the interactive UI
CSV module – for saving scraped data

🛠️ How It Works
Here's a high-level look at the logic:
You enter a URL in the Streamlit app.

The scraper:
Checks if scraping is allowed via robots.txt
Fetches the HTML of the page
Extracts key elements (headings, paragraphs, images, links)

Optionally, it:
Crawls internal links recursively (within the same domain; see the crawl sketch after the main listing below)
Filters content based on a keyword
The results are then displayed in Streamlit and saved to a CSV file.
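
Before the full listing, here is how the robots.txt step could be handled more rigorously. The listing below uses a simple "Disallow: /" scan; Python's standard-library urllib.robotparser can evaluate the actual per-path rules instead. This is only a sketch, and can_fetch is my own helper name, not part of the app's code:

from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def can_fetch(url, user_agent="*"):
    # Build the robots.txt URL from the site root and load its rules
    parsed = urlparse(url)
    parser = RobotFileParser()
    parser.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    try:
        parser.read()  # fetches and parses robots.txt over the network
    except OSError:
        return True  # if robots.txt can't be fetched, err on the side of allowing
    # Ask whether this specific URL is allowed for our user agent
    return parser.can_fetch(user_agent, url)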

import requests
from bs4 import BeautifulSoup
import csv
import os
from itertools import zip_longest
from urllib.parse import urlparse
import streamlit as st

# Function to check if scraping is allowed on a website
def is_scraping_allowed(url):
    try:
        # Build the robots.txt URL from the site root (not from the page URL)
        parsed = urlparse(url)
        robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"
        response = requests.get(robots_url, timeout=10)

        if response.status_code == 200:
            # Only treat the site as off-limits if there is a blanket "Disallow: /" rule;
            # a plain substring check would also match rules like "Disallow: /admin"
            for line in response.text.splitlines():
                if line.strip().lower() == "disallow: /":
                    return False
            return True
        else:
            return True
    except requests.RequestException:
        # If robots.txt can't be fetched, assume scraping is allowed
        return True

# Function to scrape the website and extract content
def scrape_website(url):
    if not is_scraping_allowed(url):
        st.error("Scraping is disallowed on this site.")
        return

    try:
        # Send a GET request with a descriptive User-Agent header and a timeout
        headers = {"User-Agent": "Mozilla/5.0 (compatible; SimpleScraper/1.0)"}
        response = requests.get(url, headers=headers, timeout=10)

        if response.status_code != 200:
            st.error(f"Failed to retrieve webpage. Status code: {response.status_code}")
            return

        # Parse the content with BeautifulSoup
        soup = BeautifulSoup(response.content, 'html.parser')

        # Extracting headings
        headings = soup.find_all(['h1', 'h2', 'h3', 'h4', 'h5', 'h6'])
        headings_text = [heading.get_text(strip=True) for heading in headings]

        # Extracting paragraphs
        paragraphs = soup.find_all('p')
        paragraphs_text = [para.get_text(strip=True) for para in paragraphs]

        # Extracting links
        links = soup.find_all('a', href=True)
        links_list = [link['href'] for link in links]

        # Extracting image URLs
        images = soup.find_all('img', src=True)
        images_list = [image['src'] for image in images]

        # Save the data to a CSV file
        save_to_csv(headings_text, paragraphs_text, links_list, images_list)

        return headings_text, paragraphs_text, links_list, images_list

    except Exception as e:
        st.error(f"Error during scraping: {e}")
        return

# Function to save the data into a CSV file
def save_to_csv(headings, paragraphs, links, images):
    filename = 'scraped_data.csv'

    # Check if file exists, if so, append data
    file_exists = os.path.isfile(filename)

    with open(filename, mode='a', newline='', encoding='utf-8') as file:
        writer = csv.writer(file)
        if not file_exists:
            # Write the header row if it's a new file
            writer.writerow(['Heading', 'Paragraph', 'Link', 'Image URL'])

        # Write the data to the CSV file; zip_longest pads the shorter lists
        # with empty strings so no extracted items are silently dropped
        for heading, paragraph, link, image in zip_longest(headings, paragraphs, links, images, fillvalue=''):
            writer.writerow([heading, paragraph, link, image])

# Streamlit UI setup
st.title("Web Scraper with Streamlit")
# Input for URL
url = st.text_input("Enter the URL to scrape:")

if url:
    # Scrape the website and get the results (scrape_website returns None on failure)
    result = scrape_website(url)
    if result:
        headings, paragraphs, links, images = result
        if headings:
            st.subheader("Headings")
            st.write(headings)
        if paragraphs:
            st.subheader("Paragraphs")
            st.write(paragraphs)
        if links:
            st.subheader("Links")
            st.write(links)
        if images:
            st.subheader("Image URLs")
            st.write(images)
        st.write("Data saved to 'scraped_data.csv'")
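The listing above handles a single page. As described in the "How It Works" section, the recursive, same-domain crawl with keyword filtering and a progress bar can be layered on top of it. The sketch below reuses requests, BeautifulSoup, st, and is_scraping_allowed from the code above; crawl_site, max_pages, and keyword are illustrative names of my own, not part of the original listing:

from urllib.parse import urljoin, urlparse

def crawl_site(start_url, max_pages=10, keyword=None):
    # Breadth-first crawl restricted to the starting domain
    domain = urlparse(start_url).netloc
    to_visit, visited, results = [start_url], set(), []
    progress = st.progress(0)

    while to_visit and len(visited) < max_pages:
        url = to_visit.pop(0)
        if url in visited or not is_scraping_allowed(url):
            continue
        visited.add(url)

        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
        except requests.RequestException:
            continue

        soup = BeautifulSoup(response.content, 'html.parser')

        # Keep the page only if it matches the keyword filter (when one is given)
        text = soup.get_text(" ", strip=True)
        if keyword is None or keyword.lower() in text.lower():
            results.append((url, text))

        # Queue internal links that stay on the same domain
        for link in soup.find_all('a', href=True):
            absolute = urljoin(url, link['href'])
            if urlparse(absolute).netloc == domain and absolute not in visited:
                to_visit.append(absolute)

        # Update the Streamlit progress bar as pages are visited
        progress.progress(len(visited) / max_pages)

    return results

Each (url, text) pair can then be filtered further or rendered with st.write, just like the single-page results.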

⚠️ Ethical Reminder
Scraping is powerful, but always respect robots.txt and never overload a server.
Use time delays, user-agent headers, and never scrape private or login-protected areas.
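
For instance, a polite fetch loop with a delay between requests and a descriptive User-Agent might look like the sketch below; the header value, the two-second delay, and the example.com URLs are placeholders of my own, not recommendations from this project:

import time
import requests

HEADERS = {"User-Agent": "MyScraperBot/1.0 (contact: you@example.com)"}

for page_url in ["https://example.com/page1", "https://example.com/page2"]:
    response = requests.get(page_url, headers=HEADERS, timeout=10)
    # ... parse response.content here ...
    time.sleep(2)  # pause between requests so the server isn't hammered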

Screenshots: the Streamlit output and the generated scraped_data.csv file.

📌 Possible Improvements:
Want to take it even further?
Add sitemap.xml support to find all internal pages.
Integrate a headless browser like Selenium or Playwright.
Store data in a MongoDB or SQLite database (see the sketch after this list).
Add domain blocking and rate limiting.
Deploy with Streamlit Cloud.
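
As a starting point for the database idea above, here is a minimal sketch of persisting the same four columns with Python's built-in sqlite3 module; the save_to_sqlite name, the pages table, and its schema are my own choices, not part of the project:

import sqlite3
from itertools import zip_longest

def save_to_sqlite(headings, paragraphs, links, images, db_path="scraped_data.db"):
    # Create the table on first use, then append one row per extracted item
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS pages (heading TEXT, paragraph TEXT, link TEXT, image_url TEXT)"
        )
        conn.executemany(
            "INSERT INTO pages VALUES (?, ?, ?, ?)",
            zip_longest(headings, paragraphs, links, images, fillvalue=''),
        )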

🔚 Conclusion
You've just built a Final Boss-level web scraper; no more toy examples.
With keyword filters, recursion, and a live UI, you've taken scraping to the next level 💪

Whether you're building a research tool, monitoring content, or just flexing your skills, this scraper gives you a powerful base to expand from.

💬 Feedback?
Got ideas to improve it? Questions about deployment? Drop a comment or fork the code!
👉 Follow me on Dev.to for more Python + AI + DevTool content!
