Custodia-Admin

Web Scraping API for Python: Extract Data Without Beautiful Soup or Selenium

Every Python developer has written a web scraper: Beautiful Soup + Requests for static pages, Selenium + headless Chrome for JavaScript-rendered content. Both approaches break in the same ways: network timeouts, brittle selectors, hand-rolled pagination logic, JavaScript rendering overhead, rate-limit walls.

PageBolt's /extract endpoint returns clean, structured data from any URL in one API call.

The Python Web Scraping Problem

Beautiful Soup + Requests:

import requests
from bs4 import BeautifulSoup
import time

urls = ['https://example.com/product/{}'.format(i) for i in range(1, 100)]
products = []

for url in urls:
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')

    # You now manually parse every variation of HTML structure
    title = soup.find('h1', class_='product-title')
    price = soup.find('span', class_='price')

    # Rate limiting, retries, error handling are all on you;
    # find() returns None when the layout changes, so guard before .text
    products.append({
        'title': title.text if title else None,
        'price': price.text if price else None,
    })
    time.sleep(1)  # Don't get blocked

Problems: Brittle CSS selectors, no JavaScript rendering, manual rate limiting, maintenance burden.

Selenium + headless Chrome:

from selenium import webdriver
from selenium.webdriver.common.by import By
import time

driver = webdriver.Chrome()
products = []

for url in urls:  # same product URLs as the previous example
    driver.get(url)
    time.sleep(3)  # Wait for JS to render

    title = driver.find_element(By.CSS_SELECTOR, 'h1.product-title').text
    price = driver.find_element(By.CSS_SELECTOR, '.price').text

    products.append({'title': title, 'price': price})

driver.quit()

Problems: 300MB+ Chrome per instance, memory leaks, 5–10 second startup per page, infrastructure costs, fragile waits.

The API Solution

PageBolt /extract returns Markdown-formatted, structured data:

import os
import requests

api_key = os.environ['PAGEBOLT_KEY']

response = requests.post(
    'https://api.pagebolt.dev/v1/extract',
    headers={'Authorization': f'Bearer {api_key}'},
    json={
        'url': 'https://example.com/product/123',
        'options': {
            'include_tables': True,
            'include_images': True,
            'include_links': True,
            'max_length': 5000
        }
    }
)

data = response.json()
print(data['content'])  # Clean Markdown
print(f"Extracted in {data['extraction_time_ms']}ms")

Returns:

{
  "url": "https://example.com/product/123",
  "content": "# Product Title\n\nPrice: $49.99\n\nDescription: ...",
  "word_count": 342,
  "extraction_time_ms": 847
}
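In production you'll also want timeouts and retries around the call. Here's a minimal sketch of a wrapper, assuming conventional HTTP semantics; treating 429 and 5xx as retryable is my assumption, not documented PageBolt behavior:

import os
import time
import requests

api_key = os.environ['PAGEBOLT_KEY']

def extract(url, options=None, retries=3):
    """POST to /extract with simple exponential backoff."""
    payload = {'url': url}
    if options:
        payload['options'] = options

    for attempt in range(retries):
        res = requests.post(
            'https://api.pagebolt.dev/v1/extract',
            headers={'Authorization': f'Bearer {api_key}'},
            json=payload,
            timeout=30,
        )
        if res.status_code == 200:
            return res.json()
        if res.status_code in (429, 502, 503):
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s
            continue
        res.raise_for_status()  # non-retryable client error
    raise RuntimeError(f'Extraction failed after {retries} attempts: {url}')

data = extract('https://example.com/product/123', {'include_tables': True})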

Real-World Examples

1. Bulk Product Monitoring

import os
import json
import requests
from datetime import datetime

competitors = {
    'amazon': 'https://amazon.com/s?k=widget',
    'ebay': 'https://ebay.com/sch/i.html?_nkw=widget',
    'aliexpress': 'https://aliexpress.com/wholesale?SearchText=widget'
}

api_key = os.environ['PAGEBOLT_KEY']
headers = {'Authorization': f'Bearer {api_key}'}

results = {}

for source, url in competitors.items():
    res = requests.post(
        'https://api.pagebolt.dev/v1/extract',
        headers=headers,
        json={'url': url, 'options': {'max_length': 10000}}
    )

    extraction = res.json()
    results[source] = {
        'extracted_at': datetime.utcnow().isoformat(),
        'content': extraction['content'],
        'time_ms': extraction['extraction_time_ms']
    }

# Parse with Claude, store in DB (see the SQLite sketch below)
with open('competitor_data.json', 'w') as f:
    json.dump(results, f)
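The storage step is left open above; one way to fill it is SQLite from the standard library. A minimal sketch, where the snapshots schema is mine and purely illustrative:

import sqlite3

conn = sqlite3.connect('competitors.db')
conn.execute("""
    CREATE TABLE IF NOT EXISTS snapshots (
        source TEXT,
        extracted_at TEXT,
        content TEXT,
        time_ms INTEGER
    )
""")
# One row per competitor per run, from the results dict built above
for source, snap in results.items():
    conn.execute(
        'INSERT INTO snapshots VALUES (?, ?, ?, ?)',
        (source, snap['extracted_at'], snap['content'], snap['time_ms'])
    )
conn.commit()
conn.close()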

2. News Aggregator Pipeline

import asyncio
import httpx  # requests is synchronous; an async HTTP client is needed here

async def extract_article(client, url, api_key):
    res = await client.post(
        'https://api.pagebolt.dev/v1/extract',
        headers={'Authorization': f'Bearer {api_key}'},
        json={'url': url}
    )
    return res.json()

async def aggregate(urls, api_key):
    async with httpx.AsyncClient(timeout=30) as client:
        tasks = [extract_article(client, url, api_key) for url in urls]
        return await asyncio.gather(*tasks)

# Process 50 articles in parallel, <50 seconds
articles = asyncio.run(aggregate(article_urls, api_key))
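Firing all 50 requests at once can trip rate limits. A bounded variant with asyncio.Semaphore, where the limit of 10 is an arbitrary choice to tune against your plan:

import asyncio
import httpx

async def aggregate_bounded(urls, api_key, limit=10):
    sem = asyncio.Semaphore(limit)  # cap concurrent requests in flight

    async def one(client, url):
        async with sem:
            res = await client.post(
                'https://api.pagebolt.dev/v1/extract',
                headers={'Authorization': f'Bearer {api_key}'},
                json={'url': url},
            )
            return res.json()

    async with httpx.AsyncClient(timeout=30) as client:
        return await asyncio.gather(*(one(client, u) for u in urls))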

3. CI/CD Data Validation

import os
import sys
import requests

api_key = os.environ['PAGEBOLT_KEY']
product_id = os.environ['PRODUCT_ID']  # hypothetical: however your CI passes it

# Verify staging site renders product data correctly
res = requests.post(
    'https://api.pagebolt.dev/v1/extract',
    headers={'Authorization': f'Bearer {api_key}'},
    json={'url': f'https://staging.yourapp.com/product/{product_id}'}
)

data = res.json()

# Fail deploy if critical data missing
required_fields = ['$99.99', 'Product Name', 'In Stock']

missing = [f for f in required_fields if f not in data['content']]

if missing:
    print(f"❌ Staging validation failed. Missing: {missing}")
    sys.exit(1)

print("✅ Staging data validation passed")

Why Not Self-Hosted Scraping?

Beautiful Soup/Requests:

  • Manual HTML parsing per site
  • No JavaScript rendering
  • You manage retries, rate limits, proxies
  • Breaks on layout changes

Selenium/Puppeteer:

  • 300MB Chrome per instance
  • 5–10 second startup per page
  • Infrastructure costs ($100+/month at scale)
  • Memory leaks in production

PageBolt /extract:

  • <1KB per request
  • 0.5–2 seconds per page
  • $29/month for 10,000 extractions (see Pricing below)
  • Automatic updates (no maintenance)

Pricing

  • Free: 50 extractions/month
  • Starter: $29/month → 10,000/month
  • Scale: $99/month → 100,000/month

Next Steps

  1. Get API key: pagebolt.dev/pricing
  2. Read Python guide: pagebolt.dev/docs#extract
  3. Run first extraction: curl https://api.pagebolt.dev/v1/extract ...

Try free — 50 extractions/month, no credit card.
