Ever needed to capture website screenshots programmatically? Maybe for generating link previews, monitoring visual changes, or building a testing pipeline? I built a Screenshot API that handles all of this with a single GET request.
In this post, I'll walk through the architecture and how you can use it.
## The Problem
Taking website screenshots sounds simple, but doing it reliably at scale is tricky:
- Sites use lazy loading, animations, and dynamic content
- Bot detection blocks headless browsers
- Different devices need different viewports
- Cookie banners and ads clutter the output
- You need to handle timeouts, errors, and edge cases
Building this into every project is a waste of time. So I wrapped it all into one API.
## The Stack
- Python 3.13 + FastAPI for the web framework
- Playwright (Chromium) for headless browser rendering
- playwright-stealth to bypass bot detection
- Pillow for WebP conversion
- Pydantic for request validation
## How It Works
A single long-lived Chromium process runs in the background. Each request gets a fresh browser context (fast startup, full isolation). Here's the flow:
1. Validate request params with Pydantic (21 configurable options)
2. Create a new browser context with viewport, device emulation, headers
3. Apply stealth mode to bypass bot detection
4. Optionally block ads via route interception
5. Navigate to the URL
6. Hide cookie banners, inject custom CSS/JS
7. Take the screenshot (full page, element, or clip region)
8. Return raw image bytes
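The steps above can be sketched with Playwright's async API. This is a simplified sketch, not the service's actual code: the `browser` argument stands in for the long-lived `playwright.async_api.Browser` launched at startup, and the stealth and banner-hiding steps are shown only as placeholder comments.

```python
async def capture(browser, url: str, width: int = 1280,
                  height: int = 720, full_page: bool = False) -> bytes:
    # Fresh context per request: fast to create, fully isolated from other requests
    context = await browser.new_context(
        viewport={"width": width, "height": height}
    )
    page = await context.new_page()
    # playwright-stealth patches would be applied to `page` here
    await page.goto(url if "://" in url else f"https://{url}")
    # cookie-banner hiding and custom CSS/JS injection would go here
    shot = await page.screenshot(full_page=full_page)
    await context.close()
    return shot
```

Reusing one browser process while creating a context per request is what keeps per-request overhead low without sacrificing isolation.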
## Quick Start
Capture any website with a simple GET request:
```
GET /screenshot?url=example.com&format=png&full_page=true
```
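From a client, that's one HTTP call. Here's a minimal Python sketch; the host and auth header values are placeholders for whatever RapidAPI assigns to your subscription, not the real endpoint:

```python
import requests

# Placeholder values: substitute the host and key RapidAPI shows for your plan
API = "https://your-subdomain.p.rapidapi.com/screenshot"
HEADERS = {"X-RapidAPI-Key": "YOUR_KEY"}

def fetch_screenshot(url: str, **params) -> bytes:
    """GET the screenshot endpoint and return raw image bytes."""
    resp = requests.get(API, headers=HEADERS,
                        params={"url": url, **params}, timeout=30)
    resp.raise_for_status()
    return resp.content

# Usage:
# open("shot.png", "wb").write(
#     fetch_screenshot("example.com", format="png", full_page="true"))
```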
## Key Parameters
| Parameter | Default | Description |
|---|---|---|
| url | required | Website URL to capture |
| width | 1280 | Viewport width (320-3840) |
| height | 720 | Viewport height (200-2160) |
| format | png | Output: png, jpeg, or webp |
| full_page | false | Capture entire scrollable page |
| device | - | Preset: mobile, tablet, desktop |
| dark_mode | false | Force dark color scheme |
| block_ads | false | Block ad network requests |
| hide_cookie_banners | false | Hide GDPR/cookie popups |
| delay | 0 | Wait before capture (0-15s) |
| selector | - | CSS selector to screenshot |
| custom_css | - | Inject custom styles |
| custom_js | - | Run JS before capture |
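Validation like the table above is natural to express as a Pydantic model. A trimmed-down sketch follows; the field names mirror the table, but this is not the API's actual model:

```python
from typing import Literal, Optional
from pydantic import BaseModel, Field

class ScreenshotParams(BaseModel):
    url: str
    width: int = Field(1280, ge=320, le=3840)
    height: int = Field(720, ge=200, le=2160)
    format: Literal["png", "jpeg", "webp"] = "png"
    full_page: bool = False
    device: Optional[Literal["mobile", "tablet", "desktop"]] = None
    dark_mode: bool = False
    block_ads: bool = False
    hide_cookie_banners: bool = False
    delay: float = Field(0, ge=0, le=15)
    selector: Optional[str] = None
    custom_css: Optional[str] = None
    custom_js: Optional[str] = None
```

With FastAPI, a model like this also feeds the auto-generated OpenAPI docs, so out-of-range values are rejected with a descriptive 422 before a browser context is ever created.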
## Device Emulation
Need a mobile screenshot? Just set the device parameter:
```
GET /screenshot?url=example.com&device=mobile
```
This automatically sets the right viewport, user agent, scale factor, and touch support for an iPhone-class device.
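Internally, a device preset maps to a bundle of browser-context options. The values below are illustrative guesses, not the API's actual presets:

```python
# Illustrative device presets; the API's actual values may differ.
DEVICE_PRESETS = {
    "mobile": {  # roughly an iPhone
        "viewport": {"width": 390, "height": 844},
        "device_scale_factor": 3,
        "is_mobile": True,
        "has_touch": True,
    },
    "tablet": {
        "viewport": {"width": 820, "height": 1180},
        "device_scale_factor": 2,
        "is_mobile": True,
        "has_touch": True,
    },
    "desktop": {
        "viewport": {"width": 1920, "height": 1080},
        "device_scale_factor": 1,
        "is_mobile": False,
        "has_touch": False,
    },
}

# These kwargs feed straight into browser.new_context(**DEVICE_PRESETS[device])
```

Playwright also ships its own device registry (`playwright.devices["iPhone 13"]` and similar), which is another way to source these option bundles.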
## Full Page with Ad Blocking
```
GET /screenshot?url=news-site.com&full_page=true&block_ads=true&hide_cookie_banners=true
```
## Architecture Highlights
**Stealth Mode:** Every page gets playwright-stealth applied, which patches common bot-detection vectors like `navigator.webdriver`, Chrome plugin arrays, and WebGL vendor strings.
**Ad Blocking:** Route interception checks requests against 40+ ad network domains using a `frozenset` for O(1) lookups. No external filter lists needed.
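A sketch of that lookup (the domains below are sample entries only, and the helper name is mine):

```python
from urllib.parse import urlparse

# Sample entries; the real set holds 40+ ad-network domains
AD_DOMAINS = frozenset({
    "doubleclick.net",
    "googlesyndication.com",
    "adnxs.com",
    "taboola.com",
})

def is_ad_request(url: str) -> bool:
    """True if the URL's registrable domain is a known ad network."""
    host = (urlparse(url).hostname or "").lower()
    # Check the last two labels with an O(1) frozenset membership test,
    # so subdomains like ad.doubleclick.net also match
    return ".".join(host.split(".")[-2:]) in AD_DOMAINS
```

In Playwright this would be wired up with something like `page.route("**/*", handler)`, where the handler aborts matching requests and continues the rest.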
**Concurrency:** A semaphore caps concurrent screenshots at 10, preventing memory spikes while still sustaining good throughput.
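The throttle is a few lines with asyncio; this is a sketch under the assumption of an async service, and the names are mine:

```python
import asyncio

MAX_CONCURRENT = 10
_screenshot_slots = asyncio.Semaphore(MAX_CONCURRENT)

async def capture_throttled(url: str) -> bytes:
    # At most MAX_CONCURRENT captures render at once; extra requests wait here
    async with _screenshot_slots:
        await asyncio.sleep(0)  # stand-in for the actual browser work
        return b"<image bytes>"
```

Requests beyond the cap simply queue on the semaphore instead of failing, which smooths out bursts at the cost of added latency.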
**WebP Support:** Playwright doesn't support WebP natively, so PNG screenshots get converted via Pillow when WebP format is requested.
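That conversion is only a few lines with Pillow. A sketch (the `quality` default is my assumption, not the API's documented value):

```python
from io import BytesIO
from PIL import Image

def png_to_webp(png_bytes: bytes, quality: int = 80) -> bytes:
    """Re-encode a PNG screenshot as lossy WebP, entirely in memory."""
    image = Image.open(BytesIO(png_bytes))
    out = BytesIO()
    image.save(out, format="WEBP", quality=quality)
    return out.getvalue()
```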
## Try It Out
The API is live and available on RapidAPI Hub - search for "Website Screenshot - URL to Image". There's a free tier with 100 requests/month so you can test it out.
Pricing is straightforward:
- Basic: Free - 100 requests/month
- Pro: $7/month - 15,000 requests
- Ultra: $15/month - 150,000 requests
- Mega: $39/month - 1,500,000 requests
## Wrapping Up
If you're building anything that needs website screenshots - link previews, visual regression testing, social media cards, PDF generation - give it a try. One API call, tons of options, and it just works.
Happy to answer any questions in the comments!