Taking screenshots of websites programmatically is one of those tasks that sounds simple but gets complicated fast. After building screenshot automation into several projects, here are the three approaches I've used, with honest pros and cons.
## Method 1: Selenium (The Classic)
```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless')
options.add_argument('--window-size=1280,800')

driver = webdriver.Chrome(options=options)
driver.get('https://example.com')
driver.save_screenshot('screenshot.png')
driver.quit()
```
Pros: Battle-tested, huge community, handles JavaScript-heavy pages.
Cons: Requires Chrome + ChromeDriver installed and version-matched. Breaks constantly when Chrome updates. Memory-hungry — each instance eats 200-400MB.
When to use: Testing your own app, one-off scripts, when you need to interact with the page before screenshotting.
## Method 2: Playwright (The Modern Choice)
```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={'width': 1280, 'height': 800})
    page.goto('https://example.com')
    page.screenshot(path='screenshot.png', full_page=True)
    browser.close()
```
Pros: Auto-manages browser binaries. Better API than Selenium. Full-page screenshots built in. Faster.
Cons: Still runs a full browser (~300MB RAM). `playwright install` downloads 400MB+ of browsers. Not great for serverless.
When to use: Serious automation, CI pipelines, when you need full-page captures or PDF generation.
## Method 3: Screenshot API (The Lazy Way)
```python
import requests

response = requests.get(
    'https://rendly-api.fly.dev/api/v1/screenshots',
    params={'url': 'https://example.com', 'width': 1280, 'height': 800, 'format': 'png'},
    headers={'Authorization': 'Bearer YOUR_API_KEY'},
)
response.raise_for_status()  # don't write an error page to disk as a .png

with open('screenshot.png', 'wb') as f:
    f.write(response.content)
```
Pros: No browser to install or maintain. Works anywhere (serverless, edge, mobile). Sub-3 second responses. No memory overhead.
Cons: Requires network access. Rate limits on free tiers. You're trusting a third party.
When to use: Production apps, serverless functions, when you don't want to manage browser infrastructure.
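For the serverless case you don't even need `requests` — the standard library is enough. Here's a sketch of a Lambda-style function; the `handler` signature and event shape are assumptions, and the endpoint is the same one as above:

```python
import urllib.parse
import urllib.request

API_ENDPOINT = 'https://rendly-api.fly.dev/api/v1/screenshots'

def build_screenshot_request(target_url, api_key, width=1280, height=800, fmt='png'):
    """Build the GET request with stdlib only — no third-party deps to bundle."""
    query = urllib.parse.urlencode({
        'url': target_url, 'width': width, 'height': height, 'format': fmt,
    })
    return urllib.request.Request(
        f'{API_ENDPOINT}?{query}',
        headers={'Authorization': f'Bearer {api_key}'},
    )

def handler(event, context):
    # Hypothetical AWS Lambda entry point; real Lambda responses need
    # base64-encoded bodies for binary content.
    req = build_screenshot_request(event['url'], event['api_key'])
    with urllib.request.urlopen(req, timeout=10) as resp:
        return {'statusCode': 200, 'body': resp.read()}
```

The whole deployment package is a few kilobytes, versus hundreds of megabytes for a bundled headless browser.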
## The Real Comparison
| | Selenium | Playwright | API |
|---|---|---|---|
| Setup time | 15-30 min | 5 min | 2 min |
| Dependencies | Chrome + Driver | ~400MB browsers | requests |
| RAM per screenshot | 200-400MB | 200-300MB | ~0 |
| Serverless-friendly | ❌ | ❌ (usually) | ✅ |
| Full-page capture | Manual scroll | Built-in | Built-in |
| Cost | Free | Free | Free tier, then $9/mo+ |
## My Take
For local scripts and testing, Playwright wins. The API is cleaner than Selenium, and auto-managed browsers save headaches.
For production apps, especially serverless, use an API. I built Rendly specifically because I was tired of managing headless Chrome in Docker containers. It handles the browser infrastructure so you don't have to.
For anything involving page interaction (clicking buttons, filling forms), you still need Selenium or Playwright — APIs just capture what's visible.
## Bonus: Async Screenshots in Python
If you're taking many screenshots, async makes a huge difference:
```python
import aiohttp
import asyncio

async def screenshot(session, url):
    params = {'url': url, 'format': 'png'}
    headers = {'Authorization': 'Bearer YOUR_API_KEY'}
    async with session.get(
        'https://rendly-api.fly.dev/api/v1/screenshots',
        params=params, headers=headers
    ) as resp:
        return await resp.read()

async def main():
    urls = ['https://github.com', 'https://dev.to', 'https://news.ycombinator.com']
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*[screenshot(session, u) for u in urls])
    for i, data in enumerate(results):
        with open(f'screenshot_{i}.png', 'wb') as f:
            f.write(data)

asyncio.run(main())
```
10 screenshots sequentially: ~25 seconds. With async: ~4 seconds.
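One caveat: firing dozens of requests at once is a fast way to hit the rate limits mentioned earlier. An `asyncio.Semaphore` caps how many are in flight at a time. Here's a self-contained sketch where `fake_screenshot` stands in for the real HTTP call:

```python
import asyncio

async def bounded_gather(coro_factories, limit=5):
    """Run coroutines concurrently, but with at most `limit` in flight."""
    sem = asyncio.Semaphore(limit)

    async def bounded(factory):
        async with sem:
            return await factory()

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*[bounded(f) for f in coro_factories])

async def demo():
    async def fake_screenshot(i):
        await asyncio.sleep(0.01)  # stand-in for the real request
        return f'screenshot_{i}.png'

    return await bounded_gather(
        [lambda i=i: fake_screenshot(i) for i in range(10)], limit=3)

results = asyncio.run(demo())
```

Swap `fake_screenshot` for the `screenshot(session, url)` coroutine above and you get the async speedup without tripping the rate limiter.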
What's your go-to method for website screenshots? I'm curious if anyone's using something I haven't tried.