# Selenium vs Playwright in 2026: Which Should You Use for Web Scraping?
Selenium was the default for 15 years. Playwright launched in 2020 and has been steadily taking over. In 2026, the question isn't "should I learn Selenium?" — it's "when does Selenium still make sense?"
## The 30-Second Summary

**Use Playwright for new projects.** It's faster, has better async support, auto-waits for elements (no more `time.sleep()`), and handles modern JavaScript apps better.

**Stick with Selenium if:** you have existing Selenium code, your team knows it deeply, or you need a specific browser or version Playwright doesn't support.
## Feature Comparison
| Feature | Selenium | Playwright |
|---|---|---|
| Speed | Slower | 2-3x faster |
| Auto-wait | No — manual sleeps needed | Yes — waits for elements automatically |
| Async support | Poor | Excellent (native async) |
| Browser support | Chrome, Firefox, Safari, Edge, IE | Chrome, Firefox, Safari, Edge |
| Network interception | Complex setup | Built-in, easy |
| Mobile emulation | Limited | Full device emulation |
| Shadow DOM | Requires workarounds | Native support |
| Detectability by anti-bots | High (easily detected) | Moderate (still detectable) |
| Learning curve | Low (years of docs/examples) | Low-medium |
| Community | Very large (15+ years) | Growing fast |
| Maintained by | Community (Software Freedom Conservancy) | Microsoft |
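The network-interception row deserves an example. Below is a minimal sketch of Playwright's `page.route`: abort requests for heavy assets, let everything else through. The blocked resource types and the helper names are my own choices for illustration, not anything Playwright mandates; the import is done lazily so the policy predicate can be tested without a browser installed.

```python
# Resource types to drop: a tuning choice, not a Playwright default
BLOCKED_TYPES = {"image", "font", "media"}

def should_block(resource_type: str) -> bool:
    """Pure policy predicate, kept separate so it is easy to test."""
    return resource_type in BLOCKED_TYPES

def fetch_html_without_assets(url: str) -> str:
    # Imported here so should_block() works without Playwright installed
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # route() sees every request; abort the heavy ones, pass the rest
        page.route(
            "**/*",
            lambda route: route.abort()
            if should_block(route.request.resource_type)
            else route.continue_(),
        )
        page.goto(url)
        html = page.content()
        browser.close()
        return html
```

Blocking images and fonts like this often cuts page-load time substantially on media-heavy sites, since the browser never fetches the bytes at all.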
## Code Comparison: Same Task
Find all product names on a page:
Selenium:
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://books.toscrape.com")

# No auto-wait: either sleep (fragile)...
# time.sleep(2)
# ...or use an explicit wait (verbose but reliable)
wait = WebDriverWait(driver, 10)
products = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h3 a")))
names = [p.text for p in products]
print(names[:5])
driver.quit()
```
Playwright:
```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://books.toscrape.com")
    # No manual waits needed: goto() waits for the page to load,
    # and locator actions (click, fill, etc.) auto-wait for their element
    products = page.locator("h3 a").all_text_contents()
    print(products[:5])
    browser.close()
```
Playwright is ~40% less code for the same task and handles waiting automatically.
## Performance Benchmark
Test: Load 20 pages sequentially, extract h1 from each.
```python
import time

# --- Selenium benchmark ---
from selenium import webdriver
from selenium.webdriver.common.by import By

urls = ["https://httpbin.org/html"] * 20

start = time.time()
# add_argument() returns None, so build the options object first
options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)
for url in urls:
    driver.get(url)
    _ = driver.find_element(By.TAG_NAME, "h1").text
driver.quit()
selenium_time = time.time() - start

# --- Playwright benchmark ---
from playwright.sync_api import sync_playwright

start = time.time()
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    for url in urls:
        page.goto(url)
        _ = page.locator("h1").text_content()
    browser.close()
playwright_time = time.time() - start

print(f"Selenium: {selenium_time:.1f}s")
print(f"Playwright: {playwright_time:.1f}s")
print(f"Playwright speedup: {selenium_time/playwright_time:.1f}x")
# Typical result: Selenium ~28s, Playwright ~11s (~2.5x faster)
```
## Async: Playwright's Biggest Advantage
Selenium has no real async support. Playwright is built for it:
```python
import asyncio
from playwright.async_api import async_playwright

async def scrape_page(browser, url: str) -> str:
    page = await browser.new_page()
    await page.goto(url)
    title = await page.title()
    await page.close()
    return title

async def scrape_all(urls: list) -> list:
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        # Scrape all URLs concurrently
        tasks = [scrape_page(browser, url) for url in urls]
        results = await asyncio.gather(*tasks)
        await browser.close()
        return results

urls = ["https://example.com"] * 20
results = asyncio.run(scrape_all(urls))
# Roughly 10x faster than sequential Selenium
```
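One caveat with the `gather()` call above: it opens a page for every URL at once, which gets expensive past a few dozen pages. A cap on concurrency keeps memory bounded. Here is a small sketch using only the standard library; `bounded_gather` is my own helper name, not an asyncio API.

```python
import asyncio

async def bounded_gather(coros, limit: int):
    """Like asyncio.gather, but at most `limit` coroutines run at once."""
    sem = asyncio.Semaphore(limit)

    async def run_one(coro):
        async with sem:  # blocks while `limit` tasks are already in flight
            return await coro

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(run_one(c) for c in coros))

# Inside scrape_all() above, you would replace
#     results = await asyncio.gather(*tasks)
# with something like:
#     results = await bounded_gather(tasks, limit=5)
```

A limit of 5 to 10 concurrent pages per browser is a reasonable starting point; tune it against your machine's memory and the target site's tolerance.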
## When Selenium Still Makes Sense

- **Legacy codebases:** if you have 50,000 lines of Selenium tests, the migration cost is real.
- **Selenium Grid / distributed scraping:** Selenium's grid infrastructure is more mature for distributed execution across many machines.
- **IE/legacy browser testing:** if you need to test on Internet Explorer or very old browsers.
- **Specific language bindings:** Selenium has official bindings for Java, Python, C#, Ruby, and JavaScript (plus a community-maintained PHP binding); Playwright covers JavaScript/TypeScript, Python, Java, and .NET (C#).
- **Very conservative teams:** Selenium has 15 years of Stack Overflow answers to lean on.
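For the Grid case, distributed scraping usually means two pieces: sharding the URL list across workers and pointing each worker at the hub instead of a local browser. A hedged sketch follows; `split_round_robin`, `remote_chrome`, and the hub URL default are illustrative choices, though `webdriver.Remote` itself is the real Selenium API for grid connections.

```python
def split_round_robin(urls, workers: int):
    """Deal URLs out evenly, one bucket per grid worker."""
    buckets = [[] for _ in range(workers)]
    for i, url in enumerate(urls):
        buckets[i % workers].append(url)
    return buckets

def remote_chrome(hub_url: str = "http://localhost:4444/wd/hub"):
    """Connect to a Selenium Grid hub instead of launching locally."""
    # Imported here so the sharding helper works without Selenium installed
    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    return webdriver.Remote(command_executor=hub_url, options=options)

# Each worker process then runs roughly:
#   driver = remote_chrome()
#   for url in my_bucket:
#       driver.get(url)
#       ...extract...
#   driver.quit()
```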
## Anti-Bot Detection: Neither Is Great
Both Selenium and Playwright are detectable by anti-bot systems:
```python
# Selenium detection check
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://bot.sannysoft.com")
# Will show multiple red flags
driver.quit()

# Playwright — slightly better by default, still detectable
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://bot.sannysoft.com")
    # Still shows webdriver: present
    browser.close()
```
For anti-bot bypass:
With Selenium:
```python
import undetected_chromedriver as uc

options = uc.ChromeOptions()
driver = uc.Chrome(options=options, headless=True)
# Patches navigator.webdriver and CDP signatures
```
With Playwright:
```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context()
    # Runs before any page script, hiding the webdriver flag
    context.add_init_script(
        "Object.defineProperty(navigator, 'webdriver', {get: () => undefined});"
    )
    page = context.new_page()
```
Or better — use camoufox (Firefox-based, C++ level patching):
```python
from camoufox.sync_api import Camoufox

with Camoufox(headless=True) as browser:
    page = browser.new_page()
    page.goto("https://target.com")
```
## Migrating from Selenium to Playwright
Common patterns:
```python
# Selenium → Playwright equivalents

# Finding elements
driver.find_element(By.CSS_SELECTOR, ".class")   # Selenium
page.locator(".class")                           # Playwright

# Clicking
element.click()                                  # Selenium
page.locator(".class").click()                   # Playwright (auto-waits)

# Typing
element.send_keys("text")                        # Selenium
page.locator("input").fill("text")               # Playwright

# Waiting
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "id"))
)                                                # Selenium
page.wait_for_selector("#id")                    # Playwright (simpler)

# Getting text
element.text                                     # Selenium
page.locator(".class").text_content()            # Playwright

# Getting an attribute
element.get_attribute("href")                    # Selenium
page.locator("a").get_attribute("href")          # Playwright

# Screenshots
driver.save_screenshot("shot.png")               # Selenium
page.screenshot(path="shot.png")                 # Playwright

# Executing JavaScript
driver.execute_script("return document.title")   # Selenium
page.evaluate("() => document.title")            # Playwright
```
## Installation and Setup
Selenium:
```shell
pip install selenium
# Selenium 4.6+ ships Selenium Manager, which downloads a matching
# ChromeDriver automatically; on older versions, manage the driver
# yourself or use: pip install webdriver-manager
```
Playwright:
```shell
pip install playwright
playwright install chromium  # Downloads the browser automatically
# That's it — no separate driver needed
```
Playwright's setup remains simpler: `playwright install` fetches a browser build tested against your library version, with no driver to match at all (Selenium Manager on 4.6+ narrows the gap by fetching drivers automatically).
## Decision Matrix
| Situation | Recommendation |
|---|---|
| New scraping project | Playwright |
| Existing Selenium codebase | Keep Selenium (or migrate gradually) |
| Need async/concurrent | Playwright |
| Anti-bot heavy sites | camoufox (both Selenium/Playwright struggle equally) |
| Team only knows Selenium | Selenium (then upskill gradually) |
| Need IE/legacy browser | Selenium |
| JavaScript-heavy SPAs | Playwright (better handling) |
| Simple page scraping | Neither — use curl_cffi instead |
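The last row is worth underlining: if the page is static, you don't need a browser at all. An HTTP client (curl_cffi, per the comparison article linked below) fetches the HTML, and extraction is then plain parsing work. A minimal sketch of the parsing half using only the standard library; `TextCollector` is an illustrative name, and in practice you would likely reach for BeautifulSoup or selectolax instead.

```python
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Collects the text inside every occurrence of `tag` — no browser needed."""

    def __init__(self, tag: str):
        super().__init__()
        self.tag = tag
        self._inside = False
        self.results = []

    def handle_starttag(self, tag, attrs):
        if tag == self.tag:
            self._inside = True
            self.results.append("")  # start a new entry

    def handle_endtag(self, tag):
        if tag == self.tag:
            self._inside = False

    def handle_data(self, data):
        if self._inside:
            self.results[-1] += data

html = "<h1>Products</h1><h3><a href='/a'>Book A</a></h3><h3><a href='/b'>Book B</a></h3>"
parser = TextCollector("h3")
parser.feed(html)
print(parser.results)  # ['Book A', 'Book B']
```

For a static page this runs in microseconds, versus seconds for a full browser load, which is the whole argument behind the "Neither" row.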
## The Bottom Line
If you're starting a new project in 2026: use Playwright. The auto-waiting alone eliminates an entire class of flaky scraper bugs. The async support means 10x throughput for the same code complexity.
If you have Selenium code that works: don't rewrite it unless you have a specific reason. "Playwright is newer" isn't a reason to rewrite working code.
## Related Articles
- Web Scraping Tools Comparison 2026: requests vs curl_cffi vs Playwright vs Scrapy — When to use which tool
- Python Web Scraping Tutorial for Beginners 2026 — Start here if new to scraping
- Web Scraping Without Getting Banned in 2026 — Anti-detection guide
Save hours on scraping setup: The $29 Apify Scrapers Bundle includes 35+ production-ready actors — Google SERP, LinkedIn, Amazon, TikTok, contact info, and more. Pre-configured inputs, working on day one.