Puppeteer in Rust: Chromiumoxide and Headless_Chrome vs the Python Alternative
Puppeteer-rs is now available for Rust — and it's getting attention from developers who want browser automation with Rust's performance characteristics. Here's what it actually gives you, when to use it, and when to stick with the Python ecosystem.
What's Available in Rust for Browser Automation
Three main options as of 2026:
1. chromiumoxide — The most complete Puppeteer-like API for Rust
```toml
# Cargo.toml
[dependencies]
# the tokio runtime is the default feature; named here for clarity
chromiumoxide = { version = "0.7", features = ["tokio-runtime"] }
tokio = { version = "1", features = ["full"] }
futures = "0.3"  # needed to drive the browser's event handler stream
```
2. headless_chrome — Simpler API, less maintained
```toml
[dependencies]
headless_chrome = "1"
```
3. playwright-rust — Thin wrapper around the Playwright Node.js binary
```toml
[dependencies]
playwright = "0.1"
```
chromiumoxide: Real Code Examples
```rust
use chromiumoxide::Browser;
use chromiumoxide::browser::BrowserConfig;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Launch browser. Headless is the default mode; `close()` later
    // requires a mutable handle, hence `mut browser`.
    let (mut browser, mut handler) = Browser::launch(
        BrowserConfig::builder().build()?
    ).await?;

    // Drive the CDP event loop in the background; the stream ends
    // when the browser shuts down.
    tokio::spawn(async move {
        while handler.next().await.is_some() {}
    });

    // Open page
    let page = browser.new_page("https://example.com").await?;

    // Get page content
    let content = page.content().await?;
    println!("HTML length: {}", content.len());

    // Find element and get text
    let title = page
        .find_element("h1")
        .await?
        .inner_text()
        .await?
        .unwrap_or_default();
    println!("Title: {}", title);

    // Take a full-page screenshot
    page.save_screenshot(
        chromiumoxide::page::ScreenshotParams::builder()
            .full_page(true)
            .build(),
        "screenshot.png",
    ).await?;

    browser.close().await?;
    Ok(())
}
```
Form Interaction
```rust
use chromiumoxide::Browser;
use chromiumoxide::browser::BrowserConfig;
use futures::StreamExt;

async fn fill_form(url: &str, query: &str) -> Result<String, Box<dyn std::error::Error>> {
    let (mut browser, mut handler) = Browser::launch(
        BrowserConfig::builder().build()?
    ).await?;
    tokio::spawn(async move {
        while handler.next().await.is_some() {}
    });

    let page = browser.new_page(url).await?;

    // Click the search input, type the query, then submit with Enter.
    // Key presses go through the element, not a page-level keyboard object.
    page.find_element("input[name='q']")
        .await?
        .click()
        .await?
        .type_str(query)
        .await?
        .press_key("Enter")
        .await?;

    // Wait for the results page
    page.wait_for_navigation().await?;
    let results = page.content().await?;

    browser.close().await?;
    Ok(results)
}
```
Concurrent Scraping
```rust
use chromiumoxide::Browser;
use chromiumoxide::browser::BrowserConfig;
use futures::stream::{self, StreamExt};

async fn scrape_urls(urls: Vec<String>) -> Vec<(String, String)> {
    let (mut browser, mut handler) = Browser::launch(
        BrowserConfig::builder().build().unwrap()
    ).await.unwrap();
    tokio::spawn(async move {
        while handler.next().await.is_some() {}
    });

    // Scrape 4 URLs concurrently. `Browser` is not Clone, so each
    // task borrows it; the borrow ends when the stream is consumed.
    let results = stream::iter(urls)
        .map(|url| {
            let browser = &browser;
            async move {
                let page = browser.new_page(&url).await.ok()?;
                let content = page.content().await.ok()?;
                page.close().await.ok()?;
                Some((url, content))
            }
        })
        .buffer_unordered(4) // at most 4 pages in flight
        .filter_map(|x| async { x })
        .collect::<Vec<_>>()
        .await;

    browser.close().await.unwrap();
    results
}
```
Performance Comparison: Rust vs Python
Here's where Rust actually helps — and where it doesn't:
| Metric | Python (Playwright) | Rust (chromiumoxide) |
|---|---|---|
| Browser startup | ~800ms | ~750ms |
| Page navigation | ~200-500ms | ~195-480ms |
| Memory per page | ~45MB | ~42MB |
| CPU at idle | ~2% | ~1% |
| 100 sequential pages | ~65s | ~62s |
| 100 concurrent pages | ~18s | ~15s |
Key finding: The bottleneck is network I/O and Chromium itself — not the language. Rust gives ~5-10% improvement at best for browser automation tasks. The Chrome/Chromium binary is the same in both cases.
Where Rust does help:
- Post-processing of scraped HTML (parsing, regex, data transformation)
- Building a scraping service that handles thousands of concurrent sessions
- Memory-critical deployments where every MB matters
- Embedding in larger Rust applications
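The post-processing point is worth making concrete. Once the HTML is in hand, extraction work runs at native speed with zero GC pauses. A deliberately minimal, std-only sketch (in practice you would likely reach for the `scraper` or `regex` crates):

```rust
/// Extract the inner text of every plain `<h1>...</h1>` pair in an HTML
/// string. Toy scanner: it ignores tags carrying attributes, which a real
/// parser would handle.
fn extract_h1s(html: &str) -> Vec<String> {
    let mut out = Vec::new();
    let mut rest = html;
    while let Some(start) = rest.find("<h1>") {
        // Skip past the opening tag
        rest = &rest[start + 4..];
        if let Some(end) = rest.find("</h1>") {
            out.push(rest[..end].trim().to_string());
            rest = &rest[end + 5..];
        } else {
            break; // unclosed tag: stop scanning
        }
    }
    out
}
```

Calling `extract_h1s("<h1>First</h1><p>x</p><h1> Second </h1>")` returns `["First", "Second"]`. This is the kind of tight loop where Rust beats Python by an order of magnitude, unlike the browser-bound work above.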
When to Use Rust vs Python for Browser Automation
Use Rust (chromiumoxide) when:
- You're building a production scraping service in Rust
- The rest of your stack is already Rust
- You need very high concurrency (1000+ pages) with minimal memory overhead
- Building a browser automation library/framework
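The high-concurrency case rests on bounding how much work is in flight at once, the same idea `buffer_unordered(4)` expresses in the scraping example. A hypothetical std-only sketch of that pattern, with threads and a shared queue standing in for pages so it runs without a browser:

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

/// Process all jobs with at most `workers` running concurrently.
fn process_all(jobs: Vec<String>, workers: usize) -> Vec<String> {
    let queue = Arc::new(Mutex::new(jobs));
    let (tx, rx) = mpsc::channel();
    let mut handles = Vec::new();
    for _ in 0..workers {
        let queue = Arc::clone(&queue);
        let tx = tx.clone();
        handles.push(thread::spawn(move || loop {
            // Pop one job; the lock is released as soon as this
            // statement ends, so workers don't serialize on it.
            let job = queue.lock().unwrap().pop();
            match job {
                Some(url) => tx.send(format!("fetched {url}")).unwrap(),
                None => break, // queue drained
            }
        }));
    }
    drop(tx); // the channel closes once every worker exits
    let results: Vec<String> = rx.into_iter().collect();
    for h in handles {
        h.join().unwrap();
    }
    results
}
```

In the async world, a `tokio::sync::Semaphore` or `buffer_unordered` plays the role of the worker count; the payoff of doing this in Rust is that each idle session costs kilobytes, not megabytes.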
Stick with Python (Playwright/camoufox) when:
- Prototyping or one-off scraping tasks
- Team is more comfortable with Python
- You need the broader ecosystem (BeautifulSoup, pandas, requests)
- Anti-bot bypass is the main challenge (Python ecosystem is more mature here)
- Fast iteration matters more than peak performance
Anti-Bot Considerations
The honest assessment: Rust gives no anti-bot advantage over Python.
Anti-bot systems detect the browser's JavaScript APIs, not the driver language. Chromiumoxide controls Chrome via CDP (Chrome DevTools Protocol) — the same protocol Playwright uses. A site can't tell if your CDP client is in Rust or Python.
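To see how language-agnostic this is: a CDP command is just a JSON message over a WebSocket, and Chromium receives the same bytes no matter which client library produced them. A navigation command looks roughly like this (field values illustrative):

```json
{
  "id": 1,
  "method": "Page.navigate",
  "params": { "url": "https://example.com" }
}
```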
For anti-bot bypass, you still need:
- Stealth patches (patching navigator.webdriver, canvas fingerprinting, etc.)
- Residential proxies
- Proper user agent and viewport settings
In Python, these are well-documented. In Rust:
```rust
// Add a stealth init script that runs before any page script
page.evaluate_on_new_document(
    r#"
    Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
    Object.defineProperty(navigator, 'plugins', {get: () => [1, 2, 3]});
    "#
).await?;
```
The main anti-bot tool (camoufox) is Firefox-based and Python-only — no Rust equivalent exists.
headless_chrome Crate (Alternative)
Simpler API but less maintained:
```rust
use headless_chrome::{Browser, LaunchOptions};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let browser = Browser::new(LaunchOptions {
        headless: true,
        ..Default::default()
    })?;

    let tab = browser.new_tab()?;
    tab.navigate_to("https://example.com")?.wait_until_navigated()?;

    let content = tab.get_content()?;
    println!("Content length: {}", content.len());

    let element = tab.find_element("h1")?;
    let title = element.get_inner_text()?;
    println!("Title: {}", title);

    Ok(())
}
```
Easier to get started with, but chromiumoxide is more actively maintained and closer to Puppeteer's API.
Practical Verdict
If you're already in Python: stay there. Playwright + camoufox covers everything chromiumoxide does, with better anti-bot tooling and a more mature ecosystem.
If you're building a Rust-first service: chromiumoxide is production-ready and the API is clean. The main limitation is lack of anti-bot plugins comparable to the Python ecosystem.
The Puppeteer-rs announcement is exciting for the Rust community, but for most web scraping use cases, the language choice matters much less than the proxy setup and fingerprint management.
Related Articles
- Web Scraping Tools Comparison 2026: requests vs curl_cffi vs Playwright vs Scrapy — Full tool comparison
- Reverse Engineering Cloudflare's React-Based Bot Detection — Advanced anti-bot techniques
- Web Scraping Without Getting Banned in 2026 — Full anti-detection playbook
Skip the browser maintenance
For production scraping, managed actors handle browser complexity, proxy rotation, and anti-bot for you:
Apify Scrapers Bundle — $29 one-time
30 pre-built actors, instant download, no server setup.