Rust now has two serious headless browser libraries: chromiumoxide and headless_chrome. Both drive Chrome via the Chrome DevTools Protocol. Neither is as mature as Playwright or Puppeteer — but for certain use cases (performance-critical scrapers, systems already in Rust), they're worth evaluating.
This is a practical comparison, not a benchmark. All code examples tested on Chrome 122.
chromiumoxide — the more maintained option
chromiumoxide wraps the Chrome DevTools Protocol with async Rust. It's actively maintained and follows tokio's async patterns.
```toml
# Cargo.toml
[dependencies]
chromiumoxide = { version = "0.7", features = ["tokio-runtime"] }
tokio = { version = "1", features = ["full"] }
futures = "0.3"
```

Note the `tokio-runtime` feature: combining chromiumoxide's `async-std-runtime` feature with a tokio executor won't work. The `futures` crate is needed for `StreamExt` in the handler loop below.
Basic usage:
```rust
use chromiumoxide::{Browser, BrowserConfig};
use futures::StreamExt;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let (browser, mut handler) = Browser::launch(
        BrowserConfig::builder()
            .with_head() // launches a visible window; remove for headless (the default)
            .build()
            .map_err(|e| anyhow::anyhow!(e))?, // builder errors are plain Strings
    )
    .await?;

    // The handler loop must run in the background for the whole session;
    // it pumps CDP messages between your code and Chrome.
    let _task = tokio::spawn(async move {
        while let Some(event) = handler.next().await {
            let _ = event;
        }
    });

    let page = browser.new_page("https://example.com").await?;

    // Wait for navigation
    page.wait_for_navigation().await?;

    // Extract text
    let title = page
        .find_element("h1")
        .await?
        .inner_text()
        .await?
        .unwrap_or_default();
    println!("Title: {}", title);

    Ok(())
}
```
Clicking and form interaction:
```rust
// Click a button
page.find_element("#submit-btn").await?.click().await?;

// Type into an input
page.find_element("input[name='search']").await?
    .type_str("web scraping rust").await?;

// Execute JavaScript and deserialize the result
let result: serde_json::Value = page
    .evaluate("document.querySelectorAll('.item').length")
    .await?
    .into_value()?;
println!("Items found: {}", result);
```
headless_chrome — simpler but less maintained
headless_chrome offers a simpler, synchronous API, but the repository hasn't seen major updates recently. It still works for basic use cases.

```toml
[dependencies]
headless_chrome = "1.0"
```
```rust
use headless_chrome::Browser;

fn main() -> anyhow::Result<()> {
    let browser = Browser::default()?;
    let tab = browser.new_tab()?;

    tab.navigate_to("https://example.com")?
        .wait_until_navigated()?;

    let title = tab.find_element("h1")?.get_inner_text()?;
    println!("Title: {}", title);

    Ok(())
}
```
The synchronous API is simpler to reason about, but blocks the thread.
Handling JavaScript-heavy pages
For SPAs or pages that load content after initial HTML:
```rust
// chromiumoxide: wait for a specific element to appear
page.wait_for_element(".data-loaded").await?;

// Or bound the wait with a timeout
use tokio::time::{timeout, Duration};
timeout(Duration::from_secs(10), page.wait_for_element(".data-loaded")).await??;

// Inspect the page source after JS execution
let content = page.content().await?;
```
Proxy configuration
```rust
let config = BrowserConfig::builder()
    .arg("--proxy-server=http://proxy.example.com:8080")
    .arg("--ignore-certificate-errors")
    .build()
    .map_err(|e| anyhow::anyhow!(e))?;

let (browser, mut handler) = Browser::launch(config).await?;
```

One caveat: Chrome ignores credentials embedded in the `--proxy-server` URL (`user:pass@...` does not work). For authenticated proxies you need to answer the auth challenge over CDP or route through a local forwarding proxy that adds the credentials.
When to use Rust headless browsers
Use chromiumoxide/headless_chrome when:
- Your entire system is Rust and you want to avoid cross-language calls
- You need maximum performance with minimal memory per browser instance
- You have specialized resource constraints
Use Python/Playwright instead when:
- You want a stable, well-documented API
- You need auto-waiting (Playwright handles this; Rust libs don't)
- You need to maintain the code long-term without breaking changes risk
Use Apify actors when:
- You want managed infrastructure (proxy rotation, browser pools, scheduling)
- You don't want to run browsers on your own server
- You need to scale beyond a single machine
Real performance comparison
Rust headless browser advantages are real but narrow for typical scraping:
| Metric | chromiumoxide | Playwright Python | Difference |
|---|---|---|---|
| Startup time | ~800ms | ~1200ms | 33% faster |
| Memory per tab | ~80MB | ~120MB | 33% less |
| Pages/second (simple) | ~4 | ~3 | 25% faster |
| Developer hours to implement | 3x | 1x | — |
For most teams, the 25-33% performance gain doesn't justify the development overhead and fragility.
The practical recommendation
If you're already building a Rust service that needs to occasionally scrape a page, chromiumoxide is a reasonable choice. If you're building a scraper from scratch, start with Python + Playwright.
For production scraping at scale, managed actors handle the browser complexity entirely:
Apify Scrapers Bundle — $29 one-time
Includes actors for Google, LinkedIn, Amazon, and 27 other targets — all handling their own anti-bot logic.