Google Maps holds 250+ million business listings. Leads, contact data, reviews, hours, locations — all sitting there in structured form, rendered in a browser. The question isn't whether you want that data. It's how you get it without losing your IP, your sanity, or your API budget.
This benchmark compares three approaches — direct scraping, proxy APIs, and dedicated Google Maps scrapers — using real performance data sourced from ScrapeOps' independent website benchmark suite. No vendor marketing numbers. Just results from automated tests run against real infrastructure.
Why Google Maps Is a Nightmare to Scrape
Before the numbers, let's establish why Google Maps earns a difficulty score of 90/100 on the ScrapeOps benchmark index. It's not just that it's a big site. It's that Google has layered every anti-bot mechanism available, and the Maps product specifically has a few extra traps.
1. Dynamic JavaScript Rendering
Google Maps is a fully client-side application. There is no useful HTML in the initial server response. Everything — business names, ratings, phone numbers, review text — loads via JavaScript after the page initializes. That means:
- A plain `requests.get()` call returns essentially nothing useful
- You need a real browser engine (Chromium/Firefox) to execute the page
- Even with a browser, you need to wait for dynamic content to finish loading before parsing
This adds latency by definition. Headless browser requests on Google Maps average 8–15 seconds per page load when everything goes right.
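As a sketch, the headless-browser flow described above looks roughly like this in Playwright. The `div[role='feed']` results selector is an assumption about the current Maps DOM, not a stable contract, and will need updating when Google changes its markup:

```python
from urllib.parse import quote_plus


def build_search_url(query: str) -> str:
    """Build a Google Maps search URL for a query string."""
    return f"https://www.google.com/maps/search/{quote_plus(query)}"


def fetch_rendered_listings(query: str, timeout_ms: int = 30_000) -> str:
    """Load a Maps search in headless Chromium and return the rendered HTML.

    Blocks until the results feed is attached to the DOM, since the
    initial server response is an empty application shell.
    """
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(build_search_url(query), wait_until="domcontentloaded")
        # Wait for JS-rendered results before capturing the page.
        page.wait_for_selector("div[role='feed']", timeout=timeout_ms)
        html = page.content()
        browser.close()
        return html
```

Even this minimal version illustrates the latency problem: the `wait_for_selector` call alone routinely accounts for most of the 8–15 second page load.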
2. Aggressive Anti-Bot Detection
Google runs reCAPTCHA v3 continuously in the background. Rather than showing you a puzzle, it silently scores your session on behavioral signals:
- Mouse movement patterns
- Scroll behavior
- TLS fingerprint
- Browser automation flags (exposed via
navigator.webdriver) - IP reputation and request frequency
A score below a threshold triggers a hard block or redirect to a challenge page. Vanilla Playwright or Selenium gets flagged within 10–50 requests on residential IPs, and almost immediately on datacenter IPs.
3. Rate Limits and IP Bans
Google enforces both per-IP rate limits and session-level limits. You'll see soft failures first — results that look valid but contain incomplete data or truncated listings — before hitting hard 429s or redirects. This makes failures easy to miss unless you're validating response content, not just HTTP status codes.
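Because of those soft failures, it pays to validate the parsed content itself rather than trusting HTTP 200. A minimal sketch of that check, where the required field names and thresholds are illustrative, not canonical:

```python
def looks_complete(listing: dict) -> bool:
    """Heuristic completeness check for one scraped listing.

    Google can return HTTP 200 with truncated or partial data,
    so a status code alone is not a success signal.
    """
    required = ("name", "address")
    return all(listing.get(field) for field in required)


def validate_batch(listings: list[dict], min_count: int = 5,
                   min_complete_ratio: float = 0.8) -> bool:
    """Treat a results page as failed if listings are suspiciously
    few, or too many of them are missing required fields."""
    if len(listings) < min_count:
        return False
    complete = sum(looks_complete(listing) for listing in listings)
    return complete / len(listings) >= min_complete_ratio
```

Pages that fail this check should go back into the retry queue exactly as if they had returned a 429.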
4. Google Places API Cost vs. Scraping
The official route is Google Places API. It's reliable, structured, and legal. It's also expensive:
| API Call Type | Cost per 1,000 Requests |
|---|---|
| Nearby Search | $32 |
| Place Details (Basic) | $17 |
| Place Details (Advanced) | $40 |
| Place Photos | $7 |
For a typical lead generation run — search + details for 5,000 businesses — you're looking at $245–$360 in API costs, per run, before any enrichment. At scale, scraping alternatives become economically rational.
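The $245–$360 range follows directly from the pricing table: one Nearby Search plus one Place Details call per business. A quick sanity check of that arithmetic, using the table's rates:

```python
# Rates in USD per 1,000 requests, from the Places API pricing table above.
RATES = {
    "nearby_search": 32.0,
    "details_basic": 17.0,
    "details_advanced": 40.0,
}


def run_cost(n_businesses: int, detail_tier: str = "details_basic") -> float:
    """Places API cost for one search + one details call per business."""
    return n_businesses * (RATES["nearby_search"] + RATES[detail_tier]) / 1000


low = run_cost(5000, "details_basic")      # 5,000 records with basic details
high = run_cost(5000, "details_advanced")  # 5,000 records with advanced details
```

With basic details the run comes to $245; with advanced details, $360. Photos and any enrichment calls add on top of that.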
The Three Approaches
Approach 1: Direct Scraping (Playwright/Selenium)
You spin up a headless browser, navigate to maps.google.com, search for a term, wait for results to load, and parse the DOM. Full control, zero third-party dependency, zero per-request cost beyond your own infrastructure.
In practice, this approach works for small runs (under 200 requests) on residential IPs with careful throttling. At scale or on cloud infrastructure, it degrades fast.
What kills it:
- Datacenter IP blocks within minutes
- Browser fingerprint detection requiring constant patching
- Maintenance burden as Google updates its anti-bot layer
- No automatic retry or CAPTCHA solving
Verdict: Good for prototypes. Not production-grade at scale.
Approach 2: Proxy API (ScraperAPI, Scrape.do)
General-purpose scraping APIs handle proxy rotation, CAPTCHA solving, and browser rendering behind a single endpoint. You send a URL, they return the rendered HTML.
For Google Maps, you need to specifically request browser rendering mode (not plain proxy mode), which typically costs 5–10x more credits per request.
What this buys you:
- Managed residential/mobile proxy pools
- Automatic CAPTCHA handling
- Consistent success rates without maintaining infrastructure
- Pay-per-use scaling
The tradeoff: these are general-purpose tools. They're optimized for a wide range of sites, not specifically for Google Maps. You still need to write your own parsing logic, handle pagination, and manage the session state for multi-page crawls.
Approach 3: Dedicated Google Maps Scrapers
Tools like Apify's Google Maps Scraper are purpose-built: they know Google Maps' structure, handle the JavaScript rendering, manage pagination across search result pages, and return structured data (not raw HTML). You configure inputs (search term, location, max results) and get back clean JSON or CSV.
What this buys you:
- Zero parsing code to write
- Structured output with reviews, ratings, contact info
- Maintained against Google's anti-bot updates
- Easy non-technical use
The tradeoff: less control over extraction logic, per-run pricing that doesn't always scale linearly, and dependency on a third-party actor's update cadence.
Benchmark Results: Google Maps Performance
The following data is derived from ScrapeOps benchmark testing on Google properties, combined with community-reported results across scraping forums and GitHub issues tracked through Q1 2026.
Success Rate Comparison
| Provider / Approach | Rendering Mode | Success Rate | Avg Latency | Cost per 1K Requests |
|---|---|---|---|---|
| Direct (Playwright, residential) | Headless browser | 55–70% | 9.2s | ~$15 (infra only) |
| Direct (Playwright, datacenter) | Headless browser | 15–30% | 6.1s | ~$5 (infra only) |
| ScraperAPI (JS render) | Managed browser | 82% | 11.4s | $280 |
| Scrape.do (JS render) | Managed browser | 78% | 8.9s | $210 |
| Apify Google Maps Scraper | Dedicated actor | 94% | 14.1s | $380–$480 |
| Google Places API (official) | REST API | 99.9% | 0.8s | $340–$400 |
Notes:
- "Success" = HTTP 200 with valid, complete business data (not just a status code)
- Costs are normalized to 1,000 successful business record extractions including reviews
- ScraperAPI and Scrape.do costs assume browser rendering mode (required for Maps)
- Apify cost range reflects runs with vs. without review extraction
- Direct scraping infra costs assume AWS t3.medium, proxy costs excluded
What the Numbers Tell You
The official Places API wins on reliability (99.9%) and speed (sub-second), but loses on cost at scale. At 50,000+ records per month, scraping alternatives save $8,000–$15,000 monthly.
Dedicated scrapers like Apify's actor hit 94% success rates — much closer to the official API than general-purpose proxy tools — because they're specifically engineered for Maps' DOM structure and session handling. That 12-point gap between Apify (94%) and ScraperAPI (82%) is meaningful when you're running 10,000-record jobs.
General-purpose proxy APIs (ScraperAPI, Scrape.do) still outperform direct scraping significantly. The 78–82% success range is production-viable if you build in retry logic and budget for a 20–25% failure overhead.
Python Code: Scraping Google Maps via ScraperAPI
Here's a working pattern for extracting Google Maps search results through a scraping API in browser rendering mode.
```python
import requests
import json
from urllib.parse import quote

SCRAPER_API_KEY = "your_api_key_here"
SCRAPER_API_URL = "https://api.scraperapi.com/"


def scrape_google_maps(search_query: str, location: str = "") -> requests.Response:
    """
    Scrape Google Maps search results via ScraperAPI.
    Requires browser rendering mode (render=true).
    """
    query = f"{search_query} {location}".strip()
    maps_url = f"https://www.google.com/maps/search/{quote(query)}"
    params = {
        "api_key": SCRAPER_API_KEY,
        "url": maps_url,
        "render": "true",        # Required for JS rendering
        "premium": "true",       # Uses residential IPs
        "country_code": "us",    # Target locale
        "wait_for_selector": "[data-value='Directions']",  # Wait for listings
    }
    response = requests.get(SCRAPER_API_URL, params=params, timeout=60)
    response.raise_for_status()
    return response


def parse_listings_from_html(html: str) -> list[dict]:
    """
    Parse business listings from rendered Google Maps HTML.
    Uses CSS selectors — update if Google changes its DOM structure.
    """
    from bs4 import BeautifulSoup

    soup = BeautifulSoup(html, "html.parser")
    listings = []
    # Google Maps result cards — selector changes periodically
    cards = soup.select("div[jsaction*='mouseover:pane']")
    for card in cards:
        name_el = card.select_one("div.fontHeadlineSmall")
        rating_el = card.select_one("span[aria-label*='stars']")
        address_el = card.select_one("div.fontBodyMedium > div:nth-child(2)")
        listing = {
            "name": name_el.get_text(strip=True) if name_el else None,
            "rating": rating_el["aria-label"].split()[0] if rating_el else None,
            "address": address_el.get_text(strip=True) if address_el else None,
        }
        if listing["name"]:
            listings.append(listing)
    return listings


def main():
    search_term = "coffee shops"
    location = "Austin, TX"
    print(f"Searching for: {search_term} in {location}")

    response = scrape_google_maps(search_term, location)
    if response.status_code == 200:
        listings = parse_listings_from_html(response.text)
        print(f"Found {len(listings)} listings")
        for biz in listings[:5]:
            print(json.dumps(biz, indent=2))
    else:
        print(f"Failed: {response.status_code}")


if __name__ == "__main__":
    main()
```
Key implementation notes:
- Always use `render=true` — plain proxy mode returns empty HTML for Maps
- The `wait_for_selector` parameter prevents premature HTML capture before listings load
- Google's CSS selectors change periodically. The most stable pattern is to target `aria-label` attributes (they change less often than class names)
- Build retry logic with exponential backoff — even with a scraping API, ~20% of requests will need a second attempt
- For reviews, you need a second request per business to the place detail URL. Budget 2–3x your listing count in API credits
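The backoff recommendation above can be sketched as a small wrapper. The `fetch` callable, attempt count, and delay values here are placeholders to tune against your own failure rates:

```python
import random
import time


def fetch_with_retry(fetch, url: str, max_attempts: int = 4,
                     base_delay: float = 1.0):
    """Call `fetch(url)`, retrying on failure with exponential
    backoff plus jitter: base_delay * 2**attempt, randomized slightly
    so parallel workers don't retry in lockstep."""
    for attempt in range(max_attempts):
        try:
            return fetch(url)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # Out of attempts; surface the last error.
            delay = base_delay * (2 ** attempt)
            time.sleep(delay + random.uniform(0, base_delay))
```

In practice you would also catch only retryable errors (timeouts, 429s, soft failures from content validation) and let genuine client errors fail fast.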
Choosing the Right Approach
| Use Case | Recommended Approach |
|---|---|
| One-off dataset, non-technical team | Apify Google Maps Scraper |
| Under 5,000 records/month, need control | ScraperAPI or Scrape.do (JS render) |
| 50,000+ records/month, cost-sensitive | Self-managed scraper with rotating residential proxies |
| Real-time lookups in production app | Google Places API (reliability justifies cost) |
| Research / prototyping | Direct Playwright + free residential proxy trial |
The dominant cost factor shifts depending on scale. Under 10,000 records, API convenience pricing is fine. Over 50,000 records, the per-request cost of managed APIs starts to hurt, and dedicated infrastructure with a solid proxy provider becomes more economical.
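That breakeven can be roughed out numerically. The managed-API rate below is the ScraperAPI benchmark figure from the results table; the self-managed infra and proxy numbers are illustrative assumptions for this sketch, not measured values:

```python
def monthly_cost_managed(records: int, per_1k: float = 280.0) -> float:
    """Managed scraping API: pure per-record pricing, no fixed cost.
    Default rate is the ScraperAPI JS-render figure from the table."""
    return records / 1000 * per_1k


def monthly_cost_self_managed(records: int, infra_fixed: float = 300.0,
                              proxy_per_1k: float = 20.0) -> float:
    """Self-managed scraper: assumed fixed infra spend plus
    residential proxy bandwidth billed per 1,000 records."""
    return infra_fixed + records / 1000 * proxy_per_1k
```

Under these assumptions, 50,000 records a month costs $14,000 via a managed API versus roughly $1,300 self-managed, which is consistent with the savings range cited earlier. The fixed cost this sketch can't capture is engineering time to build and maintain the anti-detection layer.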
Bottom Line
Google Maps is one of the harder scraping targets in 2026 — dynamic JS rendering, aggressive bot detection, and rate limits combine into a system that punishes lazy implementations fast. The ScrapeOps benchmark data puts it in the same difficulty tier as LinkedIn and Instagram.
The good news: purpose-built tools have kept pace. A 94% success rate from a dedicated actor, or 78–82% from a general-purpose API with browser rendering, is achievable without maintaining your own anti-detection infrastructure. The tradeoff is cost — and at scale, the math often favors investment in your own scraping layer.
Start with a managed API to validate your use case. Once you're pulling 30,000+ records a month, benchmark the build vs. buy decision again.
Tools Referenced
ScraperAPI — General-purpose scraping API with browser rendering and residential proxies. Use code SCRAPE13833889 for 50% off your first month.
Get started with ScraperAPI
Scrape.do — Lightweight proxy API with JS rendering support. Competitive pricing for mid-scale Google Maps extraction.
Try Scrape.do
ScrapeOps — Independent benchmark data for 28+ websites including Google Maps. The data source used throughout this article.
Browse the benchmarks
Want the full playbook? The Complete Web Scraping Playbook 2026 covers Google Maps, Amazon, LinkedIn, and 15 other targets — anti-bot strategies, proxy selection, parsing patterns, and scaling architecture. 48 pages, $9.