Google Maps is one of the richest sources of local business data on the web. Whether you're building a local SEO tool, doing competitive analysis, or aggregating business directories, extracting places, reviews, and business hours from Google Maps can provide tremendous value.
In this guide, we'll walk through practical approaches to scraping Google Maps data in 2026, including working Python code and tips for handling common challenges.
## Why Scrape Google Maps?
Local businesses rely on Google Maps visibility. Extracting this data enables:
- Local SEO audits — track how businesses rank for specific queries in different locations
- Lead generation — build lists of businesses by category and region
- Competitive intelligence — monitor competitor reviews, ratings, and hours
- Market research — understand business density and trends in specific areas
## Setting Up Your Environment

Google Maps renders everything with JavaScript, so a plain HTTP client won't see the results. Install Playwright (`pip install playwright`, then `playwright install chromium`) to drive a real browser:

```python
import csv
import time

# For handling dynamic content
from playwright.sync_api import sync_playwright
```
```python
from urllib.parse import quote_plus

def search_google_maps(query, location="New York"):
    """Search Google Maps and extract place results."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()

        # URL-encode the query so spaces and special characters don't break the URL
        query_string = quote_plus(f"{query} in {location}")
        search_url = f"https://www.google.com/maps/search/{query_string}"
        page.goto(search_url, wait_until="networkidle")
        time.sleep(3)

        # Scroll the results feed to load more places
        feed = page.query_selector('div[role="feed"]')
        if feed:
            for _ in range(5):
                feed.evaluate('el => el.scrollTop = el.scrollHeight')
                time.sleep(1.5)

        # Each result card is an anchor whose aria-label is the place name
        places = []
        items = page.query_selector_all('div[role="feed"] > div > div > a')
        for item in items:
            name = item.get_attribute("aria-label")
            href = item.get_attribute("href")
            if name and href:
                places.append({"name": name, "url": href})

        browser.close()
        return places

results = search_google_maps("restaurants", "Chicago")
for place in results[:5]:
    print(f"Name: {place['name']}")
```
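Scrolling the feed often surfaces the same card more than once, so it's worth de-duplicating results by URL before fetching details. A minimal helper (`dedupe_places` is a name introduced here, not part of any API):

```python
def dedupe_places(places):
    """Drop duplicate results, keeping the first occurrence of each URL."""
    seen = set()
    unique = []
    for place in places:
        if place["url"] not in seen:
            seen.add(place["url"])
            unique.append(place)
    return unique
```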
## Extracting Business Details
Once you have place URLs, you can extract detailed information:
```python
def extract_place_details(page, place_url):
    """Extract detailed info from a Google Maps place page."""
    page.goto(place_url, wait_until="networkidle")
    time.sleep(2)

    details = {}

    # Business name
    name_el = page.query_selector('h1')
    details['name'] = name_el.inner_text() if name_el else None

    # Rating and review count
    rating_el = page.query_selector('div.F7nice span[aria-hidden="true"]')
    details['rating'] = rating_el.inner_text() if rating_el else None

    # Address
    address_el = page.query_selector('button[data-item-id="address"]')
    details['address'] = address_el.inner_text() if address_el else None

    # Phone
    phone_el = page.query_selector('button[data-item-id^="phone"]')
    details['phone'] = phone_el.inner_text() if phone_el else None

    # Business hours: expand the hours widget, then read the table rows
    hours_el = page.query_selector('div[aria-label*="hours"]')
    if hours_el:
        hours_el.click()
        time.sleep(1)
        details['hours'] = {}
        for row in page.query_selector_all('table tr'):
            cells = row.query_selector_all('td')
            if len(cells) >= 2:
                day = cells[0].inner_text()
                time_range = cells[1].inner_text()
                details['hours'][day] = time_range

    return details
```
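The hours arrive as free-text ranges. If you need structured open/close times, a small parser can split the common single-range format (this is a sketch assuming strings like "9 AM–5 PM" or "Closed"; days with multiple ranges or localized text need more handling):

```python
def parse_hours_range(text):
    """Split a scraped hours string like '9 AM-5 PM' into (open, close).

    Returns (None, None) for 'Closed'. Google may use an en dash or a
    plain hyphen depending on locale, so both separators are checked.
    """
    text = text.strip()
    if text.lower() == "closed":
        return (None, None)
    for sep in ("\u2013", "-"):  # en dash first, then plain hyphen
        if sep in text:
            open_t, close_t = text.split(sep, 1)
            return (open_t.strip(), close_t.strip())
    return (text, None)  # e.g. 'Open 24 hours' has no separator
```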
## Extracting Reviews
Reviews contain valuable sentiment data. Here's how to collect them:
```python
def extract_reviews(page, max_reviews=50):
    """Extract reviews from a Google Maps place page."""
    # Click on the Reviews tab
    reviews_tab = page.query_selector('button[aria-label*="Reviews"]')
    if reviews_tab:
        reviews_tab.click()
        time.sleep(2)

    # Scroll the review panel to load more reviews
    review_panel = page.query_selector('div.m6QErb.DxyBCb')
    if review_panel:
        for _ in range(max_reviews // 5):
            review_panel.evaluate('el => el.scrollTop = el.scrollHeight')
            time.sleep(1)

    reviews = []
    review_elements = page.query_selector_all('div.jftiEf')
    for el in review_elements[:max_reviews]:
        review = {}

        author = el.query_selector('div.d4r55')
        review['author'] = author.inner_text() if author else None

        rating = el.query_selector('span.kvMYJc')
        review['rating'] = rating.get_attribute('aria-label') if rating else None

        text = el.query_selector('span.wiI7pd')
        review['text'] = text.inner_text() if text else None

        date = el.query_selector('span.rsqaWe')
        review['date'] = date.inner_text() if date else None

        reviews.append(review)

    return reviews
```
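The rating comes back as an aria-label string rather than a number. Assuming labels of the form "5 stars" or "4.0 stars" (the exact wording depends on Google's current markup), you can pull out the numeric value and aggregate:

```python
import re

def parse_star_rating(aria_label):
    """Extract the numeric rating from an aria-label like '5 stars'."""
    if not aria_label:
        return None
    match = re.search(r"(\d+(?:\.\d+)?)", aria_label)
    return float(match.group(1)) if match else None

def average_rating(reviews):
    """Average the parsed ratings, skipping reviews without one."""
    ratings = [r for r in (parse_star_rating(rev.get("rating")) for rev in reviews)
               if r is not None]
    return sum(ratings) / len(ratings) if ratings else None
```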
## Handling Anti-Scraping Measures
Google Maps actively fights scraping. Here are strategies that work in 2026:
- Use residential proxies — Datacenter IPs get blocked fast. Services like ThorData offer rotating residential proxies that mimic real users.
- Randomize request timing — Add variable delays of 2–8 seconds between requests.
- Rotate user agents — Switch between different browser fingerprints.
- Use headless browsers wisely — Playwright with stealth plugins avoids basic detection.
```python
import random

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
]

def get_random_delay():
    return random.uniform(2, 8)

def get_random_ua():
    return random.choice(USER_AGENTS)
```
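Blocked requests also deserve retries rather than immediate failure. One common pattern is exponential backoff with jitter, so retries don't land in a detectable rhythm (a generic sketch — `with_retries` and its parameters are names introduced here):

```python
import random
import time

def with_retries(fetch, max_attempts=4, base_delay=2.0):
    """Call fetch(), retrying on any exception with exponential backoff.

    fetch is a zero-argument callable that raises when a request is
    blocked or fails. Delay grows as base_delay * 2**attempt, plus
    random jitter proportional to base_delay.
    """
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: let the caller see the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```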
## Alternative: Using Review Scraping Tools
If you need structured review data without building your own scraper, check out ready-made solutions like the G2 Reviews Scraper on Apify, which handles pagination, rate limiting, and data formatting automatically for business review platforms.
## Saving Data to CSV
```python
def save_to_csv(places, filename="google_maps_data.csv"):
    if not places:
        return
    keys = places[0].keys()
    with open(filename, 'w', newline='', encoding='utf-8') as f:
        writer = csv.DictWriter(f, fieldnames=keys)
        writer.writeheader()
        writer.writerows(places)
    print(f"Saved {len(places)} places to {filename}")
```
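Note that the detail records contain a nested `hours` dict, which `csv.DictWriter` would write as its raw Python repr. Flattening one level of nesting first gives clean columns (a small sketch — `flatten_for_csv` is a helper introduced here):

```python
def flatten_for_csv(record, sep="_"):
    """Flatten one level of nested dicts so csv.DictWriter gets scalars.

    {'name': 'X', 'hours': {'Monday': '9 AM-5 PM'}} becomes
    {'name': 'X', 'hours_Monday': '9 AM-5 PM'}.
    """
    flat = {}
    for key, value in record.items():
        if isinstance(value, dict):
            for sub_key, sub_value in value.items():
                flat[f"{key}{sep}{sub_key}"] = sub_value
        else:
            flat[key] = value
    return flat
```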
## Legal and Ethical Considerations
Always check Google's Terms of Service before scraping. Consider:
- Using the official Google Places API for production applications
- Respecting robots.txt directives
- Rate limiting your requests to avoid overloading servers
- Not scraping personal information without consent
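For production use, the official route is the Places API Text Search endpoint. A minimal request looks like the following (the API key is a placeholder — you need a Google Cloud key with the Places API enabled, and usage is billed per Google's pricing):

```python
from urllib.parse import urlencode

PLACES_TEXT_SEARCH = "https://maps.googleapis.com/maps/api/place/textsearch/json"

def build_places_search_url(query, api_key):
    """Build a request URL for the Places API Text Search endpoint."""
    params = urlencode({"query": query, "key": api_key})
    return f"{PLACES_TEXT_SEARCH}?{params}"

# Fetch with any HTTP client, e.g.:
#   requests.get(build_places_search_url("restaurants in Chicago", API_KEY)).json()
```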
## Conclusion
Google Maps scraping is powerful for local SEO, lead generation, and market research. While the techniques work in 2026, always consider using official APIs when available and ensure your scraping activities comply with applicable laws and terms of service.
For proxy infrastructure that keeps your scrapers running, ThorData's residential proxies provide reliable rotating IPs specifically suited for map data extraction.