DEV Community

agenthustler


How to Scrape Google Maps Business Data in 2026: A Complete Python Guide

Scraping Google Maps for business data is one of the most valuable skills for marketers, sales teams, and researchers in 2026. Whether you're building a leads database, analyzing competitors, or researching commercial real estate — Google Maps holds a goldmine of structured business information.

In this guide, I'll walk you through extracting business data from Google Maps using Python, including names, addresses, phone numbers, hours, ratings, reviews, coordinates, and categories. You'll get working code you can run today.

Why Scraping Google Maps Is Hard in 2026

Google Maps is one of the hardest sites on the web to scrape. Here's why:

  • Heavy JavaScript rendering: The page content loads dynamically via JavaScript. A simple requests.get() returns an empty shell — the actual business data isn't in the initial HTML.
  • Aggressive anti-bot detection: Google uses CAPTCHAs, fingerprinting, rate limiting, and behavioral analysis to detect automated access.
  • Frequent DOM changes: Google regularly changes class names, data attributes, and page structure to break scrapers.
  • IP blocking: Making too many requests from a single IP will get you blocked within minutes.

The solution? Use a proxy service with JavaScript rendering built in. This handles IP rotation, CAPTCHA solving, and headless browser rendering — so you can focus on parsing the data.

I recommend ScraperAPI for this. It handles proxy rotation, CAPTCHAs, and JavaScript rendering with a single API call. Just add render=true to your request and it returns the fully rendered HTML. They offer 5,000 free API credits to get started.

Another solid alternative is ScrapeOps, which provides proxy aggregation and monitoring for your scraping pipelines.

What Data Can You Extract?

From a Google Maps business listing, you can extract:

  • Business Name: "Joe's Coffee Shop"
  • Address: "123 Main St, Austin, TX 78701"
  • Phone Number: "(512) 555-0123"
  • Hours: "Mon-Fri 7AM-6PM"
  • Rating: 4.7
  • Review Count: 342
  • Coordinates: 30.2672, -97.7431
  • Categories: "Coffee shop, Café"
  • Website: "https://joescoffee.com"

Setting Up Your Environment

First, install the required packages:

pip install requests beautifulsoup4 lxml

You'll also need a ScraperAPI account. Sign up for free and grab your API key from the dashboard.

Step 1: Scraping Google Maps Search Results

Let's start by searching for businesses in a specific area. We'll search for "coffee shops in Austin TX" and extract the results.

import requests
from bs4 import BeautifulSoup
import json
import re
import time
import urllib.parse

SCRAPER_API_KEY = "YOUR_SCRAPERAPI_KEY"


def scrape_google_maps_search(query):
    """Search Google Maps and extract business listings."""
    encoded_query = urllib.parse.quote(query)
    google_maps_url = f"https://www.google.com/maps/search/{encoded_query}/"

    # ScraperAPI with render=true for JavaScript rendering; letting
    # requests build the query string handles the URL encoding safely
    response = requests.get(
        "https://api.scraperapi.com",
        params={
            "api_key": SCRAPER_API_KEY,
            "url": google_maps_url,
            "render": "true",
            "wait_for_selector": "div[role='feed']",
        },
        timeout=120,
    )
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "lxml")
    businesses = []

    # Parse visible HTML elements from the feed
    feed = soup.find("div", {"role": "feed"})
    if feed:
        listings = feed.find_all("div", {"class": re.compile(r"Nv2PK")})
        for listing in listings:
            business = extract_listing_data(listing)
            if business.get("name"):
                businesses.append(business)

    return businesses


def extract_listing_data(listing):
    """Extract structured data from a single search result listing."""
    data = {
        "name": None,
        "address": None,
        "phone": None,
        "rating": None,
        "review_count": None,
        "category": None,
        "hours": None,
        "website": None,
    }

    # NOTE: Google's obfuscated class names (qBF1Pd, MW4etd, etc.) change
    # regularly; verify them in your browser's DevTools before relying on
    # this parser.

    # Business name
    name_el = listing.find("div", {"class": re.compile(r"qBF1Pd")})
    if name_el:
        data["name"] = name_el.get_text(strip=True)

    # Rating and review count
    rating_el = listing.find("span", {"class": re.compile(r"MW4etd")})
    if rating_el:
        try:
            data["rating"] = float(rating_el.get_text(strip=True))
        except ValueError:
            pass

    review_el = listing.find("span", {"class": re.compile(r"UY7F9")})
    if review_el:
        text = review_el.get_text(strip=True)
        match = re.search(r"([\d,]+)", text)
        if match:
            data["review_count"] = int(match.group(1).replace(",", ""))

    # Address, phone, and category all appear in similar secondary-text
    # spans, so classify each one by pattern (heuristics; tune as needed)
    info_spans = listing.find_all("span", {"class": re.compile(r"W4Efsd")})
    for span in info_spans:
        text = span.get_text(strip=True)
        # Matches formats like "(512) 555-0123" and "512-555-0123"
        if re.search(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}", text):
            data["phone"] = text
        elif re.search(r"\d+\s+\w+\s+(St|Ave|Blvd|Dr|Rd|Ln|Way)\b", text):
            data["address"] = text
        elif data["category"] is None and len(text) < 40:
            data["category"] = text

    return data
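
The text-classification heuristics above (phone vs. address vs. category) are easy to sanity-check in isolation, with no network call. Here's a small standalone sketch using the same patterns, with a phone regex broadened to handle parenthesized area codes — the sample strings are made up for illustration:

```python
import re

# Handles "(512) 555-0123", "512-555-0123", "512.555.0123", etc.
PHONE_RE = re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}")
# A street number followed by a street-type suffix
ADDRESS_RE = re.compile(r"\d+\s+\w+\s+(St|Ave|Blvd|Dr|Rd|Ln|Way)\b")


def classify_span(text):
    """Classify a secondary-text span as phone, address, or category."""
    if PHONE_RE.search(text):
        return "phone"
    if ADDRESS_RE.search(text):
        return "address"
    if len(text) < 40:
        return "category"
    return None


print(classify_span("(512) 555-0123"))               # phone
print(classify_span("123 Main St, Austin, TX 78701"))  # address
print(classify_span("Coffee shop"))                  # category
```

Running checks like this against strings you copy out of real listings is the fastest way to catch a regex that silently misses a format.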

Step 2: Scraping Individual Business Pages

For detailed data (hours, coordinates, full reviews), scrape the individual business page:

def scrape_business_details(place_url):
    """Scrape a specific Google Maps business page for detailed info."""
    response = requests.get(
        "https://api.scraperapi.com",
        params={
            "api_key": SCRAPER_API_KEY,
            "url": place_url,
            "render": "true",
        },
        timeout=120,
    )
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "lxml")

    details = {
        "name": None,
        "address": None,
        "phone": None,
        "website": None,
        "rating": None,
        "review_count": None,
        "coordinates": None,
        "hours": [],
        "categories": [],
    }

    # Extract coordinates from the URL
    coord_match = re.search(r"@(-?\d+\.\d+),(-?\d+\.\d+)", place_url)
    if coord_match:
        details["coordinates"] = {
            "lat": float(coord_match.group(1)),
            "lng": float(coord_match.group(2)),
        }

    # Business name from title
    title_el = soup.find("h1")
    if title_el:
        details["name"] = title_el.get_text(strip=True)

    # Address
    addr_el = soup.find("button", {"data-item-id": "address"})
    if addr_el:
        details["address"] = addr_el.get_text(strip=True)

    # Phone
    phone_el = soup.find("button", {"data-item-id": re.compile(r"phone")})
    if phone_el:
        details["phone"] = phone_el.get_text(strip=True)

    # Website
    website_el = soup.find("a", {"data-item-id": "authority"})
    if website_el:
        details["website"] = website_el.get("href", "")

    # Rating
    rating_el = soup.find("div", {"class": re.compile(r"F7nice")})
    if rating_el:
        spans = rating_el.find_all("span")
        for span in spans:
            text = span.get_text(strip=True)
            try:
                val = float(text)
                if 1.0 <= val <= 5.0:
                    details["rating"] = val
                    break
            except ValueError:
                if "review" in text.lower():
                    match = re.search(r"([\d,]+)", text)
                    if match:
                        details["review_count"] = int(
                            match.group(1).replace(",", "")
                        )

    # Operating hours
    hours_table = soup.find("table", {"class": re.compile(r"eK4R0e")})
    if hours_table:
        rows = hours_table.find_all("tr")
        for row in rows:
            cols = row.find_all("td")
            if len(cols) >= 2:
                day = cols[0].get_text(strip=True)
                time_range = cols[1].get_text(strip=True)
                details["hours"].append({"day": day, "hours": time_range})

    # Categories
    cat_el = soup.find("button", {"class": re.compile(r"DkEaL")})
    if cat_el:
        details["categories"] = [cat_el.get_text(strip=True)]

    return details
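
The coordinate extraction depends only on the `@lat,lng` segment of the URL, so you can verify it without touching the network. A quick standalone check (the URL below is a made-up example):

```python
import re


def extract_coordinates(place_url):
    """Pull latitude/longitude from the @lat,lng segment of a Maps URL."""
    match = re.search(r"@(-?\d+\.\d+),(-?\d+\.\d+)", place_url)
    if not match:
        return None
    return {"lat": float(match.group(1)), "lng": float(match.group(2))}


url = "https://www.google.com/maps/place/Joes+Coffee/@30.2672,-97.7431,17z"
print(extract_coordinates(url))  # {'lat': 30.2672, 'lng': -97.7431}
```

Note that URLs without the `@lat,lng` segment (some share links, for example) return None, so always check before using the result.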

Step 3: Putting It All Together

Here's a complete script that searches for businesses and exports to CSV:

import csv


def scrape_google_maps(query, max_results=20):
    """Full pipeline: search Google Maps and collect results."""
    print(f"Searching Google Maps for: {query}")
    results = scrape_google_maps_search(query)
    print(f"Found {len(results)} listings")

    detailed_results = []
    for i, biz in enumerate(results[:max_results]):
        print(f"  [{i+1}/{min(len(results), max_results)}] {biz.get('name', 'Unknown')}")
        # If you also collect each listing's place URL, call
        # scrape_business_details(place_url) here and merge the result.
        # The delay keeps you under rate limits when you do.
        time.sleep(2)
        detailed_results.append(biz)

    return detailed_results


def save_to_csv(businesses, filename="google_maps_data.csv"):
    """Save extracted business data to a CSV file."""
    if not businesses:
        print("No data to save.")
        return

    fieldnames = [
        "name", "address", "phone", "website",
        "rating", "review_count", "category",
        "hours", "coordinates",
    ]

    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for biz in businesses:
            row = {}
            for field in fieldnames:
                val = biz.get(field, "")
                if isinstance(val, (dict, list)):
                    val = json.dumps(val)
                row[field] = val
            writer.writerow(row)

    print(f"Saved {len(businesses)} businesses to {filename}")


if __name__ == "__main__":
    query = "coffee shops in Austin TX"
    businesses = scrape_google_maps(query, max_results=10)
    save_to_csv(businesses)
    print("\nSample result:")
    if businesses:
        print(json.dumps(businesses[0], indent=2))

3 Concrete Use Cases

1. Local Business Lead Generation

Sales teams can scrape Google Maps to build targeted lead lists. Search for "dentists in Chicago" or "plumbers in Miami" and instantly get names, phone numbers, and websites. Filter by rating (4.0+) or review count (50+) to focus on established businesses. This replaces hours of manual research with a script that runs in minutes.
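
Filtering by rating and review count takes only a few lines once you have the scraped dicts. A minimal sketch with made-up sample data:

```python
def filter_leads(businesses, min_rating=4.0, min_reviews=50):
    """Keep only established businesses worth contacting."""
    return [
        b for b in businesses
        if (b.get("rating") or 0) >= min_rating
        and (b.get("review_count") or 0) >= min_reviews
    ]


sample = [
    {"name": "Bright Smile Dental", "rating": 4.8, "review_count": 212},
    {"name": "New Dental Clinic", "rating": 5.0, "review_count": 3},
    {"name": "Downtown Dental", "rating": 3.2, "review_count": 90},
]
leads = filter_leads(sample)
print([b["name"] for b in leads])  # ['Bright Smile Dental']
```

The `or 0` guards handle listings where the scraper couldn't find a rating or review count, which otherwise crash the comparison with None.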

2. Competitor Analysis

Business owners can monitor their competition by scraping all businesses in their category and area. Track competitors' ratings over time, see how many reviews they're getting, and check their operating hours. A restaurant owner searching "Italian restaurants in Brooklyn" gets a complete competitive landscape — including which competitors are highly rated and which might be vulnerable.
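
Tracking ratings over time just means diffing dated snapshots. A minimal sketch, assuming you store one list of scraped results per run:

```python
def rating_changes(old_snapshot, new_snapshot):
    """Compare two scrape snapshots and report rating deltas by name."""
    old = {b["name"]: b for b in old_snapshot}
    changes = {}
    for biz in new_snapshot:
        prev = old.get(biz["name"])
        if prev and prev.get("rating") is not None and biz.get("rating") is not None:
            changes[biz["name"]] = round(biz["rating"] - prev["rating"], 2)
    return changes


jan = [{"name": "Luigi's", "rating": 4.5}, {"name": "Trattoria Roma", "rating": 4.1}]
feb = [{"name": "Luigi's", "rating": 4.3}, {"name": "Trattoria Roma", "rating": 4.2}]
print(rating_changes(jan, feb))  # {"Luigi's": -0.2, 'Trattoria Roma': 0.1}
```

Matching by name is a simplification; in production you'd key on a stable identifier like the place URL, since business names get edited.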

3. Real Estate Research

Real estate investors use Google Maps data to evaluate neighborhoods. Scraping business density, types of businesses (cafes vs. pawn shops), and average ratings gives you a data-driven picture of an area's commercial health. High concentrations of well-rated restaurants and shops often correlate with rising property values.
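
Computing density and quality signals from the scraped dicts needs nothing beyond the standard library. A sketch on made-up data:

```python
from collections import Counter
from statistics import mean


def neighborhood_summary(businesses):
    """Summarize category mix and average rating for an area."""
    ratings = [b["rating"] for b in businesses if b.get("rating") is not None]
    return {
        "business_count": len(businesses),
        "avg_rating": round(mean(ratings), 2) if ratings else None,
        "categories": Counter(b.get("category") for b in businesses),
    }


sample = [
    {"category": "Coffee shop", "rating": 4.7},
    {"category": "Coffee shop", "rating": 4.4},
    {"category": "Pawn shop", "rating": 3.9},
]
summary = neighborhood_summary(sample)
print(summary["avg_rating"])  # 4.33
```

Run this per neighborhood query and you can compare areas side by side on count, category mix, and average rating.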

Best Practices and Tips

Respect rate limits. Even with ScraperAPI handling proxies, add 2-3 second delays between requests. This keeps your account in good standing and avoids wasting API credits on failed requests.
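
A fixed delay works, but adding a little random jitter looks less robotic and spreads requests out. A small helper (the 2-3 second window matches the advice above):

```python
import random
import time


def jittered_delay(base=2.0, jitter=1.0):
    """Pick a delay in the [base, base + jitter] second range."""
    return base + random.uniform(0, jitter)


def polite_sleep(base=2.0, jitter=1.0):
    """Sleep for a jittered delay between requests."""
    time.sleep(jittered_delay(base, jitter))
```

Call polite_sleep() between each request instead of a bare time.sleep(2).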

Handle errors gracefully. Google Maps pages can vary in structure. Always use try/except blocks and check for None before accessing element attributes.

Cache your results. Store raw HTML responses locally so you can re-parse without making additional API calls. This saves money and speeds up development.
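
A simple disk cache keyed by a hash of the URL avoids re-spending credits while you iterate on parsing logic. A minimal sketch using only the standard library (the cache directory name is arbitrary):

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path("html_cache")


def cache_path(url):
    """Map a URL to a stable cache filename."""
    digest = hashlib.sha256(url.encode("utf-8")).hexdigest()[:16]
    return CACHE_DIR / f"{digest}.html"


def get_cached(url):
    """Return cached HTML for a URL, or None if not cached yet."""
    path = cache_path(url)
    return path.read_text(encoding="utf-8") if path.exists() else None


def put_cached(url, html):
    """Store raw HTML for a URL on disk."""
    CACHE_DIR.mkdir(exist_ok=True)
    cache_path(url).write_text(html, encoding="utf-8")
```

Check get_cached(url) before calling the API, and call put_cached(url, response.text) after each successful fetch; re-parsing then costs nothing.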

Use render=true. This is critical for Google Maps. Without JavaScript rendering, you'll get an empty page. ScraperAPI's render=true parameter handles this automatically.

Monitor with ScrapeOps. For production pipelines, ScrapeOps provides monitoring dashboards so you can track success rates, response times, and costs across your scraping jobs.

Legal Considerations

Scraping publicly available data is generally lawful, but the rules vary by jurisdiction. Always:

  • Respect robots.txt guidelines
  • Don't overload servers with excessive requests
  • Comply with local data protection laws (GDPR, CCPA)
  • Use the data ethically — don't spam businesses you scrape
  • Review Google's Terms of Service for your use case

Conclusion

Google Maps scraping in 2026 requires JavaScript rendering and proxy rotation due to Google's anti-bot protections. Using ScraperAPI with render=true makes this manageable — you get fully rendered pages through rotating proxies without managing any infrastructure.

The Python code in this guide gives you a working foundation. Customize the parsing logic for your specific needs, add error handling for production use, and always respect rate limits.

Start with the free tier (5,000 credits) at ScraperAPI and scale up as your data needs grow.


Found this useful? Follow me for more Python scraping tutorials and data engineering guides.
