Google Maps is one of the richest sources of local business data on the internet. From restaurant ratings and opening hours to customer reviews and contact information, the platform holds a treasure trove of data that can power lead generation, market research, local SEO analysis, and competitive intelligence.
In this guide, we'll break down how Google Maps structures its data, what you can extract, the technical challenges involved, and how to build efficient scrapers for business data at scale.
Understanding Google Maps' Data Architecture
Google Maps is built on a complex infrastructure that combines map rendering, place data from Google's Knowledge Graph, user-generated content (reviews, photos), and business-submitted information via Google Business Profile (formerly Google My Business).
How Data Is Served
Unlike simple HTML websites, Google Maps uses:
- Protocol Buffers (protobuf): Many internal API responses use protobuf encoding rather than JSON
- Server-side rendering with hydration: Initial place data is embedded in the HTML, but additional data loads dynamically
- Lazy-loaded components: Reviews, photos, and Q&A sections load on demand as users interact
- Session-based rate limiting: Google tracks request patterns and can throttle or block excessive automated access
The Place Data Hierarchy
Every business or point of interest on Google Maps is organized as a "Place" with the following structure:
Place
├── Basic Info
│   ├── Name
│   ├── Place ID (unique identifier)
│   ├── Category (restaurant, hotel, etc.)
│   └── Status (open, temporarily closed, etc.)
├── Location
│   ├── Address (formatted, components)
│   ├── Coordinates (lat/lng)
│   ├── Plus Code
│   └── Viewport bounds
├── Contact
│   ├── Phone number
│   ├── Website URL
│   └── Social media links
├── Hours
│   ├── Regular hours (by day)
│   ├── Special hours (holidays)
│   └── Popular times / Live busyness
├── Ratings & Reviews
│   ├── Overall rating (1-5)
│   ├── Total review count
│   ├── Individual reviews (text, rating, date, author)
│   └── Review topics / highlights
├── Business Attributes
│   ├── Price level ($-$$$$)
│   ├── Accessibility features
│   ├── Service options (dine-in, delivery, etc.)
│   └── Amenities
├── Photos
│   ├── Owner photos
│   ├── User photos
│   └── Street View
└── Related Places
    ├── "People also search for"
    └── Nearby places
What Data Can You Extract?
Core Business Information
The foundation of any Google Maps scraping project is basic business data:
const businessData = {
  placeId: "ChIJN1t_tDeuEmsRUsoyG83frY4",
  name: "Sydney Opera House",
  category: "Performing arts theater",
  subcategories: ["Tourist attraction", "Theater"],
  address: {
    full: "Bennelong Point, Sydney NSW 2000, Australia",
    street: "Bennelong Point",
    city: "Sydney",
    state: "NSW",
    postalCode: "2000",
    country: "Australia"
  },
  coordinates: {
    lat: -33.8568,
    lng: 151.2153
  },
  plusCode: "42HW+MQ Sydney",
  status: "OPERATIONAL"
};
Contact and Web Presence
const contactData = {
  phone: "+61 2 9250 7111",
  internationalPhone: "+61 2 9250 7111",
  website: "https://www.sydneyoperahouse.com",
  googleMapsUrl: "https://maps.google.com/?cid=...",
  socialProfiles: {
    facebook: "https://facebook.com/SydneyOperaHouse",
    instagram: "https://instagram.com/sydneyoperahouse",
    twitter: "https://twitter.com/SydOperaHouse"
  }
};
Operating Hours
Google Maps provides detailed operating hours including special hours for holidays:
const hoursData = {
  regularHours: [
    { day: "Monday", open: "09:00", close: "17:00" },
    { day: "Tuesday", open: "09:00", close: "17:00" },
    { day: "Wednesday", open: "09:00", close: "17:00" },
    { day: "Thursday", open: "09:00", close: "21:00" },
    { day: "Friday", open: "09:00", close: "17:00" },
    { day: "Saturday", open: "09:00", close: "17:00" },
    { day: "Sunday", open: "09:00", close: "17:00" }
  ],
  specialHours: [
    { date: "2026-12-25", status: "CLOSED" },
    { date: "2026-01-01", open: "12:00", close: "17:00" }
  ],
  timezone: "Australia/Sydney"
};
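Once hours are in this shape, simple checks become one-liners. Here's a minimal sketch of an is-open check — it assumes zero-padded 24-hour "HH:MM" strings and, for simplicity, ignores special hours and spans that cross midnight:

```javascript
// Sample in the hoursData shape above (subset of days for brevity)
const sampleHours = {
  regularHours: [
    { day: "Monday", open: "09:00", close: "17:00" },
    { day: "Thursday", open: "09:00", close: "21:00" }
  ]
};

// Is the place open on a given day at a given "HH:MM" local time?
function isOpenAt(hoursData, day, time) {
  const entry = hoursData.regularHours.find(h => h.day === day);
  if (!entry || entry.status === "CLOSED") return false;
  // Zero-padded HH:MM strings compare correctly as plain strings
  return time >= entry.open && time < entry.close;
}
```

For example, `isOpenAt(sampleHours, "Thursday", "20:30")` returns `true` with the extended Thursday hours, while any day missing from `regularHours` is treated as closed.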
Popular Times and Live Busyness
One of the most distinctive datasets Google Maps offers is foot traffic data:
const popularTimes = {
  monday: [
    { hour: 6, busyness: 0 },
    { hour: 7, busyness: 10 },
    { hour: 8, busyness: 25 },
    { hour: 9, busyness: 45 },
    { hour: 10, busyness: 65 },
    { hour: 11, busyness: 80 },
    { hour: 12, busyness: 90 },
    { hour: 13, busyness: 85 },
    { hour: 14, busyness: 75 },
    // ... continues for each hour
  ],
  // ... other days
  liveData: {
    currentBusyness: 72,
    usualBusyness: 65,
    label: "A little busier than usual"
  }
};
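A foot-traffic series becomes actionable once reduced to a signal — say, the quietest and busiest hours of a day for staffing or visit-timing decisions. A small helper over the per-day shape above:

```javascript
// Find the quietest and busiest entries in one day's popular-times series.
// Expects an array of { hour, busyness } objects as in popularTimes.monday.
function busynessExtremes(dayHours) {
  const sorted = [...dayHours].sort((a, b) => a.busyness - b.busyness);
  return { quietest: sorted[0], busiest: sorted[sorted.length - 1] };
}
```

On the (truncated) Monday sample above, the busiest listed hour is 12:00 at 90% busyness; with a full 24-hour series the answer may of course differ.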
Ratings and Reviews
Reviews are often the most valuable data for analysis:
const reviewData = {
  overallRating: 4.6,
  totalReviews: 18542,
  ratingDistribution: {
    5: 12350,
    4: 3200,
    3: 1500,
    2: 800,
    1: 692
  },
  reviewHighlights: [
    { topic: "architecture", sentiment: "positive", mentions: 3420 },
    { topic: "views", sentiment: "positive", mentions: 2100 },
    { topic: "parking", sentiment: "negative", mentions: 890 }
  ],
  reviews: [
    {
      author: "Jane D.",
      authorUrl: "https://maps.google.com/contrib/...",
      rating: 5,
      date: "2026-03-10",
      text: "Absolutely stunning architecture...",
      likesCount: 42,
      ownerResponse: {
        text: "Thank you for visiting!",
        date: "2026-03-12"
      },
      photos: ["https://lh3.googleusercontent.com/..."]
    }
  ]
};
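The headline numbers and the star distribution should be internally consistent, so recomputing the weighted mean from the counts is a cheap sanity check on a scrape — a large gap between this and the displayed rating usually means truncated or stale counts:

```javascript
// Recompute total reviews and the average rating from a star-count
// distribution like reviewData.ratingDistribution above.
function ratingFromDistribution(dist) {
  let total = 0;
  let weighted = 0;
  for (const [stars, count] of Object.entries(dist)) {
    total += count;
    weighted += Number(stars) * count;
  }
  return { total, average: Number((weighted / total).toFixed(2)) };
}
```

For the sample distribution above this yields a total of 18,542 — matching `totalReviews` — and an exact weighted mean of about 4.39, which a displayed rating of 4.6 would round or weight differently; flagging that kind of drift is exactly the point of the check.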
Google Business Profile (GBP) Data
For businesses that have claimed their Google Business Profile, additional data is available:
const gbpData = {
  claimed: true,
  verified: true,
  description: "The Sydney Opera House is a multi-venue performing arts centre...",
  fromTheBusiness: {
    highlights: ["Great for kids", "Iconic landmark"],
    serviceOptions: ["Tours available", "Wheelchair accessible"],
    crowd: ["Family-friendly", "Tourist-friendly"]
  },
  menuUrl: null,
  bookingUrl: "https://www.sydneyoperahouse.com/tickets",
  priceLevel: "$$$",
  updates: [
    {
      type: "event",
      title: "Summer Concert Series 2026",
      date: "2026-01-15",
      description: "Join us for..."
    }
  ]
};
Building a Google Maps Scraper
Approach 1: Scraping Search Results
When you search Google Maps for something like "coffee shops in Austin TX", the results page contains a list of places. Here's how to extract them:
const puppeteer = require('puppeteer');

async function scrapeGoogleMapsSearch(query) {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.setViewport({ width: 1920, height: 1080 });

  // Navigate to Google Maps search
  const searchUrl = `https://www.google.com/maps/search/${encodeURIComponent(query)}`;
  await page.goto(searchUrl, { waitUntil: 'networkidle2' });

  // Wait for results panel
  await page.waitForSelector('[role="feed"]', { timeout: 10000 });

  // Scroll through results to load more
  const feed = await page.$('[role="feed"]');
  let previousCount = 0;
  while (true) {
    await feed.evaluate(el => el.scrollBy(0, 2000));
    await new Promise(r => setTimeout(r, 2000));
    const currentCount = await page.$$eval(
      '[role="feed"] > div > div > a',
      els => els.length
    );
    if (currentCount === previousCount) break;
    previousCount = currentCount;
  }

  // Extract place data from results
  const places = await page.evaluate(() => {
    const results = document.querySelectorAll('[role="feed"] > div > div');
    return Array.from(results).map(result => {
      const nameEl = result.querySelector('.fontHeadlineSmall');
      const ratingEl = result.querySelector('span[role="img"]');
      const addressEl = result.querySelectorAll('.fontBodyMedium')[1];
      return {
        name: nameEl?.textContent?.trim(),
        rating: ratingEl?.getAttribute('aria-label'),
        address: addressEl?.textContent?.trim(),
        url: result.querySelector('a')?.href
      };
    }).filter(p => p.name);
  });

  await browser.close();
  return places;
}
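Note that the scraper above stores the rating as the raw aria-label string. The exact wording varies by locale and changes over time, but for English listings it typically embeds both the star rating and the review count, so a small parser can normalize it downstream. The label format assumed here is illustrative — verify it against live pages:

```javascript
// Parse a rating aria-label such as "4.6 stars 18,542 Reviews" into numbers.
// Assumes an English-language label; returns nulls for anything unparseable.
function parseRatingLabel(label) {
  if (!label) return { rating: null, reviewCount: null };
  const ratingMatch = label.match(/([\d.]+)\s*star/i);
  const countMatch = label.match(/([\d,]+)\s*review/i);
  return {
    rating: ratingMatch ? parseFloat(ratingMatch[1]) : null,
    reviewCount: countMatch
      ? parseInt(countMatch[1].replace(/,/g, ''), 10)
      : null
  };
}
```

Keeping the raw string in the scrape and parsing afterwards is deliberate: if Google tweaks the wording, you lose normalization, not data.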
Approach 2: Scraping Individual Place Pages
For detailed data, you need to visit each place's individual page:
async function scrapePlaceDetails(page, placeUrl) {
  await page.goto(placeUrl, { waitUntil: 'networkidle2' });

  // Wait for the place details panel
  await page.waitForSelector('h1', { timeout: 10000 });

  const details = await page.evaluate(() => {
    // Extract name
    const name = document.querySelector('h1')?.textContent?.trim();

    // Extract rating and review count (strip ALL thousands separators)
    const ratingContainer = document.querySelector('.F7nice');
    const rating = ratingContainer?.querySelector('span[aria-hidden]')?.textContent;
    const reviewCount = ratingContainer?.parentElement?.textContent
      ?.match(/\(([\d,]+)\)/)?.[1]?.replace(/,/g, '');

    // Extract address
    const addressButton = document.querySelector(
      'button[data-item-id="address"]'
    );
    const address = addressButton?.textContent?.trim();

    // Extract phone
    const phoneButton = document.querySelector(
      'button[data-item-id^="phone"]'
    );
    const phone = phoneButton?.textContent?.trim();

    // Extract website
    const websiteLink = document.querySelector(
      'a[data-item-id="authority"]'
    );
    const website = websiteLink?.href;

    // Extract hours
    const hoursTable = document.querySelector('table.eK4R0e');
    const hours = [];
    if (hoursTable) {
      hoursTable.querySelectorAll('tr').forEach(row => {
        const day = row.querySelector('td:first-child')?.textContent?.trim();
        const time = row.querySelector('td:last-child')?.textContent?.trim();
        if (day && time) hours.push({ day, hours: time });
      });
    }

    // Extract category
    const category = document.querySelector(
      'button[jsaction*="category"]'
    )?.textContent?.trim();

    return {
      name,
      rating: parseFloat(rating),
      reviewCount: parseInt(reviewCount, 10),
      address, phone, website, hours, category
    };
  });

  return details;
}
Extracting Reviews
Reviews require additional scrolling and interaction:
async function scrapeReviews(page, maxReviews = 100) {
  // Click the reviews tab/button
  const reviewsButton = await page.$('button[aria-label*="Reviews"]');
  if (reviewsButton) await reviewsButton.click();
  await new Promise(r => setTimeout(r, 2000));

  // Sort by newest
  const sortButton = await page.$('button[aria-label="Sort reviews"]');
  if (sortButton) {
    await sortButton.click();
    await new Promise(r => setTimeout(r, 1000));
    const newestOption = await page.$('[data-index="1"]');
    if (newestOption) await newestOption.click();
    await new Promise(r => setTimeout(r, 2000));
  }

  // Scroll to load reviews
  const reviewsContainer = await page.$('[role="feed"]');
  let reviews = [];
  let previousLength = -1;
  while (reviews.length < maxReviews) {
    previousLength = reviews.length;
    await reviewsContainer.evaluate(el => el.scrollBy(0, 3000));
    await new Promise(r => setTimeout(r, 1500));
    reviews = await page.evaluate(() => {
      const reviewEls = document.querySelectorAll('[data-review-id]');
      return Array.from(reviewEls).map(el => {
        const authorEl = el.querySelector('.d4r55');
        const ratingEl = el.querySelector('span[role="img"]');
        const textEl = el.querySelector('.wiI7pd');
        const dateEl = el.querySelector('.rsqaWe');
        return {
          author: authorEl?.textContent?.trim(),
          rating: parseInt(ratingEl?.getAttribute('aria-label'), 10),
          text: textEl?.textContent?.trim(),
          date: dateEl?.textContent?.trim()
        };
      });
    });
    // Stop when no new reviews load — otherwise a place with fewer
    // reviews than maxReviews would scroll forever
    if (reviews.length === previousLength) break;
  }

  return reviews.slice(0, maxReviews);
}
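Review dates come back as relative strings like "2 months ago" rather than timestamps, which is awkward for trend analysis. They can be converted to approximate dates at processing time — a sketch that assumes English-language labels and accepts the inherent imprecision:

```javascript
// Convert a relative date string like "2 days ago" or "a year ago" into
// an approximate Date. Returns null if the string doesn't match the
// assumed English pattern. `now` is injectable for testing.
function approximateReviewDate(relative, now = new Date()) {
  const m = relative?.match(/(a|an|\d+)\s+(minute|hour|day|week|month|year)s?\s+ago/i);
  if (!m) return null;
  const n = (m[1] === 'a' || m[1] === 'an') ? 1 : parseInt(m[1], 10);
  const unit = m[2].toLowerCase();
  const d = new Date(now);
  if (unit === 'minute') d.setMinutes(d.getMinutes() - n);
  else if (unit === 'hour') d.setHours(d.getHours() - n);
  else if (unit === 'day') d.setDate(d.getDate() - n);
  else if (unit === 'week') d.setDate(d.getDate() - 7 * n);
  else if (unit === 'month') d.setMonth(d.getMonth() - n);
  else d.setFullYear(d.getFullYear() - n);
  return d;
}
```

Store both the original string and the derived date: "a month ago" can mean anything from four to eight weeks, so the approximation is good enough for trends but not for exact chronology.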
Using Apify for Google Maps Scraping
Building and maintaining a Google Maps scraper requires significant effort due to Google's anti-bot measures and frequent DOM changes. Apify provides a much more practical solution for production use.
The Apify Store has dedicated Google Maps scrapers that handle:
- Automatic proxy rotation with datacenter and residential proxies
- CAPTCHA solving when Google challenges are triggered
- Structured JSON output with all the fields mentioned above
- Pagination handling for search results and reviews
- Rate limit management to avoid blocks
- Regular updates when Google changes their page structure
Example: Running a Google Maps Scraper on Apify
const { ApifyClient } = require('apify-client');

const client = new ApifyClient({
  token: 'YOUR_APIFY_TOKEN',
});

const run = await client.actor('YOUR_ACTOR_ID').call({
  searchQuery: "dentists in Chicago, IL",
  maxResults: 200,
  includeReviews: true,
  maxReviews: 50,
  language: "en",
  fields: [
    "name", "address", "phone", "website",
    "rating", "reviewCount", "category",
    "hours", "coordinates", "priceLevel"
  ]
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} businesses`);
items.forEach(biz => {
  console.log(`${biz.name} | ${biz.rating}★ (${biz.reviewCount} reviews) | ${biz.phone}`);
});
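Dataset items are plain JSON objects, so handing them to a spreadsheet or BI tool is one small transform away. A dependency-free CSV serializer — the field names follow the example output above, so adjust them to the actual schema of the actor you run:

```javascript
// Serialize an array of flat objects to CSV. Every value is quoted and
// embedded quotes are doubled, per the usual CSV escaping convention.
function toCsv(items, fields) {
  const escape = v => `"${String(v ?? '').replace(/"/g, '""')}"`;
  const header = fields.map(escape).join(',');
  const rows = items.map(item => fields.map(f => escape(item[f])).join(','));
  return [header, ...rows].join('\n');
}

// Example: toCsv(items, ['name', 'rating', 'reviewCount', 'phone'])
```

Nested values such as `coordinates` would need flattening first (e.g. into `lat`/`lng` columns); this sketch only handles flat fields.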
Pay-Per-Event Pricing
Apify's PPE model means you pay per extracted result — no subscription fees, no unused capacity. This makes it ideal for both one-off research projects and ongoing data pipelines.
Local SEO Analysis with Scraped Data
One of the most powerful applications of Google Maps data is local SEO analysis. Here's how to use scraped data for SEO insights:
Competitor Ranking Analysis
function analyzeLocalSEO(businesses, targetBusiness) {
  const target = businesses.find(b => b.name === targetBusiness);
  if (!target) return null;

  const sameCategory = businesses.filter(
    b => b.category === target.category
  );

  const avgRating = sameCategory.reduce(
    (sum, b) => sum + b.rating, 0
  ) / sameCategory.length;

  const avgReviews = sameCategory.reduce(
    (sum, b) => sum + b.reviewCount, 0
  ) / sameCategory.length;

  return {
    business: target.name,
    yourRating: target.rating,
    marketAvgRating: avgRating.toFixed(2),
    ratingGap: (target.rating - avgRating).toFixed(2),
    yourReviews: target.reviewCount,
    marketAvgReviews: Math.round(avgReviews),
    reviewGap: target.reviewCount - Math.round(avgReviews),
    rankInCategory: sameCategory
      .sort((a, b) => b.rating - a.rating)
      .findIndex(b => b.name === targetBusiness) + 1,
    totalCompetitors: sameCategory.length,
    hasWebsite: !!target.website,
    hasPhone: !!target.phone,
    hasHours: target.hours?.length > 0,
    completenessScore: calculateCompleteness(target)
  };
}

function calculateCompleteness(business) {
  const fields = ['name', 'address', 'phone', 'website',
                  'hours', 'category', 'description'];
  const filled = fields.filter(f => business[f]).length;
  return Math.round((filled / fields.length) * 100);
}
Review Sentiment Analysis
function analyzeReviewSentiment(reviews) {
  const keywords = {
    positive: ['great', 'excellent', 'amazing', 'friendly',
               'professional', 'recommend', 'best', 'love'],
    negative: ['terrible', 'awful', 'rude', 'slow', 'dirty',
               'expensive', 'worst', 'avoid', 'disappointing']
  };

  const analysis = reviews.map(review => {
    const text = review.text?.toLowerCase() || '';
    const posCount = keywords.positive.filter(w => text.includes(w)).length;
    const negCount = keywords.negative.filter(w => text.includes(w)).length;
    return {
      ...review,
      sentiment: posCount > negCount ? 'positive' :
                 negCount > posCount ? 'negative' : 'neutral',
      positiveKeywords: keywords.positive.filter(w => text.includes(w)),
      negativeKeywords: keywords.negative.filter(w => text.includes(w))
    };
  });

  return {
    totalReviews: reviews.length,
    positive: analysis.filter(r => r.sentiment === 'positive').length,
    negative: analysis.filter(r => r.sentiment === 'negative').length,
    neutral: analysis.filter(r => r.sentiment === 'neutral').length,
    topPositiveThemes: findTopThemes(analysis, 'positive'),
    topNegativeThemes: findTopThemes(analysis, 'negative'),
    recentTrend: calculateRecentTrend(analysis)
  };
}
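The function above calls `findTopThemes` and `calculateRecentTrend` without defining them. Minimal sketches consistent with the review shape used here — the trend heuristic assumes the array is sorted newest-first, as the scraper's sort-by-newest step sets up:

```javascript
// Tally the matched keywords for one sentiment and return the top themes.
function findTopThemes(analysis, sentiment) {
  const key = sentiment === 'positive' ? 'positiveKeywords' : 'negativeKeywords';
  const counts = {};
  for (const review of analysis) {
    for (const word of review[key] || []) {
      counts[word] = (counts[word] || 0) + 1;
    }
  }
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1])
    .slice(0, 5)
    .map(([theme, mentions]) => ({ theme, mentions }));
}

// Compare average sentiment in the newer half of the list against the
// older half. Crude, but enough to flag a direction.
function calculateRecentTrend(analysis) {
  const score = r => r.sentiment === 'positive' ? 1 :
                     r.sentiment === 'negative' ? -1 : 0;
  const avg = arr => arr.length
    ? arr.reduce((sum, r) => sum + score(r), 0) / arr.length
    : 0;
  const mid = Math.floor(analysis.length / 2);
  const recent = avg(analysis.slice(0, mid));
  const older = avg(analysis.slice(mid));
  return recent > older ? 'improving' :
         recent < older ? 'declining' : 'stable';
}
```

Keyword counting is a deliberately naive baseline; for production sentiment analysis you'd swap in a proper NLP model, but the surrounding pipeline stays the same.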
GBP Optimization Recommendations
function getGBPRecommendations(businessData) {
  const recommendations = [];

  if (!businessData.description || businessData.description.length < 250) {
    recommendations.push({
      priority: 'HIGH',
      action: 'Add or expand your business description to at least 750 characters',
      impact: 'Helps Google understand your business for relevant searches'
    });
  }

  if (businessData.reviewCount < 50) {
    recommendations.push({
      priority: 'HIGH',
      action: 'Actively request reviews from satisfied customers',
      impact: 'Reviews are a top local ranking factor'
    });
  }

  if (!businessData.hours || businessData.hours.length === 0) {
    recommendations.push({
      priority: 'MEDIUM',
      action: 'Add complete business hours including special hours',
      impact: 'Prevents "might be closed" warnings in search results'
    });
  }

  if (!businessData.website) {
    recommendations.push({
      priority: 'HIGH',
      action: 'Add your website URL to your Google Business Profile',
      impact: 'Drives traffic and signals legitimacy to Google'
    });
  }

  if (businessData.photoCount < 10) {
    recommendations.push({
      priority: 'MEDIUM',
      action: 'Upload at least 10 high-quality photos',
      impact: 'Listings with photos receive 42% more direction requests'
    });
  }

  return recommendations;
}
Data Pipeline Architecture
For ongoing Google Maps data collection, here's a reference architecture for a production pipeline:
// Pipeline architecture overview
const pipeline = {
step1_extract: {
tool: "Apify Actor",
frequency: "Weekly",
output: "Raw JSON datasets"
},
step2_transform: {
process: "Clean, normalize, deduplicate",
enrichment: "Geocoding verification, category mapping"
},
step3_load: {
database: "PostgreSQL with PostGIS",
schema: "Normalized tables for places, reviews, hours"
},
step4_analyze: {
dashboards: "Metabase or Grafana",
alerts: "New competitor detection, rating drops",
reports: "Weekly market intelligence summaries"
}
};
// Database schema for Google Maps data
const schema = `
CREATE TABLE places (
place_id VARCHAR(255) PRIMARY KEY,
name VARCHAR(500) NOT NULL,
category VARCHAR(255),
address TEXT,
city VARCHAR(255),
state VARCHAR(100),
postal_code VARCHAR(20),
country VARCHAR(100),
latitude DECIMAL(10, 8),
longitude DECIMAL(11, 8),
phone VARCHAR(50),
website VARCHAR(500),
rating DECIMAL(2, 1),
review_count INTEGER,
price_level VARCHAR(10),
claimed BOOLEAN,
last_scraped TIMESTAMP DEFAULT NOW()
);
CREATE TABLE reviews (
id SERIAL PRIMARY KEY,
place_id VARCHAR(255) REFERENCES places(place_id),
author VARCHAR(255),
rating INTEGER,
text TEXT,
review_date DATE,
scraped_at TIMESTAMP DEFAULT NOW()
);
CREATE INDEX idx_places_city ON places(city);
CREATE INDEX idx_places_category ON places(category);
CREATE INDEX idx_reviews_place ON reviews(place_id);
`;
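Since `place_id` is the primary key, loading fresh scrapes into this schema is an upsert: insert new places, overwrite changed fields on re-scrape, and bump `last_scraped`. A small builder for a parameterized statement suitable for a node-postgres client (the column subset here is illustrative — extend it to match the full schema):

```javascript
// Build a parameterized upsert for the places table defined above.
// Missing fields default to NULL so partial scrapes still load cleanly.
function buildPlaceUpsert(place) {
  const columns = ['place_id', 'name', 'category', 'address', 'city',
                   'latitude', 'longitude', 'phone', 'website',
                   'rating', 'review_count'];
  const placeholders = columns.map((_, i) => `$${i + 1}`).join(', ');
  const updates = columns.slice(1)          // every column except the key
    .map(c => `${c} = EXCLUDED.${c}`).join(', ');
  return {
    text: `INSERT INTO places (${columns.join(', ')}) ` +
          `VALUES (${placeholders}) ` +
          `ON CONFLICT (place_id) DO UPDATE SET ${updates}, last_scraped = NOW()`,
    values: columns.map(c => place[c] ?? null)
  };
}

// Usage with node-postgres: await pool.query(buildPlaceUpsert(place));
```

Using `ON CONFLICT ... DO UPDATE` keeps the table idempotent under weekly re-scrapes: running the same extract twice never duplicates a place.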
Legal and Ethical Considerations
When scraping Google Maps data, keep these important points in mind:
Google's Terms of Service: Google's terms restrict automated scraping, so collecting Maps data this way may violate them. For smaller-scale needs, consider using the official Places API instead.
Rate Limiting: Always implement respectful delays between requests. Hammering Google's servers can result in IP blocks and legal issues.
Personal Data: Reviews contain user names and sometimes profile information. Handle this data in compliance with GDPR, CCPA, and other privacy regulations.
Data Accuracy: Scraped data can be outdated. Business hours change, ratings fluctuate, and businesses close. Always timestamp your data and plan for regular refreshes.
Commercial Use: If using scraped data commercially, consult with a lawyer about the legal implications in your jurisdiction.
Conclusion
Google Maps is an incredibly rich data source for local business intelligence, SEO analysis, and market research. Understanding how Google structures place data — from basic contact information to review sentiment and popular times — enables you to build powerful analytical tools.
While building your own scraper is a great learning exercise, the maintenance burden of keeping up with Google's changes makes managed solutions on the Apify Store the practical choice for production workloads. Combine scraped data with analytical tools and you have a powerful system for local market intelligence.
Whether you're doing lead generation, competitive analysis, or local SEO optimization, the techniques in this guide give you the foundation to extract and analyze Google Maps data effectively.
Need production-grade Google Maps scraping? Browse the Apify Store for maintained actors with proxy rotation, CAPTCHA handling, and structured output — ready to plug into your data pipeline.