Google Search is dynamic and heavily protected. Static scrapers (requests, BeautifulSoup) won't work; you'll get empty HTML back with none of the results in it. Instead, use a headless browser. Here's a simple guide using SeleniumBase in undetected Chrome (uc) mode.
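If you want to see the failure for yourself, here is a minimal sketch (assuming requests and beautifulsoup4 are installed): it fetches a results page statically and counts the h3 title tags, which typically comes back as zero because Google serves a consent or challenge page to plain HTTP clients.
import requests
from bs4 import BeautifulSoup

resp = requests.get(
    "https://www.google.com/search?q=what+is+web+scraping",
    headers={"User-Agent": "Mozilla/5.0"},  # hypothetical minimal header
)
soup = BeautifulSoup(resp.text, "html.parser")
# Expect little or nothing here: no h3 result titles in the static HTML
print(resp.status_code, len(soup.select("h3")))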
Table of Contents
- Step 1. Install SeleniumBase
- Step 2. Import the Libraries
- Step 3. Build the Search URL
- Step 4. Launch a Headless Browser
- Step 5. Extract Organic Results
- Step 6. Save Results
- Full Code
- Final Notes
Step 1. Install SeleniumBase
Install SeleniumBase:
pip install seleniumbase
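To confirm the install worked, print the package details (a quick check; assumes pip installed into your active environment):
pip show seleniumbase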
This gives you an extended Selenium wrapper with built-in uc mode (undetected Chrome).
Step 2. Import the Libraries
Import the libraries into your project:
from seleniumbase import Driver
from selenium.webdriver.common.by import By
import urllib.parse, pandas as pd
Step 3. Build the Search URL
We’ll generate the Google search URL from a keyword:
def build_search_url(query):
    return f"https://www.google.com/search?q={urllib.parse.quote_plus(query)}"
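A quick usage example: quote_plus encodes the spaces, so the keyword becomes a valid query string:
print(build_search_url("what is web scraping"))
# https://www.google.com/search?q=what+is+web+scraping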
Step 4. Launch a Headless Browser
Start Chrome in uc mode so Google treats it like a real user:
driver = Driver(uc=True, headless=True)
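As a quick sanity check (a standalone sketch; the exact title may vary by locale), load the homepage and print the title:
driver.get("https://www.google.com")
print(driver.title)  # typically "Google" if the page loaded normally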
Step 5. Extract Organic Results
Each organic result lives inside div.MjjYud. From there, grab the title, link, and snippet. Note that Google rotates these class names periodically, so re-check them in DevTools if you get no results:
def scrape_google(driver, query):
    driver.get(build_search_url(query))
    blocks = driver.find_elements(By.CSS_SELECTOR, "div.MjjYud")
    results = []
    for b in blocks:
        try:
            title = b.find_element(By.CSS_SELECTOR, "h3").text
            link = b.find_element(By.CSS_SELECTOR, "a").get_attribute("href")
            snippet = b.find_element(By.CSS_SELECTOR, "div.VwiC3b").text
            results.append({"Title": title, "Link": link, "Snippet": snippet})
        except Exception:
            # Skip blocks that aren't organic results (ads, video packs, etc.)
            continue
    return results
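To try it out before saving anything, print a preview of the first few hits (reusing the driver from Step 4):
for r in scrape_google(driver, "what is web scraping")[:3]:
    print(r["Title"], "->", r["Link"])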
Step 6. Save Results
Save everything to a CSV file with pandas:
data = scrape_google(driver, "what is web scraping")
pd.DataFrame(data).to_csv("organic_results.csv", index=False)
print(f"Saved {len(data)} results")
Full Code
Verify the selectors are still current, then copy the full script:
from seleniumbase import Driver
from selenium.webdriver.common.by import By
import urllib.parse, pandas as pd, time
def build_search_url(query):
    return f"https://www.google.com/search?q={urllib.parse.quote_plus(query)}"

def scrape_google(driver, query, max_pages=1):
    results = []
    for page in range(max_pages):
        url = build_search_url(query) + (f"&start={page*10}" if page > 0 else "")
        driver.get(url)
        time.sleep(5)  # wait for the page to load
        try:
            blocks = driver.find_elements(By.CSS_SELECTOR, "div.MjjYud")
        except Exception:
            continue
        for b in blocks:
            try:
                title = b.find_element(By.CSS_SELECTOR, "h3").text
                link = b.find_element(By.CSS_SELECTOR, "a").get_attribute("href")
                snippet = b.find_element(By.CSS_SELECTOR, "div.VwiC3b").text
                results.append({"Title": title, "Link": link, "Snippet": snippet})
            except Exception:
                # Skip blocks that aren't organic results (ads, video packs, etc.)
                continue
    return results
driver = Driver(uc=True, headless=True) # undetected Chrome
try:
    data = scrape_google(driver, "what is web scraping", max_pages=2)
    pd.DataFrame(data).to_csv("organic_results.csv", index=False)
    print(f"Saved {len(data)} results")
finally:
    driver.quit()
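If you need more than one keyword, one possible extension (a sketch reusing the functions above) is to share a single driver session across queries and tag each row with its query:
queries = ["what is web scraping", "headless browser"]  # hypothetical keyword list
all_rows = []
driver = Driver(uc=True, headless=True)
try:
    for q in queries:
        rows = scrape_google(driver, q, max_pages=1)
        for r in rows:
            r["Query"] = q  # remember which keyword produced the row
        all_rows.extend(rows)
finally:
    driver.quit()
pd.DataFrame(all_rows).to_csv("organic_results_multi.csv", index=False)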
Final Notes
Full Guide on How to Scrape Google SERP with Python
Join our Discord
If you want any examples I might have missed, leave a comment and I’ll add them.