Valentina Skakun for HasData

Scraping Google Organic Results with Python

Google Search is dynamic and protected. Static scrapers (requests, BeautifulSoup) won’t work; you’ll get empty or blocked HTML. Instead, use a headless browser. Here’s a simple guide using SeleniumBase in undetected Chrome (UC) mode.

Step 1. Install SeleniumBase

Install SeleniumBase:

pip install seleniumbase

This gives you an extended Selenium wrapper with built-in uc mode (undetected Chrome).
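
To confirm the install, you can print the package version (a quick check; the seleniumbase package exposes __version__):

python -c "import seleniumbase; print(seleniumbase.__version__)"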

Step 2. Import the Libraries

Import the libraries into the project:

from seleniumbase import Driver
from selenium.webdriver.common.by import By
import urllib.parse, pandas as pd

Step 3. Build the Search URL

We’ll generate the Google search URL from a keyword:

def build_search_url(query):
    return f"https://www.google.com/search?q={urllib.parse.quote_plus(query)}"
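For example, quote_plus encodes spaces as +, so a multi-word query becomes a valid search URL:

print(build_search_url("what is web scraping"))
# https://www.google.com/search?q=what+is+web+scraping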

Step 4. Launch a Headless Browser

Start Chrome in uc mode so Google treats it like a real user:

driver = Driver(uc=True, headless=True)
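Note that headless UC mode is easier for Google to detect than headed mode. If you still hit a block page, a rough fallback (assuming a recent SeleniumBase version) is UC mode’s reconnect helper, which opens the URL while briefly detaching WebDriver from the tab:

# uc_open_with_reconnect(url, reconnect_time) loads the page while the
# driver is disconnected, so detection scripts can't see automation
driver.uc_open_with_reconnect(build_search_url("what is web scraping"), 4)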

Step 5. Extract Organic Results

Each organic result lives inside div.MjjYud (one of Google’s obfuscated class names, which change frequently, so verify the selector before running). From there, grab the title, link, and snippet:

def scrape_google(driver, query):
    driver.get(build_search_url(query))
    blocks = driver.find_elements(By.CSS_SELECTOR, "div.MjjYud")

    results = []
    for b in blocks:
        try:
            title = b.find_element(By.CSS_SELECTOR, "h3").text
            link = b.find_element(By.CSS_SELECTOR, "a").get_attribute("href")
            snippet = b.find_element(By.CSS_SELECTOR, "div.VwiC3b").text
            results.append({"Title": title, "Link": link, "Snippet": snippet})
        except Exception:
            # Skip blocks missing a title/link/snippet (ads, widgets, etc.)
            continue
    return results
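A quick sanity check before wiring up the CSV export:

results = scrape_google(driver, "what is web scraping")
print(f"Found {len(results)} results")
if results:
    print(results[0])  # first organic result as a dict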

Step 6. Save Results

Store everything into a CSV file with pandas:

data = scrape_google(driver, "what is web scraping")
pd.DataFrame(data).to_csv("organic_results.csv", index=False)
print(f"Saved {len(data)} results")
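If you’d rather have JSON, pandas writes that too (same data, different format):

pd.DataFrame(data).to_json("organic_results.json", orient="records", indent=2)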

Full Code

Verify the selectors still match Google’s current markup, then copy:

from seleniumbase import Driver
from selenium.webdriver.common.by import By
import urllib.parse, pandas as pd, time

def build_search_url(query):
    return f"https://www.google.com/search?q={urllib.parse.quote_plus(query)}"

def scrape_google(driver, query, max_pages=1):
    results = []
    for page in range(max_pages):
        # Google paginates organic results with &start=10, 20, ...
        url = build_search_url(query) + (f"&start={page*10}" if page > 0 else "")
        driver.get(url)
        time.sleep(5)  # crude wait for the page to load

        # find_elements returns an empty list (not an error) if nothing matches
        blocks = driver.find_elements(By.CSS_SELECTOR, "div.MjjYud")

        for b in blocks:
            try:
                title = b.find_element(By.CSS_SELECTOR, "h3").text
                link = b.find_element(By.CSS_SELECTOR, "a").get_attribute("href")
                snippet = b.find_element(By.CSS_SELECTOR, "div.VwiC3b").text
                results.append({"Title": title, "Link": link, "Snippet": snippet})
            except Exception:
                # Skip blocks missing a title/link/snippet (ads, widgets, etc.)
                continue
    return results

driver = Driver(uc=True, headless=True)  # undetected Chrome

try:
    data = scrape_google(driver, "what is web scraping", max_pages=2)

    pd.DataFrame(data).to_csv("organic_results.csv", index=False)
    print(f"Saved {len(data)} results")
finally:
    driver.quit()
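One optional hardening step: detect Google’s block page before parsing. The markers below are assumptions based on Google’s usual “unusual traffic” interstitial and /sorry/ redirect; adjust them if your block page looks different:

def is_blocked(driver):
    # Assumed markers: the interstitial mentions "unusual traffic" and
    # redirects to a /sorry/ URL; both may vary by region and language
    return ("/sorry/" in driver.current_url
            or "unusual traffic" in driver.page_source.lower())

# Example: bail out early inside scrape_google after driver.get(url)
# if is_blocked(driver): break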

Final Notes

Full Guide on How to Scrape Google SERP with Python
Join our Discord
If you want any examples I might have missed, leave a comment and I’ll add them.
