This is the second post in a series about scraping with Selenium in Python.
In this one, you’ll learn how to move between pages, handle tabs and pop-ups, fill forms, scroll, and even run JavaScript.
Table of Contents
- Step 1: Navigate Between Pages
- Step 2: Work with Tabs and Pop-ups
- Step 3: Perform Clicks, Form Fills, and Keyboard Actions
- Step 4: Scroll and Load More Content
- Step 5: Execute JavaScript
Step 1: Navigate Between Pages
You can control the browser flow with a few simple commands.
driver.get("https://example.com") # Open a page
driver.refresh() # Reload it
driver.back() # Go to the previous page
driver.forward() # Move forward again
That’s all you need to move around.
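Keep in mind that driver.get() returns once the browser considers the page loaded, which isn't always when your content is ready. A minimal sketch of pairing navigation with an explicit wait, assuming a Chrome driver and a page that renders an element with the id main-content (both are placeholders for this example):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")

# Wait up to 10 seconds for the main container before touching the page.
# "main-content" is a placeholder id - use a selector that exists on your target page.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "main-content"))
)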
Step 2: Work with Tabs and Pop-ups
Modern websites love to open new tabs and throw up alerts. Here's how to switch between tabs:
# Get all open window handles
windows = driver.window_handles
# Switch to the second tab
driver.switch_to.window(windows[1])
# Close it and go back to the first one
driver.close()
driver.switch_to.window(windows[0])
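Selenium 4 can also open tabs itself, which is handy when you want to load several pages side by side. A small sketch (the URL is a placeholder):
# Open a fresh tab (use "window" for a separate window) and switch to it automatically
driver.switch_to.new_window("tab")
driver.get("https://example.com/second-page")

# Scrape what you need, then close the tab and return to the original one
driver.close()
driver.switch_to.window(driver.window_handles[0])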
To handle alerts:
alert = driver.switch_to.alert
print(alert.text)
alert.accept() # or alert.dismiss()
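Alerts often appear a moment after the click that triggers them, so it's safer to wait for one instead of grabbing it right away. A minimal sketch using an explicit wait:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 5 seconds for an alert to show up, then accept it
alert = WebDriverWait(driver, 5).until(EC.alert_is_present())
print(alert.text)
alert.accept()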
Step 3: Perform Clicks, Form Fills, and Keyboard Actions
The basics of interaction: clicks, inputs, and keyboard actions.
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
driver.find_element(By.ID, "search").send_keys("Selenium" + Keys.ENTER)
driver.find_element(By.CSS_SELECTOR, ".submit-btn").click()
Keep your locators simple; CSS selectors are usually enough.
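For interactions that go beyond a single click, such as hovering over a menu or chording keyboard shortcuts, ActionChains lets you queue steps and run them in order. A short sketch; the selectors here are made up for the example:
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

menu = driver.find_element(By.CSS_SELECTOR, ".nav-menu")
item = driver.find_element(By.CSS_SELECTOR, ".nav-menu .dropdown-item")

# Hover over the menu, then click the item it reveals
ActionChains(driver).move_to_element(menu).click(item).perform()

# Select everything in a field (Keys.COMMAND on macOS) and overwrite it
field = driver.find_element(By.ID, "search")
field.click()
ActionChains(driver).key_down(Keys.CONTROL).send_keys("a").key_up(Keys.CONTROL).perform()
field.send_keys("new query")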
Step 4: Scroll and Load More Content
Dynamic pages often load content as you scroll. Here’s a quick way to simulate user scrolling:
import time
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(2) # Wait for new content to load
new_height = driver.execute_script("return document.body.scrollHeight")
if new_height == last_height:
break
last_height = new_height
This pattern works great for infinite scroll pages.
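Sometimes you don't need the whole page scrolled, just one element brought into view, for example a lazy-loaded image or a "Load more" button. One way is to hand the element to execute_script; the .load-more selector below is an assumption:
from selenium.webdriver.common.by import By

button = driver.find_element(By.CSS_SELECTOR, ".load-more")

# Scroll the button into the middle of the viewport, then click it
driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", button)
button.click()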
Step 5: Execute JavaScript
In Selenium 4.36+, Chrome and Edge use the new BiDi (Bidirectional) protocol.
You can still run custom JavaScript like this:
driver.execute_script("console.log('Running JS from Python');")
result = driver.execute_script("return document.title;")
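execute_script can also take arguments and hand JavaScript values back to Python, which is a quick way to pull structured data out of a page without looping over WebElements. For example:
from selenium.webdriver.common.by import By

# Runs inside the page; the array comes back as a plain Python list of strings
links = driver.execute_script(
    "return Array.from(document.querySelectorAll('a')).map(a => a.href);"
)

# You can also pass WebElements in as arguments and read their properties
heading = driver.find_element(By.TAG_NAME, "h1")
heading_text = driver.execute_script("return arguments[0].textContent;", heading)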
BiDi also lets you capture logs and network data.
Final Notes
In the next article, we’ll cover locating elements, reading text, handling the Shadow DOM, and exporting your results.
Meanwhile, here are some useful resources:
- The Complete Guide to Web Scraping with Selenium in Python
- Join our Discord
- Selenium Scraping Examples in Python and NodeJS (GitHub)
If you want any examples I might have missed, leave a comment and I'll add them.
Top comments (2)
Use driver.get, driver.refresh, driver.back, and driver.forward just for top-level moves, then immediately wait on a rock-solid element so you're not flying blind. After driver.get, aim for a specific element's presence/visibility. For SPAs, full page loads aren't a thing, so watch for URL changes, a route-specific element, or a quick JS peek at window.location.pathname. driver.get can return while the JS is still painting the UI, so chill until the main container is visible or your item count looks right.
Good point. Totally agree, so we covered that in the main HasData post (link at the end). Covered a few ways to wait for stuff to actually load.
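For anyone landing here first, a minimal sketch of that route-aware wait; the .products-link selector and the /products path are just example placeholders:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.find_element(By.CSS_SELECTOR, ".products-link").click()

# On an SPA there is no full page load, so wait for the route to change instead
WebDriverWait(driver, 10).until(EC.url_contains("/products"))

# Or peek at the current path directly with JS
path = driver.execute_script("return window.location.pathname;")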