<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ZenRows</title>
    <description>The latest articles on DEV Community by ZenRows (@zenrows).</description>
    <link>https://dev.to/zenrows</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3730%2F326ce3d5-ee18-4023-8497-87be3299ab5f.png</url>
      <title>DEV Community: ZenRows</title>
      <link>https://dev.to/zenrows</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/zenrows"/>
    <language>en</language>
    <item>
      <title>🔥 How to Scrape Dynamic Web Pages in Python 🚀</title>
      <dc:creator>ZenRows</dc:creator>
      <pubDate>Wed, 11 Oct 2023 11:47:29 +0000</pubDate>
      <link>https://dev.to/zenrows/how-to-scrape-dynamic-web-pages-in-python-4i48</link>
      <guid>https://dev.to/zenrows/how-to-scrape-dynamic-web-pages-in-python-4i48</guid>
      <description>&lt;p&gt;Have you gotten poor results while scraping dynamic web page content? It's not just you. Crawling dynamic data is a challenging undertaking (to say the least) for standard scrapers. That's because JavaScript runs in the background when an HTTP request is made.&lt;/p&gt;

&lt;p&gt;Scraping dynamic websites requires rendering the entire page in a browser and extracting the target information.&lt;/p&gt;

&lt;p&gt;Join us in this step-by-step tutorial to learn all you need about dynamic web scraping with Python — the dos and don'ts, the challenges and solutions, and everything in between.&lt;/p&gt;

&lt;p&gt;Let's dive right in!&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is a Dynamic Website? 🤔
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;A dynamic website is one that doesn't have all its content directly in its static HTML.&lt;/strong&gt; It uses server-side or client-side rendering to display data, sometimes based on the user's actions (e.g., clicking, scrolling, etc.).&lt;/p&gt;

&lt;p&gt;Put simply, these websites can serve different content or layouts without a full page reload. This helps with loading time, as there's no need to re-fetch the same information each time the user wants to view “new” content.&lt;/p&gt;

&lt;p&gt;How do you identify them? One way is by &lt;strong&gt;disabling JavaScript from your browser's DevTools command palette.&lt;/strong&gt; If the website is dynamic, its JavaScript-rendered content will disappear.&lt;/p&gt;

&lt;p&gt;Let's use &lt;a href="https://reactstorefront.vercel.app/default-channel/en-US/" rel="noopener noreferrer"&gt;Saleor React&lt;/a&gt; Storefront as an example. Here's what its front page looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfc4kdvam5p9fz4xxeek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfc4kdvam5p9fz4xxeek.png" alt="Saelor React front page" width="750" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take note of the titles, images, and artists' names.&lt;/p&gt;

&lt;p&gt;Now, let's disable JavaScript using the steps below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Inspect the page: Right-click and select “Inspect” to open the DevTools window.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the command palette: CTRL/CMD + SHIFT + P.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search for “JavaScript.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Disable JavaScript.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hit refresh.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What's the result? See below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhuttx0h3in6d3jj624gx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhuttx0h3in6d3jj624gx.png" alt="Result of disabling JavaScript" width="750" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See it for yourself! Disabling JavaScript removes all the dynamic web content.&lt;/p&gt;




&lt;h2&gt;
  
  
  Alternatives to Dynamic Web Scraping With Python ❄
&lt;/h2&gt;

&lt;p&gt;So, you want to scrape dynamic websites with Python…&lt;/p&gt;

&lt;p&gt;Since libraries such as &lt;a href="https://beautiful-soup-4.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;Beautiful Soup&lt;/a&gt; or &lt;a href="https://requests.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;Requests&lt;/a&gt; don't automatically fetch dynamic content, you're left with two options to complete the task:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Feed the content to a standard library.&lt;/li&gt;
&lt;li&gt;Execute the page's internal JavaScript while scraping.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, not all dynamic pages are the same. Some render content through JS APIs that can be accessed by inspecting the “Network” tab. Others store the JS-rendered content as JSON somewhere in the DOM (Document Object Model).&lt;/p&gt;

&lt;p&gt;The good news is we can parse the JSON string to extract the necessary data in both cases.&lt;/p&gt;
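&lt;p&gt;To make the DOM-embedded case concrete, here's a minimal sketch of locating a JSON blob in a page's HTML and parsing it. The &lt;code&gt;__NEXT_DATA__&lt;/code&gt; script tag and its payload are made up for illustration; real sites use their own markup:&lt;/p&gt;

```python
import json
import re

# Sample HTML with JS-rendered data stored as JSON in the DOM
# (a made-up stand-in for a real page's markup)
html = '''
<script id="__NEXT_DATA__" type="application/json">
{"props": {"products": [{"name": "Mug", "price": 7.5}]}}
</script>
'''

# Locate the JSON blob inside the script tag and parse it
match = re.search(
    r'<script id="__NEXT_DATA__" type="application/json">\s*(\{.*?\})\s*</script>',
    html, re.DOTALL)
data = json.loads(match.group(1))

for product in data["props"]["products"]:
    print(product["name"], product["price"])  # → Mug 7.5
```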

&lt;p&gt;Keep in mind that there are situations in which these solutions are inapplicable. For such websites, you can use headless browsers to render the page and extract your needed data.&lt;/p&gt;

&lt;p&gt;The alternatives to crawling dynamic web pages with Python are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Manually locating the data and parsing JSON string.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Using headless browsers to execute the page's internal JavaScript&lt;/strong&gt; (e.g., Selenium or &lt;a href="https://pypi.org/project/pyppeteer/" rel="noopener noreferrer"&gt;Pyppeteer&lt;/a&gt;, an unofficial Python port of Puppeteer).&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What Is the Easiest Way to Scrape a Dynamic Website in Python? 🤔
&lt;/h2&gt;

&lt;p&gt;It's true that headless browsers can be slow and resource-intensive. However, they lift all restrictions on web scraping. That is, if you don't count anti-bot detection. And you shouldn't, because we've already told you &lt;a href="https://www.zenrows.com/blog/bypass-bot-detection?utm_source=dev.to&amp;amp;utm_medium=social&amp;amp;utm_campaign=republishing"&gt;how to bypass such protections&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Manually locating data and parsing JSON strings presumes that a JSON version of the dynamic data is accessible. Unfortunately, that's not always the case, especially with complex single-page applications (SPAs).&lt;/p&gt;

&lt;p&gt;Not to mention that mimicking API requests isn't scalable: they often require cookies and authentication, alongside other restrictions that can easily get you blocked.&lt;/p&gt;

&lt;p&gt;The best way to scrape dynamic web pages in Python depends on your goals and resources. If you have access to the website's JSON and are looking to extract a single page's data, you may not need a headless browser.&lt;/p&gt;

&lt;p&gt;However, barring this tiny portion of cases, most of the time using Beautiful Soup and Selenium is your best and easiest option.&lt;/p&gt;

&lt;p&gt;Time to get our hands dirty! Get ready to write some code and see precisely how to scrape a dynamic website in Python!&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites 🛠
&lt;/h2&gt;

&lt;p&gt;To follow this tutorial, you'll need to meet some requirements. We'll use the following tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.python.org/downloads/" rel="noopener noreferrer"&gt;Python 3&lt;/a&gt;: The latest version of Python will work best. At the time of writing, that is 3.11.2.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.selenium.dev/downloads/" rel="noopener noreferrer"&gt;Selenium&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pypi.org/project/webdriver-manager/" rel="noopener noreferrer"&gt;Webdriver Manager&lt;/a&gt;: This will ensure that the browser's and the driver's versions match. You don't have to manually download the WebDriver for this purpose.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;selenium webdriver-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You now have everything you need. Let's go!&lt;/p&gt;




&lt;h2&gt;
  
  
  Method #1: Dynamic Web Scraping With Python Using Beautiful Soup 😋
&lt;/h2&gt;

&lt;p&gt;Beautiful Soup is arguably the most popular Python library for crawling HTML data.&lt;/p&gt;

&lt;p&gt;To extract information with it, we need our target page's HTML string. However, dynamic content is not directly present in a website's static HTML. This means that &lt;strong&gt;Beautiful Soup can't access JavaScript-generated data.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's a solution: it's possible to &lt;a href="https://www.zenrows.com/blog/mastering-web-scraping-in-python-from-zero-to-hero#xhr-requests?utm_source=dev.to&amp;amp;utm_medium=social&amp;amp;utm_campaign=republishing"&gt;extract data from XHR requests&lt;/a&gt; if the website loads content using an AJAX request.&lt;/p&gt;
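&lt;p&gt;For instance, many AJAX endpoints return HTML fragments: find the endpoint's URL in the DevTools “Network” tab, fetch it with &lt;code&gt;Requests&lt;/code&gt;, and feed the response to Beautiful Soup. The fragment below is a made-up example of such a payload:&lt;/p&gt;

```python
from bs4 import BeautifulSoup

# An HTML fragment like one an AJAX endpoint might return; in practice
# you'd get this from the endpoint with requests.get() (made-up sample)
ajax_response = '''
<ul>
  <li class="product"><h4>Mug</h4><span class="price">$7.50</span></li>
  <li class="product"><h4>Cap</h4><span class="price">$12.00</span></li>
</ul>
'''

soup = BeautifulSoup(ajax_response, 'html.parser')
for item in soup.select('li.product'):
    # each product's name and price live in child elements
    print(item.find('h4').text, item.select_one('.price').text)
```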




&lt;h2&gt;
  
  
  Method #2: Scraping Dynamic Web Pages in Python Using Selenium ✂
&lt;/h2&gt;

&lt;p&gt;To understand how Selenium helps you scrape dynamic websites, first, we need to inspect how regular libraries, such as &lt;code&gt;Requests&lt;/code&gt;, interact with them.&lt;/p&gt;

&lt;p&gt;We'll use &lt;a href="https://angular.io/" rel="noopener noreferrer"&gt;Angular&lt;/a&gt; as our target website:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69aa0hsgac002lzlcg9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69aa0hsgac002lzlcg9e.png" alt="Angular website" width="750" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's try scraping it with &lt;code&gt;Requests&lt;/code&gt; and see the result. Before that, we have to install the &lt;code&gt;Requests&lt;/code&gt; library that can be executed using the &lt;code&gt;pip&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;requests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what our code looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt; 

&lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://angular.io/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; 

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

&lt;span class="n"&gt;html&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt; 

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;html&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, only the following HTML was extracted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;noscript&amp;gt;&lt;/span&gt; 
    &lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"background-sky hero"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/div&amp;gt;&lt;/span&gt; 
    &lt;span class="nt"&gt;&amp;lt;section&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"intro"&lt;/span&gt; &lt;span class="na"&gt;style=&lt;/span&gt;&lt;span class="s"&gt;"text-shadow: 1px 1px #1976d2;"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt; 
        &lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"hero-logo"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/div&amp;gt;&lt;/span&gt; 
        &lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"homepage-container"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt; 
            &lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"hero-headline"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;The modern web&lt;span class="nt"&gt;&amp;lt;br&amp;gt;&lt;/span&gt;developer's platform&lt;span class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/span&gt; 
        &lt;span class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/span&gt; 
    &lt;span class="nt"&gt;&amp;lt;/section&amp;gt;&lt;/span&gt; 
    &lt;span class="nt"&gt;&amp;lt;h2&lt;/span&gt; &lt;span class="na"&gt;style=&lt;/span&gt;&lt;span class="s"&gt;"color: red; margin-top: 40px; position: relative; text-align: center; text-shadow: 1px 1px #fafafa; border-top: none;"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt; 
        &lt;span class="nt"&gt;&amp;lt;b&amp;gt;&amp;lt;i&amp;gt;&lt;/span&gt;This website requires JavaScript.&lt;span class="nt"&gt;&amp;lt;/i&amp;gt;&amp;lt;/b&amp;gt;&lt;/span&gt; 
    &lt;span class="nt"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt; 
&lt;span class="nt"&gt;&amp;lt;/noscript&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, inspecting the website shows more content than what was retrieved.&lt;/p&gt;

&lt;p&gt;This is what happened when we disabled JavaScript on the page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8bk7fw31eby9sym0jzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8bk7fw31eby9sym0jzv.png" alt="Angular website when JavaScript is disabled" width="750" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's precisely what &lt;code&gt;Requests&lt;/code&gt; was able to return. The library reports no errors because it fetched the website's static HTML, which is exactly what it was created to do.&lt;/p&gt;

&lt;p&gt;In this case, aiming for the same result as what's displayed on the website is impossible. Can you guess why? That's right, it's because this is a dynamic web page.&lt;/p&gt;

&lt;p&gt;To access the entire content and extract our target data, we must render the JavaScript.&lt;/p&gt;

&lt;p&gt;It's time to make it right with Selenium dynamic web scraping.&lt;/p&gt;

&lt;p&gt;We'll use the following script to quickly crawl our target website:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;webdriver&lt;/span&gt; 
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium.webdriver.chrome.service&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Service&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;ChromeService&lt;/span&gt; 
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;webdriver_manager.chrome&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChromeDriverManager&lt;/span&gt; 

&lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://angular.io/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; 

&lt;span class="n"&gt;driver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;webdriver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Chrome&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ChromeService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; 
    &lt;span class="nc"&gt;ChromeDriverManager&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;install&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt; 

&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_source&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There you have it! The page's complete HTML, including the dynamic web content.&lt;/p&gt;

&lt;p&gt;Congratulations! You've just scraped your first dynamic website.&lt;/p&gt;
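&lt;p&gt;If you prefer Beautiful Soup's API for parsing, option one from earlier applies here too: hand the rendered HTML to it. A minimal sketch, using a short stand-in string in place of the real &lt;code&gt;driver.page_source&lt;/code&gt;:&lt;/p&gt;

```python
from bs4 import BeautifulSoup

# Stand-in for driver.page_source from the Selenium snippet above
page_source = '''
<div class="text-container"><h2>Develop across all platforms</h2></div>
<div class="text-container"><h2>Speed and performance</h2></div>
'''

# Parse the rendered HTML with Beautiful Soup instead of Selenium's selectors
soup = BeautifulSoup(page_source, 'html.parser')
headings = [h2.text for h2 in soup.select('.text-container h2')]
print(headings)  # → ['Develop across all platforms', 'Speed and performance']
```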




&lt;h2&gt;
  
  
  Selecting Elements in Selenium 👈
&lt;/h2&gt;

&lt;p&gt;There are different ways to access elements in Selenium. We discuss this matter in depth in our &lt;a href="https://www.zenrows.com/blog/web-scraping-with-selenium-in-python#finding-elements-and-content?utm_source=dev.to&amp;amp;utm_medium=social&amp;amp;utm_campaign=republishing"&gt;web scraping with Selenium in Python&lt;/a&gt; guide.&lt;/p&gt;

&lt;p&gt;Still, we'll explain this with an example. Let's select only the H2s on our target website:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk8s5gabvhrxyft7lmnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk8s5gabvhrxyft7lmnb.png" alt="Selecting the H2 elements using chrome devtools" width="750" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we get to that, we need to inspect the website and identify the location of the elements we want to extract.&lt;/p&gt;

&lt;p&gt;We can see that &lt;code&gt;class="text-container"&lt;/code&gt; is common to those headlines. We'll copy that class name, select the matching elements with ChromeDriver, and then grab the H2 inside each one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqdb20alde2are6yzohh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqdb20alde2are6yzohh.png" alt="Selecting a container using chrome devtools" width="750" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Paste this code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;webdriver&lt;/span&gt; 
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium.webdriver.common.by&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;By&lt;/span&gt; 
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium.webdriver.chrome.service&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Service&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;ChromeService&lt;/span&gt; 
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;webdriver_manager.chrome&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChromeDriverManager&lt;/span&gt; 

&lt;span class="c1"&gt;# instantiate options 
&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;webdriver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ChromeOptions&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; 

&lt;span class="c1"&gt;# run browser in headless mode 
&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headless&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt; 

&lt;span class="c1"&gt;# instantiate driver 
&lt;/span&gt;&lt;span class="n"&gt;driver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;webdriver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Chrome&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ChromeService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; 
    &lt;span class="nc"&gt;ChromeDriverManager&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;install&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

&lt;span class="c1"&gt;# load website 
&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://angular.io/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; 

&lt;span class="c1"&gt;# get the entire website content 
&lt;/span&gt;&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

&lt;span class="c1"&gt;# select elements by class name 
&lt;/span&gt;&lt;span class="n"&gt;elements&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find_elements&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;By&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CLASS_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;text-container&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;elements&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; 
    &lt;span class="c1"&gt;# select H2s, within element, by tag name 
&lt;/span&gt;    &lt;span class="n"&gt;heading&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find_element&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;By&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TAG_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;h2&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt; 
    &lt;span class="c1"&gt;# print H2s 
&lt;/span&gt;    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;heading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll get the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="s2"&gt;"DEVELOP ACROSS ALL PLATFORMS"&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;span class="s2"&gt;"SPEED &amp;amp; PERFORMANCE"&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;span class="s2"&gt;"INCREDIBLE TOOLING"&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;span class="s2"&gt;"LOVED BY MILLIONS"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nice and easy! You can now scrape dynamic sites with Selenium effortlessly.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Scrape Infinite Scroll Web Pages With Selenium ♾
&lt;/h2&gt;

&lt;p&gt;Some dynamic pages load more content as users scroll down to the bottom of the page. These are known as “infinite scroll websites.” Crawling them is a bit more challenging: we need to instruct our spider to scroll to the bottom, wait for the new content to load, and only then begin scraping.&lt;/p&gt;

&lt;p&gt;Let's understand this with an example using &lt;a href="https://scrapingclub.com/exercise/list_infinite_scroll/" rel="noopener noreferrer"&gt;Scraping Club&lt;/a&gt;'s sample page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pow6xcdmrjt4q103ogr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pow6xcdmrjt4q103ogr.png" alt="Scraping Club sample page" width="750" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This script will scroll through the first 20 results and extract their titles:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;webdriver&lt;/span&gt; 
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium.webdriver.common.by&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;By&lt;/span&gt; 
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium.webdriver.chrome.service&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Service&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;ChromeService&lt;/span&gt; 
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;webdriver_manager.chrome&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChromeDriverManager&lt;/span&gt; 
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt; 

&lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;webdriver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ChromeOptions&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; 
&lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headless&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt; 
&lt;span class="n"&gt;driver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;webdriver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Chrome&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ChromeService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; 
    &lt;span class="nc"&gt;ChromeDriverManager&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;install&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

&lt;span class="c1"&gt;# load target website 
&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://scrapingclub.com/exercise/list_infinite_scroll/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; 

&lt;span class="c1"&gt;# get website content 
&lt;/span&gt;&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

&lt;span class="c1"&gt;# instantiate items 
&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt; 

&lt;span class="c1"&gt;# instantiate height of webpage 
&lt;/span&gt;&lt;span class="n"&gt;last_height&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute_script&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;return document.body.scrollHeight&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

&lt;span class="c1"&gt;# set target count 
&lt;/span&gt;&lt;span class="n"&gt;itemTargetCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; 

&lt;span class="c1"&gt;# scroll to bottom of webpage 
&lt;/span&gt;&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;itemTargetCount&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; 
    &lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute_script&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;window.scrollTo(0, document.body.scrollHeight);&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

    &lt;span class="c1"&gt;# wait for content to load 
&lt;/span&gt;    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

    &lt;span class="n"&gt;new_height&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute_script&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;return document.body.scrollHeight&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;new_height&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;last_height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; 
        &lt;span class="k"&gt;break&lt;/span&gt; 

    &lt;span class="n"&gt;last_height&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;new_height&lt;/span&gt; 

    &lt;span class="c1"&gt;# select elements by XPath 
&lt;/span&gt;    &lt;span class="n"&gt;elements&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find_elements&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;By&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;XPATH&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;//div/h4/a&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
    &lt;span class="n"&gt;h4_texts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;element&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;element&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;elements&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; 

    &lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;extend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;h4_texts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

    &lt;span class="c1"&gt;# print title 
&lt;/span&gt;    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;h4_texts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Remark:&lt;/strong&gt; It's important to set a target count for infinite-scroll pages so your script has a definite stopping point.&lt;/p&gt;
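&lt;p&gt;To see why the height check matters for termination, here's a minimal pure-Python sketch of the loop's exit logic, with a hypothetical &lt;code&gt;FakeDriver&lt;/code&gt; stub standing in for a real WebDriver (no browser required):&lt;/p&gt;

```python
class FakeDriver:
    """Stub mimicking a page whose height stops growing after 3 content loads."""
    def __init__(self):
        self.height = 1000
        self.loads = 0

    def execute_script(self, script):
        if script.startswith("window.scrollTo") and self.loads < 3:
            self.loads += 1       # each scroll triggers at most 3 content loads
            self.height += 500
        return self.height

driver = FakeDriver()
items = []
item_target_count = 20
last_height = driver.execute_script("return document.body.scrollHeight")

while item_target_count > len(items):
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break                     # page stopped growing: nothing left to load
    last_height = new_height      # note the single '=': this must reassign
    items = [f"item {i}" for i in range(driver.loads * 4)]

print(len(items))  # 12: the height check exits before the target of 20
```

&lt;p&gt;If &lt;code&gt;last_height&lt;/code&gt; were never reassigned, the two heights would never match once the page grows, and the loop would spin forever on any page with fewer items than the target.&lt;/p&gt;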

&lt;p&gt;The scraper above uses yet another selector: &lt;code&gt;By.XPATH&lt;/code&gt;. It locates elements via an XPath expression instead of the classes and IDs we used before. To get one, inspect the page, right-click the &lt;code&gt;&amp;lt;div&amp;gt;&lt;/code&gt; containing the elements you want to scrape, and select Copy &amp;gt; Copy XPath.&lt;/p&gt;

&lt;p&gt;Running the script, your output should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;'Short&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Dress'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Patterned&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Slacks'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Short&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Chiffon&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Dress'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Off-the-shoulder&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Dress'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And there you have it, the H4s of the first 20 products!&lt;/p&gt;
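&lt;p&gt;If the &lt;code&gt;//div/h4/a&lt;/code&gt; expression looks opaque, it simply means "any &lt;code&gt;&amp;lt;a&amp;gt;&lt;/code&gt; inside an &lt;code&gt;&amp;lt;h4&amp;gt;&lt;/code&gt; inside a &lt;code&gt;&amp;lt;div&amp;gt;&lt;/code&gt;, anywhere in the document". You can try the same idea on a toy snippet with Python's standard library, which supports a subset of XPath (the markup below is made up for illustration):&lt;/p&gt;

```python
import xml.etree.ElementTree as ET

# Hypothetical markup mimicking a product grid
html = """
<main>
  <div><h4><a href="/p/1">Short Dress</a></h4></div>
  <div><h4><a href="/p/2">Patterned Slacks</a></h4></div>
  <div><p>not a product title</p></div>
</main>
"""

root = ET.fromstring(html)
# './/div/h4/a' is ElementTree's spelling of '//div/h4/a'
titles = [a.text for a in root.findall(".//div/h4/a")]
print(titles)  # ['Short Dress', 'Patterned Slacks']
```

&lt;p&gt;Note how the third &lt;code&gt;&amp;lt;div&amp;gt;&lt;/code&gt; is skipped: the path only matches anchors nested exactly under an &lt;code&gt;&amp;lt;h4&amp;gt;&lt;/code&gt;.&lt;/p&gt;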

&lt;p&gt;&lt;strong&gt;Remark:&lt;/strong&gt; Dynamic web scraping with Selenium can get tricky across &lt;a href="https://www.selenium.dev/downloads/" rel="noopener noreferrer"&gt;Selenium releases&lt;/a&gt;: the old &lt;code&gt;find_elements_by_xpath&lt;/code&gt;-style helpers, for example, were removed in Selenium 4 in favor of &lt;code&gt;find_elements(By.XPATH, ...)&lt;/code&gt;. Be sure to review the changelog before upgrading.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion 🤝
&lt;/h2&gt;

&lt;p&gt;Dynamic web pages are everywhere. Thus, there's a high enough chance you'll encounter them in your data extraction efforts. Remember that getting familiar with their structure will help you identify the best approach for retrieving your target information.&lt;/p&gt;

&lt;p&gt;All the methods we explored in this article come with their own limitations and trade-offs. So, take a look at what &lt;a href="https://www.zenrows.com/?utm_source=dev.to&amp;amp;utm_medium=social&amp;amp;utm_campaign=republishing"&gt;ZenRows&lt;/a&gt; has to offer. The solution lets you scrape dynamic websites with a simple API call. Try it for free today and save yourself time and resources.&lt;/p&gt;




&lt;p&gt;Thanks for reading! This article was originally published on ZenRows: &lt;a href="https://www.zenrows.com/blog/dynamic-web-pages-scraping-python?utm_source=dev.to&amp;amp;utm_medium=social&amp;amp;utm_campaign=republishing"&gt;Dynamic Web Pages Scraping with Python: Guide to Scrape All Content&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you liked this guide, please ❤️ like the article and subscribe for more! ⬇⬇&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>webscraping</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
