<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: David Hernandez Torres</title>
    <description>The latest articles on DEV Community by David Hernandez Torres (@david_hdz).</description>
    <link>https://dev.to/david_hdz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1719513%2F3c867a95-3cd4-4aa9-8bc8-5c3eb01a1953.jpg</url>
      <title>DEV Community: David Hernandez Torres</title>
      <link>https://dev.to/david_hdz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/david_hdz"/>
    <language>en</language>
    <item>
      <title>Scraping Twitter comments with Selenium (Python): a step-by-step guide</title>
      <dc:creator>David Hernandez Torres</dc:creator>
      <pubDate>Wed, 03 Jul 2024 00:40:34 +0000</pubDate>
      <link>https://dev.to/david_hdz/scraping-twitter-comments-with-seleniumpython-step-by-step-guide-d51</link>
      <guid>https://dev.to/david_hdz/scraping-twitter-comments-with-seleniumpython-step-by-step-guide-d51</guid>
      <description>&lt;p&gt;In today's data-filled world, everyone uses social media to express themselves and contribute to the public voice. This valuable information is publicly available to anyone: you can gather a lot of insights, feedback, and very good advice from public opinion.&lt;br&gt;
That is why I bring you this step-by-step guide to start scraping comments on Twitter without much work.&lt;/p&gt;

&lt;p&gt;What you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A text editor&lt;/li&gt;
&lt;li&gt;A programming language that Selenium supports (I will be using Python)&lt;/li&gt;
&lt;li&gt;A Twitter account (preferably not your main one)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Warning
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Using web scraping in the wrong way can be unethical and may violate a site's terms of service, which can lead to permanent IP address bans and more. Use these web scraping tools with no bad intentions.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: The setup
&lt;/h2&gt;

&lt;p&gt;To start, we will create a new directory with a virtual environment and activate it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

 &amp;gt; C:\Users\Blogs\Webscraping&amp;gt;  python -m venv .
 &amp;gt; C:\Users\Blogs\Webscraping&amp;gt;  Scripts\activate



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This can vary depending on your operating system; if you are not familiar with Python and virtual environments, &lt;a href="https://www.freecodecamp.org/news/how-to-setup-virtual-environments-in-python/" rel="noopener noreferrer"&gt;refer here&lt;/a&gt; for more guidance.&lt;/p&gt;
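&lt;p&gt;For example, on macOS or Linux the equivalent would look roughly like this (a sketch; it assumes python3 is on your PATH and names the environment folder "venv"):&lt;/p&gt;

```shell
# Create and activate a virtual environment on macOS/Linux
python3 -m venv venv
source venv/bin/activate
```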

&lt;p&gt;Okay, now that we have our environment running, let's install Selenium, our main dependency.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;gt; pip install selenium&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now that all of our tools are ready, let's write some code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Our code process
&lt;/h2&gt;

&lt;p&gt;Selenium is a free tool for automating web applications. Here we will use Selenium WebDriver, a tool that lets you drive different browsers from scripts. In our case, we will be using Chrome.&lt;/p&gt;

&lt;p&gt;Our main process will look like this:&lt;br&gt;
(main.py)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from twitter import Twitter
from selenium import webdriver

# Desired post to scrape comments from
URL_POST = "**"


# Account credentials
username = "**"
email = "**"
password = "**"

driver = webdriver.Chrome()

twitter = Twitter(driver)
twitter.login(email, username, password)
twitter.get_post(URL_POST)
driver.quit()


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Selenium WebDriver lets us do a lot in a browser, but let's leave that for the next step. For now, I recommend creating a new Twitter account and finding a post whose comments you would like to scrape. Yes, I know, we haven't defined things like the Twitter class yet, but for now it is enough to know that we pass our driver to it as an argument.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: The Twitter class
&lt;/h2&gt;

&lt;p&gt;This will be the largest and most complex part of our program. It covers three methods: login, get_post, and scrape.&lt;/p&gt;

&lt;p&gt;We will first define a constructor that takes one argument and sets up two attributes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;driver: the Selenium driver we created in main.py&lt;/li&gt;
&lt;li&gt;wait: a useful helper for locating HTML elements that have not loaded yet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(twitter.py)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sys
from csv_exports import twitter_post_to_csv
from time import sleep
from selenium.webdriver.common.by import By
from useful_functions import validate_span
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


class Twitter:
    def __init__(self, driver):
        self.driver = driver
        self.wait = WebDriverWait(self.driver, 10)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Login Method&lt;/strong&gt;&lt;br&gt;
To access Twitter comments, we need to be logged in, and unfortunately, a web driver does not remember credentials. So let's start by automating our login process...&lt;/p&gt;

&lt;p&gt;The code&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`def login(self, email, username, password):
    drive = self.driver
    wait = self.wait


    ##Going to the login page URL
    drive.get("https://x.com/i/flow/login")



    ##Sends email credential to the first form input
    input_email = wait.until(EC.presence_of_element_located((By.NAME, "text")))
    input_email.clear()
    input_email.send_keys(email)
    sleep(3)
    ##Submits form
    button_1 = drive.find_elements(By.CSS_SELECTOR, "button div span span")
    button_1[1].click()

    ##Sends username credential to the second form input
    input_verification = wait.until(EC.presence_of_element_located((By.NAME, "text")))
    input_verification.clear()
    sleep(3)
    input_verification.send_keys(username)
    ##Submits form
    button_2 = drive.find_element(By.CSS_SELECTOR, "button div span span")
    sleep(3)
    button_2.click()

    ##Sends username credential to the form input
    input_password = wait.until(EC.presence_of_element_located((By.NAME, "password")))
    input_password.clear()
    sleep(3)
    input_password.send_keys(password)
    sleep(3)

    #Submits last form
    button_3 = drive.find_element(By.CSS_SELECTOR, "button div span span")
    button_3.click()
    sleep(5)`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here are the forms your program will be filling up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The first form:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyst9vzxlslc35liahuy7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyst9vzxlslc35liahuy7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;The second form&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx85hy91hg86g1uagg04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx85hy91hg86g1uagg04.png" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;The third form&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vamke0txp51maler9pm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vamke0txp51maler9pm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BREAKDOWN&lt;/p&gt;

&lt;p&gt;Our method waits for an element, such as input_email, to be present. Since this is our first request to the URL, the page's full HTML needs to load before we can interact with it.&lt;/p&gt;

&lt;p&gt;We use the find_elements() method from the web driver to locate the inputs in the HTML.&lt;/p&gt;

&lt;p&gt;Our method goes through each of the forms one by one, typing and submitting with the .send_keys() and .click() methods. We also use .clear() to make sure the input box does not already contain text when the page loads.&lt;/p&gt;

&lt;p&gt;We have successfully logged in.&lt;/p&gt;

&lt;p&gt;NOTE&lt;br&gt;
The second form only appears after you have used a Selenium web driver to interact with the Twitter login page a few times. Twitter detects when a bot comes in and types way too fast, so this second page appears only when a bot is suspected. After a few runs of this program, it will always show up when logging in to your scraping account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scrape&lt;/strong&gt;&lt;br&gt;
This is the method that retrieves a post's comments. Here we face big limitations and some problems to solve. The first is that there is no direct way to target only Twitter comments: comments live inside &amp;lt;span&amp;gt; elements, which Twitter likes to use a lot, for lots of things.&lt;/p&gt;

&lt;p&gt;The best way I could find to get Twitter comments is &lt;code&gt;drive.find_elements(By.XPATH, "//span[@class='css-1jxf684 r-bcqeeo r-1ttztb7 r-qvutc0 r-poiln3']")&lt;/code&gt;, in other words, matching spans by their CSS classes with XPath. This returns a lot of unnecessary data, so we have to do a lot of cleaning.&lt;/p&gt;

&lt;p&gt;The second problem, a little less severe, is Twitter's dynamic rendering.&lt;br&gt;
When scrolling down or up, Twitter loads or removes HTML from the current document, so in order to get every comment possible, we have to go slowly and extract elements before we scroll again.&lt;/p&gt;

&lt;p&gt;Now that we have identified these problems, let's get to work.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def scrape(self):
    drive = self.driver
    containers = drive.find_elements(By.XPATH,
                                     "//span[@class='css-1jxf684 r-bcqeeo r-1ttztb7 r-qvutc0 r-poiln3']")
    # Scrape data and store it in a list
    scraped_data = []
    temporary_1 = ""
    temporary_2 = ""
    index = 0
    index_dict = 0
    while index &amp;lt; len(containers):
        text = containers[index].text
        if text:
            if text[0] == "@":
                temporary_1 = text
                index_dict = index_dict + 1
            if validate_span(text) is True and index_dict == 1:
                temporary_2 = text
                arr_push = {
                    "username": temporary_1,
                    "post": temporary_2
                }
                scraped_data.append(arr_push)
                temporary_2, temporary_1 = "", ""
                index_dict = 0
        index = index + 1
    return scraped_data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This code retrieves all comments from the currently loaded document.&lt;br&gt;
By looping with a few conditions and helper methods like validate_span(), we are able to clean the data reliably. If you encounter a problem in the algorithm, feel free to let me know.&lt;/p&gt;

&lt;p&gt;The validate_span() function:&lt;br&gt;
(useful_functions.py)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def validate_span(span):
    # Filter out dots, @handles, and numeric counts
    if span[0] == "." or span[0] == "·":
        return False
    if span[0] == "@":
        return False
    if validate_number(span):
        return True
    return False


def validate_number(string):
    # Returns False when the string is just a count like "1.2k"
    if string[len(string) - 1] == "k":
        string = string[0: len(string) - 1]
    string = string.replace(".", "")
    index = 0
    for i in string:
        if (i == "1" or i == "2" or i == "3" or i == "4" or i == "5"
                or i == "6" or i == "7" or i == "8" or i == "9" or i == "0"):
            index = index + 1

    if len(string) &amp;lt;= index:
        return False
    else:
        return True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Most of our unwanted elements are counts (like reply or like counts), random dots, and whitespace. With a few conditions, cleaning them up is an easy task.&lt;/p&gt;
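&lt;p&gt;To see the cleaning helpers in action without a browser, here is a quick standalone check. The two functions are copied from useful_functions.py above; sample_spans is a made-up guess at the kind of text the XPath query returns:&lt;/p&gt;

```python
# Standalone check of the cleaning helpers, no Selenium required.
# validate_span / validate_number are copied from useful_functions.py;
# sample_spans is a made-up example of span texts from the XPath query.

def validate_span(span):
    # Filter out dots, @handles, and numeric counts
    if span[0] == "." or span[0] == "·":
        return False
    if span[0] == "@":
        return False
    if validate_number(span):
        return True
    return False


def validate_number(string):
    # Returns False when the string is just a count like "1.2k"
    if string[len(string) - 1] == "k":
        string = string[0: len(string) - 1]
    string = string.replace(".", "")
    index = 0
    for i in string:
        if (i == "1" or i == "2" or i == "3" or i == "4" or i == "5"
                or i == "6" or i == "7" or i == "8" or i == "9" or i == "0"):
            index = index + 1

    if len(string) <= index:
        return False
    else:
        return True


sample_spans = ["@user1", "Great write-up!", "1.2k", "·", "42", "@user2", "Nice thread"]
kept = [s for s in sample_spans if validate_span(s)]
print(kept)  # ['Great write-up!', 'Nice thread']
```

&lt;p&gt;Handles, counts like "1.2k", and separator dots are all filtered out, leaving only comment-like text.&lt;/p&gt;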

&lt;p&gt;&lt;strong&gt;The get_post method&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the method where we loop until we get to the bottom of the page,&lt;br&gt;
using the scraping method in every iteration to make sure all data is scraped.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_post(self, url):
    drive = self.driver

    drive.get(url)
    sleep(3)
    data = []
    # Measure the height of one loaded tweet container to estimate a scroll step
    javascript = (
        "let inner_divs = document.querySelectorAll('[data-testid=\"cellInnerDiv\"]');"
        "window.scrollTo(0, inner_divs[0].scrollHeight);"
        "return inner_divs[2].scrollHeight;"
    )

    previous_height = drive.execute_script("return document.body.scrollHeight")
    avg_scroll_height = int(drive.execute_script(javascript)) * 13
    while True:
        data = data + self.scrape()
        drive.execute_script("window.scrollTo(0, (document.body.scrollHeight + " + str(avg_scroll_height) + "));")
        sleep(3)
        new_height = drive.execute_script("return document.body.scrollHeight")
        if new_height == previous_height:
            break
        previous_height = new_height
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;By injecting JavaScript into the driver and looping until the document's scroll height stops changing, we are able to scrape data from every part of the page.&lt;/p&gt;
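&lt;p&gt;Selenium aside, the stopping condition boils down to "keep scrolling until the page height stops changing". Here is a browser-free sketch of that loop, where a made-up FakePage class stands in for the driver:&lt;/p&gt;

```python
# Browser-free sketch of the stopping condition used in get_post.
# FakePage is a hypothetical stand-in for the driver: each "scroll"
# swaps in one more batch of comments until the feed is exhausted
# and the height stops growing.

class FakePage:
    def __init__(self, batches):
        self.batches = batches      # comment batches still waiting to load
        self.loaded = []            # comments currently in the document
        self.height = 1000          # pretend document.body.scrollHeight

    def scroll(self):
        if self.batches:
            self.loaded = self.batches.pop(0)   # Twitter swaps the visible batch
            self.height += 500                  # page grew: more to scrape

    def scrape(self):
        return list(self.loaded)


page = FakePage([["@a", "first"], ["@b", "second"], ["@c", "third"]])
data = []
previous_height = page.height
while True:
    data = data + page.scrape()     # extract before scrolling, as in get_post
    page.scroll()
    new_height = page.height
    if new_height == previous_height:
        break                       # height unchanged: bottom of the page
    previous_height = new_height

print(data)  # ['@a', 'first', '@b', 'second', '@c', 'third']
```

&lt;p&gt;Just like in get_post, we extract before each scroll and break as soon as a scroll no longer changes the height.&lt;/p&gt;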

&lt;p&gt;Finally, we can do something useful with the data. In my case, I am just going to print it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;for item in data:&lt;br&gt;
    print(item)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now all you have to do is change your desired URL and run the main file, then wait for your data to be returned.&lt;br&gt;
And you've done it! You have successfully created a web scraper for Twitter. Needless to say, use web scraping technologies in legal and ethical ways if you don't want to get in trouble...&lt;/p&gt;

&lt;p&gt;In conclusion, Twitter comments can be scraped very efficiently, but it should always be done legally and correctly. Beyond that, Twitter data is very valuable and can help you understand public opinion on a topic.&lt;/p&gt;

</description>
      <category>selenium</category>
      <category>webscraping</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
