DEV Community

Jinu Kim

Super smooth: Use Trae to track which executive orders Trump has signed!

A new leader always brings a fiery start, and the new U.S. President, Donald Trump, is no exception. Every day, the White House website—built on WordPress—gets updated with numerous newly signed executive orders. For instance, on January 22nd alone, eight orders were published, almost rivaling the number of his tweets.

Today, let's use ByteDance's newly launched Trae IDE to fetch all the signed executive orders in one go—perfect for staying updated on The Donald's latest moves or simply satisfying your curiosity.

Download and Installation

Head over to the official website (https://www.trae.ai/?utm_source=juejin&utm_medium=juejin_trae&utm_campaign=techcall) to download the Trae IDE installer. Currently, it supports macOS only. If the site isn't reachable from your region, you'll need a VPN or proxy; make sure to add *.trae.ai to the user rules of your PAC configuration. Once that's set up, you'll be able to log in to the IDE without any issues.

Implementation of the Requirement

Here’s the context: I’m a backend developer, so my primary goal is to test whether Trae can genuinely assist me in coding more efficiently and improve productivity in real-world project development.

Create a Project and Python Virtual Environment

Essentially, it’s as simple as opening a new folder. If you’re familiar with VS Code, this process will feel very intuitive.

Then, open the terminal and use uv to create a Python virtual environment:

uv init .  
uv venv --python 3.12.8  

Pro Tip: If you’re completely new to this, don’t worry! You can simply ask the chat panel on the right side of Trae for help—it’s there to guide you step by step.

Submitting the Requirement Prompt


Requirement Prompt:

I need to retrieve the list of all articles from the webpage www.whitehouse.gov/presidential-actions, and the titles of the articles must be translated into Chinese.

HTML structure of an article:

Here is an example of the HTML code for one article (please verify the actual structure by inspecting the webpage):

<div class="wp-block-group wp-block-whitehouse-post-template__content has-global-padding is-layout-constrained wp-container-core-group-is-layout-6 wp-block-group-is-layout-constrained">  
  <h2 class="wp-block-post-title has-heading-4-font-size">
    <a href="https://www.whitehouse.gov/presidential-actions/2025/01/executive-grant-of-clemency-for-terence-sutton/" target="_self">
      Executive Grant of Clemency for Terence Sutton
    </a>
  </h2>   
  <div class="wp-block-group wp-block-whitehouse-post-template__meta is-nowrap is-layout-flex wp-container-core-group-is-layout-5 wp-block-group-is-layout-flex">
    <div class="taxonomy-category wp-block-post-terms">
      <a href="https://www.whitehouse.gov/presidential-actions/" rel="tag">Presidential Actions</a>
    </div>  
    <div class="wp-block-post-date">
      <time datetime="2025-01-22T18:19:33-05:00">January 22, 2025</time>
    </div>
  </div> 
</div>
  • Required data to extract for each article:
    • Title of the article
    • Link to the article
    • Publication date
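As a sanity check of the selectors described above, here is a minimal BeautifulSoup sketch against a trimmed copy of the sample markup (class lists shortened for readability; verify against the live page before relying on them):

```python
from bs4 import BeautifulSoup

SAMPLE = """
<div class="wp-block-group wp-block-whitehouse-post-template__content">
  <h2 class="wp-block-post-title">
    <a href="https://www.whitehouse.gov/presidential-actions/2025/01/executive-grant-of-clemency-for-terence-sutton/">
      Executive Grant of Clemency for Terence Sutton
    </a>
  </h2>
  <div class="wp-block-post-date">
    <time datetime="2025-01-22T18:19:33-05:00">January 22, 2025</time>
  </div>
</div>
"""

soup = BeautifulSoup(SAMPLE, "html.parser")
# class_ matches any element whose class list contains this token
article = soup.find("div", class_="wp-block-whitehouse-post-template__content")
link = article.find("h2", class_="wp-block-post-title").find("a")

title = link.text.strip()      # article title
url = link["href"]             # article link
date = article.find("time")["datetime"]  # publication date

print(title, url, date)
```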

Pagination:

The article list spans multiple pages. Below is an example of the HTML structure for pagination (please verify the actual structure by inspecting the webpage):

<nav class="wp-block-query-pagination is-layout-flex wp-block-query-pagination-is-layout-flex" aria-label="Pagination">   
  <div class="wp-block-query-pagination-numbers">
    <span data-wp-key="index-0" aria-current="page" class="page-numbers current">1</span> 
    <a data-wp-key="index-1" data-wp-on--click="core/query::actions.navigate" class="page-numbers" href="https://www.whitehouse.gov/presidential-actions/page/2/">2</a> 
    <a data-wp-key="index-2" data-wp-on--click="core/query::actions.navigate" class="page-numbers" href="https://www.whitehouse.gov/presidential-actions/page/3/">3</a> 
    <span data-wp-key="index-3" class="page-numbers dots"></span> 
    <a data-wp-key="index-4" data-wp-on--click="core/query::actions.navigate" class="page-numbers" href="https://www.whitehouse.gov/presidential-actions/page/6/">6</a>
  </div>  
  <a data-wp-key="query-pagination-next" data-wp-on--click="core/query::actions.navigate" data-wp-on-async--mouseenter="core/query::actions.prefetch" data-wp-watch="core/query::callbacks.prefetch" href="https://www.whitehouse.gov/presidential-actions/page/2/" class="wp-block-query-pagination-next">Next</a> 
</nav>
  • Required functionality:
    • Automatically navigate through all pages to retrieve the complete list of articles.
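A natural stop condition is the absence of the "Next" link in that pagination block. A minimal sketch (markup trimmed from the sample above; the helper name is my own):

```python
from bs4 import BeautifulSoup

PAGE_WITH_NEXT = (
    '<nav class="wp-block-query-pagination">'
    '<a href="https://www.whitehouse.gov/presidential-actions/page/2/" '
    'class="wp-block-query-pagination-next">Next</a></nav>'
)
LAST_PAGE = '<nav class="wp-block-query-pagination"></nav>'

def next_page_url(html):
    """Return the URL of the next page, or None on the last page."""
    soup = BeautifulSoup(html, "html.parser")
    nxt = soup.find("a", class_="wp-block-query-pagination-next")
    return nxt["href"] if nxt else None

print(next_page_url(PAGE_WITH_NEXT))
print(next_page_url(LAST_PAGE))
```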

Dependency Installation and Code Execution:

  • Install the necessary dependencies and execute the code in the project environment.
  • Example: Open the project folder, and click "Run" to install dependencies.

Note:

The above HTML structures should be verified and copied from the actual webpage by inspecting the elements. Clear and precise requirements will lead to higher efficiency.


This prompt provides a clear structure for extracting the required data and automating the process of retrieving and translating article titles.

Clicking "Apply" will create an actual executable code file.

Then you can run the code file using:

python whitehouse_scraper.py

The script will execute, and it will also print logs during the process.

Since there was no explicit instruction on where to store the data, it was automatically saved into a .txt file, as shown in the image.


Of course, the code can be further optimized to store the data in a database.
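As a sketch of that optimization, the scraped items could be written to SQLite from the standard library. The table name, schema, and helper are my own choices, not part of the generated script; the dict keys match the `news_items` entries produced above:

```python
import sqlite3

def save_to_db(news_items, db_path="whitehouse_actions.db"):
    """Insert scraped items, skipping links already stored; return row count."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS actions (
        title TEXT,
        translated_title TEXT,
        link TEXT UNIQUE,
        date TEXT)""")
    # Named placeholders bind directly to the dict keys of each item
    conn.executemany(
        "INSERT OR IGNORE INTO actions VALUES "
        "(:title, :translated_title, :link, :date)",
        news_items)
    conn.commit()
    stored = conn.execute("SELECT COUNT(*) FROM actions").fetchone()[0]
    conn.close()
    return stored

# Example with an in-memory database and a made-up item
count = save_to_db([{
    "title": "Example Order",
    "translated_title": "示例行政令",
    "link": "https://www.whitehouse.gov/presidential-actions/example/",
    "date": "2025-01-22T00:00:00-05:00",
}], db_path=":memory:")
```

The `UNIQUE` constraint on `link` plus `INSERT OR IGNORE` makes repeated runs idempotent: re-scraping the same pages won't create duplicate rows.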

from datetime import datetime
import time

import requests
from bs4 import BeautifulSoup
from translate import Translator

def scrape_whitehouse_actions():
    try:
        # Initialize translator and news list
        translator = Translator(to_lang='zh')
        news_items = []

        # Set request headers
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
        }

        # Fetch articles from all pages
        page = 1
        while True:
            # Build pagination URL
            if page == 1:
                url = "https://www.whitehouse.gov/presidential-actions/"
            else:
                url = f"https://www.whitehouse.gov/presidential-actions/page/{page}/"

            print(f"Accessing page {page}...")
            response = requests.get(url, headers=headers, timeout=30)

            # Check if it's the last page
            if response.status_code == 404:
                print("Reached the last page")
                break

            response.raise_for_status()
            soup = BeautifulSoup(response.text, 'html.parser')

            # Get all articles on the current page
            articles = soup.find_all('div', class_='wp-block-whitehouse-post-template__content')
            if not articles:
                print("No articles found on the current page, possibly reached the last page")
                break

            print(f"Found {len(articles)} articles on page {page}")

            # Process articles on the current page
            for article in articles:
                try:
                    # Extract title and link
                    title_element = article.find('h2', class_='wp-block-post-title')
                    if title_element:
                        link_element = title_element.find('a')
                        title = link_element.text.strip() if link_element else "No Title"
                        try:
                            translated_title = translator.translate(title)
                        except Exception as e:
                            print(f"Translation error: {str(e)}")
                            translated_title = title

                        link = link_element.get('href') if link_element else "#"
                        date_element = article.find('time')
                        date = date_element.get('datetime') if date_element else "No Date"

                        print(f"Found article: {title[:50]}...")
                        news_items.append({
                            'title': title,
                            'translated_title': translated_title,
                            'link': link,
                            'date': date
                        })
                except Exception as e:
                    print(f"Error processing article: {str(e)}")
                    continue

            # Check for next page
            next_link = soup.find('a', class_='wp-block-query-pagination-next')
            if not next_link:
                print("No more pages")
                break

            # Add delay to avoid making requests too quickly
            time.sleep(2)
            page += 1

        # Save all articles to a file
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        filename = f'whitehouse_actions_{timestamp}.txt'
        with open(filename, 'w', encoding='utf-8') as f:
            for item in news_items:
                f.write(f"Original Title: {item['title']}\n")
                f.write(f"Translated Title: {item['translated_title']}\n")
                f.write(f"Link: {item['link']}\n")
                f.write(f"Date: {item['date']}\n")
                f.write('-' * 80 + '\n')

        print(f"Successfully scraped a total of {len(news_items)} articles, saved to {filename}")
        return news_items

    except Exception as e:
        print(f"An error occurred during scraping: {str(e)}")
        return None

if __name__ == "__main__":
    scrape_whitehouse_actions()


Summary
Overall, compared to previous tools like the Marscode plugin, the experience is much smoother.

The prerequisite is to have clear requirements and to combine product thinking with technical implementation to craft the best prompts for Trae.

Additionally, Trae has optimized the terminal experience: error messages can be quickly sent to the chat window, making debugging highly efficient.

