<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Double CHEN</title>
    <description>The latest articles on DEV Community by Double CHEN (@double_chen_70da460344c73).</description>
    <link>https://dev.to/double_chen_70da460344c73</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3506122%2F7c2ff04c-887f-404d-bd2e-c4150e6431e8.png</url>
      <title>DEV Community: Double CHEN</title>
      <link>https://dev.to/double_chen_70da460344c73</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/double_chen_70da460344c73"/>
    <language>en</language>
    <item>
      <title>We just shipped browser-act CLI — browser automation without writing code</title>
      <dc:creator>Double CHEN</dc:creator>
      <pubDate>Fri, 10 Apr 2026 08:16:15 +0000</pubDate>
      <link>https://dev.to/double_chen_70da460344c73/we-just-shipped-browser-act-cli-browser-automation-without-writing-code-579f</link>
      <guid>https://dev.to/double_chen_70da460344c73/we-just-shipped-browser-act-cli-browser-automation-without-writing-code-579f</guid>
      <description>&lt;p&gt;We built BrowserAct because we kept running into the same wall: every time we needed to automate something in a browser, we had to start a whole project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm init &lt;span class="nt"&gt;-y&lt;/span&gt;
npm &lt;span class="nb"&gt;install &lt;/span&gt;playwright
npx playwright &lt;span class="nb"&gt;install &lt;/span&gt;chromium
&lt;span class="c"&gt;# ... now write 25 lines of async/await just to load a page&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's fine when you're building a test suite. Most of the time, you just want to grab a page's content, click something, or take a screenshot — from the terminal, in 30 seconds.&lt;/p&gt;

&lt;p&gt;So we built &lt;strong&gt;browser-act CLI&lt;/strong&gt;. Browser automation as terminal commands. No code, no project setup, no framework.&lt;/p&gt;

&lt;h2&gt;What it looks like&lt;/h2&gt;

&lt;p&gt;This is a real run. Three commands against Hacker News:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 navigate &lt;span class="s2"&gt;"https://news.ycombinator.com"&lt;/span&gt;
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 &lt;span class="nb"&gt;wait &lt;/span&gt;stable
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 get markdown
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Full page extracted as clean structured markdown — 3 commands, no code written.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Output: 15,547 characters of clean markdown&lt;/strong&gt; from 78,320 chars of raw HTML. browser-act automatically strips ads, nav bars, and irrelevant noise.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Why not just use Playwright?&lt;/h2&gt;

&lt;p&gt;Playwright's getting-started page walks you through npm init, an install step, and a ~400MB browser-binary download before you write a single line of automation.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Playwright / Puppeteer&lt;/th&gt;
&lt;th&gt;browser-act CLI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;First-time setup&lt;/td&gt;
&lt;td&gt;npm init + install + ~400MB browser download&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;npx skills add browser-act/skills --skill browser-act&lt;/code&gt; — once, global&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Navigate + extract content&lt;/td&gt;
&lt;td&gt;~25 lines of async/await boilerplate&lt;/td&gt;
&lt;td&gt;3 commands&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Session state&lt;/td&gt;
&lt;td&gt;Manual context management in every script&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;--session&lt;/code&gt; persists automatically between commands&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Shell integration&lt;/td&gt;
&lt;td&gt;Requires Node.js or Python runtime&lt;/td&gt;
&lt;td&gt;Pipe output directly to &lt;code&gt;grep&lt;/code&gt; / &lt;code&gt;jq&lt;/code&gt; / anything&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Playwright is still the right choice for full E2E test suites with parallel workers, trace viewers, and CI pipelines. browser-act CLI is for everything else.&lt;/p&gt;

&lt;h2&gt;Get started&lt;/h2&gt;

&lt;p&gt;Install once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx skills add browser-act/skills &lt;span class="nt"&gt;--skill&lt;/span&gt; browser-act
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Core commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Open a page and extract its content&lt;/span&gt;
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 navigate &lt;span class="s2"&gt;"https://example.com"&lt;/span&gt;
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 &lt;span class="nb"&gt;wait &lt;/span&gt;stable
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 get markdown      &lt;span class="c"&gt;# clean text output&lt;/span&gt;
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 get html          &lt;span class="c"&gt;# raw HTML&lt;/span&gt;

&lt;span class="c"&gt;# Interact&lt;/span&gt;
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 click 3           &lt;span class="c"&gt;# click element by index&lt;/span&gt;
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 input 2 &lt;span class="s2"&gt;"query"&lt;/span&gt;   &lt;span class="c"&gt;# fill a field&lt;/span&gt;
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 keys &lt;span class="s2"&gt;"Enter"&lt;/span&gt;

&lt;span class="c"&gt;# Capture&lt;/span&gt;
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 screenshot ./out.png

&lt;span class="c"&gt;# Stealth mode (bypasses bot detection)&lt;/span&gt;
browser-act &lt;span class="nt"&gt;--session&lt;/span&gt; s1 browser list      &lt;span class="c"&gt;# pick a stealth profile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sessions persist between commands — build multi-step automations in shell scripts without managing state yourself.&lt;/p&gt;
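As a concrete sketch, that pattern drops straight into an ordinary shell script. The session name (`s1`), URL, and output paths below are arbitrary examples, and the script degrades to a no-op on machines where browser-act is not installed:

```shell
# Multi-step automation under one named session (sketch).
# "s1", the URL, and the output filenames are arbitrary examples;
# the guard makes the script a harmless no-op without the CLI.
if command -v browser-act >/dev/null 2>&1; then
  browser-act --session s1 navigate "https://example.com"
  browser-act --session s1 wait stable
  browser-act --session s1 get markdown > page.md
  browser-act --session s1 screenshot ./page.png
  status="done"
else
  status="skipped"
fi
echo "$status"
```

Because `--session s1` is repeated on every line, the same browser state carries over from command to command with no state handling in the script itself.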

&lt;h2&gt;What people are using it for&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web scraping&lt;/strong&gt; — no boilerplate, just commands and output&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shell pipelines&lt;/strong&gt; — &lt;code&gt;get markdown&lt;/code&gt; | &lt;code&gt;grep&lt;/code&gt; | &lt;code&gt;jq&lt;/code&gt; — works with every Unix tool you already use&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI agents&lt;/strong&gt; — give an LLM direct browser access via CLI commands&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment verification&lt;/strong&gt; — &lt;code&gt;navigate&lt;/code&gt; → &lt;code&gt;get markdown&lt;/code&gt; → assert expected content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;n8n / Make / Zapier integrations&lt;/strong&gt; — use as a step in no-code workflows&lt;/li&gt;
&lt;/ul&gt;
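The deployment-verification item is just shell assertion logic wrapped around `get markdown`. A minimal sketch: in a real check, `PAGE` would be filled from `browser-act --session ci get markdown` after a `navigate`; it is stubbed inline here so the assertion half runs on its own:

```shell
# Deployment smoke-check sketch. In a real run, PAGE would come from
# something like:
#   browser-act --session ci navigate "https://example.com"
#   PAGE=$(browser-act --session ci get markdown)
# It is stubbed here so the assertion logic is runnable standalone.
PAGE="# Example Domain
This domain is for use in illustrative examples."
if printf '%s' "$PAGE" | grep -q "Example Domain"; then
  echo "deploy OK"
else
  echo "deploy FAILED" >&2
  exit 1
fi
```

Run as-is, this prints `deploy OK` and exits 0; swap the stub for the real `get markdown` call and change the grep pattern to content your deployment must contain.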




&lt;p&gt;&lt;strong&gt;browser-act CLI is live today&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://www.browseract.com" rel="noopener noreferrer"&gt;browseract.com&lt;/a&gt; · Free to use · No credit card required&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/browser-act/skills" rel="noopener noreferrer"&gt;github.com/browser-act/skills&lt;/a&gt; · AWS Marketplace available&lt;/p&gt;

&lt;p&gt;Questions? Drop them in the comments — we read everything.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>webdev</category>
      <category>cli</category>
      <category>automation</category>
    </item>
    <item>
      <title>Looking for Experienced Make.com &amp; Browser Act Creators!</title>
      <dc:creator>Double CHEN</dc:creator>
      <pubDate>Thu, 18 Sep 2025 11:48:21 +0000</pubDate>
      <link>https://dev.to/double_chen_70da460344c73/looking-for-experienced-makecom-browser-act-creators-p65</link>
      <guid>https://dev.to/double_chen_70da460344c73/looking-for-experienced-makecom-browser-act-creators-p65</guid>
<description>&lt;h2&gt;We’re seeking skilled individuals to:&lt;/h2&gt;

&lt;p&gt;Build public, non-customizable Browser Act workflows.&lt;br&gt;
Publish them to the Make community via your creator account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s in it for you:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Competitive pay for each workflow.&lt;/li&gt;
&lt;li&gt;Keep all platform revenue and consulting fees.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Interested? Message me to discuss the details. Let’s create something amazing together!&lt;/p&gt;

</description>
      <category>automation</category>
      <category>webscraping</category>
      <category>node</category>
      <category>api</category>
    </item>
    <item>
<title>BrowserAct: Node Configuration &amp; Best Practices for Web Scraping Automation</title>
      <dc:creator>Double CHEN</dc:creator>
      <pubDate>Thu, 18 Sep 2025 08:25:45 +0000</pubDate>
      <link>https://dev.to/double_chen_70da460344c73/browseract-node-configuration-best-practices-for-web-scraping-automation-1226</link>
      <guid>https://dev.to/double_chen_70da460344c73/browseract-node-configuration-best-practices-for-web-scraping-automation-1226</guid>
<description>&lt;h2&gt;🎯 What is &lt;a href="https://www.browseract.com/?co-from=dev" rel="noopener noreferrer"&gt;BrowserAct&lt;/a&gt;?&lt;/h2&gt;

&lt;p&gt;BrowserAct is an innovative web automation platform that combines AI-powered browser interaction with structured data extraction capabilities. It empowers users to create advanced web scraping workflows without any coding skills.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjgmbiydecvz1oszi3uq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjgmbiydecvz1oszi3uq.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Features &amp;amp; Purpose
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🚫 No Coding Required: Build workflows with a zero-code interface using visual nodes.&lt;/li&gt;
&lt;li&gt;🎯 Precise Data Extraction: Achieve higher accuracy than traditional AI Agents.&lt;/li&gt;
&lt;li&gt;🧠 Smart Page Understanding: Leverages AI for better recognition than standard RPA tools.&lt;/li&gt;
&lt;li&gt;💰 Cost-Effective: Save up to 90% of costs compared to agent-based scraping solutions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6mbpa4x809u6u4vuss86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6mbpa4x809u6u4vuss86.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Revolutionary Advantages of &lt;a href="https://www.browseract.com/?co-from=dev" rel="noopener noreferrer"&gt;BrowserAct&lt;/a&gt; Workflow&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Natural Language-Driven Smart Operations&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Simplify Workflow Creation: Simply describe your intentions in natural language — no technical expertise required.&lt;/li&gt;
&lt;li&gt;AI-Driven Translation: Automatically converts descriptions into precise page operations.&lt;/li&gt;
&lt;li&gt;User-Friendly: Business users can easily create, understand, and modify workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Zero Exception Handling Burden&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Built-in Smart Fault Tolerance: Automatically handles common exceptions during scraping tasks.&lt;/li&gt;
&lt;li&gt;Backup Operation Methods: Single nodes support multiple fallback strategies to ensure success.&lt;/li&gt;
&lt;li&gt;Graceful Degradation: Intelligent handling when critical steps fail, minimizing disruptions.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Cost-Effectiveness Breakthrough&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;90% Cost Savings vs Agent Scraping: Significantly reduce expenses without sacrificing precision.&lt;/li&gt;
&lt;li&gt;80% Less Configuration Time: Compared to traditional RPA tools, setup is much faster.&lt;/li&gt;
&lt;li&gt;Low Maintenance: Adaptive algorithms reduce the need for manual updates when pages change.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Precision and Intelligence Combined&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Higher Accuracy than Agents: Professional extraction algorithms ensure data precision.&lt;/li&gt;
&lt;li&gt;Smarter than RPA: AI-powered understanding adapts to complex web pages.&lt;/li&gt;
&lt;li&gt;Dynamic Adaptation: Automatically adjusts to changes in page structure, ensuring reliability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F279afxrwn2d6eytf7zra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F279afxrwn2d6eytf7zra.png" alt=" " width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Quick Start Guide: Create Your First Workflow in Minutes&lt;/h2&gt;

&lt;p&gt;Ready to build your first scraping workflow? Follow these six simple steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Workflow - Start with a blank template.&lt;/li&gt;
&lt;li&gt;Set Parameters - Configure parameter variables, or remove them for more flexible data searching.&lt;/li&gt;
&lt;li&gt;Add Nodes - Click the plus sign below a node to add new nodes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F996v7pfqr68ld9xyl8zc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F996v7pfqr68ld9xyl8zc.png" alt=" " width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Use Natural Language - Describe the operation for each node in plain language (e.g., “Click on the login button”).&lt;/li&gt;
&lt;li&gt;Run the Workflow - Click the run button to see scraping results.&lt;/li&gt;
&lt;li&gt;Export Data - Automatically generate structured data files.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Don’t Know How to Set Up a Workflow?&lt;/h2&gt;

&lt;p&gt;No problem! The &lt;a href="https://www.browseract.com/template/amazon-best-sellers-scraper?co-from=tpamz" rel="noopener noreferrer"&gt;BrowserAct Template&lt;/a&gt; Marketplace has a wide variety of ready-to-use templates. With just one click, you can experience pre-built workflows tailored to common use cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhsrohqdsnxsywbq0nbv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhsrohqdsnxsywbq0nbv.png" alt=" " width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Why Choose BrowserAct?&lt;/h2&gt;

&lt;p&gt;Experience the Future of Web Scraping Today!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;📈 Boost Efficiency: Projects that traditionally take weeks now complete in hours&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;💰 Reduce Costs: No need for professional development teams - business users can operate directly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;🎯 Reliable and Accurate: AI-powered smart scraping with over 95% accuracy rate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;🚀 Rapid Iteration: Adjust workflows in minutes when requirements change&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🎉 Start Your Zero-Code Data Scraping Journey Now!&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.browseract.com/?co-from=dev" rel="noopener noreferrer"&gt;Register&lt;/a&gt; now and unlock the potential of intelligent data scraping with BrowserAct.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>webscraping</category>
    </item>
    <item>
<title>BrowserAct Integration with Make</title>
      <dc:creator>Double CHEN</dc:creator>
      <pubDate>Tue, 16 Sep 2025 10:09:21 +0000</pubDate>
      <link>https://dev.to/double_chen_70da460344c73/browseract-integration-with-make-20ke</link>
      <guid>https://dev.to/double_chen_70da460344c73/browseract-integration-with-make-20ke</guid>
      <description>&lt;p&gt;BrowserAct App has officially launched on Make, bringing AI-powered automation to your data collection workflows. This integration allows you to supercharge your data processes with intelligent automation capabilities.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>webscraping</category>
    </item>
  </channel>
</rss>
